FAQ
Short answers to the most common questions. For anything not here, see Troubleshooting.
**What is CHUB?**

A self-hosted web app that keeps a Plex library tidy. Point it at Radarr, Sonarr, Lidarr, and Plex and it handles the boring chores: renaming posters, finding duplicates, re-applying borders, searching for quality upgrades, cleaning up orphaned files, and more. See Home.
**How is CHUB different from DAPS?**

Same module set, different direction. CHUB is a fork of DAPS with a refreshed UI, live status updates, inline metadata editing with an audit trail, duplicate resolution, and tightened-up security. See Credits for the full list.
**Can I migrate my DAPS setup to CHUB?**

No. CHUB is a clean break — no data migration, no compatibility shims. Pull `ghcr.io/chodeus/chub:latest` into a fresh config directory and reconfigure. Your DAPS install keeps working on its own image alongside CHUB if you want to run both side by side during cut-over.
**Can I expose CHUB to the internet?**

Not directly. CHUB is built for a private network — LAN or VPN. It has built-in login and rate limiting, but nothing to defend against a determined attacker. For remote access, put it behind a reverse proxy with TLS plus a second auth layer (Authelia, Authentik, Cloudflare Access, etc.).
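As one illustration, a Caddy reverse proxy can provide both TLS and a second auth layer in front of CHUB. Everything in this sketch is an assumption about your deployment: the hostname, the Authelia address and endpoint path (which varies by Authelia version), and CHUB's internal port are all placeholders.

```Caddyfile
# Hypothetical hostnames and ports — adjust for your own network.
chub.example.com {
    # Second auth layer in front of CHUB (Authelia shown; Authentik works similarly).
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Email
    }
    # CHUB's internal address; Caddy handles TLS certificates automatically.
    reverse_proxy chub:8000
}
```

Cloudflare Access achieves the same effect without a self-hosted auth service, at the cost of routing traffic through Cloudflare.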
**How do I reset authentication?**

Two options:

1. Run the reset command and restart CHUB:

   ```shell
   docker compose run --rm chub python3 main.py --reset-auth
   ```

2. Stop CHUB, delete the `auth:` block from `config.yml`, and start CHUB again.

Either way the first-run form reappears.
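For the second option, `auth:` is a top-level block in `config.yml`. Its exact contents are managed by CHUB and not documented here (the field names below are illustrative placeholders only); the point is to remove the whole block, not edit individual fields.

```yaml
# config.yml — delete this entire top-level block to re-trigger first-run setup.
# (Field names are placeholders, not a documented schema.)
auth:
  username: admin
  password_hash: "…"
```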
**Can I use CHUB with multiple Plex libraries or instances?**

Yes. Use `nestarr` to move items between split libraries based on ARR path mappings, and configure each Plex library separately under `instances.plex`. `poster_renamerr` also accepts a `library_names` list so you can scope each Plex instance.
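A rough shape of that configuration is sketched below. The instance name, URL, token field, and library names are made-up examples; only `instances.plex` and the `library_names` list come from the answer above, so check the real schema before copying.

```yaml
# Illustrative sketch only — field names other than instances.plex and
# library_names are assumptions, not a documented schema.
instances:
  plex:
    plex_main:                 # example instance name
      url: http://plex:32400
      token: "…"
poster_renamerr:
  library_names:               # scope the module to specific Plex libraries
    - Movies
    - Movies 4K
```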
**Does CHUB support Lidarr and music libraries?**

Yes for `upgradinatorr` — full album search, artist grouping, and all three search modes (upgrade, missing, cutoff). There's no dedicated Lidarr UI because music library browsing is covered by Lidarr itself. Open an issue if you'd like a Lidarr-specific workflow built into CHUB.
**How do I run a module manually?**

Dashboard → New run, pick the module, Run.
**How do I cancel a running job?**

Settings → Jobs → click the running job → Cancel. Most modules stop cleanly within a few seconds. `border_replacerr` is the one full exception — it always runs to completion. `plex_maintenance` is partial: its PhotoTranscoder cleanup stops, but the three Plex-API tasks (`empty_trash`, `clean_bundles`, `optimize_db`) run to completion since Plex's API has no interrupt. Restart the container if you need to kill one of those.
**Where does CHUB store its data?**

Everything lives under whatever folder you mounted to `/config` in Docker:

- `config.yml` — your settings
- `chub.db` — the database (users, jobs, media, poster index, edit history)
- `logs/` — per-module log files
- `backups/` — backup zips

Your poster and media trees live on the volumes you mount separately.
**How do I back up CHUB?**

Two options:

- From the UI: Settings → Backup → Create backup. This downloads a zip of your config and database; a copy is also kept under `backups/` in your config folder.
- From the filesystem: stop the container, copy the whole config folder somewhere safe, and start it back up.
**How do I update CHUB?**

```shell
docker compose pull
docker compose up -d
```

Schema changes are applied automatically on startup — no manual migrations.
**Can I run CHUB with just one instance of each app?**

Yes. Multi-instance support is optional — configure only the instances you actually run.
**Is there an API key?**

There's no separate API key. For inbound webhooks from Sonarr/Radarr/Tautulli, set `general.webhook_secret` in `config.yml` (or in Settings → General). If you want to script against CHUB itself, see the Developer Guide.
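In `config.yml` that looks like the fragment below. Only the `general.webhook_secret` key comes from this page; the example value is a placeholder you'd generate yourself and paste into each app's webhook settings.

```yaml
general:
  # Shared secret for inbound Sonarr/Radarr/Tautulli webhooks.
  # Use your own long random string, not this placeholder.
  webhook_secret: "change-me-to-a-long-random-string"
```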
**Does CHUB replace Kometa?**

No. They solve different problems — Kometa manages Plex metadata and collections, CHUB manages poster file trees and media-asset chores. Many people run both. `poster_renamerr` is explicitly designed to consume Kometa's asset output.
**Can I disable modules I don't use?**

Yes. Just don't configure them and don't schedule them. Every module has a sidebar entry, but an unconfigured module simply won't do anything when triggered.
**What user does the container run as?**

The image runs as `dockeruser`, with UID/GID set via the `PUID`/`PGID` env vars. The defaults are 100/99 (matching Unraid); most other Linux hosts want 1000/1000. Mount paths must be writable by whichever UID/GID you pick.
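Pulling the pieces from this page together, a minimal compose file might look like the sketch below. The image tag, `/config` mount, and `PUID`/`PGID` values come from the answers above; the host paths and the extra poster mount are assumptions, and the web UI's internal port isn't documented here, so the `ports:` mapping is left for you to fill in.

```yaml
services:
  chub:
    image: ghcr.io/chodeus/chub:latest
    environment:
      PUID: "1000"   # 100 on Unraid
      PGID: "1000"   # 99 on Unraid
    volumes:
      - ./config:/config              # config.yml, chub.db, logs/, backups/
      - /mnt/media/posters:/posters   # example poster-tree mount
    # ports:
    #   - "8080:…"  # map the web UI; internal port not documented on this page
    restart: unless-stopped
```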
**How do I contribute?**

See the Developer Guide for the contributing checklist and local dev setup.