Hark watches your Docker container logs and sends you a notification when something goes wrong. When it goes right again, it tells you that too.
No agents. No dashboards. No 40-container Loki stack. One container, one config file, and a Pushover ping when Caddy starts throwing 502s at 3am.
Most alerting tools fire a notification every time a pattern matches. Tail a busy log for a few hours and you'll have 400 "ERROR detected" pings sitting in your notification history, which means you'll start ignoring them, which defeats the point.
Hark tracks faults instead of matches. The first time caddy logs an error, you get one notification. If the error keeps happening, it shows up in your morning digest. When it stops, you get a resolved notification. Three pings for a fault that lasted two weeks, not two thousand.
```
[May 01 09:14] NEW FAULT  caddy / error
               "upstream connect error or disconnect/reset before headers"

[May 02 08:00] DIGEST     1 active fault
               caddy / error — 1 day

[May 16 08:00] DIGEST     1 active fault
               caddy / error — 15 days

[May 16 18:02] RESOLVED   caddy / error
               Was active for 15 days 8 hours
```
Create a `config.yml` (there's a fully commented `config.example.yml` in this repo), then:
```yaml
# docker-compose.yml
services:
  hark:
    image: ghcr.io/bothari/hark:latest
    container_name: hark
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config.yml:/app/config.yml:ro
      - ./data:/data
    environment:
      - TZ=America/Los_Angeles
```

The `/data` volume is where Hark keeps its SQLite database. Without it, every restart will re-open every fault as "new". You probably want it.
```sh
docker compose up -d
```

A minimal `config.yml` looks like this:

```yaml
notifications:
  - ntfy://ntfy.sh/my-topic   # or pover://, discord://, gotify://, ...

default_patterns:
  - level: error
    label: error
    severity: error
```

That's enough to get going. Hark will watch every container on the host and notify you when anything logs at error level.
Hark uses Apprise under the hood, which supports 60+ services. Some common ones:
| Service | URL format |
|---|---|
| Pushover | pover://userkey@token |
| ntfy | ntfy://ntfy.sh/your-topic |
| Gotify | gotifys://gotify.example.com/token |
| Discord | discord://webhook_id/webhook_token |
| Slack | slack://token_a/token_b/token_c |
| Telegram | tgram://bot_token/chat_id |
| Email | mailto://user:pass@gmail.com |
Full list at github.com/caronc/apprise/wiki. You can add as many URLs as you want — Hark sends to all of them.
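As an example, here's a fan-out to several services at once (the topic, webhook IDs, and credentials below are placeholders, not working values):

```yaml
notifications:
  - ntfy://ntfy.sh/my-hark-topic         # push notifications via ntfy
  - discord://webhook_id/webhook_token   # mirror everything to a Discord channel
  - mailto://user:pass@gmail.com         # plus email
```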
```yaml
default_patterns:
  # Match common log level strings (error, [ERROR], "level":"error", etc.)
  - level: error
    label: error
    severity: error

  # Match an arbitrary regex
  - regex: "authentication error|401 Unauthorized"
    label: auth-failure
    severity: error
```

Every pattern needs a label. The label is the fault key — two errors from the same container with the same label are the same fault. Two different labels are two different faults tracked independently.
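To make that concrete, each of these lines would match the `level: error` pattern and feed the same `error` fault (illustrative log lines, built from the forms listed in the comment above):

```
ERROR: upstream connect error
[ERROR] upstream connect error
{"level":"error","msg":"upstream connect error"}
```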
```yaml
fault_tracking:
  # How long with no matching log lines before a fault is considered resolved.
  quiet_window_minutes: 60

digest:
  enabled: true
  time: "08:00"          # Respects your TZ environment variable.
  send_if_clear: false   # Flip this to get a daily "all clear" when nothing is broken.
```

Per-container settings go under `containers`:

```yaml
containers:
  # Silence a noisy container.
  - name: uptime-kuma
    ignore: true

  # Add patterns on top of the defaults.
  - name: ddns-updater
    extra_patterns:
      - regex: "HTTP status is not valid"
        label: http-error
        severity: error

  # Replace the defaults entirely for one container.
  - name: some-chatty-service
    patterns:
      - regex: "CRITICAL"
        label: critical
        severity: error

  # Glob patterns work.
  - name: "arr_*"
    ignore: true
```

Container names support globs — `arr_*` matches `arr_sonarr`, `arr_radarr`, and so on.
If a container runs once a day, the default 60-minute quiet window will resolve its fault between runs. Next day it fails again and you get a fresh "new fault" — not a persistent one.
Fix this by extending the quiet window beyond the run interval:
```yaml
containers:
  - name: kometa
    quiet_window_minutes: 1500   # 25 hours
    extra_patterns:
      - regex: "error|failed"
        label: collection-error
        severity: error
```

Now the fault bridges the gap between runs and accumulates days in your digest until Kometa actually has a clean run.
If you'd rather not touch Hark's config, you can silence a container from its own compose file:
```yaml
services:
  uptime-kuma:
    labels:
      hark.enable: "false"
```

A fault is identified by container name + pattern label. It opens the first time a pattern matches and closes when there are no new matches for `quiet_window_minutes`.
- New match, no existing fault → sends a "new fault" notification immediately
- New match, fault already open → refreshes the fault's timestamp, no notification
- No matches for quiet_window → sends a "resolved" notification
- Match after a resolved fault → opens a new fault, sends "new fault" again
- Every morning at your configured time → one digest listing every open fault
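Put together with the default 60-minute quiet window, the rules above play out like this (timestamps and the `app` container name are illustrative):

```
09:00  app logs a line matching "error"  → new fault app / error, notification sent
09:05  another match from app            → same fault, timestamp refreshed, silence
10:05  no matches since 09:05            → quiet window elapsed, "resolved" notification
11:00  app matches "error" again         → fresh fault, "new fault" notification again
```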
Fault state is stored in /data/hark.db and survives container restarts. If Hark restarts while a fault is open, it stays open.
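If you want to poke at that state yourself, the stock sqlite3 CLI works; the schema isn't documented here, so listing tables is the safe first step:

```sh
# Show whatever tables Hark's database actually contains
sqlite3 ./data/hark.db '.tables'
```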
```sh
git clone https://github.com/Bothari/Hark
cd Hark
python3 -m venv .venv && .venv/bin/pip install -r requirements.txt

# run tests
.venv/bin/python -m pytest tests/ -v
```

Multi-arch Docker build:

```sh
docker buildx build --platform linux/amd64,linux/arm64 \
  -t ghcr.io/bothari/hark:latest --push .
```