Guide: Admin Guide | Section: Installation
This page covers the three supported deployment methods for Pantera: Docker standalone, Docker Compose (production), and JAR file.
| Requirement | Minimum | Notes |
|---|---|---|
| Docker | 24+ | Required for container-based deployment |
| Docker Compose | v2+ | Required for production stack |
| JDK | 21+ (Temurin) | Required only for JAR file deployment |
| Maven | 3.4+ | Required only for building from source |
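As a quick sanity check, the sketch below (illustrative, plain POSIX sh) reports whether each tool from the table is on the PATH. It does not verify version numbers, only presence:

```shell
#!/bin/sh
# Report whether a required tool is available on the PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: installed"
  else
    echo "$1: NOT FOUND"
  fi
}

for t in docker java mvn; do check_tool "$t"; done
```

Run the version command of each installed tool (`docker --version`, `java -version`, `mvn -v`) to confirm it meets the minimum above.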
The Docker image is based on `eclipse-temurin:21-jre-alpine` and runs as user 2021:2020 (`pantera:pantera`).
```shell
docker run -d \
  --name pantera \
  -p 8080:8080 \
  -p 8086:8086 \
  -p 8087:8087 \
  -v /path/to/pantera.yml:/etc/pantera/pantera.yml \
  -v /path/to/data:/var/pantera \
  -v /path/to/jwt-private.pem:/etc/pantera/jwt-private.pem:ro \
  -v /path/to/jwt-public.pem:/etc/pantera/jwt-public.pem:ro \
  -e JWT_PRIVATE_KEY_PATH=/etc/pantera/jwt-private.pem \
  -e JWT_PUBLIC_KEY_PATH=/etc/pantera/jwt-public.pem \
  -e PANTERA_USER_NAME=admin \
  -e PANTERA_USER_PASS=changeme \
  pantera:2.1.2
```

Create a minimal /etc/pantera/pantera.yml on the host:
```yaml
meta:
  storage:
    type: fs
    path: /var/pantera/repo
  credentials:
    - type: env
```

Verify the server is up:

```shell
curl http://localhost:8080/.health
# {"status":"ok"}
```

On a fresh install Pantera creates a default admin user automatically:
| Username | Password |
|---|---|
| admin | admin |
The `must_change_password` flag is set, so the very first login goes to a forced password-change screen. The new password must meet these rules (enforced server-side in `PasswordPolicy.java`):
- Minimum 12 characters
- Uppercase + lowercase + digit + special character
- Not equal to the username
- Not a well-known weak password (`admin`, `password`, `changeme`, etc.)
Change the default immediately in production. Any non-compliant password is rejected with HTTP 400 `WEAK_PASSWORD`.
The bootstrap only runs when the users table is empty, so an existing install is never overwritten.
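The rules above can be sketched as a client-side pre-check. This is illustrative only; the authoritative validation is `PasswordPolicy.java` on the server, and the function name here is made up:

```shell
#!/bin/sh
# Mirror the documented password rules (sketch, not the server's code).
check_password() {
  pw="$1"; user="$2"
  [ ${#pw} -ge 12 ] || { echo "too short"; return 1; }
  case "$pw" in *[A-Z]*) ;; *) echo "needs uppercase"; return 1;; esac
  case "$pw" in *[a-z]*) ;; *) echo "needs lowercase"; return 1;; esac
  case "$pw" in *[0-9]*) ;; *) echo "needs digit"; return 1;; esac
  case "$pw" in *[!A-Za-z0-9]*) ;; *) echo "needs special char"; return 1;; esac
  [ "$pw" != "$user" ] || { echo "equals username"; return 1; }
  case "$pw" in admin|password|changeme) echo "weak password"; return 1;; esac
  echo "ok"
}

check_password 'S3cure#Artifacts' admin
```

Passing this sketch does not guarantee acceptance; the server may enforce additional checks.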
| Port | Purpose |
|---|---|
| 8080 | Repository traffic (artifact push/pull, Docker registry API) |
| 8086 | REST management API |
| 8087 | Prometheus metrics |
| 8090 | Management UI (separate container) |
| Container Path | Purpose |
|---|---|
| /etc/pantera/pantera.yml | Main configuration file |
| /etc/pantera/log4j2.xml | Logging configuration (optional) |
| /var/pantera/repo | Repository configuration YAML files |
| /var/pantera/data | Artifact data storage |
| /var/pantera/security | RBAC policy files |
| /var/pantera/cache | Cache directory (S3 disk cache, temp files) |
| /var/pantera/logs | Log files, GC logs, heap dumps |
The container runs as 2021:2020 (pantera:pantera). All mounted volumes must be readable and writable by this UID/GID. Set ownership before starting:
```shell
sudo chown -R 2021:2020 /path/to/data
```

The production Docker Compose stack provides the full Pantera deployment with all supporting services.
```shell
git clone https://github.com/auto1-oss/pantera.git
cd pantera/pantera-main/docker-compose
cp .env.example .env   # Edit with your secrets
docker compose up -d
```

| Service | Image | Port | Description |
|---|---|---|---|
| Pantera | `pantera:2.0.0` | 8088 (mapped from 8080) | Artifact registry |
| API | -- | 8086 | REST management API |
| Metrics | -- | 8087 | Prometheus metrics endpoint |
| Nginx | `nginx:latest` | 8081 / 8443 | Reverse proxy (HTTP/HTTPS) |
| PostgreSQL | `postgres:17.8-alpine` | 5432 | Metadata and settings database |
| Valkey | `valkey/valkey:8.1.4` | 6379 | Distributed cache and pub/sub |
| Keycloak | `quay.io/keycloak/keycloak:26.0.0` | 8080 | Identity provider (SSO) |
| Prometheus | `prom/prometheus:latest` | 9090 | Metrics collection |
| Grafana | `grafana/grafana:latest` | 3000 | Monitoring dashboards |
| Pantera UI | Custom build | 8090 | Vue.js management interface |
For production workloads, allocate the following resources to the Pantera container:
```yaml
services:
  pantera:
    cpus: 4
    mem_limit: 6gb
    mem_reservation: 6gb
    ulimits:
      nofile:
        soft: 1048576
        hard: 1048576
      nproc:
        soft: 65536
        hard: 65536
```

| Resource | Recommended | Notes |
|---|---|---|
| CPUs | 4+ | Minimum for parallel request handling |
| Memory | 6 GB | Reservation and limit for the Pantera container |
| File descriptors | 1,048,576 | Required for concurrent proxy connections |
| Process limit | 65,536 | Maximum threads/processes |
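To let Docker surface the `/.health` endpoint as container health (visible in `docker ps`, and usable by other services via `depends_on` with `condition: service_healthy`), a healthcheck can be added to the `pantera` service. This fragment is a sketch; it assumes busybox `wget` is available in the Alpine-based image and that intervals suit your environment:

```yaml
services:
  pantera:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/.health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 60s
```

The `start_period` gives the JVM time to boot before failed probes count against the retry limit.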
The `.env` file configures all stack services. Key variables to set before first start:
| Variable | Example | Description |
|---|---|---|
| `PANTERA_VERSION` | `2.1.2` | Docker image tag |
| `PANTERA_USER_NAME` | `admin` | Bootstrap admin username |
| `PANTERA_USER_PASS` | `changeme` | Bootstrap admin password |
| `JWT_PRIVATE_KEY_PATH` | `/etc/pantera/jwt-private.pem` | Path to the RSA private key used to sign tokens (RS256). Generate with `openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out jwt-private.pem`. |
| `JWT_PUBLIC_KEY_PATH` | `/etc/pantera/jwt-public.pem` | Path to the matching RSA public key for verification. Generate with `openssl rsa -in jwt-private.pem -pubout -out jwt-public.pem`. |
| `POSTGRES_USER` | `pantera` | Database username |
| `POSTGRES_PASSWORD` | (set a strong password) | Database password |
| `KEYCLOAK_CLIENT_SECRET` | (from Keycloak console) | OIDC client secret |
For the full list of .env variables, see the Configuration Reference.
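The two `openssl` commands from the table can be combined into a small helper that also sets sensible permissions. This is a sketch; the output directory argument is illustrative and defaults to the current directory:

```shell
#!/bin/sh
# Generate the RS256 keypair referenced by JWT_PRIVATE_KEY_PATH / JWT_PUBLIC_KEY_PATH.
set -e
dir="${1:-.}"
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out "$dir/jwt-private.pem"
openssl rsa -in "$dir/jwt-private.pem" -pubout -out "$dir/jwt-public.pem"
chmod 600 "$dir/jwt-private.pem"  # keep the signing key private
chmod 644 "$dir/jwt-public.pem"   # the public key may be world-readable
```

Mount both files into the container read-only, as shown in the `docker run` example above.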
Run Pantera directly from the JAR without Docker. This requires JDK 21+ installed on the host.
```shell
git clone https://github.com/auto1-oss/pantera.git
cd pantera
mvn clean install -DskipTests
```

The resulting JAR and dependencies are placed under `pantera-main/target/`.
```shell
java \
  -XX:+UseG1GC -XX:MaxRAMPercentage=75.0 \
  --add-opens java.base/java.util=ALL-UNNAMED \
  --add-opens java.base/java.security=ALL-UNNAMED \
  -cp pantera.jar:lib/* \
  com.auto1.pantera.VertxMain \
  --config-file=/etc/pantera/pantera.yml \
  --port=8080 \
  --api-port=8086
```

| Option | Long Form | Default | Description |
|---|---|---|---|
| `-f` | `--config-file` | -- | Path to pantera.yml (required) |
| `-p` | `--port` | `80` | Repository server port |
| `-ap` | `--api-port` | `8086` | REST API port |
Create the same directory layout used by the Docker image:
```shell
sudo mkdir -p /etc/pantera /var/pantera/{repo,data,security,cache/tmp,logs/dumps}
sudo chown -R $(whoami) /var/pantera /etc/pantera
```

For production JAR deployments, create a systemd service unit:
```ini
[Unit]
Description=Pantera Artifact Registry
After=network.target postgresql.service

[Service]
Type=simple
User=pantera
Group=pantera
Environment="JVM_ARGS=-XX:+UseG1GC -XX:MaxRAMPercentage=75.0"
ExecStart=/usr/bin/java \
  $JVM_ARGS \
  --add-opens java.base/java.util=ALL-UNNAMED \
  --add-opens java.base/java.security=ALL-UNNAMED \
  -cp /usr/lib/pantera/pantera.jar:/usr/lib/pantera/lib/* \
  com.auto1.pantera.VertxMain \
  --config-file=/etc/pantera/pantera.yml \
  --port=8080 \
  --api-port=8086
Restart=on-failure
RestartSec=10
LimitNOFILE=1048576
LimitNPROC=65536

[Install]
WantedBy=multi-user.target
```

Note that the `Environment=` value must be quoted because it contains a space, and `$JVM_ARGS` (not `${JVM_ARGS}`) is used in `ExecStart` so systemd splits it into separate JVM arguments.

The dashboard (artifact count, storage usage, top repositories) reads from two PostgreSQL materialized views. These views must be refreshed on a schedule via `pg_cron`. Without this, the dashboard will show zeros.
Why `pg_cron` and not the application? Pantera deploys one verticle per CPU core. Each verticle previously issued `REFRESH` independently, causing up to 25 simultaneous refresh sessions and severe `Lock:Relation` contention on the database. Delegating to `pg_cron` gives a single, predictable refresh with no application-side locking.
Amazon RDS / Aurora — add to the DB parameter group and reboot:
```ini
shared_preload_libraries = pg_cron
cron.database_name = <your_database_name>
```
Self-managed PostgreSQL:
```shell
# Debian/Ubuntu
apt-get install postgresql-<version>-cron

# RHEL/CentOS
yum install pg_cron_<version>
```

Add to postgresql.conf, then restart:
```ini
shared_preload_libraries = 'pg_cron'
cron.database_name = 'artifacts'
```
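After the restart, you can confirm the settings took effect with standard PostgreSQL commands (not Pantera-specific):

```sql
SHOW shared_preload_libraries;  -- should include pg_cron
SHOW cron.database_name;        -- should print your Pantera database
```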
Run once as a superuser against your Pantera database:
```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;
GRANT USAGE ON SCHEMA cron TO pantera; -- replace 'pantera' with your app user
```

Connect as the pantera application user (owner of the materialized views):
```sql
-- Refresh global totals every 30 minutes
SELECT cron.schedule(
  'refresh-mv-artifact-totals',
  '*/30 * * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY mv_artifact_totals$$
);

-- Refresh per-repo stats every 30 minutes
SELECT cron.schedule(
  'refresh-mv-artifact-per-repo',
  '*/30 * * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY mv_artifact_per_repo$$
);
```

For high-traffic environments reduce the interval to `*/15 * * * *`; for low-traffic environments increase it to `0 * * * *`.
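To change an interval later, the portable pattern is to drop and re-create the job. This is a sketch; `cron.unschedule` by job name requires pg_cron 1.3+ (on newer versions, calling `cron.schedule` again with the same job name also updates it in place):

```sql
-- Tighten the totals refresh to every 15 minutes
SELECT cron.unschedule('refresh-mv-artifact-totals');
SELECT cron.schedule(
  'refresh-mv-artifact-totals',
  '*/15 * * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY mv_artifact_totals$$
);
```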
The views start empty on first deploy. Populate them immediately:
```sql
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_artifact_totals;
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_artifact_per_repo;
```

Expired cooldown blocks (where `blocked_until` is in the past) accumulate in the database because they are only deleted lazily when the specific artifact is next requested. Add a pg_cron job to purge them hourly:
```sql
SELECT cron.schedule(
  'cleanup-expired-cooldowns',
  '0 * * * *',
  $$DELETE FROM artifact_cooldowns
    WHERE status = 'ACTIVE' AND blocked_until < EXTRACT(EPOCH FROM NOW()) * 1000$$
);
```

A partial index (`idx_cooldowns_status_blocked_until`) is created automatically by Pantera at startup to make this DELETE efficient.
```sql
-- Confirm jobs are registered
SELECT jobid, jobname, schedule, active FROM cron.job;

-- Check recent run history (pg_cron 1.4+)
SELECT jobid, status, start_time, return_message
FROM cron.job_run_details
ORDER BY start_time DESC LIMIT 10;
```

After starting Pantera by any method, verify the deployment:
```shell
# Health check (repository port)
curl http://localhost:8080/.health

# Health check (API port)
curl http://localhost:8086/api/v1/health

# Version check
curl http://localhost:8080/.version

# Obtain a token (if env auth is configured)
curl -X POST http://localhost:8086/api/v1/auth/token \
  -H "Content-Type: application/json" \
  -d '{"name":"admin","pass":"changeme"}'
```

- Configuration -- Configure pantera.yml after installation
- Environment Variables -- All tunable environment variables
- High Availability -- Multi-node production deployment
- Performance Tuning -- Resource allocation and JVM tuning