This repository provides a Docker Compose deployment for HashiCorp Vault with a secure single-node default. It is designed for teams that want a Vault instance that is easy to bootstrap, uses TLS from the start, persists data on the host, and includes a clearer operational workflow than a bare container run.
The project’s intent, based on the code and comments, is to run Vault as an internal service with:
- TLS enabled from first startup
- persistent storage on host-mounted directories
- audit logging enabled by default
- container hardening such as dropped capabilities and no-new-privileges
- a small operational surface area that still makes bootstrap steps explicit
The stack now uses single-node integrated storage (raft) instead of the older file backend. For the common single-node case, this does not materially complicate deployment: you still start the stack, initialize Vault once, and unseal it. The extra difference is that Vault now stores its state in the Raft data directory and listens on the internal cluster port 8201.
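As a concrete illustration, a single-node Raft configuration of this general shape is what the rendered `vault.hcl` typically looks like. This is a sketch only: the real template is rendered at startup by entrypoint.sh, and the paths, node ID, and addresses below are placeholder assumptions, not the repository's actual rendered values.

```hcl
# Illustrative single-node Raft config; paths and addresses are placeholders.
storage "raft" {
  path    = "/vault/data"
  node_id = "vault-node-1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/certs/vault.crt"
  tls_key_file  = "/vault/certs/vault.key"
}

api_addr     = "https://vault.internal:8200"
cluster_addr = "https://vault-server:8201"
ui           = true
```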
The stack has three services:
- `init-volumes`: Creates and normalizes host-backed directories and permissions before Vault starts.
- `cert-gen`: Either generates self-signed TLS certificates or validates externally supplied ones.
- `vault`: Runs the Vault server with TLS, single-node Raft storage, audit logging, and the UI.
Key files:
- `docker-compose.yaml`: Main service definition and runtime wiring.
- `vault.hcl`: Vault configuration template rendered at container startup.
- `prepare-volumes.sh`: Prepares volume ownership and permissions.
- `generate-certs.sh`: Handles TLS generation or validation.
- `entrypoint.sh`: Renders runtime config values and launches Vault.
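To make the init step concrete, here is a hedged sketch of the kind of normalization `prepare-volumes.sh` performs. It is not the repository's script: directory names and ownership values are examples, and the sketch uses a temporary directory so it can run anywhere (in the real init container, `chown` runs as root).

```shell
# Illustrative only -- not the repository's prepare-volumes.sh.
VAULT_UID=100 VAULT_GID=1000          # example ownership targets from .env
base=$(mktemp -d)                     # stand-in for the repo-local mount root
for d in data logs certs; do
  mkdir -p "$base/$d"
  chmod 750 "$base/$d"                # owner rwx, group rx, no world access
  # In the init container this would also run (as root):
  # chown -R "$VAULT_UID:$VAULT_GID" "$base/$d"
done
ls -ld "$base/data" "$base/logs" "$base/certs"
```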
Repository layout:

```text
.
├── .env.example
├── docker-compose.yaml
├── scripts
│   ├── backup-vault.sh
│   ├── restore-vault.sh
│   ├── vault-init.sh
│   ├── vault-status.sh
│   └── vault-unseal.sh
└── vault
    ├── config
    │   └── vault.hcl
    └── scripts
        ├── entrypoint.sh
        ├── generate-certs.sh
        └── prepare-volumes.sh
```
Prerequisites:

- Docker 24 or newer
- Docker Compose 2.20 or newer
- a hostname or IP address that clients will use to reach Vault
- a secure place to store the unseal keys and the initial root token
Create your local environment file:
```sh
cp .env.example .env
```

Important variables:
| Variable | Purpose |
|---|---|
| `VAULT_HOSTNAME` | Public DNS name or IP clients use to reach Vault. |
| `VAULT_PORT` | Host port published for the HTTPS API and UI. |
| `TLS_MODE` | `generate` for self-signed certs, `external` to use existing certificate files. |
| `TLS_EXTRA_SANS` | Extra SANs such as `DNS:vault.example.internal,IP:10.0.0.5`. |
| `VAULT_DATA_DIR` | Host path for Vault Raft data. |
| `VAULT_LOGS_DIR` | Host path for audit logs. |
| `VAULT_CERTS_DIR` | Host path for TLS materials. |
| `VAULT_API_ADDR` | Optional explicit Vault API address. Defaults to `https://<VAULT_HOSTNAME>:8200`. |
| `VAULT_CLUSTER_ADDR` | Optional explicit cluster address. Defaults to `https://vault-server:8201` for single-node use. |
| `VAULT_NODE_ID` | Single-node Raft node identifier. |
| `VAULT_UID` / `VAULT_GID` | Ownership target for mounted files and directories. |
| `VAULT_DEFAULT_LEASE_TTL` | Default lease/token TTL. |
| `VAULT_MAX_LEASE_TTL` | Maximum lease/token TTL. |
| `VAULT_UI` | Enables or disables the Vault UI. |
| `VAULT_LOG_LEVEL` | Vault log verbosity. |
| `VAULT_IMAGE` | Vault image tag to run. |
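For example, a minimal `.env` for an internal lab host might look like the fragment below. All values are illustrative choices, not defaults copied from `.env.example`:

```sh
# Illustrative .env values; adjust for your environment.
VAULT_HOSTNAME=vault.internal
VAULT_PORT=8200
TLS_MODE=generate
TLS_EXTRA_SANS=DNS:vault.example.internal,IP:10.0.0.5
VAULT_DATA_DIR=./vault-data
VAULT_LOGS_DIR=./vault-logs
VAULT_CERTS_DIR=./vault-certs
VAULT_UI=true
VAULT_LOG_LEVEL=info
```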
All generated runtime directories are intended to live in the repository root by default:
- `./vault-data`
- `./vault-logs`
- `./vault-certs`
They are created automatically on first startup by Docker and the init-volumes container.
With TLS_MODE=generate, the stack creates:
- `ca.crt`
- `ca.key`
- `vault.crt`
- `vault.key`
This is the default and is the easiest path for internal labs or isolated environments.
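If you want to see which names a generated certificate will cover, the sketch below mimics the self-signed path in a simplified way. It is not the repository's `generate-certs.sh` (which also creates a CA and signs `vault.crt` with it); this standalone version assumes only that OpenSSL 1.1.1+ is available and writes to a throwaway directory.

```shell
# Simplified stand-in for TLS_MODE=generate: create a self-signed cert
# with SANs, then print the SAN list clients can match against.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "$dir/vault.key" -out "$dir/vault.crt" \
  -subj "/CN=vault.internal" \
  -addext "subjectAltName=DNS:vault.internal,DNS:localhost,IP:127.0.0.1" \
  2>/dev/null
# Show which names the certificate actually covers:
openssl x509 -in "$dir/vault.crt" -noout -ext subjectAltName
```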
With TLS_MODE=external, the stack will not generate anything. Instead, place these files in VAULT_CERTS_DIR before startup:
- `vault.crt`
- `vault.key`
- `ca.crt`
The `ca.crt` file should contain the CA certificate that Vault clients inside the container should trust. That keeps `vault` CLI operations inside the container working consistently.
Start the stack:
```sh
docker compose up -d
```

Follow logs:

```sh
docker compose logs -f init-volumes
docker compose logs -f cert-gen
docker compose logs -f vault
```

What happens on startup:
- `init-volumes` prepares the data, log, and certificate directories.
- `cert-gen` either generates TLS assets or validates the external files you supplied.
- `vault` waits for readable TLS files, renders runtime config values, and starts.
Expected container state after a successful first startup:
- `vault-server` stays running
- `vault-init-volumes` exits with code `0`
- `vault-cert-gen` exits with code `0`
That is normal. The two helper containers are one-shot init containers, not long-running services.
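One common way to express this one-shot ordering in Compose is `depends_on` with completion conditions. The fragment below shows the general shape only; it is not the repository's exact `docker-compose.yaml`, though the service names mirror this stack's services:

```yaml
# Illustrative ordering: vault starts only after both init containers
# have run to completion and exited 0.
services:
  vault:
    depends_on:
      init-volumes:
        condition: service_completed_successfully
      cert-gen:
        condition: service_completed_successfully
```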
You can bootstrap this single-node Raft Vault in either of these ways:
Open the Vault UI at:
https://<VAULT_HOSTNAME>:<VAULT_PORT>
If you are using self-signed TLS, your browser may warn about the certificate until you trust vault-certs/ca.crt on your machine.
For this deployment, choose the UI flow to create a new Raft server and initialize Vault. That is the correct path for a fresh single-node setup.
During the UI flow you can:
- choose the number of key shares
- choose the unseal threshold
- receive the unseal keys
- receive the initial root token
After initialization, unseal Vault with the required number of keys and log in with the generated root token.
This UI flow has been validated against this repository and is a correct initialization method for the default single-node deployment.
Do not use the "Join an existing Raft cluster" path for this repository unless you are intentionally extending this setup into a multi-node cluster. This stack is designed to start as a single-node Raft deployment.
Initialize Vault once:
```sh
./scripts/vault-init.sh
```

This returns unseal keys and the initial root token. Store them securely.

Unseal Vault with the required keys:

```sh
./scripts/vault-unseal.sh <key-1> <key-2> <key-3>
```

Check state:

```sh
./scripts/vault-status.sh
```

Log in manually if needed:

```sh
docker compose exec vault vault login
```

The web UI flow and terminal flow are equivalent in purpose. Both initialize the same Vault instance, generate unseal keys according to your chosen settings, and produce an initial root token. The terminal scripts are mainly there for operators who prefer CLI-driven setup or repeatable runbooks.
After initialization and unseal, the UI should be available at:
https://<VAULT_HOSTNAME>:<VAULT_PORT>
If you use self-signed TLS, distribute or trust:
<VAULT_CERTS_DIR>/ca.crt
A typical client configuration looks like:
```sh
export VAULT_ADDR="https://<VAULT_HOSTNAME>:<VAULT_PORT>"
export VAULT_CACERT="$(pwd)/vault-certs/ca.crt"
```

Adjust the `VAULT_CACERT` path if your `VAULT_CERTS_DIR` differs.
For the default .env, both https://localhost:8200 and https://vault.internal:8200 are covered by the generated certificate only if you connect using names present in the SAN list. By default the generated certificate includes:
- `VAULT_HOSTNAME`
- `localhost`
- `127.0.0.1`
If you plan to access Vault via any other DNS name or IP address, add it to TLS_EXTRA_SANS before first startup.
Start:
```sh
docker compose up -d
```

Stop:

```sh
docker compose down
```

Remove containers and the generated rendered-config volume:

```sh
docker compose down -v
```

Remove generated repo-local runtime folders for a fully clean redeploy:

```sh
rm -rf vault-data vault-logs vault-certs
```

Restart Vault:

```sh
docker compose restart vault
```

View recent logs:

```sh
docker compose logs --tail=100 vault
```

Check seal state:

```sh
./scripts/vault-status.sh
```

Audit log location:
<VAULT_LOGS_DIR>/audit.log
Create a backup archive:
```sh
./scripts/backup-vault.sh
```

Or provide an explicit output path:

```sh
./scripts/backup-vault.sh ./backups/my-vault-backup.tar.gz
```

Restore from an archive:

```sh
./scripts/restore-vault.sh ./backups/my-vault-backup.tar.gz
```

The backup and restore helpers operate on:
- `VAULT_DATA_DIR`
- `VAULT_LOGS_DIR`
- `VAULT_CERTS_DIR`
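The archive shape is conventional: a `tar.gz` spanning those three directories. The sketch below illustrates that shape only; it is not `backup-vault.sh` itself, and it creates throwaway demo directories so it can run anywhere.

```shell
# Illustrative backup shape -- not the repository's backup-vault.sh.
set -e
work=$(mktemp -d)   # stands in for the repository root
mkdir -p "$work/vault-data" "$work/vault-logs" "$work/vault-certs"
echo demo > "$work/vault-data/marker"
# Timestamped archive name, similar in spirit to a default backup path:
out="$work/vault-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
tar -C "$work" -czf "$out" vault-data vault-logs vault-certs
# List the archive contents to confirm all three trees were captured:
tar -tzf "$out"
```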
- Vault is exposed only over HTTPS.
- Single-node integrated storage is now used instead of the older file backend.
- Audit logging is enabled by default.
- `IPC_LOCK` is enabled for `mlock`.
- All other Linux capabilities are dropped.
- Directory ownership and permissions are normalized before startup.
- This remains a single-node deployment by default.
- The Compose file is not a full HA cluster recipe.
- `VAULT_CLUSTER_ADDR` defaults to an internal single-node-friendly address, so expanding to multi-node later will require additional networking changes.
- Initialization, unseal, policy setup, auth methods, and secret engine enablement still require operator action.
For this repository, single-node Raft is a good upgrade because it improves the storage backend without making the common deployment flow meaningfully harder. In practice:
- operators still run the same init and unseal sequence
- no extra host port is required for the common case
- data now lives under the dedicated Raft data directory
If you later want multi-node HA, Raft is the right foundation, but that expansion is a separate deployment design rather than something this stack tries to automate implicitly.