codexlb is a Go load-balancing proxy and wrapper for Codex CLI.
It lets you run Codex through a local proxy that can switch between multiple authenticated accounts, handle failover, and expose live status.
- Multi-account enrollment (`auth.json` per alias)
- Dockerized browser login flow for importing host Codex credentials
- Per-request account selection (`usage_balanced`, `sticky`, `round_robin`)
- Recursive proxy chaining across child `codexlb` instances
- Automatic failover/cooldown on `429` and `5xx`
- Automatic disable on auth errors (`401`, `403`)
- Periodic background quota/auth refresh for idle accounts
- Wrapper execution via local proxy (`OPENAI_BASE_URL`)
- Structured logs under `~/.codex-lb/logs`
- Tunable runtime config in `~/.codex-lb/config.toml`
- Config hot reload while proxy is running
Build locally:

```sh
go build ./cmd/codexlb
```

With Make:

```sh
make build
make install
```

User-local install example:

```sh
make install PREFIX=$HOME/.local
```

- Add accounts:
  ```sh
  ./codexlb account login alice
  ./codexlb account login bob
  # Or import an existing Codex home:
  ./codexlb account import --from ~/.agentlb/sessions/alice alice
  # Or bootstrap a host CODEX_HOME through a Dockerized Chrome login:
  printf '%s\n' "$CHATGPT_PASSWORD" | ./codexlb login-with work --username you@example.com --password-stdin --docker-network vpn_net
  ```

- Start proxy:

  ```sh
  ./codexlb proxy --listen 127.0.0.1:8765 --upstream https://chatgpt.com/backend-api
  ```

- Check state:

  ```sh
  ./codexlb status
  ./codexlb status --short
  # Pin the proxy to a specific account while it remains healthy:
  ./codexlb account pin alice
  ```

- Run Codex through proxy:

  ```sh
  ./codexlb run
  ./codexlb run exec --json "fix this"
  ```

Build and run with Compose:

```sh
docker compose up -d --build
```

Stop:

```sh
docker compose down
```

Defaults:
- Proxy listens on `0.0.0.0:8765` in the container and is published as `127.0.0.1:8765` on the host.
- Host `~/.codex-lb` is bind-mounted to `/data` in the container.
- The container runs `codexlb proxy --root /data`.
- The image includes both `codexlb` and the `codex` CLI.
- The image sets `HOME=/data` so `codex` can store `~/.codex` when running as an arbitrary UID.
Mounted data under `/data` includes:

- `store.json`
- `config.toml`
- `accounts/`
- `runtime/`
- `logs/`
Optional environment overrides:

- `CODEXLB_ROOT_DIR` (host path to mount instead of `~/.codex-lb`)
- `CODEXLB_UPSTREAM`
- `CODEXLB_BIND_HOST`
- `CODEXLB_PORT`
- `UID`/`GID` (container runtime user)

Optional build args:

- `CODEX_NPM_VERSION` (defaults to `latest`)

CLI environment overrides:

- `CODEXLB_ROOT` sets the default `--root` for commands that operate on the local store.
- `CODEXLB_PROXY_URL` sets the default `--proxy-url` for commands that talk to a proxy or remote admin API.
- `CODEXLB_PROXY_NAME` overrides `proxy.name` for the running proxy process.
- `CODEXLB_CHILD_PROXY_URLS` overrides `proxy.child_proxy_urls` for the running proxy process.
Default root is `~/.codex-lb`.
| Path | Purpose |
|---|---|
| `~/.codex-lb/store.json` | Runtime state (accounts, quotas, active account) |
| `~/.codex-lb/config.toml` | Tunable settings |
| `~/.codex-lb/accounts/<alias>/auth.json` | Per-account auth |
| `~/.codex-lb/runtime/` | Wrapper runtime `CODEX_HOME` |
| `~/.codex-lb/logs/proxy.current.jsonl` | Proxy event log |
| `~/.codex-lb/logs/launchd.stdout.log` | launchd stdout (if installed) |
| `~/.codex-lb/logs/launchd.stderr.log` | launchd stderr (if installed) |
codexlb creates `~/.codex-lb/config.toml` on first run. This file is the source of truth for settings.
```toml
[proxy]
name = "proxy-a"
listen = "127.0.0.1:8765"
upstream_base_url = "https://chatgpt.com/backend-api"

# Optional: route through other codexlb proxies instead of local accounts.
# Child proxies receive the original request path unchanged, and can themselves
# chain to more child proxies without a hard depth limit.
child_proxy_urls = []

# Global default URL used by commands that talk to the proxy
# (`run`, `status`, and `proxy logs`) when --proxy-url is not provided.
# If empty, falls back to "http://<proxy.listen>".
proxy_url = ""

max_attempts = 3
usage_timeout_ms = 30000
cooldown_default_seconds = 5

[policy]
mode = "usage_balanced" # usage_balanced | sticky | round_robin
delta_percent = 10

[policy.weights]
daily = 60
weekly = 40

[quota]
refresh_interval_minutes = 10
refresh_interval_messages = 10
cache_ttl_minutes = 30

[commands]
# Base command for `codexlb account login`.
login = ["login"]
# Prefix prepended to args passed to `codexlb run`.
# Example: run Codex in yolo mode by default.
run = ["exec", "--yolo"]

[run]
# Run codex via the current shell (`$SHELL -lc ...`).
inherit_shell = true
```

If `proxy.name` is omitted, codexlb generates a random stable name on first load and writes it back into `config.toml`.
Hot reload behavior:

- Proxy polls `config.toml` and reloads updates automatically (default: every 1s)
- `proxy.listen` changes are detected but require a proxy restart to take effect
- `proxy.child_proxy_urls` changes apply without restart
- CLI flags on `codexlb proxy` override settings for that process only (not persisted)
When `proxy.child_proxy_urls` is non-empty, the proxy selects among those child proxies with the same policy (`usage_balanced`, `sticky`, `round_robin`) it normally uses for local accounts. This makes chained proxies opaque to clients: consumers still talk only to the top-level proxy. `codexlb status` includes the proxy name for each account so you can see which proxy in the chain owns it.
Selection happens per request:

- Refresh runtime state (expire cooldowns, maybe refresh quotas).
- Build the healthy account set:
  - `enabled = true`
  - no `disabled_reason`
  - `cooldown_until_ms` in the past
- Select by policy.

If no healthy account exists, the proxy returns 503.
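The healthy-set filter above can be sketched as a pure function. The `account` struct and its field names here are illustrative, not codexlb's actual data model.

```go
package main

import "fmt"

// account holds just the fields the health filter needs; names are
// illustrative, not codexlb's real types.
type account struct {
	Alias           string
	Enabled         bool
	DisabledReason  string
	CooldownUntilMs int64
}

// healthy returns accounts eligible for selection: enabled, with no
// disabled_reason, and with any cooldown already in the past.
func healthy(accts []account, nowMs int64) []account {
	var out []account
	for _, a := range accts {
		if a.Enabled && a.DisabledReason == "" && a.CooldownUntilMs <= nowMs {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	now := int64(1_000_000)
	accts := []account{
		{Alias: "alice", Enabled: true},
		{Alias: "bob", Enabled: true, CooldownUntilMs: now + 5_000}, // still cooling down
		{Alias: "carol", Enabled: true, DisabledReason: "401"},     // disabled on auth error
	}
	for _, a := range healthy(accts, now) {
		fmt.Println(a.Alias) // only "alice" survives the filter
	}
}
```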
For each account:

- `daily_remaining = clamp((daily_limit - daily_used) / daily_limit, 0..1)`
- `weekly_remaining = clamp((weekly_limit - weekly_used) / weekly_limit, 0..1)`
- Unknown window defaults to `0.30`
- Score uses normalized `[policy.weights]`

Policy behavior:

- `usage_balanced`: choose the highest score, with `delta_percent` hysteresis
- `sticky`: keep the active account while healthy, else fall back to the first healthy one
- `round_robin`: rotate through healthy accounts
- A pinned account (if set and healthy) overrides normal policy
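The scoring formulas above can be worked through in a few lines. This is a sketch under assumptions: the function names are invented, and the hysteresis comparison is one plausible interpretation of `delta_percent`.

```go
package main

import "fmt"

// clampRemaining implements clamp((limit - used) / limit, 0..1) from the
// formulas above; an unknown window (limit <= 0) defaults to 0.30.
func clampRemaining(limit, used float64) float64 {
	if limit <= 0 {
		return 0.30
	}
	r := (limit - used) / limit
	if r < 0 {
		return 0
	}
	if r > 1 {
		return 1
	}
	return r
}

// score combines the daily and weekly remaining fractions using normalized
// [policy.weights].
func score(dailyLimit, dailyUsed, weeklyLimit, weeklyUsed, wDaily, wWeekly float64) float64 {
	total := wDaily + wWeekly
	return (wDaily*clampRemaining(dailyLimit, dailyUsed) +
		wWeekly*clampRemaining(weeklyLimit, weeklyUsed)) / total
}

// shouldSwitch sketches delta_percent hysteresis: only leave the active
// account when a candidate beats its score by more than the delta.
func shouldSwitch(activeScore, bestScore, deltaPercent float64) bool {
	return bestScore-activeScore > deltaPercent/100
}

func main() {
	// Weights from the sample config: daily = 60, weekly = 40.
	s := score(100, 40, 700, 350, 60, 40) // 0.6*0.6 + 0.4*0.5 = 0.56
	fmt.Printf("%.2f\n", s)
	// A 0.04 lead does not clear the 10-point hysteresis band: no switch.
	fmt.Println(shouldSwitch(s, 0.60, 10))
}
```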
Per request attempt (up to `proxy.max_attempts`):

- `2xx`: success
- `429`/`5xx`: cooldown, then retry with another healthy account
- `401`/`403`: disable the account, retry with another account
- transport error: default cooldown, then retry

After attempts are exhausted, the proxy returns the last upstream response (or 503).

For chained proxies, the same retry loop applies at the child-proxy layer. Child proxy selection is driven by each child proxy's `/status` response, so multi-hop chains keep working recursively.
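The per-attempt rules above reduce to a small classification function. This is a sketch only: the names are invented, and the real loop also enforces `proxy.max_attempts` and the cooldown/disable bookkeeping.

```go
package main

import "fmt"

// action is what the retry loop does with one upstream response.
type action string

const (
	returnResponse action = "return"   // 2xx (and any other status once attempts run out)
	cooldownRetry  action = "cooldown" // 429/5xx or transport error: cool down, try another account
	disableRetry   action = "disable"  // 401/403: disable the account, try another
)

// classify maps one upstream outcome to the failover action described above.
func classify(status int, transportErr bool) action {
	switch {
	case transportErr:
		return cooldownRetry
	case status == 401 || status == 403:
		return disableRetry
	case status == 429 || status >= 500:
		return cooldownRetry
	default:
		return returnResponse
	}
}

func main() {
	fmt.Println(classify(200, false)) // return
	fmt.Println(classify(429, false)) // cooldown
	fmt.Println(classify(503, false)) // cooldown
	fmt.Println(classify(401, false)) // disable
	fmt.Println(classify(0, true))    // cooldown
}
```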
Run the local LB proxy.

Usage:

```sh
codexlb proxy [flags]
```

To fetch logs from a running proxy instance:

```sh
codexlb proxy logs [flags]
```

Key flags:
| Flag | Description |
|---|---|
| `--root` | State directory (default `~/.codex-lb`) |
| `--listen` | Listen address (example `127.0.0.1:8765`) |
| `--upstream` | Upstream base URL |
| `--max-attempts` | Retry attempts per request |
| `--usage-timeout-ms` | Usage API timeout |
| `--cooldown-default-seconds` | Fallback cooldown |
| `--quota-refresh-minutes` | Time-based quota refresh interval |
| `--quota-refresh-messages` | Message-count quota refresh interval |
| `--quota-cache-ttl-minutes` | Quota cache TTL |
Examples:

```sh
codexlb proxy
codexlb proxy --listen 127.0.0.1:9000 --upstream https://chatgpt.com/backend-api
CODEXLB_ROOT=/tmp/codexlb codexlb proxy
```

Fetch or tail proxy event logs over HTTP from a running proxy.
Usage:

```sh
codexlb proxy logs [--root DIR] [--proxy-url URL] [--tail 100] [--offset N] [--limit 500] [--follow] [--interval 2s]
```

Notes:

- Use `--proxy-url` for remote instances.
- `--follow` polls `/logs` with byte offsets and prints only new lines.
- `CODEXLB_ROOT` and `CODEXLB_PROXY_URL` provide the default values for `--root` and `--proxy-url`.
Create/use `~/.codex-lb/accounts/<alias>` as `CODEX_HOME` and execute the login command.

Usage:

```sh
codexlb account login [--root DIR] [--proxy-url URL] [--proxy-name NAME] [--codex-bin PATH] <alias> [-- <extra-login-args...>]
```

Notes:

- `commands.login` is prepended before extra args.
- With `--proxy-url`, runs login locally and uploads the resulting account data to the remote proxy.
- `--proxy-name` targets a named proxy recursively from the configured proxy chain rooted at `--proxy-url`, `CODEXLB_PROXY_URL`, or `proxy.proxy_url`.
- The CLI sends the target name to the root proxy admin API; proxies forward recursively, so child proxy URLs do not need to be reachable from the CLI host.
- With `--proxy-name`, the login itself runs on the target proxy, which is required when auth must happen from that proxy's network namespace.
- With `--proxy-name` and no extra login args, the target proxy defaults to device auth when its configured `commands.login` does not already include `--device-auth`, and streams the device-login output back to the CLI.
Import an existing Codex home auth.

Usage:

```sh
codexlb account import [--root DIR] [--proxy-url URL] [--proxy-name NAME] [--into local|proxy] [--from <CODEX_HOME>] [<alias>]
```

Notes:

- `--from` is always read on the local machine running the CLI.
- If `--from` is omitted, it defaults to `CODEX_HOME` or `~/.codex`.
- If `<alias>` is omitted, codexlb derives one from the source `config.toml`/auth when possible, otherwise generates a random alias.
- `--into local` imports into the local store under `~/.codex-lb/accounts/<alias>` (or `--root`).
- `--into proxy` uploads the local `auth.json` and optional `config.toml` to the remote proxy admin API.
- `--into proxy` requires `--proxy-url`, `--proxy-name`, `CODEXLB_PROXY_URL`, or `proxy.proxy_url`.
- When `--proxy-name` is used, the selected root proxy forwards the import recursively to the named child proxy.
List enrolled accounts and a health/state summary.

Usage:

```sh
codexlb account list [--root DIR] [--proxy-url URL] [--proxy-name NAME]
```

Remove an account and its stored account directory.

Usage:

```sh
codexlb account rm [--root DIR] [--proxy-url URL] [--proxy-name NAME] <alias>
```

Pin selection to a specific account alias.

Usage:

```sh
codexlb account pin [--root DIR] [--proxy-url URL] [--proxy-name NAME] <alias>
```

Clear the pinned account selection.

Usage:

```sh
codexlb account unpin [--root DIR] [--proxy-url URL] [--proxy-name NAME]
```

Run `codex login` inside a published Docker image, complete the OpenAI login flow in Chromium, and import the resulting credentials back into the host `CODEX_HOME` and a named codexlb account alias.
Usage:

```sh
codexlb login-with [--root DIR] <alias> --username <email> (--password <password> | --password-stdin) [--codex-home DIR] [--docker-network NAME] [--docker-image TAG] [--timeout 10m]
```

Notes:

- The first positional argument is the codexlb account alias that will receive the imported auth under `~/.codex-lb/accounts/<alias>` (or the chosen `--root`).
- The container is attached to the Docker network selected by `--docker-network`, so you can point it at a VPN-enabled network namespace when needed.
- Credentials are also written back into the host `CODEX_HOME` (`$CODEX_HOME` when set, otherwise `~/.codex`).
- By default the command uses the published image `ghcr.io/gngeorgiev/agent-lb-proxy-login:latest`; `--docker-image` overrides it.
- `--password-stdin` avoids leaking the password into shell history.
- The automation is designed for username/password sign-in. If OpenAI prompts for CAPTCHA, MFA, or another interactive checkpoint, the containerized flow may still require manual handling.
For local CLI overrides without repeating flags:

```sh
export CODEXLB_ROOT=/path/to/state
export CODEXLB_PROXY_URL=http://127.0.0.1:9000
export CODEXLB_PROXY_NAME=edge-main
export CODEXLB_CHILD_PROXY_URLS=http://10.0.0.11:8765,http://10.0.0.12:8765
```

These act as defaults for `--root` and `--proxy-url`. An explicit flag still wins over the environment.
When `--proxy-url` is used for account commands, codexlb calls these proxy endpoints:

- `GET /admin/accounts`
- `POST /admin/account/import`
- `POST /admin/account/rm`
- `POST /admin/account/pin`
- `POST /admin/account/unpin`
- `GET /admin/runtime-auth`

Security notes:

- The admin API is currently unauthenticated; expose it only on trusted networks (or behind your own auth/TLS layer).
- `GET /admin/runtime-auth` returns the selected account's raw `auth.json` payload for runtime bootstrapping; treat it as highly sensitive credential material.
Run Codex with proxy env wiring.

Usage:

```sh
codexlb run [--root DIR] [--proxy-url URL] [--codex-bin PATH] [--codex-home DIR] [--command] [<codex-args...>]
```

Flags:
| Flag | Description |
|---|---|
| `--root` | State directory |
| `--proxy-url` | Override proxy URL (default `proxy.proxy_url`, else `http://<listen-from-store>`) |
| `--codex-bin` | Codex executable path |
| `--codex-home` | Wrapper runtime `CODEX_HOME` |
| `--command` | Print the wrapped command and exit (do not execute) |
Runtime env behavior:

- Sets `OPENAI_BASE_URL` to the proxy URL
- Sets `OPENAI_API_KEY=codex-lb-local-key` if missing
- Uses `CODEX_HOME` from `--codex-home` or `~/.codex-lb/runtime`
- Runs through `$SHELL -lc` when `run.inherit_shell = true` (default)
- If the runtime `auth.json` is missing/invalid, seeds it from an enrolled account when available
- If no local accounts are enrolled, attempts to fetch runtime auth from the remote proxy via `GET /admin/runtime-auth`
- If remote auth is unavailable, writes a local proxy-only stub auth (includes both `access_token` and `id_token`)
- Prepends `commands.run` to the passed args
Examples:

```sh
codexlb run
codexlb run --command exec --json "ping"
codexlb run -- --json "ping"   # pass flags that start with '-'
```

Query `GET /status` from the proxy.
The default output renders a table with active/pin markers, account identity, health/cooldown state, quota percentages, score, and last-switch metadata.
The proxy serves cached status immediately and refreshes quota and child-proxy data in the background, so `codexlb status` does not block on slow upstream health checks.
The proxy process also runs the same quota/auth refresh path periodically while idle, using `quota.refresh_interval_minutes` as the freshness target, so long-unused accounts can rotate expired auth before they are selected for live traffic.
Usage:

```sh
codexlb status [--root DIR] [--proxy-url URL] [--timeout 3s] [--short | --json]
```

Flags:
| Flag | Description |
|---|---|
| `--root` | State directory (for default proxy URL resolution) |
| `--proxy-url` | Explicit proxy URL (default `proxy.proxy_url`, else `http://<listen-from-store>`) |
| `--timeout` | HTTP timeout (default 3s) |
| `--short` | One-line output for status bars |
| `--json` | Raw JSON output |
`--short` format:

```
lb=<alias> reason=<switch-reason> mode=<policy-mode>
```
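For status-bar scripting, the `--short` line can be split on `key=value` fields. A minimal Go sketch (the `parseShort` helper and the sample `reason` value are invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseShort splits a --short status line of space-separated key=value
// fields into a map, matching the format shown above.
func parseShort(line string) map[string]string {
	out := make(map[string]string)
	for _, field := range strings.Fields(line) {
		if k, v, ok := strings.Cut(field, "="); ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	st := parseShort("lb=alice reason=rebalance mode=usage_balanced")
	fmt.Println(st["lb"], st["mode"]) // alice usage_balanced
}
```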
```sh
make help
make build
make install
make test
make test-real
make fmt
make run-proxy
make status
make install-daemons
make uninstall-daemons
make install-systemd
make install-launchd
```

Configurable variables:

- `ROOT` (default `~/.codex-lb`)
- `BINARY_WORKDIR` (directory containing the daemon binary, default current repo dir)
- `LISTEN` (default `127.0.0.1:8765`)
- `UPSTREAM` (default `https://chatgpt.com/backend-api`)
- `PREFIX` (default `/usr/local`)
- `BINDIR` (default `$(PREFIX)/bin`)
- `DESTDIR` (optional staging root)
```sh
make install-systemd
systemctl --user status codexlb-proxy.service
```

Use a binary from a separate workdir:

```sh
make install-systemd BINARY_WORKDIR=/path/to/workdir
```

The installed daemon runs `codexlb proxy --root <ROOT>` and reads listen/upstream from `config.toml`, so service restarts do not overwrite runtime config values.

Unit path: `~/.config/systemd/user/codexlb-proxy.service`
```sh
make install-launchd
launchctl print gui/$(id -u)/com.codexlb.proxy
```

Plist path: `~/Library/LaunchAgents/com.codexlb.proxy.plist`
```sh
make install-daemons
make uninstall-daemons
```

Event examples in `~/.codex-lb/logs/proxy.current.jsonl`:

- `request.received`
- `request.account_selected`
- `request.switched`
- `account.cooldown`
- `account.disabled`
- `quota.refreshed`
- `config.reloaded`
- `config.reload_failed`
Standard suite:

```sh
go test ./...
```

With the real Codex override check:

```sh
CODEXLB_RUN_REAL_CODEX_TEST=1 go test ./...
```

Verified in this environment (codex-cli 0.107.0):

- Codex honors `OPENAI_BASE_URL`
- Traffic is sent to `<OPENAI_BASE_URL>/responses` (WebSocket first, HTTPS fallback)

Real check command:

```sh
CODEXLB_RUN_REAL_CODEX_TEST=1 go test ./internal/lb -run TestRealCodexUsesOPENAI_BASE_URL -v
```

```sh
codexlb --help
codexlb proxy --help
codexlb account login --help
codexlb account pin --help
codexlb run --help
codexlb status --help
```