Control-plane orchestration for void-box runtime execution.
void-control should be understood as a host for orchestration strategies, not
as a single-purpose swarm console.
Current direction:
- swarm: first implemented orchestration strategy
- supervision: implemented orchestrator-worker strategy
Shared control-plane primitives across strategies:
- execution specs and policies
- candidate planning and reduction
- persisted control-plane events
- message-box / MCP-backed collaboration state
- graph-first execution inspection in the UI
The strategy changes the orchestration semantics. It should not require a different product surface or a different backend contract family.
Click the preview above for the full-quality MP4, or use the direct file link: void-control demo video.
This recording shows the canonical first-release flow:
- a live 3-agent swarm execution
- graph-first orchestration inspection
- right-side metrics and event inspection
- runtime drill-down through Open Runtime Graph
Direct link: void-control swarm execution demo.
What this example does:
- runs three sibling optimization strategies against the same Transform-02 workload
- compares candidates by measured metrics, not invented estimates
- uses swarm reduction to select the best candidate for the iteration
What to look for:
- candidate fan-out in the graph
- metrics and event inspection on the right
- winner selection and runtime drill-down through Open Runtime Graph
This recording shows the supervision operator flow in the real UI:
- Launch Spec with the checked-in supervision example
- supervision execution selection in the left rail
- supervision graph in the center pane
- supervision-specific inspector state on the right
- runtime drill-down through Open Runtime Graph
Direct link: void-control supervision execution demo.
What this example does:
- runs three specialized Transform-02 workers under one supervisor
- collects each worker output and evaluates metrics.approved
- finalizes only after the workers are reviewed and approved
What to look for:
- supervisor-to-worker graph semantics instead of swarm fan-out/ranking
- review and approval state in the right inspector
- finalization flow and runtime drill-down through Open Runtime Graph
- Current release target: v0.0.2
- Release artifacts are published through GitHub Releases
- Supported void-box baseline for v0.0.2: void-box v0.1.2 or an equivalent validated production build
- Release process and compatibility gate details: docs/release-process.md
void-control is the control-plane side of the stack:
- launches and manages runtime work on void-box
- normalizes runtime payloads into a stable control-plane contract
- plans and tracks orchestration executions across multiple candidates
- persists execution, event, candidate, and message-box state
- provides terminal-first and graph-first operator UX
- enforces runtime contract compatibility with void-box
- Architecture: docs/architecture.md
- Contributor and agent guide: AGENTS.md
- Release and compatibility process: docs/release-process.md
- Execution examples and live swarm workflow: examples/README.md
- spec/: Runtime and orchestration contracts.
- src/: Rust orchestration client/runtime normalization logic.
- templates/: File-backed template-first API definitions for single-agent and warm-agent execution.
- tests/: Contract and compatibility tests.
- web/void-control-ux/: React operator dashboard (graph + inspector).
The daemon defaults to AF_UNIX at mode 0o600 and auto-discovers a
socket path under $XDG_RUNTIME_DIR/voidbox.sock →
$TMPDIR/voidbox-$UID.sock → /tmp/voidbox-$UID.sock. Same-uid
invocations of void-control find it without configuration.
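The discovery order above can be sketched as a small shell helper. This is an illustration of the documented precedence only; the daemon's actual discovery logic lives in void-box.

```shell
# Illustrative sketch of the documented socket discovery order:
# $XDG_RUNTIME_DIR/voidbox.sock -> $TMPDIR/voidbox-$UID.sock -> /tmp/voidbox-$UID.sock
discover_socket() {
  if [ -n "${XDG_RUNTIME_DIR:-}" ]; then
    echo "$XDG_RUNTIME_DIR/voidbox.sock"
  elif [ -n "${TMPDIR:-}" ]; then
    echo "${TMPDIR%/}/voidbox-$(id -u).sock"
  else
    echo "/tmp/voidbox-$(id -u).sock"
  fi
}
discover_socket
```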
Linux:
cargo run --bin voidbox -- serve

macOS (Apple Silicon, Virtualization.framework):
# after building the guest kernel + rootfs per void-box's macOS guide
VOID_BOX_KERNEL=target/vmlinuz \
VOID_BOX_INITRAMFS=target/void-box-claude.cpio.gz \
cargo run --bin voidbox -- serve

To listen on TCP instead (cross-uid deployments, dev hosts where AF_UNIX permissions are inconvenient):

cargo run --bin voidbox -- serve --listen tcp://127.0.0.1:43100

TCP requires a bearer token. The daemon resolves it from
VOIDBOX_DAEMON_TOKEN_FILE, then VOIDBOX_DAEMON_TOKEN, then a
generated 0o600 file at $XDG_CONFIG_HOME/voidbox/daemon-token (or
~/.config/voidbox/daemon-token). void-control reads the same chain
when VOID_BOX_BASE_URL is set to a TCP URL.
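The token resolution chain can be mirrored in shell for debugging which source a deployment will pick up. This sketch only models the documented order; the real resolution happens inside the daemon and void-control.

```shell
# Sketch of the documented token chain:
# VOIDBOX_DAEMON_TOKEN_FILE -> VOIDBOX_DAEMON_TOKEN -> generated 0o600 file
resolve_token() {
  if [ -n "${VOIDBOX_DAEMON_TOKEN_FILE:-}" ] && [ -f "$VOIDBOX_DAEMON_TOKEN_FILE" ]; then
    cat "$VOIDBOX_DAEMON_TOKEN_FILE"
  elif [ -n "${VOIDBOX_DAEMON_TOKEN:-}" ]; then
    echo "$VOIDBOX_DAEMON_TOKEN"
  else
    token_file="${XDG_CONFIG_HOME:-$HOME/.config}/voidbox/daemon-token"
    if [ ! -f "$token_file" ]; then
      mkdir -p "$(dirname "$token_file")"
      head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n' > "$token_file"
      chmod 600 "$token_file"
    fi
    cat "$token_file"
  fi
}
```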
macOS requires VOID_BOX_KERNEL and VOID_BOX_INITRAMFS pointing at the
pre-built guest artifacts. The initramfs filename on macOS is
target/void-box-claude.cpio.gz (not the Linux void-box-rootfs.cpio.gz).
Running through cargo run also applies codesign automatically — direct
binary invocation needs manual codesigning. See the void-box macOS guide
for full details.
cargo test --features serde

All test targets are currently gated behind the serde feature, so
cargo test without it runs zero tests. Use --features serde as the
one validation path.
The contract gate dials the daemon directly and needs VOID_BOX_BASE_URL
set explicitly. AF_UNIX form mirrors the production default; the TCP
form is for hosts that listen on TCP.
# AF_UNIX (default daemon listener)
VOID_BOX_BASE_URL=unix://$XDG_RUNTIME_DIR/voidbox.sock \
cargo test --features serde --test void_box_contract -- --ignored --nocapture
# TCP
VOID_BOX_BASE_URL=http://127.0.0.1:43100 \
cargo test --features serde --test void_box_contract -- --ignored --nocapture

cd web/void-control-ux
npm install
npm run dev

The dev server proxies /api to the void-control bridge at
http://127.0.0.1:43210 (see vite.config.ts). The bridge dispatches to
the daemon over AF_UNIX (default) or TCP under the hood; browsers don't
speak AF_UNIX, so the browser hop must terminate at the bridge. Leave
VITE_VOID_BOX_BASE_URL unset during local dev — only override it when
the daemon is reachable from the browser via a CORS-aware reverse proxy.
Start the bridge in a second terminal:

cargo run --features serde --bin voidctl -- serve

To point the UI at the bridge directly instead, run the bridge in another terminal:

cargo run --features serde --bin voidctl -- serve

Then start the UI with the bridge URL:
cd web/void-control-ux
VITE_VOID_CONTROL_BASE_URL=http://127.0.0.1:43210 \
npm run dev

The bridge serves CORS headers, so VITE_VOID_CONTROL_BASE_URL can point
directly at it. Continue to leave VITE_VOID_BOX_BASE_URL unset so the
Vite /api proxy is used for daemon calls.
Use the three-candidate swarm as the default validation path:
curl -sS -X POST http://127.0.0.1:43210/v1/executions \
-H 'Content-Type: text/yaml' \
--data-binary @examples/swarm-transform-optimization-3way.yaml

examples/swarm-transform-optimization.yaml remains available as the wider
eight-candidate stress case, but it is less reliable for routine validation.
This is also the canonical first-release orchestration workflow:
- load a top-level orchestration YAML
- launch through the bridge or UI
- inspect the execution graph, inspector, and event stream
- follow candidate metrics and leader/broadcast collaboration events
Phase 1 also exposes file-backed templates through the bridge:
curl -sS http://127.0.0.1:43210/v1/templates
curl -sS http://127.0.0.1:43210/v1/templates/single-agent-basic
curl -sS -X POST http://127.0.0.1:43210/v1/templates/single-agent-basic/dry-run \
-H 'Content-Type: application/json' \
-d '{
"inputs": {
"goal": "Summarize this repo",
"prompt": "Read the repo and summarize risks",
"provider": "claude"
}
}'
curl -sS -X POST http://127.0.0.1:43210/v1/templates/warm-agent-basic/execute \
-H 'Content-Type: application/json' \
-d '{
"inputs": {
"goal": "Keep a warm agent ready",
"prompt": "Stay alive for follow-up repo work."
}
}'

These template endpoints compile into normal ExecutionSpec objects and then
reuse the existing dry-run and execution creation flow. Phase 1 ships two
starter templates:
- single-agent-basic
- warm-agent-basic
Terminal access is also available through voidctl:
voidctl template list
voidctl template get single-agent-basic
voidctl template dry-run single-agent-basic template-inputs.json
voidctl template execute warm-agent-basic template-inputs.json

template-inputs.json must be a JSON request body in the same shape the bridge
accepts, for example:
{
"inputs": {
"goal": "Summarize this repo",
"prompt": "Read the repo and summarize risks",
"provider": "claude"
}
}

Inside the interactive voidctl console, the same surface is available as:
/template list
/template get single-agent-basic
/template dry-run single-agent-basic template-inputs.json
/template execute warm-agent-basic template-inputs.json
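Since both the CLI and console paths read the same file, it can help to write template-inputs.json once and confirm it parses before submitting. A minimal sketch, using the request body shown above:

```shell
# Write the request body and verify it is valid JSON before
# handing it to voidctl template dry-run / execute.
cat > template-inputs.json <<'EOF'
{
  "inputs": {
    "goal": "Summarize this repo",
    "prompt": "Read the repo and summarize risks",
    "provider": "claude"
  }
}
EOF
python3 -m json.tool template-inputs.json > /dev/null && echo "valid JSON"
```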
batch is the canonical high-level surface for remote background work that
fans out one worker template across multiple prompts. yolo is an accepted
alias for the same API and CLI path.
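The fan-out shape (one worker template, N prompts, bounded parallelism) can be modeled with plain xargs. This is only an analogy for the dispatch semantics; void-control actually runs each job as a void-box worker.

```shell
# Model of batch fan-out: 3 prompts, parallelism 2, one "worker" per prompt.
# Illustration only; real dispatch happens inside void-control.
printf '%s\n' \
  "Fix failing auth tests" \
  "Improve retry logging" \
  "Review DB migration safety" \
| xargs -I{} -P 2 sh -c 'echo "worker start: {}"'
```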
Bridge routes:
curl -sS -X POST http://127.0.0.1:43210/v1/batch/dry-run \
-H 'Content-Type: application/json' \
-d '{
"api_version": "v1",
"kind": "batch",
"worker": {
"template": "examples/runtime-templates/warm_agent_basic.yaml",
"provider": "claude"
},
"mode": {
"parallelism": 2
},
"jobs": [
{ "prompt": "Fix failing auth tests" },
{ "prompt": "Improve retry logging" },
{ "prompt": "Review DB migration safety" }
]
}'
curl -sS -X POST http://127.0.0.1:43210/v1/yolo/run \
-H 'Content-Type: application/json' \
-d '{
"api_version": "v1",
"kind": "yolo",
"worker": {
"template": "examples/runtime-templates/warm_agent_basic.yaml"
},
"jobs": [
{ "prompt": "Review migration safety" }
]
}'

CLI:
voidctl batch dry-run examples/batch/background_repo_work.yaml
voidctl batch run examples/batch/background_repo_work.yaml
cat examples/batch/background_repo_work.yaml | voidctl yolo run --stdin

Interactive console:
/batch dry-run examples/batch/background_repo_work.yaml
/batch run examples/batch/background_repo_work.yaml
/yolo run examples/batch/background_repo_work.yaml
team is the phase-1 high-level multi-agent authoring surface. Users define
agents, tasks, and a process, and void-control compiles that into the
existing orchestration engine.
Current phase-1 limitation:
- depends_on is not supported yet
- sequential preserves task ordering, but does not thread task outputs between agents
HTTP:
curl -sS -X POST http://127.0.0.1:43210/v1/teams/dry-run \
-H 'Content-Type: text/yaml' \
--data-binary @examples/team/rust_article_team.yaml
curl -sS -X POST http://127.0.0.1:43210/v1/teams/run \
-H 'Content-Type: text/yaml' \
--data-binary @examples/team/rust_article_team.yaml

CLI:
voidctl team dry-run examples/team/rust_article_team.yaml
voidctl team run examples/team/rust_article_team.yaml
cat examples/team/rust_article_team.yaml | voidctl team run --stdin

Interactive console:
/team dry-run examples/team/rust_article_team.yaml
/team run examples/team/rust_article_team.yaml
Use the checked-in supervision example to exercise the flat orchestrator-worker path:
curl -sS -X POST http://127.0.0.1:43210/v1/executions \
-H 'Content-Type: text/yaml' \
--data-binary @examples/supervision-transform-review.yaml

Current v1 supervision contract:
- workers still run a normal runtime template on void-box
- approval is reducer-driven in void-control
- worker output must include metrics.approved
- the bundled supervision worker template appends that metric after the measured benchmark run
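The contract only requires that the worker's output JSON end up containing metrics.approved. A worker-side append step might look like the sketch below; every field other than metrics.approved is an illustrative placeholder, not part of the documented contract.

```shell
# Sketch: append metrics.approved to a worker's output JSON after its benchmark run.
# Only the metrics.approved key is contractual; the other fields are placeholders.
cat > worker-output.json <<'EOF'
{"metrics": {"latency_p99_ms": 1.74}}
EOF
python3 - <<'EOF'
import json

with open("worker-output.json") as f:
    out = json.load(f)

# The reducer in void-control gates finalization on this flag.
out.setdefault("metrics", {})["approved"] = True

with open("worker-output.json", "w") as f:
    json.dump(out, f)
EOF
```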
Rust validation:
cargo fmt --all -- --check
cargo clippy --all-targets --all-features -- -D warnings
cargo test --features serde
RUSTDOCFLAGS="-D warnings" cargo doc --no-deps --all-features

UI validation:
cd web/void-control-ux
npm ci
npm run build

Optional local pre-commit setup:
pip install pre-commit
pre-commit install
pre-commit run --all-files

cargo run --features serde --bin voidctl

The non-interactive voidctl execution ... commands are the terminal equivalent
of the UI launcher and inspector.
Submit an orchestration spec:
voidctl execution submit examples/swarm-transform-optimization-3way.yaml

Dry-run the same spec:
voidctl execution dry-run examples/swarm-transform-optimization-3way.yaml

Submit a generated spec from stdin:
cat generated.yaml | voidctl execution submit --stdin

Inspect and follow an execution:
voidctl execution watch <execution-id>
voidctl execution inspect <execution-id>
voidctl execution events <execution-id>
voidctl execution result <execution-id>
voidctl execution runtime <execution-id>

Template-backed agent runs use the voidctl template ... surface and expect a
JSON request body on disk or stdin:
{
"inputs": {
"goal": "Summarize this repo",
"prompt": "Read the repo and summarize risks",
"provider": "claude"
}
}

Dry-run and execute a checked-in template:
voidctl template list
voidctl template get single-agent-basic
voidctl template dry-run single-agent-basic template-inputs.json
voidctl template execute warm-agent-basic template-inputs.json

The interactive voidctl console exposes the same path:
/template list
/template get single-agent-basic
/template dry-run single-agent-basic template-inputs.json
/template execute warm-agent-basic template-inputs.json
Example execution:
problem:
optimize Transform-02 with multiple competing approaches in parallel
generated flow:
voidctl execution submit --stdin
execution_id: exec-1775679556549
status: Completed
winner: candidate-2
strategy: vectorized-parse
runtime_run_id: run-1775679567037
Final candidate scores from that run:
candidate-1 baseline latency_p99_ms=3.027 cpu_pct=93.4 error_rate=0.333
candidate-2 vectorized-parse latency_p99_ms=1.740 cpu_pct=75.8 error_rate=0.333
candidate-3 cache-aware latency_p99_ms=3.287 cpu_pct=91.0 error_rate=0.333
candidate-4 high-throughput latency_p99_ms=2.110 cpu_pct=97.0 error_rate=0.333
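The reduction behind the winner line can be sanity-checked directly from the score table. A minimal sketch that picks the lowest latency_p99_ms, which matches the winner above but is only one plausible reduction policy; the actual reducer may also weigh cpu_pct and error_rate:

```shell
# Pick the candidate with the lowest latency_p99_ms from the score table.
# Illustrative only: the configured reducer may use a different policy.
cat > scores.txt <<'EOF'
candidate-1 baseline latency_p99_ms=3.027
candidate-2 vectorized-parse latency_p99_ms=1.740
candidate-3 cache-aware latency_p99_ms=3.287
candidate-4 high-throughput latency_p99_ms=2.110
EOF
sort -t= -k2,2 -n scores.txt | head -n1 | awk '{print $1}'
```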
Follow-up commands for the same execution:
voidctl execution inspect exec-1775679556549
voidctl execution events exec-1775679556549
voidctl execution result exec-1775679556549
voidctl execution runtime exec-1775679556549
voidctl execution runtime exec-1775679556549 candidate-2

The repo packages a void-control skill so Claude or Codex can operate the
control plane from the terminal instead of the UI.
Canonical skill source:
Claude wrapper:
Codex install entrypoint:
Codex follows the same install pattern used by Superpowers: tell Codex to fetch
and follow the repo-hosted .codex/INSTALL.md file.
Example prompts after installation:
- Use the void-control skill to optimize this workload with a swarm.
- Use the void-control skill to run this snapshot pipeline and summarize the result.
- Use the void-control skill to inspect why this execution failed and resolve the runtime run behind it.
- Use the void-control skill to generate a spec from this problem statement and submit it through voidctl.
- Use the void-control skill to dispatch a swarm of agents for a complex problem, let it continue in the background, and later summarize the result.
For Claude-backed swarm/service runs, the skill should prefer the validated service pattern:
- agent.mode: service
- llm.provider: claude
- sandbox.network: true
- agent.output_file set
- runtime-assets directory mount when possible
- agent.messaging.enabled: true for sibling swarm candidates
- Dashboard uses daemon APIs (/v1/runs, /v1/runs/{id}/events, /v1/runs/{id}/stages, /v1/runs/{id}/telemetry).
- Launch Spec supports:
  - orchestration YAML through bridge execution create (POST /v1/executions)
  - raw runtime spec upload through bridge launch (POST /v1/launch)
  - path-only fallback launch (POST /v1/runs) when no spec text is provided



