AI-powered task execution system with pluggable storage backends (GitHub, Linear, TK), dependency-aware scheduling, and smart concurrency calculation. The runner owns task selection, status updates, and logging; agents execute tasks they're given.
- Pluggable Storage Backends: GitHub Issues, Linear, or local TK (markdown) tickets
- Task Engine: Graph-based scheduler with dependency resolution and parent-child hierarchies
- Smart Concurrency: Automatically calculates optimal parallel execution from dependency graphs
- TDD Mode: Strict Red/Green/Refactor enforcement for test-driven development
- Structured Logging: JSONL event streams with log browser TUI
- Installation Scripts: One-line install via `install.sh` or `install.ps1`
- Multi-Backend Support: OpenCode, Codex, Claude, Kimi
- `yolo-agent` - Task orchestration and scheduling
- `yolo-task` - Task management operations
- `yolo-tui` - Real-time event monitoring with log browser
See MIGRATION.md for historical command mapping.
```sh
# macOS/Linux
curl -sSL https://raw.githubusercontent.com/egv/yolo-runner/main/install.sh | bash

# Windows PowerShell
irm https://raw.githubusercontent.com/egv/yolo-runner/main/install.ps1 | iex
```

Or build from source:

```sh
make install
```

Verify the binaries:

```sh
./bin/yolo-agent --version
./bin/yolo-task --version
./bin/yolo-tui --version
```

Yolo-runner supports multiple task storage backends:
```yaml
# .yolo-runner/config.yaml
profiles:
  github:
    tracker:
      type: github
      github:
        scope:
          owner: egv
          repo: yolo-runner
        auth:
          token_env: GITHUB_TOKEN
```

```yaml
profiles:
  linear:
    tracker:
      type: linear
      linear:
        scope:
          workspace: my-workspace
        auth:
          token_env: LINEAR_API_KEY
```

```yaml
profiles:
  tk:
    tracker:
      type: tk
```

TK stores tickets as markdown files in `.tickets/` with frontmatter for metadata.
The production stdin monitor (yolo-tui) follows an Elm-style Model/Update/View architecture and uses:
- Bubble Tea for event-driven terminal application state updates
- Bubbles for reusable UI components and interaction primitives
- Lip Gloss for deterministic styling/layout output
These UI dependencies are mandatory for GUI workflow evolution and should be treated as part of the runtime contract.
- `yolo-agent` owns task selection, dependency-aware scheduling, retries, review, and event emission.
- `yolo-task` exposes direct tracker operations.
- `yolo-tui` and `yolo-webui` consume the event stream for monitoring.
- Loads tasks from tracker/storage backends such as GitHub, Linear, TK, or beads/br.
- Builds a dependency graph and calculates runnable concurrency.
- Runs the selected coding-agent backend for implementation and review.
- Writes structured JSONL events and per-task backend logs.
- Updates task status/data and manages task clones under `.yolo-runner/clones/`.
- `opencode` CLI available.
- `git` installed and repo cloned.
- Go 1.21+ for building the runner.
- `gopls` available on `PATH` (required by Serena/OpenCode for Go language services).
From repo root:
```sh
make build
./bin/yolo-agent --version
./bin/yolo-task --version
./bin/yolo-tui --version
./bin/yolo-webui --version
```

Supported platforms:
| Platform | Architecture | Install Method |
|---|---|---|
| macOS | amd64, arm64 | install.sh, make install, release |
| Linux | amd64, arm64 | install.sh, make install, release |
| Windows | amd64 | install.ps1, release |
Installation verification: docs/install-matrix.md
```sh
make test
```
Run the E8 release gate after self-hosting demos:
```sh
make release-gate-e8
```

Verifies:

- `TestE2E_CodexTKConcurrency2LandsViaMergeQueue`
- `TestE2E_ClaudeConflictRetryPathFinalizesWithLandingOrBlockedTriage`
- `TestE2E_KimiLinearProfileProcessesAndClosesIssue`
- `TestE2E_GitHubProfileProcessesAndClosesIssue`
GitHub Actions:
- `.github/workflows/ci.yml` - Build and test on push/PR
- `.github/workflows/release.yml` - Automated releases on tags
Release Process:
- Tag: `git tag v1.2.3`
- Push: `git push origin v1.2.3`
- Release workflow publishes artifacts
- Install script pulls latest
After completing the E8 self-host demos, run the release gate checklist:
```sh
make release-gate-e8
```
The gate verifies these acceptance tests:
- `TestE2E_CodexTKConcurrency2LandsViaMergeQueue`
- `TestE2E_ClaudeConflictRetryPathFinalizesWithLandingOrBlockedTriage`
- `TestE2E_KimiLinearProfileProcessesAndClosesIssue`
- `TestE2E_GitHubProfileProcessesAndClosesIssue`
It also validates docs contracts for this checklist and the migration guidance.
Copy these bundled assets into the repository-local .opencode/ tree before running OpenCode-backed flows:
- `yolo.md` -> `.opencode/agent/yolo.md`
- `agent/release.md` -> `.opencode/agent/release.md` (when present)
- `skills/task-splitting/SKILL.md` -> `.opencode/skills/task-splitting/SKILL.md`
- `commands/split-tasks.md` -> `.opencode/commands/split-tasks.md`
- `commands/split-tasks-strict.md` -> `.opencode/commands/split-tasks-strict.md`
These files are intentionally repo-local so task clones inherit the same OpenCode agent, skill, and command behavior.
The repo ships a reusable OpenCode task-splitting skill plus two command wrappers:
- `/split-tasks`
- `/split-tasks-strict`
Use them to turn ADRs, PRDs, or broad implementation requests into strict-TDD epics and micro-tasks with explicit dependency order.
Examples:
```sh
/split-tasks @docs/adr/ADR-002-server-backed-agent-runtimes.md
/split-tasks-strict Break this feature request into the smallest useful tasks for an autonomous coding agent.
```
The Task Engine builds a directed graph from task relationships:
- Dependencies: `depends-on` relationships block tasks until dependencies complete
- Parent-Child: Epic/task hierarchies are respected
- Smart Concurrency: Automatically calculated from graph structure
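As a rough illustration of how parallelism can be derived from a graph, the widest "level" of tasks whose dependencies are all satisfied at the same step bounds useful concurrency. This is a sketch of the idea, not the shipped scheduler's algorithm:

```python
def auto_concurrency(deps):
    """deps maps task -> list of tasks it depends on.

    Walks the graph level by level and returns the size of the
    widest level: the most tasks that could ever run at once.
    """
    remaining = {t: set(d) for t, d in deps.items()}
    done = set()
    widest = 0
    while remaining:
        # Tasks whose dependencies are all complete are runnable now.
        ready = [t for t, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("dependency cycle detected")
        widest = max(widest, len(ready))
        done.update(ready)
        for t in ready:
            del remaining[t]
    return widest
```

For example, two independent tasks feeding a third yields a width of 2, matching what `--concurrency auto` would be expected to exploit.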
Example dependency in ticket frontmatter:
```yaml
---
id: task-123
deps: [task-456, task-789]
---
```

Concurrency is calculated dynamically based on the dependency graph:
```sh
# Auto-calculate from graph (respects dependencies)
./bin/yolo-agent --repo . --root <epic> --concurrency auto

# Fixed concurrency (default: 1)
./bin/yolo-agent --repo . --root <epic> --concurrency 3
```

Enforces Red/Green/Refactor workflow:

```sh
./bin/yolo-agent --repo . --root <epic> --tdd
```

When `--tdd` is enabled:
- Tests must be written first (RED)
- Implementation makes tests pass (GREEN)
- Refactor while keeping tests green
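A minimal illustration of the enforced ordering (the `slugify` example is hypothetical, not part of this repo):

```python
# RED: the test is written first; it fails because slugify
# does not exist yet at this point in history.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# GREEN: the smallest implementation that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# REFACTOR: improve the implementation while the test stays green.
test_slugify()
```

Review in `--tdd` mode checks that the commit history reflects this order, not just that tests pass at the end.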
Validates task clarity before execution:
```sh
./bin/yolo-agent --repo . --root <epic> --quality-gate
```

Checks for:
- Clear description
- Concrete acceptance criteria
- No vague language ("maybe", "consider")
- Required fields present
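A simplified sketch of what such a gate might check; the word list and field names are illustrative, not the runner's actual rules:

```python
# Illustrative vague-language word list (the real gate may differ).
VAGUE = {"maybe", "consider", "perhaps", "possibly"}

def quality_gate(task):
    """Return a list of problems; an empty list means the task passes."""
    problems = []
    description = task.get("description", "")
    if not description.strip():
        problems.append("missing description")
    if not task.get("acceptance_criteria"):
        problems.append("missing acceptance criteria")
    # Flag vague wording that makes a task ambiguous for an agent.
    hits = sorted(set(description.lower().split()) & VAGUE)
    if hits:
        problems.append("vague language: " + ", ".join(hits))
    return problems
```

A task that says "maybe add logging" would be rejected until its description states a concrete, checkable outcome.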
Browse logs grouped by task:
```sh
./bin/yolo-tui --events-stdin < runner-logs/run.events.jsonl
```

Features:
- Tree view of tasks and epics
- Search/filter logs
- View agent thoughts and decisions
- Export logs
From repo root:
```sh
./bin/yolo-agent --repo . --root algi-8bt --model gpt-4o
./bin/yolo-agent --repo . --root algi-8bt --dry-run
```
Pipe --stream output into yolo-tui for live monitoring, or use yolo-webui against the distributed bus.
Common options:
- `--max N` - limit number of tasks processed
- `--dry-run` - print the task prompt without running OpenCode
- `--concurrency N` or `--concurrency auto` - parallel task execution (default: 1)
- `--tdd` - enable strict TDD mode (Red/Green/Refactor)
- `--quality-gate` - validate task clarity before execution
- `--mode stream|ui` - set output mode for event delivery
- `--stream` - output JSONL events for TUI consumption
- `--events PATH` - write events to file
- `--retry-budget N` - max retries per task (default: 5)
- `--profile NAME` - use tracker profile from config
- `--backend codex|opencode|claude|kimi|gemini` - agent backend
- `--model MODEL` - model name (e.g., openai/gpt-5.3-codex)
- `--runner-timeout DURATION` - per-task timeout (e.g., 20m)
Use the queue-backed transport with Redis or NATS, started via Podman Compose. Services bind to Tailscale (tailnet) addresses for security - only accessible from within your tailnet.
```sh
make build
make distributed-dev-up
export GITHUB_TOKEN=$(gh auth token)

# Get your tailscale IP (or set YOLO_TAILNET_IP in .env)
export YOLO_TAILNET_IP=$(tailscale ip -4)

./bin/yolo-agent \
  --repo . \
  --root <root-id> \
  --profile github \
  --distributed-bus-backend redis \
  --distributed-bus-address "redis://${YOLO_TAILNET_IP}:16379" \
  --stream | ./bin/yolo-tui --events-stdin
```

Switch to NATS by changing the backend and address:
```sh
./bin/yolo-agent \
  --repo . \
  --root <root-id> \
  --profile github \
  --distributed-bus-backend nats \
  --distributed-bus-address "nats://${YOLO_TAILNET_IP}:14222" \
  --stream | ./bin/yolo-tui --events-stdin
```

When done, stop the containers:

```sh
make distributed-dev-down
```

```sh
# Start Redis and NATS containers (bound to tailnet IP)
make distributed-dev-up

# Stop and remove containers with volumes
make distributed-dev-down
```

These targets use podman compose with `dev/distributed/docker-compose.yml`. Services bind to `YOLO_TAILNET_IP` (default: 100.85.134.92) so they're only accessible from your Tailscale network.
Start the web UI to monitor task queue, task graph, workers, and send control commands:
```sh
export YOLO_TAILNET_IP=$(tailscale ip -4)
./bin/yolo-webui \
  --repo . \
  --listen "${YOLO_TAILNET_IP}:8080" \
  --distributed-bus-backend redis \
  --distributed-bus-address "redis://${YOLO_TAILNET_IP}:16379" \
  --auth-token "${YOLO_WEBUI_TOKEN:-your-secret-token}"
```

Then open in your browser (only accessible from tailnet):

```
http://<your-tailnet-ip>:8080/?token=your-secret-token
```
Features:
- Real-time task queue visualization
- Task graph with status
- Worker summaries
- Control panel to change task status (blocked, in_progress, closed)
- Run history and triage
Fallback backend: When --distributed-bus-backend is omitted, it defaults to redis. Pass --distributed-bus-backend nats to use NATS instead.
Startup: Before starting yolo-agent in distributed mode, verify the bus is reachable:
```sh
# Redis
redis-cli -u "redis://${YOLO_TAILNET_IP}:16379" ping

# NATS
nats account info --server "nats://${YOLO_TAILNET_IP}:14222"
```

If the bus is unavailable at startup, yolo-agent exits immediately with a connection error. Run `make distributed-dev-up` and confirm containers are running before retrying.
Cancellation: Send SIGTERM or press Ctrl+C to stop the scheduler. The agent finishes its current dispatch loop iteration and then exits cleanly. In-flight tasks that were already dispatched to executors continue running; their results are recorded when they complete. Tasks that were queued but not yet dispatched are left in their current state and will be picked up on the next run.
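The shutdown behavior can be sketched as a loop that checks a stop flag between dispatches; this is illustrative Python, and `dispatch_loop` and its arguments are invented names:

```python
import signal

stop = False

def request_stop(signum, frame):
    """Signal handler: ask the loop to exit after the current iteration."""
    global stop
    stop = True

def dispatch_loop(pending, dispatch):
    """Dispatch tasks until the queue drains or a stop is requested.

    Tasks already handed to an executor keep running; tasks still
    queued are simply left in place for the next run.
    """
    signal.signal(signal.SIGTERM, request_stop)
    signal.signal(signal.SIGINT, request_stop)
    dispatched = []
    while pending and not stop:
        task = pending.pop(0)
        dispatch(task)           # in-flight work runs to completion elsewhere
        dispatched.append(task)
    return dispatched, pending   # leftovers are picked up on the next run
```

The key property is that cancellation is checked only at iteration boundaries, so the agent never abandons a half-dispatched task.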
Teardown: Stop containers and remove volumes after a distributed run:
```sh
make distributed-dev-down
```

If in-flight tasks were interrupted before completing, reset them before the next run:

- Set interrupted tasks back to `open` status.
- Remove stale entries from `.yolo-runner/scheduler-state.json`.
- Remove stale clone directories under `.yolo-runner/clones/<task-id>`.
Stream events to TUI for real-time monitoring:
```sh
./bin/yolo-agent --repo . --root <root-id> --stream | ./bin/yolo-tui --events-stdin
```

Save events to file while streaming:

```sh
./bin/yolo-agent --repo . --root <root-id> --stream --events "run-$(date +%Y%m%d).events.jsonl" | ./bin/yolo-tui --events-stdin
```

TDD mode with streaming:

```sh
./bin/yolo-agent --repo . --root <root-id> --tdd --stream | ./bin/yolo-tui --events-stdin
```

The TUI is decoder-safe: malformed JSONL lines are surfaced as warnings while valid events continue rendering.
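Decoder-safe consumption of a JSONL stream can be approximated like this; it is a sketch of the idea, not the TUI's actual decoder:

```python
import json

def decode_events(lines):
    """Parse JSONL lines into events, collecting warnings for
    malformed lines instead of aborting the whole stream."""
    events, warnings = [], []
    for n, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue  # blank lines are harmless
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            warnings.append(f"line {n}: malformed JSONL, skipped")
    return events, warnings
```

This matters for long-lived runs: one truncated write from a crashed executor should not blank the monitoring view.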
Connect TUI directly to the distributed bus - useful when running agent separately or monitoring remote runs:
```sh
export YOLO_TAILNET_IP=$(tailscale ip -4)
./bin/yolo-tui \
  --repo . \
  --events-bus \
  --events-bus-backend redis \
  --events-bus-address "redis://${YOLO_TAILNET_IP}:16379"
```

TUI shows:
- Task queue with pending/ready tasks
- Task graph with dependency tree and statuses
- Worker summaries (active executors)
- Run history and landing/triage outcomes
- Real-time status bar with metrics
TUI vs Web UI:
- Use `yolo-tui` for terminal-based monitoring in local or SSH sessions
- Use `yolo-webui` for browser access, remote monitoring, and sending control commands
Always commit and push ticket/config changes before starting yolo-agent.
- Required before run: commit `.tickets/*.md` and related config/code changes, then run `git push`.
- Why: each task runs in a fresh clone that syncs against `origin/main`; local-only commits are not visible in task clones.
- Symptom when skipped: runner output shows errors like `ticket '<id>' not found` in clone context.
Quick preflight:
```sh
git status --short
git push
export GITHUB_TOKEN=$(gh auth token)
./bin/yolo-agent --repo . --root <root-id> --backend codex --concurrency 3 --events "runner-logs/<run>.events.jsonl" --stream | ./bin/yolo-tui --events-stdin
```
If a run is interrupted, reset state before restarting:
- Stop `yolo-agent`.
- Move interrupted tasks back to `open`.
- Remove stale clone directories under `.yolo-runner/clones/<task-id>`.
- Remove stale `in_flight` entries from `.yolo-runner/scheduler-state.json`.
Use --runner-timeout to cap each task execution. Start with these defaults and tune for your repo/task size.
- Default behavior (flag omitted): `--runner-timeout 0s` (no hard per-runner deadline); the no-output watchdog (10m default) still prevents indefinite hangs.
- Local profile: `--runner-timeout 10m` keeps hangs bounded while still allowing normal coding loops.
- CI profile: `--runner-timeout 20m` allows slower shared runners and heavier validation steps.
- Long-task profile: `--runner-timeout 45m` for large refactors or slower model/provider backends.
Examples:
```sh
./bin/yolo-agent --repo . --root <root-id> --model openai/gpt-5.3-codex --runner-timeout 10m
./bin/yolo-agent --repo . --root <root-id> --model openai/gpt-5.3-codex --runner-timeout 20m
./bin/yolo-agent --repo . --root <root-id> --model openai/gpt-5.3-codex --runner-timeout 45m
```
yolo-agent can load defaults from the `agent:` block in `.yolo-runner/config.yaml`.
Example:
```yaml
default_profile: default
profiles:
  default:
    tracker:
      type: tk
agent:
  backend: codex
  model: openai/gpt-5.3-codex
  concurrency: 2
  runner_timeout: 20m
  watchdog_timeout: 10m
  watchdog_interval: 5s
  retry_budget: 5
```

Precedence rules:
- Backend: `--agent-backend` > `--backend` > `YOLO_AGENT_BACKEND` > `agent.backend` > `codex`
- Profile: `--profile` > `YOLO_PROFILE` > `default_profile` > `default`
- Model and numeric/duration defaults: CLI flag value wins; if unset, the `agent.*` value is used.
- Retry budget defaults to `5` per task when neither `--retry-budget` nor `agent.retry_budget` is set.
Validation rules for agent.* values:
- `agent.backend` must be one of `opencode`, `opencode-serve`, `opencode-acp`, `codex`, `codex-cli`, `claude`, `kimi`, `gemini`.
- `agent.mode` must be one of `stream`, `ui` when set; omit for headless (default: no streaming).
- `agent.concurrency` must be greater than `0`.
- `agent.runner_timeout` must be greater than or equal to `0`.
- `agent.watchdog_timeout` must be greater than `0`.
- `agent.watchdog_interval` must be greater than `0`.
- `agent.retry_budget` must be greater than or equal to `0`.
Invalid config values fail startup with field-specific errors that reference .yolo-runner/config.yaml.
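These rules can be sketched as a field-by-field check that accumulates errors rather than stopping at the first one (illustrative Python; the real validator lives in the Go binary and covers more fields):

```python
ALLOWED_BACKENDS = {"opencode", "opencode-serve", "opencode-acp",
                    "codex", "codex-cli", "claude", "kimi", "gemini"}

def validate_agent(agent):
    """Return field-specific error strings for an agent: config block."""
    errors = []
    if agent.get("backend") not in ALLOWED_BACKENDS:
        errors.append("agent.backend: must be one of "
                      + ", ".join(sorted(ALLOWED_BACKENDS)))
    if "mode" in agent and agent["mode"] not in {"stream", "ui"}:
        errors.append("agent.mode: must be stream or ui")
    if agent.get("concurrency", 1) <= 0:
        errors.append("agent.concurrency: must be greater than 0")
    if agent.get("runner_timeout", 0) < 0:
        errors.append("agent.runner_timeout: must be >= 0")
    if agent.get("retry_budget", 5) < 0:
        errors.append("agent.retry_budget: must be >= 0")
    return errors
```

Collecting every violation in one pass is what lets the CLI print all field-specific errors at once instead of forcing repeated fix-and-retry cycles.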
To use the Gemini backend:
- Ensure the `gemini` CLI is on `PATH`.
- Set `GEMINI_API_KEY` in your environment.
- Point `agent.backend` to `gemini` in `.yolo-runner/config.yaml`, or pass `--backend gemini`.
- Select an allowed model like `gemini-2.5-flash` or `gemini-2.0-pro`.
Example:
```yaml
agent:
  backend: gemini
  model: gemini-2.5-flash
```

Use `config init` to scaffold a starter config, then run `config validate` before starting longer agent runs.
Bootstrap:
```sh
./bin/yolo-agent config init --repo .
```

If the file already exists and you intentionally want to overwrite it:

```sh
./bin/yolo-agent config init --repo . --force
```

Validate in human-readable mode:

```sh
./bin/yolo-agent config validate --repo .
```

Typical success output:
```
config is valid
```
Typical failure output:
```
config is invalid
field: agent.concurrency
reason: must be greater than 0
remediation: Set agent.concurrency to an integer greater than 0 in .yolo-runner/config.yaml.
```
Machine-readable validation (for CI hooks):
```sh
./bin/yolo-agent config validate --repo . --format json
```

Troubleshooting details and additional failure/remediation cases are documented in docs/config-workflow.md.
TK (Local Markdown):
```sh
tk create "Task title" -t task -p 1
tk create "Epic title" -t epic -p 0
tk dep <task-id> <depends-on-id>   # Add dependency
tk link <task1> <task2>            # Link related tasks
```

GitHub Issues: Standard GitHub issue creation with sub-issues for hierarchy.
```yaml
---
id: unique-id
parent: parent-epic-id           # For hierarchy
deps: [dep1, dep2]               # Dependencies that block this task
status: open|in_progress|closed
type: task|epic|bug
priority: 0-4                    # 0=highest, 4=lowest
assignee: username
---
```

Full schema: docs/ticket-frontmatter-schema.md
The prompt includes:
- Bead ID and title
- Description
- Acceptance criteria
- Strict TDD rules
The runner selects work by traversing container types (epic, molecule). Containers are traversable while in `open` or `in_progress` status; leaf work is eligible only while `open`.
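The selection rule reduces to two predicates (a sketch; field names are assumed):

```python
CONTAINER_TYPES = {"epic", "molecule"}

def traversable(task):
    """Containers are walked while open or in progress."""
    return (task["type"] in CONTAINER_TYPES
            and task["status"] in {"open", "in_progress"})

def eligible_leaf(task):
    """Leaf work is runnable only while still open."""
    return (task["type"] not in CONTAINER_TYPES
            and task["status"] == "open")
```

Note the asymmetry: an `in_progress` epic is still walked so its remaining children can run, but an `in_progress` leaf is assumed to be claimed and is skipped.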
The YOLO agent must only work on the prompt provided. It must not call beads commands.
All events are emitted as JSONL (newline-delimited JSON) with consistent schema:
```json
{"type": "task_started", "task_id": "abc-123", "task_title": "...", "ts": "2026-02-22T10:00:00Z"}
{"type": "runner_output", "task_id": "abc-123", "message": "...", "ts": "2026-02-22T10:00:05Z"}
{"type": "task_finished", "task_id": "abc-123", "metadata": {"status": "completed"}, "ts": "2026-02-22T10:05:00Z"}
```

Log locations:
- Events: `runner-logs/<run-id>.events.jsonl`
- Agent output: `.yolo-runner/clones/<task-id>/runner-logs/`
- Schema: docs/logging-schema.md
Browse logs interactively:
```sh
# From saved events
./bin/yolo-tui --events-file runner-logs/run.events.jsonl

# From stdin
cat runner-logs/run.events.jsonl | ./bin/yolo-tui --events-stdin
```

Features:
- Tree view organized by epic → task
- Filter by event type
- Search messages
- View agent thoughts and tool calls
- Export filtered logs
- Event stream: `runner-logs/*.events.jsonl`
- Per-task backend logs: `.yolo-runner/clones/<task-id>/runner-logs/`
- Tail the OpenCode log: `tail -f runner-logs/opencode/opencode.log`
- Identify the current task: run `bd show <issue-id>` using the ID from the last "selected bead" line in the output
If OpenCode/Serena fails during startup you may see errors like "gopls is not installed" and the run can end up idle.
Install gopls via Go and ensure it is on PATH:
```sh
GOBIN=~/.local/bin go install golang.org/x/tools/gopls@latest
```
The runner sets XDG_CONFIG_HOME=~/.config/opencode-runner so OpenCode reads and writes config in an isolated directory instead of your default ~/.config/opencode.
If flags are added later to change the config location, use those to override the default. Otherwise inspect the effective config by checking ~/.config/opencode-runner directly or exporting a different XDG_CONFIG_HOME before running the binary.
- Create a throwaway branch and ensure the repo is clean.
- Confirm the repo-local `.opencode/` assets are installed.
- Run `./bin/yolo-agent --repo . --root <root-id> --max 1 --stream | ./bin/yolo-tui --events-stdin`.
- Inspect the resulting commit and confirm it only includes the expected task changes.
- Review the emitted event log and the per-task backend log under `.yolo-runner/clones/<task-id>/runner-logs/`.
Success looks like: the agent run finishes without errors, task status/data are updated as expected, and the logs show a complete implementation/review cycle.
After finishing a batch of tasks:
```sh
# Close completed epics
tk epic close-eligible

# Or for GitHub
gh issue list --state closed | gh issue edit <epic> --add-label "completed"

# Clean up stale clones
rm -rf .yolo-runner/clones/*
```

This keeps `tk ready` output clean and removes old working directories.
- Confirm `.opencode/agent/yolo.md` exists.
- Confirm it includes `permission: allow`.
- Confirm `.opencode/skills/task-splitting/SKILL.md` exists when task splitting is expected.
- Confirm `.opencode/commands/split-tasks.md` and `.opencode/commands/split-tasks-strict.md` exist when those commands are expected.
Cause: Ticket/config changes not pushed to origin.
Fix:
```sh
git add .tickets/*.md .yolo-runner/config.yaml
git commit -m "Add ticket/config changes"
git push
```

If a run is interrupted:
```sh
# Stop agent
pkill yolo-agent

# Reset task status
tk status <task-id> open

# Remove stale clone
rm -rf .yolo-runner/clones/<task-id>

# Clear scheduler state
# Edit .yolo-runner/scheduler-state.json and remove stale entries
```

When using `--tdd`, review may fail if:
- Production code is written before tests
- Tests don't fail first (RED phase)
- Implementation is too broad
Fix: Remove production code, keep only failing tests, retry.
Enable verbose output:
```sh
./bin/yolo-agent --repo . --root <epic> --stream --verbose 2>&1 | tee debug.log
```

If startup fails with agent/skill/command errors:
- Reinstall the repo-local `.opencode/` assets from the tracked source files in this repo.
- Confirm `.opencode/agent/yolo.md` exists and includes `permission: allow`.
- Confirm `.opencode/skills/task-splitting/SKILL.md` exists.
- Confirm `.opencode/commands/split-tasks.md` and `.opencode/commands/split-tasks-strict.md` exist.
- Re-run the agent after the OpenCode asset installation is complete.
- OpenCode is run in CI mode to avoid interactive prompts.