A unified Go library for driving LLM agent CLIs as subprocess backends.
Instead of implementing LLM API clients, agentbridge drives each vendor's official CLI tool as a subprocess and exposes a single `Backend` interface. The caller assembles the prompt; agentbridge translates a `RunRequest` into the correct CLI invocation and parses the output back into a `RunResult`.
| Provider | CLI | Thread resume | Token usage |
|---|---|---|---|
| codex | `codex` | ✓ | ✓ |
| claude | `claude` | ✓ | ✓ |
| gemini | `gemini` | ✓ | ✓ |
| kimi | `kimi` CLI | ✓ | — |
| opencode | `opencode` | ✓ | ✓ |
```sh
go get github.com/Alice-space/agentbridge
```

Zero external dependencies. Only the Go standard library is required.
- Root package: public facade (`Backend`, `RunRequest`, factory config, multi-backend routing, interactive sessions).
- `providers/<name>`: provider-specific CLI runners and parsers used by the root facade.
- `internal/`: shared implementation helpers that are not part of the public API.
```go
package main

import (
	"context"
	"fmt"

	agentbridge "github.com/Alice-space/agentbridge"
)

func main() {
	backend, err := agentbridge.NewBackend(agentbridge.FactoryConfig{
		Provider: agentbridge.ProviderClaude,
		Claude: agentbridge.ClaudeConfig{
			Command: "claude",
		},
	})
	if err != nil {
		panic(err)
	}

	result, err := backend.Run(context.Background(), agentbridge.RunRequest{
		UserText: "What is 2 + 2?",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(result.Reply)
}
```

Route requests to different CLI backends based on `RunRequest.Provider`:
```go
multi, err := agentbridge.NewMultiBackend("codex", map[string]agentbridge.Backend{
	"codex":  codexBackend,
	"claude": claudeBackend,
	"gemini": geminiBackend,
})

result, err := multi.Run(ctx, agentbridge.RunRequest{
	Provider: "claude",
	UserText: "Hello!",
})
```

All backends support resuming a previous conversation via `ThreadID`:
```go
// First turn
result, err := backend.Run(ctx, agentbridge.RunRequest{
	UserText: "Start a new task",
})

// Second turn — resume the same session
result2, err := backend.Run(ctx, agentbridge.RunRequest{
	ThreadID: result.NextThreadID,
	UserText: "Continue from where you left off",
})
```

Receive intermediate messages during a long-running codex session:
```go
result, err := backend.Run(ctx, agentbridge.RunRequest{
	UserText: "Refactor this file",
	OnProgress: func(step string) {
		if strings.HasPrefix(step, "[file_change] ") {
			fmt.Println("File changed:", strings.TrimPrefix(step, "[file_change] "))
		} else {
			fmt.Println("Agent:", step)
		}
	},
})
```

For chat surfaces that receive new user input while an agent is still running, use `NewInteractiveProviderSession` instead of killing and relaunching the CLI.
```go
session, err := agentbridge.NewInteractiveProviderSession(agentbridge.FactoryConfig{
	Provider: agentbridge.ProviderCodex,
	Codex:    agentbridge.CodexConfig{Command: "codex"},
})
if err != nil {
	panic(err)
}
defer session.Close()

_, _ = session.Submit(ctx, agentbridge.RunRequest{UserText: "Refactor this package"})

// If the Codex turn is still active, this is delivered with turn/steer instead
// of cancelling the running process.
_, _ = session.Submit(ctx, agentbridge.RunRequest{UserText: "Keep the public API unchanged"})
```

Provider behavior:
| Provider | Interactive transport | Busy-turn behavior |
|---|---|---|
| codex | `codex app-server --listen stdio://` | Native steer via `turn/steer` |
| kimi | `kimi --wire --yolo` | Native steer via Wire steer |
| opencode | `opencode serve` app-server API | Native enqueue via `session.prompt_async` |
| claude | `claude -p --input-format stream-json` | Native enqueue via streaming stdin |
| gemini | existing gemini wrapper | Queue until idle |
The library does not assemble prompts. `RunRequest.UserText` is passed directly to the CLI. The caller is responsible for constructing the final prompt (system instructions, reply tokens, etc.) before calling `Run`.
No logging. The library returns errors and lets callers decide how to log them.
Provider-specific flags (model, sandbox policy, reasoning effort, personality) are mapped to the appropriate CLI arguments by each backend.
```go
agentbridge.CodexConfig{
	Command:            "codex",          // CLI binary name or path
	Timeout:            10 * time.Minute, // overall execution timeout
	DefaultIdleTimeout: 15 * time.Minute, // idle timeout (default reasoning)
	HighIdleTimeout:    30 * time.Minute, // idle timeout for high reasoning
	XHighIdleTimeout:   60 * time.Minute, // idle timeout for xhigh reasoning
	Model:              "o4-mini",
	ReasoningEffort:    "medium",
	Env:                map[string]string{"MY_KEY": "value"},
	WorkspaceDir:       "/path/to/project",
	DefaultExecPolicy: agentbridge.ExecPolicyConfig{
		Sandbox:        "workspace-write",
		AskForApproval: "never",
	},
	ProfileOverrides: map[string]agentbridge.ProfileRunnerConfig{
		"executor": {ReasoningEffort: "xhigh"},
	},
}
```

```go
agentbridge.ClaudeConfig{
	Command:      "claude",
	Timeout:      10 * time.Minute,
	Env:          map[string]string{},
	WorkspaceDir: "/path/to/project",
	ProfileOverrides: map[string]agentbridge.ProfileRunnerConfig{
		"fast": {Command: "claude-fast"},
	},
}
```

Interactive Claude sessions use stream-json by default. Set `DisableStreamJSON: true` only as an experimental rollback to the one-shot runner.
```go
agentbridge.OpenCodeConfig{
	Command:      "opencode",
	Timeout:      10 * time.Minute,
	Model:        "anthropic/claude-sonnet-4-5",
	Variant:      "max",
	WorkspaceDir: "/path/to/project",
	// Optional: connect to an existing `opencode serve` process.
	ServerURL: "http://127.0.0.1:4096",
}
```

Interactive OpenCode sessions start `opencode serve` by default and append busy-turn input with `session.prompt_async`. Set `DisableAppServer: true` only as an experimental rollback to the one-shot `opencode run` wrapper.
MIT