This repository was archived by the owner on May 5, 2026. It is now read-only.
# agentbridge

A unified Go library for driving LLM agent CLIs as subprocess backends.

Instead of implementing LLM API clients, agentbridge drives each vendor's official CLI tool as a subprocess and exposes a single `Backend` interface. The caller assembles the prompt; agentbridge translates a `RunRequest` into the correct CLI invocation and parses the output back into a `RunResult`.

## Supported Backends

| Provider | CLI | Thread resume | Token usage |
|----------|------------|---------------|-------------|
| codex    | `codex`    | ✓             |             |
| claude   | `claude`   | ✓             |             |
| gemini   | `gemini`   | ✓             |             |
| kimi     | `kimi` CLI | ✓             |             |
| opencode | `opencode` | ✓             |             |

## Installation

```sh
go get github.com/Alice-space/agentbridge
```

Zero external dependencies. Only the Go standard library is required.

## Repository Layout

- Root package: the public facade (`Backend`, `RunRequest`, factory config, multi-backend routing, interactive sessions).
- `providers/<name>`: provider-specific CLI runners and parsers used by the root facade.
- `internal/`: shared implementation helpers that are not part of the public API.

## Quick Start

```go
package main

import (
    "context"
    "fmt"

    agentbridge "github.com/Alice-space/agentbridge"
)

func main() {
    backend, err := agentbridge.NewBackend(agentbridge.FactoryConfig{
        Provider: agentbridge.ProviderClaude,
        Claude: agentbridge.ClaudeConfig{
            Command: "claude",
        },
    })
    if err != nil {
        panic(err)
    }

    result, err := backend.Run(context.Background(), agentbridge.RunRequest{
        UserText: "What is 2 + 2?",
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(result.Reply)
}
```

## Multi-backend Routing

Route requests to different CLI backends based on `RunRequest.Provider`:

```go
multi, err := agentbridge.NewMultiBackend("codex", map[string]agentbridge.Backend{
    "codex":  codexBackend,
    "claude": claudeBackend,
    "gemini": geminiBackend,
})

result, err := multi.Run(ctx, agentbridge.RunRequest{
    Provider: "claude",
    UserText: "Hello!",
})
```

## Thread Resumption

All backends support resuming a previous conversation via `ThreadID`:

```go
// First turn
result, err := backend.Run(ctx, agentbridge.RunRequest{
    UserText: "Start a new task",
})

// Second turn — resume the same session
result2, err := backend.Run(ctx, agentbridge.RunRequest{
    ThreadID: result.NextThreadID,
    UserText: "Continue from where you left off",
})
```

## Streaming Progress

Receive intermediate messages during a long-running codex session via the `OnProgress` callback:

```go
result, err := backend.Run(ctx, agentbridge.RunRequest{
    UserText: "Refactor this file",
    OnProgress: func(step string) {
        if strings.HasPrefix(step, "[file_change] ") {
            fmt.Println("File changed:", strings.TrimPrefix(step, "[file_change] "))
        } else {
            fmt.Println("Agent:", step)
        }
    },
})
```

## Interactive Steering

For chat surfaces that receive new user input while an agent is still running, use `NewInteractiveProviderSession` instead of killing and relaunching the CLI.

```go
session, err := agentbridge.NewInteractiveProviderSession(agentbridge.FactoryConfig{
    Provider: agentbridge.ProviderCodex,
    Codex: agentbridge.CodexConfig{Command: "codex"},
})
if err != nil {
    panic(err)
}
defer session.Close()

_, _ = session.Submit(ctx, agentbridge.RunRequest{UserText: "Refactor this package"})

// If the Codex turn is still active, this is delivered with turn/steer instead
// of cancelling the running process.
_, _ = session.Submit(ctx, agentbridge.RunRequest{UserText: "Keep the public API unchanged"})
```

Provider behavior:

| Provider | Interactive transport | Busy-turn behavior |
|----------|------------------------------------------|-------------------------------------------|
| codex    | `codex app-server --listen stdio://`     | Native steer via `turn/steer`             |
| kimi     | `kimi --wire --yolo`                     | Native steer via Wire steer               |
| opencode | `opencode serve` app-server API          | Native enqueue via `session.prompt_async` |
| claude   | `claude -p --input-format stream-json`   | Native enqueue via streaming stdin        |
| gemini   | Existing `gemini` wrapper                | Queue until idle                          |

## Design

The library does not assemble prompts. `RunRequest.UserText` is passed directly to the CLI. The caller is responsible for constructing the final prompt (system instructions, reply tokens, etc.) before calling `Run`.
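
Since the library passes `UserText` through verbatim, any system instructions must be folded in by the caller first. A minimal caller-side sketch; `assemblePrompt` is a hypothetical helper, not part of agentbridge:

```go
package main

import (
	"fmt"
	"strings"
)

// assemblePrompt joins optional system instructions and the user's text
// into the single string handed to RunRequest.UserText.
func assemblePrompt(system, user string) string {
	var b strings.Builder
	if system != "" {
		b.WriteString(system)
		b.WriteString("\n\n")
	}
	b.WriteString(user)
	return b.String()
}

func main() {
	fmt.Println(assemblePrompt("Answer in one sentence.", "What is 2 + 2?"))
}
```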

No logging. The library returns errors and lets callers decide how to log them.

Provider-specific flags (model, sandbox policy, reasoning effort, personality) are mapped to the appropriate CLI arguments by each backend.
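
That mapping can be imagined as a pure function from config to argv. A hypothetical sketch — the struct fields mirror a few `CodexConfig`-style options, but the flag names below are placeholders, not the actual arguments agentbridge emits:

```go
package main

import "fmt"

// providerConfig holds a subset of illustrative config fields.
type providerConfig struct {
	Model           string
	ReasoningEffort string
	Sandbox         string
}

// buildArgs translates config fields into CLI arguments, emitting a flag
// only when the corresponding field is set.
func buildArgs(c providerConfig) []string {
	args := []string{"exec"} // placeholder subcommand
	if c.Model != "" {
		args = append(args, "--model", c.Model)
	}
	if c.ReasoningEffort != "" {
		args = append(args, "--reasoning-effort", c.ReasoningEffort)
	}
	if c.Sandbox != "" {
		args = append(args, "--sandbox", c.Sandbox)
	}
	return args
}

func main() {
	fmt.Println(buildArgs(providerConfig{Model: "o4-mini", Sandbox: "workspace-write"}))
	// [exec --model o4-mini --sandbox workspace-write]
}
```

Keeping the mapping a pure function makes each backend's invocation easy to unit-test without spawning a subprocess.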

## Configuration Reference

### CodexConfig

```go
agentbridge.CodexConfig{
    Command:            "codex",          // CLI binary name or path
    Timeout:            10 * time.Minute, // overall execution timeout
    DefaultIdleTimeout: 15 * time.Minute, // idle timeout (default reasoning)
    HighIdleTimeout:    30 * time.Minute, // idle timeout for high reasoning
    XHighIdleTimeout:   60 * time.Minute, // idle timeout for xhigh reasoning
    Model:              "o4-mini",
    ReasoningEffort:    "medium",
    Env:                map[string]string{"MY_KEY": "value"},
    WorkspaceDir:       "/path/to/project",
    DefaultExecPolicy: agentbridge.ExecPolicyConfig{
        Sandbox:        "workspace-write",
        AskForApproval: "never",
    },
    ProfileOverrides: map[string]agentbridge.ProfileRunnerConfig{
        "executor": {ReasoningEffort: "xhigh"},
    },
}
```

### ClaudeConfig / GeminiConfig / KimiConfig

```go
agentbridge.ClaudeConfig{
    Command:      "claude",
    Timeout:      10 * time.Minute,
    Env:          map[string]string{},
    WorkspaceDir: "/path/to/project",
    ProfileOverrides: map[string]agentbridge.ProfileRunnerConfig{
        "fast": {Command: "claude-fast"},
    },
}
```

Interactive Claude sessions use `stream-json` by default. Set `DisableStreamJSON: true` only as an experimental rollback to the one-shot runner.

### OpenCodeConfig

```go
agentbridge.OpenCodeConfig{
    Command:      "opencode",
    Timeout:      10 * time.Minute,
    Model:        "anthropic/claude-sonnet-4-5",
    Variant:      "max",
    WorkspaceDir: "/path/to/project",
    // Optional: connect to an existing `opencode serve` process.
    ServerURL: "http://127.0.0.1:4096",
}
```

Interactive OpenCode sessions start `opencode serve` by default and append busy-turn input with `session.prompt_async`. Set `DisableAppServer: true` only as an experimental rollback to the one-shot `opencode run` wrapper.

## License

MIT
