This repo provides an open-source Codex runtime bridge that host apps can embed. It exposes a small REST + SSE surface that applications can call for chat sessions, auth, runtime info, and streaming events. OpenAI is the primary default path, Osirus is the next featured provider, and other OpenAI-compatible backends remain supported.
Supported runtime providers, in recommended order:
- `openai`
- `osirus`
- `osirus_agent`
- `ollama`
- `vllm`
- `openai_compatible`
Many applications need Codex to fit into their own auth, UI, and workflow model. This bridge keeps the integration light: the host app owns permissions and UX, while the runtime focuses on execution and streaming.
The Codex Bridge enables open, interoperable AI editing for CMS platforms without vendor lock-in. It reduces integration time from weeks to hours, supports local or cloud models via OpenAI-compatible routes, and keeps platform-specific permissions and content workflows in the CMS where they belong. Funding will help us harden the runtime, improve performance, and deliver plug-and-play adapters for popular CMS stacks.
- `server.mjs`: bridge bootstrap and service wiring
- `app-server-client.mjs`: client utilities for the app-server adapter runtime
- `server-lib/auth-service.mjs`: device auth and login state
- `server-lib/chat-session-service.mjs`: session queueing, streaming, turn execution, CMS generation helpers
- `server-lib/request-handler.mjs`: HTTP route dispatch
- `vscode-extension/`: VS Code extension, bundled-runtime helpers, and packaging flow
- `cmd.sh`: bridge-local helper commands for runtime builds, extension packaging, and Docker workflows
```
Host App UI
  |
  v
VS Code extension or other host integration
  |
  | HTTP + SSE
  v
codex-bridge
  |
  | spawn / app-server
  v
Native Codex CLI or Codex App Server
  |
  v
Workspace reads, file edits, streamed events
```
The current VS Code path is:
- The extension resolves a Codex executable from `codexBridge.localCodexPath`, the bundled runtime, or `codex` on `PATH`.
- The extension starts `server.mjs` as the local sidecar.
- The extension sends runtime config, workspace root, and chat context to the bridge.
- The bridge launches native Codex with `workspace-write` sandboxing.
- Codex reads and edits files in the workspace, and the bridge streams those events back to the extension.
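The executable resolution order described above can be sketched as a simple fallback chain. This is only an illustration; the function and parameter names are hypothetical, not the extension's real API:

```javascript
// Hypothetical sketch of the extension's executable resolution order.
// Names here are illustrative; only the three-step order comes from this README.
function resolveCodexExecutable({ configuredPath, bundledPath, lookupOnPath }) {
  // 1. An explicit codexBridge.localCodexPath setting always wins.
  if (configuredPath) return configuredPath;
  // 2. Next, a runtime bundled with the extension.
  if (bundledPath) return bundledPath;
  // 3. Finally, fall back to a `codex` binary found on PATH.
  return lookupOnPath("codex");
}
```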
What is wired and working:
- native Codex can be built from source with `cmd.sh`
- the VS Code extension can bundle that runtime and launch it via `CODEX_BIN`
- the bridge passes the active workspace root to Codex
- chat sessions stream over SSE
- Codex can edit files in the workspace through `workspace-write`
- Codex can run workspace shell commands, including git status/history inspection, within the configured sandbox
- image attachments from the VS Code chat panel are sent through the bridge to native Codex
- provider/runtime config can be changed without changing the bridge code
What is still partial or missing:
- the chat webview is still custom/minimal, not a full React-style app shell
- there is no diff/approval UI yet before workspace edits
- non-image file attachments are not yet materialized into a richer native Codex file-ingest flow; the current bridge attachment path is image-first
- provider model pickers can influence runtime config, but not every upstream model behaves as well as native Codex for code editing
- the bridge is modular now, but `server-lib/chat-session-service.mjs` is still the biggest remaining class and could be split further
Codex Bridge now treats every backend as a codex_agent runtime, not a plain chat integration. The runtime can satisfy that contract in three different ways:
- `native_tools`: the runtime exposes Codex-style workspace tools directly, so file edits, command execution, and git inspection can happen natively
- `model_tools`: the runtime depends on the selected upstream model supporting tool use correctly; this path is model-dependent and should be treated as experimental until verified
- `bridge_tools`: the host app intends to augment the model with a bridge-side tool adapter; this is the right shape for regular chat models, but the edit adapter still needs to be implemented
The key rule is that model text is never the source of truth for workspace actions. A file edit, command run, or git inspection only counts as real when the bridge receives verified tool results from the runtime or from a future bridge-side adapter.
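That rule can be expressed as a small gate in event handling. The sketch below is illustrative only; the event shape and field names are assumptions, not the bridge's actual wire format:

```javascript
// Hypothetical gate: only verified tool results count as real workspace actions.
// The event shape and action names are assumptions for illustration.
const VERIFIABLE_ACTIONS = new Set(["file_edit", "command_run", "git_inspect"]);

function isVerifiedWorkspaceAction(event) {
  // Plain model text never counts, no matter what it claims to have done.
  if (event.type !== "tool_result") return false;
  // Only known workspace actions carrying a concrete runtime result qualify.
  return VERIFIABLE_ACTIONS.has(event.action) && event.result !== undefined;
}
```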
Both model_tools and bridge_tools ultimately depend on the same local workspace tool protocol. The difference is how tool intent reaches that local executor:
- `model_tools`: the upstream model/runtime emits structured tool-call intent directly, and the local bridge executes those actions against the real VS Code workspace
- `bridge_tools`: the upstream model does not natively produce trustworthy tool calls, so the bridge must translate model intent into the same local workspace tool protocol itself
That means the long-term architecture is still local execution in every case. The only thing that changes between model_tools and bridge_tools is where the structured tool intent comes from.
The planned local-first tool protocol covers bridge-executed actions such as:
- `list_files`
- `read_file`
- `search_text`
- `run_command`
- `git_status`
- `git_log`
- `git_diff`
- `apply_patch`
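Since both `model_tools` and `bridge_tools` funnel into the same local executor, tool intent from either source would normalize to one message shape. The protocol is still planned, so the schema below is purely an assumption:

```javascript
// Illustrative sketch of the planned local-first tool protocol.
// The actual message schema is not finalized; this shape is an assumption.
const BRIDGE_TOOLS = new Set([
  "list_files", "read_file", "search_text", "run_command",
  "git_status", "git_log", "git_diff", "apply_patch",
]);

function makeToolCall(tool, args = {}) {
  // Reject anything outside the protocol's tool surface up front.
  if (!BRIDGE_TOOLS.has(tool)) {
    throw new Error(`unknown bridge tool: ${tool}`);
  }
  // Intent from model_tools or bridge_tools would both normalize to this
  // record before the local executor touches the workspace.
  return { kind: "tool_call", tool, args };
}
```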
Fastest direct bridge run:
```sh
npm install -g @openai/codex
node server.mjs
```

The bridge listens on port 4399 by default. You can override via `CODEX_BRIDGE_PORT`.
If you do not want to rely on a globally installed codex, clone the Codex source and build a local runtime instead. The bridge and VS Code extension can use that local runtime through the helper commands below.
From the codex-bridge repo root:
```sh
./cmd.sh codexruntimebuildwin
./cmd.sh codexruntimebuildlinux
./cmd.sh codexbridgevscodebuild
./cmd.sh codexbridgevscodepackage
./cmd.sh codexbridgeup
./cmd.sh codexbridgelogs
```

Notes:

- staged local runtimes live under `tools/codex-runtime/`
- by default, `./cmd.sh codexruntimebuild*` looks for Codex source at `../codex/codex-rs`
- if your Codex source lives elsewhere, set `CODEX_SOURCE_DIR=/absolute/path/to/codex/codex-rs`
The easiest open-source layout is to check out the Codex source next to codex-bridge:
```sh
cd ..
git clone https://github.com/techcto/codex.git
```

That gives you:
.../codex-bridge
.../codex
and `./cmd.sh codexruntimebuild*` will automatically look in:

```
../codex/codex-rs
```
Example:
```sh
CODEX_SOURCE_DIR=/absolute/path/to/codex/codex-rs ./cmd.sh codexruntimebuildwin
```

Prerequisites:

- Rust toolchain with `cargo`
- for Windows builds, the MSVC Rust toolchain plus Visual Studio Build Tools with C++
Build commands:
```sh
./cmd.sh codexruntimebuildwin
./cmd.sh codexruntimebuildlinux
./cmd.sh codexruntimebuildmac
```

Those commands stage the built runtime under:

```
tools/codex-runtime/win32-x64/codex.exe
tools/codex-runtime/linux-x64/codex
tools/codex-runtime/darwin-arm64/codex
```
The VS Code extension build will automatically bundle any staged runtimes it finds under:
```
tools/codex-runtime/<platform>/codex[.exe]
```
Build and package from the bridge repo root:
```sh
./cmd.sh codexruntimebuildwin
./cmd.sh codexbridgevscodebuild
./cmd.sh codexbridgevscodepackage
```

Or directly from the extension folder:

```sh
cd vscode-extension
npm install
npm run build
```

To test in VS Code:

- Open `vscode-extension/` in VS Code.
- Press `F5` to launch an Extension Development Host.
- Use `Codex Bridge: Configure Connection`.
- Start the local bridge or let the extension auto-start it.
The extension README has the provider-specific details.
The native Codex path is:
```
cmd.sh codexruntimebuild*
  -> tools/codex-runtime/<platform>/codex[.exe]
  -> bundled into vscode-extension/bundled-runtime/<platform>/
  -> resolved by the extension
  -> passed to the bridge as CODEX_BIN
  -> launched by the bridge for chat turns
```
The bridge reports its active executable and sandbox mode through GET /runtime/info.
A CMS image or VM can copy this repository into `/opt/codex-bridge` and run:

```sh
node /opt/codex-bridge/server.mjs
```

The CMS talks to the bridge through the Codex service URL (see provider settings in your platform). For CMS environments, the preferred deployment model is a host-managed codex-bridge service with its own logs, health checks, and restart policy, separate from the CMS itself.
Common variables:
- `CODEX_BRIDGE_PORT` (default `4399`)
- `CODEX_RUNTIME_KIND` (default `app_server_adapter`)
- `CODEX_WORKSPACE_ROOT` (CMS workspace root)
- `CODEX_MAX_CONCURRENT_TURNS` (default `4`)
- `CODEX_MAX_QUEUED_TURNS` (default `40`)
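Resolving these variables with their documented defaults might look like the sketch below. The helper name is hypothetical; only the variable names and default values come from this README:

```javascript
// Hypothetical config resolver for the documented environment variables.
// Only the variable names and defaults are from the README; the rest is a sketch.
function resolveBridgeConfig(env = process.env) {
  return {
    port: Number(env.CODEX_BRIDGE_PORT ?? 4399),
    runtimeKind: env.CODEX_RUNTIME_KIND ?? "app_server_adapter",
    workspaceRoot: env.CODEX_WORKSPACE_ROOT ?? null,      // CMS workspace root
    maxConcurrentTurns: Number(env.CODEX_MAX_CONCURRENT_TURNS ?? 4),
    maxQueuedTurns: Number(env.CODEX_MAX_QUEUED_TURNS ?? 40),
  };
}
```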
Run the bridge locally with Docker Compose if you want a containerized test path:
```sh
docker compose up --build
```

This starts the bridge on port 4399 and mounts the current repo into the container. You can override environment variables in `docker-compose.yml`.
Docker is optional. The primary local dev path is running server.mjs directly, and the preferred CMS runtime path is a host-managed bridge service.
This repository is intended to be safe to publish publicly.
Local-only artifacts are gitignored, including:
- `tools/` staged runtimes
- `vscode-extension/bundled-runtime/*/codex*`
- `vscode-extension/.vscode/`
- `vscode-extension/.claude/`
- `vscode-extension/.codex`
- `*.vsix`
- local `.env*` files
Do not commit:
- API keys or bearer tokens
- local Codex runtime binaries
- VS Code workspace state
- packaged extension output
The examples in this README use placeholders like YOUR_API_KEY; replace them locally, not in committed files.
| Method | Path | Purpose |
|---|---|---|
| `GET` | `/health` | Health + auth state |
| `GET` | `/runtime/info` | Runtime metadata |
| `GET` | `/runtime/config` | Current runtime config |
| `POST` | `/runtime/config` | Update runtime config |
| `ANY` | `/v1/*` | Proxy OpenAI-compatible upstream routes |
| `POST` | `/chat/sessions` | Create a session |
| `GET` | `/chat/sessions/:id` | Read a session |
| `GET` | `/chat/sessions/:id/stream` | SSE stream |
| `POST` | `/chat/sessions/:id/messages` | Send a message |
| `DELETE` | `/chat/sessions` | Clear sessions |
| `GET` | `/auth/device` | Device auth start |
| `GET` | `/auth/status` | Auth status |
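The `/chat/sessions/:id/stream` endpoint serves standard `text/event-stream` frames, which a host client has to split into events before acting on them. A minimal frame parser sketch follows; the bridge's actual event names are not specified here, so this only handles the generic SSE framing:

```javascript
// Minimal parser for a text/event-stream buffer into { event, data } records.
// Handles only generic SSE framing; the bridge's event names are not assumed.
function parseSseChunk(text) {
  const events = [];
  // Frames are separated by a blank line.
  for (const frame of text.split("\n\n")) {
    let event = "message"; // SSE default event name
    const dataLines = [];
    for (const line of frame.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
    }
    // Frames without a data field carry no payload and are skipped.
    if (dataLines.length) events.push({ event, data: dataLines.join("\n") });
  }
  return events;
}
```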
GET /runtime/info reports:
- `runtime_kind`
- `codex_command`
- `sandbox_mode`
- `workspace_root`
- `runtime_config`
- bridge load counters
```sh
curl -s http://localhost:4399/runtime/info | jq .
```

Point the bridge at an Osirus agent-scoped compatibility route:
```sh
curl -s -X POST http://localhost:4399/runtime/config \
  -H "Content-Type: application/json" \
  -d '{
    "runtime_provider": "osirus",
    "auth_mode": "api_key",
    "provider_api_base_url": "https://example.osirus.ai/api/agents/AGENT_ID/v1",
    "provider_api_key": "YOUR_API_KEY"
  }' | jq .
```

Create a session and send a message:

```sh
curl -s -X POST http://localhost:4399/chat/sessions \
  -H "Content-Type: application/json" \
  -d '{"context":{"context_name":"Homepage","context_type":"page","context_id":"123"}}' | jq .

curl -s -X POST http://localhost:4399/chat/sessions/SESSION_ID/messages \
  -H "Content-Type: application/json" \
  -d '{"message":"Create a hero section with a call-to-action."}' | jq .
```

If bridge concurrency is saturated, `POST /chat/sessions/:id/messages` now returns either a bounded queued response or a busy error instead of allowing unbounded turn fan-out.
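A host client therefore needs to branch on more than success when posting a message. The sketch below is one way to do that; the status codes and the `queued` field are assumptions, not the bridge's documented contract:

```javascript
// Hypothetical classifier for saturated-bridge responses when posting a message.
// Status codes and the `queued` body field are assumptions, not documented behavior.
function classifyTurnResponse(status, body) {
  if (status === 200 || status === 201) return "accepted"; // turn will run now
  if (status === 202 || body?.queued) return "queued";     // bounded queue slot granted
  if (status === 429 || status === 503) return "busy";     // queue full; retry later
  return "error";
}
```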
- License: `LICENSE` (Apache 2.0)
- Security policy: `SECURITY.md`
- Changelog: `CHANGELOG.md`
- Notices: `NOTICE`
- Roadmap: `docs/roadmap.md`