A proxy server that lets CLINE and other OpenAI-compatible VS Code extensions use your ChatGPT Plus tokens from Codex authentication instead of requiring a separate OpenAI API key.
This proxy bridges the gap between:
- Input: Standard OpenAI Chat Completions API format (what CLINE expects)
- Output: ChatGPT Responses API format (what ChatGPT backend uses)
- ✅ OpenAI API Compatibility: Accepts standard OpenAI Chat Completions requests
- ✅ ChatGPT Plus Integration: Uses your existing ChatGPT Plus tokens
- ✅ Cloudflare Bypass: Handles ChatGPT's Cloudflare protection with browser-like headers
- ✅ HTTPS Support: Works with extensions requiring secure connections (via ngrok)
- ✅ Streaming Responses: Full streaming support for real-time responses
- ✅ CLINE Compatible: Tested extensively with CLINE VS Code extension
- ✅ Array Content Support: Handles both string and array message formats from the OpenAI SDK (see the sketch after this list)
- ✅ Universal Routing: Bulletproof request routing that sidesteps complex warp filter conflicts
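As an illustration of the array-content point above, here is a minimal sketch of how both shapes can be deserialized with a serde untagged enum. The type and field names (`MessageContent`, `ContentPart`) are hypothetical, not the proxy's actual structs:

```rust
use serde::Deserialize;

// Hypothetical types for illustration; the proxy's real structs may differ.
#[derive(Deserialize)]
#[serde(untagged)]
enum MessageContent {
    // "content": "Hello!"
    Text(String),
    // "content": [{"type": "text", "text": "Hello!"}]
    Parts(Vec<ContentPart>),
}

#[derive(Deserialize)]
struct ContentPart {
    #[serde(rename = "type")]
    _kind: String,
    text: Option<String>,
}

impl MessageContent {
    // Flatten either form into a single plain string.
    fn into_text(self) -> String {
        match self {
            MessageContent::Text(s) => s,
            MessageContent::Parts(parts) => {
                parts.into_iter().filter_map(|p| p.text).collect()
            }
        }
    }
}
```

Either form then reduces to plain text before conversion to the Responses API shape.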
```bash
git clone https://github.com/Securiteru/codex-openai-proxy.git
cd codex-openai-proxy
cargo build --release
./target/release/codex-openai-proxy --port 8888 --auth-path ~/.codex/auth.json
```

Most VS Code extensions require HTTPS:
```bash
# Install ngrok and create your own static domain at https://dashboard.ngrok.com/domains
# Replace 'your-static-domain' with your unique domain name
ngrok http 8888 --domain=your-static-domain.ngrok-free.app
```

Security Note: Always use your own unique ngrok domain, and do not share it publicly, to prevent unauthorized access to your proxy.
In VS Code CLINE settings:
- Base URL: `https://your-static-domain.ngrok-free.app`
- Model: `gpt-5` (or `gpt-4`)
- API Key: any value (not used, but required by the extension)
```bash
# Health check
curl https://your-static-domain.ngrok-free.app/health

# Test completion
curl -X POST https://your-static-domain.ngrok-free.app/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer test-key" \
  -d '{
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

- CLINE → Chat Completions format → Proxy
- Proxy → Converts to Responses API → ChatGPT Backend
- ChatGPT Backend → Responses API format → Proxy
- Proxy → Converts to Chat Completions → CLINE
Chat Completions Request:

```json
{
  "model": "gpt-5",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}
```

Responses API Request:

```json
{
  "model": "gpt-5",
  "instructions": "You are a helpful AI assistant.",
  "input": [
    {
      "type": "message",
      "role": "user",
      "content": [{"type": "input_text", "text": "Hello!"}]
    }
  ],
  "tools": [],
  "tool_choice": "auto",
  "store": false,
  "stream": false
}
```
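The mapping between the two formats above is mostly mechanical. Below is a simplified sketch of that conversion using `serde_json` values; it assumes plain string message content and omits the tool and streaming handling the real proxy performs (`to_responses_request` is an illustrative name, not the actual function):

```rust
use serde_json::{json, Value};

// Illustrative sketch only: map a Chat Completions body to a
// Responses API body, assuming plain string message content.
fn to_responses_request(chat: &Value) -> Value {
    let empty = Vec::new();
    let messages = chat["messages"].as_array().unwrap_or(&empty);

    // Lift each chat message into the Responses API "input" shape.
    let input: Vec<Value> = messages
        .iter()
        .map(|m| {
            json!({
                "type": "message",
                "role": m["role"].clone(),
                "content": [{ "type": "input_text", "text": m["content"].clone() }]
            })
        })
        .collect();

    json!({
        "model": chat["model"].clone(),
        "instructions": "You are a helpful AI assistant.",
        "input": input,
        "tools": [],
        "tool_choice": "auto",
        "store": false,
        "stream": chat["stream"].as_bool().unwrap_or(false)
    })
}
```

The reverse direction (Responses API back to Chat Completions) follows the same pattern.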
```
codex-openai-proxy [OPTIONS]

Options:
  -p, --port <PORT>       Port to listen on [default: 8080]
      --auth-path <PATH>  Path to Codex auth.json [default: ~/.codex/auth.json]
  -h, --help              Print help
  -v, --version           Print version
```

The proxy automatically reads authentication from your Codex auth.json file:

```json
{
  "access_token": "eyJ...",
  "account_id": "db1fc050-5df3-42c1-be65-9463d9d23f0b",
  "api_key": "sk-proj-..."
}
```

Priority: Uses `access_token` + `account_id` for ChatGPT Plus accounts, and falls back to `api_key` for standard OpenAI accounts.
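That fallback order can be captured in a small amount of code. A sketch, assuming the field names from the example above (the crate's actual `AuthData` may differ):

```rust
use serde::Deserialize;

// Mirrors the auth.json example above; illustrative, not the exact struct.
#[derive(Deserialize)]
struct AuthData {
    access_token: Option<String>,
    account_id: Option<String>,
    api_key: Option<String>,
}

enum Credentials {
    ChatGptPlus { access_token: String, account_id: String },
    OpenAiApiKey(String),
}

impl AuthData {
    // Prefer ChatGPT Plus tokens; fall back to a standard API key.
    fn credentials(self) -> Option<Credentials> {
        match (self.access_token, self.account_id, self.api_key) {
            (Some(token), Some(id), _) => Some(Credentials::ChatGptPlus {
                access_token: token,
                account_id: id,
            }),
            (_, _, Some(key)) => Some(Credentials::OpenAiApiKey(key)),
            _ => None,
        }
    }
}
```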
- `GET /health` - Returns service status
- `POST /v1/chat/completions` - OpenAI-compatible chat completions endpoint
  - Supports: `messages`, `model`, `temperature`, `max_tokens`, `stream`, `tools`
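For reference, those supported parameters correspond to a request body shaped roughly like the sketch below (illustrative names, not the proxy's real types; extra unknown fields would simply be ignored by serde's defaults):

```rust
use serde::Deserialize;
use serde_json::Value;

// Illustrative request shape for POST /v1/chat/completions.
#[derive(Deserialize)]
struct ChatCompletionRequest {
    model: String,
    messages: Vec<Message>,
    temperature: Option<f32>,
    max_tokens: Option<u32>,
    stream: Option<bool>,
    tools: Option<Vec<Value>>,
}

#[derive(Deserialize)]
struct Message {
    role: String,
    // Either a plain string or an array of parts; see the
    // MessageContent sketch earlier in this README.
    content: Value,
}
```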
Connection Refused:

```bash
# Check if proxy is running
curl http://localhost:8080/health
```

Authentication Errors:

```bash
# Verify auth.json exists and has valid tokens
cat ~/.codex/auth.json | jq .
```

Backend Errors:

```bash
# Check proxy logs for detailed error messages
RUST_LOG=debug cargo run
```

```bash
# Run with debug logging
RUST_LOG=debug cargo run -- --port 8080

# Test with verbose curl
curl -v -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5", "messages": [{"role": "user", "content": "Test"}]}'
```

```bash
cargo build
cargo test
cargo clippy
cargo fmt
```

The proxy is designed to be extensible:

- New endpoints: Add routes in `main.rs` (see the sketch after this list)
- Format conversion: Modify the conversion functions
- Authentication: Extend the `AuthData` structure
- Streaming: Add SSE support for real-time responses
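For instance, registering a new route in warp might look like the sketch below. This assumes the tokio runtime and warp's standard filters; the `/v1/models` endpoint here is hypothetical, not something the proxy currently exposes:

```rust
use warp::Filter;

#[tokio::main]
async fn main() {
    // Existing-style health check route.
    let health = warp::path!("health")
        .and(warp::get())
        .map(|| warp::reply::json(&serde_json::json!({ "status": "ok" })));

    // A hypothetical new endpoint registered alongside it.
    let models = warp::path!("v1" / "models")
        .and(warp::get())
        .map(|| warp::reply::json(&serde_json::json!({ "object": "list", "data": [] })));

    // Combine the filters and serve them on one port.
    warp::serve(health.or(models))
        .run(([127, 0, 0, 1], 8080))
        .await;
}
```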
This project is part of the Codex ecosystem and follows the same licensing as the main Codex repository.