What feature would you like to see?
Hi! I’m using code on Windows and I’m trying to connect it to Open WebUI (or any OpenAI-compatible UI). Right now I can run code app-server, but I don’t see it listening on any TCP port (verified with Get-NetTCPConnection), and I can’t find documented flags to bind a host/port or expose an HTTP API.
Could you add a simple HTTP server mode that exposes an OpenAI-compatible API, so code can be used directly as a backend (no custom wrappers)?
Request:
- A server mode, something like: `code serve --host 127.0.0.1 --port 8001` (or `app-server` with `--host`/`--port` flags)
- Endpoints:
  - `GET /v1/models`
  - `POST /v1/chat/completions` (supporting both `stream: true` via SSE and `stream: false`)
- Optional/required Bearer auth (API key)
- Session handling: respect an incoming `session_id` (or generate one) to preserve CLI session context for agentic workflows
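For reference, the streaming format OpenAI-compatible UIs consume is SSE frames of `data: {json}` lines terminated by `data: [DONE]`. A minimal sketch of the client side (hypothetical: `parse_sse_stream` and the sample frames are illustrative, not part of any existing `code` API):

```python
import json

def parse_sse_stream(lines):
    """Collect assistant text from OpenAI-style chat-completions SSE lines.

    Each event looks like: data: {"choices":[{"delta":{"content":"Hi"}}]}
    and the stream ends with: data: [DONE]
    """
    text = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)

# Example frames as the proposed server mode might emit them
frames = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(parse_sse_stream(frames))  # -> Hello
```

Emitting exactly this framing is what lets Open WebUI and similar tools work unmodified.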
This would make integration with "chat" UIs like Open WebUI and similar tools straightforward while keeping all the “agentic” CLI functionality under the hood.