AI coding assistants are incredible. They're also expensive. Every prompt you send to Claude Code or Codex carries context: your files, your history, your instructions. The token count adds up fast.
Edgee sits between your coding agent and the LLM APIs and compresses that context before it reaches the model. Same output. Fewer tokens. Lower bill.
```
Claude Code ──► edgee ──► Anthropic API
                  ↑
          token compression
            happens here
```
**macOS / Linux (curl)**

```shell
curl -fsSL https://install.edgee.ai | bash
```

**Homebrew**

```shell
brew tap edgee-ai/tap
brew install edgee
```

Then launch your agent:

```shell
edgee launch claude
```

That's it. Edgee configures itself as a gateway, and Claude Code routes through it automatically.
For Codex:

```shell
edgee launch codex
```

Token compression — Edgee analyzes your request context and removes redundancy before sending it upstream. It's lossless from the model's perspective: the response is identical, but the prompt is leaner.
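Edgee's actual compression pipeline isn't documented here, but the intuition behind lossless redundancy removal can be sketched. The function below is a hypothetical illustration, not Edgee's implementation: it drops exact-duplicate context blocks, which the model never needed to see twice, so the response is unaffected while the prompt shrinks.

```rust
use std::collections::HashSet;

/// Hypothetical sketch of lossless redundancy removal: keep only the first
/// occurrence of each blank-line-separated context block. (Edgee's real
/// compression is more sophisticated; this only shows why deduplication
/// can be lossless.)
fn dedup_blocks(prompt: &str) -> String {
    let mut seen = HashSet::new();
    prompt
        .split("\n\n") // treat blank-line-separated blocks as units
        .filter(|block| seen.insert(block.trim().to_string()))
        .collect::<Vec<_>>()
        .join("\n\n")
}

fn main() {
    // A coding-agent prompt that includes the same file twice.
    let prompt = "fn add(a: i32, b: i32) -> i32 { a + b }\n\n\
                  Please review this file.\n\n\
                  fn add(a: i32, b: i32) -> i32 { a + b }";
    let compressed = dedup_blocks(prompt);
    assert!(compressed.len() < prompt.len()); // fewer bytes, same information
    println!("{compressed}");
}
```

Real-world prompts repeat far more than whole files (boilerplate instructions, re-sent history), which is where a gateway sitting in front of every request can keep trimming.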
Usage tracking — see how many tokens you're sending, how many you're saving, and what that costs, all in real time.
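The savings arithmetic behind that tracking is straightforward. A minimal sketch, using a placeholder per-million-token price (not Anthropic's actual rate):

```rust
/// Illustrative only: dollars saved when `tokens_before` is compressed
/// down to `tokens_after`, at a given input price per million tokens.
fn savings_usd(tokens_before: u64, tokens_after: u64, usd_per_mtok: f64) -> f64 {
    (tokens_before - tokens_after) as f64 / 1_000_000.0 * usd_per_mtok
}

fn main() {
    // e.g. a 40k-token prompt compressed to 28k at a hypothetical $3/Mtok
    let saved = savings_usd(40_000, 28_000, 3.0);
    println!("saved ≈ ${saved:.3} on this request");
}
```

Small per-request numbers, but a busy coding agent sends hundreds of requests a day, so the per-request delta is what the dashboard aggregates.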
| Tool | Setup command | Status |
|---|---|---|
| Claude Code | `edgee launch claude` | ✅ Supported |
| Codex | `edgee launch codex` | ✅ Supported |
| Opencode | `edgee launch opencode` | ✅ Supported |
| Cursor | `edgee launch cursor` | 🔜 Coming soon |
Edgee is Apache 2.0 licensed and we genuinely want your contributions.
```shell
git clone https://github.com/edgee-ai/edgee
cd edgee
cargo build
```

See CONTRIBUTING.md for the full guide. For bigger changes, open an issue first so we can align before you build.
- Discord — fastest way to get help
- GitHub Issues — bugs and feature requests
- Twitter / X — updates and releases