diff --git a/.gitignore b/.gitignore index b086116945..32ebbf74dd 100644 --- a/.gitignore +++ b/.gitignore @@ -50,3 +50,5 @@ _bmad-output/* # macOS .DS_Store ._* +.gocache/ +.porting/ diff --git a/.goreleaser.yml b/.goreleaser.yml index f8bebfc1d9..c479255eaf 100644 --- a/.goreleaser.yml +++ b/.goreleaser.yml @@ -19,6 +19,8 @@ builds: archives: - id: "cli-proxy-api" format: tar.gz + name_template: >- + {{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{- if eq .Arch "arm64" -}}aarch64{{- else -}}{{ .Arch }}{{- end -}} format_overrides: - goos: windows format: zip diff --git a/Dockerfile b/Dockerfile index 3e10c4f9f8..b4caaee325 100644 --- a/Dockerfile +++ b/Dockerfile @@ -14,7 +14,7 @@ ARG BUILD_DATE=unknown RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w -X 'main.Version=${VERSION}' -X 'main.Commit=${COMMIT}' -X 'main.BuildDate=${BUILD_DATE}'" -o ./CLIProxyAPI ./cmd/server/ -FROM alpine:3.22.0 +FROM alpine:3.23 RUN apk add --no-cache tzdata @@ -32,4 +32,4 @@ ENV TZ=Asia/Shanghai RUN cp /usr/share/zoneinfo/${TZ} /etc/localtime && echo "${TZ}" > /etc/timezone -CMD ["./CLIProxyAPI"] \ No newline at end of file +CMD ["./CLIProxyAPI"] diff --git a/README.md b/README.md index 53acdd5178..8064db7d77 100644 --- a/README.md +++ b/README.md @@ -10,23 +10,19 @@ So you can use local or multi-account CLI access with OpenAI(include Responses)/ ## Sponsor -[![z.ai](https://assets.router-for.me/english-5-0.jpg)](https://z.ai/subscribe?ic=8JVLJQFSKB) +[![https://www.packyapi.com/register?aff=cliproxyapi](./assets/packycode-en.png)](https://www.packyapi.com/register?aff=cliproxyapi) -This project is sponsored by Z.ai, supporting us with their GLM CODING PLAN. +Thanks to PackyCode for sponsoring this project! -GLM CODING PLAN is a subscription service designed for AI coding, starting at just $10/month. 
It provides access to their flagship GLM-4.7 & (GLM-5 Only Available for Pro Users)model across 10+ popular AI coding tools (Claude Code, Cline, Roo Code, etc.), offering developers top-tier, fast, and stable coding experiences. +PackyCode is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. -Get 10% OFF GLM CODING PLAN:https://z.ai/subscribe?ic=8JVLJQFSKB +PackyCode provides special discounts for our software users: register using this link and enter the "cliproxyapi" promo code during recharge to get 10% off. --- - - - - @@ -35,12 +31,10 @@ Get 10% OFF GLM CODING PLAN:https://z.ai/subscribe?ic=8JVLJQFSKB - - - - - - + +
PackyCodeThanks to PackyCode for sponsoring this project! PackyCode is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. PackyCode provides special discounts for our software users: register using this link and enter the "cliproxyapi" promo code during recharge to get 10% off.
AICodeMirror Thanks to AICodeMirror for sponsoring this project! AICodeMirror provides official high-stability relay services for Claude Code / Codex / Gemini CLI, with enterprise-grade concurrency, fast invoicing, and 24/7 dedicated technical support. Claude Code / Codex / Gemini official channels at 38% / 2% / 9% of original price, with extra discounts on top-ups! AICodeMirror offers special benefits for CLIProxyAPI users: register via this link to enjoy 20% off your first top-up, and enterprise customers can get up to 25% off!
Huge thanks to BmoPlus for sponsoring this project! BmoPlus is a highly reliable AI account provider built strictly for heavy AI users and developers. They offer rock-solid, ready-to-use accounts and official top-up services for ChatGPT Plus / ChatGPT Pro (Full Warranty) / Claude Pro / Super Grok / Gemini Pro. By registering and ordering through BmoPlus - Premium AI Accounts & Top-ups, users can unlock the mind-blowing rate of 10% of the official GPT subscription price (90% OFF)!
LingtrueAPIThanks to LingtrueAPI for sponsoring this project! LingtrueAPI is a global large-model API intermediary service platform that provides API calling services for various top-notch models such as Claude Code, Codex, and Gemini. It is committed to enabling users to connect to global AI capabilities at low cost and with high stability. LingtrueAPI offers special discounts to users of this software: register using this link, and enter the promo code "LingtrueAPI" when making the first recharge to enjoy a 10% discount.
PoixeAIThanks to Poixe AI for sponsoring this project! Poixe AI provides reliable LLM API services. You can leverage the platform's API endpoints to seamlessly build AI-powered products. Additionally, you can become a vendor by providing AI API resources to the platform and earn revenue. Register through the exclusive CLIProxyAPI referral link and receive a bonus of $5 USD on your first top-up.VisionCoderThanks to VisionCoder for supporting this project. VisionCoder Developer Platform is a reliable and efficient API relay service provider, offering access to mainstream AI models such as Claude Code, Codex, and Gemini. It helps developers and teams integrate AI capabilities more easily and improve productivity. +

+VisionCoder is also offering our users a limited-time Token Plan promotion: buy 1 month and get 1 month free.
@@ -51,7 +45,7 @@ Get 10% OFF GLM CODING PLAN:https://z.ai/subscribe?ic=8JVLJQFSKB - OpenAI Codex support (GPT models) via OAuth login - Claude Code support via OAuth login - Amp CLI and IDE extensions support with provider routing -- Streaming and non-streaming responses +- Streaming, non-streaming, and WebSocket responses where supported - Function calling/tools support - Multimodal input support (text and images) - Multiple accounts with round-robin load balancing (Gemini, OpenAI, Claude) @@ -72,6 +66,22 @@ CLIProxyAPI Guides: [https://help.router-for.me/](https://help.router-for.me/) see [MANAGEMENT_API.md](https://help.router-for.me/management/api) +## Usage Statistics + +Since v6.10.0, CLIProxyAPI and [CPAMC](https://github.com/router-for-me/Cli-Proxy-API-Management-Center) no longer ship built-in usage statistics. If you need usage statistics, use: + +### [CPA Usage Keeper](https://github.com/Willxup/cpa-usage-keeper) + +Standalone persistence and visualization service for CLIProxyAPI, with periodic data sync, SQLite storage, aggregate APIs, and a built-in dashboard for usage and statistics. + +### [CLIProxyAPI Usage Dashboard](https://github.com/zhanglunet/cliproxyapi-usage-dashboard) + +Local-first usage and quota dashboard for CLIProxyAPI. It collects per-request token usage from the Redis-compatible usage queue into SQLite, visualizes daily and recent-window usage by account and model, and shows Codex 5h/7d quota remaining in a local web UI. + +### [CPA-Manager](https://github.com/seakee/CPA-Manager) + +Full CLIProxyAPI management center with request-level monitoring and cost estimates. 
CPA-Manager tracks collected requests by account, model, channel, latency, status, and token usage; estimates cost with editable model prices and one-click LiteLLM price sync; persists events in SQLite; and provides Codex account-pool operations with batch inspection, quota detection, unhealthy account discovery, cleanup suggestions, and one-click execution for day-to-day multi-account maintenance. + ## Amp CLI Support CLIProxyAPI includes integrated support for [Amp CLI](https://ampcode.com) and Amp IDE extensions, enabling you to use your Google/ChatGPT/Claude OAuth subscriptions with Amp's coding tools: @@ -120,7 +130,7 @@ Native macOS menu bar app to use your Claude Code & ChatGPT subscriptions with A ### [Subtitle Translator](https://github.com/VjayC/SRT-Subtitle-Translator-Validator) -Browser-based tool to translate SRT subtitles using your Gemini subscription via CLIProxyAPI with automatic validation/error correction - no API keys needed +A cross-platform desktop and web app to translate and validate SRT subtitles using your existing LLM subscriptions (Gemini, ChatGPT, Claude, etc.) via CLIProxyAPI - no API keys needed. ### [CCS (Claude Code Switch)](https://github.com/kaitranntt/ccs) @@ -181,6 +191,14 @@ Cross-platform desktop app (macOS, Windows, Linux) wrapping CLIProxyAPI with a n Ready-to-use cross-platform quota inspector for CLIProxyAPI, supporting per-account codex 5h/7d quota windows, plan-based sorting, status coloring, and multi-account summary analytics. +### [CodexCliPlus](https://github.com/C4AL/CodexCliPlus) + +Windows-focused, local-first desktop management platform for Codex CLI built on CLIProxyAPI, focused on simplifying local setup, account and runtime management, and providing a more complete Codex CLI experience for local users. + +### [CLIProxy Pool Watch](https://github.com/murasame612/CLIProxyPoolWidget) + +Native macOS SwiftUI app for monitoring ChatGPT/Codex account quotas in CLIProxyAPI pools. 
Displays account availability, Plus-base capacity, 5-hour and weekly quota bars, plan weights, and restore forecasts through the Management API. + > [!NOTE] > If you developed a project based on CLIProxyAPI, please open a PR to add it to this list. @@ -198,6 +216,10 @@ Never stop coding. Smart routing to FREE & low-cost AI models with automatic fal OmniRoute is an AI gateway for multi-provider LLMs: an OpenAI-compatible endpoint with smart routing, load balancing, retries, and fallbacks. Add policies, rate limits, caching, and observability for reliable, cost-aware inference. +### [Playful Proxy API Panel (PPAP)](https://github.com/daishuge/playful-proxy-api-panel) + +A public CLIProxyAPI-compatible fork and bundled management panel. It keeps upstream-style usage while restoring built-in usage statistics, adding cache hit rate, first-byte latency, TPS tracking, and Docker-oriented self-hosted installation docs. + > [!NOTE] > If you have developed a port of CLIProxyAPI or a project inspired by it, please open a PR to add it to this list. diff --git a/README_CN.md b/README_CN.md index 86ea954209..c912eb47a1 100644 --- a/README_CN.md +++ b/README_CN.md @@ -10,37 +10,31 @@ ## 赞助商 -[![bigmodel.cn](https://assets.router-for.me/chinese-5-0.jpg)](https://www.bigmodel.cn/claude-code?ic=RRVJPB5SII) +[![https://www.packyapi.com/register?aff=cliproxyapi](./assets/packycode-cn.png)](https://www.packyapi.com/register?aff=cliproxyapi) -本项目由 Z智谱 提供赞助, 他们通过 GLM CODING PLAN 对本项目提供技术支持。 +感谢 PackyCode 对本项目的赞助! -GLM CODING PLAN 是专为AI编码打造的订阅套餐,每月最低仅需20元,即可在十余款主流AI编码工具如 Claude Code、Cline、Roo Code 中畅享智谱旗舰模型GLM-4.7(受限于算力,目前仅限Pro用户开放),为开发者提供顶尖的编码体验。 +PackyCode 是一家可靠高效的 API 中转服务商,提供 Claude Code、Codex、Gemini 等多种服务的中转。 -智谱AI为本产品提供了特别优惠,使用以下链接购买可以享受九折优惠:https://www.bigmodel.cn/claude-code?ic=RRVJPB5SII +PackyCode 为本软件用户提供了特别优惠:使用此链接注册,并在充值时输入 "cliproxyapi" 优惠码即可享受九折优惠。 --- - - - - - + - + - - - - - - + +
PackyCode感谢 PackyCode 对本项目的赞助!PackyCode 是一家可靠高效的 API 中转服务商,提供 Claude Code、Codex、Gemini 等多种服务的中转。PackyCode 为本软件用户提供了特别优惠:使用此链接注册,并在充值时输入 "cliproxyapi" 优惠码即可享受九折优惠。
AICodeMirror感谢 AICodeMirror 赞助了本项目!AICodeMirror 提供 Claude Code / Codex / Gemini CLI 官方高稳定中转服务,支持企业级高并发、极速开票、7×24 专属技术支持。 Claude Code / Codex / Gemini 官方渠道低至 3.8 / 0.2 / 0.9 折,充值更有折上折!AICodeMirror 为 CLIProxyAPI 的用户提供了特别福利,通过此链接注册的用户,可享受首充8折,企业客户最高可享 7.5 折!
BmoPlus感谢 BmoPlus 赞助了本项目!BmoPlus 是一家专为AI订阅重度用户打造的可靠 AI 账号代充服务商,提供稳定的 ChatGPT Plus / ChatGPT Pro(全程质保) / Claude Pro / Super Grok / Gemini Pro 的官方代充&成品账号。 通过BmoPlus AI成品号专卖/代充注册下单的用户,可享GPT 官网订阅一折 的震撼价格!
LingtrueAPI感谢 LingtrueAPI 对本项目的赞助!LingtrueAPI 是一家全球大模型API中转服务平台,提供Claude Code、Codex、Gemini 等多种顶级模型API调用服务,致力于让用户以低成本、高稳定性链接全球AI能力。LingtrueAPI为本软件用户提供了特别优惠:使用此链接注册,并在首次充值时输入 "LingtrueAPI" 优惠码即可享受9折优惠。
PoixeAI感谢 Poixe AI 对本项目的赞助!Poixe AI 提供可靠的 AI 模型接口服务,您可以使用平台提供的 LLM API 接口轻松构建 AI 产品,同时也可以成为供应商,为平台提供大模型资源以赚取收益。通过 CLIProxyAPI 专属链接注册,充值额外赠送 $5 美金VisionCoder感谢 VisionCoder 对本项目的支持。VisionCoder 开发平台 是一个可靠高效的 API 中继服务提供商,提供 Claude Code、Codex、Gemini 等主流 AI 模型,帮助开发者和团队更轻松地集成 AI 功能,提升工作效率。 +

+VisionCoder 还为我们的用户提供 Token Plan 限时活动:购买 1 个月,赠送 1 个月。
@@ -51,7 +45,7 @@ GLM CODING PLAN 是专为AI编码打造的订阅套餐,每月最低仅需20元 - 为 CLI 模型提供 OpenAI/Gemini/Claude/Codex 兼容的 API 端点 - 新增 OpenAI Codex(GPT 系列)支持(OAuth 登录) - 新增 Claude Code 支持(OAuth 登录) -- 支持流式与非流式响应 +- 支持流式、非流式响应,以及受支持场景下的 WebSocket 响应 - 函数调用/工具支持 - 多模态输入(文本、图片) - 多账户支持与轮询负载均衡(Gemini、OpenAI、Claude) @@ -72,6 +66,22 @@ CLIProxyAPI 用户手册: [https://help.router-for.me/](https://help.router-fo 请参见 [MANAGEMENT_API_CN.md](https://help.router-for.me/cn/management/api) +## 使用量统计 + +自v6.10.0版本以后,CLIProxyAPI及 [CPAMC](https://github.com/router-for-me/Cli-Proxy-API-Management-Center) 项目不再预置数据统计功能,如果有数据统计需求的请使用以下项目: + +### [CPA Usage Keeper](https://github.com/Willxup/cpa-usage-keeper) + +独立的 CLIProxyAPI 使用量持久化与可视化服务,定期同步 CLIProxyAPI 数据,存储到 SQLite,提供聚合 API,并内置使用量分析与统计仪表盘。 + +### [CLIProxyAPI Usage Dashboard](https://github.com/zhanglunet/cliproxyapi-usage-dashboard) + +面向 CLIProxyAPI 的本地优先使用量与配额看板。它从 Redis 兼容使用量队列采集每次请求的 Token 消耗并写入 SQLite,按账号和模型可视化每日及最近时间窗口的用量,并在本地网页中显示 Codex 5h/7d 配额余量。 + +### [CPA-Manager](https://github.com/seakee/CPA-Manager) + +面向 CLIProxyAPI 的完整管理中心,提供请求级监控和费用预估。CPA-Manager 可按账号、模型、渠道、延迟、状态和 token 用量追踪采集到的请求;支持可编辑模型价格与一键同步 LiteLLM 价格来估算费用;用 SQLite 持久化事件;并提供面向 Codex 账号池的批量巡检、配额识别、异常账号定位、清理建议与一键执行能力,适合多账号池的日常运维管理。 + ## Amp CLI 支持 CLIProxyAPI 已内置对 [Amp CLI](https://ampcode.com) 和 Amp IDE 扩展的支持,可让你使用自己的 Google/ChatGPT/Claude OAuth 订阅来配合 Amp 编码工具: @@ -119,7 +129,7 @@ CLIProxyAPI 已内置对 [Amp CLI](https://ampcode.com) 和 Amp IDE 扩展的支 ### [Subtitle Translator](https://github.com/VjayC/SRT-Subtitle-Translator-Validator) -一款基于浏览器的 SRT 字幕翻译工具,可通过 CLI 代理 API 使用您的 Gemini 订阅。内置自动验证与错误修正功能,无需 API 密钥。 +一款跨平台的桌面和 Web 应用程序,可通过 CLIProxyAPI 使用您现有的 LLM 订阅(Gemini、ChatGPT、Claude, etc.)来翻译和验证 SRT 字幕 - 无需 API 密钥。 ### [CCS (Claude Code Switch)](https://github.com/kaitranntt/ccs) @@ -177,6 +187,14 @@ Shadow AI 是一款专为受限环境设计的 AI 辅助工具。提供无窗口 上手即用的面向 CLIProxyAPI 跨平台配额查询工具,支持按账号展示 codex 5h/7d 配额窗口、按计划排序、状态着色及多账号汇总分析。 +### [CodexCliPlus](https://github.com/C4AL/CodexCliPlus) + +基于 CLIProxyAPI 的 
Windows Codex CLI 本地优先桌面管理平台,聚焦简化本机配置、账号与运行状态管理,并为本地用户提供更完整的 Codex CLI 使用体验。 + +### [CLIProxy Pool Watch](https://github.com/murasame612/CLIProxyPoolWidget) + +原生 macOS SwiftUI 应用,用于监控 CLIProxyAPI 池中的 ChatGPT/Codex 账号额度。通过 Management API 展示账号可用状态、Plus 基准容量、5 小时与周额度进度条、套餐权重和恢复预测。 + > [!NOTE] > 如果你开发了基于 CLIProxyAPI 的项目,请提交一个 PR(拉取请求)将其添加到此列表中。 @@ -194,6 +212,10 @@ Shadow AI 是一款专为受限环境设计的 AI 辅助工具。提供无窗口 OmniRoute 是一个面向多供应商大语言模型的 AI 网关:它提供兼容 OpenAI 的端点,具备智能路由、负载均衡、重试及回退机制。通过添加策略、速率限制、缓存和可观测性,确保推理过程既可靠又具备成本意识。 +### [Playful Proxy API Panel (PPAP)](https://github.com/daishuge/playful-proxy-api-panel) + +一个公开的 CLIProxyAPI 兼容二开版本和配套管理面板,尽量保持与上游一致的使用方式,同时恢复内置使用量统计,并补充缓存命中率、首字响应时间、TPS 记录和面向 Docker 自托管的安装说明。 + > [!NOTE] > 如果你开发了 CLIProxyAPI 的移植或衍生项目,请提交 PR 将其添加到此列表中。 diff --git a/README_JA.md b/README_JA.md index 8c34325b49..ba96c3c1e5 100644 --- a/README_JA.md +++ b/README_JA.md @@ -10,23 +10,19 @@ OAuth経由でOpenAI Codex(GPTモデル)およびClaude Codeもサポート ## スポンサー -[![z.ai](https://assets.router-for.me/english-5-0.jpg)](https://z.ai/subscribe?ic=8JVLJQFSKB) +[![https://www.packyapi.com/register?aff=cliproxyapi](./assets/packycode-en.png)](https://www.packyapi.com/register?aff=cliproxyapi) -本プロジェクトはZ.aiにスポンサーされており、GLM CODING PLANの提供を受けています。 +PackyCodeのスポンサーシップに感謝します! -GLM CODING PLANはAIコーディング向けに設計されたサブスクリプションサービスで、月額わずか$10から利用可能です。フラッグシップのGLM-4.7および(GLM-5はProユーザーのみ利用可能)モデルを10以上の人気AIコーディングツール(Claude Code、Cline、Roo Codeなど)で利用でき、開発者にトップクラスの高速かつ安定したコーディング体験を提供します。 +PackyCodeは信頼性が高く効率的なAPIリレーサービスプロバイダーで、Claude Code、Codex、Geminiなどのリレーサービスを提供しています。 -GLM CODING PLANを10%割引で取得:https://z.ai/subscribe?ic=8JVLJQFSKB +PackyCodeは当ソフトウェアのユーザーに特別割引を提供しています:こちらのリンクから登録し、チャージ時にプロモーションコード「cliproxyapi」を入力すると10%割引になります。 --- - - - - @@ -35,12 +31,8 @@ GLM CODING PLANを10%割引で取得:https://z.ai/subscribe?ic=8JVLJQFSKB - - - - - - + +
PackyCodePackyCodeのスポンサーシップに感謝します!PackyCodeは信頼性が高く効率的なAPIリレーサービスプロバイダーで、Claude Code、Codex、Geminiなどのリレーサービスを提供しています。PackyCodeは当ソフトウェアのユーザーに特別割引を提供しています:こちらのリンクから登録し、チャージ時にプロモーションコード「cliproxyapi」を入力すると10%割引になります。
AICodeMirror AICodeMirrorのスポンサーシップに感謝します!AICodeMirrorはClaude Code / Codex / Gemini CLI向けの公式高安定性リレーサービスを提供しており、エンタープライズグレードの同時接続、迅速な請求書発行、24時間365日の専任技術サポートを備えています。Claude Code / Codex / Geminiの公式チャネルが元の価格の38% / 2% / 9%で利用でき、チャージ時にはさらに割引があります!CLIProxyAPIユーザー向けの特別特典:こちらのリンクから登録すると、初回チャージが20%割引になり、エンタープライズのお客様は最大25%割引を受けられます!
本プロジェクトにご支援いただいた BmoPlus に感謝いたします!BmoPlusは、AIサブスクリプションのヘビーユーザー向けに特化した信頼性の高いAIアカウントサービスプロバイダーであり、安定した ChatGPT Plus / ChatGPT Pro (完全保証) / Claude Pro / Super Grok / Gemini Pro の公式代行チャージおよび即納アカウントを提供しています。こちらのBmoPlus AIアカウント専門店/代行チャージ経由でご登録・ご注文いただいたユーザー様は、GPTを 公式サイト価格の約1割(90% OFF) という驚異的な価格でご利用いただけます!
LingtrueAPILingtrueAPIのスポンサーシップに感謝します!LingtrueAPIはグローバルな大規模モデルAPIリレーサービスプラットフォームで、Claude Code、Codex、GeminiなどのトップモデルAPI呼び出しサービスを提供し、ユーザーが低コストかつ高い安定性で世界中のAI能力に接続できるよう支援しています。LingtrueAPIは本ソフトウェアのユーザーに特別割引を提供しています:こちらのリンクから登録し、初回チャージ時にプロモーションコード「LingtrueAPI」を入力すると10%割引になります。
PoixeAIPoixe AIのスポンサーシップに感謝します!Poixe AIは信頼できるAIモデルAPIサービスを提供しており、プラットフォームが提供するLLM APIを使って簡単にAI製品を構築できます。また、サプライヤーとしてプラットフォームに大規模モデルのリソースを提供し、収益を得ることも可能です。CLIProxyAPIの専用リンクから登録すると、チャージ時に追加で$5が付与されます。VisionCoderVisionCoderのご支援に感謝します!VisionCoder 開発プラットフォーム は、信頼性が高く効率的なAPIリレーサービスプロバイダーで、Claude Code、Codex、Geminiなどの主要AIモデルを提供し、開発者やチームがより簡単にAI機能を統合して生産性を向上できるよう支援します。さらに、VisionCoderはユーザー向けに Token Plan の期間限定キャンペーン(1か月購入で1か月分プレゼント)も提供しています。
@@ -51,7 +43,7 @@ GLM CODING PLANを10%割引で取得:https://z.ai/subscribe?ic=8JVLJQFSKB - OAuthログインによるOpenAI Codexサポート(GPTモデル) - OAuthログインによるClaude Codeサポート - プロバイダールーティングによるAmp CLIおよびIDE拡張機能のサポート -- ストリーミングおよび非ストリーミングレスポンス +- ストリーミング、非ストリーミング、および対応環境でのWebSocketレスポンス - 関数呼び出し/ツールのサポート - マルチモーダル入力サポート(テキストと画像) - ラウンドロビン負荷分散による複数アカウント対応(Gemini、OpenAI、Claude) @@ -72,6 +64,22 @@ CLIProxyAPIガイド:[https://help.router-for.me/](https://help.router-for.me/ [MANAGEMENT_API.md](https://help.router-for.me/management/api)を参照 +## 使用量統計 + +v6.10.0以降、CLIProxyAPIおよび [CPAMC](https://github.com/router-for-me/Cli-Proxy-API-Management-Center) プロジェクトには使用量統計機能がプリセットされなくなりました。使用量統計が必要な場合は、次のプロジェクトをご利用ください: + +### [CPA Usage Keeper](https://github.com/Willxup/cpa-usage-keeper) + +CLIProxyAPI向けの独立した使用量永続化・可視化サービス。CLIProxyAPIデータを定期同期してSQLiteに保存し、集計APIと、使用量や各種統計を確認できる組み込みダッシュボードを提供します。 + +### [CLIProxyAPI Usage Dashboard](https://github.com/zhanglunet/cliproxyapi-usage-dashboard) + +CLIProxyAPI向けのローカル優先の使用量・クォータダッシュボード。Redis互換の使用量キューからリクエストごとのToken使用量を収集してSQLiteに保存し、アカウント別・モデル別の日次および直近時間枠の使用量を可視化し、Codex 5h/7dクォータ残量をローカルWeb UIで表示します。 + +### [CPA-Manager](https://github.com/seakee/CPA-Manager) + +リクエスト単位の監視とコスト推定を備えたCLIProxyAPI向けのフル管理センターです。CPA-Managerは、収集したリクエストをアカウント、モデル、チャネル、レイテンシ、ステータス、Token使用量ごとに追跡し、編集可能なモデル価格とLiteLLM価格のワンクリック同期でコストを推定します。SQLiteでイベントを永続化し、Codexアカウントプール向けに一括検査、クォータ判定、異常アカウント検出、クリーンアップ提案、ワンクリック実行を提供し、日常的なマルチアカウント運用に適しています。 + ## Amp CLIサポート CLIProxyAPIは[Amp CLI](https://ampcode.com)およびAmp IDE拡張機能の統合サポートを含んでおり、Google/ChatGPT/ClaudeのOAuthサブスクリプションをAmpのコーディングツールで使用できます: @@ -120,7 +128,7 @@ macOSネイティブのメニューバーアプリで、Claude CodeとChatGPTの ### [Subtitle Translator](https://github.com/VjayC/SRT-Subtitle-Translator-Validator) -CLIProxyAPI経由でGeminiサブスクリプションを使用してSRT字幕を翻訳するブラウザベースのツール。自動検証/エラー修正機能付き - APIキー不要 +CLIProxyAPI経由で既存のLLMサブスクリプション(Gemini、ChatGPT、Claude, etc.)を使用してSRT字幕を翻訳および検証する、クロスプラットフォームのデスクトップおよびWebアプリ - APIキー不要。 ### [CCS (Claude Code Switch)](https://github.com/kaitranntt/ccs) @@ -178,6 
+186,14 @@ CLIProxyAPIをネイティブGUIでラップしたクロスプラットフォー CLIProxyAPI向けのすぐに使えるクロスプラットフォームのクォータ確認ツール。アカウントごとの codex 5h/7d クォータ表示、プラン別ソート、ステータス色分け、複数アカウントの集計分析に対応。 +### [CodexCliPlus](https://github.com/C4AL/CodexCliPlus) + +CLIProxyAPIを基盤にしたWindows向けのローカル優先Codex CLIデスクトップ管理プラットフォーム。ローカル設定、アカウント、実行状態の管理を簡素化し、ローカルユーザーにより包括的なCodex CLI体験を提供します。 + +### [CLIProxy Pool Watch](https://github.com/murasame612/CLIProxyPoolWidget) + +CLIProxyAPIプール内のChatGPT/Codexアカウントクォータを監視するmacOSネイティブSwiftUIアプリ。Management APIを通じて、アカウントの可用性、Plus基準の容量、5時間/週次クォータバー、プラン重み、復元予測を表示します。 + > [!NOTE] > CLIProxyAPIをベースにプロジェクトを開発した場合は、PRを送ってこのリストに追加してください。 @@ -195,6 +211,10 @@ CLIProxyAPIに触発されたNext.js実装。インストールと使用が簡 OmniRouteはマルチプロバイダーLLM向けのAIゲートウェイです:スマートルーティング、負荷分散、リトライ、フォールバックを備えたOpenAI互換エンドポイント。ポリシー、レート制限、キャッシュ、可観測性を追加して、信頼性が高くコストを意識した推論を実現します。 +### [Playful Proxy API Panel (PPAP)](https://github.com/daishuge/playful-proxy-api-panel) + +上流に近い使い方を維持する公開CLIProxyAPI互換フォーク兼管理パネルです。内蔵の使用量統計を復元し、キャッシュヒット率、初回バイト待ち時間、TPSの記録、Docker向けのセルフホスト手順を追加しています。 + > [!NOTE] > CLIProxyAPIの移植版またはそれに触発されたプロジェクトを開発した場合は、PRを送ってこのリストに追加してください。 diff --git a/assets/packycode-cn.png b/assets/packycode-cn.png new file mode 100644 index 0000000000..3e34d6caed Binary files /dev/null and b/assets/packycode-cn.png differ diff --git a/assets/packycode-en.png b/assets/packycode-en.png new file mode 100644 index 0000000000..90f716e2a4 Binary files /dev/null and b/assets/packycode-en.png differ diff --git a/assets/visioncoder.png b/assets/visioncoder.png new file mode 100644 index 0000000000..24b1760ce5 Binary files /dev/null and b/assets/visioncoder.png differ diff --git a/build-web.ps1 b/build-web.ps1 new file mode 100644 index 0000000000..c1aaf3a6fa --- /dev/null +++ b/build-web.ps1 @@ -0,0 +1,22 @@ +$ErrorActionPreference = "Stop" + +Write-Host "Building Next.js frontend..." +Set-Location web +npm run build +if ($LASTEXITCODE -ne 0) { + Write-Host "Frontend build failed!" + exit 1 +} +Set-Location .. 
+ +Write-Host "Copying static export to embed directory..." +$dest = "internal\managementasset\web_static" +if (Test-Path $dest) { + Get-ChildItem $dest | Remove-Item -Recurse -Force +} +if (-not (Test-Path $dest)) { + New-Item -ItemType Directory -Path $dest -Force | Out-Null +} +Copy-Item -Path "web\out\*" -Destination $dest -Recurse -Force + +Write-Host "Done! Web assets embedded." diff --git a/build-web.sh b/build-web.sh new file mode 100644 index 0000000000..6ad846be17 --- /dev/null +++ b/build-web.sh @@ -0,0 +1,15 @@ +#!/usr/bin/env bash +set -e + +echo "Building Next.js frontend..." +cd web +npm run build +cd .. + +echo "Copying static export to embed directory..." +dest="internal/managementasset/web_static" +rm -rf "$dest" +mkdir -p "$dest" +cp -r web/out/* "$dest/" + +echo "Done! Web assets embedded." diff --git a/cmd/fetch_antigravity_models/main.go b/cmd/fetch_antigravity_models/main.go index d4328eb32f..250bcbdfa3 100644 --- a/cmd/fetch_antigravity_models/main.go +++ b/cmd/fetch_antigravity_models/main.go @@ -25,11 +25,11 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - sdkauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + sdkauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" ) diff --git a/cmd/mcpdebug/main.go b/cmd/mcpdebug/main.go new file mode 100644 index 0000000000..28e810dbc6 --- /dev/null +++ b/cmd/mcpdebug/main.go @@ -0,0 +1,20 @@ +package main + +import ( + "encoding/hex" + "fmt" + "os" + + cursorproto 
"github.com/router-for-me/CLIProxyAPI/v7/internal/auth/cursor/proto" +) + +func main() { + // Encode MCP result with empty execId + resultBytes := cursorproto.EncodeExecMcpResult(1, "", `{"test": "data"}`, false) + fmt.Printf("Result protobuf hex: %s\n", hex.EncodeToString(resultBytes)) + fmt.Printf("Result length: %d bytes\n", len(resultBytes)) + + // Write to file for analysis + os.WriteFile("mcp_result.bin", resultBytes, 0644) + fmt.Println("Wrote mcp_result.bin") +} diff --git a/cmd/protocheck/main.go b/cmd/protocheck/main.go new file mode 100644 index 0000000000..2e40580017 --- /dev/null +++ b/cmd/protocheck/main.go @@ -0,0 +1,35 @@ +package main + +import ( + "fmt" + + cursorproto "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/cursor/proto" + "google.golang.org/protobuf/reflect/protoreflect" + "google.golang.org/protobuf/types/dynamicpb" +) + +func main() { + ecm := dynamicpb.NewMessage(cursorproto.Msg("ExecClientMessage")) + + // Try different field names + names := []string{ + "mcp_result", "mcpResult", "McpResult", "MCP_RESULT", + "shell_result", "shellResult", + } + + for _, name := range names { + fd := ecm.Descriptor().Fields().ByName(protoreflect.Name(name)) + if fd != nil { + fmt.Printf("Found field %q: number=%d, kind=%s\n", name, fd.Number(), fd.Kind()) + } else { + fmt.Printf("Field %q NOT FOUND\n", name) + } + } + + // List all fields + fmt.Println("\nAll fields in ExecClientMessage:") + for i := 0; i < ecm.Descriptor().Fields().Len(); i++ { + f := ecm.Descriptor().Fields().Get(i) + fmt.Printf(" %d: %q (number=%d)\n", i, f.Name(), f.Number()) + } +} diff --git a/cmd/server/main.go b/cmd/server/main.go index b8707f0a43..1ef8300661 100644 --- a/cmd/server/main.go +++ b/cmd/server/main.go @@ -10,28 +10,31 @@ import ( "fmt" "io" "io/fs" + "net" "net/url" "os" "path/filepath" + "strconv" "strings" "time" "github.com/joho/godotenv" - configaccess "github.com/router-for-me/CLIProxyAPI/v6/internal/access/config_access" - 
"github.com/router-for-me/CLIProxyAPI/v6/internal/buildinfo" - "github.com/router-for-me/CLIProxyAPI/v6/internal/cmd" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - "github.com/router-for-me/CLIProxyAPI/v6/internal/managementasset" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/store" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator" - "github.com/router-for-me/CLIProxyAPI/v6/internal/tui" - "github.com/router-for-me/CLIProxyAPI/v6/internal/usage" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + configaccess "github.com/router-for-me/CLIProxyAPI/v7/internal/access/config_access" + "github.com/router-for-me/CLIProxyAPI/v7/internal/buildinfo" + "github.com/router-for-me/CLIProxyAPI/v7/internal/cmd" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/home" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/managementasset" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/store" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/tui" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" ) @@ -70,6 +73,8 @@ func main() { var vertexImportPrefix string var configPath string var 
password string + var homeAddr string + var homePassword string var tuiMode bool var standalone bool var localModel bool @@ -88,6 +93,8 @@ func main() { flag.StringVar(&vertexImport, "vertex-import", "", "Import Vertex service account key JSON file") flag.StringVar(&vertexImportPrefix, "vertex-import-prefix", "", "Prefix for Vertex model namespacing (use with -vertex-import)") flag.StringVar(&password, "password", "", "") + flag.StringVar(&homeAddr, "home", "", "Home control plane address in host:port format (loads config from home and skips local config file)") + flag.StringVar(&homePassword, "home-password", "", "Home control plane password (Redis AUTH)") flag.BoolVar(&tuiMode, "tui", false, "Start with terminal management UI") flag.BoolVar(&standalone, "standalone", false, "In TUI mode, start an embedded local server") flag.BoolVar(&localModel, "local-model", false, "Use embedded model catalog only, skip remote model fetching") @@ -126,6 +133,7 @@ func main() { var err error var cfg *config.Config var isCloudDeploy bool + var configLoadedFromHome bool var ( usePostgresStore bool pgStoreDSN string @@ -236,7 +244,68 @@ func main() { // Determine and load the configuration file. // Prefer the Postgres store when configured, otherwise fallback to git or local files. 
var configFilePath string - if usePostgresStore { + if strings.TrimSpace(homeAddr) != "" { + configLoadedFromHome = true + trimmedHomePassword := strings.TrimSpace(homePassword) + host, portStr, errSplit := net.SplitHostPort(strings.TrimSpace(homeAddr)) + if errSplit != nil { + log.Errorf("invalid -home address %q (expected host:port): %v", homeAddr, errSplit) + return + } + host = strings.TrimSpace(host) + if host == "" { + log.Errorf("invalid -home address %q: host is empty", homeAddr) + return + } + port, errPort := strconv.Atoi(strings.TrimSpace(portStr)) + if errPort != nil || port <= 0 { + log.Errorf("invalid -home address %q: invalid port %q", homeAddr, portStr) + return + } + + homeCfg := config.HomeConfig{ + Enabled: true, + Host: host, + Port: port, + Password: trimmedHomePassword, + } + homeClient := home.New(homeCfg) + defer homeClient.Close() + + ctxHome, cancelHome := context.WithTimeout(context.Background(), 30*time.Second) + raw, errGetConfig := homeClient.GetConfig(ctxHome) + cancelHome() + if errGetConfig != nil { + log.Errorf("failed to fetch config from home: %v", errGetConfig) + return + } + + parsed, errParseConfig := config.ParseConfigBytes(raw) + if errParseConfig != nil { + log.Errorf("failed to parse config payload from home: %v", errParseConfig) + return + } + if parsed == nil { + parsed = &config.Config{} + } + parsed.Home = homeCfg + parsed.Port = 8317 // Default to 8317 for home mode, can be overridden by home config + parsed.UsageStatisticsEnabled = true + cfg = parsed + + // Keep a non-empty config path for downstream components (log paths, management assets, etc), + // but do not require the file to exist when loading config from home. + if strings.TrimSpace(configPath) != "" { + configFilePath = configPath + } else { + configFilePath = filepath.Join(wd, "config.yaml") + } + + // Local stores are intentionally disabled when config is loaded from home. 
+ usePostgresStore = false + useObjectStore = false + useGitStore = false + } else if usePostgresStore { if pgStoreLocalPath == "" { pgStoreLocalPath = wd } @@ -400,24 +469,29 @@ func main() { // In cloud deploy mode, check if we have a valid configuration var configFileExists bool if isCloudDeploy { - if info, errStat := os.Stat(configFilePath); errStat != nil { - // Don't mislead: API server will not start until configuration is provided. - log.Info("Cloud deploy mode: No configuration file detected; standing by for configuration") - configFileExists = false - } else if info.IsDir() { - log.Info("Cloud deploy mode: Config path is a directory; standing by for configuration") - configFileExists = false - } else if cfg.Port == 0 { - // LoadConfigOptional returns empty config when file is empty or invalid. - // Config file exists but is empty or invalid; treat as missing config - log.Info("Cloud deploy mode: Configuration file is empty or invalid; standing by for valid configuration") - configFileExists = false + if configLoadedFromHome && cfg != nil { + configFileExists = cfg.Port != 0 } else { - log.Info("Cloud deploy mode: Configuration file detected; starting service") - configFileExists = true + if info, errStat := os.Stat(configFilePath); errStat != nil { + // Don't mislead: API server will not start until configuration is provided. + log.Info("Cloud deploy mode: No configuration file detected; standing by for configuration") + configFileExists = false + } else if info.IsDir() { + log.Info("Cloud deploy mode: Config path is a directory; standing by for configuration") + configFileExists = false + } else if cfg.Port == 0 { + // LoadConfigOptional returns empty config when file is empty or invalid. 
+ // Config file exists but is empty or invalid; treat as missing config + log.Info("Cloud deploy mode: Configuration file is empty or invalid; standing by for valid configuration") + configFileExists = false + } else { + log.Info("Cloud deploy mode: Configuration file detected; starting service") + configFileExists = true + } } } - usage.SetStatisticsEnabled(cfg.UsageStatisticsEnabled) + redisqueue.SetUsageStatisticsEnabled(cfg.UsageStatisticsEnabled) + redisqueue.SetRetentionSeconds(cfg.RedisUsageQueueRetentionSeconds) coreauth.SetQuotaCooldownDisabled(cfg.DisableCooling) if err = logging.ConfigureLogOutput(cfg); err != nil { @@ -495,8 +569,10 @@ func main() { // Standalone mode: start an embedded local server and connect TUI client to it. managementasset.StartAutoUpdater(context.Background(), configFilePath) misc.StartAntigravityVersionUpdater(context.Background()) - if !localModel { + if !localModel && !cfg.Home.Enabled { registry.StartModelsUpdater(context.Background()) + } else if cfg.Home.Enabled { + log.Info("Home mode: remote model updates disabled") } hook := tui.NewLogHook(2000) hook.SetFormatter(&logging.LogFormatter{}) @@ -571,8 +647,10 @@ func main() { // Start the main proxy service managementasset.StartAutoUpdater(context.Background(), configFilePath) misc.StartAntigravityVersionUpdater(context.Background()) - if !localModel { + if !localModel && !cfg.Home.Enabled { registry.StartModelsUpdater(context.Background()) + } else if cfg.Home.Enabled { + log.Info("Home mode: remote model updates disabled") } cmd.StartService(cfg, configFilePath, password) } diff --git a/config.example.yaml b/config.example.yaml index 734dd7d522..0dcbe7303e 100644 --- a/config.example.yaml +++ b/config.example.yaml @@ -1,6 +1,6 @@ # Server host/interface to bind to. Default is empty ("") to bind all interfaces (IPv4 + IPv6). # Use "127.0.0.1" or "localhost" to restrict access to local machine only. 
-host: "" +host: '' # Server port port: 8317 @@ -8,8 +8,8 @@ port: 8317 # TLS settings for HTTPS. When enabled, the server listens with the provided certificate and key. tls: enable: false - cert: "" - key: "" + cert: '' + key: '' # Management API settings remote-management: @@ -20,7 +20,7 @@ remote-management: # Management key. If a plaintext value is provided here, it will be hashed on startup. # All management requests (even from localhost) require this key. # Leave empty to disable the Management API entirely (404 for all /v0/management routes). - secret-key: "" + secret-key: '' # Disable the bundled management control panel asset download and HTTP route when true. disable-control-panel: false @@ -30,16 +30,16 @@ remote-management: # disable-auto-update-panel: false # GitHub repository for the management control panel. Accepts a repository URL or releases API URL. - panel-github-repository: "https://github.com/router-for-me/Cli-Proxy-API-Management-Center" + panel-github-repository: 'https://github.com/router-for-me/Cli-Proxy-API-Management-Center' # Authentication directory (supports ~ for home directory) -auth-dir: "~/.cli-proxy-api" +auth-dir: '~/.cli-proxy-api' # API keys for authentication api-keys: - - "your-api-key-1" - - "your-api-key-2" - - "your-api-key-3" + - 'your-api-key-1' + - 'your-api-key-2' + - 'your-api-key-3' # Enable debug logging debug: false @@ -47,11 +47,16 @@ debug: false # Enable pprof HTTP debug server (host:port). Keep it bound to localhost for safety. pprof: enable: false - addr: "127.0.0.1:8316" + addr: '127.0.0.1:8316' # When true, disable high-overhead HTTP middleware features to reduce per-request memory usage under high concurrency. commercial-mode: false +# Open OAuth URLs in incognito/private browser mode. +# Useful when you want to login with a different account without logging out from your current session. 
+# Default: false (but Kiro auth defaults to true for multi-account support) +incognito-browser: true + # When true, write application logs to rotating files instead of stdout logging-to-file: false @@ -226,6 +231,40 @@ nonstream-keepalive-interval: 0 # user-agent: "codex_cli_rs/0.114.0 (Mac OS 14.2.0; x86_64) vscode/1.111.0" # beta-features: "multi_agent" +# BaoTa (BT Panel) AI configuration +# Phone is stored in plaintext; password is stored as base64-encoded. +# Use: ./server --bt-login (reads credentials from config and saves JSON to auth directory) +#bt: +# - phone: "13800138000" +# password: "base64-encoded-password" +# prefix: "" # optional: namespace model aliases +# proxy-url: "" # optional: per-key proxy override +# models: # optional: model aliases +# - name: "qwen3.6-plus" # upstream model name +# alias: "qwen" # client-visible alias +# excluded-models: [] # optional: models to exclude +# priority: 0 # optional: selection priority (higher preferred) + +# Kiro (AWS CodeWhisperer) configuration +# Note: Kiro API currently only operates in us-east-1 region +#kiro: +# - token-file: "~/.aws/sso/cache/kiro-auth-token.json" # path to Kiro token file +# agent-task-type: "" # optional: "vibe" or empty (API default) +# start-url: "https://your-company.awsapps.com/start" # optional: IDC start URL (preset for login) +# region: "us-east-1" # optional: OIDC region for IDC login and token refresh +# - access-token: "aoaAAAAA..." # or provide tokens directly +# refresh-token: "aorAAAAA..." +# profile-arn: "arn:aws:codewhisperer:us-east-1:..." +# proxy-url: "socks5://proxy.example.com:1080" # optional: proxy override + +# Kilocode (OAuth-based code assistant) +# Note: Kilocode uses OAuth device flow authentication. 
+# Use the CLI command: ./server --kilo-login +# This will save credentials to the auth directory (default: ~/.cli-proxy-api/) +# oauth-model-alias: +# kilo: +# - name: "minimax/minimax-m2.5:free" + # OpenAI compatibility providers # openai-compatibility: # - name: "openrouter" # The name of the provider; it will be used in the user agent and other places. @@ -309,7 +348,7 @@ nonstream-keepalive-interval: 0 # Global OAuth model name aliases (per channel) # These aliases rename model IDs for both model listing and request routing. -# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, kimi. +# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, iflow, kiro, github-copilot, kimi, codearts, joycode, gitlab, cursor, qoder, codebuddy, codebuddy-ai, kilo, bt. # NOTE: Aliases do not apply to gemini-api-key, codex-api-key, claude-api-key, openai-compatibility, vertex-api-key, or ampcode. # NOTE: Because aliases affect the merged /v1 model list and merged request routing, overlapping # client-visible names can become ambiguous across providers. /api/provider/{provider}/... helps @@ -317,6 +356,21 @@ nonstream-keepalive-interval: 0 # model/alias. For strict backend pinning, use unique aliases/prefixes or avoid overlapping names. # You can repeat the same name with different aliases to expose multiple client model names. 
# oauth-model-alias: +# antigravity: +# - name: "rev19-uic3-1p" +# alias: "gemini-2.5-computer-use-preview-10-2025" +# - name: "gemini-3-pro-image" +# alias: "gemini-3-pro-image-preview" +# - name: "gemini-3-pro-high" +# alias: "gemini-3-pro-preview" +# - name: "gemini-3-flash" +# alias: "gemini-3-flash-preview" +# - name: "claude-sonnet-4-5" +# alias: "gemini-claude-sonnet-4-5" +# - name: "claude-sonnet-4-5-thinking" +# alias: "gemini-claude-sonnet-4-5-thinking" +# - name: "claude-opus-4-5-thinking" +# alias: "gemini-claude-opus-4-5-thinking" # gemini-cli: # - name: "gemini-2.5-pro" # original model name under this channel # alias: "g2.5p" # client-visible alias @@ -327,9 +381,6 @@ nonstream-keepalive-interval: 0 # aistudio: # - name: "gemini-2.5-pro" # alias: "g2.5p" -# antigravity: -# - name: "gemini-3-pro-high" -# alias: "gemini-3-pro-preview" # claude: # - name: "claude-sonnet-4-5-20250929" # alias: "cs4.5" @@ -339,8 +390,15 @@ nonstream-keepalive-interval: 0 # kimi: # - name: "kimi-k2.5" # alias: "k2.5" +# kiro: +# - name: "kiro-claude-opus-4-5" +# alias: "op45" +# github-copilot: +# - name: "gpt-5" +# alias: "copilot-gpt5" # OAuth provider excluded models +# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, iflow, kiro, github-copilot, codearts, joycode, gitlab, cursor, qoder, codebuddy, codebuddy-ai, kilo, bt. 
# oauth-excluded-models: # gemini-cli: # - "gemini-2.5-pro" # exclude specific models (exact match) @@ -359,6 +417,10 @@ nonstream-keepalive-interval: 0 # - "gpt-5-codex-mini" # kimi: # - "kimi-k2-thinking" +# kiro: +# - "kiro-claude-haiku-4-5" +# github-copilot: +# - "raptor-mini" # Optional payload configuration # payload: @@ -393,3 +455,8 @@ nonstream-keepalive-interval: 0 # params: # JSON paths (gjson/sjson syntax) to remove from the payload # - "generationConfig.thinkingConfig.thinkingBudget" # - "generationConfig.responseJsonSchema" + +# JoyCode (JD JoyCode AI) configuration +# Login via: ./cli-proxy-api --joycode-login +# Or use the web OAuth flow at: http://localhost:/v0/oauth/joycode +# After login, credentials are saved in the auth directory (default: ~/.cli-proxy-api/) diff --git a/docker-build.sh b/docker-build.sh index 4538b80716..ebe7d92384 100644 --- a/docker-build.sh +++ b/docker-build.sh @@ -5,123 +5,13 @@ # This script automates the process of building and running the Docker container # with version information dynamically injected at build time. -# Hidden feature: Preserve usage statistics across rebuilds -# Usage: ./docker-build.sh --with-usage -# First run prompts for management API key, saved to temp/stats/.api_secret - set -euo pipefail -STATS_DIR="temp/stats" -STATS_FILE="${STATS_DIR}/.usage_backup.json" -SECRET_FILE="${STATS_DIR}/.api_secret" -WITH_USAGE=false - -get_port() { - if [[ -f "config.yaml" ]]; then - grep -E "^port:" config.yaml | sed -E 's/^port: *["'"'"']?([0-9]+)["'"'"']?.*$/\1/' - else - echo "8317" - fi -} - -export_stats_api_secret() { - if [[ -f "${SECRET_FILE}" ]]; then - API_SECRET=$(cat "${SECRET_FILE}") - else - if [[ ! -d "${STATS_DIR}" ]]; then - mkdir -p "${STATS_DIR}" - fi - echo "First time using --with-usage. Management API key required." 
- read -r -p "Enter management key: " -s API_SECRET - echo - echo "${API_SECRET}" > "${SECRET_FILE}" - chmod 600 "${SECRET_FILE}" - fi -} - -check_container_running() { - local port - port=$(get_port) - - if ! curl -s -o /dev/null -w "%{http_code}" "http://localhost:${port}/" | grep -q "200"; then - echo "Error: cli-proxy-api service is not responding at localhost:${port}" - echo "Please start the container first or use without --with-usage flag." - exit 1 - fi -} - -export_stats() { - local port - port=$(get_port) - - if [[ ! -d "${STATS_DIR}" ]]; then - mkdir -p "${STATS_DIR}" - fi - check_container_running - echo "Exporting usage statistics..." - EXPORT_RESPONSE=$(curl -s -w "\n%{http_code}" -H "X-Management-Key: ${API_SECRET}" \ - "http://localhost:${port}/v0/management/usage/export") - HTTP_CODE=$(echo "${EXPORT_RESPONSE}" | tail -n1) - RESPONSE_BODY=$(echo "${EXPORT_RESPONSE}" | sed '$d') - - if [[ "${HTTP_CODE}" != "200" ]]; then - echo "Export failed (HTTP ${HTTP_CODE}): ${RESPONSE_BODY}" - exit 1 - fi - - echo "${RESPONSE_BODY}" > "${STATS_FILE}" - echo "Statistics exported to ${STATS_FILE}" -} - -import_stats() { - local port - port=$(get_port) - - echo "Importing usage statistics..." - IMPORT_RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \ - -H "X-Management-Key: ${API_SECRET}" \ - -H "Content-Type: application/json" \ - -d @"${STATS_FILE}" \ - "http://localhost:${port}/v0/management/usage/import") - IMPORT_CODE=$(echo "${IMPORT_RESPONSE}" | tail -n1) - IMPORT_BODY=$(echo "${IMPORT_RESPONSE}" | sed '$d') - - if [[ "${IMPORT_CODE}" == "200" ]]; then - echo "Statistics imported successfully" - else - echo "Import failed (HTTP ${IMPORT_CODE}): ${IMPORT_BODY}" - fi - - rm -f "${STATS_FILE}" -} - -wait_for_service() { - local port - port=$(get_port) - - echo "Waiting for service to be ready..." 
- for i in {1..30}; do - if curl -s -o /dev/null -w "%{http_code}" "http://localhost:${port}/" | grep -q "200"; then - break - fi - sleep 1 - done - sleep 2 -} - -case "${1:-}" in - "") - ;; - "--with-usage") - WITH_USAGE=true - export_stats_api_secret - ;; - *) - echo "Error: unknown option '${1}'. Did you mean '--with-usage'?" - echo "Usage: ./docker-build.sh [--with-usage]" - exit 1 - ;; -esac +if [[ "${1:-}" != "" ]]; then + echo "Error: unknown option '${1}'." + echo "Usage: ./docker-build.sh" + exit 1 +fi # --- Step 1: Choose Environment --- echo "Please select an option:" @@ -133,14 +23,7 @@ read -r -p "Enter choice [1-2]: " choice case "$choice" in 1) echo "--- Running with Pre-built Image ---" - if [[ "${WITH_USAGE}" == "true" ]]; then - export_stats - fi docker compose up -d --remove-orphans --no-build - if [[ "${WITH_USAGE}" == "true" ]]; then - wait_for_service - import_stats - fi echo "Services are starting from remote image." echo "Run 'docker compose logs -f' to see the logs." ;; @@ -167,18 +50,9 @@ case "$choice" in --build-arg COMMIT="${COMMIT}" \ --build-arg BUILD_DATE="${BUILD_DATE}" - if [[ "${WITH_USAGE}" == "true" ]]; then - export_stats - fi - echo "Starting the services..." docker compose up -d --remove-orphans --pull never - if [[ "${WITH_USAGE}" == "true" ]]; then - wait_for_service - import_stats - fi - echo "Build complete. Services are starting." echo "Run 'docker compose logs -f' to see the logs." 
;; diff --git a/examples/custom-provider/main.go b/examples/custom-provider/main.go index fdbae275e8..6f37c341de 100644 --- a/examples/custom-provider/main.go +++ b/examples/custom-provider/main.go @@ -24,14 +24,14 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - clipexec "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/logging" - sdktr "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + clipexec "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/logging" + sdktr "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" ) const ( diff --git a/examples/http-request/main.go b/examples/http-request/main.go index a667a9ca0c..1e0215ecea 100644 --- a/examples/http-request/main.go +++ b/examples/http-request/main.go @@ -16,8 +16,8 @@ import ( "strings" "time" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - clipexec "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + clipexec "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" log "github.com/sirupsen/logrus" ) diff --git a/examples/translator/main.go b/examples/translator/main.go index 88f142a3d2..524a303eb8 100644 --- a/examples/translator/main.go +++ b/examples/translator/main.go @@ -4,8 +4,8 @@ 
import ( "context" "fmt" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" - _ "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator/builtin" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + _ "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator/builtin" ) func main() { diff --git a/go.mod b/go.mod index 7ad363a716..52d27d1dc0 100644 --- a/go.mod +++ b/go.mod @@ -1,4 +1,4 @@ -module github.com/router-for-me/CLIProxyAPI/v6 +module github.com/router-for-me/CLIProxyAPI/v7 go 1.26.0 @@ -9,6 +9,7 @@ require ( github.com/charmbracelet/bubbletea v1.3.10 github.com/charmbracelet/lipgloss v1.1.0 github.com/fsnotify/fsnotify v1.9.0 + github.com/fxamacker/cbor/v2 v2.9.2 github.com/gin-gonic/gin v1.10.1 github.com/go-git/go-git/v6 v6.0.0-20251009132922-75a182125145 github.com/google/uuid v1.6.0 @@ -17,6 +18,7 @@ require ( github.com/joho/godotenv v1.5.1 github.com/klauspost/compress v1.17.4 github.com/minio/minio-go/v7 v7.0.66 + github.com/redis/go-redis/v9 v9.19.0 github.com/refraction-networking/utls v1.8.2 github.com/sirupsen/logrus v1.9.3 github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 @@ -27,10 +29,17 @@ require ( golang.org/x/net v0.47.0 golang.org/x/oauth2 v0.30.0 golang.org/x/sync v0.18.0 + golang.org/x/term v0.37.0 gopkg.in/natefinch/lumberjack.v2 v2.2.1 gopkg.in/yaml.v3 v3.0.1 ) +require ( + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/x448/float16 v0.8.4 // indirect + go.uber.org/atomic v1.11.0 // indirect +) + require ( cloud.google.com/go/compute/metadata v0.3.0 // indirect github.com/Microsoft/go-winio v0.6.2 // indirect @@ -94,6 +103,6 @@ require ( golang.org/x/arch v0.8.0 // indirect golang.org/x/sys v0.38.0 // indirect golang.org/x/text v0.31.0 // indirect - google.golang.org/protobuf v1.34.1 // indirect + google.golang.org/protobuf v1.34.1 gopkg.in/ini.v1 v1.67.0 // indirect ) diff --git a/go.sum b/go.sum index e811b0123b..c44495a48b 100644 --- a/go.sum +++ b/go.sum @@ -14,10 +14,16 @@ 
github.com/atotto/clipboard v0.1.4 h1:EH0zSVneZPSuFR11BlR9YppQTVDbh5+16AmcJi4g1z github.com/atotto/clipboard v0.1.4/go.mod h1:ZY9tmq7sm5xIbd9bOK4onWV4S6X0u6GY7Vn0Yu86PYI= github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k= github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8= +github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs= +github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c= +github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA= +github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0= github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0= github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4= github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM= github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/charmbracelet/bubbles v1.0.0 h1:12J8/ak/uCZEMQ6KU7pcfwceyjLlWsDLAxB5fXonfvc= github.com/charmbracelet/bubbles v1.0.0/go.mod h1:9d/Zd5GdnauMI5ivUIVisuEm3ave1XwXtD1ckyV6r3E= github.com/charmbracelet/bubbletea v1.3.10 h1:otUDHWMMzQSB0Pkc87rm691KZ3SWa4KUlvF9nRvCICw= @@ -61,6 +67,8 @@ github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6 github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM= github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.2 h1:X4Ksno9+x3cz0TZv69ec1hxP/+tymuR8PXQJyDwfh78= 
+github.com/fxamacker/cbor/v2 v2.9.2/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0= github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk= github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE= @@ -158,6 +166,8 @@ github.com/pjbgf/sha1cd v0.5.0 h1:a+UkboSi1znleCDUNT3M5YxjOnN1fz2FhN48FlwCxs0= github.com/pjbgf/sha1cd v0.5.0/go.mod h1:lhpGlyHLpQZoxMv8HcgXvZEhcGs0PG/vsZnEJ7H0iCM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/redis/go-redis/v9 v9.19.0 h1:XPVaaPSnG6RhYf7p+rmSa9zZfeVAnWsH5h3lxthOm/k= +github.com/redis/go-redis/v9 v9.19.0/go.mod h1:v/M13XI1PVCDcm01VtPFOADfZtHf8YW3baQf57KlIkA= github.com/refraction-networking/utls v1.8.2 h1:j4Q1gJj0xngdeH+Ox/qND11aEfhpgoEvV+S9iJ2IdQo= github.com/refraction-networking/utls v1.8.2/go.mod h1:jkSOEkLqn+S/jtpEHPOsVv/4V4EVnelwbMQl4vCWXAM= github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= @@ -201,8 +211,14 @@ github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08= github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE= github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no= github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM= +github.com/zeebo/xxh3 v1.1.0 
h1:s7DLGDK45Dyfg7++yxI0khrfwq9661w9EN78eP/UZVs= +github.com/zeebo/xxh3 v1.1.0/go.mod h1:IisAie1LELR4xhVinxWS5+zf1lA4p0MW4T+w+W07F5s= +go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= +go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0= golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8= golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc= golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys= diff --git a/internal/access/config_access/provider.go b/internal/access/config_access/provider.go index 84e8abcb0e..915160b76f 100644 --- a/internal/access/config_access/provider.go +++ b/internal/access/config_access/provider.go @@ -5,8 +5,8 @@ import ( "net/http" "strings" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) // Register ensures the config-access provider is available to the access manager. 
diff --git a/internal/access/reconcile.go b/internal/access/reconcile.go index 36601f9998..d71e2b8d28 100644 --- a/internal/access/reconcile.go +++ b/internal/access/reconcile.go @@ -6,9 +6,9 @@ import ( "sort" "strings" - configaccess "github.com/router-for-me/CLIProxyAPI/v6/internal/access/config_access" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" + configaccess "github.com/router-for-me/CLIProxyAPI/v7/internal/access/config_access" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" log "github.com/sirupsen/logrus" ) diff --git a/internal/api/buffered_conn.go b/internal/api/buffered_conn.go new file mode 100644 index 0000000000..5eb55f9658 --- /dev/null +++ b/internal/api/buffered_conn.go @@ -0,0 +1,32 @@ +package api + +import ( + "bufio" + "crypto/tls" + "net" +) + +type bufferedConn struct { + net.Conn + reader *bufio.Reader +} + +func (c *bufferedConn) Read(p []byte) (int, error) { + if c == nil { + return 0, net.ErrClosed + } + if c.reader == nil { + return c.Conn.Read(p) + } + return c.reader.Read(p) +} + +func (c *bufferedConn) ConnectionState() tls.ConnectionState { + if c == nil || c.Conn == nil { + return tls.ConnectionState{} + } + if stater, ok := c.Conn.(interface{ ConnectionState() tls.ConnectionState }); ok { + return stater.ConnectionState() + } + return tls.ConnectionState{} +} diff --git a/internal/api/handlers/management/api_key_usage.go b/internal/api/handlers/management/api_key_usage.go new file mode 100644 index 0000000000..dbe6fbd998 --- /dev/null +++ b/internal/api/handlers/management/api_key_usage.go @@ -0,0 +1,107 @@ +package management + +import ( + "net/http" + "strings" + "time" + + "github.com/gin-gonic/gin" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +type apiKeyUsageEntry struct { + Success int64 `json:"success"` + Failed int64 `json:"failed"` + 
RecentRequests []coreauth.RecentRequestBucket `json:"recent_requests"` +} + +func mergeRecentRequestBuckets(dst, src []coreauth.RecentRequestBucket) []coreauth.RecentRequestBucket { + if len(dst) == 0 { + return src + } + if len(src) == 0 { + return dst + } + if len(dst) != len(src) { + n := len(dst) + if len(src) < n { + n = len(src) + } + for i := 0; i < n; i++ { + dst[i].Success += src[i].Success + dst[i].Failed += src[i].Failed + } + return dst + } + for i := range dst { + dst[i].Success += src[i].Success + dst[i].Failed += src[i].Failed + } + return dst +} + +// GetAPIKeyUsage returns recent request buckets for all in-memory api_key auths, +// grouped by provider and keyed by "base_url|api_key". +func (h *Handler) GetAPIKeyUsage(c *gin.Context) { + if h == nil { + c.JSON(http.StatusInternalServerError, gin.H{"error": "handler not initialized"}) + return + } + + h.mu.Lock() + manager := h.authManager + h.mu.Unlock() + if manager == nil { + c.JSON(http.StatusServiceUnavailable, gin.H{"error": "core auth manager unavailable"}) + return + } + + now := time.Now() + out := make(map[string]map[string]apiKeyUsageEntry) + for _, auth := range manager.List() { + if auth == nil { + continue + } + kind, apiKey := auth.AccountInfo() + if !strings.EqualFold(strings.TrimSpace(kind), "api_key") { + continue + } + apiKey = strings.TrimSpace(apiKey) + if apiKey == "" { + continue + } + baseURL := "" + if auth.Attributes != nil { + baseURL = strings.TrimSpace(auth.Attributes["base_url"]) + if baseURL == "" { + baseURL = strings.TrimSpace(auth.Attributes["base-url"]) + } + } + compositeKey := baseURL + "|" + apiKey + provider := strings.ToLower(strings.TrimSpace(auth.Provider)) + if provider == "" { + provider = "unknown" + } + + recent := auth.RecentRequestsSnapshot(now) + providerBucket, ok := out[provider] + if !ok { + providerBucket = make(map[string]apiKeyUsageEntry) + out[provider] = providerBucket + } + if existing, exists := providerBucket[compositeKey]; exists { + 
existing.Success += auth.Success + existing.Failed += auth.Failed + existing.RecentRequests = mergeRecentRequestBuckets(existing.RecentRequests, recent) + providerBucket[compositeKey] = existing + continue + } + providerBucket[compositeKey] = apiKeyUsageEntry{ + Success: auth.Success, + Failed: auth.Failed, + RecentRequests: recent, + } + } + + c.JSON(http.StatusOK, out) +} diff --git a/internal/api/handlers/management/api_key_usage_test.go b/internal/api/handlers/management/api_key_usage_test.go new file mode 100644 index 0000000000..f2be17d7db --- /dev/null +++ b/internal/api/handlers/management/api_key_usage_test.go @@ -0,0 +1,95 @@ +package management + +import ( + "context" + "encoding/json" + "net/http" + "net/http/httptest" + "testing" + + "github.com/gin-gonic/gin" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +func sumRecentRequestBuckets(buckets []coreauth.RecentRequestBucket) (int64, int64) { + var success int64 + var failed int64 + for _, bucket := range buckets { + success += bucket.Success + failed += bucket.Failed + } + return success, failed +} + +func TestGetAPIKeyUsage_GroupsByProviderAndAPIKey(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "") + gin.SetMode(gin.TestMode) + + manager := coreauth.NewManager(nil, nil, nil) + if _, err := manager.Register(context.Background(), &coreauth.Auth{ + ID: "codex-auth", + Provider: "codex", + Attributes: map[string]string{ + "api_key": "codex-key", + "base_url": "https://codex.example.com", + }, + }); err != nil { + t.Fatalf("register codex auth: %v", err) + } + if _, err := manager.Register(context.Background(), &coreauth.Auth{ + ID: "claude-auth", + Provider: "claude", + Attributes: map[string]string{ + "api_key": "claude-key", + "base_url": "https://claude.example.com", + }, + }); err != nil { + t.Fatalf("register claude auth: %v", err) + } + + manager.MarkResult(context.Background(), coreauth.Result{AuthID: 
"codex-auth", Provider: "codex", Model: "gpt-5", Success: true}) + manager.MarkResult(context.Background(), coreauth.Result{AuthID: "codex-auth", Provider: "codex", Model: "gpt-5", Success: false}) + manager.MarkResult(context.Background(), coreauth.Result{AuthID: "claude-auth", Provider: "claude", Model: "claude-4", Success: true}) + + h := NewHandlerWithoutConfigFilePath(&config.Config{AuthDir: t.TempDir()}, manager) + + rec := httptest.NewRecorder() + ginCtx, _ := gin.CreateTestContext(rec) + req := httptest.NewRequest(http.MethodGet, "/v0/management/api-key-usage", nil) + ginCtx.Request = req + h.GetAPIKeyUsage(ginCtx) + + if rec.Code != http.StatusOK { + t.Fatalf("status = %d, want %d body=%s", rec.Code, http.StatusOK, rec.Body.String()) + } + + var payload map[string]map[string]apiKeyUsageEntry + if err := json.Unmarshal(rec.Body.Bytes(), &payload); err != nil { + t.Fatalf("decode payload: %v", err) + } + + codexEntry := payload["codex"]["https://codex.example.com|codex-key"] + if codexEntry.Success != 1 || codexEntry.Failed != 1 { + t.Fatalf("codex totals = %d/%d, want 1/1", codexEntry.Success, codexEntry.Failed) + } + if len(codexEntry.RecentRequests) != 20 { + t.Fatalf("codex buckets len = %d, want 20", len(codexEntry.RecentRequests)) + } + codexSuccess, codexFailed := sumRecentRequestBuckets(codexEntry.RecentRequests) + if codexSuccess != 1 || codexFailed != 1 { + t.Fatalf("codex totals = %d/%d, want 1/1", codexSuccess, codexFailed) + } + + claudeEntry := payload["claude"]["https://claude.example.com|claude-key"] + if claudeEntry.Success != 1 || claudeEntry.Failed != 0 { + t.Fatalf("claude totals = %d/%d, want 1/0", claudeEntry.Success, claudeEntry.Failed) + } + if len(claudeEntry.RecentRequests) != 20 { + t.Fatalf("claude buckets len = %d, want 20", len(claudeEntry.RecentRequests)) + } + claudeSuccess, claudeFailed := sumRecentRequestBuckets(claudeEntry.RecentRequests) + if claudeSuccess != 1 || claudeFailed != 0 { + t.Fatalf("claude totals = %d/%d, want 
1/0", claudeSuccess, claudeFailed) + } +} diff --git a/internal/api/handlers/management/api_tools.go b/internal/api/handlers/management/api_tools.go index cb4805e9ef..f10850701a 100644 --- a/internal/api/handlers/management/api_tools.go +++ b/internal/api/handlers/management/api_tools.go @@ -11,10 +11,10 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/geminicli" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/geminicli" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil" log "github.com/sirupsen/logrus" "golang.org/x/oauth2" "golang.org/x/oauth2/google" @@ -766,6 +766,9 @@ func resolveOpenAICompatAPIKeyProxyURL(cfg *config.Config, auth *coreauth.Auth, for i := range cfg.OpenAICompatibility { compat := &cfg.OpenAICompatibility[i] + if compat.Disabled { + continue + } for _, candidate := range candidates { if candidate != "" && strings.EqualFold(strings.TrimSpace(candidate), compat.Name) { for j := range compat.APIKeyEntries { diff --git a/internal/api/handlers/management/api_tools_cbor_test.go b/internal/api/handlers/management/api_tools_cbor_test.go new file mode 100644 index 0000000000..8b7570a916 --- /dev/null +++ b/internal/api/handlers/management/api_tools_cbor_test.go @@ -0,0 +1,149 @@ +package management + +import ( + "bytes" + "encoding/json" + "net/http" + "net/http/httptest" + "testing" + + "github.com/fxamacker/cbor/v2" + "github.com/gin-gonic/gin" +) + +func TestAPICall_CBOR_Support(t *testing.T) { + gin.SetMode(gin.TestMode) + + // Create a test handler + h := &Handler{} + + // Create test request data + reqData := apiCallRequest{ + Method: "GET", + URL: 
"https://httpbin.org/get", + Header: map[string]string{ + "User-Agent": "test-client", + }, + } + + t.Run("JSON request and response", func(t *testing.T) { + // Marshal request as JSON + jsonData, err := json.Marshal(reqData) + if err != nil { + t.Fatalf("Failed to marshal JSON: %v", err) + } + + // Create HTTP request + req := httptest.NewRequest(http.MethodPost, "/v0/management/api-call", bytes.NewReader(jsonData)) + req.Header.Set("Content-Type", "application/json") + + // Create response recorder + w := httptest.NewRecorder() + + // Create Gin context + c, _ := gin.CreateTestContext(w) + c.Request = req + + // Call handler + h.APICall(c) + + // Verify response + if w.Code != http.StatusOK && w.Code != http.StatusBadGateway { + t.Logf("Response status: %d", w.Code) + t.Logf("Response body: %s", w.Body.String()) + } + + // Check content type + contentType := w.Header().Get("Content-Type") + if w.Code == http.StatusOK && !contains(contentType, "application/json") { + t.Errorf("Expected JSON response, got: %s", contentType) + } + }) + + t.Run("CBOR request and response", func(t *testing.T) { + // Marshal request as CBOR + cborData, err := cbor.Marshal(reqData) + if err != nil { + t.Fatalf("Failed to marshal CBOR: %v", err) + } + + // Create HTTP request + req := httptest.NewRequest(http.MethodPost, "/v0/management/api-call", bytes.NewReader(cborData)) + req.Header.Set("Content-Type", "application/cbor") + + // Create response recorder + w := httptest.NewRecorder() + + // Create Gin context + c, _ := gin.CreateTestContext(w) + c.Request = req + + // Call handler + h.APICall(c) + + // Verify response + if w.Code != http.StatusOK && w.Code != http.StatusBadGateway { + t.Logf("Response status: %d", w.Code) + t.Logf("Response body: %s", w.Body.String()) + } + + // Check content type + contentType := w.Header().Get("Content-Type") + if w.Code == http.StatusOK && !contains(contentType, "application/cbor") { + t.Errorf("Expected CBOR response, got: %s", contentType) + } + 
+ // Try to decode CBOR response + if w.Code == http.StatusOK { + var response apiCallResponse + if err := cbor.Unmarshal(w.Body.Bytes(), &response); err != nil { + t.Errorf("Failed to unmarshal CBOR response: %v", err) + } else { + t.Logf("CBOR response decoded successfully: status_code=%d", response.StatusCode) + } + } + }) + + t.Run("CBOR encoding and decoding consistency", func(t *testing.T) { + // Test data + testReq := apiCallRequest{ + Method: "POST", + URL: "https://example.com/api", + Header: map[string]string{ + "Authorization": "Bearer $TOKEN$", + "Content-Type": "application/json", + }, + Data: `{"key":"value"}`, + } + + // Encode to CBOR + cborData, err := cbor.Marshal(testReq) + if err != nil { + t.Fatalf("Failed to marshal to CBOR: %v", err) + } + + // Decode from CBOR + var decoded apiCallRequest + if err := cbor.Unmarshal(cborData, &decoded); err != nil { + t.Fatalf("Failed to unmarshal from CBOR: %v", err) + } + + // Verify fields + if decoded.Method != testReq.Method { + t.Errorf("Method mismatch: got %s, want %s", decoded.Method, testReq.Method) + } + if decoded.URL != testReq.URL { + t.Errorf("URL mismatch: got %s, want %s", decoded.URL, testReq.URL) + } + if decoded.Data != testReq.Data { + t.Errorf("Data mismatch: got %s, want %s", decoded.Data, testReq.Data) + } + if len(decoded.Header) != len(testReq.Header) { + t.Errorf("Header count mismatch: got %d, want %d", len(decoded.Header), len(testReq.Header)) + } + }) +} + +func contains(s, substr string) bool { + return len(s) > 0 && len(substr) > 0 && (s == substr || len(s) >= len(substr) && s[:len(substr)] == substr || bytes.Contains([]byte(s), []byte(substr))) +} diff --git a/internal/api/handlers/management/api_tools_test.go b/internal/api/handlers/management/api_tools_test.go index b27fe6395a..b089eb4a6e 100644 --- a/internal/api/handlers/management/api_tools_test.go +++ b/internal/api/handlers/management/api_tools_test.go @@ -5,9 +5,9 @@ import ( "net/http" "testing" - 
"github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func TestAPICallTransportDirectBypassesGlobalProxy(t *testing.T) { diff --git a/internal/api/handlers/management/auth_files.go b/internal/api/handlers/management/auth_files.go index 8f7b8c5e19..d7e798977e 100644 --- a/internal/api/handlers/management/auth_files.go +++ b/internal/api/handlers/management/auth_files.go @@ -22,17 +22,17 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/antigravity" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/claude" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - geminiAuth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/gemini" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/kimi" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/antigravity" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/claude" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + geminiAuth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/gemini" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kimi" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "golang.org/x/oauth2" @@ -388,6 +388,9 @@ func (h *Handler) buildAuthFileEntry(auth *coreauth.Auth) gin.H { "source": "memory", "size": int64(0), } + entry["success"] = auth.Success + entry["failed"] = auth.Failed + entry["recent_requests"] = auth.RecentRequestsSnapshot(time.Now()) if email := authEmail(auth); email != "" { entry["email"] = email } @@ -2395,23 +2398,10 @@ func performGeminiCLISetup(ctx context.Context, httpClient *http.Client, storage finalProjectID := projectID if responseProjectID != "" { if explicitProject && !strings.EqualFold(responseProjectID, projectID) { - // Check if this is a free user (gen-lang-client projects or free/legacy tier) - isFreeUser := strings.HasPrefix(projectID, "gen-lang-client-") || - strings.EqualFold(tierID, "FREE") || - strings.EqualFold(tierID, "LEGACY") - - if isFreeUser { - // For free users, use backend project ID for preview model access - log.Infof("Gemini onboarding: frontend project %s maps to backend project %s", projectID, responseProjectID) - log.Infof("Using backend project ID: %s (recommended for preview model access)", responseProjectID) - finalProjectID = responseProjectID - } else { - // Pro users: keep requested project ID (original behavior) - log.Warnf("Gemini onboarding returned project %s instead of requested %s; keeping requested project ID.", responseProjectID, projectID) - } - } else { - finalProjectID = responseProjectID + log.Infof("Gemini onboarding: requested project %s maps to backend project %s", projectID, responseProjectID) + log.Infof("Using backend project ID: %s", responseProjectID) } + finalProjectID = responseProjectID } storage.ProjectID = strings.TrimSpace(finalProjectID) diff 
--git a/internal/api/handlers/management/auth_files_batch_test.go b/internal/api/handlers/management/auth_files_batch_test.go index 44cdbd5b5f..ec001ae586 100644 --- a/internal/api/handlers/management/auth_files_batch_test.go +++ b/internal/api/handlers/management/auth_files_batch_test.go @@ -12,8 +12,8 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) func TestUploadAuthFile_BatchMultipart(t *testing.T) { diff --git a/internal/api/handlers/management/auth_files_delete_test.go b/internal/api/handlers/management/auth_files_delete_test.go index 7b7b888c4b..a57c9993ad 100644 --- a/internal/api/handlers/management/auth_files_delete_test.go +++ b/internal/api/handlers/management/auth_files_delete_test.go @@ -11,8 +11,8 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) func TestDeleteAuthFile_UsesAuthPathFromManager(t *testing.T) { diff --git a/internal/api/handlers/management/auth_files_download_test.go b/internal/api/handlers/management/auth_files_download_test.go index a2a20d305a..88024fbba5 100644 --- a/internal/api/handlers/management/auth_files_download_test.go +++ b/internal/api/handlers/management/auth_files_download_test.go @@ -9,7 +9,7 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func TestDownloadAuthFile_ReturnsFile(t *testing.T) { diff --git a/internal/api/handlers/management/auth_files_download_windows_test.go 
b/internal/api/handlers/management/auth_files_download_windows_test.go index 8c174ccf51..88fc7f1146 100644 --- a/internal/api/handlers/management/auth_files_download_windows_test.go +++ b/internal/api/handlers/management/auth_files_download_windows_test.go @@ -11,7 +11,7 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func TestDownloadAuthFile_PreventsWindowsSlashTraversal(t *testing.T) { diff --git a/internal/api/handlers/management/auth_files_gitlab_test.go b/internal/api/handlers/management/auth_files_gitlab_test.go new file mode 100644 index 0000000000..6a3eb935ff --- /dev/null +++ b/internal/api/handlers/management/auth_files_gitlab_test.go @@ -0,0 +1,164 @@ +package management + +import ( + "encoding/json" + "net/http" + "net/http/httptest" + "os" + "path/filepath" + "strings" + "testing" + + "github.com/gin-gonic/gin" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +func TestRequestGitLabPATToken_SavesAuthRecord(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "") + gin.SetMode(gin.TestMode) + + upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if got := r.Header.Get("Authorization"); got != "Bearer glpat-test-token" { + t.Fatalf("authorization header = %q, want Bearer glpat-test-token", got) + } + + w.Header().Set("Content-Type", "application/json") + switch r.URL.Path { + case "/api/v4/user": + _ = json.NewEncoder(w).Encode(map[string]any{ + "id": 42, + "username": "gitlab-user", + "name": "GitLab User", + "email": "gitlab@example.com", + }) + case "/api/v4/personal_access_tokens/self": + _ = json.NewEncoder(w).Encode(map[string]any{ + "id": 7, + "name": "management-center", + "scopes": []string{"api", "read_user"}, + "user_id": 42, + }) + case "/api/v4/code_suggestions/direct_access": + _ = 
json.NewEncoder(w).Encode(map[string]any{ + "base_url": "https://cloud.gitlab.example.com", + "token": "gateway-token", + "expires_at": 1893456000, + "headers": map[string]string{ + "X-Gitlab-Realm": "saas", + }, + "model_details": map[string]any{ + "model_provider": "anthropic", + "model_name": "claude-sonnet-4-5", + }, + }) + default: + http.NotFound(w, r) + } + })) + defer upstream.Close() + + store := &memoryAuthStore{} + h := NewHandlerWithoutConfigFilePath(&config.Config{AuthDir: t.TempDir()}, coreauth.NewManager(nil, nil, nil)) + h.tokenStore = store + + rec := httptest.NewRecorder() + ctx, _ := gin.CreateTestContext(rec) + ctx.Request = httptest.NewRequest(http.MethodPost, "/v0/management/gitlab-auth-url", strings.NewReader(`{"base_url":"`+upstream.URL+`","personal_access_token":"glpat-test-token"}`)) + ctx.Request.Header.Set("Content-Type", "application/json") + + h.RequestGitLabPATToken(ctx) + + if rec.Code != http.StatusOK { + t.Fatalf("expected status %d, got %d with body %s", http.StatusOK, rec.Code, rec.Body.String()) + } + + var resp map[string]any + if err := json.Unmarshal(rec.Body.Bytes(), &resp); err != nil { + t.Fatalf("decode response: %v", err) + } + if got := resp["status"]; got != "ok" { + t.Fatalf("status = %#v, want ok", got) + } + if got := resp["model_provider"]; got != "anthropic" { + t.Fatalf("model_provider = %#v, want anthropic", got) + } + if got := resp["model_name"]; got != "claude-sonnet-4-5" { + t.Fatalf("model_name = %#v, want claude-sonnet-4-5", got) + } + + store.mu.Lock() + defer store.mu.Unlock() + if len(store.items) != 1 { + t.Fatalf("expected 1 saved auth record, got %d", len(store.items)) + } + var saved *coreauth.Auth + for _, item := range store.items { + saved = item + } + if saved == nil { + t.Fatal("expected saved auth record") + } + if saved.Provider != "gitlab" { + t.Fatalf("provider = %q, want gitlab", saved.Provider) + } + if got := saved.Metadata["auth_kind"]; got != "personal_access_token" { + 
t.Fatalf("auth_kind = %#v, want personal_access_token", got) + } + if got := saved.Metadata["model_provider"]; got != "anthropic" { + t.Fatalf("saved model_provider = %#v, want anthropic", got) + } + if got := saved.Metadata["duo_gateway_token"]; got != "gateway-token" { + t.Fatalf("saved duo_gateway_token = %#v, want gateway-token", got) + } +} + +func TestPostOAuthCallback_GitLabWritesPendingCallbackFile(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "") + gin.SetMode(gin.TestMode) + + authDir := t.TempDir() + state := "gitlab-state-123" + RegisterOAuthSession(state, "gitlab") + t.Cleanup(func() { CompleteOAuthSession(state) }) + + h := NewHandlerWithoutConfigFilePath(&config.Config{AuthDir: authDir}, coreauth.NewManager(nil, nil, nil)) + + rec := httptest.NewRecorder() + ctx, _ := gin.CreateTestContext(rec) + ctx.Request = httptest.NewRequest(http.MethodPost, "/v0/management/oauth-callback", strings.NewReader(`{"provider":"gitlab","redirect_url":"http://localhost:17171/auth/callback?code=test-code&state=`+state+`"}`)) + ctx.Request.Header.Set("Content-Type", "application/json") + + h.PostOAuthCallback(ctx) + + if rec.Code != http.StatusOK { + t.Fatalf("expected status %d, got %d with body %s", http.StatusOK, rec.Code, rec.Body.String()) + } + + filePath := filepath.Join(authDir, ".oauth-gitlab-"+state+".oauth") + data, err := os.ReadFile(filePath) + if err != nil { + t.Fatalf("read callback file: %v", err) + } + + var payload map[string]string + if err := json.Unmarshal(data, &payload); err != nil { + t.Fatalf("decode callback payload: %v", err) + } + if got := payload["code"]; got != "test-code" { + t.Fatalf("callback code = %q, want test-code", got) + } + if got := payload["state"]; got != state { + t.Fatalf("callback state = %q, want %q", got, state) + } +} + +func TestNormalizeOAuthProvider_GitLab(t *testing.T) { + provider, err := NormalizeOAuthProvider("gitlab") + if err != nil { + t.Fatalf("NormalizeOAuthProvider returned error: %v", err) + } + if 
provider != "gitlab" { + t.Fatalf("provider = %q, want gitlab", provider) + } +} diff --git a/internal/api/handlers/management/auth_files_patch_fields_test.go b/internal/api/handlers/management/auth_files_patch_fields_test.go index 3ca70012c0..568700a0d6 100644 --- a/internal/api/handlers/management/auth_files_patch_fields_test.go +++ b/internal/api/handlers/management/auth_files_patch_fields_test.go @@ -9,8 +9,8 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) func TestPatchAuthFileFields_MergeHeadersAndDeleteEmptyValues(t *testing.T) { diff --git a/internal/api/handlers/management/auth_files_recent_requests_test.go b/internal/api/handlers/management/auth_files_recent_requests_test.go new file mode 100644 index 0000000000..404bf4848f --- /dev/null +++ b/internal/api/handlers/management/auth_files_recent_requests_test.go @@ -0,0 +1,94 @@ +package management + +import ( + "context" + "encoding/json" + "net/http" + "net/http/httptest" + "testing" + + "github.com/gin-gonic/gin" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +func TestListAuthFiles_IncludesRecentRequestsBuckets(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "") + gin.SetMode(gin.TestMode) + + manager := coreauth.NewManager(nil, nil, nil) + record := &coreauth.Auth{ + ID: "runtime-only-auth-1", + Provider: "codex", + Attributes: map[string]string{ + "runtime_only": "true", + }, + Metadata: map[string]any{ + "type": "codex", + }, + } + if _, errRegister := manager.Register(context.Background(), record); errRegister != nil { + t.Fatalf("failed to register auth record: %v", errRegister) + } + + h := NewHandlerWithoutConfigFilePath(&config.Config{AuthDir: 
t.TempDir()}, manager) + h.tokenStore = &memoryAuthStore{} + + rec := httptest.NewRecorder() + ginCtx, _ := gin.CreateTestContext(rec) + req := httptest.NewRequest(http.MethodGet, "/v0/management/auth-files", nil) + ginCtx.Request = req + + h.ListAuthFiles(ginCtx) + + if rec.Code != http.StatusOK { + t.Fatalf("expected list status %d, got %d with body %s", http.StatusOK, rec.Code, rec.Body.String()) + } + + var payload map[string]any + if errUnmarshal := json.Unmarshal(rec.Body.Bytes(), &payload); errUnmarshal != nil { + t.Fatalf("failed to decode list payload: %v", errUnmarshal) + } + filesRaw, ok := payload["files"].([]any) + if !ok { + t.Fatalf("expected files array, payload: %#v", payload) + } + if len(filesRaw) != 1 { + t.Fatalf("expected 1 auth entry, got %d", len(filesRaw)) + } + + fileEntry, ok := filesRaw[0].(map[string]any) + if !ok { + t.Fatalf("expected file entry object, got %#v", filesRaw[0]) + } + + if _, ok := fileEntry["success"].(float64); !ok { + t.Fatalf("expected success number, got %#v", fileEntry["success"]) + } + if _, ok := fileEntry["failed"].(float64); !ok { + t.Fatalf("expected failed number, got %#v", fileEntry["failed"]) + } + + recentRaw, ok := fileEntry["recent_requests"].([]any) + if !ok { + t.Fatalf("expected recent_requests array, got %#v", fileEntry["recent_requests"]) + } + if len(recentRaw) != 20 { + t.Fatalf("expected 20 recent_requests buckets, got %d", len(recentRaw)) + } + for idx, item := range recentRaw { + bucket, ok := item.(map[string]any) + if !ok { + t.Fatalf("expected bucket object at %d, got %#v", idx, item) + } + if _, ok := bucket["time"].(string); !ok { + t.Fatalf("expected bucket time string at %d, got %#v", idx, bucket["time"]) + } + if _, ok := bucket["success"].(float64); !ok { + t.Fatalf("expected bucket success number at %d, got %#v", idx, bucket["success"]) + } + if _, ok := bucket["failed"].(float64); !ok { + t.Fatalf("expected bucket failed number at %d, got %#v", idx, bucket["failed"]) + } + } +} diff 
--git a/internal/api/handlers/management/config_auth_index.go b/internal/api/handlers/management/config_auth_index.go new file mode 100644 index 0000000000..f2bbc2ff38 --- /dev/null +++ b/internal/api/handlers/management/config_auth_index.go @@ -0,0 +1,243 @@ +package management + +import ( + "fmt" + "strings" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/synthesizer" +) + +type geminiKeyWithAuthIndex struct { + config.GeminiKey + AuthIndex string `json:"auth-index,omitempty"` +} + +type claudeKeyWithAuthIndex struct { + config.ClaudeKey + AuthIndex string `json:"auth-index,omitempty"` +} + +type codexKeyWithAuthIndex struct { + config.CodexKey + AuthIndex string `json:"auth-index,omitempty"` +} + +type vertexCompatKeyWithAuthIndex struct { + config.VertexCompatKey + AuthIndex string `json:"auth-index,omitempty"` +} + +type openAICompatibilityAPIKeyWithAuthIndex struct { + config.OpenAICompatibilityAPIKey + AuthIndex string `json:"auth-index,omitempty"` +} + +type openAICompatibilityWithAuthIndex struct { + Name string `json:"name"` + Priority int `json:"priority,omitempty"` + Disabled bool `json:"disabled"` + Prefix string `json:"prefix,omitempty"` + BaseURL string `json:"base-url"` + APIKeyEntries []openAICompatibilityAPIKeyWithAuthIndex `json:"api-key-entries,omitempty"` + Models []config.OpenAICompatibilityModel `json:"models,omitempty"` + Headers map[string]string `json:"headers,omitempty"` + AuthIndex string `json:"auth-index,omitempty"` +} + +func (h *Handler) liveAuthIndexByID() map[string]string { + out := map[string]string{} + if h == nil { + return out + } + h.mu.Lock() + manager := h.authManager + h.mu.Unlock() + if manager == nil { + return out + } + // authManager.List() returns clones, so EnsureIndex only affects these copies. 
+ for _, auth := range manager.List() { + if auth == nil { + continue + } + id := strings.TrimSpace(auth.ID) + if id == "" { + continue + } + idx := strings.TrimSpace(auth.Index) + if idx == "" { + idx = auth.EnsureIndex() + } + if idx == "" { + continue + } + out[id] = idx + } + return out +} + +func (h *Handler) geminiKeysWithAuthIndex() []geminiKeyWithAuthIndex { + if h == nil { + return nil + } + liveIndexByID := h.liveAuthIndexByID() + + h.mu.Lock() + defer h.mu.Unlock() + if h.cfg == nil { + return nil + } + + idGen := synthesizer.NewStableIDGenerator() + out := make([]geminiKeyWithAuthIndex, len(h.cfg.GeminiKey)) + for i := range h.cfg.GeminiKey { + entry := h.cfg.GeminiKey[i] + authIndex := "" + if key := strings.TrimSpace(entry.APIKey); key != "" { + id, _ := idGen.Next("gemini:apikey", key, entry.BaseURL) + authIndex = liveIndexByID[id] + } + out[i] = geminiKeyWithAuthIndex{ + GeminiKey: entry, + AuthIndex: authIndex, + } + } + return out +} + +func (h *Handler) claudeKeysWithAuthIndex() []claudeKeyWithAuthIndex { + if h == nil { + return nil + } + liveIndexByID := h.liveAuthIndexByID() + + h.mu.Lock() + defer h.mu.Unlock() + if h.cfg == nil { + return nil + } + + idGen := synthesizer.NewStableIDGenerator() + out := make([]claudeKeyWithAuthIndex, len(h.cfg.ClaudeKey)) + for i := range h.cfg.ClaudeKey { + entry := h.cfg.ClaudeKey[i] + authIndex := "" + if key := strings.TrimSpace(entry.APIKey); key != "" { + id, _ := idGen.Next("claude:apikey", key, entry.BaseURL) + authIndex = liveIndexByID[id] + } + out[i] = claudeKeyWithAuthIndex{ + ClaudeKey: entry, + AuthIndex: authIndex, + } + } + return out +} + +func (h *Handler) codexKeysWithAuthIndex() []codexKeyWithAuthIndex { + if h == nil { + return nil + } + liveIndexByID := h.liveAuthIndexByID() + + h.mu.Lock() + defer h.mu.Unlock() + if h.cfg == nil { + return nil + } + + idGen := synthesizer.NewStableIDGenerator() + out := make([]codexKeyWithAuthIndex, len(h.cfg.CodexKey)) + for i := range h.cfg.CodexKey { 
+ entry := h.cfg.CodexKey[i] + authIndex := "" + if key := strings.TrimSpace(entry.APIKey); key != "" { + id, _ := idGen.Next("codex:apikey", key, entry.BaseURL) + authIndex = liveIndexByID[id] + } + out[i] = codexKeyWithAuthIndex{ + CodexKey: entry, + AuthIndex: authIndex, + } + } + return out +} + +func (h *Handler) vertexCompatKeysWithAuthIndex() []vertexCompatKeyWithAuthIndex { + if h == nil { + return nil + } + liveIndexByID := h.liveAuthIndexByID() + + h.mu.Lock() + defer h.mu.Unlock() + if h.cfg == nil { + return nil + } + + idGen := synthesizer.NewStableIDGenerator() + out := make([]vertexCompatKeyWithAuthIndex, len(h.cfg.VertexCompatAPIKey)) + for i := range h.cfg.VertexCompatAPIKey { + entry := h.cfg.VertexCompatAPIKey[i] + id, _ := idGen.Next("vertex:apikey", entry.APIKey, entry.BaseURL, entry.ProxyURL) + authIndex := liveIndexByID[id] + out[i] = vertexCompatKeyWithAuthIndex{ + VertexCompatKey: entry, + AuthIndex: authIndex, + } + } + return out +} + +func (h *Handler) openAICompatibilityWithAuthIndex() []openAICompatibilityWithAuthIndex { + if h == nil { + return nil + } + liveIndexByID := h.liveAuthIndexByID() + + h.mu.Lock() + defer h.mu.Unlock() + if h.cfg == nil { + return nil + } + + normalized := normalizedOpenAICompatibilityEntries(h.cfg.OpenAICompatibility) + out := make([]openAICompatibilityWithAuthIndex, len(normalized)) + idGen := synthesizer.NewStableIDGenerator() + for i := range normalized { + entry := normalized[i] + providerName := strings.ToLower(strings.TrimSpace(entry.Name)) + if providerName == "" { + providerName = "openai-compatibility" + } + idKind := fmt.Sprintf("openai-compatibility:%s", providerName) + + response := openAICompatibilityWithAuthIndex{ + Name: entry.Name, + Priority: entry.Priority, + Disabled: entry.Disabled, + Prefix: entry.Prefix, + BaseURL: entry.BaseURL, + Models: entry.Models, + Headers: entry.Headers, + AuthIndex: "", + } + if len(entry.APIKeyEntries) == 0 { + id, _ := idGen.Next(idKind, entry.BaseURL) + 
response.AuthIndex = liveIndexByID[id] + } else { + response.APIKeyEntries = make([]openAICompatibilityAPIKeyWithAuthIndex, len(entry.APIKeyEntries)) + for j := range entry.APIKeyEntries { + apiKeyEntry := entry.APIKeyEntries[j] + id, _ := idGen.Next(idKind, apiKeyEntry.APIKey, entry.BaseURL, apiKeyEntry.ProxyURL) + response.APIKeyEntries[j] = openAICompatibilityAPIKeyWithAuthIndex{ + OpenAICompatibilityAPIKey: apiKeyEntry, + AuthIndex: liveIndexByID[id], + } + } + } + out[i] = response + } + return out +} diff --git a/internal/api/handlers/management/config_basic.go b/internal/api/handlers/management/config_basic.go index f77e91e9ba..a0818aa8ae 100644 --- a/internal/api/handlers/management/config_basic.go +++ b/internal/api/handlers/management/config_basic.go @@ -11,9 +11,9 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" log "github.com/sirupsen/logrus" "gopkg.in/yaml.v3" ) diff --git a/internal/api/handlers/management/config_lists.go b/internal/api/handlers/management/config_lists.go index fbaad956e0..f8ef3203c7 100644 --- a/internal/api/handlers/management/config_lists.go +++ b/internal/api/handlers/management/config_lists.go @@ -6,7 +6,7 @@ import ( "strings" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) // Generic helpers for list[string] @@ -120,7 +120,7 @@ func (h *Handler) DeleteAPIKeys(c *gin.Context) { // gemini-api-key: []GeminiKey func (h *Handler) GetGeminiKeys(c *gin.Context) { - c.JSON(200, gin.H{"gemini-api-key": h.cfg.GeminiKey}) + c.JSON(200, gin.H{"gemini-api-key": 
h.geminiKeysWithAuthIndex()}) } func (h *Handler) PutGeminiKeys(c *gin.Context) { data, err := c.GetRawData() @@ -139,9 +139,11 @@ func (h *Handler) PutGeminiKeys(c *gin.Context) { } arr = obj.Items } + h.mu.Lock() + defer h.mu.Unlock() h.cfg.GeminiKey = append([]config.GeminiKey(nil), arr...) h.cfg.SanitizeGeminiKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) PatchGeminiKey(c *gin.Context) { type geminiKeyPatch struct { @@ -161,6 +163,9 @@ func (h *Handler) PatchGeminiKey(c *gin.Context) { c.JSON(400, gin.H{"error": "invalid body"}) return } + + h.mu.Lock() + defer h.mu.Unlock() targetIndex := -1 if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.GeminiKey) { targetIndex = *body.Index @@ -187,7 +192,7 @@ func (h *Handler) PatchGeminiKey(c *gin.Context) { if trimmed == "" { h.cfg.GeminiKey = append(h.cfg.GeminiKey[:targetIndex], h.cfg.GeminiKey[targetIndex+1:]...) h.cfg.SanitizeGeminiKeys() - h.persist(c) + h.persistLocked(c) return } entry.APIKey = trimmed @@ -209,10 +214,12 @@ func (h *Handler) PatchGeminiKey(c *gin.Context) { } h.cfg.GeminiKey[targetIndex] = entry h.cfg.SanitizeGeminiKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) DeleteGeminiKey(c *gin.Context) { + h.mu.Lock() + defer h.mu.Unlock() if val := strings.TrimSpace(c.Query("api-key")); val != "" { if baseRaw, okBase := c.GetQuery("base-url"); okBase { base := strings.TrimSpace(baseRaw) @@ -226,7 +233,7 @@ func (h *Handler) DeleteGeminiKey(c *gin.Context) { if len(out) != len(h.cfg.GeminiKey) { h.cfg.GeminiKey = out h.cfg.SanitizeGeminiKeys() - h.persist(c) + h.persistLocked(c) } else { c.JSON(404, gin.H{"error": "item not found"}) } @@ -253,7 +260,7 @@ func (h *Handler) DeleteGeminiKey(c *gin.Context) { } h.cfg.GeminiKey = append(h.cfg.GeminiKey[:matchIndex], h.cfg.GeminiKey[matchIndex+1:]...) 
h.cfg.SanitizeGeminiKeys() - h.persist(c) + h.persistLocked(c) return } if idxStr := c.Query("index"); idxStr != "" { @@ -261,7 +268,7 @@ func (h *Handler) DeleteGeminiKey(c *gin.Context) { if _, err := fmt.Sscanf(idxStr, "%d", &idx); err == nil && idx >= 0 && idx < len(h.cfg.GeminiKey) { h.cfg.GeminiKey = append(h.cfg.GeminiKey[:idx], h.cfg.GeminiKey[idx+1:]...) h.cfg.SanitizeGeminiKeys() - h.persist(c) + h.persistLocked(c) return } } @@ -270,7 +277,7 @@ func (h *Handler) DeleteGeminiKey(c *gin.Context) { // claude-api-key: []ClaudeKey func (h *Handler) GetClaudeKeys(c *gin.Context) { - c.JSON(200, gin.H{"claude-api-key": h.cfg.ClaudeKey}) + c.JSON(200, gin.H{"claude-api-key": h.claudeKeysWithAuthIndex()}) } func (h *Handler) PutClaudeKeys(c *gin.Context) { data, err := c.GetRawData() @@ -292,9 +299,11 @@ func (h *Handler) PutClaudeKeys(c *gin.Context) { for i := range arr { normalizeClaudeKey(&arr[i]) } + h.mu.Lock() + defer h.mu.Unlock() h.cfg.ClaudeKey = arr h.cfg.SanitizeClaudeKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) PatchClaudeKey(c *gin.Context) { type claudeKeyPatch struct { @@ -315,6 +324,9 @@ func (h *Handler) PatchClaudeKey(c *gin.Context) { c.JSON(400, gin.H{"error": "invalid body"}) return } + + h.mu.Lock() + defer h.mu.Unlock() targetIndex := -1 if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.ClaudeKey) { targetIndex = *body.Index @@ -358,10 +370,12 @@ func (h *Handler) PatchClaudeKey(c *gin.Context) { normalizeClaudeKey(&entry) h.cfg.ClaudeKey[targetIndex] = entry h.cfg.SanitizeClaudeKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) DeleteClaudeKey(c *gin.Context) { + h.mu.Lock() + defer h.mu.Unlock() if val := strings.TrimSpace(c.Query("api-key")); val != "" { if baseRaw, okBase := c.GetQuery("base-url"); okBase { base := strings.TrimSpace(baseRaw) @@ -374,7 +388,7 @@ func (h *Handler) DeleteClaudeKey(c *gin.Context) { } h.cfg.ClaudeKey = out h.cfg.SanitizeClaudeKeys() - h.persist(c) + 
h.persistLocked(c) return } @@ -396,7 +410,7 @@ func (h *Handler) DeleteClaudeKey(c *gin.Context) { h.cfg.ClaudeKey = append(h.cfg.ClaudeKey[:matchIndex], h.cfg.ClaudeKey[matchIndex+1:]...) } h.cfg.SanitizeClaudeKeys() - h.persist(c) + h.persistLocked(c) return } if idxStr := c.Query("index"); idxStr != "" { @@ -405,7 +419,7 @@ func (h *Handler) DeleteClaudeKey(c *gin.Context) { if err == nil && idx >= 0 && idx < len(h.cfg.ClaudeKey) { h.cfg.ClaudeKey = append(h.cfg.ClaudeKey[:idx], h.cfg.ClaudeKey[idx+1:]...) h.cfg.SanitizeClaudeKeys() - h.persist(c) + h.persistLocked(c) return } } @@ -414,7 +428,7 @@ func (h *Handler) DeleteClaudeKey(c *gin.Context) { // openai-compatibility: []OpenAICompatibility func (h *Handler) GetOpenAICompat(c *gin.Context) { - c.JSON(200, gin.H{"openai-compatibility": normalizedOpenAICompatibilityEntries(h.cfg.OpenAICompatibility)}) + c.JSON(200, gin.H{"openai-compatibility": h.openAICompatibilityWithAuthIndex()}) } func (h *Handler) PutOpenAICompat(c *gin.Context) { data, err := c.GetRawData() @@ -440,14 +454,17 @@ func (h *Handler) PutOpenAICompat(c *gin.Context) { filtered = append(filtered, arr[i]) } } + h.mu.Lock() + defer h.mu.Unlock() h.cfg.OpenAICompatibility = filtered h.cfg.SanitizeOpenAICompatibility() - h.persist(c) + h.persistLocked(c) } func (h *Handler) PatchOpenAICompat(c *gin.Context) { type openAICompatPatch struct { Name *string `json:"name"` Prefix *string `json:"prefix"` + Disabled *bool `json:"disabled"` BaseURL *string `json:"base-url"` APIKeyEntries *[]config.OpenAICompatibilityAPIKey `json:"api-key-entries"` Models *[]config.OpenAICompatibilityModel `json:"models"` @@ -462,6 +479,9 @@ func (h *Handler) PatchOpenAICompat(c *gin.Context) { c.JSON(400, gin.H{"error": "invalid body"}) return } + + h.mu.Lock() + defer h.mu.Unlock() targetIndex := -1 if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.OpenAICompatibility) { targetIndex = *body.Index @@ -487,12 +507,15 @@ func (h *Handler) 
PatchOpenAICompat(c *gin.Context) { if body.Value.Prefix != nil { entry.Prefix = strings.TrimSpace(*body.Value.Prefix) } + if body.Value.Disabled != nil { + entry.Disabled = *body.Value.Disabled + } if body.Value.BaseURL != nil { trimmed := strings.TrimSpace(*body.Value.BaseURL) if trimmed == "" { h.cfg.OpenAICompatibility = append(h.cfg.OpenAICompatibility[:targetIndex], h.cfg.OpenAICompatibility[targetIndex+1:]...) h.cfg.SanitizeOpenAICompatibility() - h.persist(c) + h.persistLocked(c) return } entry.BaseURL = trimmed @@ -509,10 +532,12 @@ func (h *Handler) PatchOpenAICompat(c *gin.Context) { normalizeOpenAICompatibilityEntry(&entry) h.cfg.OpenAICompatibility[targetIndex] = entry h.cfg.SanitizeOpenAICompatibility() - h.persist(c) + h.persistLocked(c) } func (h *Handler) DeleteOpenAICompat(c *gin.Context) { + h.mu.Lock() + defer h.mu.Unlock() if name := c.Query("name"); name != "" { out := make([]config.OpenAICompatibility, 0, len(h.cfg.OpenAICompatibility)) for _, v := range h.cfg.OpenAICompatibility { @@ -522,7 +547,7 @@ func (h *Handler) DeleteOpenAICompat(c *gin.Context) { } h.cfg.OpenAICompatibility = out h.cfg.SanitizeOpenAICompatibility() - h.persist(c) + h.persistLocked(c) return } if idxStr := c.Query("index"); idxStr != "" { @@ -531,7 +556,7 @@ func (h *Handler) DeleteOpenAICompat(c *gin.Context) { if err == nil && idx >= 0 && idx < len(h.cfg.OpenAICompatibility) { h.cfg.OpenAICompatibility = append(h.cfg.OpenAICompatibility[:idx], h.cfg.OpenAICompatibility[idx+1:]...) 
h.cfg.SanitizeOpenAICompatibility() - h.persist(c) + h.persistLocked(c) return } } @@ -540,7 +565,7 @@ func (h *Handler) DeleteOpenAICompat(c *gin.Context) { // vertex-api-key: []VertexCompatKey func (h *Handler) GetVertexCompatKeys(c *gin.Context) { - c.JSON(200, gin.H{"vertex-api-key": h.cfg.VertexCompatAPIKey}) + c.JSON(200, gin.H{"vertex-api-key": h.vertexCompatKeysWithAuthIndex()}) } func (h *Handler) PutVertexCompatKeys(c *gin.Context) { data, err := c.GetRawData() @@ -566,9 +591,11 @@ func (h *Handler) PutVertexCompatKeys(c *gin.Context) { return } } + h.mu.Lock() + defer h.mu.Unlock() h.cfg.VertexCompatAPIKey = append([]config.VertexCompatKey(nil), arr...) h.cfg.SanitizeVertexCompatKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) PatchVertexCompatKey(c *gin.Context) { type vertexCompatPatch struct { @@ -589,6 +616,9 @@ func (h *Handler) PatchVertexCompatKey(c *gin.Context) { c.JSON(400, gin.H{"error": "invalid body"}) return } + + h.mu.Lock() + defer h.mu.Unlock() targetIndex := -1 if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.VertexCompatAPIKey) { targetIndex = *body.Index @@ -615,7 +645,7 @@ func (h *Handler) PatchVertexCompatKey(c *gin.Context) { if trimmed == "" { h.cfg.VertexCompatAPIKey = append(h.cfg.VertexCompatAPIKey[:targetIndex], h.cfg.VertexCompatAPIKey[targetIndex+1:]...) h.cfg.SanitizeVertexCompatKeys() - h.persist(c) + h.persistLocked(c) return } entry.APIKey = trimmed @@ -628,7 +658,7 @@ func (h *Handler) PatchVertexCompatKey(c *gin.Context) { if trimmed == "" { h.cfg.VertexCompatAPIKey = append(h.cfg.VertexCompatAPIKey[:targetIndex], h.cfg.VertexCompatAPIKey[targetIndex+1:]...) 
h.cfg.SanitizeVertexCompatKeys() - h.persist(c) + h.persistLocked(c) return } entry.BaseURL = trimmed @@ -648,10 +678,12 @@ func (h *Handler) PatchVertexCompatKey(c *gin.Context) { normalizeVertexCompatKey(&entry) h.cfg.VertexCompatAPIKey[targetIndex] = entry h.cfg.SanitizeVertexCompatKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) DeleteVertexCompatKey(c *gin.Context) { + h.mu.Lock() + defer h.mu.Unlock() if val := strings.TrimSpace(c.Query("api-key")); val != "" { if baseRaw, okBase := c.GetQuery("base-url"); okBase { base := strings.TrimSpace(baseRaw) @@ -664,7 +696,7 @@ func (h *Handler) DeleteVertexCompatKey(c *gin.Context) { } h.cfg.VertexCompatAPIKey = out h.cfg.SanitizeVertexCompatKeys() - h.persist(c) + h.persistLocked(c) return } @@ -686,7 +718,7 @@ func (h *Handler) DeleteVertexCompatKey(c *gin.Context) { h.cfg.VertexCompatAPIKey = append(h.cfg.VertexCompatAPIKey[:matchIndex], h.cfg.VertexCompatAPIKey[matchIndex+1:]...) } h.cfg.SanitizeVertexCompatKeys() - h.persist(c) + h.persistLocked(c) return } if idxStr := c.Query("index"); idxStr != "" { @@ -695,7 +727,7 @@ func (h *Handler) DeleteVertexCompatKey(c *gin.Context) { if errScan == nil && idx >= 0 && idx < len(h.cfg.VertexCompatAPIKey) { h.cfg.VertexCompatAPIKey = append(h.cfg.VertexCompatAPIKey[:idx], h.cfg.VertexCompatAPIKey[idx+1:]...) 
h.cfg.SanitizeVertexCompatKeys() - h.persist(c) + h.persistLocked(c) return } } @@ -886,7 +918,7 @@ func (h *Handler) DeleteOAuthModelAlias(c *gin.Context) { // codex-api-key: []CodexKey func (h *Handler) GetCodexKeys(c *gin.Context) { - c.JSON(200, gin.H{"codex-api-key": h.cfg.CodexKey}) + c.JSON(200, gin.H{"codex-api-key": h.codexKeysWithAuthIndex()}) } func (h *Handler) PutCodexKeys(c *gin.Context) { data, err := c.GetRawData() @@ -915,9 +947,11 @@ func (h *Handler) PutCodexKeys(c *gin.Context) { } filtered = append(filtered, entry) } + h.mu.Lock() + defer h.mu.Unlock() h.cfg.CodexKey = filtered h.cfg.SanitizeCodexKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) PatchCodexKey(c *gin.Context) { type codexKeyPatch struct { @@ -938,6 +972,9 @@ func (h *Handler) PatchCodexKey(c *gin.Context) { c.JSON(400, gin.H{"error": "invalid body"}) return } + + h.mu.Lock() + defer h.mu.Unlock() targetIndex := -1 if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.CodexKey) { targetIndex = *body.Index @@ -968,7 +1005,7 @@ func (h *Handler) PatchCodexKey(c *gin.Context) { if trimmed == "" { h.cfg.CodexKey = append(h.cfg.CodexKey[:targetIndex], h.cfg.CodexKey[targetIndex+1:]...) 
h.cfg.SanitizeCodexKeys() - h.persist(c) + h.persistLocked(c) return } entry.BaseURL = trimmed @@ -988,10 +1025,12 @@ func (h *Handler) PatchCodexKey(c *gin.Context) { normalizeCodexKey(&entry) h.cfg.CodexKey[targetIndex] = entry h.cfg.SanitizeCodexKeys() - h.persist(c) + h.persistLocked(c) } func (h *Handler) DeleteCodexKey(c *gin.Context) { + h.mu.Lock() + defer h.mu.Unlock() if val := strings.TrimSpace(c.Query("api-key")); val != "" { if baseRaw, okBase := c.GetQuery("base-url"); okBase { base := strings.TrimSpace(baseRaw) @@ -1004,7 +1043,7 @@ func (h *Handler) DeleteCodexKey(c *gin.Context) { } h.cfg.CodexKey = out h.cfg.SanitizeCodexKeys() - h.persist(c) + h.persistLocked(c) return } @@ -1026,7 +1065,7 @@ func (h *Handler) DeleteCodexKey(c *gin.Context) { h.cfg.CodexKey = append(h.cfg.CodexKey[:matchIndex], h.cfg.CodexKey[matchIndex+1:]...) } h.cfg.SanitizeCodexKeys() - h.persist(c) + h.persistLocked(c) return } if idxStr := c.Query("index"); idxStr != "" { @@ -1035,7 +1074,7 @@ func (h *Handler) DeleteCodexKey(c *gin.Context) { if err == nil && idx >= 0 && idx < len(h.cfg.CodexKey) { h.cfg.CodexKey = append(h.cfg.CodexKey[:idx], h.cfg.CodexKey[idx+1:]...) 
h.cfg.SanitizeCodexKeys() - h.persist(c) + h.persistLocked(c) return } } diff --git a/internal/api/handlers/management/config_lists_delete_keys_test.go b/internal/api/handlers/management/config_lists_delete_keys_test.go index aaa43910e7..a548805eda 100644 --- a/internal/api/handlers/management/config_lists_delete_keys_test.go +++ b/internal/api/handlers/management/config_lists_delete_keys_test.go @@ -8,7 +8,7 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func writeTestConfigFile(t *testing.T) string { diff --git a/internal/api/handlers/management/handler.go b/internal/api/handlers/management/handler.go index 45786b9d3e..33ea6b79ef 100644 --- a/internal/api/handlers/management/handler.go +++ b/internal/api/handlers/management/handler.go @@ -13,11 +13,11 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/buildinfo" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/usage" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/buildinfo" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/usage" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" "golang.org/x/crypto/bcrypt" ) @@ -41,6 +41,7 @@ type Handler struct { attemptsMu sync.Mutex failedAttempts map[string]*attemptInfo // keyed by client IP authManager *coreauth.Manager + sdkAuthManager *sdkAuth.Manager usageStats *usage.RequestStatistics tokenStore coreauth.Store localPassword string @@ -60,8 +61,8 @@ func NewHandler(cfg *config.Config, configFilePath string, manager *coreauth.Man configFilePath: configFilePath, 
failedAttempts: make(map[string]*attemptInfo), authManager: manager, - usageStats: usage.GetRequestStatistics(), tokenStore: sdkAuth.GetTokenStore(), + usageStats: usage.GetRequestStatistics(), allowRemoteOverride: envSecret != "", envSecret: envSecret, } @@ -105,13 +106,24 @@ func NewHandlerWithoutConfigFilePath(cfg *config.Config, manager *coreauth.Manag } // SetConfig updates the in-memory config reference when the server hot-reloads. -func (h *Handler) SetConfig(cfg *config.Config) { h.cfg = cfg } +func (h *Handler) SetConfig(cfg *config.Config) { + if h == nil { + return + } + h.mu.Lock() + h.cfg = cfg + h.mu.Unlock() +} // SetAuthManager updates the auth manager reference used by management endpoints. -func (h *Handler) SetAuthManager(manager *coreauth.Manager) { h.authManager = manager } - -// SetUsageStatistics allows replacing the usage statistics reference. -func (h *Handler) SetUsageStatistics(stats *usage.RequestStatistics) { h.usageStats = stats } +func (h *Handler) SetAuthManager(manager *coreauth.Manager) { + if h == nil { + return + } + h.mu.Lock() + h.authManager = manager + h.mu.Unlock() +} // SetLocalPassword configures the runtime-local password accepted for localhost requests. func (h *Handler) SetLocalPassword(password string) { h.localPassword = password } @@ -138,9 +150,6 @@ func (h *Handler) SetPostAuthHook(hook coreauth.PostAuthHook) { // All requests (local and remote) require a valid management key. // Additionally, remote access requires allow-remote-management=true. 
func (h *Handler) Middleware() gin.HandlerFunc { - const maxFailures = 5 - const banDuration = 30 * time.Minute - return func(c *gin.Context) { c.Header("X-CPA-VERSION", buildinfo.Version) c.Header("X-CPA-COMMIT", buildinfo.Commit) @@ -148,64 +157,6 @@ func (h *Handler) Middleware() gin.HandlerFunc { clientIP := c.ClientIP() localClient := clientIP == "127.0.0.1" || clientIP == "::1" - cfg := h.cfg - var ( - allowRemote bool - secretHash string - ) - if cfg != nil { - allowRemote = cfg.RemoteManagement.AllowRemote - secretHash = cfg.RemoteManagement.SecretKey - } - if h.allowRemoteOverride { - allowRemote = true - } - envSecret := h.envSecret - - fail := func() {} - if !localClient { - h.attemptsMu.Lock() - ai := h.failedAttempts[clientIP] - if ai != nil { - if !ai.blockedUntil.IsZero() { - if time.Now().Before(ai.blockedUntil) { - remaining := time.Until(ai.blockedUntil).Round(time.Second) - h.attemptsMu.Unlock() - c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": fmt.Sprintf("IP banned due to too many failed attempts. Try again in %s", remaining)}) - return - } - // Ban expired, reset state - ai.blockedUntil = time.Time{} - ai.count = 0 - } - } - h.attemptsMu.Unlock() - - if !allowRemote { - c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "remote management disabled"}) - return - } - - fail = func() { - h.attemptsMu.Lock() - aip := h.failedAttempts[clientIP] - if aip == nil { - aip = &attemptInfo{} - h.failedAttempts[clientIP] = aip - } - aip.count++ - aip.lastActivity = time.Now() - if aip.count >= maxFailures { - aip.blockedUntil = time.Now().Add(banDuration) - aip.count = 0 - } - h.attemptsMu.Unlock() - } - } - if secretHash == "" && envSecret == "" { - c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "remote management key not set"}) - return - } // Accept either Authorization: Bearer or X-Management-Key var provided string @@ -221,61 +172,126 @@ func (h *Handler) Middleware() gin.HandlerFunc { provided = c.GetHeader("X-Management-Key") } - if provided == "" { - if !localClient { - fail() - } - c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "missing management key"}) + allowed, statusCode, errMsg := h.AuthenticateManagementKey(clientIP, localClient, provided) + if !allowed { + c.AbortWithStatusJSON(statusCode, gin.H{"error": errMsg}) return } + c.Next() + } } - if localClient { - if lp := h.localPassword; lp != "" { - if subtle.ConstantTimeCompare([]byte(provided), []byte(lp)) == 1 { - c.Next() - return - } - } +// AuthenticateManagementKey verifies the provided management key for the given client. +// It mirrors the behaviour of Middleware() so non-HTTP callers can reuse the same logic.
+func (h *Handler) AuthenticateManagementKey(clientIP string, localClient bool, provided string) (bool, int, string) { + const maxFailures = 5 + const banDuration = 30 * time.Minute + + if h == nil { + return false, http.StatusForbidden, "remote management disabled" + } + + cfg := h.cfg + var ( + allowRemote bool + secretHash string + ) + if cfg != nil { + allowRemote = cfg.RemoteManagement.AllowRemote + secretHash = cfg.RemoteManagement.SecretKey + } + if h.allowRemoteOverride { + allowRemote = true + } + envSecret := h.envSecret + + now := time.Now() + h.attemptsMu.Lock() + ai := h.failedAttempts[clientIP] + if ai != nil && !ai.blockedUntil.IsZero() { + if now.Before(ai.blockedUntil) { + remaining := ai.blockedUntil.Sub(now).Round(time.Second) + h.attemptsMu.Unlock() + return false, http.StatusForbidden, fmt.Sprintf("IP banned due to too many failed attempts. Try again in %s", remaining) } + // Ban expired, reset state + ai.blockedUntil = time.Time{} + ai.count = 0 + } + h.attemptsMu.Unlock() - if envSecret != "" && subtle.ConstantTimeCompare([]byte(provided), []byte(envSecret)) == 1 { - if !localClient { - h.attemptsMu.Lock() - if ai := h.failedAttempts[clientIP]; ai != nil { - ai.count = 0 - ai.blockedUntil = time.Time{} - } - h.attemptsMu.Unlock() - } - c.Next() - return + if !localClient && !allowRemote { + return false, http.StatusForbidden, "remote management disabled" + } + + fail := func() { + h.attemptsMu.Lock() + aip := h.failedAttempts[clientIP] + if aip == nil { + aip = &attemptInfo{} + h.failedAttempts[clientIP] = aip } + aip.count++ + aip.lastActivity = time.Now() + if aip.count >= maxFailures { + aip.blockedUntil = time.Now().Add(banDuration) + aip.count = 0 + } + h.attemptsMu.Unlock() + } - if secretHash == "" || bcrypt.CompareHashAndPassword([]byte(secretHash), []byte(provided)) != nil { - if !localClient { - fail() - } - c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "invalid management key"}) - return + reset := func() { + 
h.attemptsMu.Lock() + if ai := h.failedAttempts[clientIP]; ai != nil { + ai.count = 0 + ai.blockedUntil = time.Time{} } + h.attemptsMu.Unlock() + } + + if secretHash == "" && envSecret == "" { + return false, http.StatusForbidden, "remote management key not set" + } + + if provided == "" { + fail() + return false, http.StatusUnauthorized, "missing management key" + } - if !localClient { - h.attemptsMu.Lock() - if ai := h.failedAttempts[clientIP]; ai != nil { - ai.count = 0 - ai.blockedUntil = time.Time{} + if localClient { + if lp := h.localPassword; lp != "" { + if subtle.ConstantTimeCompare([]byte(provided), []byte(lp)) == 1 { + reset() + return true, 0, "" } - h.attemptsMu.Unlock() } + } - c.Next() + if envSecret != "" && subtle.ConstantTimeCompare([]byte(provided), []byte(envSecret)) == 1 { + reset() + return true, 0, "" } + + if secretHash == "" || bcrypt.CompareHashAndPassword([]byte(secretHash), []byte(provided)) != nil { + fail() + return false, http.StatusUnauthorized, "invalid management key" + } + + reset() + + return true, 0, "" } // persist saves the current in-memory config to disk. func (h *Handler) persist(c *gin.Context) bool { h.mu.Lock() defer h.mu.Unlock() + return h.persistLocked(c) +} + +// persistLocked saves the current in-memory config to disk. +// It expects the caller to hold h.mu. +func (h *Handler) persistLocked(c *gin.Context) bool { // Preserve comments when writing if err := config.SaveConfigPreserveComments(h.configFilePath, h.cfg); err != nil { c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to save config: %v", err)}) @@ -321,3 +337,9 @@ func (h *Handler) updateStringField(c *gin.Context, set func(string)) { set(*body.Value) h.persist(c) } + +// SetSDKAuthManager sets the SDK auth manager for OAuth provider operations. +func (h *Handler) SetSDKAuthManager(manager *sdkAuth.Manager) { h.sdkAuthManager = manager } + +// SetUsageStats sets the usage statistics tracker for the management handler. 
+func (h *Handler) SetUsageStats(stats *usage.RequestStatistics) { h.usageStats = stats } diff --git a/internal/api/handlers/management/handler_test.go b/internal/api/handlers/management/handler_test.go new file mode 100644 index 0000000000..a77dc36f35 --- /dev/null +++ b/internal/api/handlers/management/handler_test.go @@ -0,0 +1,38 @@ +package management + +import ( + "net/http" + "strings" + "testing" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" +) + +func TestAuthenticateManagementKey_LocalhostIPBan_BlocksCorrectKeyDuringBan(t *testing.T) { + h := &Handler{ + cfg: &config.Config{}, + failedAttempts: make(map[string]*attemptInfo), + envSecret: "test-secret", + } + + for i := 0; i < 5; i++ { + allowed, statusCode, errMsg := h.AuthenticateManagementKey("127.0.0.1", true, "wrong-secret") + if allowed { + t.Fatalf("expected auth to be denied at attempt %d", i+1) + } + if statusCode != http.StatusUnauthorized || errMsg != "invalid management key" { + t.Fatalf("unexpected auth failure at attempt %d: status=%d msg=%q", i+1, statusCode, errMsg) + } + } + + allowed, statusCode, errMsg := h.AuthenticateManagementKey("127.0.0.1", true, "test-secret") + if allowed { + t.Fatalf("expected correct key to be denied while banned") + } + if statusCode != http.StatusForbidden { + t.Fatalf("expected forbidden status while banned, got %d", statusCode) + } + if !strings.HasPrefix(errMsg, "IP banned due to too many failed attempts. Try again in") { + t.Fatalf("unexpected banned message: %q", errMsg) + } +} diff --git a/internal/api/handlers/management/kiro_quota.go b/internal/api/handlers/management/kiro_quota.go new file mode 100644 index 0000000000..ee01a5c6b8 --- /dev/null +++ b/internal/api/handlers/management/kiro_quota.go @@ -0,0 +1,174 @@ +package management + +import ( + "context" + "net/http" + "strings" + "time" + + "github.com/gin-gonic/gin" + log "github.com/sirupsen/logrus" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kiro" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +// GetKiroQuota fetches Kiro (AWS CodeWhisperer) usage quota information. +// +// Endpoint: +// +// GET /v0/management/kiro-quota +// +// Query Parameters (optional): +// - auth_index: The credential "auth_index" from GET /v0/management/auth-files. +// If omitted, uses the first available Kiro credential. +// +// Response: +// +// Returns the UsageQuotaResponse with usage breakdown and subscription info.
+// +// Example: +// +// curl -sS -X GET "http://127.0.0.1:8317/v0/management/kiro-quota?auth_index=<auth_index>" \ +// -H "Authorization: Bearer <management-key>" +func (h *Handler) GetKiroQuota(c *gin.Context) { + authIndex := strings.TrimSpace(c.Query("auth_index")) + if authIndex == "" { + authIndex = strings.TrimSpace(c.Query("authIndex")) + } + if authIndex == "" { + authIndex = strings.TrimSpace(c.Query("AuthIndex")) + } + + auth := h.findKiroAuth(authIndex) + if auth == nil { + c.JSON(http.StatusBadRequest, gin.H{"error": "no kiro credential found"}) + return + } + + // Extract token data from auth metadata + tokenData := extractKiroTokenData(auth) + if tokenData == nil || tokenData.AccessToken == "" { + c.JSON(http.StatusBadRequest, gin.H{"error": "kiro access token not available (token may need refresh)"}) + return + } + + // Create usage checker with proxy-aware HTTP client + checker := kiro.NewUsageCheckerWithClient( + util.SetProxy(&h.cfg.SDKConfig, &http.Client{Timeout: 30 * time.Second}), + ) + + ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second) + defer cancel() + + usage, err := checker.CheckUsage(ctx, tokenData) + if err != nil { + log.WithError(err).Debug("kiro quota request failed") + c.JSON(http.StatusBadGateway, gin.H{"error": "kiro quota request failed: " + err.Error()}) + return + } + + // Build enriched response + response := gin.H{ + "usage": usage, + "quota_status": buildKiroQuotaStatus(usage), + "auth_index": auth.Index, + "auth_name": auth.FileName, + } + + c.JSON(http.StatusOK, response) +} + +// findKiroAuth locates a Kiro credential by auth_index or returns the first available one.
+func (h *Handler) findKiroAuth(authIndex string) *coreauth.Auth { + if h == nil || h.authManager == nil { + return nil + } + + auths := h.authManager.List() + var firstKiro *coreauth.Auth + + for _, auth := range auths { + if auth == nil { + continue + } + provider := strings.ToLower(strings.TrimSpace(auth.Provider)) + if provider != "kiro" { + continue + } + if auth.Disabled { + continue + } + if firstKiro == nil { + firstKiro = auth + } + if authIndex != "" { + auth.EnsureIndex() + if auth.Index == authIndex { + return auth + } + } + } + + if authIndex == "" { + return firstKiro + } + return nil +} + +// extractKiroTokenData extracts KiroTokenData from a coreauth.Auth's Metadata. +func extractKiroTokenData(auth *coreauth.Auth) *kiro.KiroTokenData { + if auth == nil || auth.Metadata == nil { + return nil + } + + accessToken, _ := auth.Metadata["access_token"].(string) + refreshToken, _ := auth.Metadata["refresh_token"].(string) + profileArn, _ := auth.Metadata["profile_arn"].(string) + clientID, _ := auth.Metadata["client_id"].(string) + clientSecret, _ := auth.Metadata["client_secret"].(string) + region, _ := auth.Metadata["region"].(string) + startURL, _ := auth.Metadata["start_url"].(string) + + if accessToken == "" { + return nil + } + + return &kiro.KiroTokenData{ + AccessToken: accessToken, + RefreshToken: refreshToken, + ProfileArn: profileArn, + ClientID: clientID, + ClientSecret: clientSecret, + Region: region, + StartURL: startURL, + } +} + +// buildKiroQuotaStatus builds a summary status from the usage response. 
+func buildKiroQuotaStatus(usage *kiro.UsageQuotaResponse) gin.H { + if usage == nil { + return gin.H{"exhausted": true, "remaining": 0} + } + + remaining := kiro.GetRemainingQuota(usage) + exhausted := kiro.IsQuotaExhausted(usage) + percentage := kiro.GetUsagePercentage(usage) + + status := gin.H{ + "exhausted": exhausted, + "remaining": remaining, + "usage_percentage": percentage, + } + + if usage.NextDateReset > 0 { + status["next_reset"] = time.Unix(int64(usage.NextDateReset/1000), 0) + } + + if usage.SubscriptionInfo != nil { + status["subscription"] = usage.SubscriptionInfo + } + + return status +} diff --git a/internal/api/handlers/management/kiro_quota_cache.go b/internal/api/handlers/management/kiro_quota_cache.go new file mode 100644 index 0000000000..5a9cbdc1c0 --- /dev/null +++ b/internal/api/handlers/management/kiro_quota_cache.go @@ -0,0 +1,141 @@ +package management + +import ( + "context" + "net/http" + "strings" + "sync" + "time" + + "github.com/gin-gonic/gin" + log "github.com/sirupsen/logrus" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kiro" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +// kiroQuotaCache stores cached quota info for Kiro auth entries. +var ( + kiroQuotaMu sync.RWMutex + kiroQuotaStore = make(map[string]gin.H) // keyed by auth ID +) + +// getKiroQuotaCached returns cached quota info for a Kiro auth entry. +func (h *Handler) getKiroQuotaCached(auth *coreauth.Auth) gin.H { + if auth == nil { + return nil + } + provider := strings.ToLower(strings.TrimSpace(auth.Provider)) + if provider != "kiro" { + return nil + } + + kiroQuotaMu.RLock() + result, ok := kiroQuotaStore[auth.ID] + kiroQuotaMu.RUnlock() + + if ok { + return result + } + + // If not cached, try to fetch synchronously (first time only) + return h.fetchAndCacheKiroQuota(auth) +} + +// fetchAndCacheKiroQuota fetches quota for a single Kiro auth and caches it. 
+func (h *Handler) fetchAndCacheKiroQuota(auth *coreauth.Auth) gin.H { + if auth == nil || auth.Metadata == nil { + return nil + } + + tokenData := extractKiroTokenData(auth) + if tokenData == nil || tokenData.AccessToken == "" { + return nil + } + + checker := kiro.NewUsageCheckerWithClient( + util.SetProxy(&h.cfg.SDKConfig, &http.Client{Timeout: 15 * time.Second}), + ) + + ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second) + defer cancel() + + usage, err := checker.CheckUsage(ctx, tokenData) + if err != nil { + log.WithError(err).Debugf("kiro quota fetch failed for %s", auth.ID) + return nil + } + + result := buildKiroQuotaEntry(usage) + + kiroQuotaMu.Lock() + kiroQuotaStore[auth.ID] = result + kiroQuotaMu.Unlock() + + return result +} + +// buildKiroQuotaEntry builds the quota info map from a usage response. +func buildKiroQuotaEntry(usage *kiro.UsageQuotaResponse) gin.H { + if usage == nil || len(usage.UsageBreakdownList) == 0 { + return nil + } + + bd := usage.UsageBreakdownList[0] + result := gin.H{ + "resource_type": bd.ResourceType, + "used": bd.CurrentUsageWithPrecision, + "limit": bd.UsageLimitWithPrecision, + "remaining": bd.UsageLimitWithPrecision - bd.CurrentUsageWithPrecision, + "usage_percentage": kiro.GetUsagePercentage(usage), + "exhausted": kiro.IsQuotaExhausted(usage), + } + + if usage.SubscriptionInfo != nil { + result["plan"] = usage.SubscriptionInfo.SubscriptionTitle + } + + if usage.NextDateReset > 0 { + result["next_reset"] = time.Unix(int64(usage.NextDateReset/1000), 0) + } + + return result +} + +// StartKiroQuotaRefresher starts a background goroutine that periodically +// refreshes Kiro quota info for all active Kiro auth entries. 
+func (h *Handler) StartKiroQuotaRefresher() { + go func() { + // Initial delay to let the service start up + time.Sleep(10 * time.Second) + h.refreshAllKiroQuotas() + + ticker := time.NewTicker(5 * time.Minute) + defer ticker.Stop() + for range ticker.C { + h.refreshAllKiroQuotas() + } + }() +} + +// refreshAllKiroQuotas fetches quota for all active Kiro auth entries. +func (h *Handler) refreshAllKiroQuotas() { + if h == nil || h.authManager == nil { + return + } + + auths := h.authManager.List() + for _, auth := range auths { + if auth == nil || auth.Disabled { + continue + } + provider := strings.ToLower(strings.TrimSpace(auth.Provider)) + if provider != "kiro" { + continue + } + h.fetchAndCacheKiroQuota(auth) + // Small delay between requests to avoid rate limiting + time.Sleep(2 * time.Second) + } +} diff --git a/internal/api/handlers/management/logs.go b/internal/api/handlers/management/logs.go index b64cd61938..ca6d7eda81 100644 --- a/internal/api/handlers/management/logs.go +++ b/internal/api/handlers/management/logs.go @@ -13,7 +13,7 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" ) const ( diff --git a/internal/api/handlers/management/model_definitions.go b/internal/api/handlers/management/model_definitions.go index 85ff314bf4..0d1b8af437 100644 --- a/internal/api/handlers/management/model_definitions.go +++ b/internal/api/handlers/management/model_definitions.go @@ -5,7 +5,7 @@ import ( "strings" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" ) // GetStaticModelDefinitions returns static model metadata for a given channel. 
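The new `kiro_quota_cache.go` above combines a `sync.RWMutex`-guarded map, a synchronous fetch on cache miss, and a background ticker that periodically overwrites entries. A minimal standalone sketch of that read-through pattern follows; the `quotaCache` type, its integer values, and `newQuotaCache` are illustrative names for this sketch, not part of the patch:

```go
package main

import (
	"fmt"
	"sync"
)

// quotaCache mirrors the read-through pattern used by the Kiro quota cache:
// readers take an RLock and check the map first; on a miss the fetch
// callback runs synchronously and its result is stored for later readers.
type quotaCache struct {
	mu    sync.RWMutex
	store map[string]int
	fetch func(id string) int
}

func newQuotaCache(fetch func(string) int) *quotaCache {
	return &quotaCache{store: make(map[string]int), fetch: fetch}
}

// Get returns the cached value for id, fetching it once on a miss.
func (c *quotaCache) Get(id string) int {
	c.mu.RLock()
	v, ok := c.store[id]
	c.mu.RUnlock()
	if ok {
		return v
	}
	v = c.fetch(id)
	c.mu.Lock()
	c.store[id] = v
	c.mu.Unlock()
	return v
}

func main() {
	calls := 0
	c := newQuotaCache(func(id string) int {
		calls++ // counts real fetches, not cache hits
		return len(id) * 10
	})
	fmt.Println(c.Get("acct-1")) // first call runs the fetch
	fmt.Println(c.Get("acct-1")) // second call is served from the map
	fmt.Println(calls)           // only one real fetch happened
}
```

An `RWMutex` fits this workload: management endpoints read quota entries frequently, while writes happen only on a cache miss or during the five-minute background refresh.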
diff --git a/internal/api/handlers/management/oauth_providers.go b/internal/api/handlers/management/oauth_providers.go new file mode 100644 index 0000000000..c0e36f4f8c --- /dev/null +++ b/internal/api/handlers/management/oauth_providers.go @@ -0,0 +1,16 @@ +package management + +import ( + "net/http" + + "github.com/gin-gonic/gin" +) + +func (h *Handler) GetOAuthProviders(c *gin.Context) { + if h == nil || h.sdkAuthManager == nil { + c.JSON(http.StatusOK, gin.H{"providers": []struct{}{}}) + return + } + providers := h.sdkAuthManager.ListProviders() + c.JSON(http.StatusOK, gin.H{"providers": providers}) +} diff --git a/internal/api/handlers/management/test_store_test.go b/internal/api/handlers/management/test_store_test.go index cf7dbaf7d0..2eaacd904f 100644 --- a/internal/api/handlers/management/test_store_test.go +++ b/internal/api/handlers/management/test_store_test.go @@ -4,7 +4,7 @@ import ( "context" "sync" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) type memoryAuthStore struct { diff --git a/internal/api/handlers/management/usage.go b/internal/api/handlers/management/usage.go index 5f79408963..dd12a59666 100644 --- a/internal/api/handlers/management/usage.go +++ b/internal/api/handlers/management/usage.go @@ -2,16 +2,63 @@ package management import ( "encoding/json" + "errors" "net/http" "time" + "strconv" + "strings" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/usage" + "github.com/router-for-me/CLIProxyAPI/v7/internal/usage" + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" ) +type usageQueueRecord []byte + +func (r usageQueueRecord) MarshalJSON() ([]byte, error) { + if json.Valid(r) { + return append([]byte(nil), r...), nil + } + return json.Marshal(string(r)) +} + +// GetUsageQueue pops queued usage records from the usage queue. 
+func (h *Handler) GetUsageQueue(c *gin.Context) { + if h == nil { + c.JSON(http.StatusInternalServerError, gin.H{"error": "handler unavailable"}) + return + } + + count, errCount := parseUsageQueueCount(c.Query("count")) + if errCount != nil { + c.JSON(http.StatusBadRequest, gin.H{"error": errCount.Error()}) + return + } + + items := redisqueue.PopOldest(count) + records := make([]usageQueueRecord, 0, len(items)) + for _, item := range items { + records = append(records, usageQueueRecord(append([]byte(nil), item...))) + } + + c.JSON(http.StatusOK, records) +} + +func parseUsageQueueCount(value string) (int, error) { + value = strings.TrimSpace(value) + if value == "" { + return 1, nil + } + count, errCount := strconv.Atoi(value) + if errCount != nil || count <= 0 { + return 0, errors.New("count must be a positive integer") + } + return count, nil +} + type usageExportPayload struct { - Version int `json:"version"` - ExportedAt time.Time `json:"exported_at"` + Version int `json:"version"` + ExportedAt time.Time `json:"exported_at"` Usage usage.StatisticsSnapshot `json:"usage"` } @@ -77,3 +124,4 @@ func (h *Handler) ImportUsageStatistics(c *gin.Context) { "failed_requests": snapshot.FailureCount, }) } + diff --git a/internal/api/handlers/management/usage_test.go b/internal/api/handlers/management/usage_test.go new file mode 100644 index 0000000000..bdb8aa2e29 --- /dev/null +++ b/internal/api/handlers/management/usage_test.go @@ -0,0 +1,98 @@ +package management + +import ( + "encoding/json" + "net/http" + "net/http/httptest" + "testing" + + "github.com/gin-gonic/gin" + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" +) + +func TestGetUsageQueuePopsRequestedRecords(t *testing.T) { + gin.SetMode(gin.TestMode) + withManagementUsageQueue(t, func() { + redisqueue.Enqueue([]byte(`{"id":1}`)) + redisqueue.Enqueue([]byte(`{"id":2}`)) + redisqueue.Enqueue([]byte(`{"id":3}`)) + + rec := httptest.NewRecorder() + ginCtx, _ := gin.CreateTestContext(rec) + 
ginCtx.Request = httptest.NewRequest(http.MethodGet, "/v0/management/usage-queue?count=2", nil) + + h := &Handler{} + h.GetUsageQueue(ginCtx) + + if rec.Code != http.StatusOK { + t.Fatalf("status = %d, want %d body=%s", rec.Code, http.StatusOK, rec.Body.String()) + } + + var payload []json.RawMessage + if errUnmarshal := json.Unmarshal(rec.Body.Bytes(), &payload); errUnmarshal != nil { + t.Fatalf("unmarshal response: %v", errUnmarshal) + } + if len(payload) != 2 { + t.Fatalf("response records = %d, want 2", len(payload)) + } + requireRecordID(t, payload[0], 1) + requireRecordID(t, payload[1], 2) + + remaining := redisqueue.PopOldest(10) + if len(remaining) != 1 || string(remaining[0]) != `{"id":3}` { + t.Fatalf("remaining queue = %q, want third item only", remaining) + } + }) +} + +func TestGetUsageQueueInvalidCountDoesNotPop(t *testing.T) { + gin.SetMode(gin.TestMode) + withManagementUsageQueue(t, func() { + redisqueue.Enqueue([]byte(`{"id":1}`)) + + rec := httptest.NewRecorder() + ginCtx, _ := gin.CreateTestContext(rec) + ginCtx.Request = httptest.NewRequest(http.MethodGet, "/v0/management/usage-queue?count=0", nil) + + h := &Handler{} + h.GetUsageQueue(ginCtx) + + if rec.Code != http.StatusBadRequest { + t.Fatalf("status = %d, want %d body=%s", rec.Code, http.StatusBadRequest, rec.Body.String()) + } + + remaining := redisqueue.PopOldest(10) + if len(remaining) != 1 || string(remaining[0]) != `{"id":1}` { + t.Fatalf("remaining queue = %q, want original item", remaining) + } + }) +} + +func withManagementUsageQueue(t *testing.T, fn func()) { + t.Helper() + + prevQueueEnabled := redisqueue.Enabled() + redisqueue.SetEnabled(false) + redisqueue.SetEnabled(true) + + defer func() { + redisqueue.SetEnabled(false) + redisqueue.SetEnabled(prevQueueEnabled) + }() + + fn() +} + +func requireRecordID(t *testing.T, raw json.RawMessage, want int) { + t.Helper() + + var payload struct { + ID int `json:"id"` + } + if errUnmarshal := json.Unmarshal(raw, &payload); errUnmarshal != 
nil { + t.Fatalf("unmarshal record: %v", errUnmarshal) + } + if payload.ID != want { + t.Fatalf("record id = %d, want %d", payload.ID, want) + } +} diff --git a/internal/api/handlers/management/vertex_import.go b/internal/api/handlers/management/vertex_import.go index bad066a270..bb064b9fb9 100644 --- a/internal/api/handlers/management/vertex_import.go +++ b/internal/api/handlers/management/vertex_import.go @@ -9,8 +9,8 @@ import ( "strings" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/vertex" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/vertex" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // ImportVertexCredential handles uploading a Vertex service account JSON and saving it as an auth record. diff --git a/internal/api/middleware/request_logging.go b/internal/api/middleware/request_logging.go index b57dd8aa42..7a10fad8a1 100644 --- a/internal/api/middleware/request_logging.go +++ b/internal/api/middleware/request_logging.go @@ -11,8 +11,8 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" ) const maxErrorOnlyCapturedRequestBodyBytes int64 = 1 << 20 // 1 MiB diff --git a/internal/api/middleware/response_writer.go b/internal/api/middleware/response_writer.go index 7f4892674a..5a89ed0fdf 100644 --- a/internal/api/middleware/response_writer.go +++ b/internal/api/middleware/response_writer.go @@ -10,8 +10,8 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/logging" ) const requestBodyOverrideContextKey = "REQUEST_BODY_OVERRIDE" diff --git a/internal/api/middleware/response_writer_test.go b/internal/api/middleware/response_writer_test.go index f5c21deb8a..fa0bd54854 100644 --- a/internal/api/middleware/response_writer_test.go +++ b/internal/api/middleware/response_writer_test.go @@ -7,8 +7,8 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" ) func TestExtractRequestBodyPrefersOverride(t *testing.T) { diff --git a/internal/api/modules/amp/amp.go b/internal/api/modules/amp/amp.go index a12733e2a1..18c8ac1ef0 100644 --- a/internal/api/modules/amp/amp.go +++ b/internal/api/modules/amp/amp.go @@ -9,9 +9,9 @@ import ( "sync" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api/modules" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" + "github.com/router-for-me/CLIProxyAPI/v7/internal/api/modules" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" log "github.com/sirupsen/logrus" ) diff --git a/internal/api/modules/amp/amp_test.go b/internal/api/modules/amp/amp_test.go index 430c4b62a7..5ca01754a2 100644 --- a/internal/api/modules/amp/amp_test.go +++ b/internal/api/modules/amp/amp_test.go @@ -9,10 +9,10 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api/modules" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/api/modules" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" ) func TestAmpModule_Name(t *testing.T) { diff --git a/internal/api/modules/amp/fallback_handlers.go b/internal/api/modules/amp/fallback_handlers.go index e4e0f8a650..06e0a035d0 100644 --- a/internal/api/modules/amp/fallback_handlers.go +++ b/internal/api/modules/amp/fallback_handlers.go @@ -8,8 +8,8 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" diff --git a/internal/api/modules/amp/fallback_handlers_test.go b/internal/api/modules/amp/fallback_handlers_test.go index a687fd116b..1aacaae21f 100644 --- a/internal/api/modules/amp/fallback_handlers_test.go +++ b/internal/api/modules/amp/fallback_handlers_test.go @@ -9,8 +9,8 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" ) func TestFallbackHandler_ModelMapping_PreservesThinkingSuffixAndRewritesResponse(t *testing.T) { diff --git a/internal/api/modules/amp/model_mapping.go b/internal/api/modules/amp/model_mapping.go index 4159a2b576..2b68866edf 100644 --- a/internal/api/modules/amp/model_mapping.go +++ b/internal/api/modules/amp/model_mapping.go @@ -7,9 +7,9 @@ import ( "strings" "sync" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - 
"github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" ) diff --git a/internal/api/modules/amp/model_mapping_test.go b/internal/api/modules/amp/model_mapping_test.go index 53165d22c3..dcfb07ee5e 100644 --- a/internal/api/modules/amp/model_mapping_test.go +++ b/internal/api/modules/amp/model_mapping_test.go @@ -3,8 +3,8 @@ package amp import ( "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" ) func TestNewModelMapper(t *testing.T) { diff --git a/internal/api/modules/amp/proxy.go b/internal/api/modules/amp/proxy.go index c8010854f3..54f4b734ba 100644 --- a/internal/api/modules/amp/proxy.go +++ b/internal/api/modules/amp/proxy.go @@ -14,7 +14,7 @@ import ( "strings" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" log "github.com/sirupsen/logrus" ) diff --git a/internal/api/modules/amp/proxy_test.go b/internal/api/modules/amp/proxy_test.go index 49dba956c0..2852efde3a 100644 --- a/internal/api/modules/amp/proxy_test.go +++ b/internal/api/modules/amp/proxy_test.go @@ -11,7 +11,7 @@ import ( "strings" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) // Helper: compress data with gzip diff --git a/internal/api/modules/amp/response_rewriter.go b/internal/api/modules/amp/response_rewriter.go index 707fe576b4..895c494e74 100644 --- a/internal/api/modules/amp/response_rewriter.go +++ b/internal/api/modules/amp/response_rewriter.go @@ -123,6 +123,52 @@ func (rw *ResponseRewriter) Flush() { 
var modelFieldPaths = []string{"message.model", "model", "modelVersion", "response.model", "response.modelVersion"} +// ampCanonicalToolNames maps tool names to the exact casing expected by the +// Amp mode tool whitelist (case-sensitive match). +var ampCanonicalToolNames = map[string]string{ + "bash": "Bash", + "read": "Read", + "grep": "Grep", + "glob": "glob", + "task": "Task", + "check": "Check", +} + +// normalizeAmpToolNames fixes tool_use block names to match Amp's canonical casing. +// Some upstream models return lowercase tool names (e.g. "bash" instead of "Bash") +// which causes Amp's case-sensitive mode whitelist to reject them. +func normalizeAmpToolNames(data []byte) []byte { + // Non-streaming: content[].name in tool_use blocks + for index, block := range gjson.GetBytes(data, "content").Array() { + if block.Get("type").String() != "tool_use" { + continue + } + name := block.Get("name").String() + if canonical, ok := ampCanonicalToolNames[strings.ToLower(name)]; ok && name != canonical { + path := fmt.Sprintf("content.%d.name", index) + var err error + data, err = sjson.SetBytes(data, path, canonical) + if err != nil { + log.Warnf("Amp ResponseRewriter: failed to normalize tool name %q to %q: %v", name, canonical, err) + } + } + } + + // Streaming: content_block.name in content_block_start events + if gjson.GetBytes(data, "content_block.type").String() == "tool_use" { + name := gjson.GetBytes(data, "content_block.name").String() + if canonical, ok := ampCanonicalToolNames[strings.ToLower(name)]; ok && name != canonical { + var err error + data, err = sjson.SetBytes(data, "content_block.name", canonical) + if err != nil { + log.Warnf("Amp ResponseRewriter: failed to normalize streaming tool name %q to %q: %v", name, canonical, err) + } + } + } + + return data +} + // ensureAmpSignature injects empty signature fields into tool_use/thinking blocks // in API responses so that the Amp TUI does not crash on P.signature.length. 
func ensureAmpSignature(data []byte) []byte { @@ -179,6 +225,7 @@ func (rw *ResponseRewriter) suppressAmpThinking(data []byte) []byte { func (rw *ResponseRewriter) rewriteModelInResponse(data []byte) []byte { data = ensureAmpSignature(data) + data = normalizeAmpToolNames(data) data = rw.suppressAmpThinking(data) if len(data) == 0 { return data @@ -278,6 +325,9 @@ func (rw *ResponseRewriter) rewriteStreamEvent(data []byte) []byte { // Inject empty signature where needed data = ensureAmpSignature(data) + // Normalize tool names to canonical casing + data = normalizeAmpToolNames(data) + // Rewrite model name if rw.originalModel != "" { for _, path := range modelFieldPaths { diff --git a/internal/api/modules/amp/response_rewriter_test.go b/internal/api/modules/amp/response_rewriter_test.go index ac95dfc64f..a3a350cb23 100644 --- a/internal/api/modules/amp/response_rewriter_test.go +++ b/internal/api/modules/amp/response_rewriter_test.go @@ -175,6 +175,57 @@ func TestSanitizeAmpRequestBody_MixedInvalidThinkingAndToolUseSignature(t *testi } } +func TestNormalizeAmpToolNames_NonStreaming(t *testing.T) { + input := []byte(`{"content":[{"type":"tool_use","id":"toolu_01","name":"bash","input":{"cmd":"ls"}},{"type":"tool_use","id":"toolu_02","name":"read","input":{"path":"/tmp"}},{"type":"text","text":"hello"}]}`) + result := normalizeAmpToolNames(input) + + if !contains(result, []byte(`"name":"Bash"`)) { + t.Errorf("expected bash->Bash, got %s", string(result)) + } + if !contains(result, []byte(`"name":"Read"`)) { + t.Errorf("expected read->Read, got %s", string(result)) + } + if contains(result, []byte(`"name":"bash"`)) { + t.Errorf("expected lowercase bash to be replaced, got %s", string(result)) + } +} + +func TestNormalizeAmpToolNames_Streaming(t *testing.T) { + input := []byte(`{"type":"content_block_start","index":1,"content_block":{"type":"tool_use","name":"grep","id":"toolu_01","input":{}}}`) + result := normalizeAmpToolNames(input) + + if !contains(result, 
[]byte(`"name":"Grep"`)) { + t.Errorf("expected grep->Grep in streaming, got %s", string(result)) + } +} + +func TestNormalizeAmpToolNames_AlreadyCorrect(t *testing.T) { + input := []byte(`{"content":[{"type":"tool_use","id":"toolu_01","name":"Bash","input":{"cmd":"ls"}}]}`) + result := normalizeAmpToolNames(input) + + if string(result) != string(input) { + t.Errorf("expected no modification for correctly-cased tool, got %s", string(result)) + } +} + +func TestNormalizeAmpToolNames_GlobPreserved(t *testing.T) { + input := []byte(`{"content":[{"type":"tool_use","id":"toolu_01","name":"glob","input":{"pattern":"*.go"}}]}`) + result := normalizeAmpToolNames(input) + + if string(result) != string(input) { + t.Errorf("expected glob to remain lowercase, got %s", string(result)) + } +} + +func TestNormalizeAmpToolNames_UnknownToolUntouched(t *testing.T) { + input := []byte(`{"content":[{"type":"tool_use","id":"toolu_01","name":"edit_file","input":{"path":"/tmp/x"}}]}`) + result := normalizeAmpToolNames(input) + + if string(result) != string(input) { + t.Errorf("expected no modification for unknown tool, got %s", string(result)) + } +} + func contains(data, substr []byte) bool { for i := 0; i <= len(data)-len(substr); i++ { if string(data[i:i+len(substr)]) == string(substr) { diff --git a/internal/api/modules/amp/routes.go b/internal/api/modules/amp/routes.go index 456a50ac12..84023d156d 100644 --- a/internal/api/modules/amp/routes.go +++ b/internal/api/modules/amp/routes.go @@ -9,11 +9,11 @@ import ( "strings" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers/claude" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers/gemini" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers/openai" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + 
"github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers/claude" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers/gemini" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers/openai" log "github.com/sirupsen/logrus" ) @@ -21,12 +21,12 @@ import ( // from gin.Context to the request context for SecretSource lookup. type clientAPIKeyContextKey struct{} -// clientAPIKeyMiddleware injects the authenticated client API key from gin.Context["apiKey"] +// clientAPIKeyMiddleware injects the authenticated client API key from gin.Context["userApiKey"] // into the request context so that SecretSource can look it up for per-client upstream routing. func clientAPIKeyMiddleware() gin.HandlerFunc { return func(c *gin.Context) { // Extract the client API key from gin context (set by AuthMiddleware) - if apiKey, exists := c.Get("apiKey"); exists { + if apiKey, exists := c.Get("userApiKey"); exists { if keyStr, ok := apiKey.(string); ok && keyStr != "" { // Inject into request context for SecretSource.Get(ctx) to read ctx := context.WithValue(c.Request.Context(), clientAPIKeyContextKey{}, keyStr) @@ -199,6 +199,7 @@ func (m *AmpModule) registerManagementRoutes(engine *gin.Engine, baseHandler *ha ampAPI.Any("/telemetry/*path", proxyHandler) ampAPI.Any("/threads", proxyHandler) ampAPI.Any("/threads/*path", proxyHandler) + ampAPI.Any("/thread-actors", proxyHandler) ampAPI.Any("/otel", proxyHandler) ampAPI.Any("/otel/*path", proxyHandler) ampAPI.Any("/tab", proxyHandler) diff --git a/internal/api/modules/amp/routes_test.go b/internal/api/modules/amp/routes_test.go index bae890aec4..a500f8150c 100644 --- a/internal/api/modules/amp/routes_test.go +++ b/internal/api/modules/amp/routes_test.go @@ -6,7 +6,7 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" ) func TestRegisterManagementRoutes(t 
*testing.T) { @@ -49,6 +49,7 @@ func TestRegisterManagementRoutes(t *testing.T) { {"/api/meta", http.MethodGet}, {"/api/telemetry", http.MethodGet}, {"/api/threads", http.MethodGet}, + {"/api/thread-actors", http.MethodPost}, {"/threads/", http.MethodGet}, {"/threads.rss", http.MethodGet}, // Root-level route (no /api prefix) {"/api/otel", http.MethodGet}, diff --git a/internal/api/modules/amp/secret.go b/internal/api/modules/amp/secret.go index f91c72ba9c..512d263d0c 100644 --- a/internal/api/modules/amp/secret.go +++ b/internal/api/modules/amp/secret.go @@ -10,7 +10,7 @@ import ( "sync" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" log "github.com/sirupsen/logrus" ) diff --git a/internal/api/modules/amp/secret_test.go b/internal/api/modules/amp/secret_test.go index 6a6f6ba265..17a75b15de 100644 --- a/internal/api/modules/amp/secret_test.go +++ b/internal/api/modules/amp/secret_test.go @@ -9,7 +9,7 @@ import ( "testing" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" log "github.com/sirupsen/logrus" "github.com/sirupsen/logrus/hooks/test" ) diff --git a/internal/api/modules/modules.go b/internal/api/modules/modules.go index 8c5447d96d..5ddfa609c8 100644 --- a/internal/api/modules/modules.go +++ b/internal/api/modules/modules.go @@ -6,8 +6,8 @@ import ( "fmt" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" ) // Context encapsulates the dependencies exposed to routing modules during diff --git a/internal/api/mux_listener.go b/internal/api/mux_listener.go new file mode 100644 index 0000000000..d9a0c9f401 --- /dev/null +++ b/internal/api/mux_listener.go @@ -0,0 +1,68 @@ +package api + +import ( 
+ "net" + "sync" +) + +type muxListener struct { + addr net.Addr + connCh chan net.Conn + closeCh chan struct{} + once sync.Once +} + +func newMuxListener(addr net.Addr, buffer int) *muxListener { + if buffer <= 0 { + buffer = 1 + } + return &muxListener{ + addr: addr, + connCh: make(chan net.Conn, buffer), + closeCh: make(chan struct{}), + } +} + +func (l *muxListener) Put(conn net.Conn) error { + if conn == nil { + return nil + } + select { + case <-l.closeCh: + return net.ErrClosed + case l.connCh <- conn: + return nil + } +} + +func (l *muxListener) Accept() (net.Conn, error) { + select { + case <-l.closeCh: + return nil, net.ErrClosed + case conn := <-l.connCh: + if conn == nil { + return nil, net.ErrClosed + } + return conn, nil + } +} + +func (l *muxListener) Close() error { + if l == nil { + return nil + } + l.once.Do(func() { + close(l.closeCh) + }) + return nil +} + +func (l *muxListener) Addr() net.Addr { + if l == nil { + return &net.TCPAddr{} + } + if l.addr == nil { + return &net.TCPAddr{} + } + return l.addr +} diff --git a/internal/api/protocol_multiplexer.go b/internal/api/protocol_multiplexer.go new file mode 100644 index 0000000000..b83e1164cf --- /dev/null +++ b/internal/api/protocol_multiplexer.go @@ -0,0 +1,115 @@ +package api + +import ( + "bufio" + "crypto/tls" + "errors" + "net" + "net/http" + "strings" + + log "github.com/sirupsen/logrus" +) + +func normalizeHTTPServeError(err error) error { + if err == nil { + return nil + } + if errors.Is(err, net.ErrClosed) { + return nil + } + if errors.Is(err, http.ErrServerClosed) { + return nil + } + return err +} + +func normalizeListenerError(err error) error { + if err == nil { + return nil + } + if errors.Is(err, net.ErrClosed) { + return nil + } + return err +} + +func (s *Server) acceptMuxConnections(listener net.Listener, httpListener *muxListener) error { + if s == nil || listener == nil { + return net.ErrClosed + } + + for { + conn, errAccept := listener.Accept() + if errAccept != nil { + 
return errAccept + } + if conn == nil { + continue + } + + tlsConn, ok := conn.(*tls.Conn) + if ok { + if errHandshake := tlsConn.Handshake(); errHandshake != nil { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close connection after TLS handshake error: %v", errClose) + } + continue + } + proto := strings.TrimSpace(tlsConn.ConnectionState().NegotiatedProtocol) + if proto == "h2" || proto == "http/1.1" { + if httpListener == nil { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close connection: %v", errClose) + } + continue + } + if errPut := httpListener.Put(tlsConn); errPut != nil { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close connection after HTTP routing failure: %v", errClose) + } + } + continue + } + } + + reader := bufio.NewReader(conn) + prefix, errPeek := reader.Peek(1) + if errPeek != nil { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close connection after protocol peek failure: %v", errClose) + } + continue + } + + if isRedisRESPPrefix(prefix[0]) { + if s.cfg != nil && s.cfg.Home.Enabled { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close redis connection while home mode is enabled: %v", errClose) + } + continue + } + if !s.managementRoutesEnabled.Load() { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close redis connection while management is disabled: %v", errClose) + } + continue + } + go s.handleRedisConnection(conn, reader) + continue + } + + if httpListener == nil { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close connection without HTTP listener: %v", errClose) + } + continue + } + + if errPut := httpListener.Put(&bufferedConn{Conn: conn, reader: reader}); errPut != nil { + if errClose := conn.Close(); errClose != nil { + log.Errorf("failed to close connection after HTTP routing failure: %v", errClose) + } + } + } +} diff --git 
a/internal/api/redis_queue_protocol.go b/internal/api/redis_queue_protocol.go new file mode 100644 index 0000000000..6f3622d7bf --- /dev/null +++ b/internal/api/redis_queue_protocol.go @@ -0,0 +1,377 @@ +package api + +import ( + "bufio" + "errors" + "fmt" + "io" + "net" + "net/http" + "strconv" + "strings" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" + log "github.com/sirupsen/logrus" +) + +func isRedisRESPPrefix(prefix byte) bool { + switch prefix { + case '*', '$', '+', '-', ':': + return true + default: + return false + } +} + +func (s *Server) handleRedisConnection(conn net.Conn, reader *bufio.Reader) { + if s == nil || conn == nil || reader == nil { + return + } + + clientIP, localClient := resolveRemoteIP(conn.RemoteAddr()) + authed := false + writer := bufio.NewWriter(conn) + defer func() { + if errClose := conn.Close(); errClose != nil { + log.Errorf("redis connection close error: %v", errClose) + } + }() + + flush := func() bool { + if errFlush := writer.Flush(); errFlush != nil { + log.Errorf("redis protocol flush error: %v", errFlush) + return false + } + return true + } + + if s.cfg != nil && s.cfg.Home.Enabled { + _ = writeRedisError(writer, "ERR redis usage output disabled in home mode") + _ = writer.Flush() + return + } + + for { + if !s.managementRoutesEnabled.Load() { + return + } + + args, err := readRESPArray(reader) + if err != nil { + if !errors.Is(err, io.EOF) { + _ = writeRedisError(writer, "ERR "+err.Error()) + _ = writer.Flush() + } + return + } + if len(args) == 0 { + _ = writeRedisError(writer, "ERR empty command") + if !flush() { + return + } + continue + } + + cmd := strings.ToUpper(strings.TrimSpace(args[0])) + + if cmd != "AUTH" && !authed { + if s.mgmt != nil { + _, statusCode, errMsg := s.mgmt.AuthenticateManagementKey(clientIP, localClient, "") + if statusCode == http.StatusForbidden && strings.HasPrefix(errMsg, "IP banned due to too many failed attempts") { + _ = writeRedisError(writer, "ERR "+errMsg) + } 
else { + _ = writeRedisError(writer, "NOAUTH Authentication required.") + } + } else { + _ = writeRedisError(writer, "NOAUTH Authentication required.") + } + if !flush() { + return + } + continue + } + + switch cmd { + case "AUTH": + password, ok := parseAuthPassword(args) + if !ok { + if s.mgmt != nil { + _, statusCode, errMsg := s.mgmt.AuthenticateManagementKey(clientIP, localClient, "") + if statusCode == http.StatusForbidden && strings.HasPrefix(errMsg, "IP banned due to too many failed attempts") { + _ = writeRedisError(writer, "ERR "+errMsg) + if !flush() { + return + } + continue + } + } + _ = writeRedisError(writer, "ERR wrong number of arguments for 'auth' command") + if !flush() { + return + } + continue + } + if s.mgmt == nil { + _ = writeRedisError(writer, "ERR remote management disabled") + if !flush() { + return + } + continue + } + allowed, _, errMsg := s.mgmt.AuthenticateManagementKey(clientIP, localClient, password) + if !allowed { + _ = writeRedisError(writer, "ERR "+errMsg) + if !flush() { + return + } + continue + } + authed = true + _ = writeRedisSimpleString(writer, "OK") + if !flush() { + return + } + case "LPOP", "RPOP": + if !authed { + _ = writeRedisError(writer, "NOAUTH Authentication required.") + if !flush() { + return + } + continue + } + count, hasCount, ok := parsePopCount(args) + if !ok { + _ = writeRedisError(writer, "ERR wrong number of arguments for '"+strings.ToLower(cmd)+"' command") + if !flush() { + return + } + continue + } + if count <= 0 { + _ = writeRedisError(writer, "ERR value is not an integer or out of range") + if !flush() { + return + } + continue + } + items := redisqueue.PopOldest(count) + if hasCount { + _ = writeRedisArrayOfBulkStrings(writer, items) + if !flush() { + return + } + continue + } + if len(items) == 0 { + _ = writeRedisNilBulkString(writer) + if !flush() { + return + } + continue + } + _ = writeRedisBulkString(writer, items[0]) + if !flush() { + return + } + default: + _ = writeRedisError(writer, 
fmt.Sprintf("ERR unknown command '%s'", strings.ToLower(cmd))) + if !flush() { + return + } + } + } +} + +func resolveRemoteIP(addr net.Addr) (ip string, localClient bool) { + if addr == nil { + return "", false + } + + var host string + switch a := addr.(type) { + case *net.TCPAddr: + if a != nil && a.IP != nil { + if ip4 := a.IP.To4(); ip4 != nil { + host = ip4.String() + } else { + host = a.IP.String() + } + } + default: + host = addr.String() + if h, _, err := net.SplitHostPort(host); err == nil { + host = h + } + host = strings.TrimSpace(host) + if raw, _, ok := strings.Cut(host, "%"); ok { + host = raw + } + if parsed := net.ParseIP(host); parsed != nil { + if ip4 := parsed.To4(); ip4 != nil { + host = ip4.String() + } else { + host = parsed.String() + } + } + } + + host = strings.TrimSpace(host) + localClient = host == "127.0.0.1" || host == "::1" + return host, localClient +} + +func parseAuthPassword(args []string) (string, bool) { + switch len(args) { + case 2: + return args[1], true + case 3: + // Support AUTH by ignoring username for compatibility. 
+ return args[2], true + default: + return "", false + } +} + +func parsePopCount(args []string) (count int, hasCount bool, ok bool) { + if len(args) != 2 && len(args) != 3 { + return 0, false, false + } + if len(args) == 2 { + return 1, false, true + } + parsed, err := strconv.Atoi(strings.TrimSpace(args[2])) + if err != nil { + return 0, true, true + } + return parsed, true, true +} + +func readRESPArray(reader *bufio.Reader) ([]string, error) { + prefix, err := reader.ReadByte() + if err != nil { + return nil, err + } + if prefix != '*' { + return nil, fmt.Errorf("protocol error") + } + line, err := readRESPLine(reader) + if err != nil { + return nil, err + } + count, err := strconv.Atoi(line) + if err != nil || count < 0 { + return nil, fmt.Errorf("protocol error") + } + args := make([]string, 0, count) + for i := 0; i < count; i++ { + value, err := readRESPString(reader) + if err != nil { + return nil, err + } + args = append(args, value) + } + return args, nil +} + +func readRESPString(reader *bufio.Reader) (string, error) { + prefix, err := reader.ReadByte() + if err != nil { + return "", err + } + switch prefix { + case '$': + return readRESPBulkString(reader) + case '+', ':': + return readRESPLine(reader) + default: + return "", fmt.Errorf("protocol error") + } +} + +func readRESPBulkString(reader *bufio.Reader) (string, error) { + line, err := readRESPLine(reader) + if err != nil { + return "", err + } + length, err := strconv.Atoi(line) + if err != nil { + return "", fmt.Errorf("protocol error") + } + if length < 0 { + return "", nil + } + buf := make([]byte, length+2) + if _, err := io.ReadFull(reader, buf); err != nil { + return "", err + } + if length+2 < 2 || buf[length] != '\r' || buf[length+1] != '\n' { + return "", fmt.Errorf("protocol error") + } + return string(buf[:length]), nil +} + +func readRESPLine(reader *bufio.Reader) (string, error) { + line, err := reader.ReadString('\n') + if err != nil { + return "", err + } + line = 
strings.TrimSuffix(line, "\n") + line = strings.TrimSuffix(line, "\r") + return line, nil +} + +func writeRedisSimpleString(writer *bufio.Writer, value string) error { + if writer == nil { + return net.ErrClosed + } + _, err := writer.WriteString("+" + value + "\r\n") + return err +} + +func writeRedisError(writer *bufio.Writer, message string) error { + if writer == nil { + return net.ErrClosed + } + _, err := writer.WriteString("-" + message + "\r\n") + return err +} + +func writeRedisNilBulkString(writer *bufio.Writer) error { + if writer == nil { + return net.ErrClosed + } + _, err := writer.WriteString("$-1\r\n") + return err +} + +func writeRedisBulkString(writer *bufio.Writer, payload []byte) error { + if writer == nil { + return net.ErrClosed + } + if payload == nil { + return writeRedisNilBulkString(writer) + } + if _, err := writer.WriteString("$" + strconv.Itoa(len(payload)) + "\r\n"); err != nil { + return err + } + if _, err := writer.Write(payload); err != nil { + return err + } + _, err := writer.WriteString("\r\n") + return err +} + +func writeRedisArrayOfBulkStrings(writer *bufio.Writer, items [][]byte) error { + if writer == nil { + return net.ErrClosed + } + if _, err := writer.WriteString("*" + strconv.Itoa(len(items)) + "\r\n"); err != nil { + return err + } + for i := range items { + if err := writeRedisBulkString(writer, items[i]); err != nil { + return err + } + } + return nil +} diff --git a/internal/api/redis_queue_protocol_integration_test.go b/internal/api/redis_queue_protocol_integration_test.go new file mode 100644 index 0000000000..1586d37c85 --- /dev/null +++ b/internal/api/redis_queue_protocol_integration_test.go @@ -0,0 +1,513 @@ +package api + +import ( + "bufio" + "bytes" + "errors" + "fmt" + "io" + "net" + "strconv" + "strings" + "testing" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" +) + +type remoteAddrConn struct { + net.Conn + remoteAddr net.Addr +} + +func (c *remoteAddrConn) RemoteAddr() 
net.Addr { + if c == nil { + return nil + } + return c.remoteAddr +} + +func startRedisMuxListener(t *testing.T, server *Server) (addr string, stop func()) { + t.Helper() + + listener, errListen := net.Listen("tcp", "127.0.0.1:0") + if errListen != nil { + t.Fatalf("failed to listen: %v", errListen) + } + + errCh := make(chan error, 1) + go func() { + errCh <- server.acceptMuxConnections(listener, nil) + }() + + stop = func() { + _ = listener.Close() + select { + case err := <-errCh: + if err != nil && !errors.Is(err, net.ErrClosed) { + t.Errorf("accept loop returned unexpected error: %v", err) + } + case <-time.After(2 * time.Second): + t.Errorf("timeout waiting for accept loop to exit") + } + } + + return listener.Addr().String(), stop +} + +func writeTestRESPCommand(conn net.Conn, args ...string) error { + if conn == nil { + return net.ErrClosed + } + if len(args) == 0 { + return nil + } + + var buf bytes.Buffer + fmt.Fprintf(&buf, "*%d\r\n", len(args)) + for _, arg := range args { + fmt.Fprintf(&buf, "$%d\r\n%s\r\n", len(arg), arg) + } + _, err := conn.Write(buf.Bytes()) + return err +} + +func readTestRESPLine(r *bufio.Reader) (string, error) { + line, err := r.ReadString('\n') + if err != nil { + return "", err + } + if !strings.HasSuffix(line, "\r\n") { + return "", fmt.Errorf("invalid RESP line terminator: %q", line) + } + return strings.TrimSuffix(line, "\r\n"), nil +} + +func readTestRESPSimpleString(r *bufio.Reader) (string, error) { + prefix, err := r.ReadByte() + if err != nil { + return "", err + } + if prefix != '+' { + return "", fmt.Errorf("expected simple string prefix '+', got %q", prefix) + } + return readTestRESPLine(r) +} + +func readTestRESPError(r *bufio.Reader) (string, error) { + prefix, err := r.ReadByte() + if err != nil { + return "", err + } + if prefix != '-' { + return "", fmt.Errorf("expected error prefix '-', got %q", prefix) + } + return readTestRESPLine(r) +} + +func readTestRESPBulkString(r *bufio.Reader) ([]byte, error) { + 
prefix, err := r.ReadByte() + if err != nil { + return nil, err + } + if prefix != '$' { + return nil, fmt.Errorf("expected bulk string prefix '$', got %q", prefix) + } + + line, err := readTestRESPLine(r) + if err != nil { + return nil, err + } + length, err := strconv.Atoi(line) + if err != nil { + return nil, fmt.Errorf("invalid bulk string length %q: %v", line, err) + } + if length == -1 { + return nil, nil + } + if length < -1 { + return nil, fmt.Errorf("invalid bulk string length %d", length) + } + + payload := make([]byte, length+2) + if _, err := io.ReadFull(r, payload); err != nil { + return nil, err + } + if payload[length] != '\r' || payload[length+1] != '\n' { + return nil, fmt.Errorf("invalid bulk string terminator") + } + return payload[:length], nil +} + +func readRESPArrayOfBulkStrings(r *bufio.Reader) ([][]byte, error) { + prefix, err := r.ReadByte() + if err != nil { + return nil, err + } + if prefix != '*' { + return nil, fmt.Errorf("expected array prefix '*', got %q", prefix) + } + + line, err := readTestRESPLine(r) + if err != nil { + return nil, err + } + count, err := strconv.Atoi(line) + if err != nil { + return nil, fmt.Errorf("invalid array length %q: %v", line, err) + } + if count < 0 { + return nil, fmt.Errorf("invalid array length %d", count) + } + + out := make([][]byte, 0, count) + for i := 0; i < count; i++ { + item, err := readTestRESPBulkString(r) + if err != nil { + return nil, err + } + out = append(out, item) + } + return out, nil +} + +func TestRedisProtocol_ManagementDisabled_RejectsConnection(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "") + redisqueue.SetEnabled(false) + + server := newTestServer(t) + if server.managementRoutesEnabled.Load() { + t.Fatalf("expected managementRoutesEnabled to be false") + } + + addr, stop := startRedisMuxListener(t, server) + t.Cleanup(stop) + + conn, errDial := net.DialTimeout("tcp", addr, time.Second) + if errDial != nil { + t.Fatalf("failed to dial redis listener: %v", errDial) + } + 
t.Cleanup(func() { _ = conn.Close() }) + + _ = conn.SetDeadline(time.Now().Add(2 * time.Second)) + if errWrite := writeTestRESPCommand(conn, "PING"); errWrite != nil { + t.Fatalf("failed to write RESP command: %v", errWrite) + } + + buf := make([]byte, 1) + _, errRead := conn.Read(buf) + if errRead == nil { + t.Fatalf("expected connection to be closed when management is disabled") + } + if ne, ok := errRead.(net.Error); ok && ne.Timeout() { + t.Fatalf("expected connection to be closed when management is disabled, got timeout: %v", errRead) + } +} + +func TestRedisProtocol_HomeEnabled_DisablesConnection(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "test-management-password") + redisqueue.SetEnabled(false) + t.Cleanup(func() { redisqueue.SetEnabled(false) }) + + server := newTestServer(t) + if !server.managementRoutesEnabled.Load() { + t.Fatalf("expected managementRoutesEnabled to be true") + } + if server.cfg == nil { + t.Fatalf("expected server cfg to be non-nil") + } + server.cfg.Home.Enabled = true + redisqueue.SetEnabled(true) + + addr, stop := startRedisMuxListener(t, server) + t.Cleanup(stop) + + conn, errDial := net.DialTimeout("tcp", addr, time.Second) + if errDial != nil { + t.Fatalf("failed to dial redis listener: %v", errDial) + } + t.Cleanup(func() { _ = conn.Close() }) + + _ = conn.SetDeadline(time.Now().Add(2 * time.Second)) + _ = writeTestRESPCommand(conn, "PING") + + buf := make([]byte, 1) + _, errRead := conn.Read(buf) + if errRead == nil { + t.Fatalf("expected connection to be closed when home mode is enabled") + } + if ne, ok := errRead.(net.Error); ok && ne.Timeout() { + t.Fatalf("expected connection to be closed when home mode is enabled, got timeout: %v", errRead) + } +} + +func TestRedisProtocol_AUTH_And_PopContracts(t *testing.T) { + const managementPassword = "test-management-password" + + t.Setenv("MANAGEMENT_PASSWORD", managementPassword) + redisqueue.SetEnabled(false) + t.Cleanup(func() { redisqueue.SetEnabled(false) }) + + server := 
newTestServer(t) + if !server.managementRoutesEnabled.Load() { + t.Fatalf("expected managementRoutesEnabled to be true") + } + + addr, stop := startRedisMuxListener(t, server) + t.Cleanup(stop) + + conn, errDial := net.DialTimeout("tcp", addr, time.Second) + if errDial != nil { + t.Fatalf("failed to dial redis listener: %v", errDial) + } + t.Cleanup(func() { _ = conn.Close() }) + + reader := bufio.NewReader(conn) + + _ = conn.SetDeadline(time.Now().Add(5 * time.Second)) + + if errWrite := writeTestRESPCommand(conn, "AUTH", "test-key"); errWrite != nil { + t.Fatalf("failed to write AUTH command: %v", errWrite) + } + if msg, err := readTestRESPError(reader); err != nil { + t.Fatalf("failed to read AUTH error: %v", err) + } else if msg != "ERR invalid management key" { + t.Fatalf("unexpected AUTH error: %q", msg) + } + + if errWrite := writeTestRESPCommand(conn, "LPOP", "queue"); errWrite != nil { + t.Fatalf("failed to write LPOP command: %v", errWrite) + } + if msg, err := readTestRESPError(reader); err != nil { + t.Fatalf("failed to read LPOP NOAUTH error: %v", err) + } else if msg != "NOAUTH Authentication required." 
{ + t.Fatalf("unexpected LPOP NOAUTH error: %q", msg) + } + + if errWrite := writeTestRESPCommand(conn, "AUTH", managementPassword); errWrite != nil { + t.Fatalf("failed to write AUTH command: %v", errWrite) + } + if msg, err := readTestRESPSimpleString(reader); err != nil { + t.Fatalf("failed to read AUTH response: %v", err) + } else if msg != "OK" { + t.Fatalf("unexpected AUTH response: %q", msg) + } + + if !redisqueue.Enabled() { + t.Fatalf("expected redisqueue to be enabled") + } + redisqueue.Enqueue([]byte("a")) + redisqueue.Enqueue([]byte("b")) + redisqueue.Enqueue([]byte("c")) + + if errWrite := writeTestRESPCommand(conn, "RPOP", "queue"); errWrite != nil { + t.Fatalf("failed to write RPOP command: %v", errWrite) + } + if item, err := readTestRESPBulkString(reader); err != nil { + t.Fatalf("failed to read RPOP response: %v", err) + } else if string(item) != "a" { + t.Fatalf("unexpected RPOP item: %q", string(item)) + } + + if errWrite := writeTestRESPCommand(conn, "LPOP", "queue"); errWrite != nil { + t.Fatalf("failed to write LPOP command: %v", errWrite) + } + if item, err := readTestRESPBulkString(reader); err != nil { + t.Fatalf("failed to read LPOP response: %v", err) + } else if string(item) != "b" { + t.Fatalf("unexpected LPOP item: %q", string(item)) + } + + if errWrite := writeTestRESPCommand(conn, "RPOP", "queue", "10"); errWrite != nil { + t.Fatalf("failed to write RPOP count command: %v", errWrite) + } + items, errItems := readRESPArrayOfBulkStrings(reader) + if errItems != nil { + t.Fatalf("failed to read RPOP count response: %v", errItems) + } + if len(items) != 1 || string(items[0]) != "c" { + t.Fatalf("unexpected RPOP count items: %#v", items) + } + + if errWrite := writeTestRESPCommand(conn, "LPOP", "queue"); errWrite != nil { + t.Fatalf("failed to write LPOP empty command: %v", errWrite) + } + item, errItem := readTestRESPBulkString(reader) + if errItem != nil { + t.Fatalf("failed to read LPOP empty response: %v", errItem) + } + if item != 
nil { + t.Fatalf("expected nil bulk string for empty queue, got %q", string(item)) + } + + if errWrite := writeTestRESPCommand(conn, "RPOP", "queue", "2"); errWrite != nil { + t.Fatalf("failed to write RPOP empty count command: %v", errWrite) + } + emptyItems, errEmpty := readRESPArrayOfBulkStrings(reader) + if errEmpty != nil { + t.Fatalf("failed to read RPOP empty count response: %v", errEmpty) + } + if len(emptyItems) != 0 { + t.Fatalf("expected empty array for empty queue with count, got %#v", emptyItems) + } +} + +func TestRedisProtocol_IPBan_MirrorsManagementPolicy(t *testing.T) { + const managementPassword = "test-management-password" + + t.Setenv("MANAGEMENT_PASSWORD", managementPassword) + redisqueue.SetEnabled(false) + t.Cleanup(func() { redisqueue.SetEnabled(false) }) + + server := newTestServer(t) + if !server.managementRoutesEnabled.Load() { + t.Fatalf("expected managementRoutesEnabled to be true") + } + + clientConn, serverConn := net.Pipe() + t.Cleanup(func() { _ = clientConn.Close() }) + t.Cleanup(func() { _ = serverConn.Close() }) + + fakeRemote := &net.TCPAddr{ + IP: net.ParseIP("1.2.3.4"), + Port: 1234, + } + wrappedConn := &remoteAddrConn{Conn: serverConn, remoteAddr: fakeRemote} + + go server.handleRedisConnection(wrappedConn, bufio.NewReader(wrappedConn)) + + reader := bufio.NewReader(clientConn) + _ = clientConn.SetDeadline(time.Now().Add(5 * time.Second)) + + for i := 0; i < 5; i++ { + if errWrite := writeTestRESPCommand(clientConn, "LPOP", "queue"); errWrite != nil { + t.Fatalf("failed to write LPOP command: %v", errWrite) + } + if msg, err := readTestRESPError(reader); err != nil { + t.Fatalf("failed to read LPOP NOAUTH error: %v", err) + } else if msg != "NOAUTH Authentication required." 
{ + t.Fatalf("unexpected LPOP NOAUTH error at attempt %d: %q", i+1, msg) + } + } + + if errWrite := writeTestRESPCommand(clientConn, "LPOP", "queue"); errWrite != nil { + t.Fatalf("failed to write LPOP command after failures: %v", errWrite) + } + msg, err := readTestRESPError(reader) + if err != nil { + t.Fatalf("failed to read LPOP banned error: %v", err) + } + if !strings.HasPrefix(msg, "ERR IP banned due to too many failed attempts. Try again in") { + t.Fatalf("unexpected LPOP banned error: %q", msg) + } +} + +func TestRedisProtocol_AUTH_IPBan_BlocksCorrectPasswordDuringBan(t *testing.T) { + const managementPassword = "test-management-password" + + t.Setenv("MANAGEMENT_PASSWORD", managementPassword) + redisqueue.SetEnabled(false) + t.Cleanup(func() { redisqueue.SetEnabled(false) }) + + server := newTestServer(t) + if !server.managementRoutesEnabled.Load() { + t.Fatalf("expected managementRoutesEnabled to be true") + } + + clientConn, serverConn := net.Pipe() + t.Cleanup(func() { _ = clientConn.Close() }) + t.Cleanup(func() { _ = serverConn.Close() }) + + fakeRemote := &net.TCPAddr{ + IP: net.ParseIP("1.2.3.4"), + Port: 1234, + } + wrappedConn := &remoteAddrConn{Conn: serverConn, remoteAddr: fakeRemote} + + go server.handleRedisConnection(wrappedConn, bufio.NewReader(wrappedConn)) + + reader := bufio.NewReader(clientConn) + _ = clientConn.SetDeadline(time.Now().Add(5 * time.Second)) + + for i := 0; i < 5; i++ { + if errWrite := writeTestRESPCommand(clientConn, "AUTH", "wrong-password"); errWrite != nil { + t.Fatalf("failed to write AUTH command: %v", errWrite) + } + if msg, err := readTestRESPError(reader); err != nil { + t.Fatalf("failed to read AUTH error: %v", err) + } else if msg != "ERR invalid management key" { + t.Fatalf("unexpected AUTH error at attempt %d: %q", i+1, msg) + } + } + + for i := 0; i < 2; i++ { + if errWrite := writeTestRESPCommand(clientConn, "AUTH", "wrong-password"); errWrite != nil { + t.Fatalf("failed to write AUTH command after 
failures: %v", errWrite) + } + msg, err := readTestRESPError(reader) + if err != nil { + t.Fatalf("failed to read AUTH banned error: %v", err) + } + if !strings.HasPrefix(msg, "ERR IP banned due to too many failed attempts. Try again in") { + t.Fatalf("unexpected AUTH banned error at attempt %d: %q", i+6, msg) + } + } + + if errWrite := writeTestRESPCommand(clientConn, "AUTH", managementPassword); errWrite != nil { + t.Fatalf("failed to write AUTH command with correct password: %v", errWrite) + } + msg, err := readTestRESPError(reader) + if err != nil { + t.Fatalf("failed to read AUTH banned error for correct password: %v", err) + } + if !strings.HasPrefix(msg, "ERR IP banned due to too many failed attempts. Try again in") { + t.Fatalf("unexpected AUTH banned error for correct password: %q", msg) + } +} + +func TestRedisProtocol_LOCALHOST_AUTH_IPBan_BlocksCorrectPasswordDuringBan(t *testing.T) { + const managementPassword = "test-management-password" + + t.Setenv("MANAGEMENT_PASSWORD", managementPassword) + redisqueue.SetEnabled(false) + t.Cleanup(func() { redisqueue.SetEnabled(false) }) + + server := newTestServer(t) + if !server.managementRoutesEnabled.Load() { + t.Fatalf("expected managementRoutesEnabled to be true") + } + + addr, stop := startRedisMuxListener(t, server) + t.Cleanup(stop) + + conn, errDial := net.DialTimeout("tcp", addr, time.Second) + if errDial != nil { + t.Fatalf("failed to dial redis listener: %v", errDial) + } + t.Cleanup(func() { _ = conn.Close() }) + + reader := bufio.NewReader(conn) + _ = conn.SetDeadline(time.Now().Add(5 * time.Second)) + + for i := 0; i < 5; i++ { + if errWrite := writeTestRESPCommand(conn, "AUTH", "wrong-password"); errWrite != nil { + t.Fatalf("failed to write AUTH command: %v", errWrite) + } + if msg, err := readTestRESPError(reader); err != nil { + t.Fatalf("failed to read AUTH error: %v", err) + } else if msg != "ERR invalid management key" { + t.Fatalf("unexpected AUTH error at attempt %d: %q", i+1, msg) + } + } 
+ + if errWrite := writeTestRESPCommand(conn, "AUTH", managementPassword); errWrite != nil { + t.Fatalf("failed to write AUTH command with correct password: %v", errWrite) + } + msg, err := readTestRESPError(reader) + if err != nil { + t.Fatalf("failed to read AUTH banned error for correct password: %v", err) + } + if !strings.HasPrefix(msg, "ERR IP banned due to too many failed attempts. Try again in") { + t.Fatalf("unexpected AUTH banned error for correct password: %q", msg) + } +} diff --git a/internal/api/server.go b/internal/api/server.go index 075455ba83..6d13ccd495 100644 --- a/internal/api/server.go +++ b/internal/api/server.go @@ -7,37 +7,43 @@ package api import ( "context" "crypto/subtle" + "crypto/tls" + "encoding/json" "errors" "fmt" + "net" "net/http" "os" "path/filepath" "reflect" + "sort" "strings" "sync" "sync/atomic" "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/access" - managementHandlers "github.com/router-for-me/CLIProxyAPI/v6/internal/api/handlers/management" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api/middleware" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api/modules" - ampmodule "github.com/router-for-me/CLIProxyAPI/v6/internal/api/modules/amp" - "github.com/router-for-me/CLIProxyAPI/v6/internal/cache" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - "github.com/router-for-me/CLIProxyAPI/v6/internal/managementasset" - "github.com/router-for-me/CLIProxyAPI/v6/internal/usage" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers/claude" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers/gemini" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers/openai" - sdkAuth 
"github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/access" + managementHandlers "github.com/router-for-me/CLIProxyAPI/v7/internal/api/handlers/management" + "github.com/router-for-me/CLIProxyAPI/v7/internal/api/middleware" + "github.com/router-for-me/CLIProxyAPI/v7/internal/api/modules" + ampmodule "github.com/router-for-me/CLIProxyAPI/v7/internal/api/modules/amp" + "github.com/router-for-me/CLIProxyAPI/v7/internal/cache" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/home" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/managementasset" + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers/claude" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers/gemini" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers/openai" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" + "golang.org/x/net/http2" "gopkg.in/yaml.v3" ) @@ -61,7 +67,9 @@ type ServerOption func(*serverOptionConfig) func defaultRequestLoggerFactory(cfg *config.Config, configPath string) logging.RequestLogger { configDir := filepath.Dir(configPath) logsDir := logging.ResolveLogDirectory(cfg) - return logging.NewFileRequestLogger(cfg.RequestLog, logsDir, configDir, cfg.ErrorLogsMaxFiles) + logger := logging.NewFileRequestLogger(cfg.RequestLog, logsDir, configDir, cfg.ErrorLogsMaxFiles) + logger.SetHomeEnabled(cfg != nil && cfg.Home.Enabled) + return logger } // WithMiddleware appends additional Gin 
middleware during server construction. @@ -127,6 +135,12 @@ type Server struct { // server is the underlying HTTP server. server *http.Server + // muxBaseListener is the shared TCP listener used to serve both HTTP and Redis protocol traffic. + muxBaseListener net.Listener + + // muxHTTPListener receives HTTP connections selected by the multiplexer. + muxHTTPListener *muxListener + // handlers contains the API handlers for processing requests. handlers *handlers.BaseAPIHandler @@ -275,6 +289,10 @@ func NewServer(cfg *config.Config, authManager *auth.Manager, accessManager *sdk } s.localPassword = optionState.localPassword + // Home heartbeat gate: when home is enabled, block all endpoints with 503 until the + // subscribe-config heartbeat connection is healthy. + engine.Use(s.homeHeartbeatMiddleware()) + // Setup routes s.setupRoutes() @@ -299,6 +317,7 @@ func NewServer(cfg *config.Config, authManager *auth.Manager, accessManager *sdk // or when a local management password is provided (e.g. TUI mode). 
hasManagementSecret := cfg.RemoteManagement.SecretKey != "" || envManagementSecret || s.localPassword != "" s.managementRoutesEnabled.Store(hasManagementSecret) + redisqueue.SetEnabled(hasManagementSecret || (cfg != nil && cfg.Home.Enabled)) if hasManagementSecret { s.registerManagementRoutes() } @@ -316,12 +335,41 @@ func NewServer(cfg *config.Config, authManager *auth.Manager, accessManager *sdk return s } +func (s *Server) homeHeartbeatMiddleware() gin.HandlerFunc { + return func(c *gin.Context) { + if s == nil || s.cfg == nil || !s.cfg.Home.Enabled { + c.Next() + return + } + if c != nil && c.Request != nil { + path := c.Request.URL.Path + if strings.HasPrefix(path, "/v0/management/") || path == "/v0/management" || path == "/management.html" { + c.Next() + return + } + } + client := home.Current() + if client == nil || !client.HeartbeatOK() { + c.AbortWithStatus(http.StatusServiceUnavailable) + return + } + c.Next() + } +} + // setupRoutes configures the API routes for the server. // It defines the endpoints and associates them with their respective handlers. 
func (s *Server) setupRoutes() { - s.engine.GET("/healthz", func(c *gin.Context) { + healthzHandler := func(c *gin.Context) { + if c.Request.Method == http.MethodHead { + c.Status(http.StatusOK) + return + } + c.JSON(http.StatusOK, gin.H{"status": "ok"}) - }) + } + s.engine.GET("/healthz", healthzHandler) + s.engine.HEAD("/healthz", healthzHandler) s.engine.GET("/management.html", s.serveManagementControlPanel) openaiHandlers := openai.NewOpenAIAPIHandler(s.handlers) @@ -337,6 +385,8 @@ func (s *Server) setupRoutes() { v1.GET("/models", s.unifiedModelsHandler(openaiHandlers, claudeCodeHandlers)) v1.POST("/chat/completions", openaiHandlers.ChatCompletions) v1.POST("/completions", openaiHandlers.Completions) + v1.POST("/images/generations", openaiHandlers.ImagesGenerations) + v1.POST("/images/edits", openaiHandlers.ImagesEdits) v1.POST("/messages", claudeCodeHandlers.ClaudeMessages) v1.POST("/messages/count_tokens", claudeCodeHandlers.ClaudeCountTokens) v1.GET("/responses", openaiResponsesHandlers.ResponsesWebsocket) @@ -344,6 +394,15 @@ func (s *Server) setupRoutes() { v1.POST("/responses/compact", openaiResponsesHandlers.Compact) } + // Codex CLI direct route aliases (chatgpt_base_url compatible) + codexDirect := s.engine.Group("/backend-api/codex") + codexDirect.Use(AuthMiddleware(s.accessManager)) + { + codexDirect.GET("/responses", openaiResponsesHandlers.ResponsesWebsocket) + codexDirect.POST("/responses", openaiResponsesHandlers.Responses) + codexDirect.POST("/responses/compact", openaiResponsesHandlers.Compact) + } + // Gemini compatible API routes v1beta := s.engine.Group("/v1beta") v1beta.Use(AuthMiddleware(s.accessManager)) @@ -478,9 +537,6 @@ func (s *Server) registerManagementRoutes() { mgmt := s.engine.Group("/v0/management") mgmt.Use(s.managementAvailabilityMiddleware(), s.mgmt.Middleware()) { - mgmt.GET("/usage", s.mgmt.GetUsageStatistics) - mgmt.GET("/usage/export", s.mgmt.ExportUsageStatistics) - mgmt.POST("/usage/import", 
s.mgmt.ImportUsageStatistics) mgmt.GET("/config", s.mgmt.GetConfig) mgmt.GET("/config.yaml", s.mgmt.GetConfigYAML) mgmt.PUT("/config.yaml", s.mgmt.PutConfigYAML) @@ -525,6 +581,8 @@ func (s *Server) registerManagementRoutes() { mgmt.PUT("/api-keys", s.mgmt.PutAPIKeys) mgmt.PATCH("/api-keys", s.mgmt.PatchAPIKeys) mgmt.DELETE("/api-keys", s.mgmt.DeleteAPIKeys) + mgmt.GET("/api-key-usage", s.mgmt.GetAPIKeyUsage) + mgmt.GET("/usage-queue", s.mgmt.GetUsageQueue) mgmt.GET("/gemini-api-key", s.mgmt.GetGeminiKeys) mgmt.PUT("/gemini-api-key", s.mgmt.PutGeminiKeys) @@ -629,11 +687,24 @@ func (s *Server) registerManagementRoutes() { mgmt.GET("/kimi-auth-url", s.mgmt.RequestKimiToken) mgmt.POST("/oauth-callback", s.mgmt.PostOAuthCallback) mgmt.GET("/get-auth-status", s.mgmt.GetAuthStatus) + mgmt.GET("/usage", s.mgmt.GetUsageStatistics) + mgmt.GET("/usage/export", s.mgmt.ExportUsageStatistics) + mgmt.POST("/usage/import", s.mgmt.ImportUsageStatistics) + mgmt.GET("/kiro-quota", s.mgmt.GetKiroQuota) + s.mgmt.StartKiroQuotaRefresher() } } func (s *Server) managementAvailabilityMiddleware() gin.HandlerFunc { return func(c *gin.Context) { + if s == nil || s.cfg == nil { + c.AbortWithStatus(http.StatusNotFound) + return + } + if s.cfg.Home.Enabled { + c.AbortWithStatus(http.StatusNotFound) + return + } if !s.managementRoutesEnabled.Load() { c.AbortWithStatus(http.StatusNotFound) return @@ -644,7 +715,7 @@ func (s *Server) managementAvailabilityMiddleware() gin.HandlerFunc { func (s *Server) serveManagementControlPanel(c *gin.Context) { cfg := s.cfg - if cfg == nil || cfg.RemoteManagement.DisableControlPanel { + if cfg == nil || cfg.Home.Enabled || cfg.RemoteManagement.DisableControlPanel { c.AbortWithStatus(http.StatusNotFound) return } @@ -756,6 +827,11 @@ func (s *Server) watchKeepAlive() { // otherwise it routes to OpenAI handler. 
func (s *Server) unifiedModelsHandler(openaiHandler *openai.OpenAIAPIHandler, claudeHandler *claude.ClaudeCodeAPIHandler) gin.HandlerFunc { return func(c *gin.Context) { + if s != nil && s.cfg != nil && s.cfg.Home.Enabled { + s.handleHomeModels(c) + return + } + userAgent := c.GetHeader("User-Agent") // Route to Claude handler if User-Agent starts with "claude-cli" @@ -769,6 +845,170 @@ func (s *Server) unifiedModelsHandler(openaiHandler *openai.OpenAIAPIHandler, cl } } +type homeModelEntry struct { + id string + created int64 + ownedBy string + displayName string +} + +func (s *Server) handleHomeModels(c *gin.Context) { + if s == nil || c == nil || c.Request == nil { + return + } + client := home.Current() + if client == nil { + c.JSON(http.StatusServiceUnavailable, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "home control center unavailable", + Type: "server_error", + }, + }) + return + } + + raw, errGet := client.GetModels(c.Request.Context()) + if errGet != nil { + c.JSON(http.StatusBadGateway, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: errGet.Error(), + Type: "server_error", + }, + }) + return + } + + entries, errDecode := decodeHomeModels(raw) + if errDecode != nil { + c.JSON(http.StatusBadGateway, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: errDecode.Error(), + Type: "server_error", + }, + }) + return + } + + userAgent := c.GetHeader("User-Agent") + isClaude := strings.HasPrefix(userAgent, "claude-cli") + + if isClaude { + out := make([]map[string]any, 0, len(entries)) + for _, entry := range entries { + model := map[string]any{ + "id": entry.id, + "object": "model", + "owned_by": entry.ownedBy, + } + if entry.created > 0 { + model["created_at"] = entry.created + } + if entry.displayName != "" { + model["display_name"] = entry.displayName + } + out = append(out, model) + } + firstID := "" + lastID := "" + if len(out) > 0 { + if id, ok := out[0]["id"].(string); ok { + firstID = id + } + if id, 
ok := out[len(out)-1]["id"].(string); ok { + lastID = id + } + } + c.JSON(http.StatusOK, gin.H{ + "data": out, + "has_more": false, + "first_id": firstID, + "last_id": lastID, + }) + return + } + + filtered := make([]map[string]any, 0, len(entries)) + for _, entry := range entries { + model := map[string]any{ + "id": entry.id, + "object": "model", + } + if entry.created > 0 { + model["created"] = entry.created + } + if entry.ownedBy != "" { + model["owned_by"] = entry.ownedBy + } + filtered = append(filtered, model) + } + c.JSON(http.StatusOK, gin.H{ + "object": "list", + "data": filtered, + }) +} + +func decodeHomeModels(raw []byte) ([]homeModelEntry, error) { + if len(raw) == 0 { + return nil, fmt.Errorf("home models payload is empty") + } + + var bySection map[string][]map[string]any + if err := json.Unmarshal(raw, &bySection); err != nil { + return nil, fmt.Errorf("parse home models payload: %w", err) + } + if len(bySection) == 0 { + return nil, fmt.Errorf("home models payload has no sections") + } + + seen := make(map[string]struct{}) + out := make([]homeModelEntry, 0, 256) + for _, models := range bySection { + for _, model := range models { + id, _ := model["id"].(string) + id = strings.TrimSpace(id) + if id == "" { + continue + } + if _, ok := seen[id]; ok { + continue + } + seen[id] = struct{}{} + + created := int64(0) + switch v := model["created"].(type) { + case float64: + created = int64(v) + case int64: + created = v + case int: + created = int64(v) + case json.Number: + if n, err := v.Int64(); err == nil { + created = n + } + } + + ownedBy, _ := model["owned_by"].(string) + ownedBy = strings.TrimSpace(ownedBy) + displayName, _ := model["display_name"].(string) + displayName = strings.TrimSpace(displayName) + + out = append(out, homeModelEntry{ + id: id, + created: created, + ownedBy: ownedBy, + displayName: displayName, + }) + } + } + + sort.Slice(out, func(i, j int) bool { return out[i].id < out[j].id }) + if len(out) == 0 { + return nil, 
fmt.Errorf("home models payload contains no models") + } + return out, nil +} + // Start begins listening for and serving HTTP or HTTPS requests. // It's a blocking call and will only return on an unrecoverable error. // @@ -779,26 +1019,98 @@ func (s *Server) Start() error { return fmt.Errorf("failed to start HTTP server: server not initialized") } + addr := s.server.Addr + listener, errListen := net.Listen("tcp", addr) + if errListen != nil { + return fmt.Errorf("failed to start HTTP server: %v", errListen) + } + useTLS := s.cfg != nil && s.cfg.TLS.Enable if useTLS { - cert := strings.TrimSpace(s.cfg.TLS.Cert) - key := strings.TrimSpace(s.cfg.TLS.Key) - if cert == "" || key == "" { + certPath := strings.TrimSpace(s.cfg.TLS.Cert) + keyPath := strings.TrimSpace(s.cfg.TLS.Key) + if certPath == "" || keyPath == "" { + if errClose := listener.Close(); errClose != nil { + log.Errorf("failed to close listener after TLS validation failure: %v", errClose) + } return fmt.Errorf("failed to start HTTPS server: tls.cert or tls.key is empty") } - log.Debugf("Starting API server on %s with TLS", s.server.Addr) - if errServeTLS := s.server.ListenAndServeTLS(cert, key); errServeTLS != nil && !errors.Is(errServeTLS, http.ErrServerClosed) { - return fmt.Errorf("failed to start HTTPS server: %v", errServeTLS) + certPair, errLoad := tls.LoadX509KeyPair(certPath, keyPath) + if errLoad != nil { + if errClose := listener.Close(); errClose != nil { + log.Errorf("failed to close listener after TLS key pair load failure: %v", errClose) + } + return fmt.Errorf("failed to start HTTPS server: %v", errLoad) } - return nil - } - log.Debugf("Starting API server on %s", s.server.Addr) - if errServe := s.server.ListenAndServe(); errServe != nil && !errors.Is(errServe, http.ErrServerClosed) { - return fmt.Errorf("failed to start HTTP server: %v", errServe) + tlsConfig := &tls.Config{ + Certificates: []tls.Certificate{certPair}, + NextProtos: []string{"h2", "http/1.1"}, + } + s.server.TLSConfig = 
tlsConfig + if errHTTP2 := http2.ConfigureServer(s.server, &http2.Server{}); errHTTP2 != nil { + log.Warnf("failed to configure HTTP/2: %v", errHTTP2) + } + listener = tls.NewListener(listener, tlsConfig) + log.Debugf("Starting API server on %s with TLS", addr) + } else { + log.Debugf("Starting API server on %s", addr) } - return nil + httpListener := newMuxListener(listener.Addr(), 1024) + s.muxBaseListener = listener + s.muxHTTPListener = httpListener + + httpErrCh := make(chan error, 1) + acceptErrCh := make(chan error, 1) + + go func() { + httpErrCh <- s.server.Serve(httpListener) + }() + go func() { + acceptErrCh <- s.acceptMuxConnections(listener, httpListener) + }() + + select { + case errServe := <-httpErrCh: + if s.muxBaseListener != nil { + if errClose := s.muxBaseListener.Close(); errClose != nil && !errors.Is(errClose, net.ErrClosed) { + log.Debugf("failed to close shared listener after HTTP serve exit: %v", errClose) + } + } + if s.muxHTTPListener != nil { + _ = s.muxHTTPListener.Close() + } + errAccept := <-acceptErrCh + errServe = normalizeHTTPServeError(errServe) + errAccept = normalizeListenerError(errAccept) + if errServe != nil { + return fmt.Errorf("failed to start HTTP server: %v", errServe) + } + if errAccept != nil { + return fmt.Errorf("failed to start HTTP server: %v", errAccept) + } + return nil + case errAccept := <-acceptErrCh: + if s.muxHTTPListener != nil { + _ = s.muxHTTPListener.Close() + } + if s.muxBaseListener != nil { + if errClose := s.muxBaseListener.Close(); errClose != nil && !errors.Is(errClose, net.ErrClosed) { + log.Debugf("failed to close shared listener after accept loop exit: %v", errClose) + } + } + errServe := <-httpErrCh + errServe = normalizeHTTPServeError(errServe) + errAccept = normalizeListenerError(errAccept) + if errAccept != nil { + return fmt.Errorf("failed to start HTTP server: %v", errAccept) + } + if errServe != nil { + return fmt.Errorf("failed to start HTTP server: %v", errServe) + } + return nil + } } 
// Stop gracefully shuts down the API server without interrupting any @@ -819,6 +1131,15 @@ func (s *Server) Stop(ctx context.Context) error { } } + if s.muxHTTPListener != nil { + _ = s.muxHTTPListener.Close() + } + if s.muxBaseListener != nil { + if errClose := s.muxBaseListener.Close(); errClose != nil && !errors.Is(errClose, net.ErrClosed) { + log.Debugf("failed to close shared listener: %v", errClose) + } + } + // Shutdown the HTTP server. if err := s.server.Shutdown(ctx); err != nil { return fmt.Errorf("failed to shutdown HTTP server: %v", err) @@ -883,6 +1204,12 @@ func (s *Server) UpdateClients(cfg *config.Config) { } } + if oldCfg == nil || oldCfg.Home.Enabled != cfg.Home.Enabled { + if setter, ok := s.requestLogger.(interface{ SetHomeEnabled(bool) }); ok { + setter.SetHomeEnabled(cfg.Home.Enabled) + } + } + if oldCfg == nil || oldCfg.LoggingToFile != cfg.LoggingToFile || oldCfg.LogsMaxTotalSizeMB != cfg.LogsMaxTotalSizeMB { if err := logging.ConfigureLogOutput(cfg); err != nil { log.Errorf("failed to reconfigure log output: %v", err) @@ -890,7 +1217,11 @@ func (s *Server) UpdateClients(cfg *config.Config) { } if oldCfg == nil || oldCfg.UsageStatisticsEnabled != cfg.UsageStatisticsEnabled { - usage.SetStatisticsEnabled(cfg.UsageStatisticsEnabled) + redisqueue.SetUsageStatisticsEnabled(cfg.UsageStatisticsEnabled) + } + + if oldCfg == nil || oldCfg.RedisUsageQueueRetentionSeconds != cfg.RedisUsageQueueRetentionSeconds { + redisqueue.SetRetentionSeconds(cfg.RedisUsageQueueRetentionSeconds) } if s.requestLogger != nil && (oldCfg == nil || oldCfg.ErrorLogsMaxFiles != cfg.ErrorLogsMaxFiles) { @@ -903,6 +1234,10 @@ func (s *Server) UpdateClients(cfg *config.Config) { auth.SetQuotaCooldownDisabled(cfg.DisableCooling) } + if oldCfg != nil && oldCfg.DisableImageGeneration != cfg.DisableImageGeneration { + log.Infof("disable-image-generation updated: %v -> %v", oldCfg.DisableImageGeneration, cfg.DisableImageGeneration) + } + applySignatureCacheConfig(oldCfg, cfg) if 
s.handlers != nil && s.handlers.AuthManager != nil { @@ -945,6 +1280,7 @@ func (s *Server) UpdateClients(cfg *config.Config) { s.managementRoutesEnabled.Store(!newSecretEmpty) } } + redisqueue.SetEnabled(s.managementRoutesEnabled.Load() || (cfg != nil && cfg.Home.Enabled)) s.applyAccessConfig(oldCfg, cfg) s.cfg = cfg @@ -977,11 +1313,14 @@ func (s *Server) UpdateClients(cfg *config.Config) { } // Count client sources from configuration and auth store. - tokenStore := sdkAuth.GetTokenStore() - if dirSetter, ok := tokenStore.(interface{ SetBaseDir(string) }); ok { - dirSetter.SetBaseDir(cfg.AuthDir) + authEntries := 0 + if cfg != nil && !cfg.Home.Enabled { + tokenStore := sdkAuth.GetTokenStore() + if dirSetter, ok := tokenStore.(interface{ SetBaseDir(string) }); ok { + dirSetter.SetBaseDir(cfg.AuthDir) + } + authEntries = util.CountAuthFiles(context.Background(), tokenStore) } - authEntries := util.CountAuthFiles(context.Background(), tokenStore) geminiAPIKeyCount := len(cfg.GeminiKey) claudeAPIKeyCount := len(cfg.ClaudeKey) codexAPIKeyCount := len(cfg.CodexKey) @@ -989,6 +1328,9 @@ func (s *Server) UpdateClients(cfg *config.Config) { openAICompatCount := 0 for i := range cfg.OpenAICompatibility { entry := cfg.OpenAICompatibility[i] + if entry.Disabled { + continue + } openAICompatCount += len(entry.APIKeyEntries) } @@ -1026,7 +1368,7 @@ func AuthMiddleware(manager *sdkaccess.Manager) gin.HandlerFunc { result, err := manager.Authenticate(c.Request.Context(), c.Request) if err == nil { if result != nil { - c.Set("apiKey", result.Principal) + c.Set("userApiKey", result.Principal) c.Set("accessProvider", result.Provider) if len(result.Metadata) > 0 { c.Set("accessMetadata", result.Metadata) diff --git a/internal/api/server_test.go b/internal/api/server_test.go index dbc2cd5a83..e107702a88 100644 --- a/internal/api/server_test.go +++ b/internal/api/server_test.go @@ -11,11 +11,12 @@ import ( "time" gin "github.com/gin-gonic/gin" - proxyconfig 
"github.com/router-for-me/CLIProxyAPI/v6/internal/config" - internallogging "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + proxyconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + internallogging "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func newTestServer(t *testing.T) *Server { @@ -50,25 +51,128 @@ func newTestServer(t *testing.T) *Server { func TestHealthz(t *testing.T) { server := newTestServer(t) - req := httptest.NewRequest(http.MethodGet, "/healthz", nil) - rr := httptest.NewRecorder() - server.engine.ServeHTTP(rr, req) + t.Run("GET", func(t *testing.T) { + req := httptest.NewRequest(http.MethodGet, "/healthz", nil) + rr := httptest.NewRecorder() + server.engine.ServeHTTP(rr, req) - if rr.Code != http.StatusOK { - t.Fatalf("unexpected status code: got %d want %d; body=%s", rr.Code, http.StatusOK, rr.Body.String()) + if rr.Code != http.StatusOK { + t.Fatalf("unexpected status code: got %d want %d; body=%s", rr.Code, http.StatusOK, rr.Body.String()) + } + + var resp struct { + Status string `json:"status"` + } + if err := json.Unmarshal(rr.Body.Bytes(), &resp); err != nil { + t.Fatalf("failed to parse response JSON: %v; body=%s", err, rr.Body.String()) + } + if resp.Status != "ok" { + t.Fatalf("unexpected response status: got %q want %q", resp.Status, "ok") + } + }) + + t.Run("HEAD", func(t *testing.T) { + req := httptest.NewRequest(http.MethodHead, "/healthz", nil) + rr := httptest.NewRecorder() + server.engine.ServeHTTP(rr, req) + + if rr.Code != http.StatusOK { + 
t.Fatalf("unexpected status code: got %d want %d; body=%s", rr.Code, http.StatusOK, rr.Body.String()) + } + if rr.Body.Len() != 0 { + t.Fatalf("expected empty body for HEAD request, got %q", rr.Body.String()) + } + }) +} + +func TestManagementUsageRequiresManagementAuthAndPopsArray(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "test-management-key") + + prevQueueEnabled := redisqueue.Enabled() + redisqueue.SetEnabled(false) + t.Cleanup(func() { + redisqueue.SetEnabled(false) + redisqueue.SetEnabled(prevQueueEnabled) + }) + + server := newTestServer(t) + + redisqueue.Enqueue([]byte(`{"id":1}`)) + redisqueue.Enqueue([]byte(`{"id":2}`)) + + missingKeyReq := httptest.NewRequest(http.MethodGet, "/v0/management/usage-queue?count=2", nil) + missingKeyRR := httptest.NewRecorder() + server.engine.ServeHTTP(missingKeyRR, missingKeyReq) + if missingKeyRR.Code != http.StatusUnauthorized { + t.Fatalf("missing key status = %d, want %d body=%s", missingKeyRR.Code, http.StatusUnauthorized, missingKeyRR.Body.String()) } - var resp struct { - Status string `json:"status"` + legacyReq := httptest.NewRequest(http.MethodGet, "/v0/management/usage?count=2", nil) + legacyReq.Header.Set("Authorization", "Bearer test-management-key") + legacyRR := httptest.NewRecorder() + server.engine.ServeHTTP(legacyRR, legacyReq) + if legacyRR.Code != http.StatusNotFound { + t.Fatalf("legacy usage status = %d, want %d body=%s", legacyRR.Code, http.StatusNotFound, legacyRR.Body.String()) } - if err := json.Unmarshal(rr.Body.Bytes(), &resp); err != nil { - t.Fatalf("failed to parse response JSON: %v; body=%s", err, rr.Body.String()) + + authReq := httptest.NewRequest(http.MethodGet, "/v0/management/usage-queue?count=2", nil) + authReq.Header.Set("Authorization", "Bearer test-management-key") + authRR := httptest.NewRecorder() + server.engine.ServeHTTP(authRR, authReq) + if authRR.Code != http.StatusOK { + t.Fatalf("authenticated status = %d, want %d body=%s", authRR.Code, http.StatusOK, 
authRR.Body.String()) } - if resp.Status != "ok" { - t.Fatalf("unexpected response status: got %q want %q", resp.Status, "ok") + + var payload []json.RawMessage + if errUnmarshal := json.Unmarshal(authRR.Body.Bytes(), &payload); errUnmarshal != nil { + t.Fatalf("unmarshal response: %v body=%s", errUnmarshal, authRR.Body.String()) + } + if len(payload) != 2 { + t.Fatalf("response records = %d, want 2", len(payload)) + } + for i, raw := range payload { + var record struct { + ID int `json:"id"` + } + if errUnmarshal := json.Unmarshal(raw, &record); errUnmarshal != nil { + t.Fatalf("unmarshal record %d: %v", i, errUnmarshal) + } + if record.ID != i+1 { + t.Fatalf("record %d id = %d, want %d", i, record.ID, i+1) + } + } + + if remaining := redisqueue.PopOldest(1); len(remaining) != 0 { + t.Fatalf("remaining queue = %q, want empty", remaining) } } +func TestHomeEnabledHidesManagementEndpointsAndControlPanel(t *testing.T) { + t.Setenv("MANAGEMENT_PASSWORD", "test-management-key") + + server := newTestServer(t) + server.cfg.Home.Enabled = true + + t.Run("management endpoints return 404", func(t *testing.T) { + req := httptest.NewRequest(http.MethodGet, "/v0/management/config", nil) + req.Header.Set("Authorization", "Bearer test-management-key") + rr := httptest.NewRecorder() + server.engine.ServeHTTP(rr, req) + if rr.Code != http.StatusNotFound { + t.Fatalf("status = %d, want %d body=%s", rr.Code, http.StatusNotFound, rr.Body.String()) + } + }) + + t.Run("management control panel returns 404", func(t *testing.T) { + req := httptest.NewRequest(http.MethodGet, "/management.html", nil) + rr := httptest.NewRecorder() + server.engine.ServeHTTP(rr, req) + if rr.Code != http.StatusNotFound { + t.Fatalf("status = %d, want %d body=%s", rr.Code, http.StatusNotFound, rr.Body.String()) + } + }) +} + func TestAmpProviderModelRoutes(t *testing.T) { testCases := []struct { name string diff --git a/internal/auth/antigravity/auth.go b/internal/auth/antigravity/auth.go index 
449f413fc1..7bee09bb66 100644 --- a/internal/auth/antigravity/auth.go +++ b/internal/auth/antigravity/auth.go @@ -11,8 +11,9 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" ) @@ -36,17 +37,21 @@ type AntigravityAuth struct { // NewAntigravityAuth creates a new Antigravity auth service. func NewAntigravityAuth(cfg *config.Config, httpClient *http.Client) *AntigravityAuth { - if httpClient != nil { - return &AntigravityAuth{httpClient: httpClient} - } if cfg == nil { cfg = &config.Config{} } + if httpClient != nil { + return &AntigravityAuth{httpClient: httpClient} + } return &AntigravityAuth{ httpClient: util.SetProxy(&cfg.SDKConfig, &http.Client{}), } } +func (o *AntigravityAuth) loadCodeAssistUserAgent() string { + return misc.AntigravityLoadCodeAssistUserAgent("") +} + // BuildAuthURL generates the OAuth authorization URL. 
func (o *AntigravityAuth) BuildAuthURL(state, redirectURI string) string { if strings.TrimSpace(redirectURI) == "" { @@ -118,6 +123,7 @@ func (o *AntigravityAuth) FetchUserInfo(ctx context.Context, accessToken string) return "", fmt.Errorf("antigravity userinfo: create request: %w", err) } req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("User-Agent", o.loadCodeAssistUserAgent()) resp, errDo := o.httpClient.Do(req) if errDo != nil { @@ -153,11 +159,12 @@ func (o *AntigravityAuth) FetchUserInfo(ctx context.Context, accessToken string) // FetchProjectID retrieves the project ID for the authenticated user via loadCodeAssist func (o *AntigravityAuth) FetchProjectID(ctx context.Context, accessToken string) (string, error) { + userAgent := o.loadCodeAssistUserAgent() loadReqBody := map[string]any{ "metadata": map[string]string{ - "ideType": "ANTIGRAVITY", - "platform": "PLATFORM_UNSPECIFIED", - "pluginType": "GEMINI", + "ide_type": "ANTIGRAVITY", + "ide_version": misc.AntigravityVersionFromUserAgent(userAgent), + "ide_name": "antigravity", }, } @@ -173,9 +180,8 @@ func (o *AntigravityAuth) FetchProjectID(ctx context.Context, accessToken string } req.Header.Set("Authorization", "Bearer "+accessToken) req.Header.Set("Content-Type", "application/json") - req.Header.Set("User-Agent", APIUserAgent) - req.Header.Set("X-Goog-Api-Client", APIClient) - req.Header.Set("Client-Metadata", ClientMetadata) + req.Header.Set("User-Agent", userAgent) + req.Header.Set("X-Goog-Api-Client", misc.AntigravityGoogAPIClientUA) resp, errDo := o.httpClient.Do(req) if errDo != nil { @@ -244,12 +250,13 @@ func (o *AntigravityAuth) FetchProjectID(ctx context.Context, accessToken string // OnboardUser attempts to fetch the project ID via onboardUser by polling for completion func (o *AntigravityAuth) OnboardUser(ctx context.Context, accessToken, tierID string) (string, error) { log.Infof("Antigravity: onboarding user with tier: %s", tierID) + userAgent := 
o.loadCodeAssistUserAgent() requestBody := map[string]any{ "tierId": tierID, "metadata": map[string]string{ - "ideType": "ANTIGRAVITY", - "platform": "PLATFORM_UNSPECIFIED", - "pluginType": "GEMINI", + "ide_type": "ANTIGRAVITY", + "ide_version": misc.AntigravityVersionFromUserAgent(userAgent), + "ide_name": "antigravity", }, } @@ -277,9 +284,8 @@ func (o *AntigravityAuth) OnboardUser(ctx context.Context, accessToken, tierID s } req.Header.Set("Authorization", "Bearer "+accessToken) req.Header.Set("Content-Type", "application/json") - req.Header.Set("User-Agent", APIUserAgent) - req.Header.Set("X-Goog-Api-Client", APIClient) - req.Header.Set("Client-Metadata", ClientMetadata) + req.Header.Set("User-Agent", userAgent) + req.Header.Set("X-Goog-Api-Client", misc.AntigravityGoogAPIClientUA) resp, errDo := o.httpClient.Do(req) if errDo != nil { diff --git a/internal/auth/antigravity/constants.go b/internal/auth/antigravity/constants.go index 680c8e3c70..61e736971a 100644 --- a/internal/auth/antigravity/constants.go +++ b/internal/auth/antigravity/constants.go @@ -21,14 +21,11 @@ var Scopes = []string{ const ( TokenEndpoint = "https://oauth2.googleapis.com/token" AuthEndpoint = "https://accounts.google.com/o/oauth2/v2/auth" - UserInfoEndpoint = "https://www.googleapis.com/oauth2/v1/userinfo?alt=json" + UserInfoEndpoint = "https://www.googleapis.com/oauth2/v2/userinfo?alt=json" ) // Antigravity API configuration const ( - APIEndpoint = "https://cloudcode-pa.googleapis.com" - APIVersion = "v1internal" - APIUserAgent = "google-api-nodejs-client/9.15.1" - APIClient = "google-cloud-sdk vscode_cloudshelleditor/0.1" - ClientMetadata = `{"ideType":"IDE_UNSPECIFIED","platform":"PLATFORM_UNSPECIFIED","pluginType":"GEMINI"}` + APIEndpoint = "https://cloudcode-pa.googleapis.com" + APIVersion = "v1internal" ) diff --git a/internal/auth/bt/bt.go b/internal/auth/bt/bt.go new file mode 100644 index 0000000000..485ef8c254 --- /dev/null +++ b/internal/auth/bt/bt.go @@ -0,0 +1,150 @@ 
+package bt + +import ( + "crypto/md5" + "encoding/base64" + "encoding/hex" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "strconv" + "strings" + "time" + + log "github.com/sirupsen/logrus" +) + +const ( + CloudURL = "https://www.bt.cn" + APIURL = "https://api.bt.cn" + AppID = "bt_app_001" +) + +func md5Hash(s string) string { + h := md5.Sum([]byte(s)) + return hex.EncodeToString(h[:]) +} + +func generateStableSID(phone string) string { + macSeed := md5Hash(phone + ":mac") + hostname := "bt-server-" + md5Hash(phone + ":hostname")[:8] + cpu := "Intel Xeon Platinum 8480+" + return md5Hash(macSeed+hostname) + md5Hash(cpu) +} + +func generateStableMAC(phone string) string { + macSeed := md5Hash(phone + ":mac") + return fmt.Sprintf("%s:%s:%s:%s:%s:%s", + macSeed[0:2], macSeed[2:4], macSeed[4:6], macSeed[6:8], macSeed[8:10], macSeed[10:12]) +} + +func decodeBase64Password(encoded string) string { + decoded, err := base64.StdEncoding.DecodeString(encoded) + if err != nil { + ud, err2 := base64.URLEncoding.DecodeString(encoded) + if err2 != nil { + return encoded + } + return string(ud) + } + return string(decoded) +} + +func hexEncode(data url.Values) string { + return fmt.Sprintf("%x", []byte(data.Encode())) +} + +func Login(phone, passwordBase64 string) (*BTTokenStorage, error) { + password := decodeBase64Password(passwordBase64) + sid := generateStableSID(phone) + loginURL := APIURL + "/Auth/GetAuthToken" + + innerData := url.Values{ + "username": {phone}, + "password": {md5Hash(password)}, + "serverid": {sid}, + "os": {"Linux"}, + "mac": {generateStableMAC(phone)}, + "o": {""}, + } + payload := url.Values{"data": {hexEncode(innerData)}} + + resp, err := httpPostForm(loginURL, payload) + if err != nil { + return nil, fmt.Errorf("bt login request failed: %w", err) + } + defer func() { + if err := resp.Body.Close(); err != nil { + log.Debugf("bt auth: close response body error: %v", err) + } + }() + + if resp.StatusCode != http.StatusOK { + return nil, 
fmt.Errorf("bt login returned status %d", resp.StatusCode) + } + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("bt login read body failed: %w", err) + } + + var result map[string]interface{} + if err := json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("bt login parse response failed: %w", err) + } + + dataHex, ok := result["data"].(string) + if !ok { + msg, _ := result["msg"].(string) + if msg == "" { + msg = "unknown error" + } + return nil, fmt.Errorf("bt login failed: %s", msg) + } + + decoded, err := hex.DecodeString(dataHex) + if err != nil { + return nil, fmt.Errorf("bt login decode hex failed: %w", err) + } + + unescaped, err := url.QueryUnescape(string(decoded)) + if err != nil { + return nil, fmt.Errorf("bt login unescape failed: %w", err) + } + + var data map[string]interface{} + if err := json.Unmarshal([]byte(unescaped), &data); err != nil { + return nil, fmt.Errorf("bt login parse data failed: %w", err) + } + + uid, _ := data["uid"].(string) + accessKey, _ := data["access_key"].(string) + if uid == "" { + floatUID, ok := data["uid"].(float64) + if ok { + uid = strconv.FormatFloat(floatUID, 'f', 0, 64) + } + } + if uid == "" { + return nil, fmt.Errorf("bt login: uid not found in response") + } + + log.Infof("bt auth: login successful for phone %s", phone) + return NewBTTokenStorage(phone, uid, accessKey, sid), nil +} + +func RefreshSession(phone, passwordBase64 string) (*BTTokenStorage, error) { + return Login(phone, passwordBase64) +} + +func httpPostForm(url string, data url.Values) (*http.Response, error) { + req, err := http.NewRequest(http.MethodPost, url, strings.NewReader(data.Encode())) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", "application/x-www-form-urlencoded") + client := &http.Client{Timeout: 15 * time.Second} + return client.Do(req) +} diff --git a/internal/auth/bt/token.go b/internal/auth/bt/token.go new file mode 100644 index 0000000000..b25de23466 --- 
/dev/null +++ b/internal/auth/bt/token.go @@ -0,0 +1,60 @@ +package bt + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" +) + +type BTTokenStorage struct { + Phone string `json:"phone"` + UID string `json:"uid"` + AccessKey string `json:"access_key"` + ServerID string `json:"serverid"` + Type string `json:"type"` +} + +func NewBTTokenStorage(phone, uid, accessKey, serverID string) *BTTokenStorage { + return &BTTokenStorage{ + Phone: phone, + UID: uid, + AccessKey: accessKey, + ServerID: serverID, + Type: "bt", + } +} + +func LoadBTTokenStorage(path string) (*BTTokenStorage, error) { + data, err := os.ReadFile(path) + if err != nil { + return nil, err + } + var s BTTokenStorage + if err := json.Unmarshal(data, &s); err != nil { + return nil, fmt.Errorf("failed to parse bt token file: %w", err) + } + if s.Type != "bt" { + return nil, fmt.Errorf("invalid token file type: %s", s.Type) + } + return &s, nil +} + +func (s *BTTokenStorage) SaveTokenToFile(authFilePath string) error { + misc.LogSavingCredentials(authFilePath) + s.Type = "bt" + if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil { + return fmt.Errorf("failed to create directory: %w", err) + } + f, err := os.OpenFile(authFilePath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) + if err != nil { + return fmt.Errorf("failed to create token file: %w", err) + } + defer func() { _ = f.Close() }() + if err = json.NewEncoder(f).Encode(s); err != nil { + return fmt.Errorf("failed to write token to file: %w", err) + } + return nil +} diff --git a/internal/auth/claude/anthropic_auth.go b/internal/auth/claude/anthropic_auth.go index 6c770abf43..d7ca154296 100644 --- a/internal/auth/claude/anthropic_auth.go +++ b/internal/auth/claude/anthropic_auth.go @@ -6,15 +6,18 @@ package claude import ( "context" "encoding/json" + "errors" "fmt" "io" "net/http" "net/url" "strings" + "sync" "time" - 
"github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" log "github.com/sirupsen/logrus" + "golang.org/x/sync/singleflight" ) // OAuth configuration constants for Claude/Anthropic @@ -23,8 +26,94 @@ const ( TokenURL = "https://api.anthropic.com/v1/oauth/token" ClientID = "9d1c250a-e61b-44d9-88ed-5944d1962f5e" RedirectURI = "http://localhost:54545/callback" + + claudeRefreshMinBackoff = 5 * time.Second + claudeRefreshMaxBackoff = 5 * time.Minute +) + +var ( + claudeRefreshGroup singleflight.Group + claudeRefreshMu sync.Mutex + claudeRefreshBlock = make(map[string]time.Time) ) +type refreshHTTPError struct { + status int + message string + retryable bool +} + +func (e *refreshHTTPError) Error() string { + return fmt.Sprintf("token refresh failed with status %d: %s", e.status, e.message) +} + +func (e *refreshHTTPError) Retryable() bool { + return e != nil && e.retryable +} + +func resetClaudeRefreshState() { + claudeRefreshMu.Lock() + defer claudeRefreshMu.Unlock() + claudeRefreshBlock = make(map[string]time.Time) + claudeRefreshGroup = singleflight.Group{} +} + +func claudeRefreshBlockedUntil(refreshToken string) time.Time { + claudeRefreshMu.Lock() + defer claudeRefreshMu.Unlock() + return claudeRefreshBlock[refreshToken] +} + +func setClaudeRefreshBlockedUntil(refreshToken string, until time.Time) { + claudeRefreshMu.Lock() + defer claudeRefreshMu.Unlock() + claudeRefreshBlock[refreshToken] = until +} + +func clearClaudeRefreshBlockedUntil(refreshToken string) { + claudeRefreshMu.Lock() + defer claudeRefreshMu.Unlock() + delete(claudeRefreshBlock, refreshToken) +} + +func clampClaudeRefreshBackoff(d time.Duration) time.Duration { + if d < claudeRefreshMinBackoff { + return claudeRefreshMinBackoff + } + if d > claudeRefreshMaxBackoff { + return claudeRefreshMaxBackoff + } + return d +} + +func parseClaudeRetryAfter(resp *http.Response) time.Duration { + if resp == nil { + return claudeRefreshMinBackoff + 
} + if raw := strings.TrimSpace(resp.Header.Get("Retry-After")); raw != "" { + if seconds, err := time.ParseDuration(raw + "s"); err == nil { + return clampClaudeRefreshBackoff(seconds) + } + if when, err := http.ParseTime(raw); err == nil { + return clampClaudeRefreshBackoff(time.Until(when)) + } + } + if raw := strings.TrimSpace(resp.Header.Get("Retry-After-Ms")); raw != "" { + if ms, err := time.ParseDuration(raw + "ms"); err == nil { + return clampClaudeRefreshBackoff(ms) + } + } + return claudeRefreshMinBackoff +} + +func isClaudeRefreshRetryable(err error) bool { + var httpErr *refreshHTTPError + if errors.As(err, &httpErr) { + return httpErr.Retryable() + } + return true +} + // tokenResponse represents the response structure from Anthropic's OAuth token endpoint. // It contains access token, refresh token, and associated user/organization information. type tokenResponse struct { @@ -242,6 +331,35 @@ func (o *ClaudeAuth) RefreshTokens(ctx context.Context, refreshToken string) (*C if refreshToken == "" { return nil, fmt.Errorf("refresh token is required") } + if blockedUntil := claudeRefreshBlockedUntil(refreshToken); blockedUntil.After(time.Now()) { + return nil, &refreshHTTPError{ + status: http.StatusTooManyRequests, + message: fmt.Sprintf("refresh temporarily blocked until %s", blockedUntil.Format(time.RFC3339)), + retryable: false, + } + } + + result, err, _ := claudeRefreshGroup.Do(refreshToken, func() (interface{}, error) { + return o.refreshTokensSingleFlight(context.WithoutCancel(ctx), refreshToken) + }) + if err != nil { + return nil, err + } + tokenData, ok := result.(*ClaudeTokenData) + if !ok || tokenData == nil { + return nil, fmt.Errorf("token refresh failed: invalid single-flight result") + } + return tokenData, nil +} + +func (o *ClaudeAuth) refreshTokensSingleFlight(ctx context.Context, refreshToken string) (*ClaudeTokenData, error) { + if blockedUntil := claudeRefreshBlockedUntil(refreshToken); blockedUntil.After(time.Now()) { + return nil, 
&refreshHTTPError{ + status: http.StatusTooManyRequests, + message: fmt.Sprintf("refresh temporarily blocked until %s", blockedUntil.Format(time.RFC3339)), + retryable: false, + } + } reqBody := map[string]interface{}{ "client_id": ClientID, @@ -276,7 +394,17 @@ func (o *ClaudeAuth) RefreshTokens(ctx context.Context, refreshToken string) (*C } if resp.StatusCode != http.StatusOK { - return nil, fmt.Errorf("token refresh failed with status %d: %s", resp.StatusCode, string(body)) + message := string(body) + if resp.StatusCode == http.StatusTooManyRequests { + retryAfter := parseClaudeRetryAfter(resp) + setClaudeRefreshBlockedUntil(refreshToken, time.Now().Add(retryAfter)) + return nil, &refreshHTTPError{status: resp.StatusCode, message: message, retryable: false} + } + return nil, &refreshHTTPError{ + status: resp.StatusCode, + message: message, + retryable: resp.StatusCode >= http.StatusInternalServerError, + } } // log.Debugf("Token response: %s", string(body)) @@ -287,6 +415,8 @@ func (o *ClaudeAuth) RefreshTokens(ctx context.Context, refreshToken string) (*C } // Create token data + clearClaudeRefreshBlockedUntil(refreshToken) + return &ClaudeTokenData{ AccessToken: tokenResp.AccessToken, RefreshToken: tokenResp.RefreshToken, @@ -348,6 +478,9 @@ func (o *ClaudeAuth) RefreshTokensWithRetry(ctx context.Context, refreshToken st lastErr = err log.Warnf("Token refresh attempt %d failed: %v", attempt+1, err) + if !isClaudeRefreshRetryable(err) { + break + } } return nil, fmt.Errorf("token refresh failed after %d attempts: %w", maxRetries, lastErr) diff --git a/internal/auth/claude/anthropic_auth_proxy_test.go b/internal/auth/claude/anthropic_auth_proxy_test.go index 50c4875791..7cab9cd2f1 100644 --- a/internal/auth/claude/anthropic_auth_proxy_test.go +++ b/internal/auth/claude/anthropic_auth_proxy_test.go @@ -3,7 +3,7 @@ package claude import ( "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/config" "golang.org/x/net/proxy" ) diff --git a/internal/auth/claude/anthropic_auth_test.go b/internal/auth/claude/anthropic_auth_test.go new file mode 100644 index 0000000000..0b14d0834c --- /dev/null +++ b/internal/auth/claude/anthropic_auth_test.go @@ -0,0 +1,123 @@ +package claude + +import ( + "context" + "io" + "net/http" + "strings" + "sync" + "sync/atomic" + "testing" + "time" +) + +type roundTripFunc func(*http.Request) (*http.Response, error) + +func (f roundTripFunc) RoundTrip(req *http.Request) (*http.Response, error) { + return f(req) +} + +func TestRefreshTokensWithRetry_429BlocksImmediateReplay(t *testing.T) { + resetClaudeRefreshState() + defer resetClaudeRefreshState() + + var calls int32 + auth := &ClaudeAuth{ + httpClient: &http.Client{ + Transport: roundTripFunc(func(req *http.Request) (*http.Response, error) { + atomic.AddInt32(&calls, 1) + return &http.Response{ + StatusCode: http.StatusTooManyRequests, + Body: io.NopCloser(strings.NewReader(`{"error":"rate_limited"}`)), + Header: http.Header{"Retry-After": []string{"60"}}, + Request: req, + }, nil + }), + }, + } + + _, err := auth.RefreshTokensWithRetry(context.Background(), "dummy_refresh_token", 3) + if err == nil { + t.Fatalf("expected 429 refresh error") + } + if !strings.Contains(err.Error(), "status 429") { + t.Fatalf("expected status 429 in error, got %v", err) + } + if got := atomic.LoadInt32(&calls); got != 1 { + t.Fatalf("expected 1 refresh attempt after 429, got %d", got) + } + + _, err = auth.RefreshTokensWithRetry(context.Background(), "dummy_refresh_token", 3) + if err == nil { + t.Fatalf("expected immediate blocked refresh error") + } + if got := atomic.LoadInt32(&calls); got != 1 { + t.Fatalf("expected blocked retry to avoid a second refresh call, got %d attempts", got) + } + if blockedUntil := claudeRefreshBlockedUntil("dummy_refresh_token"); !blockedUntil.After(time.Now()) { + t.Fatalf("expected blocked-until timestamp to be 
set, got %v", blockedUntil) + } +} + +func TestRefreshTokens_DeduplicatesConcurrentRefresh(t *testing.T) { + resetClaudeRefreshState() + defer resetClaudeRefreshState() + + var calls int32 + started := make(chan struct{}) + release := make(chan struct{}) + var once sync.Once + + auth := &ClaudeAuth{ + httpClient: &http.Client{ + Transport: roundTripFunc(func(req *http.Request) (*http.Response, error) { + atomic.AddInt32(&calls, 1) + once.Do(func() { close(started) }) + <-release + return &http.Response{ + StatusCode: http.StatusOK, + Body: io.NopCloser(strings.NewReader(`{ + "access_token":"new-access", + "refresh_token":"new-refresh", + "token_type":"Bearer", + "expires_in":3600, + "account":{"email_address":"shared@example.com"} + }`)), + Header: make(http.Header), + Request: req, + }, nil + }), + }, + } + + results := make(chan *ClaudeTokenData, 2) + errs := make(chan error, 2) + runRefresh := func() { + td, err := auth.RefreshTokens(context.Background(), "shared-refresh-token") + results <- td + errs <- err + } + + go runRefresh() + go runRefresh() + + <-started + time.Sleep(20 * time.Millisecond) + if got := atomic.LoadInt32(&calls); got != 1 { + t.Fatalf("expected concurrent refresh to share a single upstream call, got %d", got) + } + close(release) + + for i := 0; i < 2; i++ { + if err := <-errs; err != nil { + t.Fatalf("expected refresh to succeed, got %v", err) + } + td := <-results + if td == nil || td.AccessToken != "new-access" { + t.Fatalf("expected refreshed access token, got %#v", td) + } + } + if got := atomic.LoadInt32(&calls); got != 1 { + t.Fatalf("expected exactly 1 upstream refresh call, got %d", got) + } +} diff --git a/internal/auth/claude/token.go b/internal/auth/claude/token.go index 6ebb0f2f8c..10aa3b4344 100644 --- a/internal/auth/claude/token.go +++ b/internal/auth/claude/token.go @@ -9,7 +9,7 @@ import ( "os" "path/filepath" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/misc" ) // ClaudeTokenStorage stores OAuth2 token information for Anthropic Claude API authentication. diff --git a/internal/auth/claude/utls_transport.go b/internal/auth/claude/utls_transport.go index 88b69c9bd9..f41087819f 100644 --- a/internal/auth/claude/utls_transport.go +++ b/internal/auth/claude/utls_transport.go @@ -8,8 +8,8 @@ import ( "sync" tls "github.com/refraction-networking/utls" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil" log "github.com/sirupsen/logrus" "golang.org/x/net/http2" "golang.org/x/net/proxy" diff --git a/internal/auth/codearts/codearts_auth.go b/internal/auth/codearts/codearts_auth.go new file mode 100644 index 0000000000..6a90c25873 --- /dev/null +++ b/internal/auth/codearts/codearts_auth.go @@ -0,0 +1,295 @@ +package codearts + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "time" + + log "github.com/sirupsen/logrus" +) + +const ( + IAMHost = "https://iam.cn-north-4.myhuaweicloud.com" + APIHost = "https://ide.cn-north-4.myhuaweicloud.com" + RedirectHost = "https://devcloud.cn-north-4.huaweicloud.com/codeartside" + ChatURL = "https://snap-access.cn-north-4.myhuaweicloud.com/v1/chat/chat" + GptsURL = "https://snap-access.cn-north-4.myhuaweicloud.com/v1/agent-center/agents" + + DefaultAgentID = "a8bcb36232554267a5142361cc25a393" + + tokenRefreshMargin = 4 * time.Hour +) + +// CodeArtsAuth manages the CodeArts authentication lifecycle. +type CodeArtsAuth struct { + httpClient *http.Client +} + +// NewCodeArtsAuth creates a new CodeArtsAuth instance. 
+func NewCodeArtsAuth(httpClient *http.Client) *CodeArtsAuth { + if httpClient == nil { + httpClient = &http.Client{Timeout: 30 * time.Second} + } + return &CodeArtsAuth{httpClient: httpClient} +} + +// AuthorizationURL returns the URL the user should visit to log in. +// Matches Python: build_login_url(ticket_id, port, theme=1, locale="zh-cn", version=3, uri_scheme="codearts") +func (a *CodeArtsAuth) AuthorizationURL(ticketID string, port int) string { + params := url.Values{} + params.Set("ticket_id", ticketID) + params.Set("theme", "1") + params.Set("locale", "zh-cn") + params.Set("version", "3") + params.Set("uri_scheme", "codearts") + params.Set("port", fmt.Sprintf("%d", port)) + params.Set("is_redirect", "true") + return fmt.Sprintf("%s/redirect1?%s", RedirectHost, params.Encode()) +} + +// PollForLoginResult polls the ticket endpoint until the user completes login. +// Matches Python: poll_login_ticket(ticket_id, identifier, timeout=120) +// Returns the full auth result JSON map. 
+func (a *CodeArtsAuth) PollForLoginResult(ctx context.Context, ticketID, identifier string) (map[string]interface{}, error) { + pollURL := fmt.Sprintf("%s/v2/login/ticket", APIHost) + + for i := 0; i < 60; i++ { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-time.After(2 * time.Second): + } + + payload, _ := json.Marshal(map[string]string{ + "ticket_id": ticketID, + "identifier": identifier, + }) + + req, err := http.NewRequestWithContext(ctx, "POST", pollURL, bytes.NewReader(payload)) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", "application/json") + + resp, err := a.httpClient.Do(req) + if err != nil { + log.Debugf("codearts: poll attempt %d failed: %v", i+1, err) + continue + } + body, _ := io.ReadAll(resp.Body) + resp.Body.Close() + + var result map[string]interface{} + if err := json.Unmarshal(body, &result); err != nil { + continue + } + + // Python checks: if data.get("status") == "success": return data.get("result") + if status, _ := result["status"].(string); status == "success" { + if authResult, ok := result["result"].(map[string]interface{}); ok { + log.Info("codearts: login successful") + return authResult, nil + } + } + + log.Debugf("codearts: poll attempt %d, status=%v", i+1, result["status"]) + } + return nil, fmt.Errorf("codearts: login timed out after 120s") +} + +// ExchangeForSecurityToken exchanges X-Auth-Token for AK/SK/SecurityToken. 
+// Matches Python: get_credential_by_token(x_auth_token) +func (a *CodeArtsAuth) ExchangeForSecurityToken(ctx context.Context, xAuthToken string) (*CodeArtsTokenData, error) { + exchangeURL := fmt.Sprintf("%s/v3.0/OS-CREDENTIAL/securitytokens", IAMHost) + + payload := map[string]interface{}{ + "auth": map[string]interface{}{ + "identity": map[string]interface{}{ + "methods": []string{"token"}, + "token": map[string]interface{}{ + "duration_seconds": 86400, + }, + }, + }, + } + body, _ := json.Marshal(payload) + + req, err := http.NewRequestWithContext(ctx, "POST", exchangeURL, bytes.NewReader(body)) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", "application/json;charset=utf8") + req.Header.Set("X-Auth-Token", xAuthToken) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("codearts: security token exchange failed: %w", err) + } + defer resp.Body.Close() + + respBody, _ := io.ReadAll(resp.Body) + if resp.StatusCode != 201 { + return nil, fmt.Errorf("codearts: security token exchange returned %d: %s", resp.StatusCode, string(respBody)) + } + + var result struct { + Credential struct { + Access string `json:"access"` + Secret string `json:"secret"` + SecurityToken string `json:"securitytoken"` + ExpiresAt string `json:"expires_at"` + } `json:"credential"` + } + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, fmt.Errorf("codearts: failed to parse security token response: %w", err) + } + + expiresAt, _ := time.Parse(time.RFC3339, result.Credential.ExpiresAt) + + return &CodeArtsTokenData{ + AK: result.Credential.Access, + SK: result.Credential.Secret, + SecurityToken: result.Credential.SecurityToken, + ExpiresAt: expiresAt, + XAuthToken: xAuthToken, + }, nil +} + +// ProcessLoginResult extracts credentials from login result. +// Matches Python logic: check for credential in result, or exchange x_auth_token. 
+func (a *CodeArtsAuth) ProcessLoginResult(ctx context.Context, authResult map[string]interface{}) (*CodeArtsTokenData, error) { + userID, _ := authResult["user_id"].(string) + userName, _ := authResult["user_name"].(string) + domainID, _ := authResult["domain_id"].(string) + + // Check if credential is directly in the result + var tokenData *CodeArtsTokenData + + if credMap, ok := authResult["credential"].(map[string]interface{}); ok { + // Credential directly in login result + ak, _ := credMap["access"].(string) + sk, _ := credMap["secret"].(string) + secToken, _ := credMap["securitytoken"].(string) + expiresAtStr, _ := credMap["expires_at"].(string) + expiresAt, _ := time.Parse(time.RFC3339, expiresAtStr) + + tokenData = &CodeArtsTokenData{ + AK: ak, + SK: sk, + SecurityToken: secToken, + ExpiresAt: expiresAt, + } + } else { + // Need to exchange x_auth_token for credential + xAuthToken, _ := authResult["x_auth_token"].(string) + if xAuthToken == "" { + xAuthToken, _ = authResult["token"].(string) + } + if xAuthToken == "" { + return nil, fmt.Errorf("codearts: no credential or x_auth_token in login result") + } + + log.Info("codearts: exchanging X-Auth-Token for AK/SK credentials") + var err error + tokenData, err = a.ExchangeForSecurityToken(ctx, xAuthToken) + if err != nil { + return nil, err + } + tokenData.XAuthToken = xAuthToken + } + + tokenData.UserID = userID + tokenData.UserName = userName + tokenData.DomainID = domainID + + return tokenData, nil +} + +// NeedsRefresh returns true if the token should be refreshed. +func NeedsRefresh(token *CodeArtsTokenData) bool { + if token == nil { + return true + } + return token.IsExpired(tokenRefreshMargin) +} + +// RefreshToken refreshes the security token using POST /v2/login/refresh. 
+// Matches Python: refresh_token(credential) +func (a *CodeArtsAuth) RefreshToken(ctx context.Context, token *CodeArtsTokenData) (*CodeArtsTokenData, error) { + if token == nil || (token.AK == "" || token.SK == "") { + return nil, fmt.Errorf("codearts: cannot refresh without AK/SK") + } + + refreshURL := fmt.Sprintf("%s/v2/login/refresh", APIHost) + body := []byte(`{"duration_seconds":86400}`) + + req, err := http.NewRequestWithContext(ctx, "POST", refreshURL, bytes.NewReader(body)) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", "application/json") + req.Header.Set("X-Security-Token", token.SecurityToken) + req.Header.Set("Access-Key", token.AK) + + // Sign with SDK-HMAC-SHA256 + SignRequest(req, body, token.AK, token.SK, token.SecurityToken) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("codearts: refresh request failed: %w", err) + } + defer resp.Body.Close() + + respBody, _ := io.ReadAll(resp.Body) + + if resp.StatusCode != 200 { + log.Warnf("codearts: refresh returned %d, attempting re-exchange", resp.StatusCode) + if token.XAuthToken != "" { + return a.ExchangeForSecurityToken(ctx, token.XAuthToken) + } + return nil, fmt.Errorf("codearts: refresh failed with status %d", resp.StatusCode) + } + + var result map[string]interface{} + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, fmt.Errorf("codearts: failed to parse refresh response: %w", err) + } + + // Extract credential from response + credMap, ok := result["credential"].(map[string]interface{}) + if !ok { + if r, ok2 := result["result"].(map[string]interface{}); ok2 { + credMap, _ = r["credential"].(map[string]interface{}) + } + } + if credMap == nil { + credMap = result + } + + ak, _ := credMap["access"].(string) + sk, _ := credMap["secret"].(string) + secToken, _ := credMap["securitytoken"].(string) + expiresAtStr, _ := credMap["expires_at"].(string) + expiresAt, _ := time.Parse(time.RFC3339, expiresAtStr) + + if ak == "" 
|| sk == "" { + return nil, fmt.Errorf("codearts: refresh response missing credentials") + } + + return &CodeArtsTokenData{ + AK: ak, + SK: sk, + SecurityToken: secToken, + ExpiresAt: expiresAt, + XAuthToken: token.XAuthToken, + UserID: token.UserID, + UserName: token.UserName, + DomainID: token.DomainID, + Email: token.Email, + }, nil +} diff --git a/internal/auth/codearts/models.go b/internal/auth/codearts/models.go new file mode 100644 index 0000000000..2500b2b72c --- /dev/null +++ b/internal/auth/codearts/models.go @@ -0,0 +1,24 @@ +package codearts + +import "time" + +// CodeArtsTokenData holds the authentication credentials. +type CodeArtsTokenData struct { + AK string `json:"access"` + SK string `json:"secret"` + SecurityToken string `json:"securitytoken"` + ExpiresAt time.Time `json:"expires_at"` + XAuthToken string `json:"x_auth_token,omitempty"` + Email string `json:"email,omitempty"` + UserID string `json:"user_id,omitempty"` + UserName string `json:"user_name,omitempty"` + DomainID string `json:"domain_id,omitempty"` +} + +// IsExpired returns true if the token is expired or will expire within margin. 
+func (t *CodeArtsTokenData) IsExpired(margin time.Duration) bool { + if t.ExpiresAt.IsZero() { + return true + } + return time.Now().Add(margin).After(t.ExpiresAt) +} diff --git a/internal/auth/codearts/oauth_web.go b/internal/auth/codearts/oauth_web.go new file mode 100644 index 0000000000..b0450324d2 --- /dev/null +++ b/internal/auth/codearts/oauth_web.go @@ -0,0 +1,391 @@ +package codearts + +import ( + "context" + "crypto/rand" + "encoding/base64" + "encoding/json" + "fmt" + "net/http" + "net/url" + "os" + "path/filepath" + "sync" + "time" + + "github.com/gin-gonic/gin" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +type sessionStatus string + +const ( + sPending sessionStatus = "pending" + sWaitingCB sessionStatus = "waiting_callback" + sPolling sessionStatus = "polling" + sSuccess sessionStatus = "success" + sFailed sessionStatus = "failed" +) + +type webSession struct { + stateID string + ticketID string + identifier string + status sessionStatus + startedAt time.Time + error string + token *CodeArtsTokenData + cancel context.CancelFunc +} + +// OAuthWebHandler handles CodeArts OAuth web login flow. +type OAuthWebHandler struct { + cfg *config.Config + sessions map[string]*webSession + // Map ticket_id -> stateID for callback lookup + ticketToState map[string]string + mu sync.RWMutex + auth *CodeArtsAuth +} + +// NewOAuthWebHandler creates a new CodeArts OAuth web handler. +func NewOAuthWebHandler(cfg *config.Config) *OAuthWebHandler { + return &OAuthWebHandler{ + cfg: cfg, + sessions: make(map[string]*webSession), + ticketToState: make(map[string]string), + auth: NewCodeArtsAuth(nil), + } +} + +// RegisterRoutes registers CodeArts OAuth web routes. 
+func (h *OAuthWebHandler) RegisterRoutes(router gin.IRouter) { + oauth := router.Group("/v0/oauth/codearts") + { + oauth.GET("", h.handleIndex) + oauth.GET("/start", h.handleStart) + oauth.GET("/callback", h.handleCallback) + oauth.GET("/status", h.handleStatus) + } + // Root-level callback: HuaweiCloud redirects to http://127.0.0.1:{port}/callback + router.GET("/callback", h.handleCallback) +} + +func generateState() (string, error) { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return "", err + } + return base64.RawURLEncoding.EncodeToString(b), nil +} + +func generateTicketID() string { + b := make([]byte, 32) + rand.Read(b) + return fmt.Sprintf("%x", b) +} + +func (h *OAuthWebHandler) handleIndex(c *gin.Context) { + c.Header("Content-Type", "text/html; charset=utf-8") + c.String(http.StatusOK, codeArtsLoginPage) +} + +func (h *OAuthWebHandler) handleStart(c *gin.Context) { + stateID, err := generateState() + if err != nil { + c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to generate state"}) + return + } + + ticketID := generateTicketID() + + port := h.cfg.Port + if port == 0 { + port = 8318 + } + + sess := &webSession{ + stateID: stateID, + ticketID: ticketID, + status: sWaitingCB, + startedAt: time.Now(), + } + + h.mu.Lock() + h.sessions[stateID] = sess + h.ticketToState[ticketID] = stateID + h.mu.Unlock() + + loginURL := h.auth.AuthorizationURL(ticketID, port) + + log.Infof("CodeArts OAuth: session %s started, login URL: %s", stateID, loginURL) + + if c.GetHeader("Accept") == "application/json" { + c.JSON(http.StatusOK, gin.H{"url": loginURL, "state": stateID}) + return + } + + c.Header("Content-Type", "text/html; charset=utf-8") + c.String(http.StatusOK, fmt.Sprintf(codeArtsWaitingPage, loginURL, stateID)) +} + +// handleCallback receives the callback from HuaweiCloud after user login. 
+// Python: GET /callback?identifier=XXX&redirect=YYY +// The redirect URL contains ticket_id which we use to match the correct session. +func (h *OAuthWebHandler) handleCallback(c *gin.Context) { + identifier := c.Query("identifier") + redirectURL := c.Query("redirect") + + log.Infof("CodeArts OAuth: callback received, identifier=%s, redirect=%s", identifier, redirectURL) + + if identifier == "" { + c.JSON(http.StatusOK, gin.H{"success": false, "error": "lack argument identifier"}) + return + } + + // Extract ticket_id from redirect URL to match the correct session + var ticketFromRedirect string + if redirectURL != "" { + if parsed, err := url.Parse(redirectURL); err == nil { + ticketFromRedirect = parsed.Query().Get("ticket_id") + } + } + + h.mu.Lock() + var matchedSess *webSession + + // First try: match by ticket_id from redirect URL + if ticketFromRedirect != "" { + if stateID, ok := h.ticketToState[ticketFromRedirect]; ok { + if sess, ok2 := h.sessions[stateID]; ok2 { + sess.identifier = identifier + sess.status = sPolling + matchedSess = sess + log.Infof("CodeArts OAuth: matched session by ticket_id=%s", ticketFromRedirect) + } + } + } + + // Fallback: match the most recent waiting session + if matchedSess == nil { + var latestSess *webSession + for _, sess := range h.sessions { + if sess.status == sWaitingCB { + if latestSess == nil || sess.startedAt.After(latestSess.startedAt) { + latestSess = sess + } + } + } + if latestSess != nil { + latestSess.identifier = identifier + latestSess.status = sPolling + matchedSess = latestSess + log.Infof("CodeArts OAuth: matched session by fallback (latest waiting), ticket=%s", latestSess.ticketID) + } + } + h.mu.Unlock() + + if matchedSess != nil { + ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) + matchedSess.cancel = cancel + go h.pollLogin(ctx, matchedSess) + } else { + log.Warn("CodeArts OAuth: no matching session found for callback") + } + + if redirectURL != "" { + 
c.Redirect(http.StatusTemporaryRedirect, redirectURL) + } else { + c.Header("Content-Type", "text/html; charset=utf-8") + c.String(http.StatusOK, `Authentication successful

+<h2>✅ Authentication successful!</h2>
+<p>You can close this tab.</p>
+<p>This tab will close automatically in 3 seconds.</p>
+<script>setTimeout(function () { window.close(); }, 3000);</script>

`) + } +} + +func (h *OAuthWebHandler) pollLogin(ctx context.Context, sess *webSession) { + if sess.cancel != nil { + defer sess.cancel() + } + + log.Infof("CodeArts OAuth: polling for login result, ticket=%s, identifier=%s", sess.ticketID, sess.identifier) + + // Poll with ticket_id + identifier (matching Python: poll_login_ticket) + authResult, err := h.auth.PollForLoginResult(ctx, sess.ticketID, sess.identifier) + if err != nil { + h.mu.Lock() + sess.status = sFailed + sess.error = err.Error() + h.mu.Unlock() + log.Errorf("CodeArts OAuth: poll failed: %v", err) + return + } + + // Process login result: extract credential or exchange x_auth_token + tokenData, err := h.auth.ProcessLoginResult(ctx, authResult) + if err != nil { + h.mu.Lock() + sess.status = sFailed + sess.error = err.Error() + h.mu.Unlock() + log.Errorf("CodeArts OAuth: process result failed: %v", err) + return + } + + h.mu.Lock() + sess.status = sSuccess + sess.token = tokenData + h.mu.Unlock() + + // Save auth file + h.saveTokenToFile(tokenData) + log.Infof("CodeArts OAuth: authentication successful for user %s", tokenData.UserName) +} + +func (h *OAuthWebHandler) handleStatus(c *gin.Context) { + stateID := c.Query("state") + if stateID == "" { + c.JSON(http.StatusBadRequest, gin.H{"error": "missing state"}) + return + } + + h.mu.RLock() + sess, ok := h.sessions[stateID] + h.mu.RUnlock() + + if !ok { + c.JSON(http.StatusNotFound, gin.H{"error": "session not found"}) + return + } + + switch sess.status { + case sSuccess: + msg := "Login successful! Token saved." + if sess.token != nil && sess.token.UserName != "" { + msg = fmt.Sprintf("Login successful! 
User: %s", sess.token.UserName) + } + c.JSON(http.StatusOK, gin.H{"status": "success", "message": msg}) + case sFailed: + c.JSON(http.StatusOK, gin.H{"status": "failed", "error": sess.error}) + case sPolling: + c.JSON(http.StatusOK, gin.H{"status": "pending", "message": "Polling for login result..."}) + default: + c.JSON(http.StatusOK, gin.H{"status": "pending", "message": "Waiting for browser callback..."}) + } +} + +func (h *OAuthWebHandler) saveTokenToFile(tokenData *CodeArtsTokenData) { + authDir := "" + if h.cfg != nil && h.cfg.AuthDir != "" { + var err error + authDir, err = util.ResolveAuthDir(h.cfg.AuthDir) + if err != nil { + log.Errorf("CodeArts OAuth: failed to resolve auth directory: %v", err) + } + } + if authDir == "" { + home, err := os.UserHomeDir() + if err != nil { + log.Errorf("CodeArts OAuth: failed to get home directory: %v", err) + return + } + authDir = filepath.Join(home, ".cli-proxy-api") + } + if err := os.MkdirAll(authDir, 0700); err != nil { + log.Errorf("CodeArts OAuth: failed to create auth directory: %v", err) + return + } + + fileName := "codearts-token.json" + if tokenData.UserName != "" { + fileName = fmt.Sprintf("codearts-%s.json", tokenData.UserName) + } + + // Save in the same format as the file synthesizer expects: + // { "type": "codearts", ... 
} + storage := map[string]interface{}{ + "type": "codearts", + "ak": tokenData.AK, + "sk": tokenData.SK, + "security_token": tokenData.SecurityToken, + "x_auth_token": tokenData.XAuthToken, + "expires_at": tokenData.ExpiresAt.Format(time.RFC3339), + "user_id": tokenData.UserID, + "user_name": tokenData.UserName, + "domain_id": tokenData.DomainID, + "email": tokenData.Email, + "last_refresh": time.Now().Format(time.RFC3339), + } + + data, err := json.MarshalIndent(storage, "", " ") + if err != nil { + log.Errorf("CodeArts OAuth: failed to marshal token: %v", err) + return + } + + authFilePath := filepath.Join(authDir, fileName) + if err := os.WriteFile(authFilePath, data, 0600); err != nil { + log.Errorf("CodeArts OAuth: failed to write auth file: %v", err) + return + } + log.Infof("CodeArts OAuth: token saved to %s", authFilePath) +} + +// HTML templates +const codeArtsLoginPage = ` +CodeArts IDE Login + +
+<h1>🔑 CodeArts IDE Login</h1>
+<p>Login with your HuaweiCloud account to use CodeArts IDE models through CLIProxyAPI.</p>
+<a href="/v0/oauth/codearts/start">Start Login</a>
` + +const codeArtsWaitingPage = ` +CodeArts IDE Login - Waiting + +
+<h1>🔑 CodeArts IDE Login</h1>
+<p>Click the button below to open the HuaweiCloud login page. After login, you will be redirected back here.</p>
+<a href="%s" target="_blank">Open HuaweiCloud Login</a>
+<p id="status">⏳ Waiting for login callback...</p>
+<script>
+const state = "%s";
+const timer = setInterval(async () => {
+  const r = await fetch("/v0/oauth/codearts/status?state=" + state);
+  const j = await r.json();
+  if (j.status === "success" || j.status === "failed") {
+    clearInterval(timer);
+    document.getElementById("status").textContent =
+      j.status === "success" ? "✅ " + j.message : "❌ " + j.error;
+  }
+}, 2000);
+</script>
+` diff --git a/internal/auth/codearts/signer.go b/internal/auth/codearts/signer.go new file mode 100644 index 0000000000..5523f5945c --- /dev/null +++ b/internal/auth/codearts/signer.go @@ -0,0 +1,159 @@ +package codearts + +import ( + "crypto/hmac" + "crypto/sha256" + "encoding/hex" + "fmt" + "net/http" + "net/url" + "sort" + "strings" + "time" +) + +func SignRequest(req *http.Request, body []byte, ak, sk, securityToken string) { + now := time.Now().UTC() + timeStr := now.Format("20060102T150405Z") + + req.Header.Set("X-Sdk-Date", timeStr) + req.Header.Set("host", req.URL.Host) + if securityToken != "" { + req.Header.Set("X-Security-Token", securityToken) + } + + signedHeaderKeys := extractSignedHeaders(req.Header) + + canonicalURI := buildCanonicalURI(req.URL.Path) + canonicalQuery := buildCanonicalQueryString(req.URL.Query()) + canonicalHdrs := buildCanonicalHeaders(req, signedHeaderKeys) + signedHeadersStr := strings.Join(signedHeaderKeys, ";") + + bodyHash := sha256Hex(body) + + canonicalReq := fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s", + req.Method, canonicalURI, canonicalQuery, + canonicalHdrs, signedHeadersStr, bodyHash) + + stringToSign := fmt.Sprintf("SDK-HMAC-SHA256\n%s\n%s", + timeStr, sha256Hex([]byte(canonicalReq))) + + signature := hmacSHA256Hex([]byte(sk), []byte(stringToSign)) + + authHeader := fmt.Sprintf("SDK-HMAC-SHA256 Access=%s, SignedHeaders=%s, Signature=%s", + ak, signedHeadersStr, signature) + req.Header.Set("Authorization", authHeader) +} + +func extractSignedHeaders(headers http.Header) []string { + var sh []string + for key := range headers { + lower := strings.ToLower(key) + if strings.HasPrefix(lower, "content-type") || strings.Contains(lower, "_") { + continue + } + sh = append(sh, lower) + } + sort.Strings(sh) + return sh +} + +func buildCanonicalURI(rawPath string) string { + parts := strings.Split(rawPath, "/") + var encoded []string + for _, p := range parts { + encoded = append(encoded, sdkEscape(p)) + } + path := 
strings.Join(encoded, "/") + if len(path) == 0 || path[len(path)-1] != '/' { + path = path + "/" + } + return path +} + +func buildCanonicalQueryString(query url.Values) string { + if len(query) == 0 { + return "" + } + keys := make([]string, 0, len(query)) + for k := range query { + keys = append(keys, k) + } + sort.Strings(keys) + var parts []string + for _, k := range keys { + vals := query[k] + sort.Strings(vals) + for _, v := range vals { + parts = append(parts, sdkEscape(k)+"="+sdkEscape(v)) + } + } + return strings.Join(parts, "&") +} + +func buildCanonicalHeaders(req *http.Request, signedHeaderKeys []string) string { + headerMap := make(map[string][]string) + for k, v := range req.Header { + lower := strings.ToLower(k) + if _, ok := headerMap[lower]; !ok { + headerMap[lower] = make([]string, 0) + } + headerMap[lower] = append(headerMap[lower], v...) + } + + var lines []string + for _, key := range signedHeaderKeys { + values := headerMap[key] + if key == "host" { + values = []string{req.URL.Host} + } + sort.Strings(values) + for _, v := range values { + lines = append(lines, key+":"+strings.TrimSpace(v)) + } + } + return fmt.Sprintf("%s\n", strings.Join(lines, "\n")) +} + +func sdkEscape(s string) string { + hexCount := 0 + for i := 0; i < len(s); i++ { + c := s[i] + if shouldEscape(c) { + hexCount++ + } + } + if hexCount == 0 { + return s + } + t := make([]byte, len(s)+2*hexCount) + j := 0 + for i := 0; i < len(s); i++ { + c := s[i] + if shouldEscape(c) { + t[j] = '%' + t[j+1] = "0123456789ABCDEF"[c>>4] + t[j+2] = "0123456789ABCDEF"[c&15] + j += 3 + } else { + t[j] = s[i] + j++ + } + } + return string(t) +} + +func shouldEscape(c byte) bool { + return !((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9') || c == '_' || c == '-' || c == '~' || c == '.') +} + +func sha256Hex(data []byte) string { + h := sha256.Sum256(data) + return hex.EncodeToString(h[:]) +} + +func hmacSHA256Hex(key, data []byte) string { + h := hmac.New(sha256.New, 
key) + h.Write(data) + return hex.EncodeToString(h.Sum(nil)) +} diff --git a/internal/auth/codebuddy/codebuddy_auth.go b/internal/auth/codebuddy/codebuddy_auth.go new file mode 100644 index 0000000000..ce5455045c --- /dev/null +++ b/internal/auth/codebuddy/codebuddy_auth.go @@ -0,0 +1,335 @@ +package codebuddy + +import ( + "bytes" + "context" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "strings" + "time" + + "github.com/google/uuid" + log "github.com/sirupsen/logrus" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" +) + +const ( + BaseURL = "https://copilot.tencent.com" + DefaultDomain = "www.codebuddy.cn" + UserAgent = "CodeBuddyIDE/4.9.7 CodeBuddy/4.9.7" + + codeBuddyStatePath = "/v2/plugin/auth/state" + codeBuddyTokenPath = "/v2/plugin/auth/token" + codeBuddyRefreshPath = "/v2/plugin/auth/token/refresh" + pollInterval = 5 * time.Second + maxPollDuration = 5 * time.Minute + codeLoginPending = 11217 + codeSuccess = 0 +) + +type CodeBuddyAuth struct { + httpClient *http.Client + cfg *config.Config + baseURL string +} + +func NewCodeBuddyAuth(cfg *config.Config) *CodeBuddyAuth { + httpClient := &http.Client{Timeout: 30 * time.Second} + if cfg != nil { + httpClient = util.SetProxy(&cfg.SDKConfig, httpClient) + } + return &CodeBuddyAuth{httpClient: httpClient, cfg: cfg, baseURL: BaseURL} +} + +// AuthState holds the state and auth URL returned by the auth state API. +type AuthState struct { + State string + AuthURL string +} + +// FetchAuthState calls POST /v2/plugin/auth/state?platform=CLI to get the state and login URL. 
+func (a *CodeBuddyAuth) FetchAuthState(ctx context.Context) (*AuthState, error) { + stateURL := fmt.Sprintf("%s%s?platform=CLI", a.baseURL, codeBuddyStatePath) + body := []byte("{}") + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, stateURL, bytes.NewReader(body)) + if err != nil { + return nil, fmt.Errorf("codebuddy: failed to create auth state request: %w", err) + } + + requestID := uuid.NewString() + req.Header.Set("Accept", "application/json, text/plain, */*") + req.Header.Set("Content-Type", "application/json") + req.Header.Set("X-Requested-With", "XMLHttpRequest") + req.Header.Set("X-Domain", "copilot.tencent.com") + req.Header.Set("X-No-Authorization", "true") + req.Header.Set("X-No-User-Id", "true") + req.Header.Set("X-No-Enterprise-Id", "true") + req.Header.Set("X-No-Department-Info", "true") + req.Header.Set("X-Product", "SaaS") + req.Header.Set("User-Agent", UserAgent) + req.Header.Set("X-Request-ID", requestID) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("codebuddy: auth state request failed: %w", err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("codebuddy auth state: close body error: %v", errClose) + } + }() + + bodyBytes, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("codebuddy: failed to read auth state response: %w", err) + } + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("codebuddy: auth state request returned status %d: %s", resp.StatusCode, string(bodyBytes)) + } + + var result struct { + Code int `json:"code"` + Msg string `json:"msg"` + Data *struct { + State string `json:"state"` + AuthURL string `json:"authUrl"` + } `json:"data"` + } + if err = json.Unmarshal(bodyBytes, &result); err != nil { + return nil, fmt.Errorf("codebuddy: failed to parse auth state response: %w", err) + } + if result.Code != codeSuccess { + return nil, fmt.Errorf("codebuddy: auth state request failed with code %d: %s", 
result.Code, result.Msg) + } + if result.Data == nil || result.Data.State == "" || result.Data.AuthURL == "" { + return nil, fmt.Errorf("codebuddy: auth state response missing state or authUrl") + } + + return &AuthState{ + State: result.Data.State, + AuthURL: result.Data.AuthURL, + }, nil +} + +type pollResponse struct { + Code int `json:"code"` + Msg string `json:"msg"` + RequestID string `json:"requestId"` + Data *struct { + AccessToken string `json:"accessToken"` + RefreshToken string `json:"refreshToken"` + ExpiresIn int64 `json:"expiresIn"` + TokenType string `json:"tokenType"` + Domain string `json:"domain"` + } `json:"data"` +} + +// doPollRequest performs a single polling request, safely reading and closing the response body +func (a *CodeBuddyAuth) doPollRequest(ctx context.Context, pollURL string) ([]byte, int, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodGet, pollURL, nil) + if err != nil { + return nil, 0, fmt.Errorf("%w: %v", ErrTokenFetchFailed, err) + } + a.applyPollHeaders(req) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, 0, err + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("codebuddy poll: close body error: %v", errClose) + } + }() + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, resp.StatusCode, fmt.Errorf("codebuddy poll: failed to read response body: %w", err) + } + return body, resp.StatusCode, nil +} + +// PollForToken polls until the user completes browser authorization and returns auth data. 
+func (a *CodeBuddyAuth) PollForToken(ctx context.Context, state string) (*CodeBuddyTokenStorage, error) { + deadline := time.Now().Add(maxPollDuration) + pollURL := fmt.Sprintf("%s%s?state=%s", a.baseURL, codeBuddyTokenPath, url.QueryEscape(state)) + + for time.Now().Before(deadline) { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-time.After(pollInterval): + } + + body, statusCode, err := a.doPollRequest(ctx, pollURL) + if err != nil { + log.Debugf("codebuddy poll: request error: %v", err) + continue + } + + if statusCode != http.StatusOK { + log.Debugf("codebuddy poll: unexpected status %d", statusCode) + continue + } + + var result pollResponse + if err := json.Unmarshal(body, &result); err != nil { + continue + } + + switch result.Code { + case codeSuccess: + if result.Data == nil { + return nil, fmt.Errorf("%w: empty data in response", ErrTokenFetchFailed) + } + userID, _ := a.DecodeUserID(result.Data.AccessToken) + return &CodeBuddyTokenStorage{ + AccessToken: result.Data.AccessToken, + RefreshToken: result.Data.RefreshToken, + ExpiresIn: result.Data.ExpiresIn, + TokenType: result.Data.TokenType, + Domain: result.Data.Domain, + UserID: userID, + Type: "codebuddy", + }, nil + case codeLoginPending: + // continue polling + default: + // TODO: when the CodeBuddy API error code for user denial is known, + // return ErrAccessDenied here instead of ErrTokenFetchFailed. + return nil, fmt.Errorf("%w: server returned code %d: %s", ErrTokenFetchFailed, result.Code, result.Msg) + } + } + return nil, ErrPollingTimeout +} + +// DecodeUserID decodes the sub field from a JWT access token as the user ID. 
+func (a *CodeBuddyAuth) DecodeUserID(accessToken string) (string, error) { + parts := strings.Split(accessToken, ".") + if len(parts) < 2 { + return "", ErrJWTDecodeFailed + } + payload, err := base64.RawURLEncoding.DecodeString(parts[1]) + if err != nil { + return "", fmt.Errorf("%w: %v", ErrJWTDecodeFailed, err) + } + var claims struct { + Sub string `json:"sub"` + } + if err := json.Unmarshal(payload, &claims); err != nil { + return "", fmt.Errorf("%w: %v", ErrJWTDecodeFailed, err) + } + if claims.Sub == "" { + return "", fmt.Errorf("%w: sub claim is empty", ErrJWTDecodeFailed) + } + return claims.Sub, nil +} + +// RefreshToken exchanges a refresh token for a new access token. +// It calls POST /v2/plugin/auth/token/refresh with the required headers. +func (a *CodeBuddyAuth) RefreshToken(ctx context.Context, accessToken, refreshToken, userID, domain string) (*CodeBuddyTokenStorage, error) { + if domain == "" { + domain = DefaultDomain + } + refreshURL := fmt.Sprintf("%s%s", a.baseURL, codeBuddyRefreshPath) + body := []byte("{}") + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, refreshURL, bytes.NewReader(body)) + if err != nil { + return nil, fmt.Errorf("codebuddy: failed to create refresh request: %w", err) + } + + requestID := strings.ReplaceAll(uuid.New().String(), "-", "") + req.Header.Set("Accept", "application/json, text/plain, */*") + req.Header.Set("Content-Type", "application/json") + req.Header.Set("X-Requested-With", "XMLHttpRequest") + req.Header.Set("X-Domain", domain) + req.Header.Set("X-Refresh-Token", refreshToken) + req.Header.Set("X-Auth-Refresh-Source", "plugin") + req.Header.Set("X-Request-ID", requestID) + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("X-User-Id", userID) + req.Header.Set("X-Product", "SaaS") + req.Header.Set("User-Agent", UserAgent) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("codebuddy: refresh request failed: %w", err) + } + defer func() { + 
if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("codebuddy refresh: close body error: %v", errClose) + } + }() + + bodyBytes, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("codebuddy: failed to read refresh response: %w", err) + } + + if resp.StatusCode == http.StatusUnauthorized || resp.StatusCode == http.StatusForbidden { + return nil, fmt.Errorf("codebuddy: refresh token rejected (status %d)", resp.StatusCode) + } + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("codebuddy: refresh failed with status %d: %s", resp.StatusCode, string(bodyBytes)) + } + + var result struct { + Code int `json:"code"` + Msg string `json:"msg"` + Data *struct { + AccessToken string `json:"accessToken"` + RefreshToken string `json:"refreshToken"` + ExpiresIn int64 `json:"expiresIn"` + RefreshExpiresIn int64 `json:"refreshExpiresIn"` + TokenType string `json:"tokenType"` + Domain string `json:"domain"` + } `json:"data"` + } + if err = json.Unmarshal(bodyBytes, &result); err != nil { + return nil, fmt.Errorf("codebuddy: failed to parse refresh response: %w", err) + } + if result.Code != codeSuccess { + return nil, fmt.Errorf("codebuddy: refresh failed with code %d: %s", result.Code, result.Msg) + } + if result.Data == nil { + return nil, fmt.Errorf("codebuddy: empty data in refresh response") + } + + newUserID, _ := a.DecodeUserID(result.Data.AccessToken) + if newUserID == "" { + newUserID = userID + } + tokenDomain := result.Data.Domain + if tokenDomain == "" { + tokenDomain = domain + } + + return &CodeBuddyTokenStorage{ + AccessToken: result.Data.AccessToken, + RefreshToken: result.Data.RefreshToken, + ExpiresIn: result.Data.ExpiresIn, + RefreshExpiresIn: result.Data.RefreshExpiresIn, + TokenType: result.Data.TokenType, + Domain: tokenDomain, + UserID: newUserID, + Type: "codebuddy", + }, nil +} + +func (a *CodeBuddyAuth) applyPollHeaders(req *http.Request) { + req.Header.Set("Accept", "application/json, text/plain, */*") + 
req.Header.Set("User-Agent", UserAgent) + req.Header.Set("X-Requested-With", "XMLHttpRequest") + req.Header.Set("X-No-Authorization", "true") + req.Header.Set("X-No-User-Id", "true") + req.Header.Set("X-No-Enterprise-Id", "true") + req.Header.Set("X-No-Department-Info", "true") + req.Header.Set("X-Product", "SaaS") +} diff --git a/internal/auth/codebuddy/codebuddy_auth_http_test.go b/internal/auth/codebuddy/codebuddy_auth_http_test.go new file mode 100644 index 0000000000..125d7c0343 --- /dev/null +++ b/internal/auth/codebuddy/codebuddy_auth_http_test.go @@ -0,0 +1,285 @@ +package codebuddy + +import ( + "context" + "encoding/base64" + "encoding/json" + "net/http" + "net/http/httptest" + "testing" +) + +// newTestAuth creates a CodeBuddyAuth pointing at the given test server. +func newTestAuth(serverURL string) *CodeBuddyAuth { + return &CodeBuddyAuth{ + httpClient: http.DefaultClient, + baseURL: serverURL, + } +} + +// fakeJWT builds a minimal JWT with the given sub claim for testing. +func fakeJWT(sub string) string { + header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"RS256"}`)) + payload, _ := json.Marshal(map[string]any{"sub": sub, "iat": 1234567890}) + encodedPayload := base64.RawURLEncoding.EncodeToString(payload) + return header + "." 
+ encodedPayload + ".sig" +} + +// --- FetchAuthState tests --- + +func TestFetchAuthState_Success(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodPost { + t.Errorf("expected POST, got %s", r.Method) + } + if got := r.URL.Path; got != codeBuddyStatePath { + t.Errorf("expected path %s, got %s", codeBuddyStatePath, got) + } + if got := r.URL.Query().Get("platform"); got != "CLI" { + t.Errorf("expected platform=CLI, got %s", got) + } + if got := r.Header.Get("User-Agent"); got != UserAgent { + t.Errorf("expected User-Agent %s, got %s", UserAgent, got) + } + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]any{ + "code": 0, + "msg": "ok", + "data": map[string]any{ + "state": "test-state-abc", + "authUrl": "https://example.com/login?state=test-state-abc", + }, + }) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + result, err := auth.FetchAuthState(context.Background()) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if result.State != "test-state-abc" { + t.Errorf("expected state 'test-state-abc', got '%s'", result.State) + } + if result.AuthURL != "https://example.com/login?state=test-state-abc" { + t.Errorf("unexpected authURL: %s", result.AuthURL) + } +} + +func TestFetchAuthState_NonOKStatus(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + w.WriteHeader(http.StatusInternalServerError) + _, _ = w.Write([]byte("internal error")) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + _, err := auth.FetchAuthState(context.Background()) + if err == nil { + t.Fatal("expected error for non-200 status") + } +} + +func TestFetchAuthState_APIErrorCode(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + _ = json.NewEncoder(w).Encode(map[string]any{ + "code": 10001, + "msg": "rate limited", + }) + 
})) + defer srv.Close() + + auth := newTestAuth(srv.URL) + _, err := auth.FetchAuthState(context.Background()) + if err == nil { + t.Fatal("expected error for non-zero code") + } +} + +func TestFetchAuthState_MissingData(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + _ = json.NewEncoder(w).Encode(map[string]any{ + "code": 0, + "msg": "ok", + "data": map[string]any{ + "state": "", + "authUrl": "", + }, + }) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + _, err := auth.FetchAuthState(context.Background()) + if err == nil { + t.Fatal("expected error for empty state/authUrl") + } +} + +// --- RefreshToken tests --- + +func TestRefreshToken_Success(t *testing.T) { + newAccessToken := fakeJWT("refreshed-user-456") + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodPost { + t.Errorf("expected POST, got %s", r.Method) + } + if got := r.URL.Path; got != codeBuddyRefreshPath { + t.Errorf("expected path %s, got %s", codeBuddyRefreshPath, got) + } + if got := r.Header.Get("X-Refresh-Token"); got != "old-refresh-token" { + t.Errorf("expected X-Refresh-Token 'old-refresh-token', got '%s'", got) + } + if got := r.Header.Get("Authorization"); got != "Bearer old-access-token" { + t.Errorf("expected Authorization 'Bearer old-access-token', got '%s'", got) + } + if got := r.Header.Get("X-User-Id"); got != "user-123" { + t.Errorf("expected X-User-Id 'user-123', got '%s'", got) + } + if got := r.Header.Get("X-Domain"); got != "custom.domain.com" { + t.Errorf("expected X-Domain 'custom.domain.com', got '%s'", got) + } + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]any{ + "code": 0, + "msg": "ok", + "data": map[string]any{ + "accessToken": newAccessToken, + "refreshToken": "new-refresh-token", + "expiresIn": 3600, + "refreshExpiresIn": 86400, + "tokenType": "bearer", + "domain": 
"custom.domain.com", + }, + }) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + storage, err := auth.RefreshToken(context.Background(), "old-access-token", "old-refresh-token", "user-123", "custom.domain.com") + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if storage.AccessToken != newAccessToken { + t.Errorf("expected new access token, got '%s'", storage.AccessToken) + } + if storage.RefreshToken != "new-refresh-token" { + t.Errorf("expected 'new-refresh-token', got '%s'", storage.RefreshToken) + } + if storage.UserID != "refreshed-user-456" { + t.Errorf("expected userID 'refreshed-user-456', got '%s'", storage.UserID) + } + if storage.ExpiresIn != 3600 { + t.Errorf("expected expiresIn 3600, got %d", storage.ExpiresIn) + } + if storage.RefreshExpiresIn != 86400 { + t.Errorf("expected refreshExpiresIn 86400, got %d", storage.RefreshExpiresIn) + } + if storage.Domain != "custom.domain.com" { + t.Errorf("expected domain 'custom.domain.com', got '%s'", storage.Domain) + } + if storage.Type != "codebuddy" { + t.Errorf("expected type 'codebuddy', got '%s'", storage.Type) + } +} + +func TestRefreshToken_DefaultDomain(t *testing.T) { + var receivedDomain string + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + receivedDomain = r.Header.Get("X-Domain") + _ = json.NewEncoder(w).Encode(map[string]any{ + "code": 0, + "msg": "ok", + "data": map[string]any{ + "accessToken": fakeJWT("user-1"), + "refreshToken": "rt", + "expiresIn": 3600, + "tokenType": "bearer", + "domain": DefaultDomain, + }, + }) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + _, err := auth.RefreshToken(context.Background(), "at", "rt", "uid", "") + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if receivedDomain != DefaultDomain { + t.Errorf("expected default domain '%s', got '%s'", DefaultDomain, receivedDomain) + } +} + +func TestRefreshToken_Unauthorized(t *testing.T) { + srv := 
httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + w.WriteHeader(http.StatusUnauthorized) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + _, err := auth.RefreshToken(context.Background(), "at", "rt", "uid", "d") + if err == nil { + t.Fatal("expected error for 401 response") + } +} + +func TestRefreshToken_Forbidden(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + w.WriteHeader(http.StatusForbidden) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + _, err := auth.RefreshToken(context.Background(), "at", "rt", "uid", "d") + if err == nil { + t.Fatal("expected error for 403 response") + } +} + +func TestRefreshToken_APIErrorCode(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + _ = json.NewEncoder(w).Encode(map[string]any{ + "code": 40001, + "msg": "invalid refresh token", + }) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + _, err := auth.RefreshToken(context.Background(), "at", "rt", "uid", "d") + if err == nil { + t.Fatal("expected error for non-zero API code") + } +} + +func TestRefreshToken_FallbackUserIDAndDomain(t *testing.T) { + // When the new access token cannot be decoded for userID, it should fall back to the provided one. + // When the response domain is empty, it should fall back to the request domain. 
+ srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + _ = json.NewEncoder(w).Encode(map[string]any{ + "code": 0, + "msg": "ok", + "data": map[string]any{ + "accessToken": "not-a-valid-jwt", + "refreshToken": "new-rt", + "expiresIn": 7200, + "tokenType": "bearer", + "domain": "", + }, + }) + })) + defer srv.Close() + + auth := newTestAuth(srv.URL) + storage, err := auth.RefreshToken(context.Background(), "at", "rt", "original-uid", "original.domain.com") + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if storage.UserID != "original-uid" { + t.Errorf("expected fallback userID 'original-uid', got '%s'", storage.UserID) + } + if storage.Domain != "original.domain.com" { + t.Errorf("expected fallback domain 'original.domain.com', got '%s'", storage.Domain) + } +} diff --git a/internal/auth/codebuddy/codebuddy_auth_test.go b/internal/auth/codebuddy/codebuddy_auth_test.go new file mode 100644 index 0000000000..e2da63539e --- /dev/null +++ b/internal/auth/codebuddy/codebuddy_auth_test.go @@ -0,0 +1,21 @@ +package codebuddy_test + +import ( + "testing" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codebuddy" +) + +func TestDecodeUserID_ValidJWT(t *testing.T) { + // JWT payload: {"sub":"test-user-id-123","iat":1234567890} + // base64url encode: eyJzdWIiOiJ0ZXN0LXVzZXItaWQtMTIzIiwiaWF0IjoxMjM0NTY3ODkwfQ + token := "eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXItaWQtMTIzIiwiaWF0IjoxMjM0NTY3ODkwfQ.sig" + auth := codebuddy.NewCodeBuddyAuth(nil) + userID, err := auth.DecodeUserID(token) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if userID != "test-user-id-123" { + t.Errorf("expected 'test-user-id-123', got '%s'", userID) + } +} diff --git a/internal/auth/codebuddy/errors.go b/internal/auth/codebuddy/errors.go new file mode 100644 index 0000000000..7a35809bae --- /dev/null +++ b/internal/auth/codebuddy/errors.go @@ -0,0 +1,25 @@ +package codebuddy + +import "errors" + +var ( + ErrPollingTimeout 
= errors.New("codebuddy: polling timeout, user did not authorize in time") + ErrAccessDenied = errors.New("codebuddy: access denied by user") + ErrTokenFetchFailed = errors.New("codebuddy: failed to fetch token from server") + ErrJWTDecodeFailed = errors.New("codebuddy: failed to decode JWT token") +) + +func GetUserFriendlyMessage(err error) string { + switch { + case errors.Is(err, ErrPollingTimeout): + return "Authentication timed out. Please try again." + case errors.Is(err, ErrAccessDenied): + return "Access denied. Please try again and approve the login request." + case errors.Is(err, ErrJWTDecodeFailed): + return "Failed to decode token. Please try logging in again." + case errors.Is(err, ErrTokenFetchFailed): + return "Failed to fetch token from server. Please try again." + default: + return "Authentication failed: " + err.Error() + } +} diff --git a/internal/auth/codebuddy/token.go b/internal/auth/codebuddy/token.go new file mode 100644 index 0000000000..76cc74ccc0 --- /dev/null +++ b/internal/auth/codebuddy/token.go @@ -0,0 +1,65 @@ +// Package codebuddy provides authentication and token management functionality +// for CodeBuddy AI services. It handles OAuth2 token storage, serialization, +// and retrieval for maintaining authenticated sessions with the CodeBuddy API. +package codebuddy + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" +) + +// CodeBuddyTokenStorage stores OAuth token information for CodeBuddy API authentication. +// It maintains compatibility with the existing auth system while adding CodeBuddy-specific fields +// for managing access tokens and user account information. +type CodeBuddyTokenStorage struct { + // AccessToken is the OAuth2 access token used for authenticating API requests. + AccessToken string `json:"access_token"` + // RefreshToken is the OAuth2 refresh token used to obtain new access tokens. 
+ RefreshToken string `json:"refresh_token"` + // ExpiresIn is the number of seconds until the access token expires. + ExpiresIn int64 `json:"expires_in"` + // RefreshExpiresIn is the number of seconds until the refresh token expires. + RefreshExpiresIn int64 `json:"refresh_expires_in,omitempty"` + // TokenType is the type of token, typically "bearer". + TokenType string `json:"token_type"` + // Domain is the CodeBuddy service domain/region. + Domain string `json:"domain"` + // UserID is the user ID associated with this token. + UserID string `json:"user_id"` + // Type indicates the authentication provider type, always "codebuddy" for this storage. + Type string `json:"type"` +} + +// SaveTokenToFile serializes the CodeBuddy token storage to a JSON file. +// This method creates the necessary directory structure and writes the token +// data in JSON format to the specified file path for persistent storage. +// +// Parameters: +// - authFilePath: The full path where the token file should be saved +// +// Returns: +// - error: An error if the operation fails, nil otherwise +func (s *CodeBuddyTokenStorage) SaveTokenToFile(authFilePath string) error { + misc.LogSavingCredentials(authFilePath) + s.Type = "codebuddy" + if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil { + return fmt.Errorf("failed to create directory: %w", err) + } + + f, err := os.OpenFile(authFilePath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) + if err != nil { + return fmt.Errorf("failed to create token file: %w", err) + } + defer func() { + _ = f.Close() + }() + + if err = json.NewEncoder(f).Encode(s); err != nil { + return fmt.Errorf("failed to write token to file: %w", err) + } + return nil +} diff --git a/internal/auth/codebuddy_ai/codebuddy_ai_auth.go b/internal/auth/codebuddy_ai/codebuddy_ai_auth.go new file mode 100644 index 0000000000..b7cbabda5e --- /dev/null +++ b/internal/auth/codebuddy_ai/codebuddy_ai_auth.go @@ -0,0 +1,286 @@ +package codebuddy_ai + +import ( + "bytes" + 
"context" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "strings" + "time" + + log "github.com/sirupsen/logrus" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" +) + +const ( + BaseURL = "https://www.codebuddy.ai" + DefaultDomain = "www.codebuddy.ai" + UserAgent = "CodeBuddy/1.100.0" + + authStatePath = "/v2/plugin/auth/state" + authTokenPath = "/v2/plugin/auth/token" + authRefreshPath = "/v2/plugin/auth/token/refresh" + pollInterval = 3 * time.Second + maxPollDuration = 5 * time.Minute + codeLoginPending = 11217 + codeSuccess = 0 +) + +type CodeBuddyAIAuth struct { + httpClient *http.Client + cfg *config.Config + baseURL string +} + +func NewCodeBuddyAIAuth(cfg *config.Config) *CodeBuddyAIAuth { + httpClient := &http.Client{Timeout: 30 * time.Second} + if cfg != nil { + httpClient = util.SetProxy(&cfg.SDKConfig, httpClient) + } + return &CodeBuddyAIAuth{httpClient: httpClient, cfg: cfg, baseURL: BaseURL} +} + +type AuthState struct { + State string + AuthURL string +} + +func (a *CodeBuddyAIAuth) FetchAuthState(ctx context.Context) (*AuthState, error) { + stateURL := fmt.Sprintf("%s%s?platform=ide", a.baseURL, authStatePath) + body := []byte("{}") + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, stateURL, bytes.NewReader(body)) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: failed to create auth state request: %w", err) + } + + req.Header.Set("Content-Type", "application/json") + req.Header.Set("X-No-Authorization", "true") + req.Header.Set("X-No-User-Id", "true") + req.Header.Set("X-No-Enterprise-Id", "true") + req.Header.Set("User-Agent", UserAgent) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: auth state request failed: %w", err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("codebuddy-ai auth state: close body error: %v", errClose) + } + 
}() + + bodyBytes, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: failed to read auth state response: %w", err) + } + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("codebuddy-ai: auth state request returned status %d: %s", resp.StatusCode, string(bodyBytes)) + } + + var result struct { + Code int `json:"code"` + Msg string `json:"msg"` + Data *struct { + State string `json:"state"` + AuthURL string `json:"authUrl"` + } `json:"data"` + } + if err = json.Unmarshal(bodyBytes, &result); err != nil { + return nil, fmt.Errorf("codebuddy-ai: failed to parse auth state response: %w", err) + } + if result.Code != codeSuccess { + return nil, fmt.Errorf("codebuddy-ai: auth state request failed with code %d: %s", result.Code, result.Msg) + } + if result.Data == nil || result.Data.State == "" || result.Data.AuthURL == "" { + return nil, fmt.Errorf("codebuddy-ai: auth state response missing state or authUrl") + } + + return &AuthState{ + State: result.Data.State, + AuthURL: result.Data.AuthURL, + }, nil +} + +type pollResponse struct { + Code int `json:"code"` + Msg string `json:"msg"` + Data *struct { + AccessToken string `json:"accessToken"` + RefreshToken string `json:"refreshToken"` + ExpiresIn int64 `json:"expiresIn"` + RefreshExpiresIn int64 `json:"refreshExpiresIn"` + TokenType string `json:"tokenType"` + } `json:"data"` +} + +func (a *CodeBuddyAIAuth) PollForToken(ctx context.Context, state string) (*CodeBuddyAITokenStorage, error) { + deadline := time.Now().Add(maxPollDuration) + pollURL := fmt.Sprintf("%s%s?state=%s", a.baseURL, authTokenPath, url.QueryEscape(state)) + + for time.Now().Before(deadline) { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-time.After(pollInterval): + } + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, pollURL, nil) + if err != nil { + return nil, fmt.Errorf("%w: %v", ErrTokenFetchFailed, err) + } + req.Header.Set("Accept", "application/json") + 
req.Header.Set("X-No-Authorization", "true") + req.Header.Set("User-Agent", UserAgent) + + resp, err := a.httpClient.Do(req) + if err != nil { + log.Debugf("codebuddy-ai poll: request error: %v", err) + continue + } + body, err := io.ReadAll(resp.Body) + _ = resp.Body.Close() + if err != nil { + log.Debugf("codebuddy-ai poll: read error: %v", err) + continue + } + if resp.StatusCode != http.StatusOK { + log.Debugf("codebuddy-ai poll: unexpected status %d", resp.StatusCode) + continue + } + + var result pollResponse + if err := json.Unmarshal(body, &result); err != nil { + continue + } + + switch result.Code { + case codeSuccess: + if result.Data == nil { + return nil, fmt.Errorf("%w: empty data in response", ErrTokenFetchFailed) + } + userID, _ := a.DecodeUserID(result.Data.AccessToken) + return &CodeBuddyAITokenStorage{ + AccessToken: result.Data.AccessToken, + RefreshToken: result.Data.RefreshToken, + ExpiresIn: result.Data.ExpiresIn, + RefreshExpiresIn: result.Data.RefreshExpiresIn, + TokenType: result.Data.TokenType, + Domain: DefaultDomain, + UserID: userID, + Type: "codebuddy-ai", + }, nil + case codeLoginPending: + // continue polling + default: + return nil, fmt.Errorf("%w: server returned code %d: %s", ErrTokenFetchFailed, result.Code, result.Msg) + } + } + return nil, ErrPollingTimeout +} + +func (a *CodeBuddyAIAuth) DecodeUserID(accessToken string) (string, error) { + parts := strings.Split(accessToken, ".") + if len(parts) < 2 { + return "", ErrJWTDecodeFailed + } + payload, err := base64.RawURLEncoding.DecodeString(parts[1]) + if err != nil { + return "", fmt.Errorf("%w: %v", ErrJWTDecodeFailed, err) + } + var claims struct { + Sub string `json:"sub"` + } + if err := json.Unmarshal(payload, &claims); err != nil { + return "", fmt.Errorf("%w: %v", ErrJWTDecodeFailed, err) + } + if claims.Sub == "" { + return "", fmt.Errorf("%w: sub claim is empty", ErrJWTDecodeFailed) + } + return claims.Sub, nil +} + +func (a *CodeBuddyAIAuth) RefreshToken(ctx context.Context, accessToken,
refreshToken, userID, domain string) (*CodeBuddyAITokenStorage, error) { + if domain == "" { + domain = DefaultDomain + } + refreshURL := fmt.Sprintf("%s%s", a.baseURL, authRefreshPath) + body := []byte("{}") + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, refreshURL, bytes.NewReader(body)) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: failed to create refresh request: %w", err) + } + + req.Header.Set("Content-Type", "application/json") + req.Header.Set("X-Domain", domain) + req.Header.Set("X-Refresh-Token", refreshToken) + req.Header.Set("X-Auth-Refresh-Source", "ide-main") + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("X-User-Id", userID) + req.Header.Set("User-Agent", UserAgent) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: refresh request failed: %w", err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("codebuddy-ai refresh: close body error: %v", errClose) + } + }() + + bodyBytes, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: failed to read refresh response: %w", err) + } + + if resp.StatusCode == http.StatusUnauthorized || resp.StatusCode == http.StatusForbidden { + return nil, fmt.Errorf("codebuddy-ai: refresh token rejected (status %d)", resp.StatusCode) + } + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("codebuddy-ai: refresh failed with status %d: %s", resp.StatusCode, string(bodyBytes)) + } + + var result struct { + Code int `json:"code"` + Msg string `json:"msg"` + Data *struct { + AccessToken string `json:"accessToken"` + RefreshToken string `json:"refreshToken"` + ExpiresIn int64 `json:"expiresIn"` + RefreshExpiresIn int64 `json:"refreshExpiresIn"` + TokenType string `json:"tokenType"` + } `json:"data"` + } + if err = json.Unmarshal(bodyBytes, &result); err != nil { + return nil, fmt.Errorf("codebuddy-ai: failed to parse refresh response: %w", err) 
+ } + if result.Code != codeSuccess { + return nil, fmt.Errorf("codebuddy-ai: refresh failed with code %d: %s", result.Code, result.Msg) + } + if result.Data == nil { + return nil, fmt.Errorf("codebuddy-ai: empty data in refresh response") + } + + newUserID, _ := a.DecodeUserID(result.Data.AccessToken) + if newUserID == "" { + newUserID = userID + } + + return &CodeBuddyAITokenStorage{ + AccessToken: result.Data.AccessToken, + RefreshToken: result.Data.RefreshToken, + ExpiresIn: result.Data.ExpiresIn, + RefreshExpiresIn: result.Data.RefreshExpiresIn, + TokenType: result.Data.TokenType, + Domain: domain, + UserID: newUserID, + Type: "codebuddy-ai", + }, nil +} diff --git a/internal/auth/codebuddy_ai/errors.go b/internal/auth/codebuddy_ai/errors.go new file mode 100644 index 0000000000..997c8216f4 --- /dev/null +++ b/internal/auth/codebuddy_ai/errors.go @@ -0,0 +1,25 @@ +package codebuddy_ai + +import "errors" + +var ( + ErrPollingTimeout = errors.New("codebuddy-ai: polling timeout, user did not authorize in time") + ErrAccessDenied = errors.New("codebuddy-ai: access denied by user") + ErrTokenFetchFailed = errors.New("codebuddy-ai: failed to fetch token from server") + ErrJWTDecodeFailed = errors.New("codebuddy-ai: failed to decode JWT token") +) + +func GetUserFriendlyMessage(err error) string { + switch { + case errors.Is(err, ErrPollingTimeout): + return "Authentication timed out. Please try again." + case errors.Is(err, ErrAccessDenied): + return "Access denied. Please try again and approve the login request." + case errors.Is(err, ErrJWTDecodeFailed): + return "Failed to decode token. Please try logging in again." + case errors.Is(err, ErrTokenFetchFailed): + return "Failed to fetch token from server. Please try again." 
+ default: + return "Authentication failed: " + err.Error() + } +} diff --git a/internal/auth/codebuddy_ai/token.go b/internal/auth/codebuddy_ai/token.go new file mode 100644 index 0000000000..c8cf205d66 --- /dev/null +++ b/internal/auth/codebuddy_ai/token.go @@ -0,0 +1,42 @@ +package codebuddy_ai + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" +) + +type CodeBuddyAITokenStorage struct { + AccessToken string `json:"access_token"` + RefreshToken string `json:"refresh_token"` + ExpiresIn int64 `json:"expires_in"` + RefreshExpiresIn int64 `json:"refresh_expires_in,omitempty"` + TokenType string `json:"token_type"` + Domain string `json:"domain"` + UserID string `json:"user_id"` + Type string `json:"type"` +} + +func (s *CodeBuddyAITokenStorage) SaveTokenToFile(authFilePath string) error { + misc.LogSavingCredentials(authFilePath) + s.Type = "codebuddy-ai" + if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil { + return fmt.Errorf("failed to create directory: %w", err) + } + + f, err := os.OpenFile(authFilePath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) + if err != nil { + return fmt.Errorf("failed to create token file: %w", err) + } + defer func() { + _ = f.Close() + }() + + if err = json.NewEncoder(f).Encode(s); err != nil { + return fmt.Errorf("failed to write token to file: %w", err) + } + return nil +} diff --git a/internal/auth/codex/openai_auth.go b/internal/auth/codex/openai_auth.go index 67b54b172d..681747caf5 100644 --- a/internal/auth/codex/openai_auth.go +++ b/internal/auth/codex/openai_auth.go @@ -14,8 +14,8 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" ) diff --git a/internal/auth/codex/openai_auth_test.go 
b/internal/auth/codex/openai_auth_test.go index a7fe83072d..e7d939b0a3 100644 --- a/internal/auth/codex/openai_auth_test.go +++ b/internal/auth/codex/openai_auth_test.go @@ -8,7 +8,7 @@ import ( "sync/atomic" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) type roundTripFunc func(*http.Request) (*http.Response, error) diff --git a/internal/auth/codex/token.go b/internal/auth/codex/token.go index 7f03207195..b2a7bcf21a 100644 --- a/internal/auth/codex/token.go +++ b/internal/auth/codex/token.go @@ -9,7 +9,7 @@ import ( "os" "path/filepath" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" ) // CodexTokenStorage stores OAuth2 token information for OpenAI Codex API authentication. diff --git a/internal/auth/copilot/copilot_auth.go b/internal/auth/copilot/copilot_auth.go new file mode 100644 index 0000000000..d5b6d0881d --- /dev/null +++ b/internal/auth/copilot/copilot_auth.go @@ -0,0 +1,394 @@ +// Package copilot provides authentication and token management for GitHub Copilot API. +// It handles the OAuth2 device flow for secure authentication with the Copilot API. +package copilot + +import ( + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + // copilotAPITokenURL is the endpoint for getting Copilot API tokens from GitHub token. + copilotAPITokenURL = "https://api.github.com/copilot_internal/v2/token" + // copilotAPIEndpoint is the base URL for making API requests. + copilotAPIEndpoint = "https://api.githubcopilot.com" + + // Common HTTP header values for Copilot API requests. 
+ copilotUserAgent = "GithubCopilot/1.0" + copilotEditorVersion = "vscode/1.100.0" + copilotPluginVersion = "copilot/1.300.0" + copilotIntegrationID = "vscode-chat" + copilotOpenAIIntent = "conversation-panel" +) + +// CopilotAPIToken represents the Copilot API token response. +type CopilotAPIToken struct { + // Token is the JWT token for authenticating with the Copilot API. + Token string `json:"token"` + // ExpiresAt is the Unix timestamp when the token expires. + ExpiresAt int64 `json:"expires_at"` + // Endpoints contains the available API endpoints. + Endpoints struct { + API string `json:"api"` + Proxy string `json:"proxy"` + OriginTracker string `json:"origin-tracker"` + Telemetry string `json:"telemetry"` + } `json:"endpoints,omitempty"` + // ErrorDetails contains error information if the request failed. + ErrorDetails *struct { + URL string `json:"url"` + Message string `json:"message"` + DocumentationURL string `json:"documentation_url"` + } `json:"error_details,omitempty"` +} + +// CopilotAuth handles GitHub Copilot authentication flow. +// It provides methods for device flow authentication and token management. +type CopilotAuth struct { + httpClient *http.Client + deviceClient *DeviceFlowClient + cfg *config.Config +} + +// NewCopilotAuth creates a new CopilotAuth service instance. +// It initializes an HTTP client with proxy settings from the provided configuration. +func NewCopilotAuth(cfg *config.Config) *CopilotAuth { + return &CopilotAuth{ + httpClient: util.SetProxy(&cfg.SDKConfig, &http.Client{Timeout: 30 * time.Second}), + deviceClient: NewDeviceFlowClient(cfg), + cfg: cfg, + } +} + +// StartDeviceFlow initiates the device flow authentication. +// Returns the device code response containing the user code and verification URI. 
+func (c *CopilotAuth) StartDeviceFlow(ctx context.Context) (*DeviceCodeResponse, error) { + return c.deviceClient.RequestDeviceCode(ctx) +} + +// WaitForAuthorization polls for user authorization and returns the auth bundle. +func (c *CopilotAuth) WaitForAuthorization(ctx context.Context, deviceCode *DeviceCodeResponse) (*CopilotAuthBundle, error) { + tokenData, err := c.deviceClient.PollForToken(ctx, deviceCode) + if err != nil { + return nil, err + } + + // Fetch the GitHub username + userInfo, err := c.deviceClient.FetchUserInfo(ctx, tokenData.AccessToken) + if err != nil { + log.Warnf("copilot: failed to fetch user info: %v", err) + } + + username := userInfo.Login + if username == "" { + username = "github-user" + } + + return &CopilotAuthBundle{ + TokenData: tokenData, + Username: username, + Email: userInfo.Email, + Name: userInfo.Name, + }, nil +} + +// GetCopilotAPIToken exchanges a GitHub access token for a Copilot API token. +// This token is used to make authenticated requests to the Copilot API. 
+func (c *CopilotAuth) GetCopilotAPIToken(ctx context.Context, githubAccessToken string) (*CopilotAPIToken, error) { + if githubAccessToken == "" { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, fmt.Errorf("github access token is empty")) + } + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, copilotAPITokenURL, nil) + if err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + + req.Header.Set("Authorization", "token "+githubAccessToken) + req.Header.Set("Accept", "application/json") + req.Header.Set("User-Agent", copilotUserAgent) + req.Header.Set("Editor-Version", copilotEditorVersion) + req.Header.Set("Editor-Plugin-Version", copilotPluginVersion) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("copilot api token: close body error: %v", errClose) + } + }() + + bodyBytes, err := io.ReadAll(resp.Body) + if err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + + if !isHTTPSuccess(resp.StatusCode) { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, + fmt.Errorf("status %d: %s", resp.StatusCode, string(bodyBytes))) + } + + var apiToken CopilotAPIToken + if err = json.Unmarshal(bodyBytes, &apiToken); err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + + if apiToken.Token == "" { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, fmt.Errorf("empty copilot api token")) + } + + return &apiToken, nil +} + +// ValidateToken checks if a GitHub access token is valid by attempting to fetch user info. 
+func (c *CopilotAuth) ValidateToken(ctx context.Context, accessToken string) (bool, string, error) { + if accessToken == "" { + return false, "", nil + } + + userInfo, err := c.deviceClient.FetchUserInfo(ctx, accessToken) + if err != nil { + return false, "", err + } + + return true, userInfo.Login, nil +} + +// CreateTokenStorage creates a new CopilotTokenStorage from an auth bundle. +func (c *CopilotAuth) CreateTokenStorage(bundle *CopilotAuthBundle) *CopilotTokenStorage { + return &CopilotTokenStorage{ + AccessToken: bundle.TokenData.AccessToken, + TokenType: bundle.TokenData.TokenType, + Scope: bundle.TokenData.Scope, + Username: bundle.Username, + Email: bundle.Email, + Name: bundle.Name, + Type: "github-copilot", + } +} + +// LoadAndValidateToken loads a token from storage and validates it. +// Returns true if the token is still valid, or an error if the token is invalid or expired. +func (c *CopilotAuth) LoadAndValidateToken(ctx context.Context, storage *CopilotTokenStorage) (bool, error) { + if storage == nil || storage.AccessToken == "" { + return false, fmt.Errorf("no token available") + } + + // Check if we can still use the GitHub token to get a Copilot API token + apiToken, err := c.GetCopilotAPIToken(ctx, storage.AccessToken) + if err != nil { + return false, err + } + + // Check if the API token is expired + if apiToken.ExpiresAt > 0 && time.Now().Unix() >= apiToken.ExpiresAt { + return false, fmt.Errorf("copilot api token expired") + } + + return true, nil +} + +// GetAPIEndpoint returns the Copilot API endpoint URL. +func (c *CopilotAuth) GetAPIEndpoint() string { + return copilotAPIEndpoint +} + +// MakeAuthenticatedRequest creates an authenticated HTTP request to the Copilot API.
+func (c *CopilotAuth) MakeAuthenticatedRequest(ctx context.Context, method, url string, body io.Reader, apiToken *CopilotAPIToken) (*http.Request, error) { + req, err := http.NewRequestWithContext(ctx, method, url, body) + if err != nil { + return nil, fmt.Errorf("failed to create request: %w", err) + } + + req.Header.Set("Authorization", "Bearer "+apiToken.Token) + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", "application/json") + req.Header.Set("User-Agent", copilotUserAgent) + req.Header.Set("Editor-Version", copilotEditorVersion) + req.Header.Set("Editor-Plugin-Version", copilotPluginVersion) + req.Header.Set("Openai-Intent", copilotOpenAIIntent) + req.Header.Set("Copilot-Integration-Id", copilotIntegrationID) + + return req, nil +} + +// CopilotModelEntry represents a single model entry returned by the Copilot /models API. +type CopilotModelEntry struct { + ID string `json:"id"` + Object string `json:"object"` + Created int64 `json:"created"` + OwnedBy string `json:"owned_by"` + Name string `json:"name,omitempty"` + Version string `json:"version,omitempty"` + Capabilities map[string]any `json:"capabilities,omitempty"` +} + +// CopilotModelLimits holds the token limits returned by the Copilot /models API +// under capabilities.limits. These limits vary by account type (individual vs +// business) and are the authoritative source for enforcing prompt size. +type CopilotModelLimits struct { + // MaxContextWindowTokens is the total context window (prompt + output). + MaxContextWindowTokens int + // MaxPromptTokens is the hard limit on input/prompt tokens. + // Exceeding this triggers a 400 error from the Copilot API. + MaxPromptTokens int + // MaxOutputTokens is the maximum number of output/completion tokens. + MaxOutputTokens int +} + +// Limits extracts the token limits from the model's capabilities map. +// Returns nil if no limits are available or the structure is unexpected. 
+// +// Expected Copilot API shape: +// +// "capabilities": { +// "limits": { +// "max_context_window_tokens": 200000, +// "max_prompt_tokens": 168000, +// "max_output_tokens": 32000 +// } +// } +func (e *CopilotModelEntry) Limits() *CopilotModelLimits { + if e.Capabilities == nil { + return nil + } + limitsRaw, ok := e.Capabilities["limits"] + if !ok { + return nil + } + limitsMap, ok := limitsRaw.(map[string]any) + if !ok { + return nil + } + + result := &CopilotModelLimits{ + MaxContextWindowTokens: anyToInt(limitsMap["max_context_window_tokens"]), + MaxPromptTokens: anyToInt(limitsMap["max_prompt_tokens"]), + MaxOutputTokens: anyToInt(limitsMap["max_output_tokens"]), + } + + // Only return if at least one field is populated. + if result.MaxContextWindowTokens == 0 && result.MaxPromptTokens == 0 && result.MaxOutputTokens == 0 { + return nil + } + return result +} + +// anyToInt converts a JSON-decoded numeric value to int. +// Go's encoding/json decodes numbers into float64 when the target is any/interface{}. +func anyToInt(v any) int { + switch n := v.(type) { + case float64: + return int(n) + case float32: + return int(n) + case int: + return n + case int64: + return int(n) + default: + return 0 + } +} + +// CopilotModelsResponse represents the response from the Copilot /models endpoint. +type CopilotModelsResponse struct { + Data []CopilotModelEntry `json:"data"` + Object string `json:"object"` +} + +// maxModelsResponseSize is the maximum allowed response size from the /models endpoint (2 MB). +const maxModelsResponseSize = 2 * 1024 * 1024 + +// allowedCopilotAPIHosts is the set of hosts that are considered safe for Copilot API requests. +var allowedCopilotAPIHosts = map[string]bool{ + "api.githubcopilot.com": true, + "api.individual.githubcopilot.com": true, + "api.business.githubcopilot.com": true, + "copilot-proxy.githubusercontent.com": true, +} + +// ListModels fetches the list of available models from the Copilot API. 
+// It requires a valid Copilot API token (not the GitHub access token). +func (c *CopilotAuth) ListModels(ctx context.Context, apiToken *CopilotAPIToken) ([]CopilotModelEntry, error) { + if apiToken == nil || apiToken.Token == "" { + return nil, fmt.Errorf("copilot: api token is required for listing models") + } + + // Build models URL, validating the endpoint host to prevent SSRF. + modelsURL := copilotAPIEndpoint + "/models" + if ep := strings.TrimRight(apiToken.Endpoints.API, "/"); ep != "" { + parsed, err := url.Parse(ep) + if err == nil && parsed.Scheme == "https" && allowedCopilotAPIHosts[parsed.Host] { + modelsURL = ep + "/models" + } else { + log.Warnf("copilot: ignoring untrusted API endpoint %q, using default", ep) + } + } + + req, err := c.MakeAuthenticatedRequest(ctx, http.MethodGet, modelsURL, nil, apiToken) + if err != nil { + return nil, fmt.Errorf("copilot: failed to create models request: %w", err) + } + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("copilot: models request failed: %w", err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("copilot list models: close body error: %v", errClose) + } + }() + + // Limit response body to prevent memory exhaustion. 
+ limitedReader := io.LimitReader(resp.Body, maxModelsResponseSize) + bodyBytes, err := io.ReadAll(limitedReader) + if err != nil { + return nil, fmt.Errorf("copilot: failed to read models response: %w", err) + } + + if !isHTTPSuccess(resp.StatusCode) { + return nil, fmt.Errorf("copilot: list models failed with status %d: %s", resp.StatusCode, string(bodyBytes)) + } + + var modelsResp CopilotModelsResponse + if err = json.Unmarshal(bodyBytes, &modelsResp); err != nil { + return nil, fmt.Errorf("copilot: failed to parse models response: %w", err) + } + + return modelsResp.Data, nil +} + +// ListModelsWithGitHubToken is a convenience method that exchanges a GitHub access token +// for a Copilot API token and then fetches the available models. +func (c *CopilotAuth) ListModelsWithGitHubToken(ctx context.Context, githubAccessToken string) ([]CopilotModelEntry, error) { + apiToken, err := c.GetCopilotAPIToken(ctx, githubAccessToken) + if err != nil { + return nil, fmt.Errorf("copilot: failed to get API token for model listing: %w", err) + } + + return c.ListModels(ctx, apiToken) +} + +// buildChatCompletionURL builds the URL for chat completions API. +func buildChatCompletionURL() string { + return copilotAPIEndpoint + "/chat/completions" +} + +// isHTTPSuccess checks if the status code indicates success (2xx). +func isHTTPSuccess(statusCode int) bool { + return statusCode >= 200 && statusCode < 300 +} diff --git a/internal/auth/copilot/errors.go b/internal/auth/copilot/errors.go new file mode 100644 index 0000000000..a82dd8ecf6 --- /dev/null +++ b/internal/auth/copilot/errors.go @@ -0,0 +1,187 @@ +package copilot + +import ( + "errors" + "fmt" + "net/http" +) + +// OAuthError represents an OAuth-specific error. +type OAuthError struct { + // Code is the OAuth error code. + Code string `json:"error"` + // Description is a human-readable description of the error. 
+ Description string `json:"error_description,omitempty"` + // URI is a URI identifying a human-readable web page with information about the error. + URI string `json:"error_uri,omitempty"` + // StatusCode is the HTTP status code associated with the error. + StatusCode int `json:"-"` +} + +// Error returns a string representation of the OAuth error. +func (e *OAuthError) Error() string { + if e.Description != "" { + return fmt.Sprintf("OAuth error %s: %s", e.Code, e.Description) + } + return fmt.Sprintf("OAuth error: %s", e.Code) +} + +// NewOAuthError creates a new OAuth error with the specified code, description, and status code. +func NewOAuthError(code, description string, statusCode int) *OAuthError { + return &OAuthError{ + Code: code, + Description: description, + StatusCode: statusCode, + } +} + +// AuthenticationError represents authentication-related errors. +type AuthenticationError struct { + // Type is the type of authentication error. + Type string `json:"type"` + // Message is a human-readable message describing the error. + Message string `json:"message"` + // Code is the HTTP status code associated with the error. + Code int `json:"code"` + // Cause is the underlying error that caused this authentication error. + Cause error `json:"-"` +} + +// Error returns a string representation of the authentication error. +func (e *AuthenticationError) Error() string { + if e.Cause != nil { + return fmt.Sprintf("%s: %s (caused by: %v)", e.Type, e.Message, e.Cause) + } + return fmt.Sprintf("%s: %s", e.Type, e.Message) +} + +// Unwrap returns the underlying cause of the error. +func (e *AuthenticationError) Unwrap() error { + return e.Cause +} + +// Common authentication error types for GitHub Copilot device flow. +var ( + // ErrDeviceCodeFailed represents an error when requesting the device code fails. 
+ ErrDeviceCodeFailed = &AuthenticationError{ + Type: "device_code_failed", + Message: "Failed to request device code from GitHub", + Code: http.StatusBadRequest, + } + + // ErrDeviceCodeExpired represents an error when the device code has expired. + ErrDeviceCodeExpired = &AuthenticationError{ + Type: "device_code_expired", + Message: "Device code has expired. Please try again.", + Code: http.StatusGone, + } + + // ErrAuthorizationPending represents a pending authorization state (not an error, used for polling). + ErrAuthorizationPending = &AuthenticationError{ + Type: "authorization_pending", + Message: "Authorization is pending. Waiting for user to authorize.", + Code: http.StatusAccepted, + } + + // ErrSlowDown represents a request to slow down polling. + ErrSlowDown = &AuthenticationError{ + Type: "slow_down", + Message: "Polling too frequently. Slowing down.", + Code: http.StatusTooManyRequests, + } + + // ErrAccessDenied represents an error when the user denies authorization. + ErrAccessDenied = &AuthenticationError{ + Type: "access_denied", + Message: "User denied authorization", + Code: http.StatusForbidden, + } + + // ErrTokenExchangeFailed represents an error when token exchange fails. + ErrTokenExchangeFailed = &AuthenticationError{ + Type: "token_exchange_failed", + Message: "Failed to exchange device code for access token", + Code: http.StatusBadRequest, + } + + // ErrPollingTimeout represents an error when polling times out. + ErrPollingTimeout = &AuthenticationError{ + Type: "polling_timeout", + Message: "Timeout waiting for user authorization", + Code: http.StatusRequestTimeout, + } + + // ErrUserInfoFailed represents an error when fetching user info fails. + ErrUserInfoFailed = &AuthenticationError{ + Type: "user_info_failed", + Message: "Failed to fetch GitHub user information", + Code: http.StatusBadRequest, + } +) + +// NewAuthenticationError creates a new authentication error with a cause based on a base error. 
+func NewAuthenticationError(baseErr *AuthenticationError, cause error) *AuthenticationError { + return &AuthenticationError{ + Type: baseErr.Type, + Message: baseErr.Message, + Code: baseErr.Code, + Cause: cause, + } +} + +// IsAuthenticationError checks if an error is an authentication error. +func IsAuthenticationError(err error) bool { + var authenticationError *AuthenticationError + ok := errors.As(err, &authenticationError) + return ok +} + +// IsOAuthError checks if an error is an OAuth error. +func IsOAuthError(err error) bool { + var oAuthError *OAuthError + ok := errors.As(err, &oAuthError) + return ok +} + +// GetUserFriendlyMessage returns a user-friendly error message based on the error type. +func GetUserFriendlyMessage(err error) string { + var authErr *AuthenticationError + if errors.As(err, &authErr) { + switch authErr.Type { + case "device_code_failed": + return "Failed to start GitHub authentication. Please check your network connection and try again." + case "device_code_expired": + return "The authentication code has expired. Please try again." + case "authorization_pending": + return "Waiting for you to authorize the application on GitHub." + case "slow_down": + return "Please wait a moment before trying again." + case "access_denied": + return "Authentication was cancelled or denied." + case "token_exchange_failed": + return "Failed to complete authentication. Please try again." + case "polling_timeout": + return "Authentication timed out. Please try again." + case "user_info_failed": + return "Failed to get your GitHub account information. Please try again." + default: + return "Authentication failed. Please try again." + } + } + + var oauthErr *OAuthError + if errors.As(err, &oauthErr) { + switch oauthErr.Code { + case "access_denied": + return "Authentication was cancelled or denied." + case "invalid_request": + return "Invalid authentication request. Please try again." + case "server_error": + return "GitHub server error. Please try again later." + default: + return fmt.Sprintf("Authentication failed: %s", oauthErr.Description) + } + } + + return "An unexpected error occurred. Please try again." +} diff --git a/internal/auth/copilot/oauth.go b/internal/auth/copilot/oauth.go new file mode 100644 index 0000000000..634ccd4c8d --- /dev/null +++ b/internal/auth/copilot/oauth.go @@ -0,0 +1,271 @@ +package copilot + +import ( + "context" + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "net/url" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + // copilotClientID is GitHub's Copilot CLI OAuth client ID. + copilotClientID = "Iv1.b507a08c87ecfe98" + // copilotDeviceCodeURL is the endpoint for requesting device codes. + copilotDeviceCodeURL = "https://github.com/login/device/code" + // copilotTokenURL is the endpoint for exchanging device codes for tokens. + copilotTokenURL = "https://github.com/login/oauth/access_token" + // copilotUserInfoURL is the endpoint for fetching GitHub user information. + copilotUserInfoURL = "https://api.github.com/user" + // defaultPollInterval is the default interval for polling token endpoint. + defaultPollInterval = 5 * time.Second + // maxPollDuration is the maximum time to wait for user authorization. + maxPollDuration = 15 * time.Minute +) + +// DeviceFlowClient handles the OAuth2 device flow for GitHub Copilot. +type DeviceFlowClient struct { + httpClient *http.Client + cfg *config.Config +} + +// NewDeviceFlowClient creates a new device flow client. +func NewDeviceFlowClient(cfg *config.Config) *DeviceFlowClient { + client := &http.Client{Timeout: 30 * time.Second} + if cfg != nil { + client = util.SetProxy(&cfg.SDKConfig, client) + } + return &DeviceFlowClient{ + httpClient: client, + cfg: cfg, + } +} + +// RequestDeviceCode initiates the device flow by requesting a device code from GitHub.
+func (c *DeviceFlowClient) RequestDeviceCode(ctx context.Context) (*DeviceCodeResponse, error) { + data := url.Values{} + data.Set("client_id", copilotClientID) + data.Set("scope", "read:user user:email") + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, copilotDeviceCodeURL, strings.NewReader(data.Encode())) + if err != nil { + return nil, NewAuthenticationError(ErrDeviceCodeFailed, err) + } + req.Header.Set("Content-Type", "application/x-www-form-urlencoded") + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, NewAuthenticationError(ErrDeviceCodeFailed, err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("copilot device code: close body error: %v", errClose) + } + }() + + if !isHTTPSuccess(resp.StatusCode) { + bodyBytes, _ := io.ReadAll(resp.Body) + return nil, NewAuthenticationError(ErrDeviceCodeFailed, fmt.Errorf("status %d: %s", resp.StatusCode, string(bodyBytes))) + } + + var deviceCode DeviceCodeResponse + if err = json.NewDecoder(resp.Body).Decode(&deviceCode); err != nil { + return nil, NewAuthenticationError(ErrDeviceCodeFailed, err) + } + + return &deviceCode, nil +} + +// PollForToken polls the token endpoint until the user authorizes or the device code expires. 
+func (c *DeviceFlowClient) PollForToken(ctx context.Context, deviceCode *DeviceCodeResponse) (*CopilotTokenData, error) { + if deviceCode == nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, fmt.Errorf("device code is nil")) + } + + interval := time.Duration(deviceCode.Interval) * time.Second + if interval < defaultPollInterval { + interval = defaultPollInterval + } + + deadline := time.Now().Add(maxPollDuration) + if deviceCode.ExpiresIn > 0 { + codeDeadline := time.Now().Add(time.Duration(deviceCode.ExpiresIn) * time.Second) + if codeDeadline.Before(deadline) { + deadline = codeDeadline + } + } + + ticker := time.NewTicker(interval) + defer ticker.Stop() + + for { + select { + case <-ctx.Done(): + return nil, NewAuthenticationError(ErrPollingTimeout, ctx.Err()) + case <-ticker.C: + if time.Now().After(deadline) { + return nil, ErrPollingTimeout + } + + token, err := c.exchangeDeviceCode(ctx, deviceCode.DeviceCode) + if err != nil { + var authErr *AuthenticationError + if errors.As(err, &authErr) { + switch authErr.Type { + case ErrAuthorizationPending.Type: + // Continue polling + continue + case ErrSlowDown.Type: + // Increase interval and continue + interval += 5 * time.Second + ticker.Reset(interval) + continue + case ErrDeviceCodeExpired.Type: + return nil, err + case ErrAccessDenied.Type: + return nil, err + } + } + return nil, err + } + return token, nil + } + } +} + +// exchangeDeviceCode attempts to exchange the device code for an access token. 
+func (c *DeviceFlowClient) exchangeDeviceCode(ctx context.Context, deviceCode string) (*CopilotTokenData, error) { + data := url.Values{} + data.Set("client_id", copilotClientID) + data.Set("device_code", deviceCode) + data.Set("grant_type", "urn:ietf:params:oauth:grant-type:device_code") + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, copilotTokenURL, strings.NewReader(data.Encode())) + if err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + req.Header.Set("Content-Type", "application/x-www-form-urlencoded") + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("copilot token exchange: close body error: %v", errClose) + } + }() + + bodyBytes, err := io.ReadAll(resp.Body) + if err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + + // GitHub returns 200 for both success and error cases in device flow + // Check for OAuth error response first + var oauthResp struct { + Error string `json:"error"` + ErrorDescription string `json:"error_description"` + AccessToken string `json:"access_token"` + TokenType string `json:"token_type"` + Scope string `json:"scope"` + } + + if err = json.Unmarshal(bodyBytes, &oauthResp); err != nil { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, err) + } + + if oauthResp.Error != "" { + switch oauthResp.Error { + case "authorization_pending": + return nil, ErrAuthorizationPending + case "slow_down": + return nil, ErrSlowDown + case "expired_token": + return nil, ErrDeviceCodeExpired + case "access_denied": + return nil, ErrAccessDenied + default: + return nil, NewOAuthError(oauthResp.Error, oauthResp.ErrorDescription, resp.StatusCode) + } + } + + if oauthResp.AccessToken == "" { + return nil, NewAuthenticationError(ErrTokenExchangeFailed, fmt.Errorf("empty access token")) + } + + return &CopilotTokenData{ + AccessToken: oauthResp.AccessToken, + TokenType: oauthResp.TokenType, + Scope: oauthResp.Scope, + }, nil +} + +// GitHubUserInfo holds GitHub user profile information. +type GitHubUserInfo struct { + // Login is the GitHub username. + Login string + // Email is the primary email address (may be empty if not public). + Email string + // Name is the display name. + Name string +} + +// FetchUserInfo retrieves the GitHub user profile for the authenticated user. +func (c *DeviceFlowClient) FetchUserInfo(ctx context.Context, accessToken string) (GitHubUserInfo, error) { + if accessToken == "" { + return GitHubUserInfo{}, NewAuthenticationError(ErrUserInfoFailed, fmt.Errorf("access token is empty")) + } + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, copilotUserInfoURL, nil) + if err != nil { + return GitHubUserInfo{}, NewAuthenticationError(ErrUserInfoFailed, err) + } + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("Accept", "application/json") + req.Header.Set("User-Agent", "CLIProxyAPI") + + resp, err := c.httpClient.Do(req) + if err != nil { + return GitHubUserInfo{}, NewAuthenticationError(ErrUserInfoFailed, err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("copilot user info: close body error: %v", errClose) + } + }() + + if !isHTTPSuccess(resp.StatusCode) { + bodyBytes, _ := io.ReadAll(resp.Body) + return GitHubUserInfo{}, NewAuthenticationError(ErrUserInfoFailed, fmt.Errorf("status %d: %s", resp.StatusCode, string(bodyBytes))) + } + + var raw struct { + Login string `json:"login"` + Email string `json:"email"` + Name string `json:"name"` + } + if err = json.NewDecoder(resp.Body).Decode(&raw); err != nil { + return GitHubUserInfo{}, NewAuthenticationError(ErrUserInfoFailed, err) + } + + if raw.Login == "" { + return GitHubUserInfo{}, NewAuthenticationError(ErrUserInfoFailed, fmt.Errorf("empty username")) + } + + return GitHubUserInfo{ + Login: raw.Login, + Email: raw.Email, + Name: raw.Name, + }, nil +} diff --git a/internal/auth/copilot/oauth_test.go b/internal/auth/copilot/oauth_test.go new file mode 100644 index 0000000000..3311b4f850 --- /dev/null +++ b/internal/auth/copilot/oauth_test.go @@ -0,0 +1,213 @@ +package copilot + +import ( + "context" + "encoding/json" + "net/http" + "net/http/httptest" + "strings" + "testing" +) + +// roundTripFunc lets us inject a custom transport for testing. +type roundTripFunc func(*http.Request) (*http.Response, error) + +func (f roundTripFunc) RoundTrip(r *http.Request) (*http.Response, error) { return f(r) } + +// newTestClient returns an *http.Client whose requests are redirected to the given test server, +// regardless of the original URL host. +func newTestClient(srv *httptest.Server) *http.Client { + return &http.Client{ + Transport: roundTripFunc(func(req *http.Request) (*http.Response, error) { + req2 := req.Clone(req.Context()) + req2.URL.Scheme = "http" + req2.URL.Host = strings.TrimPrefix(srv.URL, "http://") + return srv.Client().Transport.RoundTrip(req2) + }), + } +} + +// TestFetchUserInfo_FullProfile verifies that FetchUserInfo returns login, email, and name.
+func TestFetchUserInfo_FullProfile(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if !strings.HasPrefix(r.Header.Get("Authorization"), "Bearer ") { + w.WriteHeader(http.StatusUnauthorized) + return + } + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]string{ + "login": "octocat", + "email": "octocat@github.com", + "name": "The Octocat", + }) + })) + defer srv.Close() + + client := &DeviceFlowClient{httpClient: newTestClient(srv)} + info, err := client.FetchUserInfo(context.Background(), "test-token") + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if info.Login != "octocat" { + t.Errorf("Login: got %q, want %q", info.Login, "octocat") + } + if info.Email != "octocat@github.com" { + t.Errorf("Email: got %q, want %q", info.Email, "octocat@github.com") + } + if info.Name != "The Octocat" { + t.Errorf("Name: got %q, want %q", info.Name, "The Octocat") + } +} + +// TestFetchUserInfo_EmptyEmail verifies graceful handling when email is absent (private account). +func TestFetchUserInfo_EmptyEmail(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + // GitHub returns null for private emails. 
+ _, _ = w.Write([]byte(`{"login":"privateuser","email":null,"name":"Private User"}`)) + })) + defer srv.Close() + + client := &DeviceFlowClient{httpClient: newTestClient(srv)} + info, err := client.FetchUserInfo(context.Background(), "test-token") + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + if info.Login != "privateuser" { + t.Errorf("Login: got %q, want %q", info.Login, "privateuser") + } + if info.Email != "" { + t.Errorf("Email: got %q, want empty string", info.Email) + } + if info.Name != "Private User" { + t.Errorf("Name: got %q, want %q", info.Name, "Private User") + } +} + +// TestFetchUserInfo_EmptyToken verifies error is returned for empty access token. +func TestFetchUserInfo_EmptyToken(t *testing.T) { + client := &DeviceFlowClient{httpClient: http.DefaultClient} + _, err := client.FetchUserInfo(context.Background(), "") + if err == nil { + t.Fatal("expected error for empty token, got nil") + } +} + +// TestFetchUserInfo_EmptyLogin verifies error is returned when API returns no login. +func TestFetchUserInfo_EmptyLogin(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + _, _ = w.Write([]byte(`{"email":"someone@example.com","name":"No Login"}`)) + })) + defer srv.Close() + + client := &DeviceFlowClient{httpClient: newTestClient(srv)} + _, err := client.FetchUserInfo(context.Background(), "test-token") + if err == nil { + t.Fatal("expected error for empty login, got nil") + } +} + +// TestFetchUserInfo_HTTPError verifies error is returned on non-2xx response. 
+func TestFetchUserInfo_HTTPError(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusUnauthorized) + _, _ = w.Write([]byte(`{"message":"Bad credentials"}`)) + })) + defer srv.Close() + + client := &DeviceFlowClient{httpClient: newTestClient(srv)} + _, err := client.FetchUserInfo(context.Background(), "bad-token") + if err == nil { + t.Fatal("expected error for 401 response, got nil") + } +} + +// TestCopilotTokenStorage_EmailNameFields verifies Email and Name serialise correctly. +func TestCopilotTokenStorage_EmailNameFields(t *testing.T) { + ts := &CopilotTokenStorage{ + AccessToken: "ghu_abc", + TokenType: "bearer", + Scope: "read:user user:email", + Username: "octocat", + Email: "octocat@github.com", + Name: "The Octocat", + Type: "github-copilot", + } + + data, err := json.Marshal(ts) + if err != nil { + t.Fatalf("marshal error: %v", err) + } + + var out map[string]any + if err = json.Unmarshal(data, &out); err != nil { + t.Fatalf("unmarshal error: %v", err) + } + + for _, key := range []string{"access_token", "username", "email", "name", "type"} { + if _, ok := out[key]; !ok { + t.Errorf("expected key %q in JSON output, not found", key) + } + } + if out["email"] != "octocat@github.com" { + t.Errorf("email: got %v, want %q", out["email"], "octocat@github.com") + } + if out["name"] != "The Octocat" { + t.Errorf("name: got %v, want %q", out["name"], "The Octocat") + } +} + +// TestCopilotTokenStorage_OmitEmptyEmailName verifies email/name are omitted when empty (omitempty). 
+func TestCopilotTokenStorage_OmitEmptyEmailName(t *testing.T) { + ts := &CopilotTokenStorage{ + AccessToken: "ghu_abc", + Username: "octocat", + Type: "github-copilot", + } + + data, err := json.Marshal(ts) + if err != nil { + t.Fatalf("marshal error: %v", err) + } + + var out map[string]any + if err = json.Unmarshal(data, &out); err != nil { + t.Fatalf("unmarshal error: %v", err) + } + + if _, ok := out["email"]; ok { + t.Error("email key should be omitted when empty (omitempty), but was present") + } + if _, ok := out["name"]; ok { + t.Error("name key should be omitted when empty (omitempty), but was present") + } +} + +// TestCopilotAuthBundle_EmailNameFields verifies bundle carries email and name through the pipeline. +func TestCopilotAuthBundle_EmailNameFields(t *testing.T) { + bundle := &CopilotAuthBundle{ + TokenData: &CopilotTokenData{AccessToken: "ghu_abc"}, + Username: "octocat", + Email: "octocat@github.com", + Name: "The Octocat", + } + if bundle.Email != "octocat@github.com" { + t.Errorf("bundle.Email: got %q, want %q", bundle.Email, "octocat@github.com") + } + if bundle.Name != "The Octocat" { + t.Errorf("bundle.Name: got %q, want %q", bundle.Name, "The Octocat") + } +} + +// TestGitHubUserInfo_Struct verifies the exported GitHubUserInfo struct fields are accessible. +func TestGitHubUserInfo_Struct(t *testing.T) { + info := GitHubUserInfo{ + Login: "octocat", + Email: "octocat@github.com", + Name: "The Octocat", + } + if info.Login == "" || info.Email == "" || info.Name == "" { + t.Error("GitHubUserInfo fields should not be empty") + } +} diff --git a/internal/auth/copilot/token.go b/internal/auth/copilot/token.go new file mode 100644 index 0000000000..c3cd6a6dc7 --- /dev/null +++ b/internal/auth/copilot/token.go @@ -0,0 +1,101 @@ +// Package copilot provides authentication and token management functionality +// for GitHub Copilot AI services. It handles OAuth2 device flow token storage, +// serialization, and retrieval for maintaining authenticated sessions with the Copilot API. +package copilot + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" +) + +// CopilotTokenStorage stores OAuth2 token information for GitHub Copilot API authentication. +// It maintains compatibility with the existing auth system while adding Copilot-specific fields +// for managing access tokens and user account information. +type CopilotTokenStorage struct { + // AccessToken is the OAuth2 access token used for authenticating API requests. + AccessToken string `json:"access_token"` + // TokenType is the type of token, typically "bearer". + TokenType string `json:"token_type"` + // Scope is the OAuth2 scope granted to the token. + Scope string `json:"scope"` + // ExpiresAt is the timestamp when the access token expires (if provided). + ExpiresAt string `json:"expires_at,omitempty"` + // Username is the GitHub username associated with this token. + Username string `json:"username"` + // Email is the GitHub email address associated with this token. + Email string `json:"email,omitempty"` + // Name is the GitHub display name associated with this token. + Name string `json:"name,omitempty"` + // Type indicates the authentication provider type, always "github-copilot" for this storage. + Type string `json:"type"` +} + +// CopilotTokenData holds the raw OAuth token response from GitHub. +type CopilotTokenData struct { + // AccessToken is the OAuth2 access token. + AccessToken string `json:"access_token"` + // TokenType is the type of token, typically "bearer". + TokenType string `json:"token_type"` + // Scope is the OAuth2 scope granted to the token. + Scope string `json:"scope"` +} + +// CopilotAuthBundle bundles authentication data for storage. +type CopilotAuthBundle struct { + // TokenData contains the OAuth token information.
+ TokenData *CopilotTokenData + // Username is the GitHub username. + Username string + // Email is the GitHub email address. + Email string + // Name is the GitHub display name. + Name string +} + +// DeviceCodeResponse represents GitHub's device code response. +type DeviceCodeResponse struct { + // DeviceCode is the device verification code. + DeviceCode string `json:"device_code"` + // UserCode is the code the user must enter at the verification URI. + UserCode string `json:"user_code"` + // VerificationURI is the URL where the user should enter the code. + VerificationURI string `json:"verification_uri"` + // ExpiresIn is the number of seconds until the device code expires. + ExpiresIn int `json:"expires_in"` + // Interval is the minimum number of seconds to wait between polling requests. + Interval int `json:"interval"` +} + +// SaveTokenToFile serializes the Copilot token storage to a JSON file. +// This method creates the necessary directory structure and writes the token +// data in JSON format to the specified file path for persistent storage. 
+//
+// Parameters:
+// - authFilePath: The full path where the token file should be saved
+//
+// Returns:
+// - error: An error if the operation fails, nil otherwise
+func (ts *CopilotTokenStorage) SaveTokenToFile(authFilePath string) error {
+	misc.LogSavingCredentials(authFilePath)
+	ts.Type = "github-copilot"
+	if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil {
+		return fmt.Errorf("failed to create directory: %w", err)
+	}
+
+	f, err := os.Create(authFilePath)
+	if err != nil {
+		return fmt.Errorf("failed to create token file: %w", err)
+	}
+	defer func() {
+		_ = f.Close()
+	}()
+
+	if err = json.NewEncoder(f).Encode(ts); err != nil {
+		return fmt.Errorf("failed to write token to file: %w", err)
+	}
+	return nil
+}
diff --git a/internal/auth/cursor/filename.go b/internal/auth/cursor/filename.go
new file mode 100644
index 0000000000..e8fb8415ec
--- /dev/null
+++ b/internal/auth/cursor/filename.go
@@ -0,0 +1,33 @@
+package cursor
+
+import (
+	"fmt"
+	"strings"
+)
+
+// CredentialFileName returns the filename used to persist Cursor credentials.
+// Priority: explicit label > auto-generated from JWT sub hash.
+// If both label and subHash are empty, falls back to "cursor.json".
+func CredentialFileName(label, subHash string) string {
+	label = strings.TrimSpace(label)
+	subHash = strings.TrimSpace(subHash)
+	if label != "" {
+		return fmt.Sprintf("cursor.%s.json", label)
+	}
+	if subHash != "" {
+		return fmt.Sprintf("cursor.%s.json", subHash)
+	}
+	return "cursor.json"
+}
+
+// DisplayLabel returns a human-readable label for the Cursor account.
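+//
+// For example (illustrative label and hash values):
+//
+//	DisplayLabel("work", "a3f8b2c1") // "Cursor work"
+//	DisplayLabel("", "a3f8b2c1")     // "Cursor a3f8b2c1"
+//	DisplayLabel("", "")             // "Cursor User"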
+func DisplayLabel(label, subHash string) string { + label = strings.TrimSpace(label) + if label != "" { + return "Cursor " + label + } + if subHash != "" { + return "Cursor " + subHash + } + return "Cursor User" +} diff --git a/internal/auth/cursor/oauth.go b/internal/auth/cursor/oauth.go new file mode 100644 index 0000000000..009dda012c --- /dev/null +++ b/internal/auth/cursor/oauth.go @@ -0,0 +1,249 @@ +// Package cursor implements Cursor OAuth PKCE authentication and token refresh. +package cursor + +import ( + "context" + "crypto/rand" + "crypto/sha256" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "math" + "net/http" + "strings" + "time" +) + +const ( + CursorLoginURL = "https://cursor.com/loginDeepControl" + CursorPollURL = "https://api2.cursor.sh/auth/poll" + CursorRefreshURL = "https://api2.cursor.sh/auth/exchange_user_api_key" + + pollMaxAttempts = 150 + pollBaseDelay = 1 * time.Second + pollMaxDelay = 10 * time.Second + pollBackoffMultiply = 1.2 + maxConsecutiveErrors = 10 +) + +// AuthParams holds the PKCE parameters for Cursor login. +type AuthParams struct { + Verifier string + Challenge string + UUID string + LoginURL string +} + +// TokenPair holds the access and refresh tokens from Cursor. +type TokenPair struct { + AccessToken string `json:"accessToken"` + RefreshToken string `json:"refreshToken"` +} + +// GeneratePKCE creates a PKCE verifier and challenge pair. +func GeneratePKCE() (verifier, challenge string, err error) { + verifierBytes := make([]byte, 96) + if _, err = rand.Read(verifierBytes); err != nil { + return "", "", fmt.Errorf("cursor: failed to generate PKCE verifier: %w", err) + } + verifier = base64.RawURLEncoding.EncodeToString(verifierBytes) + + h := sha256.Sum256([]byte(verifier)) + challenge = base64.RawURLEncoding.EncodeToString(h[:]) + return verifier, challenge, nil +} + +// GenerateAuthParams creates the full set of auth params for Cursor login. 
+func GenerateAuthParams() (*AuthParams, error) { + verifier, challenge, err := GeneratePKCE() + if err != nil { + return nil, err + } + + uuidBytes := make([]byte, 16) + if _, err = rand.Read(uuidBytes); err != nil { + return nil, fmt.Errorf("cursor: failed to generate UUID: %w", err) + } + uuid := fmt.Sprintf("%x-%x-%x-%x-%x", + uuidBytes[0:4], uuidBytes[4:6], uuidBytes[6:8], uuidBytes[8:10], uuidBytes[10:16]) + + loginURL := fmt.Sprintf("%s?challenge=%s&uuid=%s&mode=login&redirectTarget=cli", + CursorLoginURL, challenge, uuid) + + return &AuthParams{ + Verifier: verifier, + Challenge: challenge, + UUID: uuid, + LoginURL: loginURL, + }, nil +} + +// PollForAuth polls the Cursor auth endpoint until the user completes login. +func PollForAuth(ctx context.Context, uuid, verifier string) (*TokenPair, error) { + delay := pollBaseDelay + consecutiveErrors := 0 + + client := &http.Client{Timeout: 10 * time.Second} + + for attempt := 0; attempt < pollMaxAttempts; attempt++ { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-time.After(delay): + } + + url := fmt.Sprintf("%s?uuid=%s&verifier=%s", CursorPollURL, uuid, verifier) + req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) + if err != nil { + return nil, fmt.Errorf("cursor: failed to create poll request: %w", err) + } + + resp, err := client.Do(req) + if err != nil { + consecutiveErrors++ + if consecutiveErrors >= maxConsecutiveErrors { + return nil, fmt.Errorf("cursor: too many consecutive poll errors (last: %v)", err) + } + delay = minDuration(time.Duration(float64(delay)*pollBackoffMultiply), pollMaxDelay) + continue + } + + body, _ := io.ReadAll(resp.Body) + resp.Body.Close() + + if resp.StatusCode == http.StatusNotFound { + // Still waiting for user to authorize + consecutiveErrors = 0 + delay = minDuration(time.Duration(float64(delay)*pollBackoffMultiply), pollMaxDelay) + continue + } + + if resp.StatusCode >= 200 && resp.StatusCode < 300 { + var tokens TokenPair + if err := 
json.Unmarshal(body, &tokens); err != nil { + return nil, fmt.Errorf("cursor: failed to parse auth response: %w", err) + } + return &tokens, nil + } + + return nil, fmt.Errorf("cursor: poll failed with status %d: %s", resp.StatusCode, string(body)) + } + + return nil, fmt.Errorf("cursor: authentication polling timeout (waited ~%.0f seconds)", + float64(pollMaxAttempts)*pollMaxDelay.Seconds()/2) +} + +// RefreshToken refreshes a Cursor access token using the refresh token. +func RefreshToken(ctx context.Context, refreshToken string) (*TokenPair, error) { + client := &http.Client{Timeout: 10 * time.Second} + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, CursorRefreshURL, + strings.NewReader("{}")) + if err != nil { + return nil, fmt.Errorf("cursor: failed to create refresh request: %w", err) + } + req.Header.Set("Authorization", "Bearer "+refreshToken) + req.Header.Set("Content-Type", "application/json") + + resp, err := client.Do(req) + if err != nil { + return nil, fmt.Errorf("cursor: token refresh request failed: %w", err) + } + defer resp.Body.Close() + + body, _ := io.ReadAll(resp.Body) + + if resp.StatusCode < 200 || resp.StatusCode >= 300 { + return nil, fmt.Errorf("cursor: token refresh failed (status %d): %s", resp.StatusCode, string(body)) + } + + var tokens TokenPair + if err := json.Unmarshal(body, &tokens); err != nil { + return nil, fmt.Errorf("cursor: failed to parse refresh response: %w", err) + } + + // Keep original refresh token if not returned + if tokens.RefreshToken == "" { + tokens.RefreshToken = refreshToken + } + + return &tokens, nil +} + +// ParseJWTSub extracts the "sub" claim from a Cursor JWT access token. +// Cursor JWTs contain "sub" like "auth0|user_XXXX" which uniquely identifies +// the account. Returns empty string if parsing fails. 
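+//
+// For example, a token whose decoded payload is {"sub":"auth0|user_123"}
+// (illustrative value) yields "auth0|user_123".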
+func ParseJWTSub(token string) string { + decoded := decodeJWTPayload(token) + if decoded == nil { + return "" + } + var claims struct { + Sub string `json:"sub"` + } + if err := json.Unmarshal(decoded, &claims); err != nil { + return "" + } + return claims.Sub +} + +// SubToShortHash converts a JWT sub claim to a short hex hash for use in filenames. +// e.g. "auth0|user_2x..." → "a3f8b2c1" +func SubToShortHash(sub string) string { + if sub == "" { + return "" + } + h := sha256.Sum256([]byte(sub)) + return fmt.Sprintf("%x", h[:4]) // 8 hex chars +} + +// decodeJWTPayload decodes the payload (middle) part of a JWT. +func decodeJWTPayload(token string) []byte { + parts := strings.Split(token, ".") + if len(parts) != 3 { + return nil + } + payload := parts[1] + switch len(payload) % 4 { + case 2: + payload += "==" + case 3: + payload += "=" + } + payload = strings.ReplaceAll(payload, "-", "+") + payload = strings.ReplaceAll(payload, "_", "/") + decoded, err := base64.StdEncoding.DecodeString(payload) + if err != nil { + return nil + } + return decoded +} + +// GetTokenExpiry extracts the JWT expiry from an access token with a 5-minute safety margin. +// Falls back to 1 hour from now if the token can't be parsed. 
+func GetTokenExpiry(token string) time.Time { + decoded := decodeJWTPayload(token) + if decoded == nil { + return time.Now().Add(1 * time.Hour) + } + + var claims struct { + Exp float64 `json:"exp"` + } + if err := json.Unmarshal(decoded, &claims); err != nil || claims.Exp == 0 { + return time.Now().Add(1 * time.Hour) + } + + sec, frac := math.Modf(claims.Exp) + expiry := time.Unix(int64(sec), int64(frac*1e9)) + // Subtract 5-minute safety margin + return expiry.Add(-5 * time.Minute) +} + +func minDuration(a, b time.Duration) time.Duration { + if a < b { + return a + } + return b +} diff --git a/internal/auth/cursor/proto/connect.go b/internal/auth/cursor/proto/connect.go new file mode 100644 index 0000000000..ffe5905e3b --- /dev/null +++ b/internal/auth/cursor/proto/connect.go @@ -0,0 +1,84 @@ +package proto + +import ( + "encoding/binary" + "encoding/json" + "fmt" +) + +const ( + // ConnectEndStreamFlag marks the end-of-stream frame (trailers). + ConnectEndStreamFlag byte = 0x02 + // ConnectCompressionFlag indicates the payload is compressed (not supported). + ConnectCompressionFlag byte = 0x01 + // ConnectFrameHeaderSize is the fixed 5-byte frame header. + ConnectFrameHeaderSize = 5 +) + +// FrameConnectMessage wraps a protobuf payload in a Connect frame. +// Frame format: [1 byte flags][4 bytes payload length (big-endian)][payload] +func FrameConnectMessage(data []byte, flags byte) []byte { + frame := make([]byte, ConnectFrameHeaderSize+len(data)) + frame[0] = flags + binary.BigEndian.PutUint32(frame[1:5], uint32(len(data))) + copy(frame[5:], data) + return frame +} + +// ParseConnectFrame extracts one frame from a buffer. +// Returns (flags, payload, bytesConsumed, ok). +// ok is false when the buffer is too short for a complete frame. 
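+//
+// A typical consumer loop looks like this sketch, where handle is a
+// placeholder for the caller's payload processing:
+//
+//	for {
+//		flags, payload, n, ok := ParseConnectFrame(buf)
+//		if !ok {
+//			break // incomplete frame: wait for more bytes
+//		}
+//		buf = buf[n:]
+//		if flags&ConnectEndStreamFlag != 0 {
+//			return ParseConnectEndStream(payload)
+//		}
+//		handle(payload)
+//	}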
+func ParseConnectFrame(buf []byte) (flags byte, payload []byte, consumed int, ok bool) { + if len(buf) < ConnectFrameHeaderSize { + return 0, nil, 0, false + } + flags = buf[0] + length := binary.BigEndian.Uint32(buf[1:5]) + total := ConnectFrameHeaderSize + int(length) + if len(buf) < total { + return 0, nil, 0, false + } + return flags, buf[5:total], total, true +} + +// ConnectError is a structured error from the Connect protocol end-of-stream trailer. +// The Code field contains the server-defined error code (e.g. gRPC standard codes +// like "resource_exhausted", "unauthenticated", "permission_denied", "unavailable"). +type ConnectError struct { + Code string // server-defined error code + Message string // human-readable error description +} + +func (e *ConnectError) Error() string { + return fmt.Sprintf("Connect error %s: %s", e.Code, e.Message) +} + +// ParseConnectEndStream parses a Connect end-of-stream frame payload (JSON). +// Returns nil if there is no error in the trailer. +// On error, returns a *ConnectError with the server's error code and message. 
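+//
+// An end-of-stream trailer carrying an error typically looks like
+// (illustrative code and message):
+//
+//	{"error":{"code":"unauthenticated","message":"token expired"}}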
+func ParseConnectEndStream(data []byte) error { + if len(data) == 0 { + return nil + } + var trailer struct { + Error *struct { + Code string `json:"code"` + Message string `json:"message"` + } `json:"error"` + } + if err := json.Unmarshal(data, &trailer); err != nil { + return fmt.Errorf("failed to parse Connect end stream: %w", err) + } + if trailer.Error != nil { + code := trailer.Error.Code + if code == "" { + code = "unknown" + } + msg := trailer.Error.Message + if msg == "" { + msg = "Unknown error" + } + return &ConnectError{Code: code, Message: msg} + } + return nil +} diff --git a/internal/auth/cursor/proto/decode.go b/internal/auth/cursor/proto/decode.go new file mode 100644 index 0000000000..f54fc73588 --- /dev/null +++ b/internal/auth/cursor/proto/decode.go @@ -0,0 +1,563 @@ +package proto + +import ( + "encoding/hex" + "fmt" + + log "github.com/sirupsen/logrus" + "google.golang.org/protobuf/encoding/protowire" +) + +// ServerMessageType identifies the kind of decoded server message. +type ServerMessageType int + +const ( + ServerMsgUnknown ServerMessageType = iota + ServerMsgTextDelta // Text content delta + ServerMsgThinkingDelta // Thinking/reasoning delta + ServerMsgThinkingCompleted // Thinking completed + ServerMsgKvGetBlob // Server wants a blob + ServerMsgKvSetBlob // Server wants to store a blob + ServerMsgExecRequestCtx // Server requests context (tools, etc.) 
+ ServerMsgExecMcpArgs // Server wants MCP tool execution + ServerMsgExecShellArgs // Rejected: shell command + ServerMsgExecReadArgs // Rejected: file read + ServerMsgExecWriteArgs // Rejected: file write + ServerMsgExecDeleteArgs // Rejected: file delete + ServerMsgExecLsArgs // Rejected: directory listing + ServerMsgExecGrepArgs // Rejected: grep search + ServerMsgExecFetchArgs // Rejected: HTTP fetch + ServerMsgExecDiagnostics // Respond with empty diagnostics + ServerMsgExecShellStream // Rejected: shell stream + ServerMsgExecBgShellSpawn // Rejected: background shell + ServerMsgExecWriteShellStdin // Rejected: write shell stdin + ServerMsgExecOther // Other exec types (respond with empty) + ServerMsgTurnEnded // Turn has ended (no more output) + ServerMsgHeartbeat // Server heartbeat + ServerMsgTokenDelta // Token usage delta + ServerMsgCheckpoint // Conversation checkpoint update +) + +// DecodedServerMessage holds parsed data from an AgentServerMessage. +type DecodedServerMessage struct { + Type ServerMessageType + + // For text/thinking deltas + Text string + + // For KV messages + KvId uint32 + BlobId []byte // hex-encoded blob ID + BlobData []byte // for setBlobArgs + + // For exec messages + ExecMsgId uint32 + ExecId string + + // For MCP args + McpToolName string + McpToolCallId string + McpArgs map[string][]byte // arg name -> protobuf-encoded value + + // For rejection context + Path string + Command string + WorkingDirectory string + Url string + + // For other exec - the raw field number for building a response + ExecFieldNumber int + + // For TokenDeltaUpdate + TokenDelta int64 + + // For conversation checkpoint update (raw bytes, not decoded) + CheckpointData []byte +} + +// DecodeAgentServerMessage parses an AgentServerMessage and returns +// a structured representation of the first meaningful message found. 
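+//
+// Callers typically switch on the returned Type; a minimal sketch, where
+// emitText is a placeholder:
+//
+//	msg, err := DecodeAgentServerMessage(frame)
+//	if err == nil && msg.Type == ServerMsgTextDelta {
+//		emitText(msg.Text)
+//	}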
+func DecodeAgentServerMessage(data []byte) (*DecodedServerMessage, error) { + msg := &DecodedServerMessage{Type: ServerMsgUnknown} + + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return msg, fmt.Errorf("invalid tag") + } + data = data[n:] + + switch typ { + case protowire.BytesType: + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return msg, fmt.Errorf("invalid bytes field %d", num) + } + data = data[n:] + + // Debug: log top-level ASM fields + log.Debugf("DecodeAgentServerMessage: found ASM field %d, len=%d", num, len(val)) + + switch num { + case ASM_InteractionUpdate: + log.Debugf("DecodeAgentServerMessage: calling decodeInteractionUpdate") + decodeInteractionUpdate(val, msg) + case ASM_ExecServerMessage: + log.Debugf("DecodeAgentServerMessage: calling decodeExecServerMessage") + decodeExecServerMessage(val, msg) + case ASM_KvServerMessage: + decodeKvServerMessage(val, msg) + case ASM_ConversationCheckpoint: + msg.Type = ServerMsgCheckpoint + msg.CheckpointData = append([]byte(nil), val...) 
// copy raw bytes + log.Debugf("DecodeAgentServerMessage: captured checkpoint %d bytes", len(val)) + } + + case protowire.VarintType: + _, n := protowire.ConsumeVarint(data) + if n < 0 { + return msg, fmt.Errorf("invalid varint field %d", num) + } + data = data[n:] + + default: + // Skip unknown wire types + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return msg, fmt.Errorf("invalid field %d", num) + } + data = data[n:] + } + } + + return msg, nil +} + +func decodeInteractionUpdate(data []byte, msg *DecodedServerMessage) { + log.Debugf("decodeInteractionUpdate: input len=%d, hex=%x", len(data), data) + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + log.Debugf("decodeInteractionUpdate: invalid tag, remaining=%x", data) + return + } + data = data[n:] + log.Debugf("decodeInteractionUpdate: field=%d wire=%d remaining=%d bytes", num, typ, len(data)) + + if typ == protowire.BytesType { + val, n := protowire.ConsumeBytes(data) + if n < 0 { + log.Debugf("decodeInteractionUpdate: invalid bytes field %d", num) + return + } + data = data[n:] + log.Debugf("decodeInteractionUpdate: field %d content len=%d, first 20 bytes: %x", num, len(val), val[:min(20, len(val))]) + + switch num { + case IU_TextDelta: + msg.Type = ServerMsgTextDelta + msg.Text = decodeStringField(val, TDU_Text) + log.Debugf("decodeInteractionUpdate: TextDelta text=%q", msg.Text) + case IU_ThinkingDelta: + msg.Type = ServerMsgThinkingDelta + msg.Text = decodeStringField(val, TKD_Text) + log.Debugf("decodeInteractionUpdate: ThinkingDelta text=%q", msg.Text) + case IU_ThinkingCompleted: + msg.Type = ServerMsgThinkingCompleted + log.Debugf("decodeInteractionUpdate: ThinkingCompleted") + case 2: + // tool_call_started - ignore but log + log.Debugf("decodeInteractionUpdate: ToolCallStarted (ignored)") + case 3: + // tool_call_completed - ignore but log + log.Debugf("decodeInteractionUpdate: ToolCallCompleted (ignored)") + case 8: + // token_delta - extract token 
count + msg.Type = ServerMsgTokenDelta + msg.TokenDelta = decodeVarintField(val, 1) + log.Debugf("decodeInteractionUpdate: TokenDeltaUpdate tokens=%d", msg.TokenDelta) + case 13: + // heartbeat from server + msg.Type = ServerMsgHeartbeat + case 14: + // turn_ended - critical: model finished generating + msg.Type = ServerMsgTurnEnded + log.Debugf("decodeInteractionUpdate: TurnEndedUpdate - stream should end") + case 16: + // step_started - ignore + log.Debugf("decodeInteractionUpdate: StepStartedUpdate (ignored)") + case 17: + // step_completed - ignore + log.Debugf("decodeInteractionUpdate: StepCompletedUpdate (ignored)") + default: + log.Debugf("decodeInteractionUpdate: unknown field %d", num) + } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return + } + data = data[n:] + } + } +} + +func decodeKvServerMessage(data []byte, msg *DecodedServerMessage) { + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return + } + data = data[n:] + + switch typ { + case protowire.VarintType: + val, n := protowire.ConsumeVarint(data) + if n < 0 { + return + } + data = data[n:] + if num == KSM_Id { + msg.KvId = uint32(val) + } + + case protowire.BytesType: + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return + } + data = data[n:] + + switch num { + case KSM_GetBlobArgs: + msg.Type = ServerMsgKvGetBlob + msg.BlobId = decodeBytesField(val, GBA_BlobId) + case KSM_SetBlobArgs: + msg.Type = ServerMsgKvSetBlob + decodeSetBlobArgs(val, msg) + } + + default: + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return + } + data = data[n:] + } + } +} + +func decodeSetBlobArgs(data []byte, msg *DecodedServerMessage) { + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return + } + data = data[n:] + + if typ == protowire.BytesType { + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return + } + data = data[n:] + switch num { + case SBA_BlobId: + msg.BlobId = val + case 
SBA_BlobData: + msg.BlobData = val + } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return + } + data = data[n:] + } + } +} + +func decodeExecServerMessage(data []byte, msg *DecodedServerMessage) { + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return + } + data = data[n:] + + switch typ { + case protowire.VarintType: + val, n := protowire.ConsumeVarint(data) + if n < 0 { + return + } + data = data[n:] + if num == ESM_Id { + msg.ExecMsgId = uint32(val) + log.Debugf("decodeExecServerMessage: ESM_Id = %d", val) + } + + case protowire.BytesType: + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return + } + data = data[n:] + + // Debug: log all fields found in ExecServerMessage + log.Debugf("decodeExecServerMessage: found field %d, len=%d, first 20 bytes: %x", num, len(val), val[:min(20, len(val))]) + + switch num { + case ESM_ExecId: + msg.ExecId = string(val) + log.Debugf("decodeExecServerMessage: ESM_ExecId = %q", msg.ExecId) + case ESM_RequestContextArgs: + msg.Type = ServerMsgExecRequestCtx + case ESM_McpArgs: + msg.Type = ServerMsgExecMcpArgs + decodeMcpArgs(val, msg) + case ESM_ShellArgs: + msg.Type = ServerMsgExecShellArgs + decodeShellArgs(val, msg) + case ESM_ShellStreamArgs: + msg.Type = ServerMsgExecShellStream + decodeShellArgs(val, msg) + case ESM_ReadArgs: + msg.Type = ServerMsgExecReadArgs + msg.Path = decodeStringField(val, RA_Path) + case ESM_WriteArgs: + msg.Type = ServerMsgExecWriteArgs + msg.Path = decodeStringField(val, WA_Path) + case ESM_DeleteArgs: + msg.Type = ServerMsgExecDeleteArgs + msg.Path = decodeStringField(val, DA_Path) + case ESM_LsArgs: + msg.Type = ServerMsgExecLsArgs + msg.Path = decodeStringField(val, LA_Path) + case ESM_GrepArgs: + msg.Type = ServerMsgExecGrepArgs + case ESM_FetchArgs: + msg.Type = ServerMsgExecFetchArgs + msg.Url = decodeStringField(val, FA_Url) + case ESM_DiagnosticsArgs: + msg.Type = ServerMsgExecDiagnostics + case 
ESM_BackgroundShellSpawn: + msg.Type = ServerMsgExecBgShellSpawn + decodeShellArgs(val, msg) // same structure + case ESM_WriteShellStdinArgs: + msg.Type = ServerMsgExecWriteShellStdin + default: + // Unknown exec types - only set if we haven't identified the type yet + // (other fields like span_context (19) come after the exec type field) + if msg.Type == ServerMsgUnknown { + msg.Type = ServerMsgExecOther + msg.ExecFieldNumber = int(num) + } + } + + default: + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return + } + data = data[n:] + } + } +} + +func decodeMcpArgs(data []byte, msg *DecodedServerMessage) { + msg.McpArgs = make(map[string][]byte) + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return + } + data = data[n:] + + if typ == protowire.BytesType { + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return + } + data = data[n:] + + switch num { + case MCA_Name: + msg.McpToolName = string(val) + case MCA_Args: + // Map entries are encoded as submessages with key=1, value=2 + decodeMapEntry(val, msg.McpArgs) + case MCA_ToolCallId: + msg.McpToolCallId = string(val) + case MCA_ToolName: + // ToolName takes precedence if present + if msg.McpToolName == "" || string(val) != "" { + msg.McpToolName = string(val) + } + } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return + } + data = data[n:] + } + } +} + +func decodeMapEntry(data []byte, m map[string][]byte) { + var key string + var value []byte + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return + } + data = data[n:] + + if typ == protowire.BytesType { + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return + } + data = data[n:] + if num == 1 { + key = string(val) + } else if num == 2 { + value = append([]byte(nil), val...) 
+ } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return + } + data = data[n:] + } + } + if key != "" { + m[key] = value + } +} + +func decodeShellArgs(data []byte, msg *DecodedServerMessage) { + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return + } + data = data[n:] + + if typ == protowire.BytesType { + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return + } + data = data[n:] + switch num { + case SHA_Command: + msg.Command = string(val) + case SHA_WorkingDirectory: + msg.WorkingDirectory = string(val) + } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return + } + data = data[n:] + } + } +} + +// --- Helper decoders --- + +// decodeStringField extracts a string from the first matching field in a submessage. +func decodeStringField(data []byte, targetField protowire.Number) string { + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return "" + } + data = data[n:] + + if typ == protowire.BytesType { + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return "" + } + data = data[n:] + if num == targetField { + return string(val) + } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return "" + } + data = data[n:] + } + } + return "" +} + +// decodeBytesField extracts bytes from the first matching field in a submessage. +func decodeBytesField(data []byte, targetField protowire.Number) []byte { + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return nil + } + data = data[n:] + + if typ == protowire.BytesType { + val, n := protowire.ConsumeBytes(data) + if n < 0 { + return nil + } + data = data[n:] + if num == targetField { + return append([]byte(nil), val...) 
+ } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return nil + } + data = data[n:] + } + } + return nil +} + +// decodeVarintField extracts an int64 from the first matching varint field in a submessage. +func decodeVarintField(data []byte, targetField protowire.Number) int64 { + for len(data) > 0 { + num, typ, n := protowire.ConsumeTag(data) + if n < 0 { + return 0 + } + data = data[n:] + if typ == protowire.VarintType { + val, n := protowire.ConsumeVarint(data) + if n < 0 { + return 0 + } + data = data[n:] + if num == targetField { + return int64(val) + } + } else { + n := protowire.ConsumeFieldValue(num, typ, data) + if n < 0 { + return 0 + } + data = data[n:] + } + } + return 0 +} + +// BlobIdHex returns the hex string of a blob ID for use as a map key. +func BlobIdHex(blobId []byte) string { + return hex.EncodeToString(blobId) +} diff --git a/internal/auth/cursor/proto/descriptor.go b/internal/auth/cursor/proto/descriptor.go new file mode 100644 index 0000000000..a24b3fa9a6 --- /dev/null +++ b/internal/auth/cursor/proto/descriptor.go @@ -0,0 +1,1244 @@ +package proto + +import ( + "encoding/base64" + "sync" + + "google.golang.org/protobuf/proto" + "google.golang.org/protobuf/reflect/protodesc" + "google.golang.org/protobuf/reflect/protoreflect" + descrptorpb "google.golang.org/protobuf/types/descriptorpb" +) + +// agentDescriptorB64 is the base64-encoded FileDescriptorProto for agent.proto. +// Extracted from alma-plugins/plugins/cursor-auth/proto/agent_pb.ts. 
+const agentDescriptorB64 = "" + + "CgthZ2VudC5wcm90bxIIYWdlbnQudjEicgoOR2xvYlRvb2xSZXN1bHQSLAoHc3VjY2VzcxgBIAEo" + + "CzIZLmFnZW50LnYxLkdsb2JUb29sU3VjY2Vzc0gAEigKBWVycm9yGAIgASgLMhcuYWdlbnQudjEu" + + "R2xvYlRvb2xFcnJvckgAQggKBnJlc3VsdCIeCg1HbG9iVG9vbEVycm9yEg0KBWVycm9yGAEgASgJ" + + "IokBCg9HbG9iVG9vbFN1Y2Nlc3MSDwoHcGF0dGVybhgBIAEoCRIMCgRwYXRoGAIgASgJEg0KBWZp" + + "bGVzGAMgAygJEhMKC3RvdGFsX2ZpbGVzGAQgASgFEhgKEGNsaWVudF90cnVuY2F0ZWQYBSABKAgS" + + "GQoRcmlwZ3JlcF90cnVuY2F0ZWQYBiABKAgiRgoMR2xvYlRvb2xDYWxsEgwKBGFyZ3MYASABKAwS" + + "KAoGcmVzdWx0GAIgASgLMhguYWdlbnQudjEuR2xvYlRvb2xSZXN1bHQibQoRUmVhZExpbnRzVG9v" + + "bENhbGwSKQoEYXJncxgBIAEoCzIbLmFnZW50LnYxLlJlYWRMaW50c1Rvb2xBcmdzEi0KBnJlc3Vs" + + "dBgCIAEoCzIdLmFnZW50LnYxLlJlYWRMaW50c1Rvb2xSZXN1bHQiIgoRUmVhZExpbnRzVG9vbEFy" + + "Z3MSDQoFcGF0aHMYASADKAkigQEKE1JlYWRMaW50c1Rvb2xSZXN1bHQSMQoHc3VjY2VzcxgBIAEo" + + "CzIeLmFnZW50LnYxLlJlYWRMaW50c1Rvb2xTdWNjZXNzSAASLQoFZXJyb3IYAiABKAsyHC5hZ2Vu" + + "dC52MS5SZWFkTGludHNUb29sRXJyb3JIAEIICgZyZXN1bHQiewoUUmVhZExpbnRzVG9vbFN1Y2Nl" + + "c3MSMwoQZmlsZV9kaWFnbm9zdGljcxgBIAMoCzIZLmFnZW50LnYxLkZpbGVEaWFnbm9zdGljcxIT" + + "Cgt0b3RhbF9maWxlcxgCIAEoBRIZChF0b3RhbF9kaWFnbm9zdGljcxgDIAEoBSJpCg9GaWxlRGlh" + + "Z25vc3RpY3MSDAoEcGF0aBgBIAEoCRItCgtkaWFnbm9zdGljcxgCIAMoCzIYLmFnZW50LnYxLkRp" + + "YWdub3N0aWNJdGVtEhkKEWRpYWdub3N0aWNzX2NvdW50GAMgASgFIqsBCg5EaWFnbm9zdGljSXRl" + + "bRIuCghzZXZlcml0eRgBIAEoDjIcLmFnZW50LnYxLkRpYWdub3N0aWNTZXZlcml0eRIoCgVyYW5n" + + "ZRgCIAEoCzIZLmFnZW50LnYxLkRpYWdub3N0aWNSYW5nZRIPCgdtZXNzYWdlGAMgASgJEg4KBnNv" + + "dXJjZRgEIAEoCRIMCgRjb2RlGAUgASgJEhAKCGlzX3N0YWxlGAYgASgIIlUKD0RpYWdub3N0aWNS" + + "YW5nZRIhCgVzdGFydBgBIAEoCzISLmFnZW50LnYxLlBvc2l0aW9uEh8KA2VuZBgCIAEoCzISLmFn" + + "ZW50LnYxLlBvc2l0aW9uIisKElJlYWRMaW50c1Rvb2xFcnJvchIVCg1lcnJvcl9tZXNzYWdlGAEg" + + "ASgJIh0KDE1jcFRvb2xFcnJvchINCgVlcnJvchgBIAEoCSLSAQoNTWNwVG9vbFJlc3VsdBInCgdz" + + "dWNjZXNzGAEgASgLMhQuYWdlbnQudjEuTWNwU3VjY2Vzc0gAEicKBWVycm9yGAIgASgLMhYuYWdl" + + 
"bnQudjEuTWNwVG9vbEVycm9ySAASKQoIcmVqZWN0ZWQYAyABKAsyFS5hZ2VudC52MS5NY3BSZWpl" + + "Y3RlZEgAEjoKEXBlcm1pc3Npb25fZGVuaWVkGAQgASgLMh0uYWdlbnQudjEuTWNwUGVybWlzc2lv" + + "bkRlbmllZEgAQggKBnJlc3VsdCJXCgtNY3BUb29sQ2FsbBIfCgRhcmdzGAEgASgLMhEuYWdlbnQu" + + "djEuTWNwQXJncxInCgZyZXN1bHQYAiABKAsyFy5hZ2VudC52MS5NY3BUb29sUmVzdWx0Im0KEVNl" + + "bVNlYXJjaFRvb2xDYWxsEikKBGFyZ3MYASABKAsyGy5hZ2VudC52MS5TZW1TZWFyY2hUb29sQXJn" + + "cxItCgZyZXN1bHQYAiABKAsyHS5hZ2VudC52MS5TZW1TZWFyY2hUb29sUmVzdWx0IlMKEVNlbVNl" + + "YXJjaFRvb2xBcmdzEg0KBXF1ZXJ5GAEgASgJEhoKEnRhcmdldF9kaXJlY3RvcmllcxgCIAMoCRIT" + + "CgtleHBsYW5hdGlvbhgDIAEoCSKBAQoTU2VtU2VhcmNoVG9vbFJlc3VsdBIxCgdzdWNjZXNzGAEg" + + "ASgLMh4uYWdlbnQudjEuU2VtU2VhcmNoVG9vbFN1Y2Nlc3NIABItCgVlcnJvchgCIAEoCzIcLmFn" + + "ZW50LnYxLlNlbVNlYXJjaFRvb2xFcnJvckgAQggKBnJlc3VsdCI9ChRTZW1TZWFyY2hUb29sU3Vj" + + "Y2VzcxIPCgdyZXN1bHRzGAEgASgJEhQKDGNvZGVfcmVzdWx0cxgCIAMoDCIrChJTZW1TZWFyY2hU" + + "b29sRXJyb3ISFQoNZXJyb3JfbWVzc2FnZRgBIAEoCSKCAQoYTGlzdE1jcFJlc291cmNlc1Rvb2xD" + + "YWxsEjAKBGFyZ3MYASABKAsyIi5hZ2VudC52MS5MaXN0TWNwUmVzb3VyY2VzRXhlY0FyZ3MSNAoG" + + "cmVzdWx0GAIgASgLMiQuYWdlbnQudjEuTGlzdE1jcFJlc291cmNlc0V4ZWNSZXN1bHQifwoXUmVh" + + "ZE1jcFJlc291cmNlVG9vbENhbGwSLwoEYXJncxgBIAEoCzIhLmFnZW50LnYxLlJlYWRNY3BSZXNv" + + "dXJjZUV4ZWNBcmdzEjMKBnJlc3VsdBgCIAEoCzIjLmFnZW50LnYxLlJlYWRNY3BSZXNvdXJjZUV4" + + "ZWNSZXN1bHQiWQoNRmV0Y2hUb29sQ2FsbBIhCgRhcmdzGAEgASgLMhMuYWdlbnQudjEuRmV0Y2hB" + + "cmdzEiUKBnJlc3VsdBgCIAEoCzIVLmFnZW50LnYxLkZldGNoUmVzdWx0Im4KFFJlY29yZFNjcmVl" + + "blRvb2xDYWxsEigKBGFyZ3MYASABKAsyGi5hZ2VudC52MS5SZWNvcmRTY3JlZW5BcmdzEiwKBnJl" + + "c3VsdBgCIAEoCzIcLmFnZW50LnYxLlJlY29yZFNjcmVlblJlc3VsdCJ3ChdXcml0ZVNoZWxsU3Rk" + + "aW5Ub29sQ2FsbBIrCgRhcmdzGAEgASgLMh0uYWdlbnQudjEuV3JpdGVTaGVsbFN0ZGluQXJncxIv" + + "CgZyZXN1bHQYAiABKAsyHy5hZ2VudC52MS5Xcml0ZVNoZWxsU3RkaW5SZXN1bHQisQEKC1JlZmxl" + + "Y3RBcmdzEiIKGnVuZXhwZWN0ZWRfYWN0aW9uX291dGNvbWVzGAEgASgJEh0KFXJlbGV2YW50X2lu" + + "c3RydWN0aW9ucxgCIAEoCRIZChFzY2VuYXJpb19hbmFseXNpcxgDIAEoCRIaChJjcml0aWNhbF9z" + + 
"eW50aGVzaXMYBCABKAkSEgoKbmV4dF9zdGVwcxgFIAEoCRIUCgx0b29sX2NhbGxfaWQYBiABKAki" + + "bwoNUmVmbGVjdFJlc3VsdBIrCgdzdWNjZXNzGAEgASgLMhguYWdlbnQudjEuUmVmbGVjdFN1Y2Nl" + + "c3NIABInCgVlcnJvchgCIAEoCzIWLmFnZW50LnYxLlJlZmxlY3RFcnJvckgAQggKBnJlc3VsdCIQ" + + "Cg5SZWZsZWN0U3VjY2VzcyIdCgxSZWZsZWN0RXJyb3ISDQoFZXJyb3IYASABKAkiXwoPUmVmbGVj" + + "dFRvb2xDYWxsEiMKBGFyZ3MYASABKAsyFS5hZ2VudC52MS5SZWZsZWN0QXJncxInCgZyZXN1bHQY" + + "AiABKAsyFy5hZ2VudC52MS5SZWZsZWN0UmVzdWx0IlkKF1N0YXJ0R3JpbmRFeGVjdXRpb25Bcmdz" + + "EhgKC2V4cGxhbmF0aW9uGAEgASgJSACIAQESFAoMdG9vbF9jYWxsX2lkGAIgASgJQg4KDF9leHBs" + + "YW5hdGlvbiKTAQoZU3RhcnRHcmluZEV4ZWN1dGlvblJlc3VsdBI3CgdzdWNjZXNzGAEgASgLMiQu" + + "YWdlbnQudjEuU3RhcnRHcmluZEV4ZWN1dGlvblN1Y2Nlc3NIABIzCgVlcnJvchgCIAEoCzIiLmFn" + + "ZW50LnYxLlN0YXJ0R3JpbmRFeGVjdXRpb25FcnJvckgAQggKBnJlc3VsdCIcChpTdGFydEdyaW5k" + + "RXhlY3V0aW9uU3VjY2VzcyIpChhTdGFydEdyaW5kRXhlY3V0aW9uRXJyb3ISDQoFZXJyb3IYASAB" + + "KAkigwEKG1N0YXJ0R3JpbmRFeGVjdXRpb25Ub29sQ2FsbBIvCgRhcmdzGAEgASgLMiEuYWdlbnQu" + + "djEuU3RhcnRHcmluZEV4ZWN1dGlvbkFyZ3MSMwoGcmVzdWx0GAIgASgLMiMuYWdlbnQudjEuU3Rh" + + "cnRHcmluZEV4ZWN1dGlvblJlc3VsdCJYChZTdGFydEdyaW5kUGxhbm5pbmdBcmdzEhgKC2V4cGxh" + + "bmF0aW9uGAEgASgJSACIAQESFAoMdG9vbF9jYWxsX2lkGAIgASgJQg4KDF9leHBsYW5hdGlvbiKQ" + + "AQoYU3RhcnRHcmluZFBsYW5uaW5nUmVzdWx0EjYKB3N1Y2Nlc3MYASABKAsyIy5hZ2VudC52MS5T" + + "dGFydEdyaW5kUGxhbm5pbmdTdWNjZXNzSAASMgoFZXJyb3IYAiABKAsyIS5hZ2VudC52MS5TdGFy" + + "dEdyaW5kUGxhbm5pbmdFcnJvckgAQggKBnJlc3VsdCIbChlTdGFydEdyaW5kUGxhbm5pbmdTdWNj" + + "ZXNzIigKF1N0YXJ0R3JpbmRQbGFubmluZ0Vycm9yEg0KBWVycm9yGAEgASgJIoABChpTdGFydEdy" + + "aW5kUGxhbm5pbmdUb29sQ2FsbBIuCgRhcmdzGAEgASgLMiAuYWdlbnQudjEuU3RhcnRHcmluZFBs" + + "YW5uaW5nQXJncxIyCgZyZXN1bHQYAiABKAsyIi5hZ2VudC52MS5TdGFydEdyaW5kUGxhbm5pbmdS" + + "ZXN1bHQinAEKCFRhc2tBcmdzEhMKC2Rlc2NyaXB0aW9uGAEgASgJEg4KBnByb21wdBgCIAEoCRIt" + + "Cg1zdWJhZ2VudF90eXBlGAMgASgLMhYuYWdlbnQudjEuU3ViYWdlbnRUeXBlEhIKBW1vZGVsGAQg" + + "ASgJSACIAQESEwoGcmVzdW1lGAUgASgJSAGIAQFCCAoGX21vZGVsQgkKB19yZXN1bWUiqgEKC1Rh" + + 
"c2tTdWNjZXNzEjYKEmNvbnZlcnNhdGlvbl9zdGVwcxgBIAMoCzIaLmFnZW50LnYxLkNvbnZlcnNh" + + "dGlvblN0ZXASFQoIYWdlbnRfaWQYAiABKAlIAIgBARIVCg1pc19iYWNrZ3JvdW5kGAMgASgIEhgK" + + "C2R1cmF0aW9uX21zGAQgASgESAGIAQFCCwoJX2FnZW50X2lkQg4KDF9kdXJhdGlvbl9tcyIaCglU" + + "YXNrRXJyb3ISDQoFZXJyb3IYASABKAkiZgoKVGFza1Jlc3VsdBIoCgdzdWNjZXNzGAEgASgLMhUu" + + "YWdlbnQudjEuVGFza1N1Y2Nlc3NIABIkCgVlcnJvchgCIAEoCzITLmFnZW50LnYxLlRhc2tFcnJv" + + "ckgAQggKBnJlc3VsdCJWCgxUYXNrVG9vbENhbGwSIAoEYXJncxgBIAEoCzISLmFnZW50LnYxLlRh" + + "c2tBcmdzEiQKBnJlc3VsdBgCIAEoCzIULmFnZW50LnYxLlRhc2tSZXN1bHQiTAoRVGFza1Rvb2xD" + + "YWxsRGVsdGESNwoSaW50ZXJhY3Rpb25fdXBkYXRlGAEgASgLMhsuYWdlbnQudjEuSW50ZXJhY3Rp" + + "b25VcGRhdGUiyw8KCFRvb2xDYWxsEjIKD3NoZWxsX3Rvb2xfY2FsbBgBIAEoCzIXLmFnZW50LnYx" + + "LlNoZWxsVG9vbENhbGxIABI0ChBkZWxldGVfdG9vbF9jYWxsGAMgASgLMhguYWdlbnQudjEuRGVs" + + "ZXRlVG9vbENhbGxIABIwCg5nbG9iX3Rvb2xfY2FsbBgEIAEoCzIWLmFnZW50LnYxLkdsb2JUb29s" + + "Q2FsbEgAEjAKDmdyZXBfdG9vbF9jYWxsGAUgASgLMhYuYWdlbnQudjEuR3JlcFRvb2xDYWxsSAAS" + + "MAoOcmVhZF90b29sX2NhbGwYCCABKAsyFi5hZ2VudC52MS5SZWFkVG9vbENhbGxIABI/ChZ1cGRh" + + "dGVfdG9kb3NfdG9vbF9jYWxsGAkgASgLMh0uYWdlbnQudjEuVXBkYXRlVG9kb3NUb29sQ2FsbEgA" + + "EjsKFHJlYWRfdG9kb3NfdG9vbF9jYWxsGAogASgLMhsuYWdlbnQudjEuUmVhZFRvZG9zVG9vbENh" + + "bGxIABIwCg5lZGl0X3Rvb2xfY2FsbBgMIAEoCzIWLmFnZW50LnYxLkVkaXRUb29sQ2FsbEgAEiwK" + + "DGxzX3Rvb2xfY2FsbBgNIAEoCzIULmFnZW50LnYxLkxzVG9vbENhbGxIABI7ChRyZWFkX2xpbnRz" + + "X3Rvb2xfY2FsbBgOIAEoCzIbLmFnZW50LnYxLlJlYWRMaW50c1Rvb2xDYWxsSAASLgoNbWNwX3Rv" + + "b2xfY2FsbBgPIAEoCzIVLmFnZW50LnYxLk1jcFRvb2xDYWxsSAASOwoUc2VtX3NlYXJjaF90b29s" + + "X2NhbGwYECABKAsyGy5hZ2VudC52MS5TZW1TZWFyY2hUb29sQ2FsbEgAEj0KFWNyZWF0ZV9wbGFu" + + "X3Rvb2xfY2FsbBgRIAEoCzIcLmFnZW50LnYxLkNyZWF0ZVBsYW5Ub29sQ2FsbEgAEjsKFHdlYl9z" + + "ZWFyY2hfdG9vbF9jYWxsGBIgASgLMhsuYWdlbnQudjEuV2ViU2VhcmNoVG9vbENhbGxIABIwCg50" + + "YXNrX3Rvb2xfY2FsbBgTIAEoCzIWLmFnZW50LnYxLlRhc2tUb29sQ2FsbEgAEkoKHGxpc3RfbWNw" + + "X3Jlc291cmNlc190b29sX2NhbGwYFCABKAsyIi5hZ2VudC52MS5MaXN0TWNwUmVzb3VyY2VzVG9v" + + 
"bENhbGxIABJIChtyZWFkX21jcF9yZXNvdXJjZV90b29sX2NhbGwYFSABKAsyIS5hZ2VudC52MS5S" + + "ZWFkTWNwUmVzb3VyY2VUb29sQ2FsbEgAEkYKGmFwcGx5X2FnZW50X2RpZmZfdG9vbF9jYWxsGBYg" + + "ASgLMiAuYWdlbnQudjEuQXBwbHlBZ2VudERpZmZUb29sQ2FsbEgAEj8KFmFza19xdWVzdGlvbl90" + + "b29sX2NhbGwYFyABKAsyHS5hZ2VudC52MS5Bc2tRdWVzdGlvblRvb2xDYWxsSAASMgoPZmV0Y2hf" + + "dG9vbF9jYWxsGBggASgLMhcuYWdlbnQudjEuRmV0Y2hUb29sQ2FsbEgAEj0KFXN3aXRjaF9tb2Rl" + + "X3Rvb2xfY2FsbBgZIAEoCzIcLmFnZW50LnYxLlN3aXRjaE1vZGVUb29sQ2FsbEgAEjsKFGV4YV9z" + + "ZWFyY2hfdG9vbF9jYWxsGBogASgLMhsuYWdlbnQudjEuRXhhU2VhcmNoVG9vbENhbGxIABI5ChNl" + + "eGFfZmV0Y2hfdG9vbF9jYWxsGBsgASgLMhouYWdlbnQudjEuRXhhRmV0Y2hUb29sQ2FsbEgAEkMK" + + "GGdlbmVyYXRlX2ltYWdlX3Rvb2xfY2FsbBgcIAEoCzIfLmFnZW50LnYxLkdlbmVyYXRlSW1hZ2VU" + + "b29sQ2FsbEgAEkEKF3JlY29yZF9zY3JlZW5fdG9vbF9jYWxsGB0gASgLMh4uYWdlbnQudjEuUmVj" + + "b3JkU2NyZWVuVG9vbENhbGxIABI/ChZjb21wdXRlcl91c2VfdG9vbF9jYWxsGB4gASgLMh0uYWdl" + + "bnQudjEuQ29tcHV0ZXJVc2VUb29sQ2FsbEgAEkgKG3dyaXRlX3NoZWxsX3N0ZGluX3Rvb2xfY2Fs" + + "bBgfIAEoCzIhLmFnZW50LnYxLldyaXRlU2hlbGxTdGRpblRvb2xDYWxsSAASNgoRcmVmbGVjdF90" + + "b29sX2NhbGwYICABKAsyGS5hZ2VudC52MS5SZWZsZWN0VG9vbENhbGxIABJOCh5zZXR1cF92bV9l" + + "bnZpcm9ubWVudF90b29sX2NhbGwYISABKAsyJC5hZ2VudC52MS5TZXR1cFZtRW52aXJvbm1lbnRU" + + "b29sQ2FsbEgAEjoKE3RydW5jYXRlZF90b29sX2NhbGwYIiABKAsyGy5hZ2VudC52MS5UcnVuY2F0" + + "ZWRUb29sQ2FsbEgAElAKH3N0YXJ0X2dyaW5kX2V4ZWN1dGlvbl90b29sX2NhbGwYIyABKAsyJS5h" + + "Z2VudC52MS5TdGFydEdyaW5kRXhlY3V0aW9uVG9vbENhbGxIABJOCh5zdGFydF9ncmluZF9wbGFu" + + "bmluZ190b29sX2NhbGwYJCABKAsyJC5hZ2VudC52MS5TdGFydEdyaW5kUGxhbm5pbmdUb29sQ2Fs" + + "bEgAQgYKBHRvb2wiFwoVVHJ1bmNhdGVkVG9vbENhbGxBcmdzIhoKGFRydW5jYXRlZFRvb2xDYWxs" + + "U3VjY2VzcyInChZUcnVuY2F0ZWRUb29sQ2FsbEVycm9yEg0KBWVycm9yGAEgASgJIo0BChdUcnVu" + + "Y2F0ZWRUb29sQ2FsbFJlc3VsdBI1CgdzdWNjZXNzGAEgASgLMiIuYWdlbnQudjEuVHJ1bmNhdGVk" + + "VG9vbENhbGxTdWNjZXNzSAASMQoFZXJyb3IYAiABKAsyIC5hZ2VudC52MS5UcnVuY2F0ZWRUb29s" + + "Q2FsbEVycm9ySABCCAoGcmVzdWx0IpQBChFUcnVuY2F0ZWRUb29sQ2FsbBIdChVvcmlnaW5hbF9z" + + 
"dGVwX2Jsb2JfaWQYASABKAwSLQoEYXJncxgCIAEoCzIfLmFnZW50LnYxLlRydW5jYXRlZFRvb2xD" + + "YWxsQXJncxIxCgZyZXN1bHQYAyABKAsyIS5hZ2VudC52MS5UcnVuY2F0ZWRUb29sQ2FsbFJlc3Vs" + + "dCLRAQoNVG9vbENhbGxEZWx0YRI9ChVzaGVsbF90b29sX2NhbGxfZGVsdGEYASABKAsyHC5hZ2Vu" + + "dC52MS5TaGVsbFRvb2xDYWxsRGVsdGFIABI7ChR0YXNrX3Rvb2xfY2FsbF9kZWx0YRgCIAEoCzIb" + + "LmFnZW50LnYxLlRhc2tUb29sQ2FsbERlbHRhSAASOwoUZWRpdF90b29sX2NhbGxfZGVsdGEYAyAB" + + "KAsyGy5hZ2VudC52MS5FZGl0VG9vbENhbGxEZWx0YUgAQgcKBWRlbHRhIrYBChBDb252ZXJzYXRp" + + "b25TdGVwEjcKEWFzc2lzdGFudF9tZXNzYWdlGAEgASgLMhouYWdlbnQudjEuQXNzaXN0YW50TWVz" + + "c2FnZUgAEicKCXRvb2xfY2FsbBgCIAEoCzISLmFnZW50LnYxLlRvb2xDYWxsSAASNQoQdGhpbmtp" + + "bmdfbWVzc2FnZRgDIAEoCzIZLmFnZW50LnYxLlRoaW5raW5nTWVzc2FnZUgAQgkKB21lc3NhZ2Ui" + + "gQQKEkNvbnZlcnNhdGlvbkFjdGlvbhI6ChN1c2VyX21lc3NhZ2VfYWN0aW9uGAEgASgLMhsuYWdl" + + "bnQudjEuVXNlck1lc3NhZ2VBY3Rpb25IABIvCg1yZXN1bWVfYWN0aW9uGAIgASgLMhYuYWdlbnQu" + + "djEuUmVzdW1lQWN0aW9uSAASLwoNY2FuY2VsX2FjdGlvbhgDIAEoCzIWLmFnZW50LnYxLkNhbmNl" + + "bEFjdGlvbkgAEjUKEHN1bW1hcml6ZV9hY3Rpb24YBCABKAsyGS5hZ2VudC52MS5TdW1tYXJpemVB" + + "Y3Rpb25IABI8ChRzaGVsbF9jb21tYW5kX2FjdGlvbhgFIAEoCzIcLmFnZW50LnYxLlNoZWxsQ29t" + + "bWFuZEFjdGlvbkgAEjYKEXN0YXJ0X3BsYW5fYWN0aW9uGAYgASgLMhkuYWdlbnQudjEuU3RhcnRQ" + + "bGFuQWN0aW9uSAASOgoTZXhlY3V0ZV9wbGFuX2FjdGlvbhgHIAEoCzIbLmFnZW50LnYxLkV4ZWN1" + + "dGVQbGFuQWN0aW9uSAASWgokYXN5bmNfYXNrX3F1ZXN0aW9uX2NvbXBsZXRpb25fYWN0aW9uGAgg" + + "ASgLMiouYWdlbnQudjEuQXN5bmNBc2tRdWVzdGlvbkNvbXBsZXRpb25BY3Rpb25IAEIICgZhY3Rp" + + "b24ivwEKEVVzZXJNZXNzYWdlQWN0aW9uEisKDHVzZXJfbWVzc2FnZRgBIAEoCzIVLmFnZW50LnYx" + + "LlVzZXJNZXNzYWdlEjEKD3JlcXVlc3RfY29udGV4dBgCIAEoCzIYLmFnZW50LnYxLlJlcXVlc3RD" + + "b250ZXh0EikKHHNlbmRfdG9faW50ZXJhY3Rpb25fbGlzdGVuZXIYAyABKAhIAIgBAUIfCh1fc2Vu" + + "ZF90b19pbnRlcmFjdGlvbl9saXN0ZW5lciIOCgxDYW5jZWxBY3Rpb24iQQoMUmVzdW1lQWN0aW9u" + + "EjEKD3JlcXVlc3RfY29udGV4dBgCIAEoCzIYLmFnZW50LnYxLlJlcXVlc3RDb250ZXh0IqABCiBB" + + "c3luY0Fza1F1ZXN0aW9uQ29tcGxldGlvbkFjdGlvbhIdChVvcmlnaW5hbF90b29sX2NhbGxfaWQY" + + 
"ASABKAkSMAoNb3JpZ2luYWxfYXJncxgCIAEoCzIZLmFnZW50LnYxLkFza1F1ZXN0aW9uQXJncxIr" + + "CgZyZXN1bHQYAyABKAsyGy5hZ2VudC52MS5Bc2tRdWVzdGlvblJlc3VsdCIRCg9TdW1tYXJpemVB" + + "Y3Rpb24iVAoSU2hlbGxDb21tYW5kQWN0aW9uEi0KDXNoZWxsX2NvbW1hbmQYASABKAsyFi5hZ2Vu" + + "dC52MS5TaGVsbENvbW1hbmQSDwoHZXhlY19pZBgCIAEoCSKCAQoPU3RhcnRQbGFuQWN0aW9uEisK" + + "DHVzZXJfbWVzc2FnZRgBIAEoCzIVLmFnZW50LnYxLlVzZXJNZXNzYWdlEjEKD3JlcXVlc3RfY29u" + + "dGV4dBgCIAEoCzIYLmFnZW50LnYxLlJlcXVlc3RDb250ZXh0Eg8KB2lzX3NwZWMYAyABKAgi4gEK" + + "EUV4ZWN1dGVQbGFuQWN0aW9uEjEKD3JlcXVlc3RfY29udGV4dBgBIAEoCzIYLmFnZW50LnYxLlJl" + + "cXVlc3RDb250ZXh0Ei0KBHBsYW4YAiABKAsyGi5hZ2VudC52MS5Db252ZXJzYXRpb25QbGFuSACI" + + "AQESGgoNcGxhbl9maWxlX3VyaRgDIAEoCUgBiAEBEh4KEXBsYW5fZmlsZV9jb250ZW50GAQgASgJ" + + "SAKIAQFCBwoFX3BsYW5CEAoOX3BsYW5fZmlsZV91cmlCFAoSX3BsYW5fZmlsZV9jb250ZW50IugC" + + "CgtVc2VyTWVzc2FnZRIMCgR0ZXh0GAEgASgJEhIKCm1lc3NhZ2VfaWQYAiABKAkSOAoQc2VsZWN0" + + "ZWRfY29udGV4dBgDIAEoCzIZLmFnZW50LnYxLlNlbGVjdGVkQ29udGV4dEgAiAEBEgwKBG1vZGUY" + + "BCABKAUSHQoQaXNfc2ltdWxhdGVkX21zZxgFIAEoCEgBiAEBEh8KEmJlc3Rfb2Zfbl9ncm91cF9p" + + "ZBgGIAEoCUgCiAEBEigKG3RyeV91c2VfYmVzdF9vZl9uX3Byb21vdGlvbhgHIAEoCEgDiAEBEhYK" + + "CXJpY2hfdGV4dBgIIAEoCUgEiAEBQhMKEV9zZWxlY3RlZF9jb250ZXh0QhMKEV9pc19zaW11bGF0" + + "ZWRfbXNnQhUKE19iZXN0X29mX25fZ3JvdXBfaWRCHgocX3RyeV91c2VfYmVzdF9vZl9uX3Byb21v" + + "dGlvbkIMCgpfcmljaF90ZXh0IiAKEEFzc2lzdGFudE1lc3NhZ2USDAoEdGV4dBgBIAEoCSI0Cg9U" + + "aGlua2luZ01lc3NhZ2USDAoEdGV4dBgBIAEoCRITCgtkdXJhdGlvbl9tcxgCIAEoDSIfCgxTaGVs" + + "bENvbW1hbmQSDwoHY29tbWFuZBgBIAEoCSJACgtTaGVsbE91dHB1dBIOCgZzdGRvdXQYASABKAkS" + + "DgoGc3RkZXJyGAIgASgJEhEKCWV4aXRfY29kZRgDIAEoBSKiAQoQQ29udmVyc2F0aW9uVHVybhJC" + + "ChdhZ2VudF9jb252ZXJzYXRpb25fdHVybhgBIAEoCzIfLmFnZW50LnYxLkFnZW50Q29udmVyc2F0" + + "aW9uVHVybkgAEkIKF3NoZWxsX2NvbnZlcnNhdGlvbl90dXJuGAIgASgLMh8uYWdlbnQudjEuU2hl" + + "bGxDb252ZXJzYXRpb25UdXJuSABCBgoEdHVybiIgChBDb252ZXJzYXRpb25QbGFuEgwKBHBsYW4Y" + + "ASABKAkivQEKGUNvbnZlcnNhdGlvblR1cm5TdHJ1Y3R1cmUSSwoXYWdlbnRfY29udmVyc2F0aW9u" + + 
"X3R1cm4YASABKAsyKC5hZ2VudC52MS5BZ2VudENvbnZlcnNhdGlvblR1cm5TdHJ1Y3R1cmVIABJL" + + "ChdzaGVsbF9jb252ZXJzYXRpb25fdHVybhgCIAEoCzIoLmFnZW50LnYxLlNoZWxsQ29udmVyc2F0" + + "aW9uVHVyblN0cnVjdHVyZUgAQgYKBHR1cm4ilwEKFUFnZW50Q29udmVyc2F0aW9uVHVybhIrCgx1" + + "c2VyX21lc3NhZ2UYASABKAsyFS5hZ2VudC52MS5Vc2VyTWVzc2FnZRIpCgVzdGVwcxgCIAMoCzIa" + + "LmFnZW50LnYxLkNvbnZlcnNhdGlvblN0ZXASFwoKcmVxdWVzdF9pZBgDIAEoCUgAiAEBQg0KC19y" + + "ZXF1ZXN0X2lkIm0KHkFnZW50Q29udmVyc2F0aW9uVHVyblN0cnVjdHVyZRIUCgx1c2VyX21lc3Nh" + + "Z2UYASABKAwSDQoFc3RlcHMYAiADKAwSFwoKcmVxdWVzdF9pZBgDIAEoCUgAiAEBQg0KC19yZXF1" + + "ZXN0X2lkInMKFVNoZWxsQ29udmVyc2F0aW9uVHVybhItCg1zaGVsbF9jb21tYW5kGAEgASgLMhYu" + + "YWdlbnQudjEuU2hlbGxDb21tYW5kEisKDHNoZWxsX291dHB1dBgCIAEoCzIVLmFnZW50LnYxLlNo" + + "ZWxsT3V0cHV0Ik0KHlNoZWxsQ29udmVyc2F0aW9uVHVyblN0cnVjdHVyZRIVCg1zaGVsbF9jb21t" + + "YW5kGAEgASgMEhQKDHNoZWxsX291dHB1dBgCIAEoDCImChNDb252ZXJzYXRpb25TdW1tYXJ5Eg8K" + + "B3N1bW1hcnkYASABKAkieAoaQ29udmVyc2F0aW9uU3VtbWFyeUFyY2hpdmUSGwoTc3VtbWFyaXpl" + + "ZF9tZXNzYWdlcxgBIAMoDBIPCgdzdW1tYXJ5GAIgASgJEhMKC3dpbmRvd190YWlsGAMgASgNEhcK" + + "D3N1bW1hcnlfbWVzc2FnZRgEIAEoDCJDChhDb252ZXJzYXRpb25Ub2tlbkRldGFpbHMSEwoLdXNl" + + "ZF90b2tlbnMYASABKA0SEgoKbWF4X3Rva2VucxgCIAEoDSJfCglGaWxlU3RhdGUSFAoHY29udGVu" + + "dBgBIAEoCUgAiAEBEhwKD2luaXRpYWxfY29udGVudBgCIAEoCUgBiAEBQgoKCF9jb250ZW50QhIK" + + "EF9pbml0aWFsX2NvbnRlbnQiaAoSRmlsZVN0YXRlU3RydWN0dXJlEhQKB2NvbnRlbnQYASABKAxI" + + "AIgBARIcCg9pbml0aWFsX2NvbnRlbnQYAiABKAxIAYgBAUIKCghfY29udGVudEISChBfaW5pdGlh" + + "bF9jb250ZW50IjcKClN0ZXBUaW1pbmcSEwoLZHVyYXRpb25fbXMYASABKAQSFAoMdGltZXN0YW1w" + + "X21zGAIgASgEIvYEChFDb252ZXJzYXRpb25TdGF0ZRIhChlyb290X3Byb21wdF9tZXNzYWdlc19q" + + "c29uGAEgAygJEikKBXR1cm5zGAggAygLMhouYWdlbnQudjEuQ29udmVyc2F0aW9uVHVybhIhCgV0" + + "b2RvcxgDIAMoCzISLmFnZW50LnYxLlRvZG9JdGVtEhoKEnBlbmRpbmdfdG9vbF9jYWxscxgEIAMo" + + "CRI5Cg10b2tlbl9kZXRhaWxzGAUgASgLMiIuYWdlbnQudjEuQ29udmVyc2F0aW9uVG9rZW5EZXRh" + + "aWxzEjMKB3N1bW1hcnkYBiABKAsyHS5hZ2VudC52MS5Db252ZXJzYXRpb25TdW1tYXJ5SACIAQES" + + 
"LQoEcGxhbhgHIAEoCzIaLmFnZW50LnYxLkNvbnZlcnNhdGlvblBsYW5IAYgBARJCCg9zdW1tYXJ5" + + "X2FyY2hpdmUYCSABKAsyJC5hZ2VudC52MS5Db252ZXJzYXRpb25TdW1tYXJ5QXJjaGl2ZUgCiAEB" + + "EkAKC2ZpbGVfc3RhdGVzGAogAygLMisuYWdlbnQudjEuQ29udmVyc2F0aW9uU3RhdGUuRmlsZVN0" + + "YXRlc0VudHJ5Ej4KEHN1bW1hcnlfYXJjaGl2ZXMYCyADKAsyJC5hZ2VudC52MS5Db252ZXJzYXRp" + + "b25TdW1tYXJ5QXJjaGl2ZRpGCg9GaWxlU3RhdGVzRW50cnkSCwoDa2V5GAEgASgJEiIKBXZhbHVl" + + "GAIgASgLMhMuYWdlbnQudjEuRmlsZVN0YXRlOgI4AUIKCghfc3VtbWFyeUIHCgVfcGxhbkISChBf" + + "c3VtbWFyeV9hcmNoaXZlIscBChZTdWJhZ2VudFBlcnNpc3RlZFN0YXRlEkAKEmNvbnZlcnNhdGlv" + + "bl9zdGF0ZRgBIAEoCzIkLmFnZW50LnYxLkNvbnZlcnNhdGlvblN0YXRlU3RydWN0dXJlEhwKFGNy" + + "ZWF0ZWRfdGltZXN0YW1wX21zGAIgASgEEh4KFmxhc3RfdXNlZF90aW1lc3RhbXBfbXMYAyABKAQS" + + "LQoNc3ViYWdlbnRfdHlwZRgEIAEoCzIWLmFnZW50LnYxLlN1YmFnZW50VHlwZSK3BwoaQ29udmVy" + + "c2F0aW9uU3RhdGVTdHJ1Y3R1cmUSEQoJdHVybnNfb2xkGAIgAygMEiEKGXJvb3RfcHJvbXB0X21l" + + "c3NhZ2VzX2pzb24YASADKAwSDQoFdHVybnMYCCADKAwSDQoFdG9kb3MYAyADKAwSGgoScGVuZGlu" + + "Z190b29sX2NhbGxzGAQgAygJEjkKDXRva2VuX2RldGFpbHMYBSABKAsyIi5hZ2VudC52MS5Db252" + + "ZXJzYXRpb25Ub2tlbkRldGFpbHMSFAoHc3VtbWFyeRgGIAEoDEgAiAEBEhEKBHBsYW4YByABKAxI" + + "AYgBARIfChdwcmV2aW91c193b3Jrc3BhY2VfdXJpcxgJIAMoCRIRCgRtb2RlGAogASgFSAKIAQES" + + "HAoPc3VtbWFyeV9hcmNoaXZlGAsgASgMSAOIAQESSQoLZmlsZV9zdGF0ZXMYDCADKAsyNC5hZ2Vu" + + "dC52MS5Db252ZXJzYXRpb25TdGF0ZVN0cnVjdHVyZS5GaWxlU3RhdGVzRW50cnkSTgoOZmlsZV9z" + + "dGF0ZXNfdjIYDyADKAsyNi5hZ2VudC52MS5Db252ZXJzYXRpb25TdGF0ZVN0cnVjdHVyZS5GaWxl" + + "U3RhdGVzVjJFbnRyeRIYChBzdW1tYXJ5X2FyY2hpdmVzGA0gAygMEioKDHR1cm5fdGltaW5ncxgO" + + "IAMoCzIULmFnZW50LnYxLlN0ZXBUaW1pbmcSUQoPc3ViYWdlbnRfc3RhdGVzGBAgAygLMjguYWdl" + + "bnQudjEuQ29udmVyc2F0aW9uU3RhdGVTdHJ1Y3R1cmUuU3ViYWdlbnRTdGF0ZXNFbnRyeRIaChJz" + + "ZWxmX3N1bW1hcnlfY291bnQYESABKA0SEgoKcmVhZF9wYXRocxgSIAMoCRoxCg9GaWxlU3RhdGVz" + + "RW50cnkSCwoDa2V5GAEgASgJEg0KBXZhbHVlGAIgASgMOgI4ARpRChFGaWxlU3RhdGVzVjJFbnRy" + + "eRILCgNrZXkYASABKAkSKwoFdmFsdWUYAiABKAsyHC5hZ2VudC52MS5GaWxlU3RhdGVTdHJ1Y3R1" + + 
"cmU6AjgBGlcKE1N1YmFnZW50U3RhdGVzRW50cnkSCwoDa2V5GAEgASgJEi8KBXZhbHVlGAIgASgL" + + "MiAuYWdlbnQudjEuU3ViYWdlbnRQZXJzaXN0ZWRTdGF0ZToCOAFCCgoIX3N1bW1hcnlCBwoFX3Bs" + + "YW5CBwoFX21vZGVCEgoQX3N1bW1hcnlfYXJjaGl2ZSIRCg9UaGlua2luZ0RldGFpbHMiSAoRQXBp" + + "S2V5Q3JlZGVudGlhbHMSDwoHYXBpX2tleRgBIAEoCRIVCghiYXNlX3VybBgCIAEoCUgAiAEBQgsK" + + "CV9iYXNlX3VybCJJChBBenVyZUNyZWRlbnRpYWxzEg8KB2FwaV9rZXkYASABKAkSEAoIYmFzZV91" + + "cmwYAiABKAkSEgoKZGVwbG95bWVudBgDIAEoCSJ6ChJCZWRyb2NrQ3JlZGVudGlhbHMSEgoKYWNj" + + "ZXNzX2tleRgBIAEoCRISCgpzZWNyZXRfa2V5GAIgASgJEg4KBnJlZ2lvbhgDIAEoCRIaCg1zZXNz" + + "aW9uX3Rva2VuGAQgASgJSACIAQFCEAoOX3Nlc3Npb25fdG9rZW4isQMKDE1vZGVsRGV0YWlscxIQ" + + "Cghtb2RlbF9pZBgBIAEoCRIYChBkaXNwbGF5X21vZGVsX2lkGAMgASgJEhQKDGRpc3BsYXlfbmFt" + + "ZRgEIAEoCRIaChJkaXNwbGF5X25hbWVfc2hvcnQYBSABKAkSDwoHYWxpYXNlcxgGIAMoCRI4ChB0" + + "aGlua2luZ19kZXRhaWxzGAIgASgLMhkuYWdlbnQudjEuVGhpbmtpbmdEZXRhaWxzSAGIAQESFQoI" + + "bWF4X21vZGUYByABKAhIAogBARI6ChNhcGlfa2V5X2NyZWRlbnRpYWxzGAggASgLMhsuYWdlbnQu" + + "djEuQXBpS2V5Q3JlZGVudGlhbHNIABI3ChFhenVyZV9jcmVkZW50aWFscxgJIAEoCzIaLmFnZW50" + + "LnYxLkF6dXJlQ3JlZGVudGlhbHNIABI7ChNiZWRyb2NrX2NyZWRlbnRpYWxzGAogASgLMhwuYWdl" + + "bnQudjEuQmVkcm9ja0NyZWRlbnRpYWxzSABCDQoLY3JlZGVudGlhbHNCEwoRX3RoaW5raW5nX2Rl" + + "dGFpbHNCCwoJX21heF9tb2RlIrcCCg5SZXF1ZXN0ZWRNb2RlbBIQCghtb2RlbF9pZBgBIAEoCRIQ" + + "CghtYXhfbW9kZRgCIAEoCBJACgpwYXJhbWV0ZXJzGAMgAygLMiwuYWdlbnQudjEuUmVxdWVzdGVk" + + "TW9kZWxfTW9kZWxQYXJhbWV0ZXJieXRlcxI6ChNhcGlfa2V5X2NyZWRlbnRpYWxzGAQgASgLMhsu" + + "YWdlbnQudjEuQXBpS2V5Q3JlZGVudGlhbHNIABI3ChFhenVyZV9jcmVkZW50aWFscxgFIAEoCzIa" + + "LmFnZW50LnYxLkF6dXJlQ3JlZGVudGlhbHNIABI7ChNiZWRyb2NrX2NyZWRlbnRpYWxzGAYgASgL" + + "MhwuYWdlbnQudjEuQmVkcm9ja0NyZWRlbnRpYWxzSABCDQoLY3JlZGVudGlhbHMiPwoiUmVxdWVz" + + "dGVkTW9kZWxfTW9kZWxQYXJhbWV0ZXJieXRlcxIKCgJpZBgBIAEoCRINCgV2YWx1ZRgCIAEoCSK5" + + "BAoPQWdlbnRSdW5SZXF1ZXN0EkAKEmNvbnZlcnNhdGlvbl9zdGF0ZRgBIAEoCzIkLmFnZW50LnYx" + + "LkNvbnZlcnNhdGlvblN0YXRlU3RydWN0dXJlEiwKBmFjdGlvbhgCIAEoCzIcLmFnZW50LnYxLkNv" + + 
"bnZlcnNhdGlvbkFjdGlvbhItCg1tb2RlbF9kZXRhaWxzGAMgASgLMhYuYWdlbnQudjEuTW9kZWxE" + + "ZXRhaWxzEjYKD3JlcXVlc3RlZF9tb2RlbBgJIAEoCzIYLmFnZW50LnYxLlJlcXVlc3RlZE1vZGVs" + + "SACIAQESJQoJbWNwX3Rvb2xzGAQgASgLMhIuYWdlbnQudjEuTWNwVG9vbHMSHAoPY29udmVyc2F0" + + "aW9uX2lkGAUgASgJSAGIAQESRAoXbWNwX2ZpbGVfc3lzdGVtX29wdGlvbnMYBiABKAsyHi5hZ2Vu" + + "dC52MS5NY3BGaWxlU3lzdGVtT3B0aW9uc0gCiAEBEjIKDXNraWxsX29wdGlvbnMYByABKAsyFi5h" + + "Z2VudC52MS5Ta2lsbE9wdGlvbnNIA4gBARIhChRjdXN0b21fc3lzdGVtX3Byb21wdBgIIAEoCUgE" + + "iAEBQhIKEF9yZXF1ZXN0ZWRfbW9kZWxCEgoQX2NvbnZlcnNhdGlvbl9pZEIaChhfbWNwX2ZpbGVf" + + "c3lzdGVtX29wdGlvbnNCEAoOX3NraWxsX29wdGlvbnNCFwoVX2N1c3RvbV9zeXN0ZW1fcHJvbXB0" + + "Ih8KD1RleHREZWx0YVVwZGF0ZRIMCgR0ZXh0GAEgASgJImYKFVRvb2xDYWxsU3RhcnRlZFVwZGF0" + + "ZRIPCgdjYWxsX2lkGAEgASgJEiUKCXRvb2xfY2FsbBgCIAEoCzISLmFnZW50LnYxLlRvb2xDYWxs" + + "EhUKDW1vZGVsX2NhbGxfaWQYAyABKAkiaAoXVG9vbENhbGxDb21wbGV0ZWRVcGRhdGUSDwoHY2Fs" + + "bF9pZBgBIAEoCRIlCgl0b29sX2NhbGwYAiABKAsyEi5hZ2VudC52MS5Ub29sQ2FsbBIVCg1tb2Rl" + + "bF9jYWxsX2lkGAMgASgJIm8KE1Rvb2xDYWxsRGVsdGFVcGRhdGUSDwoHY2FsbF9pZBgBIAEoCRIw" + + "Cg90b29sX2NhbGxfZGVsdGEYAiABKAsyFy5hZ2VudC52MS5Ub29sQ2FsbERlbHRhEhUKDW1vZGVs" + + "X2NhbGxfaWQYAyABKAkifwoVUGFydGlhbFRvb2xDYWxsVXBkYXRlEg8KB2NhbGxfaWQYASABKAkS" + + "JQoJdG9vbF9jYWxsGAIgASgLMhIuYWdlbnQudjEuVG9vbENhbGwSFwoPYXJnc190ZXh0X2RlbHRh" + + "GAMgASgJEhUKDW1vZGVsX2NhbGxfaWQYBCABKAkiIwoTVGhpbmtpbmdEZWx0YVVwZGF0ZRIMCgR0" + + "ZXh0GAEgASgJIjcKF1RoaW5raW5nQ29tcGxldGVkVXBkYXRlEhwKFHRoaW5raW5nX2R1cmF0aW9u" + + "X21zGAEgASgFIiIKEFRva2VuRGVsdGFVcGRhdGUSDgoGdG9rZW5zGAEgASgFIiAKDVN1bW1hcnlV" + + "cGRhdGUSDwoHc3VtbWFyeRgBIAEoCSIWChRTdW1tYXJ5U3RhcnRlZFVwZGF0ZSIRCg9IZWFydGJl" + + "YXRVcGRhdGUiGAoWU3VtbWFyeUNvbXBsZXRlZFVwZGF0ZSLXAQoWU2hlbGxPdXRwdXREZWx0YVVw" + + "ZGF0ZRItCgZzdGRvdXQYASABKAsyGy5hZ2VudC52MS5TaGVsbFN0cmVhbVN0ZG91dEgAEi0KBnN0" + + "ZGVychgCIAEoCzIbLmFnZW50LnYxLlNoZWxsU3RyZWFtU3RkZXJySAASKQoEZXhpdBgDIAEoCzIZ" + + "LmFnZW50LnYxLlNoZWxsU3RyZWFtRXhpdEgAEisKBXN0YXJ0GAQgASgLMhouYWdlbnQudjEuU2hl" + + 
"bGxTdHJlYW1TdGFydEgAQgcKBWV2ZW50IhEKD1R1cm5FbmRlZFVwZGF0ZSJIChlVc2VyTWVzc2Fn" + + "ZUFwcGVuZGVkVXBkYXRlEisKDHVzZXJfbWVzc2FnZRgBIAEoCzIVLmFnZW50LnYxLlVzZXJNZXNz" + + "YWdlIiQKEVN0ZXBTdGFydGVkVXBkYXRlEg8KB3N0ZXBfaWQYASABKAQiQAoTU3RlcENvbXBsZXRl" + + "ZFVwZGF0ZRIPCgdzdGVwX2lkGAEgASgEEhgKEHN0ZXBfZHVyYXRpb25fbXMYAiABKAMi7wcKEUlu" + + "dGVyYWN0aW9uVXBkYXRlEi8KCnRleHRfZGVsdGEYASABKAsyGS5hZ2VudC52MS5UZXh0RGVsdGFV" + + "cGRhdGVIABI8ChFwYXJ0aWFsX3Rvb2xfY2FsbBgHIAEoCzIfLmFnZW50LnYxLlBhcnRpYWxUb29s" + + "Q2FsbFVwZGF0ZUgAEjgKD3Rvb2xfY2FsbF9kZWx0YRgPIAEoCzIdLmFnZW50LnYxLlRvb2xDYWxs" + + "RGVsdGFVcGRhdGVIABI8ChF0b29sX2NhbGxfc3RhcnRlZBgCIAEoCzIfLmFnZW50LnYxLlRvb2xD" + + "YWxsU3RhcnRlZFVwZGF0ZUgAEkAKE3Rvb2xfY2FsbF9jb21wbGV0ZWQYAyABKAsyIS5hZ2VudC52" + + "MS5Ub29sQ2FsbENvbXBsZXRlZFVwZGF0ZUgAEjcKDnRoaW5raW5nX2RlbHRhGAQgASgLMh0uYWdl" + + "bnQudjEuVGhpbmtpbmdEZWx0YVVwZGF0ZUgAEj8KEnRoaW5raW5nX2NvbXBsZXRlZBgFIAEoCzIh" + + "LmFnZW50LnYxLlRoaW5raW5nQ29tcGxldGVkVXBkYXRlSAASRAoVdXNlcl9tZXNzYWdlX2FwcGVu" + + "ZGVkGAYgASgLMiMuYWdlbnQudjEuVXNlck1lc3NhZ2VBcHBlbmRlZFVwZGF0ZUgAEjEKC3Rva2Vu" + + "X2RlbHRhGAggASgLMhouYWdlbnQudjEuVG9rZW5EZWx0YVVwZGF0ZUgAEioKB3N1bW1hcnkYCSAB" + + "KAsyFy5hZ2VudC52MS5TdW1tYXJ5VXBkYXRlSAASOQoPc3VtbWFyeV9zdGFydGVkGAogASgLMh4u" + + "YWdlbnQudjEuU3VtbWFyeVN0YXJ0ZWRVcGRhdGVIABI9ChFzdW1tYXJ5X2NvbXBsZXRlZBgLIAEo" + + "CzIgLmFnZW50LnYxLlN1bW1hcnlDb21wbGV0ZWRVcGRhdGVIABI+ChJzaGVsbF9vdXRwdXRfZGVs" + + "dGEYDCABKAsyIC5hZ2VudC52MS5TaGVsbE91dHB1dERlbHRhVXBkYXRlSAASLgoJaGVhcnRiZWF0" + + "GA0gASgLMhkuYWdlbnQudjEuSGVhcnRiZWF0VXBkYXRlSAASLwoKdHVybl9lbmRlZBgOIAEoCzIZ" + + "LmFnZW50LnYxLlR1cm5FbmRlZFVwZGF0ZUgAEjMKDHN0ZXBfc3RhcnRlZBgQIAEoCzIbLmFnZW50" + + "LnYxLlN0ZXBTdGFydGVkVXBkYXRlSAASNwoOc3RlcF9jb21wbGV0ZWQYESABKAsyHS5hZ2VudC52" + + "MS5TdGVwQ29tcGxldGVkVXBkYXRlSABCCQoHbWVzc2FnZSKaBAoQSW50ZXJhY3Rpb25RdWVyeRIK" + + "CgJpZBgBIAEoDRJDChh3ZWJfc2VhcmNoX3JlcXVlc3RfcXVlcnkYAiABKAsyHy5hZ2VudC52MS5X" + + "ZWJTZWFyY2hSZXF1ZXN0UXVlcnlIABJPCh5hc2tfcXVlc3Rpb25faW50ZXJhY3Rpb25fcXVlcnkY" + + 
"AyABKAsyJS5hZ2VudC52MS5Bc2tRdWVzdGlvbkludGVyYWN0aW9uUXVlcnlIABJFChlzd2l0Y2hf" + + "bW9kZV9yZXF1ZXN0X3F1ZXJ5GAQgASgLMiAuYWdlbnQudjEuU3dpdGNoTW9kZVJlcXVlc3RRdWVy" + + "eUgAEkMKGGV4YV9zZWFyY2hfcmVxdWVzdF9xdWVyeRgFIAEoCzIfLmFnZW50LnYxLkV4YVNlYXJj" + + "aFJlcXVlc3RRdWVyeUgAEkEKF2V4YV9mZXRjaF9yZXF1ZXN0X3F1ZXJ5GAYgASgLMh4uYWdlbnQu" + + "djEuRXhhRmV0Y2hSZXF1ZXN0UXVlcnlIABJFChljcmVhdGVfcGxhbl9yZXF1ZXN0X3F1ZXJ5GAcg" + + "ASgLMiAuYWdlbnQudjEuQ3JlYXRlUGxhblJlcXVlc3RRdWVyeUgAEkUKGXNldHVwX3ZtX2Vudmly" + + "b25tZW50X2FyZ3MYCCABKAsyIC5hZ2VudC52MS5TZXR1cFZtRW52aXJvbm1lbnRBcmdzSABCBwoF" + + "cXVlcnkixgQKE0ludGVyYWN0aW9uUmVzcG9uc2USCgoCaWQYASABKA0SSQobd2ViX3NlYXJjaF9y" + + "ZXF1ZXN0X3Jlc3BvbnNlGAIgASgLMiIuYWdlbnQudjEuV2ViU2VhcmNoUmVxdWVzdFJlc3BvbnNl" + + "SAASVQohYXNrX3F1ZXN0aW9uX2ludGVyYWN0aW9uX3Jlc3BvbnNlGAMgASgLMiguYWdlbnQudjEu" + + "QXNrUXVlc3Rpb25JbnRlcmFjdGlvblJlc3BvbnNlSAASSwocc3dpdGNoX21vZGVfcmVxdWVzdF9y" + + "ZXNwb25zZRgEIAEoCzIjLmFnZW50LnYxLlN3aXRjaE1vZGVSZXF1ZXN0UmVzcG9uc2VIABJJChtl" + + "eGFfc2VhcmNoX3JlcXVlc3RfcmVzcG9uc2UYBSABKAsyIi5hZ2VudC52MS5FeGFTZWFyY2hSZXF1" + + "ZXN0UmVzcG9uc2VIABJHChpleGFfZmV0Y2hfcmVxdWVzdF9yZXNwb25zZRgGIAEoCzIhLmFnZW50" + + "LnYxLkV4YUZldGNoUmVxdWVzdFJlc3BvbnNlSAASSwocY3JlYXRlX3BsYW5fcmVxdWVzdF9yZXNw" + + "b25zZRgHIAEoCzIjLmFnZW50LnYxLkNyZWF0ZVBsYW5SZXF1ZXN0UmVzcG9uc2VIABJJChtzZXR1" + + "cF92bV9lbnZpcm9ubWVudF9yZXN1bHQYCCABKAsyIi5hZ2VudC52MS5TZXR1cFZtRW52aXJvbm1l" + + "bnRSZXN1bHRIAEIICgZyZXN1bHQiXAobQXNrUXVlc3Rpb25JbnRlcmFjdGlvblF1ZXJ5EicKBGFy" + + "Z3MYASABKAsyGS5hZ2VudC52MS5Bc2tRdWVzdGlvbkFyZ3MSFAoMdG9vbF9jYWxsX2lkGAIgASgJ" + + "Ik0KHkFza1F1ZXN0aW9uSW50ZXJhY3Rpb25SZXNwb25zZRIrCgZyZXN1bHQYASABKAsyGy5hZ2Vu" + + "dC52MS5Bc2tRdWVzdGlvblJlc3VsdCIRCg9DbGllbnRIZWFydGJlYXQixgQKDlByZXdhcm1SZXF1" + + "ZXN0Ei0KDW1vZGVsX2RldGFpbHMYASABKAsyFi5hZ2VudC52MS5Nb2RlbERldGFpbHMSNgoPcmVx" + + "dWVzdGVkX21vZGVsGAkgASgLMhguYWdlbnQudjEuUmVxdWVzdGVkTW9kZWxIAIgBARIcCg9jb252" + + "ZXJzYXRpb25faWQYAiABKAlIAYgBARJAChJjb252ZXJzYXRpb25fc3RhdGUYAyABKAsyJC5hZ2Vu" + + 
"dC52MS5Db252ZXJzYXRpb25TdGF0ZVN0cnVjdHVyZRIlCgltY3BfdG9vbHMYBCABKAsyEi5hZ2Vu" + + "dC52MS5NY3BUb29scxJEChdtY3BfZmlsZV9zeXN0ZW1fb3B0aW9ucxgFIAEoCzIeLmFnZW50LnYx" + + "Lk1jcEZpbGVTeXN0ZW1PcHRpb25zSAKIAQESHwoSYmVzdF9vZl9uX2dyb3VwX2lkGAYgASgJSAOI" + + "AQESKAobdHJ5X3VzZV9iZXN0X29mX25fcHJvbW90aW9uGAcgASgISASIAQESIQoUY3VzdG9tX3N5" + + "c3RlbV9wcm9tcHQYCCABKAlIBYgBAUISChBfcmVxdWVzdGVkX21vZGVsQhIKEF9jb252ZXJzYXRp" + + "b25faWRCGgoYX21jcF9maWxlX3N5c3RlbV9vcHRpb25zQhUKE19iZXN0X29mX25fZ3JvdXBfaWRC" + + "HgocX3RyeV91c2VfYmVzdF9vZl9uX3Byb21vdGlvbkIXChVfY3VzdG9tX3N5c3RlbV9wcm9tcHQi" + + "HQoPRXhlY1NlcnZlckFib3J0EgoKAmlkGAEgASgNIlEKGEV4ZWNTZXJ2ZXJDb250cm9sTWVzc2Fn" + + "ZRIqCgVhYm9ydBgBIAEoCzIZLmFnZW50LnYxLkV4ZWNTZXJ2ZXJBYm9ydEgAQgkKB21lc3NhZ2Ui" + + "+AMKEkFnZW50Q2xpZW50TWVzc2FnZRIwCgtydW5fcmVxdWVzdBgBIAEoCzIZLmFnZW50LnYxLkFn" + + "ZW50UnVuUmVxdWVzdEgAEjoKE2V4ZWNfY2xpZW50X21lc3NhZ2UYAiABKAsyGy5hZ2VudC52MS5F" + + "eGVjQ2xpZW50TWVzc2FnZUgAEkkKG2V4ZWNfY2xpZW50X2NvbnRyb2xfbWVzc2FnZRgFIAEoCzIi" + + "LmFnZW50LnYxLkV4ZWNDbGllbnRDb250cm9sTWVzc2FnZUgAEjYKEWt2X2NsaWVudF9tZXNzYWdl" + + "GAMgASgLMhkuYWdlbnQudjEuS3ZDbGllbnRNZXNzYWdlSAASOwoTY29udmVyc2F0aW9uX2FjdGlv" + + "bhgEIAEoCzIcLmFnZW50LnYxLkNvbnZlcnNhdGlvbkFjdGlvbkgAEj0KFGludGVyYWN0aW9uX3Jl" + + "c3BvbnNlGAYgASgLMh0uYWdlbnQudjEuSW50ZXJhY3Rpb25SZXNwb25zZUgAEjUKEGNsaWVudF9o" + + "ZWFydGJlYXQYByABKAsyGS5hZ2VudC52MS5DbGllbnRIZWFydGJlYXRIABIzCg9wcmV3YXJtX3Jl" + + "cXVlc3QYCCABKAsyGC5hZ2VudC52MS5QcmV3YXJtUmVxdWVzdEgAQgkKB21lc3NhZ2UiogMKEkFn" + + "ZW50U2VydmVyTWVzc2FnZRI5ChJpbnRlcmFjdGlvbl91cGRhdGUYASABKAsyGy5hZ2VudC52MS5J" + + "bnRlcmFjdGlvblVwZGF0ZUgAEjoKE2V4ZWNfc2VydmVyX21lc3NhZ2UYAiABKAsyGy5hZ2VudC52" + + "MS5FeGVjU2VydmVyTWVzc2FnZUgAEkkKG2V4ZWNfc2VydmVyX2NvbnRyb2xfbWVzc2FnZRgFIAEo" + + "CzIiLmFnZW50LnYxLkV4ZWNTZXJ2ZXJDb250cm9sTWVzc2FnZUgAEk4KHmNvbnZlcnNhdGlvbl9j" + + "aGVja3BvaW50X3VwZGF0ZRgDIAEoCzIkLmFnZW50LnYxLkNvbnZlcnNhdGlvblN0YXRlU3RydWN0" + + "dXJlSAASNgoRa3Zfc2VydmVyX21lc3NhZ2UYBCABKAsyGS5hZ2VudC52MS5LdlNlcnZlck1lc3Nh" + + 
"Z2VIABI3ChFpbnRlcmFjdGlvbl9xdWVyeRgHIAEoCzIaLmFnZW50LnYxLkludGVyYWN0aW9uUXVl" + + "cnlIAEIJCgdtZXNzYWdlIigKEE5hbWVBZ2VudFJlcXVlc3QSFAoMdXNlcl9tZXNzYWdlGAEgASgJ" + + "IiEKEU5hbWVBZ2VudFJlc3BvbnNlEgwKBG5hbWUYASABKAkiMgoWR2V0VXNhYmxlTW9kZWxzUmVx" + + "dWVzdBIYChBjdXN0b21fbW9kZWxfaWRzGAEgAygJIkEKF0dldFVzYWJsZU1vZGVsc1Jlc3BvbnNl" + + "EiYKBm1vZGVscxgBIAMoCzIWLmFnZW50LnYxLk1vZGVsRGV0YWlscyIeChxHZXREZWZhdWx0TW9k" + + "ZWxGb3JDbGlSZXF1ZXN0IkYKHUdldERlZmF1bHRNb2RlbEZvckNsaVJlc3BvbnNlEiUKBW1vZGVs" + + "GAEgASgLMhYuYWdlbnQudjEuTW9kZWxEZXRhaWxzIh8KHUdldEFsbG93ZWRNb2RlbEludGVudHNS" + + "ZXF1ZXN0IjcKHkdldEFsbG93ZWRNb2RlbEludGVudHNSZXNwb25zZRIVCg1tb2RlbF9pbnRlbnRz" + + "GAEgAygJIpcCChNJZGVFZGl0b3JzU3RhdGVGaWxlEhUKDXJlbGF0aXZlX3BhdGgYASABKAkSFQoN" + + "YWJzb2x1dGVfcGF0aBgCIAEoCRIhChRpc19jdXJyZW50bHlfZm9jdXNlZBgDIAEoCEgAiAEBEiAK" + + "E2N1cnJlbnRfbGluZV9udW1iZXIYBCABKAVIAYgBARIeChFjdXJyZW50X2xpbmVfdGV4dBgFIAEo" + + "CUgCiAEBEhcKCmxpbmVfY291bnQYBiABKAVIA4gBAUIXChVfaXNfY3VycmVudGx5X2ZvY3VzZWRC" + + "FgoUX2N1cnJlbnRfbGluZV9udW1iZXJCFAoSX2N1cnJlbnRfbGluZV90ZXh0Qg0KC19saW5lX2Nv" + + "dW50IlMKE0lkZUVkaXRvcnNTdGF0ZUxpdGUSPAoVcmVjZW50bHlfdmlld2VkX2ZpbGVzGAEgAygL" + + "Mh0uYWdlbnQudjEuSWRlRWRpdG9yc1N0YXRlRmlsZSJ0ChZBcHBseUFnZW50RGlmZlRvb2xDYWxs" + + "EioKBGFyZ3MYASABKAsyHC5hZ2VudC52MS5BcHBseUFnZW50RGlmZkFyZ3MSLgoGcmVzdWx0GAIg" + + "ASgLMh4uYWdlbnQudjEuQXBwbHlBZ2VudERpZmZSZXN1bHQiJgoSQXBwbHlBZ2VudERpZmZBcmdz" + + "EhAKCGFnZW50X2lkGAEgASgJIoQBChRBcHBseUFnZW50RGlmZlJlc3VsdBIyCgdzdWNjZXNzGAEg" + + "ASgLMh8uYWdlbnQudjEuQXBwbHlBZ2VudERpZmZTdWNjZXNzSAASLgoFZXJyb3IYAiABKAsyHS5h" + + "Z2VudC52MS5BcHBseUFnZW50RGlmZkVycm9ySABCCAoGcmVzdWx0Ik4KFUFwcGx5QWdlbnREaWZm" + + "U3VjY2VzcxI1Cg9hcHBsaWVkX2NoYW5nZXMYASADKAsyHC5hZ2VudC52MS5BcHBsaWVkQWdlbnRD" + + "aGFuZ2Ui6QEKEkFwcGxpZWRBZ2VudENoYW5nZRIMCgRwYXRoGAEgASgJEhMKC2NoYW5nZV90eXBl" + + "GAIgASgFEhsKDmJlZm9yZV9jb250ZW50GAMgASgJSACIAQESGgoNYWZ0ZXJfY29udGVudBgEIAEo" + + "CUgBiAEBEhIKBWVycm9yGAUgASgJSAKIAQESHgoRbWVzc2FnZV9mb3JfbW9kZWwYBiABKAlIA4gB" + + 
"AUIRCg9fYmVmb3JlX2NvbnRlbnRCEAoOX2FmdGVyX2NvbnRlbnRCCAoGX2Vycm9yQhQKEl9tZXNz" + + "YWdlX2Zvcl9tb2RlbCJbChNBcHBseUFnZW50RGlmZkVycm9yEg0KBWVycm9yGAEgASgJEjUKD2Fw" + + "cGxpZWRfY2hhbmdlcxgCIAMoCzIcLmFnZW50LnYxLkFwcGxpZWRBZ2VudENoYW5nZSJrChNBc2tR" + + "dWVzdGlvblRvb2xDYWxsEicKBGFyZ3MYASABKAsyGS5hZ2VudC52MS5Bc2tRdWVzdGlvbkFyZ3MS" + + "KwoGcmVzdWx0GAIgASgLMhsuYWdlbnQudjEuQXNrUXVlc3Rpb25SZXN1bHQijwEKD0Fza1F1ZXN0" + + "aW9uQXJncxINCgV0aXRsZRgBIAEoCRI1CglxdWVzdGlvbnMYAiADKAsyIi5hZ2VudC52MS5Bc2tR" + + "dWVzdGlvbkFyZ3NfUXVlc3Rpb24SEQoJcnVuX2FzeW5jGAUgASgIEiMKG2FzeW5jX29yaWdpbmFs" + + "X3Rvb2xfY2FsbF9pZBgGIAEoCSKBAQoYQXNrUXVlc3Rpb25BcmdzX1F1ZXN0aW9uEgoKAmlkGAEg" + + "ASgJEg4KBnByb21wdBgCIAEoCRIxCgdvcHRpb25zGAMgAygLMiAuYWdlbnQudjEuQXNrUXVlc3Rp" + + "b25BcmdzX09wdGlvbhIWCg5hbGxvd19tdWx0aXBsZRgEIAEoCCIzChZBc2tRdWVzdGlvbkFyZ3Nf" + + "T3B0aW9uEgoKAmlkGAEgASgJEg0KBWxhYmVsGAIgASgJIhIKEEFza1F1ZXN0aW9uQXN5bmMi2wEK" + + "EUFza1F1ZXN0aW9uUmVzdWx0Ei8KB3N1Y2Nlc3MYASABKAsyHC5hZ2VudC52MS5Bc2tRdWVzdGlv" + + "blN1Y2Nlc3NIABIrCgVlcnJvchgCIAEoCzIaLmFnZW50LnYxLkFza1F1ZXN0aW9uRXJyb3JIABIx" + + "CghyZWplY3RlZBgDIAEoCzIdLmFnZW50LnYxLkFza1F1ZXN0aW9uUmVqZWN0ZWRIABIrCgVhc3lu" + + "YxgEIAEoCzIaLmFnZW50LnYxLkFza1F1ZXN0aW9uQXN5bmNIAEIICgZyZXN1bHQiSgoSQXNrUXVl" + + "c3Rpb25TdWNjZXNzEjQKB2Fuc3dlcnMYASADKAsyIy5hZ2VudC52MS5Bc2tRdWVzdGlvblN1Y2Nl" + + "c3NfQW5zd2VyIk0KGUFza1F1ZXN0aW9uU3VjY2Vzc19BbnN3ZXISEwoLcXVlc3Rpb25faWQYASAB" + + "KAkSGwoTc2VsZWN0ZWRfb3B0aW9uX2lkcxgCIAMoCSIpChBBc2tRdWVzdGlvbkVycm9yEhUKDWVy" + + "cm9yX21lc3NhZ2UYASABKAkiJQoTQXNrUXVlc3Rpb25SZWplY3RlZBIOCgZyZWFzb24YASABKAki" + + "iQIKGEJhY2tncm91bmRTaGVsbFNwYXduQXJncxIPCgdjb21tYW5kGAEgASgJEhkKEXdvcmtpbmdf" + + "ZGlyZWN0b3J5GAIgASgJEhQKDHRvb2xfY2FsbF9pZBgDIAEoCRI7Cg5wYXJzaW5nX3Jlc3VsdBgE" + + "IAEoCzIjLmFnZW50LnYxLlNoZWxsQ29tbWFuZFBhcnNpbmdSZXN1bHQSNAoOc2FuZGJveF9wb2xp" + + "Y3kYBSABKAsyFy5hZ2VudC52MS5TYW5kYm94UG9saWN5SACIAQESJQodZW5hYmxlX3dyaXRlX3No" + + "ZWxsX3N0ZGluX3Rvb2wYBiABKAhCEQoPX3NhbmRib3hfcG9saWN5IoECChpCYWNrZ3JvdW5kU2hl" + + 
"bGxTcGF3blJlc3VsdBI4CgdzdWNjZXNzGAEgASgLMiUuYWdlbnQudjEuQmFja2dyb3VuZFNoZWxs" + + "U3Bhd25TdWNjZXNzSAASNAoFZXJyb3IYAiABKAsyIy5hZ2VudC52MS5CYWNrZ3JvdW5kU2hlbGxT" + + "cGF3bkVycm9ySAASKwoIcmVqZWN0ZWQYAyABKAsyFy5hZ2VudC52MS5TaGVsbFJlamVjdGVkSAAS" + + "PAoRcGVybWlzc2lvbl9kZW5pZWQYBCABKAsyHy5hZ2VudC52MS5TaGVsbFBlcm1pc3Npb25EZW5p" + + "ZWRIAEIICgZyZXN1bHQidQobQmFja2dyb3VuZFNoZWxsU3Bhd25TdWNjZXNzEhAKCHNoZWxsX2lk" + + "GAEgASgNEg8KB2NvbW1hbmQYAiABKAkSGQoRd29ya2luZ19kaXJlY3RvcnkYAyABKAkSEAoDcGlk" + + "GAQgASgNSACIAQFCBgoEX3BpZCJWChlCYWNrZ3JvdW5kU2hlbGxTcGF3bkVycm9yEg8KB2NvbW1h" + + "bmQYASABKAkSGQoRd29ya2luZ19kaXJlY3RvcnkYAiABKAkSDQoFZXJyb3IYAyABKAkiNgoTV3Jp" + + "dGVTaGVsbFN0ZGluQXJncxIQCghzaGVsbF9pZBgBIAEoDRINCgVjaGFycxgCIAEoCSKHAQoVV3Jp" + + "dGVTaGVsbFN0ZGluUmVzdWx0EjMKB3N1Y2Nlc3MYASABKAsyIC5hZ2VudC52MS5Xcml0ZVNoZWxs" + + "U3RkaW5TdWNjZXNzSAASLwoFZXJyb3IYAiABKAsyHi5hZ2VudC52MS5Xcml0ZVNoZWxsU3RkaW5F" + + "cnJvckgAQggKBnJlc3VsdCJdChZXcml0ZVNoZWxsU3RkaW5TdWNjZXNzEhAKCHNoZWxsX2lkGAEg" + + "ASgNEjEKKXRlcm1pbmFsX2ZpbGVfbGVuZ3RoX2JlZm9yZV9pbnB1dF93cml0dGVuGAIgASgNIiUK" + + "FFdyaXRlU2hlbGxTdGRpbkVycm9yEg0KBWVycm9yGAEgASgJIiIKCkNvb3JkaW5hdGUSCQoBeBgB" + + "IAEoBRIJCgF5GAIgASgFIlUKD0NvbXB1dGVyVXNlQXJncxIUCgx0b29sX2NhbGxfaWQYASABKAkS" + + "LAoHYWN0aW9ucxgCIAMoCzIbLmFnZW50LnYxLkNvbXB1dGVyVXNlQWN0aW9uIoEEChFDb21wdXRl" + + "clVzZUFjdGlvbhIvCgptb3VzZV9tb3ZlGAEgASgLMhkuYWdlbnQudjEuTW91c2VNb3ZlQWN0aW9u" + + "SAASJgoFY2xpY2sYAiABKAsyFS5hZ2VudC52MS5DbGlja0FjdGlvbkgAEi8KCm1vdXNlX2Rvd24Y" + + "AyABKAsyGS5hZ2VudC52MS5Nb3VzZURvd25BY3Rpb25IABIrCghtb3VzZV91cBgEIAEoCzIXLmFn" + + "ZW50LnYxLk1vdXNlVXBBY3Rpb25IABIkCgRkcmFnGAUgASgLMhQuYWdlbnQudjEuRHJhZ0FjdGlv" + + "bkgAEigKBnNjcm9sbBgGIAEoCzIWLmFnZW50LnYxLlNjcm9sbEFjdGlvbkgAEiQKBHR5cGUYByAB" + + "KAsyFC5hZ2VudC52MS5UeXBlQWN0aW9uSAASIgoDa2V5GAggASgLMhMuYWdlbnQudjEuS2V5QWN0" + + "aW9uSAASJAoEd2FpdBgJIAEoCzIULmFnZW50LnYxLldhaXRBY3Rpb25IABIwCgpzY3JlZW5zaG90" + + "GAogASgLMhouYWdlbnQudjEuU2NyZWVuc2hvdEFjdGlvbkgAEjkKD2N1cnNvcl9wb3NpdGlvbhgL" + + 
"IAEoCzIeLmFnZW50LnYxLkN1cnNvclBvc2l0aW9uQWN0aW9uSABCCAoGYWN0aW9uIjsKD01vdXNl" + + "TW92ZUFjdGlvbhIoCgpjb29yZGluYXRlGAEgASgLMhQuYWdlbnQudjEuQ29vcmRpbmF0ZSKYAQoL" + + "Q2xpY2tBY3Rpb24SLQoKY29vcmRpbmF0ZRgBIAEoCzIULmFnZW50LnYxLkNvb3JkaW5hdGVIAIgB" + + "ARIOCgZidXR0b24YAiABKAUSDQoFY291bnQYAyABKAUSGgoNbW9kaWZpZXJfa2V5cxgEIAEoCUgB" + + "iAEBQg0KC19jb29yZGluYXRlQhAKDl9tb2RpZmllcl9rZXlzIiEKD01vdXNlRG93bkFjdGlvbhIO" + + "CgZidXR0b24YASABKAUiHwoNTW91c2VVcEFjdGlvbhIOCgZidXR0b24YASABKAUiQAoKRHJhZ0Fj" + + "dGlvbhIiCgRwYXRoGAEgAygLMhQuYWdlbnQudjEuQ29vcmRpbmF0ZRIOCgZidXR0b24YAiABKAUi" + + "nQEKDFNjcm9sbEFjdGlvbhItCgpjb29yZGluYXRlGAEgASgLMhQuYWdlbnQudjEuQ29vcmRpbmF0" + + "ZUgAiAEBEhEKCWRpcmVjdGlvbhgCIAEoBRIOCgZhbW91bnQYAyABKAUSGgoNbW9kaWZpZXJfa2V5" + + "cxgEIAEoCUgBiAEBQg0KC19jb29yZGluYXRlQhAKDl9tb2RpZmllcl9rZXlzIhoKClR5cGVBY3Rp" + + "b24SDAoEdGV4dBgBIAEoCSJMCglLZXlBY3Rpb24SCwoDa2V5GAEgASgJEh0KEGhvbGRfZHVyYXRp" + + "b25fbXMYAiABKAVIAIgBAUITChFfaG9sZF9kdXJhdGlvbl9tcyIhCgpXYWl0QWN0aW9uEhMKC2R1" + + "cmF0aW9uX21zGAEgASgFIhIKEFNjcmVlbnNob3RBY3Rpb24iFgoUQ3Vyc29yUG9zaXRpb25BY3Rp" + + "b24iewoRQ29tcHV0ZXJVc2VSZXN1bHQSLwoHc3VjY2VzcxgBIAEoCzIcLmFnZW50LnYxLkNvbXB1" + + "dGVyVXNlU3VjY2Vzc0gAEisKBWVycm9yGAIgASgLMhouYWdlbnQudjEuQ29tcHV0ZXJVc2VFcnJv" + + "ckgAQggKBnJlc3VsdCL7AQoSQ29tcHV0ZXJVc2VTdWNjZXNzEhQKDGFjdGlvbl9jb3VudBgBIAEo" + + "BRITCgtkdXJhdGlvbl9tcxgCIAEoBRIXCgpzY3JlZW5zaG90GAMgASgJSACIAQESEAoDbG9nGAQg" + + "ASgJSAGIAQESHAoPc2NyZWVuc2hvdF9wYXRoGAUgASgJSAKIAQESMgoPY3Vyc29yX3Bvc2l0aW9u" + + "GAYgASgLMhQuYWdlbnQudjEuQ29vcmRpbmF0ZUgDiAEBQg0KC19zY3JlZW5zaG90QgYKBF9sb2dC" + + "EgoQX3NjcmVlbnNob3RfcGF0aEISChBfY3Vyc29yX3Bvc2l0aW9uIsABChBDb21wdXRlclVzZUVy" + + "cm9yEg0KBWVycm9yGAEgASgJEhQKDGFjdGlvbl9jb3VudBgCIAEoBRITCgtkdXJhdGlvbl9tcxgD" + + "IAEoBRIQCgNsb2cYBCABKAlIAIgBARIXCgpzY3JlZW5zaG90GAUgASgJSAGIAQESHAoPc2NyZWVu" + + "c2hvdF9wYXRoGAYgASgJSAKIAQFCBgoEX2xvZ0INCgtfc2NyZWVuc2hvdEISChBfc2NyZWVuc2hv" + + "dF9wYXRoImsKE0NvbXB1dGVyVXNlVG9vbENhbGwSJwoEYXJncxgBIAEoCzIZLmFnZW50LnYxLkNv" + + 
"bXB1dGVyVXNlQXJncxIrCgZyZXN1bHQYAiABKAsyGy5hZ2VudC52MS5Db21wdXRlclVzZVJlc3Vs" + + "dCJoChJDcmVhdGVQbGFuVG9vbENhbGwSJgoEYXJncxgBIAEoCzIYLmFnZW50LnYxLkNyZWF0ZVBs" + + "YW5BcmdzEioKBnJlc3VsdBgCIAEoCzIaLmFnZW50LnYxLkNyZWF0ZVBsYW5SZXN1bHQiOAoFUGhh" + + "c2USDAoEbmFtZRgBIAEoCRIhCgV0b2RvcxgCIAMoCzISLmFnZW50LnYxLlRvZG9JdGVtIpYBCg5D" + + "cmVhdGVQbGFuQXJncxIMCgRwbGFuGAEgASgJEiEKBXRvZG9zGAIgAygLMhIuYWdlbnQudjEuVG9k" + + "b0l0ZW0SEAoIb3ZlcnZpZXcYAyABKAkSDAoEbmFtZRgEIAEoCRISCgppc19wcm9qZWN0GAUgASgI" + + "Eh8KBnBoYXNlcxgGIAMoCzIPLmFnZW50LnYxLlBoYXNlIooBChBDcmVhdGVQbGFuUmVzdWx0EhAK" + + "CHBsYW5fdXJpGAMgASgJEi4KB3N1Y2Nlc3MYASABKAsyGy5hZ2VudC52MS5DcmVhdGVQbGFuU3Vj" + + "Y2Vzc0gAEioKBWVycm9yGAIgASgLMhkuYWdlbnQudjEuQ3JlYXRlUGxhbkVycm9ySABCCAoGcmVz" + + "dWx0IhMKEUNyZWF0ZVBsYW5TdWNjZXNzIiAKD0NyZWF0ZVBsYW5FcnJvchINCgVlcnJvchgBIAEo" + + "CSJWChZDcmVhdGVQbGFuUmVxdWVzdFF1ZXJ5EiYKBGFyZ3MYASABKAsyGC5hZ2VudC52MS5DcmVh" + + "dGVQbGFuQXJncxIUCgx0b29sX2NhbGxfaWQYAiABKAkiRwoZQ3JlYXRlUGxhblJlcXVlc3RSZXNw" + + "b25zZRIqCgZyZXN1bHQYASABKAsyGi5hZ2VudC52MS5DcmVhdGVQbGFuUmVzdWx0IhYKFEN1cnNv" + + "clJ1bGVUeXBlR2xvYmFsIigKF0N1cnNvclJ1bGVUeXBlRmlsZUdsb2JzEg0KBWdsb2JzGAEgAygJ" + + "IjEKGkN1cnNvclJ1bGVUeXBlQWdlbnRGZXRjaGVkEhMKC2Rlc2NyaXB0aW9uGAEgASgJIiAKHkN1" + + "cnNvclJ1bGVUeXBlTWFudWFsbHlBdHRhY2hlZCKLAgoOQ3Vyc29yUnVsZVR5cGUSMAoGZ2xvYmFs" + + "GAEgASgLMh4uYWdlbnQudjEuQ3Vyc29yUnVsZVR5cGVHbG9iYWxIABI5CgxmaWxlX2dsb2JiZWQY" + + "AiABKAsyIS5hZ2VudC52MS5DdXJzb3JSdWxlVHlwZUZpbGVHbG9ic0gAEj0KDWFnZW50X2ZldGNo" + + "ZWQYAyABKAsyJC5hZ2VudC52MS5DdXJzb3JSdWxlVHlwZUFnZW50RmV0Y2hlZEgAEkUKEW1hbnVh" + + "bGx5X2F0dGFjaGVkGAQgASgLMiguYWdlbnQudjEuQ3Vyc29yUnVsZVR5cGVNYW51YWxseUF0dGFj" + + "aGVkSABCBgoEdHlwZSLIAQoKQ3Vyc29yUnVsZRIRCglmdWxsX3BhdGgYASABKAkSDwoHY29udGVu" + + "dBgCIAEoCRImCgR0eXBlGAMgASgLMhguYWdlbnQudjEuQ3Vyc29yUnVsZVR5cGUSDgoGc291cmNl" + + "GAQgASgFEh4KEWdpdF9yZW1vdGVfb3JpZ2luGAUgASgJSACIAQESGAoLcGFyc2VfZXJyb3IYBiAB" + + "KAlIAYgBAUIUChJfZ2l0X3JlbW90ZV9vcmlnaW5CDgoMX3BhcnNlX2Vycm9yIjAKCkRlbGV0ZUFy" + + 
"Z3MSDAoEcGF0aBgBIAEoCRIUCgx0b29sX2NhbGxfaWQYAiABKAki7QIKDERlbGV0ZVJlc3VsdBIq" + + "CgdzdWNjZXNzGAEgASgLMhcuYWdlbnQudjEuRGVsZXRlU3VjY2Vzc0gAEjYKDmZpbGVfbm90X2Zv" + + "dW5kGAIgASgLMhwuYWdlbnQudjEuRGVsZXRlRmlsZU5vdEZvdW5kSAASKwoIbm90X2ZpbGUYAyAB" + + "KAsyFy5hZ2VudC52MS5EZWxldGVOb3RGaWxlSAASPQoRcGVybWlzc2lvbl9kZW5pZWQYBCABKAsy" + + "IC5hZ2VudC52MS5EZWxldGVQZXJtaXNzaW9uRGVuaWVkSAASLQoJZmlsZV9idXN5GAUgASgLMhgu" + + "YWdlbnQudjEuRGVsZXRlRmlsZUJ1c3lIABIsCghyZWplY3RlZBgGIAEoCzIYLmFnZW50LnYxLkRl" + + "bGV0ZVJlamVjdGVkSAASJgoFZXJyb3IYByABKAsyFS5hZ2VudC52MS5EZWxldGVFcnJvckgAQggK" + + "BnJlc3VsdCJcCg1EZWxldGVTdWNjZXNzEgwKBHBhdGgYASABKAkSFAoMZGVsZXRlZF9maWxlGAIg" + + "ASgJEhEKCWZpbGVfc2l6ZRgDIAEoAxIUCgxwcmV2X2NvbnRlbnQYBCABKAkiIgoSRGVsZXRlRmls" + + "ZU5vdEZvdW5kEgwKBHBhdGgYASABKAkiMgoNRGVsZXRlTm90RmlsZRIMCgRwYXRoGAEgASgJEhMK" + + "C2FjdHVhbF90eXBlGAIgASgJIlkKFkRlbGV0ZVBlcm1pc3Npb25EZW5pZWQSDAoEcGF0aBgBIAEo" + + "CRIcChRjbGllbnRfdmlzaWJsZV9lcnJvchgCIAEoCRITCgtpc19yZWFkb25seRgDIAEoCCIeCg5E" + + "ZWxldGVGaWxlQnVzeRIMCgRwYXRoGAEgASgJIi4KDkRlbGV0ZVJlamVjdGVkEgwKBHBhdGgYASAB" + + "KAkSDgoGcmVhc29uGAIgASgJIioKC0RlbGV0ZUVycm9yEgwKBHBhdGgYASABKAkSDQoFZXJyb3IY" + + "AiABKAkiXAoORGVsZXRlVG9vbENhbGwSIgoEYXJncxgBIAEoCzIULmFnZW50LnYxLkRlbGV0ZUFy" + + "Z3MSJgoGcmVzdWx0GAIgASgLMhYuYWdlbnQudjEuRGVsZXRlUmVzdWx0IjUKD0RpYWdub3N0aWNz" + + "QXJncxIMCgRwYXRoGAEgASgJEhQKDHRvb2xfY2FsbF9pZBgCIAEoCSKvAgoRRGlhZ25vc3RpY3NS" + + "ZXN1bHQSLwoHc3VjY2VzcxgBIAEoCzIcLmFnZW50LnYxLkRpYWdub3N0aWNzU3VjY2Vzc0gAEisK" + + "BWVycm9yGAIgASgLMhouYWdlbnQudjEuRGlhZ25vc3RpY3NFcnJvckgAEjEKCHJlamVjdGVkGAMg" + + "ASgLMh0uYWdlbnQudjEuRGlhZ25vc3RpY3NSZWplY3RlZEgAEjsKDmZpbGVfbm90X2ZvdW5kGAQg" + + "ASgLMiEuYWdlbnQudjEuRGlhZ25vc3RpY3NGaWxlTm90Rm91bmRIABJCChFwZXJtaXNzaW9uX2Rl" + + "bmllZBgFIAEoCzIlLmFnZW50LnYxLkRpYWdub3N0aWNzUGVybWlzc2lvbkRlbmllZEgAQggKBnJl" + + "c3VsdCJoChJEaWFnbm9zdGljc1N1Y2Nlc3MSDAoEcGF0aBgBIAEoCRIpCgtkaWFnbm9zdGljcxgC" + + "IAMoCzIULmFnZW50LnYxLkRpYWdub3N0aWMSGQoRdG90YWxfZGlhZ25vc3RpY3MYAyABKAUifwoK" + + 
"RGlhZ25vc3RpYxIQCghzZXZlcml0eRgBIAEoBRIeCgVyYW5nZRgCIAEoCzIPLmFnZW50LnYxLlJh" + + "bmdlEg8KB21lc3NhZ2UYAyABKAkSDgoGc291cmNlGAQgASgJEgwKBGNvZGUYBSABKAkSEAoIaXNf" + + "c3RhbGUYBiABKAgiLwoQRGlhZ25vc3RpY3NFcnJvchIMCgRwYXRoGAEgASgJEg0KBWVycm9yGAIg" + + "ASgJIjMKE0RpYWdub3N0aWNzUmVqZWN0ZWQSDAoEcGF0aBgBIAEoCRIOCgZyZWFzb24YAiABKAki" + + "JwoXRGlhZ25vc3RpY3NGaWxlTm90Rm91bmQSDAoEcGF0aBgBIAEoCSIrChtEaWFnbm9zdGljc1Bl" + + "cm1pc3Npb25EZW5pZWQSDAoEcGF0aBgBIAEoCSJICghFZGl0QXJncxIMCgRwYXRoGAEgASgJEhsK" + + "DnN0cmVhbV9jb250ZW50GAYgASgJSACIAQFCEQoPX3N0cmVhbV9jb250ZW50ItYCCgpFZGl0UmVz" + + "dWx0EigKB3N1Y2Nlc3MYASABKAsyFS5hZ2VudC52MS5FZGl0U3VjY2Vzc0gAEjQKDmZpbGVfbm90" + + "X2ZvdW5kGAIgASgLMhouYWdlbnQudjEuRWRpdEZpbGVOb3RGb3VuZEgAEkQKFnJlYWRfcGVybWlz" + + "c2lvbl9kZW5pZWQYAyABKAsyIi5hZ2VudC52MS5FZGl0UmVhZFBlcm1pc3Npb25EZW5pZWRIABJG" + + "Chd3cml0ZV9wZXJtaXNzaW9uX2RlbmllZBgEIAEoCzIjLmFnZW50LnYxLkVkaXRXcml0ZVBlcm1p" + + "c3Npb25EZW5pZWRIABIqCghyZWplY3RlZBgGIAEoCzIWLmFnZW50LnYxLkVkaXRSZWplY3RlZEgA" + + "EiQKBWVycm9yGAcgASgLMhMuYWdlbnQudjEuRWRpdEVycm9ySABCCAoGcmVzdWx0IqQCCgtFZGl0" + + "U3VjY2VzcxIMCgRwYXRoGAEgASgJEhgKC2xpbmVzX2FkZGVkGAMgASgFSACIAQESGgoNbGluZXNf" + + "cmVtb3ZlZBgEIAEoBUgBiAEBEhgKC2RpZmZfc3RyaW5nGAUgASgJSAKIAQESJQoYYmVmb3JlX2Z1" + + "bGxfZmlsZV9jb250ZW50GAYgASgJSAOIAQESHwoXYWZ0ZXJfZnVsbF9maWxlX2NvbnRlbnQYByAB" + + "KAkSFAoHbWVzc2FnZRgIIAEoCUgEiAEBQg4KDF9saW5lc19hZGRlZEIQCg5fbGluZXNfcmVtb3Zl" + + "ZEIOCgxfZGlmZl9zdHJpbmdCGwoZX2JlZm9yZV9mdWxsX2ZpbGVfY29udGVudEIKCghfbWVzc2Fn" + + "ZSIgChBFZGl0RmlsZU5vdEZvdW5kEgwKBHBhdGgYASABKAkiKAoYRWRpdFJlYWRQZXJtaXNzaW9u" + + "RGVuaWVkEgwKBHBhdGgYASABKAkiTQoZRWRpdFdyaXRlUGVybWlzc2lvbkRlbmllZBIMCgRwYXRo" + + "GAEgASgJEg0KBWVycm9yGAIgASgJEhMKC2lzX3JlYWRvbmx5GAMgASgIIiwKDEVkaXRSZWplY3Rl" + + "ZBIMCgRwYXRoGAEgASgJEg4KBnJlYXNvbhgCIAEoCSJiCglFZGl0RXJyb3ISDAoEcGF0aBgBIAEo" + + "CRINCgVlcnJvchgCIAEoCRIgChNtb2RlbF92aXNpYmxlX2Vycm9yGAUgASgJSACIAQFCFgoUX21v" + + "ZGVsX3Zpc2libGVfZXJyb3IiVgoMRWRpdFRvb2xDYWxsEiAKBGFyZ3MYASABKAsyEi5hZ2VudC52" + + 
"MS5FZGl0QXJncxIkCgZyZXN1bHQYAiABKAsyFC5hZ2VudC52MS5FZGl0UmVzdWx0IjEKEUVkaXRU" + + "b29sQ2FsbERlbHRhEhwKFHN0cmVhbV9jb250ZW50X2RlbHRhGAEgASgJIjEKDEV4YUZldGNoQXJn" + + "cxILCgNpZHMYASADKAkSFAoMdG9vbF9jYWxsX2lkGAIgASgJIqIBCg5FeGFGZXRjaFJlc3VsdBIs" + + "CgdzdWNjZXNzGAEgASgLMhkuYWdlbnQudjEuRXhhRmV0Y2hTdWNjZXNzSAASKAoFZXJyb3IYAiAB" + + "KAsyFy5hZ2VudC52MS5FeGFGZXRjaEVycm9ySAASLgoIcmVqZWN0ZWQYAyABKAsyGi5hZ2VudC52" + + "MS5FeGFGZXRjaFJlamVjdGVkSABCCAoGcmVzdWx0Ij4KD0V4YUZldGNoU3VjY2VzcxIrCghjb250" + + "ZW50cxgBIAMoCzIZLmFnZW50LnYxLkV4YUZldGNoQ29udGVudCIeCg1FeGFGZXRjaEVycm9yEg0K" + + "BWVycm9yGAEgASgJIiIKEEV4YUZldGNoUmVqZWN0ZWQSDgoGcmVhc29uGAEgASgJIlMKD0V4YUZl" + + "dGNoQ29udGVudBINCgV0aXRsZRgBIAEoCRILCgN1cmwYAiABKAkSDAoEdGV4dBgDIAEoCRIWCg5w" + + "dWJsaXNoZWRfZGF0ZRgEIAEoCSJiChBFeGFGZXRjaFRvb2xDYWxsEiQKBGFyZ3MYASABKAsyFi5h" + + "Z2VudC52MS5FeGFGZXRjaEFyZ3MSKAoGcmVzdWx0GAIgASgLMhguYWdlbnQudjEuRXhhRmV0Y2hS" + + "ZXN1bHQiPAoURXhhRmV0Y2hSZXF1ZXN0UXVlcnkSJAoEYXJncxgBIAEoCzIWLmFnZW50LnYxLkV4" + + "YUZldGNoQXJncyKjAQoXRXhhRmV0Y2hSZXF1ZXN0UmVzcG9uc2USPgoIYXBwcm92ZWQYASABKAsy" + + "Ki5hZ2VudC52MS5FeGFGZXRjaFJlcXVlc3RSZXNwb25zZV9BcHByb3ZlZEgAEj4KCHJlamVjdGVk" + + "GAIgASgLMiouYWdlbnQudjEuRXhhRmV0Y2hSZXF1ZXN0UmVzcG9uc2VfUmVqZWN0ZWRIAEIICgZy" + + "ZXN1bHQiIgogRXhhRmV0Y2hSZXF1ZXN0UmVzcG9uc2VfQXBwcm92ZWQiMgogRXhhRmV0Y2hSZXF1" + + "ZXN0UmVzcG9uc2VfUmVqZWN0ZWQSDgoGcmVhc29uGAEgASgJIlcKDUV4YVNlYXJjaEFyZ3MSDQoF" + + "cXVlcnkYASABKAkSDAoEdHlwZRgCIAEoCRITCgtudW1fcmVzdWx0cxgDIAEoBRIUCgx0b29sX2Nh" + + "bGxfaWQYBCABKAkipgEKD0V4YVNlYXJjaFJlc3VsdBItCgdzdWNjZXNzGAEgASgLMhouYWdlbnQu" + + "djEuRXhhU2VhcmNoU3VjY2Vzc0gAEikKBWVycm9yGAIgASgLMhguYWdlbnQudjEuRXhhU2VhcmNo" + + "RXJyb3JIABIvCghyZWplY3RlZBgDIAEoCzIbLmFnZW50LnYxLkV4YVNlYXJjaFJlamVjdGVkSABC" + + "CAoGcmVzdWx0IkQKEEV4YVNlYXJjaFN1Y2Nlc3MSMAoKcmVmZXJlbmNlcxgBIAMoCzIcLmFnZW50" + + "LnYxLkV4YVNlYXJjaFJlZmVyZW5jZSIfCg5FeGFTZWFyY2hFcnJvchINCgVlcnJvchgBIAEoCSIj" + + "ChFFeGFTZWFyY2hSZWplY3RlZBIOCgZyZWFzb24YASABKAkiVgoSRXhhU2VhcmNoUmVmZXJlbmNl" + + 
"Eg0KBXRpdGxlGAEgASgJEgsKA3VybBgCIAEoCRIMCgR0ZXh0GAMgASgJEhYKDnB1Ymxpc2hlZF9k" + + "YXRlGAQgASgJImUKEUV4YVNlYXJjaFRvb2xDYWxsEiUKBGFyZ3MYASABKAsyFy5hZ2VudC52MS5F" + + "eGFTZWFyY2hBcmdzEikKBnJlc3VsdBgCIAEoCzIZLmFnZW50LnYxLkV4YVNlYXJjaFJlc3VsdCI+" + + "ChVFeGFTZWFyY2hSZXF1ZXN0UXVlcnkSJQoEYXJncxgBIAEoCzIXLmFnZW50LnYxLkV4YVNlYXJj" + + "aEFyZ3MipgEKGEV4YVNlYXJjaFJlcXVlc3RSZXNwb25zZRI/CghhcHByb3ZlZBgBIAEoCzIrLmFn" + + "ZW50LnYxLkV4YVNlYXJjaFJlcXVlc3RSZXNwb25zZV9BcHByb3ZlZEgAEj8KCHJlamVjdGVkGAIg" + + "ASgLMisuYWdlbnQudjEuRXhhU2VhcmNoUmVxdWVzdFJlc3BvbnNlX1JlamVjdGVkSABCCAoGcmVz" + + "dWx0IiMKIUV4YVNlYXJjaFJlcXVlc3RSZXNwb25zZV9BcHByb3ZlZCIzCiFFeGFTZWFyY2hSZXF1" + + "ZXN0UmVzcG9uc2VfUmVqZWN0ZWQSDgoGcmVhc29uGAEgASgJIiMKFUV4ZWNDbGllbnRTdHJlYW1D" + + "bG9zZRIKCgJpZBgBIAEoDSJWCg9FeGVjQ2xpZW50VGhyb3cSCgoCaWQYASABKA0SDQoFZXJyb3IY" + + "AiABKAkSGAoLc3RhY2tfdHJhY2UYAyABKAlIAIgBAUIOCgxfc3RhY2tfdHJhY2UiIQoTRXhlY0Ns" + + "aWVudEhlYXJ0YmVhdBIKCgJpZBgBIAEoDSK+AQoYRXhlY0NsaWVudENvbnRyb2xNZXNzYWdlEjcK" + + "DHN0cmVhbV9jbG9zZRgBIAEoCzIfLmFnZW50LnYxLkV4ZWNDbGllbnRTdHJlYW1DbG9zZUgAEioK" + + "BXRocm93GAIgASgLMhkuYWdlbnQudjEuRXhlY0NsaWVudFRocm93SAASMgoJaGVhcnRiZWF0GAMg" + + "ASgLMh0uYWdlbnQudjEuRXhlY0NsaWVudEhlYXJ0YmVhdEgAQgkKB21lc3NhZ2UihAEKC1NwYW5D" + + "b250ZXh0EhAKCHRyYWNlX2lkGAEgASgJEg8KB3NwYW5faWQYAiABKAkSGAoLdHJhY2VfZmxhZ3MY" + + "AyABKA1IAIgBARIYCgt0cmFjZV9zdGF0ZRgEIAEoCUgBiAEBQg4KDF90cmFjZV9mbGFnc0IOCgxf" + + "dHJhY2Vfc3RhdGUiCwoJQWJvcnRBcmdzIg0KC0Fib3J0UmVzdWx0IoUIChFFeGVjU2VydmVyTWVz" + + "c2FnZRIKCgJpZBgBIAEoDRIPCgdleGVjX2lkGA8gASgJEjAKDHNwYW5fY29udGV4dBgTIAEoCzIV" + + "LmFnZW50LnYxLlNwYW5Db250ZXh0SAGIAQESKQoKc2hlbGxfYXJncxgCIAEoCzITLmFnZW50LnYx" + + "LlNoZWxsQXJnc0gAEikKCndyaXRlX2FyZ3MYAyABKAsyEy5hZ2VudC52MS5Xcml0ZUFyZ3NIABIr" + + "CgtkZWxldGVfYXJncxgEIAEoCzIULmFnZW50LnYxLkRlbGV0ZUFyZ3NIABInCglncmVwX2FyZ3MY" + + "BSABKAsyEi5hZ2VudC52MS5HcmVwQXJnc0gAEicKCXJlYWRfYXJncxgHIAEoCzISLmFnZW50LnYx" + + "LlJlYWRBcmdzSAASIwoHbHNfYXJncxgIIAEoCzIQLmFnZW50LnYxLkxzQXJnc0gAEjUKEGRpYWdu" + + 
"b3N0aWNzX2FyZ3MYCSABKAsyGS5hZ2VudC52MS5EaWFnbm9zdGljc0FyZ3NIABI8ChRyZXF1ZXN0" + + "X2NvbnRleHRfYXJncxgKIAEoCzIcLmFnZW50LnYxLlJlcXVlc3RDb250ZXh0QXJnc0gAEiUKCG1j" + + "cF9hcmdzGAsgASgLMhEuYWdlbnQudjEuTWNwQXJnc0gAEjAKEXNoZWxsX3N0cmVhbV9hcmdzGA4g" + + "ASgLMhMuYWdlbnQudjEuU2hlbGxBcmdzSAASSQobYmFja2dyb3VuZF9zaGVsbF9zcGF3bl9hcmdz" + + "GBAgASgLMiIuYWdlbnQudjEuQmFja2dyb3VuZFNoZWxsU3Bhd25BcmdzSAASSgocbGlzdF9tY3Bf" + + "cmVzb3VyY2VzX2V4ZWNfYXJncxgRIAEoCzIiLmFnZW50LnYxLkxpc3RNY3BSZXNvdXJjZXNFeGVj" + + "QXJnc0gAEkgKG3JlYWRfbWNwX3Jlc291cmNlX2V4ZWNfYXJncxgSIAEoCzIhLmFnZW50LnYxLlJl" + + "YWRNY3BSZXNvdXJjZUV4ZWNBcmdzSAASKQoKZmV0Y2hfYXJncxgUIAEoCzITLmFnZW50LnYxLkZl" + + "dGNoQXJnc0gAEjgKEnJlY29yZF9zY3JlZW5fYXJncxgVIAEoCzIaLmFnZW50LnYxLlJlY29yZFNj" + + "cmVlbkFyZ3NIABI2ChFjb21wdXRlcl91c2VfYXJncxgWIAEoCzIZLmFnZW50LnYxLkNvbXB1dGVy" + + "VXNlQXJnc0gAEj8KFndyaXRlX3NoZWxsX3N0ZGluX2FyZ3MYFyABKAsyHS5hZ2VudC52MS5Xcml0" + + "ZVNoZWxsU3RkaW5BcmdzSABCCQoHbWVzc2FnZUIPCg1fc3Bhbl9jb250ZXh0Iv8HChFFeGVjQ2xp" + + "ZW50TWVzc2FnZRIKCgJpZBgBIAEoDRIPCgdleGVjX2lkGA8gASgJEi0KDHNoZWxsX3Jlc3VsdBgC" + + "IAEoCzIVLmFnZW50LnYxLlNoZWxsUmVzdWx0SAASLQoMd3JpdGVfcmVzdWx0GAMgASgLMhUuYWdl" + + "bnQudjEuV3JpdGVSZXN1bHRIABIvCg1kZWxldGVfcmVzdWx0GAQgASgLMhYuYWdlbnQudjEuRGVs" + + "ZXRlUmVzdWx0SAASKwoLZ3JlcF9yZXN1bHQYBSABKAsyFC5hZ2VudC52MS5HcmVwUmVzdWx0SAAS" + + "KwoLcmVhZF9yZXN1bHQYByABKAsyFC5hZ2VudC52MS5SZWFkUmVzdWx0SAASJwoJbHNfcmVzdWx0" + + "GAggASgLMhIuYWdlbnQudjEuTHNSZXN1bHRIABI5ChJkaWFnbm9zdGljc19yZXN1bHQYCSABKAsy" + + "Gy5hZ2VudC52MS5EaWFnbm9zdGljc1Jlc3VsdEgAEkAKFnJlcXVlc3RfY29udGV4dF9yZXN1bHQY" + + "CiABKAsyHi5hZ2VudC52MS5SZXF1ZXN0Q29udGV4dFJlc3VsdEgAEikKCm1jcF9yZXN1bHQYCyAB" + + "KAsyEy5hZ2VudC52MS5NY3BSZXN1bHRIABItCgxzaGVsbF9zdHJlYW0YDiABKAsyFS5hZ2VudC52" + + "MS5TaGVsbFN0cmVhbUgAEk0KHWJhY2tncm91bmRfc2hlbGxfc3Bhd25fcmVzdWx0GBAgASgLMiQu" + + "YWdlbnQudjEuQmFja2dyb3VuZFNoZWxsU3Bhd25SZXN1bHRIABJOCh5saXN0X21jcF9yZXNvdXJj" + + "ZXNfZXhlY19yZXN1bHQYESABKAsyJC5hZ2VudC52MS5MaXN0TWNwUmVzb3VyY2VzRXhlY1Jlc3Vs" + + 
"dEgAEkwKHXJlYWRfbWNwX3Jlc291cmNlX2V4ZWNfcmVzdWx0GBIgASgLMiMuYWdlbnQudjEuUmVh" + + "ZE1jcFJlc291cmNlRXhlY1Jlc3VsdEgAEi0KDGZldGNoX3Jlc3VsdBgUIAEoCzIVLmFnZW50LnYx" + + "LkZldGNoUmVzdWx0SAASPAoUcmVjb3JkX3NjcmVlbl9yZXN1bHQYFSABKAsyHC5hZ2VudC52MS5S" + + "ZWNvcmRTY3JlZW5SZXN1bHRIABI6ChNjb21wdXRlcl91c2VfcmVzdWx0GBYgASgLMhsuYWdlbnQu" + + "djEuQ29tcHV0ZXJVc2VSZXN1bHRIABJDChh3cml0ZV9zaGVsbF9zdGRpbl9yZXN1bHQYFyABKAsy" + + "Hy5hZ2VudC52MS5Xcml0ZVNoZWxsU3RkaW5SZXN1bHRIAEIJCgdtZXNzYWdlIi4KCUZldGNoQXJn" + + "cxILCgN1cmwYASABKAkSFAoMdG9vbF9jYWxsX2lkGAIgASgJImkKC0ZldGNoUmVzdWx0EikKB3N1" + + "Y2Nlc3MYASABKAsyFi5hZ2VudC52MS5GZXRjaFN1Y2Nlc3NIABIlCgVlcnJvchgCIAEoCzIULmFn" + + "ZW50LnYxLkZldGNoRXJyb3JIAEIICgZyZXN1bHQiVwoMRmV0Y2hTdWNjZXNzEgsKA3VybBgBIAEo" + + "CRIPCgdjb250ZW50GAIgASgJEhMKC3N0YXR1c19jb2RlGAMgASgFEhQKDGNvbnRlbnRfdHlwZRgE" + + "IAEoCSIoCgpGZXRjaEVycm9yEgsKA3VybBgBIAEoCRINCgVlcnJvchgCIAEoCSJtChFHZW5lcmF0" + + "ZUltYWdlQXJncxITCgtkZXNjcmlwdGlvbhgBIAEoCRIWCglmaWxlX3BhdGgYAiABKAlIAIgBARId" + + "ChVyZWZlcmVuY2VfaW1hZ2VfcGF0aHMYBSADKAlCDAoKX2ZpbGVfcGF0aCKBAQoTR2VuZXJhdGVJ" + + "bWFnZVJlc3VsdBIxCgdzdWNjZXNzGAEgASgLMh4uYWdlbnQudjEuR2VuZXJhdGVJbWFnZVN1Y2Nl" + + "c3NIABItCgVlcnJvchgCIAEoCzIcLmFnZW50LnYxLkdlbmVyYXRlSW1hZ2VFcnJvckgAQggKBnJl" + + "c3VsdCI9ChRHZW5lcmF0ZUltYWdlU3VjY2VzcxIRCglmaWxlX3BhdGgYASABKAkSEgoKaW1hZ2Vf" + + "ZGF0YRgCIAEoCSIjChJHZW5lcmF0ZUltYWdlRXJyb3ISDQoFZXJyb3IYASABKAkicQoVR2VuZXJh" + + "dGVJbWFnZVRvb2xDYWxsEikKBGFyZ3MYASABKAsyGy5hZ2VudC52MS5HZW5lcmF0ZUltYWdlQXJn" + + "cxItCgZyZXN1bHQYAiABKAsyHS5hZ2VudC52MS5HZW5lcmF0ZUltYWdlUmVzdWx0IsYECghHcmVw" + + "QXJncxIPCgdwYXR0ZXJuGAEgASgJEhEKBHBhdGgYAiABKAlIAIgBARIRCgRnbG9iGAMgASgJSAGI" + + "AQESGAoLb3V0cHV0X21vZGUYBCABKAlIAogBARIbCg5jb250ZXh0X2JlZm9yZRgFIAEoBUgDiAEB" + + "EhoKDWNvbnRleHRfYWZ0ZXIYBiABKAVIBIgBARIUCgdjb250ZXh0GAcgASgFSAWIAQESHQoQY2Fz" + + "ZV9pbnNlbnNpdGl2ZRgIIAEoCEgGiAEBEhEKBHR5cGUYCSABKAlIB4gBARIXCgpoZWFkX2xpbWl0" + + "GAogASgFSAiIAQESFgoJbXVsdGlsaW5lGAsgASgISAmIAQESEQoEc29ydBgMIAEoCUgKiAEBEhsK" + + 
"DnNvcnRfYXNjZW5kaW5nGA0gASgISAuIAQESFAoMdG9vbF9jYWxsX2lkGA4gASgJEjQKDnNhbmRi" + + "b3hfcG9saWN5GA8gASgLMhcuYWdlbnQudjEuU2FuZGJveFBvbGljeUgMiAEBQgcKBV9wYXRoQgcK" + + "BV9nbG9iQg4KDF9vdXRwdXRfbW9kZUIRCg9fY29udGV4dF9iZWZvcmVCEAoOX2NvbnRleHRfYWZ0" + + "ZXJCCgoIX2NvbnRleHRCEwoRX2Nhc2VfaW5zZW5zaXRpdmVCBwoFX3R5cGVCDQoLX2hlYWRfbGlt" + + "aXRCDAoKX211bHRpbGluZUIHCgVfc29ydEIRCg9fc29ydF9hc2NlbmRpbmdCEQoPX3NhbmRib3hf" + + "cG9saWN5ImYKCkdyZXBSZXN1bHQSKAoHc3VjY2VzcxgBIAEoCzIVLmFnZW50LnYxLkdyZXBTdWNj" + + "ZXNzSAASJAoFZXJyb3IYAiABKAsyEy5hZ2VudC52MS5HcmVwRXJyb3JIAEIICgZyZXN1bHQiGgoJ" + + "R3JlcEVycm9yEg0KBWVycm9yGAEgASgJIrQCCgtHcmVwU3VjY2VzcxIPCgdwYXR0ZXJuGAEgASgJ" + + "EgwKBHBhdGgYAiABKAkSEwoLb3V0cHV0X21vZGUYAyABKAkSRgoRd29ya3NwYWNlX3Jlc3VsdHMY" + + "BCADKAsyKy5hZ2VudC52MS5HcmVwU3VjY2Vzcy5Xb3Jrc3BhY2VSZXN1bHRzRW50cnkSPAoUYWN0" + + "aXZlX2VkaXRvcl9yZXN1bHQYBSABKAsyGS5hZ2VudC52MS5HcmVwVW5pb25SZXN1bHRIAIgBARpS" + + "ChVXb3Jrc3BhY2VSZXN1bHRzRW50cnkSCwoDa2V5GAEgASgJEigKBXZhbHVlGAIgASgLMhkuYWdl" + + "bnQudjEuR3JlcFVuaW9uUmVzdWx0OgI4AUIXChVfYWN0aXZlX2VkaXRvcl9yZXN1bHQiowEKD0dy" + + "ZXBVbmlvblJlc3VsdBIqCgVjb3VudBgBIAEoCzIZLmFnZW50LnYxLkdyZXBDb3VudFJlc3VsdEgA" + + "EioKBWZpbGVzGAIgASgLMhkuYWdlbnQudjEuR3JlcEZpbGVzUmVzdWx0SAASLgoHY29udGVudBgD" + + "IAEoCzIbLmFnZW50LnYxLkdyZXBDb250ZW50UmVzdWx0SABCCAoGcmVzdWx0IpsBCg9HcmVwQ291" + + "bnRSZXN1bHQSJwoGY291bnRzGAEgAygLMhcuYWdlbnQudjEuR3JlcEZpbGVDb3VudBITCgt0b3Rh" + + "bF9maWxlcxgCIAEoBRIVCg10b3RhbF9tYXRjaGVzGAMgASgFEhgKEGNsaWVudF90cnVuY2F0ZWQY" + + "BCABKAgSGQoRcmlwZ3JlcF90cnVuY2F0ZWQYBSABKAgiLAoNR3JlcEZpbGVDb3VudBIMCgRmaWxl" + + "GAEgASgJEg0KBWNvdW50GAIgASgFImoKD0dyZXBGaWxlc1Jlc3VsdBINCgVmaWxlcxgBIAMoCRIT" + + "Cgt0b3RhbF9maWxlcxgCIAEoBRIYChBjbGllbnRfdHJ1bmNhdGVkGAMgASgIEhkKEXJpcGdyZXBf" + + "dHJ1bmNhdGVkGAQgASgIIqQBChFHcmVwQ29udGVudFJlc3VsdBIoCgdtYXRjaGVzGAEgAygLMhcu" + + "YWdlbnQudjEuR3JlcEZpbGVNYXRjaBITCgt0b3RhbF9saW5lcxgCIAEoBRIbChN0b3RhbF9tYXRj" + + "aGVkX2xpbmVzGAMgASgFEhgKEGNsaWVudF90cnVuY2F0ZWQYBCABKAgSGQoRcmlwZ3JlcF90cnVu" + + 
"Y2F0ZWQYBSABKAgiSgoNR3JlcEZpbGVNYXRjaBIMCgRmaWxlGAEgASgJEisKB21hdGNoZXMYAiAD" + + "KAsyGi5hZ2VudC52MS5HcmVwQ29udGVudE1hdGNoImwKEEdyZXBDb250ZW50TWF0Y2gSEwoLbGlu" + + "ZV9udW1iZXIYASABKAUSDwoHY29udGVudBgCIAEoCRIZChFjb250ZW50X3RydW5jYXRlZBgDIAEo" + + "CBIXCg9pc19jb250ZXh0X2xpbmUYBCABKAgiHQoKR3JlcFN0cmVhbRIPCgdwYXR0ZXJuGAEgASgJ" + + "IlYKDEdyZXBUb29sQ2FsbBIgCgRhcmdzGAEgASgLMhIuYWdlbnQudjEuR3JlcEFyZ3MSJAoGcmVz" + + "dWx0GAIgASgLMhQuYWdlbnQudjEuR3JlcFJlc3VsdCIeCgtHZXRCbG9iQXJncxIPCgdibG9iX2lk" + + "GAEgASgMIjUKDUdldEJsb2JSZXN1bHQSFgoJYmxvYl9kYXRhGAEgASgMSACIAQFCDAoKX2Jsb2Jf" + + "ZGF0YSIxCgtTZXRCbG9iQXJncxIPCgdibG9iX2lkGAEgASgMEhEKCWJsb2JfZGF0YRgCIAEoDCI+" + + "Cg1TZXRCbG9iUmVzdWx0EiMKBWVycm9yGAEgASgLMg8uYWdlbnQudjEuRXJyb3JIAIgBAUIICgZf" + + "ZXJyb3IiywEKD0t2U2VydmVyTWVzc2FnZRIKCgJpZBgBIAEoDRIwCgxzcGFuX2NvbnRleHQYBCAB" + + "KAsyFS5hZ2VudC52MS5TcGFuQ29udGV4dEgBiAEBEi4KDWdldF9ibG9iX2FyZ3MYAiABKAsyFS5h" + + "Z2VudC52MS5HZXRCbG9iQXJnc0gAEi4KDXNldF9ibG9iX2FyZ3MYAyABKAsyFS5hZ2VudC52MS5T" + + "ZXRCbG9iQXJnc0gAQgkKB21lc3NhZ2VCDwoNX3NwYW5fY29udGV4dCKQAQoPS3ZDbGllbnRNZXNz" + + "YWdlEgoKAmlkGAEgASgNEjIKD2dldF9ibG9iX3Jlc3VsdBgCIAEoCzIXLmFnZW50LnYxLkdldEJs" + + "b2JSZXN1bHRIABIyCg9zZXRfYmxvYl9yZXN1bHQYAyABKAsyFy5hZ2VudC52MS5TZXRCbG9iUmVz" + + "dWx0SABCCQoHbWVzc2FnZSKtAQoGTHNBcmdzEgwKBHBhdGgYASABKAkSDgoGaWdub3JlGAIgAygJ" + + "EhQKDHRvb2xfY2FsbF9pZBgDIAEoCRI0Cg5zYW5kYm94X3BvbGljeRgEIAEoCzIXLmFnZW50LnYx" + + "LlNhbmRib3hQb2xpY3lIAIgBARIXCgp0aW1lb3V0X21zGAUgASgNSAGIAQFCEQoPX3NhbmRib3hf" + + "cG9saWN5Qg0KC190aW1lb3V0X21zIrIBCghMc1Jlc3VsdBImCgdzdWNjZXNzGAEgASgLMhMuYWdl" + + "bnQudjEuTHNTdWNjZXNzSAASIgoFZXJyb3IYAiABKAsyES5hZ2VudC52MS5Mc0Vycm9ySAASKAoI" + + "cmVqZWN0ZWQYAyABKAsyFC5hZ2VudC52MS5Mc1JlamVjdGVkSAASJgoHdGltZW91dBgEIAEoCzIT" + + "LmFnZW50LnYxLkxzVGltZW91dEgAQggKBnJlc3VsdCJHCglMc1N1Y2Nlc3MSOgoTZGlyZWN0b3J5" + + "X3RyZWVfcm9vdBgBIAEoCzIdLmFnZW50LnYxLkxzRGlyZWN0b3J5VHJlZU5vZGUi9gIKE0xzRGly" + + "ZWN0b3J5VHJlZU5vZGUSEAoIYWJzX3BhdGgYASABKAkSNAoNY2hpbGRyZW5fZGlycxgCIAMoCzId" + + 
"LmFnZW50LnYxLkxzRGlyZWN0b3J5VHJlZU5vZGUSOgoOY2hpbGRyZW5fZmlsZXMYAyADKAsyIi5h" + + "Z2VudC52MS5Mc0RpcmVjdG9yeVRyZWVOb2RlX0ZpbGUSHwoXY2hpbGRyZW5fd2VyZV9wcm9jZXNz" + + "ZWQYBCABKAgSZAodZnVsbF9zdWJ0cmVlX2V4dGVuc2lvbl9jb3VudHMYBSADKAsyPS5hZ2VudC52" + + "MS5Mc0RpcmVjdG9yeVRyZWVOb2RlLkZ1bGxTdWJ0cmVlRXh0ZW5zaW9uQ291bnRzRW50cnkSEQoJ" + + "bnVtX2ZpbGVzGAYgASgFGkEKH0Z1bGxTdWJ0cmVlRXh0ZW5zaW9uQ291bnRzRW50cnkSCwoDa2V5" + + "GAEgASgJEg0KBXZhbHVlGAIgASgFOgI4ASJ6ChhMc0RpcmVjdG9yeVRyZWVOb2RlX0ZpbGUSDAoE" + + "bmFtZRgBIAEoCRI6ChF0ZXJtaW5hbF9tZXRhZGF0YRgCIAEoCzIaLmFnZW50LnYxLlRlcm1pbmFs" + + "TWV0YWRhdGFIAIgBAUIUChJfdGVybWluYWxfbWV0YWRhdGEiJgoHTHNFcnJvchIMCgRwYXRoGAEg" + + "ASgJEg0KBWVycm9yGAIgASgJIioKCkxzUmVqZWN0ZWQSDAoEcGF0aBgBIAEoCRIOCgZyZWFzb24Y" + + "AiABKAkiRwoJTHNUaW1lb3V0EjoKE2RpcmVjdG9yeV90cmVlX3Jvb3QYASABKAsyHS5hZ2VudC52" + + "MS5Mc0RpcmVjdG9yeVRyZWVOb2RlIvEBChBUZXJtaW5hbE1ldGFkYXRhEhAKA2N3ZBgBIAEoCUgA" + + "iAEBEjkKDWxhc3RfY29tbWFuZHMYAiADKAsyIi5hZ2VudC52MS5UZXJtaW5hbE1ldGFkYXRhX0Nv" + + "bW1hbmQSHQoQbGFzdF9tb2RpZmllZF9tcxgDIAEoA0gBiAEBEkAKD2N1cnJlbnRfY29tbWFuZBgE" + + "IAEoCzIiLmFnZW50LnYxLlRlcm1pbmFsTWV0YWRhdGFfQ29tbWFuZEgCiAEBQgYKBF9jd2RCEwoR" + + "X2xhc3RfbW9kaWZpZWRfbXNCEgoQX2N1cnJlbnRfY29tbWFuZCKnAQoYVGVybWluYWxNZXRhZGF0" + + "YV9Db21tYW5kEg8KB2NvbW1hbmQYASABKAkSFgoJZXhpdF9jb2RlGAIgASgFSACIAQESGQoMdGlt" + + "ZXN0YW1wX21zGAMgASgDSAGIAQESGAoLZHVyYXRpb25fbXMYBCABKANIAogBAUIMCgpfZXhpdF9j" + + "b2RlQg8KDV90aW1lc3RhbXBfbXNCDgoMX2R1cmF0aW9uX21zIlAKCkxzVG9vbENhbGwSHgoEYXJn" + + "cxgBIAEoCzIQLmFnZW50LnYxLkxzQXJncxIiCgZyZXN1bHQYAiABKAsyEi5hZ2VudC52MS5Mc1Jl" + + "c3VsdCK1AQoHTWNwQXJncxIMCgRuYW1lGAEgASgJEikKBGFyZ3MYAiADKAsyGy5hZ2VudC52MS5N" + + "Y3BBcmdzLkFyZ3NFbnRyeRIUCgx0b29sX2NhbGxfaWQYAyABKAkSGwoTcHJvdmlkZXJfaWRlbnRp" + + "ZmllchgEIAEoCRIRCgl0b29sX25hbWUYBSABKAkaKwoJQXJnc0VudHJ5EgsKA2tleRgBIAEoCRIN" + + "CgV2YWx1ZRgCIAEoDDoCOAEi/wEKCU1jcFJlc3VsdBInCgdzdWNjZXNzGAEgASgLMhQuYWdlbnQu" + + "djEuTWNwU3VjY2Vzc0gAEiMKBWVycm9yGAIgASgLMhIuYWdlbnQudjEuTWNwRXJyb3JIABIpCghy" + + 
"ZWplY3RlZBgDIAEoCzIVLmFnZW50LnYxLk1jcFJlamVjdGVkSAASOgoRcGVybWlzc2lvbl9kZW5p" + + "ZWQYBCABKAsyHS5hZ2VudC52MS5NY3BQZXJtaXNzaW9uRGVuaWVkSAASMwoOdG9vbF9ub3RfZm91" + + "bmQYBSABKAsyGS5hZ2VudC52MS5NY3BUb29sTm90Rm91bmRIAEIICgZyZXN1bHQiOAoPTWNwVG9v" + + "bE5vdEZvdW5kEgwKBG5hbWUYASABKAkSFwoPYXZhaWxhYmxlX3Rvb2xzGAIgAygJImoKDk1jcFRl" + + "eHRDb250ZW50EgwKBHRleHQYASABKAkSNgoPb3V0cHV0X2xvY2F0aW9uGAIgASgLMhguYWdlbnQu" + + "djEuT3V0cHV0TG9jYXRpb25IAIgBAUISChBfb3V0cHV0X2xvY2F0aW9uIjIKD01jcEltYWdlQ29u" + + "dGVudBIMCgRkYXRhGAEgASgMEhEKCW1pbWVfdHlwZRgCIAEoCSJ7ChhNY3BUb29sUmVzdWx0Q29u" + + "dGVudEl0ZW0SKAoEdGV4dBgBIAEoCzIYLmFnZW50LnYxLk1jcFRleHRDb250ZW50SAASKgoFaW1h" + + "Z2UYAiABKAsyGS5hZ2VudC52MS5NY3BJbWFnZUNvbnRlbnRIAEIJCgdjb250ZW50IlMKCk1jcFN1" + + "Y2Nlc3MSMwoHY29udGVudBgBIAMoCzIiLmFnZW50LnYxLk1jcFRvb2xSZXN1bHRDb250ZW50SXRl" + + "bRIQCghpc19lcnJvchgCIAEoCCIZCghNY3BFcnJvchINCgVlcnJvchgBIAEoCSIyCgtNY3BSZWpl" + + "Y3RlZBIOCgZyZWFzb24YASABKAkSEwoLaXNfcmVhZG9ubHkYAiABKAgiOQoTTWNwUGVybWlzc2lv" + + "bkRlbmllZBINCgVlcnJvchgBIAEoCRITCgtpc19yZWFkb25seRgCIAEoCCI6ChhMaXN0TWNwUmVz" + + "b3VyY2VzRXhlY0FyZ3MSEwoGc2VydmVyGAEgASgJSACIAQFCCQoHX3NlcnZlciLGAQoaTGlzdE1j" + + "cFJlc291cmNlc0V4ZWNSZXN1bHQSNAoHc3VjY2VzcxgBIAEoCzIhLmFnZW50LnYxLkxpc3RNY3BS" + + "ZXNvdXJjZXNTdWNjZXNzSAASMAoFZXJyb3IYAiABKAsyHy5hZ2VudC52MS5MaXN0TWNwUmVzb3Vy" + + "Y2VzRXJyb3JIABI2CghyZWplY3RlZBgDIAEoCzIiLmFnZW50LnYxLkxpc3RNY3BSZXNvdXJjZXNS" + + "ZWplY3RlZEgAQggKBnJlc3VsdCK9AgomTGlzdE1jcFJlc291cmNlc0V4ZWNSZXN1bHRfTWNwUmVz" + + "b3VyY2USCwoDdXJpGAEgASgJEhEKBG5hbWUYAiABKAlIAIgBARIYCgtkZXNjcmlwdGlvbhgDIAEo" + + "CUgBiAEBEhYKCW1pbWVfdHlwZRgEIAEoCUgCiAEBEg4KBnNlcnZlchgFIAEoCRJWCgthbm5vdGF0" + + "aW9ucxgGIAMoCzJBLmFnZW50LnYxLkxpc3RNY3BSZXNvdXJjZXNFeGVjUmVzdWx0X01jcFJlc291" + + "cmNlLkFubm90YXRpb25zRW50cnkaMgoQQW5ub3RhdGlvbnNFbnRyeRILCgNrZXkYASABKAkSDQoF" + + "dmFsdWUYAiABKAk6AjgBQgcKBV9uYW1lQg4KDF9kZXNjcmlwdGlvbkIMCgpfbWltZV90eXBlIl4K" + + "F0xpc3RNY3BSZXNvdXJjZXNTdWNjZXNzEkMKCXJlc291cmNlcxgBIAMoCzIwLmFnZW50LnYxLkxp" + + 
"c3RNY3BSZXNvdXJjZXNFeGVjUmVzdWx0X01jcFJlc291cmNlIiYKFUxpc3RNY3BSZXNvdXJjZXNF" + + "cnJvchINCgVlcnJvchgBIAEoCSIqChhMaXN0TWNwUmVzb3VyY2VzUmVqZWN0ZWQSDgoGcmVhc29u" + + "GAEgASgJImQKF1JlYWRNY3BSZXNvdXJjZUV4ZWNBcmdzEg4KBnNlcnZlchgBIAEoCRILCgN1cmkY" + + "AiABKAkSGgoNZG93bmxvYWRfcGF0aBgDIAEoCUgAiAEBQhAKDl9kb3dubG9hZF9wYXRoIvoBChlS" + + "ZWFkTWNwUmVzb3VyY2VFeGVjUmVzdWx0EjMKB3N1Y2Nlc3MYASABKAsyIC5hZ2VudC52MS5SZWFk" + + "TWNwUmVzb3VyY2VTdWNjZXNzSAASLwoFZXJyb3IYAiABKAsyHi5hZ2VudC52MS5SZWFkTWNwUmVz" + + "b3VyY2VFcnJvckgAEjUKCHJlamVjdGVkGAMgASgLMiEuYWdlbnQudjEuUmVhZE1jcFJlc291cmNl" + + "UmVqZWN0ZWRIABI2Cglub3RfZm91bmQYBCABKAsyIS5hZ2VudC52MS5SZWFkTWNwUmVzb3VyY2VO" + + "b3RGb3VuZEgAQggKBnJlc3VsdCLmAgoWUmVhZE1jcFJlc291cmNlU3VjY2VzcxILCgN1cmkYASAB" + + "KAkSEQoEbmFtZRgCIAEoCUgBiAEBEhgKC2Rlc2NyaXB0aW9uGAMgASgJSAKIAQESFgoJbWltZV90" + + "eXBlGAQgASgJSAOIAQESRgoLYW5ub3RhdGlvbnMYByADKAsyMS5hZ2VudC52MS5SZWFkTWNwUmVz" + + "b3VyY2VTdWNjZXNzLkFubm90YXRpb25zRW50cnkSGgoNZG93bmxvYWRfcGF0aBgIIAEoCUgEiAEB" + + "Eg4KBHRleHQYBSABKAlIABIOCgRibG9iGAYgASgMSAAaMgoQQW5ub3RhdGlvbnNFbnRyeRILCgNr" + + "ZXkYASABKAkSDQoFdmFsdWUYAiABKAk6AjgBQgkKB2NvbnRlbnRCBwoFX25hbWVCDgoMX2Rlc2Ny" + + "aXB0aW9uQgwKCl9taW1lX3R5cGVCEAoOX2Rvd25sb2FkX3BhdGgiMgoUUmVhZE1jcFJlc291cmNl" + + "RXJyb3ISCwoDdXJpGAEgASgJEg0KBWVycm9yGAIgASgJIjYKF1JlYWRNY3BSZXNvdXJjZVJlamVj" + + "dGVkEgsKA3VyaRgBIAEoCRIOCgZyZWFzb24YAiABKAkiJgoXUmVhZE1jcFJlc291cmNlTm90Rm91" + + "bmQSCwoDdXJpGAEgASgJInwKEU1jcFRvb2xEZWZpbml0aW9uEgwKBG5hbWUYASABKAkSGwoTcHJv" + + "dmlkZXJfaWRlbnRpZmllchgEIAEoCRIRCgl0b29sX25hbWUYBSABKAkSEwoLZGVzY3JpcHRpb24Y" + + "AiABKAkSFAoMaW5wdXRfc2NoZW1hGAMgASgMIjoKCE1jcFRvb2xzEi4KCW1jcF90b29scxgBIAMo" + + "CzIbLmFnZW50LnYxLk1jcFRvb2xEZWZpbml0aW9uIjwKD01jcEluc3RydWN0aW9ucxITCgtzZXJ2" + + "ZXJfbmFtZRgBIAEoCRIUCgxpbnN0cnVjdGlvbnMYAiABKAki1wEKDU1jcERlc2NyaXB0b3ISEwoL" + + "c2VydmVyX25hbWUYASABKAkSGQoRc2VydmVyX2lkZW50aWZpZXIYAiABKAkSGAoLZm9sZGVyX3Bh" + + "dGgYAyABKAlIAIgBARIkChdzZXJ2ZXJfdXNlX2luc3RydWN0aW9ucxgEIAEoCUgBiAEBEioKBXRv" + + 
"b2xzGAUgAygLMhsuYWdlbnQudjEuTWNwVG9vbERlc2NyaXB0b3JCDgoMX2ZvbGRlcl9wYXRoQhoK" + + "GF9zZXJ2ZXJfdXNlX2luc3RydWN0aW9ucyJYChFNY3BUb29sRGVzY3JpcHRvchIRCgl0b29sX25h" + + "bWUYASABKAkSHAoPZGVmaW5pdGlvbl9wYXRoGAIgASgJSACIAQFCEgoQX2RlZmluaXRpb25fcGF0" + + "aCJ4ChRNY3BGaWxlU3lzdGVtT3B0aW9ucxIPCgdlbmFibGVkGAEgASgIEh0KFXdvcmtzcGFjZV9w" + + "cm9qZWN0X2RpchgCIAEoCRIwCg9tY3BfZGVzY3JpcHRvcnMYAyADKAsyFy5hZ2VudC52MS5NY3BE" + + "ZXNjcmlwdG9yIi4KCFJlYWRBcmdzEgwKBHBhdGgYASABKAkSFAoMdG9vbF9jYWxsX2lkGAIgASgJ" + + "IrgCCgpSZWFkUmVzdWx0EigKB3N1Y2Nlc3MYASABKAsyFS5hZ2VudC52MS5SZWFkU3VjY2Vzc0gA" + + "EiQKBWVycm9yGAIgASgLMhMuYWdlbnQudjEuUmVhZEVycm9ySAASKgoIcmVqZWN0ZWQYAyABKAsy" + + "Fi5hZ2VudC52MS5SZWFkUmVqZWN0ZWRIABI0Cg5maWxlX25vdF9mb3VuZBgEIAEoCzIaLmFnZW50" + + "LnYxLlJlYWRGaWxlTm90Rm91bmRIABI7ChFwZXJtaXNzaW9uX2RlbmllZBgFIAEoCzIeLmFnZW50" + + "LnYxLlJlYWRQZXJtaXNzaW9uRGVuaWVkSAASMQoMaW52YWxpZF9maWxlGAYgASgLMhkuYWdlbnQu" + + "djEuUmVhZEludmFsaWRGaWxlSABCCAoGcmVzdWx0IrMBCgtSZWFkU3VjY2VzcxIMCgRwYXRoGAEg" + + "ASgJEhMKC3RvdGFsX2xpbmVzGAMgASgFEhEKCWZpbGVfc2l6ZRgEIAEoAxIRCgl0cnVuY2F0ZWQY" + + "BiABKAgSGwoOb3V0cHV0X2Jsb2JfaWQYByABKAxIAYgBARIRCgdjb250ZW50GAIgASgJSAASDgoE" + + "ZGF0YRgFIAEoDEgAQggKBm91dHB1dEIRCg9fb3V0cHV0X2Jsb2JfaWQiKAoJUmVhZEVycm9yEgwK" + + "BHBhdGgYASABKAkSDQoFZXJyb3IYAiABKAkiLAoMUmVhZFJlamVjdGVkEgwKBHBhdGgYASABKAkS" + + "DgoGcmVhc29uGAIgASgJIiAKEFJlYWRGaWxlTm90Rm91bmQSDAoEcGF0aBgBIAEoCSIkChRSZWFk" + + "UGVybWlzc2lvbkRlbmllZBIMCgRwYXRoGAEgASgJIi8KD1JlYWRJbnZhbGlkRmlsZRIMCgRwYXRo" + + "GAEgASgJEg4KBnJlYXNvbhgCIAEoCSJeCgxSZWFkVG9vbENhbGwSJAoEYXJncxgBIAEoCzIWLmFn" + + "ZW50LnYxLlJlYWRUb29sQXJncxIoCgZyZXN1bHQYAiABKAsyGC5hZ2VudC52MS5SZWFkVG9vbFJl" + + "c3VsdCJaCgxSZWFkVG9vbEFyZ3MSDAoEcGF0aBgBIAEoCRITCgZvZmZzZXQYAiABKAVIAIgBARIS" + + "CgVsaW1pdBgDIAEoBUgBiAEBQgkKB19vZmZzZXRCCAoGX2xpbWl0InIKDlJlYWRUb29sUmVzdWx0" + + "EiwKB3N1Y2Nlc3MYASABKAsyGS5hZ2VudC52MS5SZWFkVG9vbFN1Y2Nlc3NIABIoCgVlcnJvchgC" + + "IAEoCzIXLmFnZW50LnYxLlJlYWRUb29sRXJyb3JIAEIICgZyZXN1bHQiMQoJUmVhZFJhbmdlEhIK" + + 
"CnN0YXJ0X2xpbmUYASABKA0SEAoIZW5kX2xpbmUYAiABKA0ijgIKD1JlYWRUb29sU3VjY2VzcxIQ" + + "Cghpc19lbXB0eRgCIAEoCBIWCg5leGNlZWRlZF9saW1pdBgDIAEoCBITCgt0b3RhbF9saW5lcxgE" + + "IAEoDRIRCglmaWxlX3NpemUYBSABKA0SDAoEcGF0aBgHIAEoCRIsCgpyZWFkX3JhbmdlGAggASgL" + + "MhMuYWdlbnQudjEuUmVhZFJhbmdlSAGIAQESEQoHY29udGVudBgBIAEoCUgAEg4KBGRhdGEYBiAB" + + "KAxIABIWCgxkYXRhX2Jsb2JfaWQYCSABKAxIABIZCg9jb250ZW50X2Jsb2JfaWQYCiABKAxIAEII" + + "CgZvdXRwdXRCDQoLX3JlYWRfcmFuZ2UiJgoNUmVhZFRvb2xFcnJvchIVCg1lcnJvcl9tZXNzYWdl" + + "GAEgASgJImoKEFJlY29yZFNjcmVlbkFyZ3MSDAoEbW9kZRgBIAEoBRIUCgx0b29sX2NhbGxfaWQY" + + "AiABKAkSHQoQc2F2ZV9hc19maWxlbmFtZRgDIAEoCUgAiAEBQhMKEV9zYXZlX2FzX2ZpbGVuYW1l" + + "IokCChJSZWNvcmRTY3JlZW5SZXN1bHQSOwoNc3RhcnRfc3VjY2VzcxgBIAEoCzIiLmFnZW50LnYx" + + "LlJlY29yZFNjcmVlblN0YXJ0U3VjY2Vzc0gAEjkKDHNhdmVfc3VjY2VzcxgCIAEoCzIhLmFnZW50" + + "LnYxLlJlY29yZFNjcmVlblNhdmVTdWNjZXNzSAASPwoPZGlzY2FyZF9zdWNjZXNzGAMgASgLMiQu" + + "YWdlbnQudjEuUmVjb3JkU2NyZWVuRGlzY2FyZFN1Y2Nlc3NIABIwCgdmYWlsdXJlGAQgASgLMh0u" + + "YWdlbnQudjEuUmVjb3JkU2NyZWVuRmFpbHVyZUgAQggKBnJlc3VsdCJnChhSZWNvcmRTY3JlZW5T" + + "dGFydFN1Y2Nlc3MSJQodd2FzX3ByaW9yX3JlY29yZGluZ19jYW5jZWxsZWQYASABKAgSJAocd2Fz" + + "X3NhdmVfYXNfZmlsZW5hbWVfaWdub3JlZBgCIAEoCCKgAQoXUmVjb3JkU2NyZWVuU2F2ZVN1Y2Nl" + + "c3MSDAoEcGF0aBgBIAEoCRIdChVyZWNvcmRpbmdfZHVyYXRpb25fbXMYAiABKAMSMAojcmVxdWVz" + + "dGVkX2ZpbGVfcGF0aF9yZWplY3RlZF9yZWFzb24YAyABKAVIAIgBAUImCiRfcmVxdWVzdGVkX2Zp" + + "bGVfcGF0aF9yZWplY3RlZF9yZWFzb24iHAoaUmVjb3JkU2NyZWVuRGlzY2FyZFN1Y2Nlc3MiJAoT" + + "UmVjb3JkU2NyZWVuRmFpbHVyZRINCgVlcnJvchgBIAEoCSI2ChNDdXJzb3JQYWNrYWdlUHJvbXB0" + + "EgwKBG5hbWUYASABKAkSEQoJZmlsZV9wYXRoGAIgASgJIuIBCg1DdXJzb3JQYWNrYWdlEgwKBG5h" + + "bWUYASABKAkSEwoLZGVzY3JpcHRpb24YAiABKAkSEwoLZm9sZGVyX3BhdGgYAyABKAkSDwoHZW5h" + + "YmxlZBgEIAEoCBIYCgtwYXJzZV9lcnJvchgFIAEoCUgAiAEBEi4KB3Byb21wdHMYBiADKAsyHS5h" + + "Z2VudC52MS5DdXJzb3JQYWNrYWdlUHJvbXB0EhgKEHJlYWRtZV9maWxlX3BhdGgYByABKAkSFAoM" + + "cGFja2FnZV90eXBlGAggASgFQg4KDF9wYXJzZV9lcnJvciKrAgoWUmVwb3NpdG9yeUluZGV4aW5n" + + 
"SW5mbxIfChdyZWxhdGl2ZV93b3Jrc3BhY2VfcGF0aBgBIAEoCRITCgtyZW1vdGVfdXJscxgCIAMo" + + "CRIUCgxyZW1vdGVfbmFtZXMYAyADKAkSEQoJcmVwb19uYW1lGAQgASgJEhIKCnJlcG9fb3duZXIY" + + "BSABKAkSEgoKaXNfdHJhY2tlZBgGIAEoCBIQCghpc19sb2NhbBgHIAEoCBImChlvcnRob2dvbmFs" + + "X3RyYW5zZm9ybV9zZWVkGAggASgBSACIAQESFQoNd29ya3NwYWNlX3VyaRgJIAEoCRIbChNwYXRo" + + "X2VuY3J5cHRpb25fa2V5GAogASgJQhwKGl9vcnRob2dvbmFsX3RyYW5zZm9ybV9zZWVkInQKElJl" + + "cXVlc3RDb250ZXh0QXJncxIdChBub3Rlc19zZXNzaW9uX2lkGAIgASgJSACIAQESGQoMd29ya3Nw" + + "YWNlX2lkGAMgASgJSAGIAQFCEwoRX25vdGVzX3Nlc3Npb25faWRCDwoNX3dvcmtzcGFjZV9pZCK6" + + "AQoUUmVxdWVzdENvbnRleHRSZXN1bHQSMgoHc3VjY2VzcxgBIAEoCzIfLmFnZW50LnYxLlJlcXVl" + + "c3RDb250ZXh0U3VjY2Vzc0gAEi4KBWVycm9yGAIgASgLMh0uYWdlbnQudjEuUmVxdWVzdENvbnRl" + + "eHRFcnJvckgAEjQKCHJlamVjdGVkGAMgASgLMiAuYWdlbnQudjEuUmVxdWVzdENvbnRleHRSZWpl" + + "Y3RlZEgAQggKBnJlc3VsdCJKChVSZXF1ZXN0Q29udGV4dFN1Y2Nlc3MSMQoPcmVxdWVzdF9jb250" + + "ZXh0GAEgASgLMhguYWdlbnQudjEuUmVxdWVzdENvbnRleHQiJAoTUmVxdWVzdENvbnRleHRFcnJv" + + "chINCgVlcnJvchgBIAEoCSIoChZSZXF1ZXN0Q29udGV4dFJlamVjdGVkEg4KBnJlYXNvbhgBIAEo" + + "CSLCAQoKSW1hZ2VQcm90bxIMCgRkYXRhGAEgASgMEgwKBHV1aWQYAiABKAkSDAoEcGF0aBgDIAEo" + + "CRIxCglkaW1lbnNpb24YBCABKAsyHi5hZ2VudC52MS5JbWFnZVByb3RvX0RpbWVuc2lvbhImChl0" + + "YXNrX3NwZWNpZmljX2Rlc2NyaXB0aW9uGAYgASgJSACIAQESEQoJbWltZV90eXBlGAcgASgJQhwK" + + "Gl90YXNrX3NwZWNpZmljX2Rlc2NyaXB0aW9uIjUKFEltYWdlUHJvdG9fRGltZW5zaW9uEg0KBXdp" + + "ZHRoGAEgASgFEg4KBmhlaWdodBgCIAEoBSJoCgtHaXRSZXBvSW5mbxIMCgRwYXRoGAEgASgJEg4K" + + "BnN0YXR1cxgCIAEoCRITCgticmFuY2hfbmFtZRgDIAEoCRIXCgpyZW1vdGVfdXJsGAQgASgJSACI" + + "AQFCDQoLX3JlbW90ZV91cmwimwIKEVJlcXVlc3RDb250ZXh0RW52EhIKCm9zX3ZlcnNpb24YASAB" + + "KAkSFwoPd29ya3NwYWNlX3BhdGhzGAIgAygJEg0KBXNoZWxsGAMgASgJEhcKD3NhbmRib3hfZW5h" + + "YmxlZBgFIAEoCBIYChB0ZXJtaW5hbHNfZm9sZGVyGAcgASgJEiEKGWFnZW50X3NoYXJlZF9ub3Rl" + + "c19mb2xkZXIYCCABKAkSJwofYWdlbnRfY29udmVyc2F0aW9uX25vdGVzX2ZvbGRlchgJIAEoCRIR" + + "Cgl0aW1lX3pvbmUYCiABKAkSFgoOcHJvamVjdF9mb2xkZXIYCyABKAkSIAoYYWdlbnRfdHJhbnNj" + + 
"cmlwdHNfZm9sZGVyGAwgASgJIjwKD0RlYnVnTW9kZUNvbmZpZxIQCghsb2dfcGF0aBgBIAEoCRIX" + + "Cg9zZXJ2ZXJfZW5kcG9pbnQYAiABKAkitAEKD1NraWxsRGVzY3JpcHRvchIMCgRuYW1lGAEgASgJ" + + "EhMKC2Rlc2NyaXB0aW9uGAIgASgJEhMKC2ZvbGRlcl9wYXRoGAMgASgJEg8KB2VuYWJsZWQYBCAB" + + "KAgSGAoLcGFyc2VfZXJyb3IYBSABKAlIAIgBARIYChByZWFkbWVfZmlsZV9wYXRoGAYgASgJEhQK" + + "DHBhY2thZ2VfdHlwZRgHIAEoBUIOCgxfcGFyc2VfZXJyb3IiRAoMU2tpbGxPcHRpb25zEjQKEXNr" + + "aWxsX2Rlc2NyaXB0b3JzGAEgAygLMhkuYWdlbnQudjEuU2tpbGxEZXNjcmlwdG9yIvYICg5SZXF1" + + "ZXN0Q29udGV4dBIjCgVydWxlcxgCIAMoCzIULmFnZW50LnYxLkN1cnNvclJ1bGUSKAoDZW52GAQg" + + "ASgLMhsuYWdlbnQudjEuUmVxdWVzdENvbnRleHRFbnYSOQoPcmVwb3NpdG9yeV9pbmZvGAYgAygL" + + "MiAuYWdlbnQudjEuUmVwb3NpdG9yeUluZGV4aW5nSW5mbxIqCgV0b29scxgHIAMoCzIbLmFnZW50" + + "LnYxLk1jcFRvb2xEZWZpbml0aW9uEicKGmNvbnZlcnNhdGlvbl9ub3Rlc19saXN0aW5nGAggASgJ" + + "SACIAQESIQoUc2hhcmVkX25vdGVzX2xpc3RpbmcYCSABKAlIAYgBARIoCglnaXRfcmVwb3MYCyAD" + + "KAsyFS5hZ2VudC52MS5HaXRSZXBvSW5mbxI2Cg9wcm9qZWN0X2xheW91dHMYDSADKAsyHS5hZ2Vu" + + "dC52MS5Mc0RpcmVjdG9yeVRyZWVOb2RlEjMKEG1jcF9pbnN0cnVjdGlvbnMYDiADKAsyGS5hZ2Vu" + + "dC52MS5NY3BJbnN0cnVjdGlvbnMSOQoRZGVidWdfbW9kZV9jb25maWcYDyABKAsyGS5hZ2VudC52" + + "MS5EZWJ1Z01vZGVDb25maWdIAogBARIXCgpjbG91ZF9ydWxlGBAgASgJSAOIAQESHwoSd2ViX3Nl" + + "YXJjaF9lbmFibGVkGBEgASgISASIAQESMgoNc2tpbGxfb3B0aW9ucxgSIAEoCzIWLmFnZW50LnYx" + + "LlNraWxsT3B0aW9uc0gFiAEBEi4KIXJlcG9zaXRvcnlfaW5mb19zaG91bGRfcXVlcnlfcHJvZBgT" + + "IAEoCEgGiAEBEkEKDWZpbGVfY29udGVudHMYFCADKAsyKi5hZ2VudC52MS5SZXF1ZXN0Q29udGV4" + + "dC5GaWxlQ29udGVudHNFbnRyeRIgChN1c2VyX2ludGVudF9zdW1tYXJ5GBUgASgJSAeIAQESMgoQ" + + "Y3VzdG9tX3N1YmFnZW50cxgWIAMoCzIYLmFnZW50LnYxLkN1c3RvbVN1YmFnZW50EkQKF21jcF9m" + + "aWxlX3N5c3RlbV9vcHRpb25zGBcgASgLMh4uYWdlbnQudjEuTWNwRmlsZVN5c3RlbU9wdGlvbnNI" + + "CIgBARozChFGaWxlQ29udGVudHNFbnRyeRILCgNrZXkYASABKAkSDQoFdmFsdWUYAiABKAk6AjgB" + + "Qh0KG19jb252ZXJzYXRpb25fbm90ZXNfbGlzdGluZ0IXChVfc2hhcmVkX25vdGVzX2xpc3RpbmdC" + + "FAoSX2RlYnVnX21vZGVfY29uZmlnQg0KC19jbG91ZF9ydWxlQhUKE193ZWJfc2VhcmNoX2VuYWJs" + + 
"ZWRCEAoOX3NraWxsX29wdGlvbnNCJAoiX3JlcG9zaXRvcnlfaW5mb19zaG91bGRfcXVlcnlfcHJv" + + "ZEIWChRfdXNlcl9pbnRlbnRfc3VtbWFyeUIaChhfbWNwX2ZpbGVfc3lzdGVtX29wdGlvbnMisgIK" + + "DVNhbmRib3hQb2xpY3kSDAoEdHlwZRgBIAEoBRIbCg5uZXR3b3JrX2FjY2VzcxgCIAEoCEgAiAEB" + + "EiIKGmFkZGl0aW9uYWxfcmVhZHdyaXRlX3BhdGhzGAMgAygJEiEKGWFkZGl0aW9uYWxfcmVhZG9u" + + "bHlfcGF0aHMYBCADKAkSHQoQZGVidWdfb3V0cHV0X2RpchgFIAEoCUgBiAEBEh0KEGJsb2NrX2dp" + + "dF93cml0ZXMYBiABKAhIAogBARIeChFkaXNhYmxlX3RtcF93cml0ZRgHIAEoCEgDiAEBQhEKD19u" + + "ZXR3b3JrX2FjY2Vzc0ITChFfZGVidWdfb3V0cHV0X2RpckITChFfYmxvY2tfZ2l0X3dyaXRlc0IU" + + "ChJfZGlzYWJsZV90bXBfd3JpdGUi7wEKDVNlbGVjdGVkSW1hZ2USDAoEdXVpZBgCIAEoCRIMCgRw" + + "YXRoGAMgASgJEjQKCWRpbWVuc2lvbhgEIAEoCzIhLmFnZW50LnYxLlNlbGVjdGVkSW1hZ2VfRGlt" + + "ZW5zaW9uEhEKCW1pbWVfdHlwZRgHIAEoCRIRCgdibG9iX2lkGAEgASgMSAASDgoEZGF0YRgIIAEo" + + "DEgAEkMKEWJsb2JfaWRfd2l0aF9kYXRhGAkgASgLMiYuYWdlbnQudjEuU2VsZWN0ZWRJbWFnZV9C" + + "bG9iSWRXaXRoRGF0YUgAQhEKD2RhdGFfb3JfYmxvYl9pZCI9ChxTZWxlY3RlZEltYWdlX0Jsb2JJ" + + "ZFdpdGhEYXRhEg8KB2Jsb2JfaWQYASABKAwSDAoEZGF0YRgCIAEoDCI4ChdTZWxlY3RlZEltYWdl" + + "X0RpbWVuc2lvbhINCgV3aWR0aBgBIAEoBRIOCgZoZWlnaHQYAiABKAUiSQoRRXh0cmFDb250ZXh0" + + "RW50cnkSDgoEZGF0YRgBIAEoCUgAEhEKB2Jsb2JfaWQYAiABKAxIAEIRCg9kYXRhX29yX2Jsb2Jf" + + "aWQiWwoMU2VsZWN0ZWRGaWxlEg8KB2NvbnRlbnQYASABKAkSDAoEcGF0aBgCIAEoCRIaCg1yZWxh" + + "dGl2ZV9wYXRoGAMgASgJSACIAQFCEAoOX3JlbGF0aXZlX3BhdGgihAEKFVNlbGVjdGVkQ29kZVNl" + + "bGVjdGlvbhIPCgdjb250ZW50GAEgASgJEgwKBHBhdGgYAiABKAkSGgoNcmVsYXRpdmVfcGF0aBgD" + + "IAEoCUgAiAEBEh4KBXJhbmdlGAQgASgLMg8uYWdlbnQudjEuUmFuZ2VCEAoOX3JlbGF0aXZlX3Bh" + + "dGgiXQoQU2VsZWN0ZWRUZXJtaW5hbBIPCgdjb250ZW50GAEgASgJEhIKBXRpdGxlGAIgASgJSACI" + + "AQESEQoEcGF0aBgDIAEoCUgBiAEBQggKBl90aXRsZUIHCgVfcGF0aCKGAQoZU2VsZWN0ZWRUZXJt" + + "aW5hbFNlbGVjdGlvbhIPCgdjb250ZW50GAEgASgJEhIKBXRpdGxlGAIgASgJSACIAQESEQoEcGF0" + + "aBgDIAEoCUgBiAEBEh4KBXJhbmdlGAQgASgLMg8uYWdlbnQudjEuUmFuZ2VCCAoGX3RpdGxlQgcK" + + "BV9wYXRoIoMBCg5TZWxlY3RlZEZvbGRlchIMCgRwYXRoGAEgASgJEhoKDXJlbGF0aXZlX3BhdGgY" + + 
"AiABKAlIAIgBARI1Cg5kaXJlY3RvcnlfdHJlZRgDIAEoCzIdLmFnZW50LnYxLkxzRGlyZWN0b3J5" + + "VHJlZU5vZGVCEAoOX3JlbGF0aXZlX3BhdGginwEKFFNlbGVjdGVkRXh0ZXJuYWxMaW5rEgsKA3Vy" + + "bBgBIAEoCRIMCgR1dWlkGAIgASgJEhgKC3BkZl9jb250ZW50GAMgASgJSACIAQESEwoGaXNfcGRm" + + "GAQgASgISAGIAQESFQoIZmlsZW5hbWUYBSABKAlIAogBAUIOCgxfcGRmX2NvbnRlbnRCCQoHX2lz" + + "X3BkZkILCglfZmlsZW5hbWUiOAoSU2VsZWN0ZWRDdXJzb3JSdWxlEiIKBHJ1bGUYASABKAsyFC5h" + + "Z2VudC52MS5DdXJzb3JSdWxlIiIKD1NlbGVjdGVkR2l0RGlmZhIPCgdjb250ZW50GAEgASgJIjIK" + + "H1NlbGVjdGVkR2l0RGlmZkZyb21CcmFuY2hUb01haW4SDwoHY29udGVudBgBIAEoCSJpChFTZWxl" + + "Y3RlZEdpdENvbW1pdBILCgNzaGEYASABKAkSDwoHbWVzc2FnZRgCIAEoCRIYCgtkZXNjcmlwdGlv" + + "bhgDIAEoCUgAiAEBEgwKBGRpZmYYBCABKAlCDgoMX2Rlc2NyaXB0aW9uIt0BChNTZWxlY3RlZFB1" + + "bGxSZXF1ZXN0Eg4KBm51bWJlchgBIAEoBRILCgN1cmwYAiABKAkSEgoFdGl0bGUYAyABKAlIAIgB" + + "ARITCgtmb2xkZXJfcGF0aBgEIAEoCRIZCgxzdW1tYXJ5X2pzb24YBSABKAlIAYgBARIYCgtkZXNj" + + "cmlwdGlvbhgGIAEoCUgCiAEBEhQKB2Jsb2JfaWQYByABKAxIA4gBAUIICgZfdGl0bGVCDwoNX3N1" + + "bW1hcnlfanNvbkIOCgxfZGVzY3JpcHRpb25CCgoIX2Jsb2JfaWQiswEKGlNlbGVjdGVkR2l0UFJE" + + "aWZmU2VsZWN0aW9uEg4KBnByX3VybBgBIAEoCRIRCglmaWxlX3BhdGgYAiABKAkSEgoKc3RhcnRf" + + "bGluZRgDIAEoBRIQCghlbmRfbGluZRgEIAEoBRIZCgxkaWZmX2NvbnRlbnQYBSABKAlIAIgBARIU" + + "CgdibG9iX2lkGAYgASgMSAGIAQFCDwoNX2RpZmZfY29udGVudEIKCghfYmxvYl9pZCI2ChVTZWxl" + + "Y3RlZEN1cnNvckNvbW1hbmQSDAoEbmFtZRgBIAEoCRIPCgdjb250ZW50GAIgASgJIjUKFVNlbGVj" + + "dGVkRG9jdW1lbnRhdGlvbhIOCgZkb2NfaWQYASABKAkSDAoEbmFtZRgCIAEoCSIyChBTZWxlY3Rl" + + "ZFBhc3RDaGF0EhAKCGFnZW50X2lkGAEgASgJEgwKBG5hbWUYAiABKAkiqwEKCUNhbGxGcmFtZRIa" + + "Cg1mdW5jdGlvbl9uYW1lGAEgASgJSACIAQESEAoDdXJsGAIgASgJSAGIAQESGAoLbGluZV9udW1i" + + "ZXIYAyABKAVIAogBARIaCg1jb2x1bW5fbnVtYmVyGAQgASgFSAOIAQFCEAoOX2Z1bmN0aW9uX25h" + + "bWVCBgoEX3VybEIOCgxfbGluZV9udW1iZXJCEAoOX2NvbHVtbl9udW1iZXIiaAoKU3RhY2tUcmFj" + + "ZRIoCgtjYWxsX2ZyYW1lcxgBIAMoCzITLmFnZW50LnYxLkNhbGxGcmFtZRIcCg9yYXdfc3RhY2tf" + + "dHJhY2UYAiABKAlIAIgBAUISChBfcmF3X3N0YWNrX3RyYWNlIuQBChJTZWxlY3RlZENvbnNvbGVM" + + 
"b2cSDwoHbWVzc2FnZRgBIAEoCRIRCgl0aW1lc3RhbXAYAiABKAESDQoFbGV2ZWwYAyABKAkSEwoL" + + "Y2xpZW50X25hbWUYBCABKAkSEgoKc2Vzc2lvbl9pZBgFIAEoCRIuCgtzdGFja190cmFjZRgGIAEo" + + "CzIULmFnZW50LnYxLlN0YWNrVHJhY2VIAIgBARIdChBvYmplY3RfZGF0YV9qc29uGAcgASgJSAGI" + + "AQFCDgoMX3N0YWNrX3RyYWNlQhMKEV9vYmplY3RfZGF0YV9qc29uIroBChFTZWxlY3RlZFVJRWxl" + + "bWVudBIPCgdlbGVtZW50GAEgASgJEg0KBXhwYXRoGAIgASgJEhQKDHRleHRfY29udGVudBgDIAEo" + + "CRINCgVleHRyYRgEIAEoCRIWCgljb21wb25lbnQYBSABKAlIAIgBARIhChRjb21wb25lbnRfcHJv" + + "cHNfanNvbhgGIAEoCUgBiAEBQgwKCl9jb21wb25lbnRCFwoVX2NvbXBvbmVudF9wcm9wc19qc29u" + + "IiAKEFNlbGVjdGVkU3ViYWdlbnQSDAoEbmFtZRgBIAEoCSKCCgoPU2VsZWN0ZWRDb250ZXh0EjAK" + + "D3NlbGVjdGVkX2ltYWdlcxgBIAMoCzIXLmFnZW50LnYxLlNlbGVjdGVkSW1hZ2USPAoSaW52b2Nh" + + "dGlvbl9jb250ZXh0GAIgASgLMhsuYWdlbnQudjEuSW52b2NhdGlvbkNvbnRleHRIAIgBARIVCg1l" + + "eHRyYV9jb250ZXh0GAMgAygJEjoKFWV4dHJhX2NvbnRleHRfZW50cmllcxgQIAMoCzIbLmFnZW50" + + "LnYxLkV4dHJhQ29udGV4dEVudHJ5EiUKBWZpbGVzGAQgAygLMhYuYWdlbnQudjEuU2VsZWN0ZWRG" + + "aWxlEjgKD2NvZGVfc2VsZWN0aW9ucxgFIAMoCzIfLmFnZW50LnYxLlNlbGVjdGVkQ29kZVNlbGVj" + + "dGlvbhItCgl0ZXJtaW5hbHMYBiADKAsyGi5hZ2VudC52MS5TZWxlY3RlZFRlcm1pbmFsEkAKE3Rl" + + "cm1pbmFsX3NlbGVjdGlvbnMYByADKAsyIy5hZ2VudC52MS5TZWxlY3RlZFRlcm1pbmFsU2VsZWN0" + + "aW9uEikKB2ZvbGRlcnMYCCADKAsyGC5hZ2VudC52MS5TZWxlY3RlZEZvbGRlchI2Cg5leHRlcm5h" + + "bF9saW5rcxgJIAMoCzIeLmFnZW50LnYxLlNlbGVjdGVkRXh0ZXJuYWxMaW5rEjIKDGN1cnNvcl9y" + + "dWxlcxgKIAMoCzIcLmFnZW50LnYxLlNlbGVjdGVkQ3Vyc29yUnVsZRIwCghnaXRfZGlmZhgSIAEo" + + "CzIZLmFnZW50LnYxLlNlbGVjdGVkR2l0RGlmZkgBiAEBElQKHGdpdF9kaWZmX2Zyb21fYnJhbmNo" + + "X3RvX21haW4YCyABKAsyKS5hZ2VudC52MS5TZWxlY3RlZEdpdERpZmZGcm9tQnJhbmNoVG9NYWlu" + + "SAKIAQESOAoPY3Vyc29yX2NvbW1hbmRzGAwgAygLMh8uYWdlbnQudjEuU2VsZWN0ZWRDdXJzb3JD" + + "b21tYW5kEjcKDmRvY3VtZW50YXRpb25zGA0gAygLMh8uYWdlbnQudjEuU2VsZWN0ZWREb2N1bWVu" + + "dGF0aW9uEjAKC3VpX2VsZW1lbnRzGA4gAygLMhsuYWdlbnQudjEuU2VsZWN0ZWRVSUVsZW1lbnQS" + + "MgoMY29uc29sZV9sb2dzGA8gAygLMhwuYWdlbnQudjEuU2VsZWN0ZWRDb25zb2xlTG9nEjAKC2dp" + + 
"dF9jb21taXRzGBEgAygLMhsuYWdlbnQudjEuU2VsZWN0ZWRHaXRDb21taXQSLgoKcGFzdF9jaGF0" + + "cxgTIAMoCzIaLmFnZW50LnYxLlNlbGVjdGVkUGFzdENoYXQSRAoWZ2l0X3ByX2RpZmZfc2VsZWN0" + + "aW9ucxgUIAMoCzIkLmFnZW50LnYxLlNlbGVjdGVkR2l0UFJEaWZmU2VsZWN0aW9uEj0KFnNlbGVj" + + "dGVkX3B1bGxfcmVxdWVzdHMYFSADKAsyHS5hZ2VudC52MS5TZWxlY3RlZFB1bGxSZXF1ZXN0EjYK" + + "EnNlbGVjdGVkX3N1YmFnZW50cxgWIAMoCzIaLmFnZW50LnYxLlNlbGVjdGVkU3ViYWdlbnRCFQoT" + + "X2ludm9jYXRpb25fY29udGV4dEILCglfZ2l0X2RpZmZCHwodX2dpdF9kaWZmX2Zyb21fYnJhbmNo" + + "X3RvX21haW4i5QEKEUludm9jYXRpb25Db250ZXh0Ej8KDHNsYWNrX3RocmVhZBgBIAEoCzInLmFn" + + "ZW50LnYxLkludm9jYXRpb25Db250ZXh0X1NsYWNrVGhyZWFkSAASOQoJZ2l0aHViX3ByGAIgASgL" + + "MiQuYWdlbnQudjEuSW52b2NhdGlvbkNvbnRleHRfR2l0aHViUFJIABI5CglpZGVfc3RhdGUYAyAB" + + "KAsyJC5hZ2VudC52MS5JbnZvY2F0aW9uQ29udGV4dF9JZGVTdGF0ZUgAEhEKB2Jsb2JfaWQYCiAB" + + "KAxIAEIGCgRkYXRhIrsBCh1JbnZvY2F0aW9uQ29udGV4dF9TbGFja1RocmVhZBIOCgZ0aHJlYWQY" + + "ASABKAkSGQoMY2hhbm5lbF9uYW1lGAIgASgJSACIAQESHAoPY2hhbm5lbF9wdXJwb3NlGAMgASgJ" + + "SAGIAQESGgoNY2hhbm5lbF90b3BpYxgEIAEoCUgCiAEBQg8KDV9jaGFubmVsX25hbWVCEgoQX2No" + + "YW5uZWxfcHVycG9zZUIQCg5fY2hhbm5lbF90b3BpYyJ8ChpJbnZvY2F0aW9uQ29udGV4dF9HaXRo" + + "dWJQUhINCgV0aXRsZRgBIAEoCRITCgtkZXNjcmlwdGlvbhgCIAEoCRIQCghjb21tZW50cxgDIAEo" + + "CRIYCgtjaV9mYWlsdXJlcxgEIAEoCUgAiAEBQg4KDF9jaV9mYWlsdXJlcyL+AQoaSW52b2NhdGlv" + + "bkNvbnRleHRfSWRlU3RhdGUSQAoNdmlzaWJsZV9maWxlcxgBIAMoCzIpLmFnZW50LnYxLkludm9j" + + "YXRpb25Db250ZXh0X0lkZVN0YXRlX0ZpbGUSSAoVcmVjZW50bHlfdmlld2VkX2ZpbGVzGAIgAygL" + + "MikuYWdlbnQudjEuSW52b2NhdGlvbkNvbnRleHRfSWRlU3RhdGVfRmlsZRJUChRjdXJyZW50bHlf" + + "dmlld2VkX3BycxgDIAMoCzI2LmFnZW50LnYxLkludm9jYXRpb25Db250ZXh0X0lkZVN0YXRlX1Zp" + + "ZXdlZFB1bGxSZXF1ZXN0Io4CCh9JbnZvY2F0aW9uQ29udGV4dF9JZGVTdGF0ZV9GaWxlEgwKBHBh" + + "dGgYASABKAkSGgoNcmVsYXRpdmVfcGF0aBgCIAEoCUgAiAEBElYKD2N1cnNvcl9wb3NpdGlvbhgD" + + "IAEoCzI4LmFnZW50LnYxLkludm9jYXRpb25Db250ZXh0X0lkZVN0YXRlX0ZpbGVfQ3Vyc29yUG9z" + + "aXRpb25IAYgBARITCgt0b3RhbF9saW5lcxgEIAEoBRIbCg5hY3RpdmVfY29tbWFuZBgFIAEoCUgC" + + 
"iAEBQhAKDl9yZWxhdGl2ZV9wYXRoQhIKEF9jdXJzb3JfcG9zaXRpb25CEQoPX2FjdGl2ZV9jb21t" + + "YW5kIkwKLkludm9jYXRpb25Db250ZXh0X0lkZVN0YXRlX0ZpbGVfQ3Vyc29yUG9zaXRpb24SDAoE" + + "bGluZRgBIAEoBRIMCgR0ZXh0GAIgASgJIukBCixJbnZvY2F0aW9uQ29udGV4dF9JZGVTdGF0ZV9W" + + "aWV3ZWRQdWxsUmVxdWVzdBIOCgZudW1iZXIYASABKAUSCwoDdXJsGAIgASgJEhIKBXRpdGxlGAMg" + + "ASgJSACIAQESGAoLZm9sZGVyX3BhdGgYBCABKAlIAYgBARIZCgxzdW1tYXJ5X2pzb24YBSABKAlI" + + "AogBARIYCgtkZXNjcmlwdGlvbhgGIAEoCUgDiAEBQggKBl90aXRsZUIOCgxfZm9sZGVyX3BhdGhC" + + "DwoNX3N1bW1hcnlfanNvbkIOCgxfZGVzY3JpcHRpb24iSAoWU2V0dXBWbUVudmlyb25tZW50QXJn" + + "cxIXCg9pbnN0YWxsX2NvbW1hbmQYAiABKAkSFQoNc3RhcnRfY29tbWFuZBgDIAEoCSJcChhTZXR1" + + "cFZtRW52aXJvbm1lbnRSZXN1bHQSNgoHc3VjY2VzcxgBIAEoCzIjLmFnZW50LnYxLlNldHVwVm1F" + + "bnZpcm9ubWVudFN1Y2Nlc3NIAEIICgZyZXN1bHQiGwoZU2V0dXBWbUVudmlyb25tZW50U3VjY2Vz" + + "cyKAAQoaU2V0dXBWbUVudmlyb25tZW50VG9vbENhbGwSLgoEYXJncxgBIAEoCzIgLmFnZW50LnYx" + + "LlNldHVwVm1FbnZpcm9ubWVudEFyZ3MSMgoGcmVzdWx0GAIgASgLMiIuYWdlbnQudjEuU2V0dXBW" + + "bUVudmlyb25tZW50UmVzdWx0IsABChlTaGVsbENvbW1hbmRQYXJzaW5nUmVzdWx0EhYKDnBhcnNp" + + "bmdfZmFpbGVkGAEgASgIElIKE2V4ZWN1dGFibGVfY29tbWFuZHMYAiADKAsyNS5hZ2VudC52MS5T" + + "aGVsbENvbW1hbmRQYXJzaW5nUmVzdWx0X0V4ZWN1dGFibGVDb21tYW5kEhUKDWhhc19yZWRpcmVj" + + "dHMYAyABKAgSIAoYaGFzX2NvbW1hbmRfc3Vic3RpdHV0aW9uGAQgASgIIk0KLlNoZWxsQ29tbWFu" + + "ZFBhcnNpbmdSZXN1bHRfRXhlY3V0YWJsZUNvbW1hbmRBcmcSDAoEdHlwZRgBIAEoCRINCgV2YWx1" + + "ZRgCIAEoCSKWAQorU2hlbGxDb21tYW5kUGFyc2luZ1Jlc3VsdF9FeGVjdXRhYmxlQ29tbWFuZBIM" + + "CgRuYW1lGAEgASgJEkYKBGFyZ3MYAiADKAsyOC5hZ2VudC52MS5TaGVsbENvbW1hbmRQYXJzaW5n" + + "UmVzdWx0X0V4ZWN1dGFibGVDb21tYW5kQXJnEhEKCWZ1bGxfdGV4dBgDIAEoCSKIBAoJU2hlbGxB" + + "cmdzEg8KB2NvbW1hbmQYASABKAkSGQoRd29ya2luZ19kaXJlY3RvcnkYAiABKAkSDwoHdGltZW91" + + "dBgDIAEoBRIUCgx0b29sX2NhbGxfaWQYBCABKAkSFwoPc2ltcGxlX2NvbW1hbmRzGAUgAygJEhoK" + + "Emhhc19pbnB1dF9yZWRpcmVjdBgGIAEoCBIbChNoYXNfb3V0cHV0X3JlZGlyZWN0GAcgASgIEjsK" + + "DnBhcnNpbmdfcmVzdWx0GAggASgLMiMuYWdlbnQudjEuU2hlbGxDb21tYW5kUGFyc2luZ1Jlc3Vs" + + 
"dBI+ChhyZXF1ZXN0ZWRfc2FuZGJveF9wb2xpY3kYCSABKAsyFy5hZ2VudC52MS5TYW5kYm94UG9s" + + "aWN5SACIAQESKAobZmlsZV9vdXRwdXRfdGhyZXNob2xkX2J5dGVzGAogASgESAGIAQESFQoNaXNf" + + "YmFja2dyb3VuZBgLIAEoCBIVCg1za2lwX2FwcHJvdmFsGAwgASgIEhgKEHRpbWVvdXRfYmVoYXZp" + + "b3IYDSABKAUSGQoMaGFyZF90aW1lb3V0GA4gASgFSAKIAQFCGwoZX3JlcXVlc3RlZF9zYW5kYm94" + + "X3BvbGljeUIeChxfZmlsZV9vdXRwdXRfdGhyZXNob2xkX2J5dGVzQg8KDV9oYXJkX3RpbWVvdXQi" + + "+gMKC1NoZWxsUmVzdWx0EjQKDnNhbmRib3hfcG9saWN5GGUgASgLMhcuYWdlbnQudjEuU2FuZGJv" + + "eFBvbGljeUgBiAEBEhoKDWlzX2JhY2tncm91bmQYZiABKAhIAogBARIdChB0ZXJtaW5hbHNfZm9s" + + "ZGVyGGcgASgJSAOIAQESEAoDcGlkGGggASgNSASIAQESKQoHc3VjY2VzcxgBIAEoCzIWLmFnZW50" + + "LnYxLlNoZWxsU3VjY2Vzc0gAEikKB2ZhaWx1cmUYAiABKAsyFi5hZ2VudC52MS5TaGVsbEZhaWx1" + + "cmVIABIpCgd0aW1lb3V0GAMgASgLMhYuYWdlbnQudjEuU2hlbGxUaW1lb3V0SAASKwoIcmVqZWN0" + + "ZWQYBCABKAsyFy5hZ2VudC52MS5TaGVsbFJlamVjdGVkSAASMAoLc3Bhd25fZXJyb3IYBSABKAsy" + + "GS5hZ2VudC52MS5TaGVsbFNwYXduRXJyb3JIABI8ChFwZXJtaXNzaW9uX2RlbmllZBgHIAEoCzIf" + + "LmFnZW50LnYxLlNoZWxsUGVybWlzc2lvbkRlbmllZEgAQggKBnJlc3VsdEIRCg9fc2FuZGJveF9w" + + "b2xpY3lCEAoOX2lzX2JhY2tncm91bmRCEwoRX3Rlcm1pbmFsc19mb2xkZXJCBgoEX3BpZCIhChFT" + + "aGVsbFN0cmVhbVN0ZG91dBIMCgRkYXRhGAEgASgJIiEKEVNoZWxsU3RyZWFtU3RkZXJyEgwKBGRh" + + "dGEYASABKAkitQEKD1NoZWxsU3RyZWFtRXhpdBIMCgRjb2RlGAEgASgNEgsKA2N3ZBgCIAEoCRI2" + + "Cg9vdXRwdXRfbG9jYXRpb24YAyABKAsyGC5hZ2VudC52MS5PdXRwdXRMb2NhdGlvbkgAiAEBEg8K" + + "B2Fib3J0ZWQYBCABKAgSGQoMYWJvcnRfcmVhc29uGAUgASgFSAGIAQFCEgoQX291dHB1dF9sb2Nh" + + "dGlvbkIPCg1fYWJvcnRfcmVhc29uIlsKEFNoZWxsU3RyZWFtU3RhcnQSNAoOc2FuZGJveF9wb2xp" + + "Y3kYASABKAsyFy5hZ2VudC52MS5TYW5kYm94UG9saWN5SACIAQFCEQoPX3NhbmRib3hfcG9saWN5" + + "IpkBChdTaGVsbFN0cmVhbUJhY2tncm91bmRlZBIQCghzaGVsbF9pZBgBIAEoDRIPCgdjb21tYW5k" + + "GAIgASgJEhkKEXdvcmtpbmdfZGlyZWN0b3J5GAMgASgJEhAKA3BpZBgEIAEoDUgAiAEBEhcKCm1z" + + "X3RvX3dhaXQYBSABKAVIAYgBAUIGCgRfcGlkQg0KC19tc190b193YWl0IvICCgtTaGVsbFN0cmVh" + + "bRItCgZzdGRvdXQYASABKAsyGy5hZ2VudC52MS5TaGVsbFN0cmVhbVN0ZG91dEgAEi0KBnN0ZGVy" + + 
"chgCIAEoCzIbLmFnZW50LnYxLlNoZWxsU3RyZWFtU3RkZXJySAASKQoEZXhpdBgDIAEoCzIZLmFn" + + "ZW50LnYxLlNoZWxsU3RyZWFtRXhpdEgAEisKBXN0YXJ0GAQgASgLMhouYWdlbnQudjEuU2hlbGxT" + + "dHJlYW1TdGFydEgAEisKCHJlamVjdGVkGAUgASgLMhcuYWdlbnQudjEuU2hlbGxSZWplY3RlZEgA" + + "EjwKEXBlcm1pc3Npb25fZGVuaWVkGAYgASgLMh8uYWdlbnQudjEuU2hlbGxQZXJtaXNzaW9uRGVu" + + "aWVkSAASOQoMYmFja2dyb3VuZGVkGAcgASgLMiEuYWdlbnQudjEuU2hlbGxTdHJlYW1CYWNrZ3Jv" + + "dW5kZWRIAEIHCgVldmVudCJLCg5PdXRwdXRMb2NhdGlvbhIRCglmaWxlX3BhdGgYASABKAkSEgoK" + + "c2l6ZV9ieXRlcxgCIAEoAxISCgpsaW5lX2NvdW50GAMgASgDIv8CCgxTaGVsbFN1Y2Nlc3MSDwoH" + + "Y29tbWFuZBgBIAEoCRIZChF3b3JraW5nX2RpcmVjdG9yeRgCIAEoCRIRCglleGl0X2NvZGUYAyAB" + + "KAUSDgoGc2lnbmFsGAQgASgJEg4KBnN0ZG91dBgFIAEoCRIOCgZzdGRlcnIYBiABKAkSFgoOZXhl" + + "Y3V0aW9uX3RpbWUYByABKAUSNgoPb3V0cHV0X2xvY2F0aW9uGAggASgLMhguYWdlbnQudjEuT3V0" + + "cHV0TG9jYXRpb25IAIgBARIVCghzaGVsbF9pZBgJIAEoDUgBiAEBEh8KEmludGVybGVhdmVkX291" + + "dHB1dBgKIAEoCUgCiAEBEhAKA3BpZBgLIAEoDUgDiAEBEhcKCm1zX3RvX3dhaXQYDCABKAVIBIgB" + + "AUISChBfb3V0cHV0X2xvY2F0aW9uQgsKCV9zaGVsbF9pZEIVChNfaW50ZXJsZWF2ZWRfb3V0cHV0" + + "QgYKBF9waWRCDQoLX21zX3RvX3dhaXQi1gIKDFNoZWxsRmFpbHVyZRIPCgdjb21tYW5kGAEgASgJ" + + "EhkKEXdvcmtpbmdfZGlyZWN0b3J5GAIgASgJEhEKCWV4aXRfY29kZRgDIAEoBRIOCgZzaWduYWwY" + + "BCABKAkSDgoGc3Rkb3V0GAUgASgJEg4KBnN0ZGVychgGIAEoCRIWCg5leGVjdXRpb25fdGltZRgH" + + "IAEoBRI2Cg9vdXRwdXRfbG9jYXRpb24YCCABKAsyGC5hZ2VudC52MS5PdXRwdXRMb2NhdGlvbkgA" + + "iAEBEh8KEmludGVybGVhdmVkX291dHB1dBgJIAEoCUgBiAEBEhkKDGFib3J0X3JlYXNvbhgKIAEo" + + "BUgCiAEBEg8KB2Fib3J0ZWQYCyABKAhCEgoQX291dHB1dF9sb2NhdGlvbkIVChNfaW50ZXJsZWF2" + + "ZWRfb3V0cHV0Qg8KDV9hYm9ydF9yZWFzb24iTgoMU2hlbGxUaW1lb3V0Eg8KB2NvbW1hbmQYASAB" + + "KAkSGQoRd29ya2luZ19kaXJlY3RvcnkYAiABKAkSEgoKdGltZW91dF9tcxgDIAEoBSJgCg1TaGVs" + + "bFJlamVjdGVkEg8KB2NvbW1hbmQYASABKAkSGQoRd29ya2luZ19kaXJlY3RvcnkYAiABKAkSDgoG" + + "cmVhc29uGAMgASgJEhMKC2lzX3JlYWRvbmx5GAQgASgIImcKFVNoZWxsUGVybWlzc2lvbkRlbmll" + + "ZBIPCgdjb21tYW5kGAEgASgJEhkKEXdvcmtpbmdfZGlyZWN0b3J5GAIgASgJEg0KBWVycm9yGAMg" + + 
"ASgJEhMKC2lzX3JlYWRvbmx5GAQgASgIIkwKD1NoZWxsU3Bhd25FcnJvchIPCgdjb21tYW5kGAEg" + + "ASgJEhkKEXdvcmtpbmdfZGlyZWN0b3J5GAIgASgJEg0KBWVycm9yGAMgASgJIkAKElNoZWxsUGFy" + + "dGlhbFJlc3VsdBIUCgxzdGRvdXRfZGVsdGEYASABKAkSFAoMc3RkZXJyX2RlbHRhGAIgASgJIlkK" + + "DVNoZWxsVG9vbENhbGwSIQoEYXJncxgBIAEoCzITLmFnZW50LnYxLlNoZWxsQXJncxIlCgZyZXN1" + + "bHQYAiABKAsyFS5hZ2VudC52MS5TaGVsbFJlc3VsdCIrChhTaGVsbFRvb2xDYWxsU3Rkb3V0RGVs" + + "dGESDwoHY29udGVudBgBIAEoCSIrChhTaGVsbFRvb2xDYWxsU3RkZXJyRGVsdGESDwoHY29udGVu" + + "dBgBIAEoCSKJAQoSU2hlbGxUb29sQ2FsbERlbHRhEjQKBnN0ZG91dBgBIAEoCzIiLmFnZW50LnYx" + + "LlNoZWxsVG9vbENhbGxTdGRvdXREZWx0YUgAEjQKBnN0ZGVychgCIAEoCzIiLmFnZW50LnYxLlNo" + + "ZWxsVG9vbENhbGxTdGRlcnJEZWx0YUgAQgcKBWRlbHRhIu0BCgxTdWJhZ2VudFR5cGUSOAoLdW5z" + + "cGVjaWZpZWQYASABKAsyIS5hZ2VudC52MS5TdWJhZ2VudFR5cGVVbnNwZWNpZmllZEgAEjkKDGNv" + + "bXB1dGVyX3VzZRgCIAEoCzIhLmFnZW50LnYxLlN1YmFnZW50VHlwZUNvbXB1dGVyVXNlSAASLgoG" + + "Y3VzdG9tGAMgASgLMhwuYWdlbnQudjEuU3ViYWdlbnRUeXBlQ3VzdG9tSAASMAoHZXhwbG9yZRgE" + + "IAEoCzIdLmFnZW50LnYxLlN1YmFnZW50VHlwZUV4cGxvcmVIAEIGCgR0eXBlIhkKF1N1YmFnZW50" + + "VHlwZVVuc3BlY2lmaWVkIhkKF1N1YmFnZW50VHlwZUNvbXB1dGVyVXNlIhUKE1N1YmFnZW50VHlw" + + "ZUV4cGxvcmUiIgoSU3ViYWdlbnRUeXBlQ3VzdG9tEgwKBG5hbWUYASABKAkijQEKDkN1c3RvbVN1" + + "YmFnZW50EhEKCWZ1bGxfcGF0aBgBIAEoCRIMCgRuYW1lGAIgASgJEhMKC2Rlc2NyaXB0aW9uGAMg" + + "ASgJEg0KBXRvb2xzGAQgAygJEg0KBW1vZGVsGAUgASgJEg4KBnByb21wdBgGIAEoCRIXCg9wZXJt" + + "aXNzaW9uX21vZGUYByABKAUiaAoOU3dpdGNoTW9kZUFyZ3MSFgoOdGFyZ2V0X21vZGVfaWQYASAB" + + "KAkSGAoLZXhwbGFuYXRpb24YAiABKAlIAIgBARIUCgx0b29sX2NhbGxfaWQYAyABKAlCDgoMX2V4" + + "cGxhbmF0aW9uIqoBChBTd2l0Y2hNb2RlUmVzdWx0Ei4KB3N1Y2Nlc3MYASABKAsyGy5hZ2VudC52" + + "MS5Td2l0Y2hNb2RlU3VjY2Vzc0gAEioKBWVycm9yGAIgASgLMhkuYWdlbnQudjEuU3dpdGNoTW9k" + + "ZUVycm9ySAASMAoIcmVqZWN0ZWQYAyABKAsyHC5hZ2VudC52MS5Td2l0Y2hNb2RlUmVqZWN0ZWRI" + + "AEIICgZyZXN1bHQiPQoRU3dpdGNoTW9kZVN1Y2Nlc3MSFAoMZnJvbV9tb2RlX2lkGAEgASgJEhIK" + + "CnRvX21vZGVfaWQYAiABKAkiIAoPU3dpdGNoTW9kZUVycm9yEg0KBWVycm9yGAEgASgJIiQKElN3" + + 
"aXRjaE1vZGVSZWplY3RlZBIOCgZyZWFzb24YASABKAkiaAoSU3dpdGNoTW9kZVRvb2xDYWxsEiYK" + + "BGFyZ3MYASABKAsyGC5hZ2VudC52MS5Td2l0Y2hNb2RlQXJncxIqCgZyZXN1bHQYAiABKAsyGi5h" + + "Z2VudC52MS5Td2l0Y2hNb2RlUmVzdWx0IkAKFlN3aXRjaE1vZGVSZXF1ZXN0UXVlcnkSJgoEYXJn" + + "cxgBIAEoCzIYLmFnZW50LnYxLlN3aXRjaE1vZGVBcmdzIqkBChlTd2l0Y2hNb2RlUmVxdWVzdFJl" + + "c3BvbnNlEkAKCGFwcHJvdmVkGAEgASgLMiwuYWdlbnQudjEuU3dpdGNoTW9kZVJlcXVlc3RSZXNw" + + "b25zZV9BcHByb3ZlZEgAEkAKCHJlamVjdGVkGAIgASgLMiwuYWdlbnQudjEuU3dpdGNoTW9kZVJl" + + "cXVlc3RSZXNwb25zZV9SZWplY3RlZEgAQggKBnJlc3VsdCIkCiJTd2l0Y2hNb2RlUmVxdWVzdFJl" + + "c3BvbnNlX0FwcHJvdmVkIjQKIlN3aXRjaE1vZGVSZXF1ZXN0UmVzcG9uc2VfUmVqZWN0ZWQSDgoG" + + "cmVhc29uGAEgASgJInUKCFRvZG9JdGVtEgoKAmlkGAEgASgJEg8KB2NvbnRlbnQYAiABKAkSDgoG" + + "c3RhdHVzGAMgASgFEhIKCmNyZWF0ZWRfYXQYBCABKAMSEgoKdXBkYXRlZF9hdBgFIAEoAxIUCgxk" + + "ZXBlbmRlbmNpZXMYBiADKAkiawoTVXBkYXRlVG9kb3NUb29sQ2FsbBInCgRhcmdzGAEgASgLMhku" + + "YWdlbnQudjEuVXBkYXRlVG9kb3NBcmdzEisKBnJlc3VsdBgCIAEoCzIbLmFnZW50LnYxLlVwZGF0" + + "ZVRvZG9zUmVzdWx0IkMKD1VwZGF0ZVRvZG9zQXJncxIhCgV0b2RvcxgBIAMoCzISLmFnZW50LnYx" + + "LlRvZG9JdGVtEg0KBW1lcmdlGAIgASgIInsKEVVwZGF0ZVRvZG9zUmVzdWx0Ei8KB3N1Y2Nlc3MY" + + "ASABKAsyHC5hZ2VudC52MS5VcGRhdGVUb2Rvc1N1Y2Nlc3NIABIrCgVlcnJvchgCIAEoCzIaLmFn" + + "ZW50LnYxLlVwZGF0ZVRvZG9zRXJyb3JIAEIICgZyZXN1bHQiXwoSVXBkYXRlVG9kb3NTdWNjZXNz" + + "EiEKBXRvZG9zGAEgAygLMhIuYWdlbnQudjEuVG9kb0l0ZW0SEwoLdG90YWxfY291bnQYAiABKAUS" + + "EQoJd2FzX21lcmdlGAMgASgIIiEKEFVwZGF0ZVRvZG9zRXJyb3ISDQoFZXJyb3IYASABKAkiZQoR" + + "UmVhZFRvZG9zVG9vbENhbGwSJQoEYXJncxgBIAEoCzIXLmFnZW50LnYxLlJlYWRUb2Rvc0FyZ3MS" + + "KQoGcmVzdWx0GAIgASgLMhkuYWdlbnQudjEuUmVhZFRvZG9zUmVzdWx0IjkKDVJlYWRUb2Rvc0Fy" + + "Z3MSFQoNc3RhdHVzX2ZpbHRlchgBIAMoBRIRCglpZF9maWx0ZXIYAiADKAkidQoPUmVhZFRvZG9z" + + "UmVzdWx0Ei0KB3N1Y2Nlc3MYASABKAsyGi5hZ2VudC52MS5SZWFkVG9kb3NTdWNjZXNzSAASKQoF" + + "ZXJyb3IYAiABKAsyGC5hZ2VudC52MS5SZWFkVG9kb3NFcnJvckgAQggKBnJlc3VsdCJKChBSZWFk" + + "VG9kb3NTdWNjZXNzEiEKBXRvZG9zGAEgAygLMhIuYWdlbnQudjEuVG9kb0l0ZW0SEwoLdG90YWxf" + + 
"Y291bnQYAiABKAUiHwoOUmVhZFRvZG9zRXJyb3ISDQoFZXJyb3IYASABKAkiSwoFUmFuZ2USIQoF" + + "c3RhcnQYASABKAsyEi5hZ2VudC52MS5Qb3NpdGlvbhIfCgNlbmQYAiABKAsyEi5hZ2VudC52MS5Q" + + "b3NpdGlvbiIoCghQb3NpdGlvbhIMCgRsaW5lGAEgASgNEg4KBmNvbHVtbhgCIAEoDSIYCgVFcnJv" + + "chIPCgdtZXNzYWdlGAEgASgJIjoKDVdlYlNlYXJjaEFyZ3MSEwoLc2VhcmNoX3Rlcm0YASABKAkS" + + "FAoMdG9vbF9jYWxsX2lkGAIgASgJIqYBCg9XZWJTZWFyY2hSZXN1bHQSLQoHc3VjY2VzcxgBIAEo" + + "CzIaLmFnZW50LnYxLldlYlNlYXJjaFN1Y2Nlc3NIABIpCgVlcnJvchgCIAEoCzIYLmFnZW50LnYx" + + "LldlYlNlYXJjaEVycm9ySAASLwoIcmVqZWN0ZWQYAyABKAsyGy5hZ2VudC52MS5XZWJTZWFyY2hS" + + "ZWplY3RlZEgAQggKBnJlc3VsdCJEChBXZWJTZWFyY2hTdWNjZXNzEjAKCnJlZmVyZW5jZXMYASAD" + + "KAsyHC5hZ2VudC52MS5XZWJTZWFyY2hSZWZlcmVuY2UiHwoOV2ViU2VhcmNoRXJyb3ISDQoFZXJy" + + "b3IYASABKAkiIwoRV2ViU2VhcmNoUmVqZWN0ZWQSDgoGcmVhc29uGAEgASgJIj8KEldlYlNlYXJj" + + "aFJlZmVyZW5jZRINCgV0aXRsZRgBIAEoCRILCgN1cmwYAiABKAkSDQoFY2h1bmsYAyABKAkiZQoR" + + "V2ViU2VhcmNoVG9vbENhbGwSJQoEYXJncxgBIAEoCzIXLmFnZW50LnYxLldlYlNlYXJjaEFyZ3MS" + + "KQoGcmVzdWx0GAIgASgLMhkuYWdlbnQudjEuV2ViU2VhcmNoUmVzdWx0Ij4KFVdlYlNlYXJjaFJl" + + "cXVlc3RRdWVyeRIlCgRhcmdzGAEgASgLMhcuYWdlbnQudjEuV2ViU2VhcmNoQXJncyKmAQoYV2Vi" + + "U2VhcmNoUmVxdWVzdFJlc3BvbnNlEj8KCGFwcHJvdmVkGAEgASgLMisuYWdlbnQudjEuV2ViU2Vh" + + "cmNoUmVxdWVzdFJlc3BvbnNlX0FwcHJvdmVkSAASPwoIcmVqZWN0ZWQYAiABKAsyKy5hZ2VudC52" + + "MS5XZWJTZWFyY2hSZXF1ZXN0UmVzcG9uc2VfUmVqZWN0ZWRIAEIICgZyZXN1bHQiIwohV2ViU2Vh" + + "cmNoUmVxdWVzdFJlc3BvbnNlX0FwcHJvdmVkIjMKIVdlYlNlYXJjaFJlcXVlc3RSZXNwb25zZV9S" + + "ZWplY3RlZBIOCgZyZWFzb24YASABKAkifwoJV3JpdGVBcmdzEgwKBHBhdGgYASABKAkSEQoJZmls" + + "ZV90ZXh0GAIgASgJEhQKDHRvb2xfY2FsbF9pZBgDIAEoCRInCh9yZXR1cm5fZmlsZV9jb250ZW50" + + "X2FmdGVyX3dyaXRlGAQgASgIEhIKCmZpbGVfYnl0ZXMYBSABKAwigAIKC1dyaXRlUmVzdWx0EikK" + + "B3N1Y2Nlc3MYASABKAsyFi5hZ2VudC52MS5Xcml0ZVN1Y2Nlc3NIABI8ChFwZXJtaXNzaW9uX2Rl" + + "bmllZBgDIAEoCzIfLmFnZW50LnYxLldyaXRlUGVybWlzc2lvbkRlbmllZEgAEioKCG5vX3NwYWNl" + + "GAQgASgLMhYuYWdlbnQudjEuV3JpdGVOb1NwYWNlSAASJQoFZXJyb3IYBSABKAsyFC5hZ2VudC52" + + 
"MS5Xcml0ZUVycm9ySAASKwoIcmVqZWN0ZWQYBiABKAsyFy5hZ2VudC52MS5Xcml0ZVJlamVjdGVk" + + "SABCCAoGcmVzdWx0IooBCgxXcml0ZVN1Y2Nlc3MSDAoEcGF0aBgBIAEoCRIVCg1saW5lc19jcmVh" + + "dGVkGAIgASgFEhEKCWZpbGVfc2l6ZRgDIAEoBRIlChhmaWxlX2NvbnRlbnRfYWZ0ZXJfd3JpdGUY" + + "BCABKAlIAIgBAUIbChlfZmlsZV9jb250ZW50X2FmdGVyX3dyaXRlIm8KFVdyaXRlUGVybWlzc2lv" + + "bkRlbmllZBIMCgRwYXRoGAEgASgJEhEKCWRpcmVjdG9yeRgCIAEoCRIRCglvcGVyYXRpb24YAyAB" + + "KAkSDQoFZXJyb3IYBCABKAkSEwoLaXNfcmVhZG9ubHkYBSABKAgiHAoMV3JpdGVOb1NwYWNlEgwK" + + "BHBhdGgYASABKAkiKQoKV3JpdGVFcnJvchIMCgRwYXRoGAEgASgJEg0KBWVycm9yGAIgASgJIi0K" + + "DVdyaXRlUmVqZWN0ZWQSDAoEcGF0aBgBIAEoCRIOCgZyZWFzb24YAiABKAkigwEKF0Jvb3RzdHJh" + + "cFN0YXRzaWdSZXF1ZXN0Eh4KEWlnbm9yZV9kZXZfc3RhdHVzGAEgASgISACIAQESHQoQb3BlcmF0" + + "aW5nX3N5c3RlbRgCIAEoBUgBiAEBQhQKEl9pZ25vcmVfZGV2X3N0YXR1c0ITChFfb3BlcmF0aW5n" + + "X3N5c3RlbSIOCgxQaW5nUmVzcG9uc2UitwEKC0V4ZWNSZXF1ZXN0Eg8KB2NvbW1hbmQYASABKAkS" + + "EAoDY3dkGAIgASgJSACIAQESDAoEYXJncxgDIAMoCRI7CgtlbnZpcm9ubWVudBgEIAMoCzImLmFn" + + "ZW50LnYxLkV4ZWNSZXF1ZXN0LkVudmlyb25tZW50RW50cnkaMgoQRW52aXJvbm1lbnRFbnRyeRIL" + + "CgNrZXkYASABKAkSDQoFdmFsdWUYAiABKAk6AjgBQgYKBF9jd2QioAEKDEV4ZWNSZXNwb25zZRIt" + + "CgxzdGRvdXRfZXZlbnQYASABKAsyFS5hZ2VudC52MS5TdGRvdXRFdmVudEgAEi0KDHN0ZGVycl9l" + + "dmVudBgCIAEoCzIVLmFnZW50LnYxLlN0ZGVyckV2ZW50SAASKQoKZXhpdF9ldmVudBgDIAEoCzIT" + + "LmFnZW50LnYxLkV4aXRFdmVudEgAQgcKBWV2ZW50IhsKC1N0ZG91dEV2ZW50EgwKBGRhdGEYASAB" + + "KAkiGwoLU3RkZXJyRXZlbnQSDAoEZGF0YRgBIAEoCSIeCglFeGl0RXZlbnQSEQoJZXhpdF9jb2Rl" + + "GAEgASgFIiMKE1JlYWRUZXh0RmlsZVJlcXVlc3QSDAoEcGF0aBgBIAEoCSInChRSZWFkVGV4dEZp" + + "bGVSZXNwb25zZRIPCgdjb250ZW50GAEgASgJIjUKFFdyaXRlVGV4dEZpbGVSZXF1ZXN0EgwKBHBh" + + "dGgYASABKAkSDwoHY29udGVudBgCIAEoCSIXChVXcml0ZVRleHRGaWxlUmVzcG9uc2UiJQoVUmVh" + + "ZEJpbmFyeUZpbGVSZXF1ZXN0EgwKBHBhdGgYASABKAkiKQoWUmVhZEJpbmFyeUZpbGVSZXNwb25z" + + "ZRIPCgdjb250ZW50GAEgASgMIjcKFldyaXRlQmluYXJ5RmlsZVJlcXVlc3QSDAoEcGF0aBgBIAEo" + + "CRIPCgdjb250ZW50GAIgASgMIhkKF1dyaXRlQmluYXJ5RmlsZVJlc3BvbnNlIkUKHkdldFdvcmtz" + + 
"cGFjZUNoYW5nZXNIYXNoUmVxdWVzdBIRCglyb290X3BhdGgYASABKAkSEAoIYmFzZV9yZWYYAiAB" + + "KAkiLwofR2V0V29ya3NwYWNlQ2hhbmdlc0hhc2hSZXNwb25zZRIMCgRoYXNoGAEgASgJIlAKH1Jl" + + "ZnJlc2hHaXRodWJBY2Nlc3NUb2tlblJlcXVlc3QSGwoTZ2l0aHViX2FjY2Vzc190b2tlbhgBIAEo" + + "CRIQCghob3N0bmFtZRgCIAEoCSIiCiBSZWZyZXNoR2l0aHViQWNjZXNzVG9rZW5SZXNwb25zZSJX" + + "Ch1XYXJtUmVtb3RlQWNjZXNzU2VydmVyUmVxdWVzdBIOCgZjb21taXQYASABKAkSDAoEcG9ydBgC" + + "IAEoBRIYChBjb25uZWN0aW9uX3Rva2VuGAMgASgJIiAKHldhcm1SZW1vdGVBY2Nlc3NTZXJ2ZXJS" + + "ZXNwb25zZSIWChRMaXN0QXJ0aWZhY3RzUmVxdWVzdCKKAgoWQXJ0aWZhY3RVcGxvYWRNZXRhZGF0" + + "YRIVCg1hYnNvbHV0ZV9wYXRoGAEgASgJEhIKCnNpemVfYnl0ZXMYAiABKAQSGgoSdXBkYXRlZF9h" + + "dF91bml4X21zGAMgASgDEg4KBnN0YXR1cxgEIAEoBRIWCg5ieXRlc191cGxvYWRlZBgFIAEoBBIS" + + "CgpsYXN0X2Vycm9yGAYgASgJEhcKD3VwbG9hZF9hdHRlbXB0cxgHIAEoDRIfChdsYXN0X3N0YXJ0" + + "ZWRfYXRfdW5peF9tcxgIIAEoAxIgChhsYXN0X2ZpbmlzaGVkX2F0X3VuaXhfbXMYCSABKAMSEQoJ" + + "dXBsb2FkX2lkGAogASgJIkwKFUxpc3RBcnRpZmFjdHNSZXNwb25zZRIzCglhcnRpZmFjdHMYASAD" + + "KAsyIC5hZ2VudC52MS5BcnRpZmFjdFVwbG9hZE1ldGFkYXRhIk4KFlVwbG9hZEFydGlmYWN0c1Jl" + + "cXVlc3QSNAoHdXBsb2FkcxgBIAMoCzIjLmFnZW50LnYxLkFydGlmYWN0VXBsb2FkSW5zdHJ1Y3Rp" + + "b24i1wIKGUFydGlmYWN0VXBsb2FkSW5zdHJ1Y3Rpb24SFQoNYWJzb2x1dGVfcGF0aBgBIAEoCRIS" + + "Cgp1cGxvYWRfdXJsGAIgASgJEg4KBm1ldGhvZBgDIAEoCRJBCgdoZWFkZXJzGAQgAygLMjAuYWdl" + + "bnQudjEuQXJ0aWZhY3RVcGxvYWRJbnN0cnVjdGlvbi5IZWFkZXJzRW50cnkSGQoMY29udGVudF90" + + "eXBlGAUgASgJSACIAQESHQoQc2xhY2tfdXBsb2FkX3VybBgGIAEoCUgBiAEBEhoKDXNsYWNrX2Zp" + + "bGVfaWQYByABKAlIAogBARouCgxIZWFkZXJzRW50cnkSCwoDa2V5GAEgASgJEg0KBXZhbHVlGAIg" + + "ASgJOgI4AUIPCg1fY29udGVudF90eXBlQhMKEV9zbGFja191cGxvYWRfdXJsQhAKDl9zbGFja19m" + + "aWxlX2lkIoQBChxBcnRpZmFjdFVwbG9hZERpc3BhdGNoUmVzdWx0EhUKDWFic29sdXRlX3BhdGgY" + + "ASABKAkSDgoGc3RhdHVzGAIgASgFEg8KB21lc3NhZ2UYAyABKAkSGgoNc2xhY2tfZmlsZV9pZBgE" + + "IAEoCUgAiAEBQhAKDl9zbGFja19maWxlX2lkIlIKF1VwbG9hZEFydGlmYWN0c1Jlc3BvbnNlEjcK" + + "B3Jlc3VsdHMYASADKAsyJi5hZ2VudC52MS5BcnRpZmFjdFVwbG9hZERpc3BhdGNoUmVzdWx0IhwK" + + 
"GkdldE1jcFJlZnJlc2hUb2tlbnNSZXF1ZXN0IqUBChtHZXRNY3BSZWZyZXNoVG9rZW5zUmVzcG9u" + + "c2USUAoOcmVmcmVzaF90b2tlbnMYASADKAsyOC5hZ2VudC52MS5HZXRNY3BSZWZyZXNoVG9rZW5z" + + "UmVzcG9uc2UuUmVmcmVzaFRva2Vuc0VudHJ5GjQKElJlZnJlc2hUb2tlbnNFbnRyeRILCgNrZXkY" + + "ASABKAkSDQoFdmFsdWUYAiABKAk6AjgBIqMBCiFVcGRhdGVFbnZpcm9ubWVudFZhcmlhYmxlc1Jl" + + "cXVlc3QSQQoDZW52GAEgAygLMjQuYWdlbnQudjEuVXBkYXRlRW52aXJvbm1lbnRWYXJpYWJsZXNS" + + "ZXF1ZXN0LkVudkVudHJ5Eg8KB3JlcGxhY2UYAiABKAgaKgoIRW52RW50cnkSCwoDa2V5GAEgASgJ" + + "Eg0KBXZhbHVlGAIgASgJOgI4ASJGCiJVcGRhdGVFbnZpcm9ubWVudFZhcmlhYmxlc1Jlc3BvbnNl" + + "Eg8KB2FwcGxpZWQYASABKA0SDwoHcmVtb3ZlZBgCIAEoDSKDAQoSTWNwT0F1dGhTdG9yZWREYXRh" + + "EhUKDXJlZnJlc2hfdG9rZW4YASABKAkSEQoJY2xpZW50X2lkGAIgASgJEhoKDWNsaWVudF9zZWNy" + + "ZXQYAyABKAlIAIgBARIVCg1yZWRpcmVjdF91cmlzGAQgAygJQhAKDl9jbGllbnRfc2VjcmV0Ik4K" + + "BUZyYW1lEgoKAmlkGAEgASgJEg4KBm1ldGhvZBgCIAEoCRIMCgRkYXRhGAMgASgMEgwKBGtpbmQY" + + "BCABKAUSDQoFZXJyb3IYBSABKAkiBwoFRW1wdHkiIwoNQmlkaVJlcXVlc3RJZBISCgpyZXF1ZXN0" + + "X2lkGAEgASgJKogBCh1BcHBsaWVkQWdlbnRDaGFuZ2VfQ2hhbmdlVHlwZRIbChdDSEFOR0VfVFlQ" + + "RV9VTlNQRUNJRklFRBAAEhcKE0NIQU5HRV9UWVBFX0NSRUFURUQQARIYChRDSEFOR0VfVFlQRV9N" + + "T0RJRklFRBACEhcKE0NIQU5HRV9UWVBFX0RFTEVURUQQAyqkAQoLTW91c2VCdXR0b24SHAoYTU9V" + + "U0VfQlVUVE9OX1VOU1BFQ0lGSUVEEAASFQoRTU9VU0VfQlVUVE9OX0xFRlQQARIWChJNT1VTRV9C" + + "VVRUT05fUklHSFQQAhIXChNNT1VTRV9CVVRUT05fTUlERExFEAMSFQoRTU9VU0VfQlVUVE9OX0JB" + + "Q0sQBBIYChRNT1VTRV9CVVRUT05fRk9SV0FSRBAFKp4BCg9TY3JvbGxEaXJlY3Rpb24SIAocU0NS" + + "T0xMX0RJUkVDVElPTl9VTlNQRUNJRklFRBAAEhcKE1NDUk9MTF9ESVJFQ1RJT05fVVAQARIZChVT" + + "Q1JPTExfRElSRUNUSU9OX0RPV04QAhIZChVTQ1JPTExfRElSRUNUSU9OX0xFRlQQAxIaChZTQ1JP" + + "TExfRElSRUNUSU9OX1JJR0hUEAQqcAoQQ3Vyc29yUnVsZVNvdXJjZRIiCh5DVVJTT1JfUlVMRV9T" + + "T1VSQ0VfVU5TUEVDSUZJRUQQABIbChdDVVJTT1JfUlVMRV9TT1VSQ0VfVEVBTRABEhsKF0NVUlNP" + + "Ul9SVUxFX1NPVVJDRV9VU0VSEAIqvAEKEkRpYWdub3N0aWNTZXZlcml0eRIjCh9ESUFHTk9TVElD" + + "X1NFVkVSSVRZX1VOU1BFQ0lGSUVEEAASHQoZRElBR05PU1RJQ19TRVZFUklUWV9FUlJPUhABEh8K" + + 
"G0RJQUdOT1NUSUNfU0VWRVJJVFlfV0FSTklORxACEiMKH0RJQUdOT1NUSUNfU0VWRVJJVFlfSU5G" + + "T1JNQVRJT04QAxIcChhESUFHTk9TVElDX1NFVkVSSVRZX0hJTlQQBCqcAQoNUmVjb3JkaW5nTW9k" + + "ZRIeChpSRUNPUkRJTkdfTU9ERV9VTlNQRUNJRklFRBAAEiIKHlJFQ09SRElOR19NT0RFX1NUQVJU" + + "X1JFQ09SRElORxABEiEKHVJFQ09SRElOR19NT0RFX1NBVkVfUkVDT1JESU5HEAISJAogUkVDT1JE" + + "SU5HX01PREVfRElTQ0FSRF9SRUNPUkRJTkcQAyqTAQofUmVxdWVzdGVkRmlsZVBhdGhSZWplY3Rl" + + "ZFJlYXNvbhIzCi9SRVFVRVNURURfRklMRV9QQVRIX1JFSkVDVEVEX1JFQVNPTl9VTlNQRUNJRklF" + + "RBAAEjsKN1JFUVVFU1RFRF9GSUxFX1BBVEhfUkVKRUNURURfUkVBU09OX1NMQVNIRVNfTk9UX0FM" + + "TE9XRUQQASqtAQoLUGFja2FnZVR5cGUSHAoYUEFDS0FHRV9UWVBFX1VOU1BFQ0lGSUVEEAASHwob" + + "UEFDS0FHRV9UWVBFX0NVUlNPUl9QUk9KRUNUEAESIAocUEFDS0FHRV9UWVBFX0NVUlNPUl9QRVJT" + + "T05BTBACEh0KGVBBQ0tBR0VfVFlQRV9DTEFVREVfU0tJTEwQAxIeChpQQUNLQUdFX1RZUEVfQ0xB" + + "VURFX1BMVUdJThAEKn0KElNhbmRib3hQb2xpY3lfVHlwZRIUChBUWVBFX1VOU1BFQ0lGSUVEEAAS" + + "FgoSVFlQRV9JTlNFQ1VSRV9OT05FEAESHAoYVFlQRV9XT1JLU1BBQ0VfUkVBRFdSSVRFEAISGwoX" + + "VFlQRV9XT1JLU1BBQ0VfUkVBRE9OTFkQAypxCg9UaW1lb3V0QmVoYXZpb3ISIAocVElNRU9VVF9C" + + "RUhBVklPUl9VTlNQRUNJRklFRBAAEhsKF1RJTUVPVVRfQkVIQVZJT1JfQ0FOQ0VMEAESHwobVElN" + + "RU9VVF9CRUhBVklPUl9CQUNLR1JPVU5EEAIqeQoQU2hlbGxBYm9ydFJlYXNvbhIiCh5TSEVMTF9B" + + "Qk9SVF9SRUFTT05fVU5TUEVDSUZJRUQQABIhCh1TSEVMTF9BQk9SVF9SRUFTT05fVVNFUl9BQk9S" + + "VBABEh4KGlNIRUxMX0FCT1JUX1JFQVNPTl9USU1FT1VUEAIqqgEKHEN1c3RvbVN1YmFnZW50UGVy" + + "bWlzc2lvbk1vZGUSLworQ1VTVE9NX1NVQkFHRU5UX1BFUk1JU1NJT05fTU9ERV9VTlNQRUNJRklF" + + "RBAAEisKJ0NVU1RPTV9TVUJBR0VOVF9QRVJNSVNTSU9OX01PREVfREVGQVVMVBABEiwKKENVU1RP" + + "TV9TVUJBR0VOVF9QRVJNSVNTSU9OX01PREVfUkVBRE9OTFkQAiqVAQoKVG9kb1N0YXR1cxIbChdU" + + "T0RPX1NUQVRVU19VTlNQRUNJRklFRBAAEhcKE1RPRE9fU1RBVFVTX1BFTkRJTkcQARIbChdUT0RP" + + "X1NUQVRVU19JTl9QUk9HUkVTUxACEhkKFVRPRE9fU1RBVFVTX0NPTVBMRVRFRBADEhkKFVRPRE9f" + + "U1RBVFVTX0NBTkNFTExFRBAEKmYKCENsaWVudE9TEhkKFUNMSUVOVF9PU19VTlNQRUNJRklFRBAA" + + "EhUKEUNMSUVOVF9PU19XSU5ET1dTEAESEwoPQ0xJRU5UX09TX01BQ09TEAISEwoPQ0xJRU5UX09T" + + 
"X0xJTlVYEAMq7AEKHEFydGlmYWN0VXBsb2FkRGlzcGF0Y2hTdGF0dXMSLworQVJUSUZBQ1RfVVBM" + + "T0FEX0RJU1BBVENIX1NUQVRVU19VTlNQRUNJRklFRBAAEiwKKEFSVElGQUNUX1VQTE9BRF9ESVNQ" + + "QVRDSF9TVEFUVVNfQUNDRVBURUQQARIsCihBUlRJRkFDVF9VUExPQURfRElTUEFUQ0hfU1RBVFVT" + + "X1JFSkVDVEVEEAISPwo7QVJUSUZBQ1RfVVBMT0FEX0RJU1BBVENIX1NUQVRVU19TS0lQUEVEX0FM" + + "UkVBRFlfSU5fUFJPR1JFU1MQAypXCgpGcmFtZV9LaW5kEhQKEEtJTkRfVU5TUEVDSUZJRUQQABIQ" + + "CgxLSU5EX1JFUVVFU1QQARIRCg1LSU5EX1JFU1BPTlNFEAISDgoKS0lORF9FUlJPUhADKrACChdC" + + "dWdib3REZWVwbGlua0V2ZW50S2luZBIqCiZCVUdCT1RfREVFUExJTktfRVZFTlRfS0lORF9VTlNQ" + + "RUNJRklFRBAAEiYKIkJVR0JPVF9ERUVQTElOS19FVkVOVF9LSU5EX0NMSUNLRUQQARIzCi9CVUdC" + + "T1RfREVFUExJTktfRVZFTlRfS0lORF9IQU5ETEVEX0RJQUxPR19TSE9XThACEjMKL0JVR0JPVF9E" + + "RUVQTElOS19FVkVOVF9LSU5EX0hBTkRMRURfQ0hBVF9DUkVBVEVEEAMSJAogQlVHQk9UX0RFRVBM" + + "SU5LX0VWRU5UX0tJTkRfRVJST1IQBBIxCi1CVUdCT1RfREVFUExJTktfRVZFTlRfS0lORF9IQU5E" + + "TEVEX0ZJWF9JTl9XRUIQBTKHBAoMQWdlbnRTZXJ2aWNlEkEKA1J1bhIcLmFnZW50LnYxLkFnZW50" + + "Q2xpZW50TWVzc2FnZRocLmFnZW50LnYxLkFnZW50U2VydmVyTWVzc2FnZRI/CgZSdW5TU0USFy5h" + + "Z2VudC52MS5CaWRpUmVxdWVzdElkGhwuYWdlbnQudjEuQWdlbnRTZXJ2ZXJNZXNzYWdlEkQKCU5h" + + "bWVBZ2VudBIaLmFnZW50LnYxLk5hbWVBZ2VudFJlcXVlc3QaGy5hZ2VudC52MS5OYW1lQWdlbnRS" + + "ZXNwb25zZRJWCg9HZXRVc2FibGVNb2RlbHMSIC5hZ2VudC52MS5HZXRVc2FibGVNb2RlbHNSZXF1" + + "ZXN0GiEuYWdlbnQudjEuR2V0VXNhYmxlTW9kZWxzUmVzcG9uc2USaAoVR2V0RGVmYXVsdE1vZGVs" + + "Rm9yQ2xpEiYuYWdlbnQudjEuR2V0RGVmYXVsdE1vZGVsRm9yQ2xpUmVxdWVzdBonLmFnZW50LnYx" + + "LkdldERlZmF1bHRNb2RlbEZvckNsaVJlc3BvbnNlEmsKFkdldEFsbG93ZWRNb2RlbEludGVudHMS" + + "Jy5hZ2VudC52MS5HZXRBbGxvd2VkTW9kZWxJbnRlbnRzUmVxdWVzdBooLmFnZW50LnYxLkdldEFs" + + "bG93ZWRNb2RlbEludGVudHNSZXNwb25zZTK1CAoOQ29udHJvbFNlcnZpY2USTQoMUmVhZFRleHRG" + + "aWxlEh0uYWdlbnQudjEuUmVhZFRleHRGaWxlUmVxdWVzdBoeLmFnZW50LnYxLlJlYWRUZXh0Rmls" + + "ZVJlc3BvbnNlElAKDVdyaXRlVGV4dEZpbGUSHi5hZ2VudC52MS5Xcml0ZVRleHRGaWxlUmVxdWVz" + + "dBofLmFnZW50LnYxLldyaXRlVGV4dEZpbGVSZXNwb25zZRJTCg5SZWFkQmluYXJ5RmlsZRIfLmFn" + + 
"ZW50LnYxLlJlYWRCaW5hcnlGaWxlUmVxdWVzdBogLmFnZW50LnYxLlJlYWRCaW5hcnlGaWxlUmVz" + + "cG9uc2USVgoPV3JpdGVCaW5hcnlGaWxlEiAuYWdlbnQudjEuV3JpdGVCaW5hcnlGaWxlUmVxdWVz" + + "dBohLmFnZW50LnYxLldyaXRlQmluYXJ5RmlsZVJlc3BvbnNlEm4KF0dldFdvcmtzcGFjZUNoYW5n" + + "ZXNIYXNoEiguYWdlbnQudjEuR2V0V29ya3NwYWNlQ2hhbmdlc0hhc2hSZXF1ZXN0GikuYWdlbnQu" + + "djEuR2V0V29ya3NwYWNlQ2hhbmdlc0hhc2hSZXNwb25zZRJxChhSZWZyZXNoR2l0aHViQWNjZXNz" + + "VG9rZW4SKS5hZ2VudC52MS5SZWZyZXNoR2l0aHViQWNjZXNzVG9rZW5SZXF1ZXN0GiouYWdlbnQu" + + "djEuUmVmcmVzaEdpdGh1YkFjY2Vzc1Rva2VuUmVzcG9uc2USawoWV2FybVJlbW90ZUFjY2Vzc1Nl" + + "cnZlchInLmFnZW50LnYxLldhcm1SZW1vdGVBY2Nlc3NTZXJ2ZXJSZXF1ZXN0GiguYWdlbnQudjEu" + + "V2FybVJlbW90ZUFjY2Vzc1NlcnZlclJlc3BvbnNlElAKDUxpc3RBcnRpZmFjdHMSHi5hZ2VudC52" + + "MS5MaXN0QXJ0aWZhY3RzUmVxdWVzdBofLmFnZW50LnYxLkxpc3RBcnRpZmFjdHNSZXNwb25zZRJW" + + "Cg9VcGxvYWRBcnRpZmFjdHMSIC5hZ2VudC52MS5VcGxvYWRBcnRpZmFjdHNSZXF1ZXN0GiEuYWdl" + + "bnQudjEuVXBsb2FkQXJ0aWZhY3RzUmVzcG9uc2USYgoTR2V0TWNwUmVmcmVzaFRva2VucxIkLmFn" + + "ZW50LnYxLkdldE1jcFJlZnJlc2hUb2tlbnNSZXF1ZXN0GiUuYWdlbnQudjEuR2V0TWNwUmVmcmVz" + + "aFRva2Vuc1Jlc3BvbnNlEncKGlVwZGF0ZUVudmlyb25tZW50VmFyaWFibGVzEisuYWdlbnQudjEu" + + "VXBkYXRlRW52aXJvbm1lbnRWYXJpYWJsZXNSZXF1ZXN0GiwuYWdlbnQudjEuVXBkYXRlRW52aXJv" + + "bm1lbnRWYXJpYWJsZXNSZXNwb25zZTINCgtFeGVjU2VydmljZTJRCiJQcml2YXRlV29ya2VyQnJp" + + "ZGdlRXh0ZXJuYWxTZXJ2aWNlEisKB0Nvbm5lY3QSDy5hZ2VudC52MS5GcmFtZRoPLmFnZW50LnYx" + + "LkZyYW1lMngKEExpZmVjeWNsZVNlcnZpY2USMQoNUmVzZXRJbnN0YW5jZRIPLmFnZW50LnYxLkVt" + + "cHR5Gg8uYWdlbnQudjEuRW1wdHkSMQoNUmVuZXdJbnN0YW5jZRIPLmFnZW50LnYxLkVtcHR5Gg8u" + + "YWdlbnQudjEuRW1wdHliBnByb3RvMw==" + +var ( + fileDescOnce sync.Once + fileDesc protoreflect.FileDescriptor +) + +// AgentFileDescriptor returns the parsed FileDescriptor for agent.proto. 
+func AgentFileDescriptor() protoreflect.FileDescriptor { + fileDescOnce.Do(func() { + raw, err := base64.StdEncoding.DecodeString(agentDescriptorB64) + if err != nil { + panic("cursor proto: failed to decode descriptor: " + err.Error()) + } + fdp := &descriptorpb.FileDescriptorProto{} + if err := proto.Unmarshal(raw, fdp); err != nil { + panic("cursor proto: failed to unmarshal descriptor: " + err.Error()) + } + fd, err := protodesc.NewFile(fdp, nil) + if err != nil { + panic("cursor proto: failed to create file descriptor: " + err.Error()) + } + fileDesc = fd + }) + return fileDesc +} + +// Msg returns the MessageDescriptor for a top-level message by name. +func Msg(name string) protoreflect.MessageDescriptor { + md := AgentFileDescriptor().Messages().ByName(protoreflect.Name(name)) + if md == nil { + panic("cursor proto: message not found: " + name) + } + return md +} diff --git a/internal/auth/cursor/proto/encode.go b/internal/auth/cursor/proto/encode.go new file mode 100644 index 0000000000..b1be6551c2 --- /dev/null +++ b/internal/auth/cursor/proto/encode.go @@ -0,0 +1,664 @@ +// Package proto provides protobuf encoding for Cursor's gRPC API, +// using dynamicpb with the embedded FileDescriptorProto from agent.proto. +// This mirrors the cursor-auth TS plugin's use of @bufbuild/protobuf create()+toBinary(). +package proto + +import ( + "crypto/sha256" + "encoding/hex" + "encoding/json" + "fmt" + + log "github.com/sirupsen/logrus" + "google.golang.org/protobuf/encoding/protowire" + "google.golang.org/protobuf/proto" + "google.golang.org/protobuf/reflect/protoreflect" + "google.golang.org/protobuf/types/dynamicpb" + "google.golang.org/protobuf/types/known/structpb" +) + +// --- Public types --- + +// RunRequestParams holds all data needed to build an AgentRunRequest.
+type RunRequestParams struct { + ModelId string + SystemPrompt string + UserText string + MessageId string + ConversationId string + Images []ImageData + Turns []TurnData + McpTools []McpToolDef + BlobStore map[string][]byte // hex(sha256) -> data, populated during encoding + RawCheckpoint []byte // if non-nil, use as conversation_state directly (from server checkpoint) +} + +type ImageData struct { + MimeType string + Data []byte +} + +type TurnData struct { + UserText string + AssistantText string +} + +type McpToolDef struct { + Name string + Description string + InputSchema json.RawMessage +} + +// --- Helper: create a dynamic message and set fields --- + +func newMsg(name string) *dynamicpb.Message { + return dynamicpb.NewMessage(Msg(name)) +} + +func field(msg *dynamicpb.Message, name string) protoreflect.FieldDescriptor { + return msg.Descriptor().Fields().ByName(protoreflect.Name(name)) +} + +func setStr(msg *dynamicpb.Message, name, val string) { + if val != "" { + msg.Set(field(msg, name), protoreflect.ValueOfString(val)) + } +} + +func setBytes(msg *dynamicpb.Message, name string, val []byte) { + if len(val) > 0 { + msg.Set(field(msg, name), protoreflect.ValueOfBytes(val)) + } +} + +func setUint32(msg *dynamicpb.Message, name string, val uint32) { + msg.Set(field(msg, name), protoreflect.ValueOfUint32(val)) +} + +func setBool(msg *dynamicpb.Message, name string, val bool) { + msg.Set(field(msg, name), protoreflect.ValueOfBool(val)) +} + +func setMsg(msg *dynamicpb.Message, name string, sub *dynamicpb.Message) { + msg.Set(field(msg, name), protoreflect.ValueOfMessage(sub.ProtoReflect())) +} + +func marshal(msg *dynamicpb.Message) []byte { + b, err := proto.Marshal(msg) + if err != nil { + panic("cursor proto marshal: " + err.Error()) + } + return b +} + +// --- Encode functions mirroring cursor-fetch.ts --- + +// EncodeHeartbeat returns an encoded AgentClientMessage with clientHeartbeat. 
+// Mirrors: create(AgentClientMessageSchema, { message: { case: 'clientHeartbeat', value: create(ClientHeartbeatSchema, {}) } })
+func EncodeHeartbeat() []byte {
+	hb := newMsg("ClientHeartbeat")
+	acm := newMsg("AgentClientMessage")
+	setMsg(acm, "client_heartbeat", hb)
+	return marshal(acm)
+}
+
+// EncodeRunRequest builds a full AgentClientMessage wrapping an AgentRunRequest.
+// Mirrors buildCursorRequest() in cursor-fetch.ts.
+// If p.RawCheckpoint is set, it is used directly as the conversation_state bytes
+// (from a previous conversation_checkpoint_update), skipping manual turn construction.
+func EncodeRunRequest(p *RunRequestParams) []byte {
+	if p.RawCheckpoint != nil {
+		return encodeRunRequestWithCheckpoint(p)
+	}
+
+	if p.BlobStore == nil {
+		p.BlobStore = make(map[string][]byte)
+	}
+
+	// --- Conversation turns ---
+	// Each turn is serialized as bytes (ConversationTurnStructure → bytes)
+	var turnBytes [][]byte
+	for _, turn := range p.Turns {
+		// UserMessage for this turn
+		um := newMsg("UserMessage")
+		setStr(um, "text", turn.UserText)
+		setStr(um, "message_id", generateId())
+		umBytes := marshal(um)
+
+		// Steps (assistant response)
+		var stepBytes [][]byte
+		if turn.AssistantText != "" {
+			am := newMsg("AssistantMessage")
+			setStr(am, "text", turn.AssistantText)
+			step := newMsg("ConversationStep")
+			setMsg(step, "assistant_message", am)
+			stepBytes = append(stepBytes, marshal(step))
+		}
+
+		// AgentConversationTurnStructure (fields are bytes, not submessages)
+		agentTurn := newMsg("AgentConversationTurnStructure")
+		setBytes(agentTurn, "user_message", umBytes)
+		stepsField := field(agentTurn, "steps")
+		stepsList := agentTurn.Mutable(stepsField).List()
+		for _, sb := range stepBytes {
+			stepsList.Append(protoreflect.ValueOfBytes(sb))
+		}
+
+		// ConversationTurnStructure (oneof turn → agentConversationTurn)
+		cts := newMsg("ConversationTurnStructure")
+		setMsg(cts, "agent_conversation_turn", agentTurn)
+		turnBytes = append(turnBytes, marshal(cts))
+	}
+
+	// --- System prompt blob ---
+	systemJSON, _ := json.Marshal(map[string]string{"role": "system", "content": p.SystemPrompt})
+	blobId := sha256Sum(systemJSON)
+	p.BlobStore[hex.EncodeToString(blobId)] = systemJSON
+
+	// --- ConversationStateStructure ---
+	css := newMsg("ConversationStateStructure")
+	// rootPromptMessagesJson: repeated bytes
+	rootField := field(css, "root_prompt_messages_json")
+	rootList := css.Mutable(rootField).List()
+	rootList.Append(protoreflect.ValueOfBytes(blobId))
+	// turns: repeated bytes (field 8) + turns_old (field 2) for compatibility
+	turnsField := field(css, "turns")
+	turnsList := css.Mutable(turnsField).List()
+	for _, tb := range turnBytes {
+		turnsList.Append(protoreflect.ValueOfBytes(tb))
+	}
+	turnsOldField := field(css, "turns_old")
+	if turnsOldField != nil {
+		turnsOldList := css.Mutable(turnsOldField).List()
+		for _, tb := range turnBytes {
+			turnsOldList.Append(protoreflect.ValueOfBytes(tb))
+		}
+	}
+
+	// --- UserMessage (current) ---
+	userMessage := newMsg("UserMessage")
+	setStr(userMessage, "text", p.UserText)
+	setStr(userMessage, "message_id", p.MessageId)
+
+	// Images via SelectedContext
+	if len(p.Images) > 0 {
+		sc := newMsg("SelectedContext")
+		imgsField := field(sc, "selected_images")
+		imgsList := sc.Mutable(imgsField).List()
+		for _, img := range p.Images {
+			si := newMsg("SelectedImage")
+			setStr(si, "uuid", generateId())
+			setStr(si, "mime_type", img.MimeType)
+			setBytes(si, "data", img.Data)
+			imgsList.Append(protoreflect.ValueOfMessage(si.ProtoReflect()))
+		}
+		setMsg(userMessage, "selected_context", sc)
+	}
+
+	// --- UserMessageAction ---
+	uma := newMsg("UserMessageAction")
+	setMsg(uma, "user_message", userMessage)
+
+	// --- ConversationAction ---
+	ca := newMsg("ConversationAction")
+	setMsg(ca, "user_message_action", uma)
+
+	// --- ModelDetails ---
+	md := newMsg("ModelDetails")
+	setStr(md, "model_id", p.ModelId)
+	setStr(md, "display_model_id", p.ModelId)
+	setStr(md, "display_name", p.ModelId)
+
+	// --- AgentRunRequest ---
+	arr := newMsg("AgentRunRequest")
+	setMsg(arr, "conversation_state", css)
+	setMsg(arr, "action", ca)
+	setMsg(arr, "model_details", md)
+	setStr(arr, "conversation_id", p.ConversationId)
+
+	// McpTools
+	if len(p.McpTools) > 0 {
+		mcpTools := newMsg("McpTools")
+		toolsField := field(mcpTools, "mcp_tools")
+		toolsList := mcpTools.Mutable(toolsField).List()
+		for _, tool := range p.McpTools {
+			td := newMsg("McpToolDefinition")
+			setStr(td, "name", tool.Name)
+			setStr(td, "description", tool.Description)
+			if len(tool.InputSchema) > 0 {
+				setBytes(td, "input_schema", jsonToProtobufValueBytes(tool.InputSchema))
+			}
+			setStr(td, "provider_identifier", "proxy")
+			setStr(td, "tool_name", tool.Name)
+			toolsList.Append(protoreflect.ValueOfMessage(td.ProtoReflect()))
+		}
+		setMsg(arr, "mcp_tools", mcpTools)
+	}
+
+	// --- AgentClientMessage ---
+	acm := newMsg("AgentClientMessage")
+	setMsg(acm, "run_request", arr)
+
+	return marshal(acm)
+}
+
+// encodeRunRequestWithCheckpoint builds an AgentClientMessage using a raw checkpoint
+// as conversation_state. The checkpoint bytes are embedded directly without deserialization.
+func encodeRunRequestWithCheckpoint(p *RunRequestParams) []byte {
+	// Build UserMessage
+	userMessage := newMsg("UserMessage")
+	setStr(userMessage, "text", p.UserText)
+	setStr(userMessage, "message_id", p.MessageId)
+	if len(p.Images) > 0 {
+		sc := newMsg("SelectedContext")
+		imgsField := field(sc, "selected_images")
+		imgsList := sc.Mutable(imgsField).List()
+		for _, img := range p.Images {
+			si := newMsg("SelectedImage")
+			setStr(si, "uuid", generateId())
+			setStr(si, "mime_type", img.MimeType)
+			setBytes(si, "data", img.Data)
+			imgsList.Append(protoreflect.ValueOfMessage(si.ProtoReflect()))
+		}
+		setMsg(userMessage, "selected_context", sc)
+	}
+
+	// Build ConversationAction with UserMessageAction
+	uma := newMsg("UserMessageAction")
+	setMsg(uma, "user_message", userMessage)
+	ca := newMsg("ConversationAction")
+	setMsg(ca, "user_message_action", uma)
+	caBytes := marshal(ca)
+
+	// Build ModelDetails
+	md := newMsg("ModelDetails")
+	setStr(md, "model_id", p.ModelId)
+	setStr(md, "display_model_id", p.ModelId)
+	setStr(md, "display_name", p.ModelId)
+	mdBytes := marshal(md)
+
+	// Build McpTools
+	var mcpToolsBytes []byte
+	if len(p.McpTools) > 0 {
+		mcpTools := newMsg("McpTools")
+		toolsField := field(mcpTools, "mcp_tools")
+		toolsList := mcpTools.Mutable(toolsField).List()
+		for _, tool := range p.McpTools {
+			td := newMsg("McpToolDefinition")
+			setStr(td, "name", tool.Name)
+			setStr(td, "description", tool.Description)
+			if len(tool.InputSchema) > 0 {
+				setBytes(td, "input_schema", jsonToProtobufValueBytes(tool.InputSchema))
+			}
+			setStr(td, "provider_identifier", "proxy")
+			setStr(td, "tool_name", tool.Name)
+			toolsList.Append(protoreflect.ValueOfMessage(td.ProtoReflect()))
+		}
+		mcpToolsBytes = marshal(mcpTools)
+	}
+
+	// Manually assemble AgentRunRequest using protowire to embed the raw checkpoint
+	var arrBuf []byte
+	// field 1: conversation_state = raw checkpoint bytes (length-delimited)
+	arrBuf = protowire.AppendTag(arrBuf, ARR_ConversationState, protowire.BytesType)
+	arrBuf = protowire.AppendBytes(arrBuf, p.RawCheckpoint)
+	// field 2: action = ConversationAction
+	arrBuf = protowire.AppendTag(arrBuf, ARR_Action, protowire.BytesType)
+	arrBuf = protowire.AppendBytes(arrBuf, caBytes)
+	// field 3: model_details = ModelDetails
+	arrBuf = protowire.AppendTag(arrBuf, ARR_ModelDetails, protowire.BytesType)
+	arrBuf = protowire.AppendBytes(arrBuf, mdBytes)
+	// field 4: mcp_tools = McpTools
+	if len(mcpToolsBytes) > 0 {
+		arrBuf = protowire.AppendTag(arrBuf, ARR_McpTools, protowire.BytesType)
+		arrBuf = protowire.AppendBytes(arrBuf, mcpToolsBytes)
+	}
+	// field 5: conversation_id = string
+	if p.ConversationId != "" {
+		arrBuf = protowire.AppendTag(arrBuf, ARR_ConversationId, protowire.BytesType)
+		arrBuf = protowire.AppendString(arrBuf, p.ConversationId)
+	}
+
+	// Wrap in AgentClientMessage field 1 (run_request)
+	var acmBuf []byte
+	acmBuf = protowire.AppendTag(acmBuf, ACM_RunRequest, protowire.BytesType)
+	acmBuf = protowire.AppendBytes(acmBuf, arrBuf)
+
+	log.Debugf("cursor encode: built RunRequest with checkpoint (%d bytes), total=%d bytes", len(p.RawCheckpoint), len(acmBuf))
+	return acmBuf
+}
+
+// ResumeRequestParams holds data for a ResumeAction request.
+type ResumeRequestParams struct {
+	ModelId        string
+	ConversationId string
+	McpTools       []McpToolDef
+}
+
+// EncodeResumeRequest builds an AgentClientMessage with ResumeAction.
+// Used to resume a conversation by conversation_id without re-sending full history.
+func EncodeResumeRequest(p *ResumeRequestParams) []byte {
+	// RequestContext with tools
+	rc := newMsg("RequestContext")
+	if len(p.McpTools) > 0 {
+		toolsField := field(rc, "tools")
+		toolsList := rc.Mutable(toolsField).List()
+		for _, tool := range p.McpTools {
+			td := newMsg("McpToolDefinition")
+			setStr(td, "name", tool.Name)
+			setStr(td, "description", tool.Description)
+			if len(tool.InputSchema) > 0 {
+				setBytes(td, "input_schema", jsonToProtobufValueBytes(tool.InputSchema))
+			}
+			setStr(td, "provider_identifier", "proxy")
+			setStr(td, "tool_name", tool.Name)
+			toolsList.Append(protoreflect.ValueOfMessage(td.ProtoReflect()))
+		}
+	}
+
+	// ResumeAction
+	ra := newMsg("ResumeAction")
+	setMsg(ra, "request_context", rc)
+
+	// ConversationAction with resume_action
+	ca := newMsg("ConversationAction")
+	setMsg(ca, "resume_action", ra)
+
+	// ModelDetails
+	md := newMsg("ModelDetails")
+	setStr(md, "model_id", p.ModelId)
+	setStr(md, "display_model_id", p.ModelId)
+	setStr(md, "display_name", p.ModelId)
+
+	// AgentRunRequest — no conversation_state needed for resume
+	arr := newMsg("AgentRunRequest")
+	setMsg(arr, "action", ca)
+	setMsg(arr, "model_details", md)
+	setStr(arr, "conversation_id", p.ConversationId)
+
+	// McpTools at top level
+	if len(p.McpTools) > 0 {
+		mcpTools := newMsg("McpTools")
+		toolsField := field(mcpTools, "mcp_tools")
+		toolsList := mcpTools.Mutable(toolsField).List()
+		for _, tool := range p.McpTools {
+			td := newMsg("McpToolDefinition")
+			setStr(td, "name", tool.Name)
+			setStr(td, "description", tool.Description)
+			if len(tool.InputSchema) > 0 {
+				setBytes(td, "input_schema", jsonToProtobufValueBytes(tool.InputSchema))
+			}
+			setStr(td, "provider_identifier", "proxy")
+			setStr(td, "tool_name", tool.Name)
+			toolsList.Append(protoreflect.ValueOfMessage(td.ProtoReflect()))
+		}
+		setMsg(arr, "mcp_tools", mcpTools)
+	}
+
+	acm := newMsg("AgentClientMessage")
+	setMsg(acm, "run_request", arr)
+	return marshal(acm)
+}
+
+// --- KV response encoders ---
+// Mirrors handleKvMessage() in cursor-fetch.ts
+
+// EncodeKvGetBlobResult responds to a getBlobArgs request.
+func EncodeKvGetBlobResult(kvId uint32, blobData []byte) []byte {
+	result := newMsg("GetBlobResult")
+	if blobData != nil {
+		setBytes(result, "blob_data", blobData)
+	}
+
+	kvc := newMsg("KvClientMessage")
+	setUint32(kvc, "id", kvId)
+	setMsg(kvc, "get_blob_result", result)
+
+	acm := newMsg("AgentClientMessage")
+	setMsg(acm, "kv_client_message", kvc)
+	return marshal(acm)
+}
+
+// EncodeKvSetBlobResult responds to a setBlobArgs request.
+func EncodeKvSetBlobResult(kvId uint32) []byte {
+	result := newMsg("SetBlobResult")
+
+	kvc := newMsg("KvClientMessage")
+	setUint32(kvc, "id", kvId)
+	setMsg(kvc, "set_blob_result", result)
+
+	acm := newMsg("AgentClientMessage")
+	setMsg(acm, "kv_client_message", kvc)
+	return marshal(acm)
+}
+
+// --- Exec response encoders ---
+// Mirrors handleExecMessage() and sendExec() in cursor-fetch.ts
+
+// EncodeExecRequestContextResult responds to requestContextArgs with tool definitions.
+func EncodeExecRequestContextResult(execMsgId uint32, execId string, tools []McpToolDef) []byte {
+	// RequestContext with tools
+	rc := newMsg("RequestContext")
+	if len(tools) > 0 {
+		toolsField := field(rc, "tools")
+		toolsList := rc.Mutable(toolsField).List()
+		for _, tool := range tools {
+			td := newMsg("McpToolDefinition")
+			setStr(td, "name", tool.Name)
+			setStr(td, "description", tool.Description)
+			if len(tool.InputSchema) > 0 {
+				setBytes(td, "input_schema", jsonToProtobufValueBytes(tool.InputSchema))
+			}
+			setStr(td, "provider_identifier", "proxy")
+			setStr(td, "tool_name", tool.Name)
+			toolsList.Append(protoreflect.ValueOfMessage(td.ProtoReflect()))
+		}
+	}
+
+	// RequestContextSuccess
+	rcs := newMsg("RequestContextSuccess")
+	setMsg(rcs, "request_context", rc)
+
+	// RequestContextResult (oneof success)
+	rcr := newMsg("RequestContextResult")
+	setMsg(rcr, "success", rcs)
+
+	return encodeExecClientMsg(execMsgId, execId, "request_context_result", rcr)
+}
+
+// EncodeExecMcpResult responds with MCP tool result.
+func EncodeExecMcpResult(execMsgId uint32, execId string, content string, isError bool) []byte {
+	textContent := newMsg("McpTextContent")
+	setStr(textContent, "text", content)
+
+	contentItem := newMsg("McpToolResultContentItem")
+	setMsg(contentItem, "text", textContent)
+
+	success := newMsg("McpSuccess")
+	contentField := field(success, "content")
+	contentList := success.Mutable(contentField).List()
+	contentList.Append(protoreflect.ValueOfMessage(contentItem.ProtoReflect()))
+	setBool(success, "is_error", isError)
+
+	result := newMsg("McpResult")
+	setMsg(result, "success", success)
+
+	return encodeExecClientMsg(execMsgId, execId, "mcp_result", result)
+}
+
+// EncodeExecMcpError responds with MCP error.
+func EncodeExecMcpError(execMsgId uint32, execId string, errMsg string) []byte {
+	mcpErr := newMsg("McpError")
+	setStr(mcpErr, "error", errMsg)
+
+	result := newMsg("McpResult")
+	setMsg(result, "error", mcpErr)
+
+	return encodeExecClientMsg(execMsgId, execId, "mcp_result", result)
+}
+
+// --- Rejection encoders (mirror handleExecMessage rejections) ---
+
+func EncodeExecReadRejected(execMsgId uint32, execId string, path, reason string) []byte {
+	rej := newMsg("ReadRejected")
+	setStr(rej, "path", path)
+	setStr(rej, "reason", reason)
+	result := newMsg("ReadResult")
+	setMsg(result, "rejected", rej)
+	return encodeExecClientMsg(execMsgId, execId, "read_result", result)
+}
+
+func EncodeExecShellRejected(execMsgId uint32, execId string, command, workDir, reason string) []byte {
+	rej := newMsg("ShellRejected")
+	setStr(rej, "command", command)
+	setStr(rej, "working_directory", workDir)
+	setStr(rej, "reason", reason)
+	result := newMsg("ShellResult")
+	setMsg(result, "rejected", rej)
+	return encodeExecClientMsg(execMsgId, execId, "shell_result", result)
+}
+
+func EncodeExecWriteRejected(execMsgId uint32, execId string, path, reason string) []byte {
+	rej := newMsg("WriteRejected")
+	setStr(rej, "path", path)
+	setStr(rej, "reason", reason)
+	result := newMsg("WriteResult")
+	setMsg(result, "rejected", rej)
+	return encodeExecClientMsg(execMsgId, execId, "write_result", result)
+}
+
+func EncodeExecDeleteRejected(execMsgId uint32, execId string, path, reason string) []byte {
+	rej := newMsg("DeleteRejected")
+	setStr(rej, "path", path)
+	setStr(rej, "reason", reason)
+	result := newMsg("DeleteResult")
+	setMsg(result, "rejected", rej)
+	return encodeExecClientMsg(execMsgId, execId, "delete_result", result)
+}
+
+func EncodeExecLsRejected(execMsgId uint32, execId string, path, reason string) []byte {
+	rej := newMsg("LsRejected")
+	setStr(rej, "path", path)
+	setStr(rej, "reason", reason)
+	result := newMsg("LsResult")
+	setMsg(result, "rejected", rej)
+	return encodeExecClientMsg(execMsgId, execId, "ls_result", result)
+}
+
+func EncodeExecGrepError(execMsgId uint32, execId string, errMsg string) []byte {
+	grepErr := newMsg("GrepError")
+	setStr(grepErr, "error", errMsg)
+	result := newMsg("GrepResult")
+	setMsg(result, "error", grepErr)
+	return encodeExecClientMsg(execMsgId, execId, "grep_result", result)
+}
+
+func EncodeExecFetchError(execMsgId uint32, execId string, url, errMsg string) []byte {
+	fetchErr := newMsg("FetchError")
+	setStr(fetchErr, "url", url)
+	setStr(fetchErr, "error", errMsg)
+	result := newMsg("FetchResult")
+	setMsg(result, "error", fetchErr)
+	return encodeExecClientMsg(execMsgId, execId, "fetch_result", result)
+}
+
+func EncodeExecDiagnosticsResult(execMsgId uint32, execId string) []byte {
+	result := newMsg("DiagnosticsResult")
+	return encodeExecClientMsg(execMsgId, execId, "diagnostics_result", result)
+}
+
+func EncodeExecBackgroundShellSpawnRejected(execMsgId uint32, execId string, command, workDir, reason string) []byte {
+	rej := newMsg("ShellRejected")
+	setStr(rej, "command", command)
+	setStr(rej, "working_directory", workDir)
+	setStr(rej, "reason", reason)
+	result := newMsg("BackgroundShellSpawnResult")
+	setMsg(result, "rejected", rej)
+	return encodeExecClientMsg(execMsgId, execId, "background_shell_spawn_result", result)
+}
+
+func EncodeExecWriteShellStdinError(execMsgId uint32, execId string, errMsg string) []byte {
+	wsErr := newMsg("WriteShellStdinError")
+	setStr(wsErr, "error", errMsg)
+	result := newMsg("WriteShellStdinResult")
+	setMsg(result, "error", wsErr)
+	return encodeExecClientMsg(execMsgId, execId, "write_shell_stdin_result", result)
+}
+
+// encodeExecClientMsg wraps an exec result in AgentClientMessage.
+// Mirrors sendExec() in cursor-fetch.ts.
+func encodeExecClientMsg(id uint32, execId string, resultFieldName string, resultMsg *dynamicpb.Message) []byte {
+	ecm := newMsg("ExecClientMessage")
+	setUint32(ecm, "id", id)
+	// Force set exec_id even if empty - Cursor requires this field to be set
+	ecm.Set(field(ecm, "exec_id"), protoreflect.ValueOfString(execId))
+
+	// Debug: check if field exists
+	fd := field(ecm, resultFieldName)
+	if fd == nil {
+		panic(fmt.Sprintf("field %q NOT FOUND in ExecClientMessage! Available fields: %v", resultFieldName, listFields(ecm)))
+	}
+
+	// Debug: log the actual field being set
+	log.Debugf("encodeExecClientMsg: setting field %q (number=%d, kind=%s)", fd.Name(), fd.Number(), fd.Kind())
+
+	ecm.Set(fd, protoreflect.ValueOfMessage(resultMsg.ProtoReflect()))
+
+	acm := newMsg("AgentClientMessage")
+	setMsg(acm, "exec_client_message", ecm)
+	return marshal(acm)
+}
+
+func listFields(msg *dynamicpb.Message) []string {
+	var names []string
+	for i := 0; i < msg.Descriptor().Fields().Len(); i++ {
+		names = append(names, string(msg.Descriptor().Fields().Get(i).Name()))
+	}
+	return names
+}
+
+// --- Utilities ---
+
+// jsonToProtobufValueBytes converts a JSON schema (json.RawMessage) to protobuf Value binary.
+// This mirrors the TS pattern: toBinary(ValueSchema, fromJson(ValueSchema, jsonSchema))
+func jsonToProtobufValueBytes(jsonData json.RawMessage) []byte {
+	if len(jsonData) == 0 {
+		return nil
+	}
+	var v interface{}
+	if err := json.Unmarshal(jsonData, &v); err != nil {
+		return jsonData // fallback to raw JSON if parsing fails
+	}
+	pbVal, err := structpb.NewValue(v)
+	if err != nil {
+		return jsonData // fallback
+	}
+	b, err := proto.Marshal(pbVal)
+	if err != nil {
+		return jsonData // fallback
+	}
+	return b
+}
+
+// ProtobufValueBytesToJSON converts protobuf Value binary back to JSON.
+// This mirrors the TS pattern: toJson(ValueSchema, fromBinary(ValueSchema, value))
+func ProtobufValueBytesToJSON(data []byte) (interface{}, error) {
+	val := &structpb.Value{}
+	if err := proto.Unmarshal(data, val); err != nil {
+		return nil, err
+	}
+	return val.AsInterface(), nil
+}
+
+func sha256Sum(data []byte) []byte {
+	h := sha256.Sum256(data)
+	return h[:]
+}
+
+var idCounter uint64
+
+func generateId() string {
+	idCounter++
+	h := sha256.Sum256([]byte{byte(idCounter), byte(idCounter >> 8), byte(idCounter >> 16)})
+	return hex.EncodeToString(h[:16])
+}
diff --git a/internal/auth/cursor/proto/fieldnumbers.go b/internal/auth/cursor/proto/fieldnumbers.go
new file mode 100644
index 0000000000..4b2accc64c
--- /dev/null
+++ b/internal/auth/cursor/proto/fieldnumbers.go
@@ -0,0 +1,332 @@
+// Package proto provides hand-rolled protobuf encode/decode for Cursor's gRPC API.
+// Field numbers are extracted from the TypeScript generated proto/agent_pb.ts in alma-plugins/cursor-auth.
+package proto
+
+// AgentClientMessage (msg 118) oneof "message"
+const (
+	ACM_RunRequest           = 1 // AgentRunRequest
+	ACM_ExecClientMessage    = 2 // ExecClientMessage
+	ACM_KvClientMessage      = 3 // KvClientMessage
+	ACM_ConversationAction   = 4 // ConversationAction
+	ACM_ExecClientControlMsg = 5 // ExecClientControlMessage
+	ACM_InteractionResponse  = 6 // InteractionResponse
+	ACM_ClientHeartbeat      = 7 // ClientHeartbeat
+)
+
+// AgentServerMessage (msg 119) oneof "message"
+const (
+	ASM_InteractionUpdate        = 1 // InteractionUpdate
+	ASM_ExecServerMessage        = 2 // ExecServerMessage
+	ASM_ConversationCheckpoint   = 3 // ConversationStateStructure
+	ASM_KvServerMessage          = 4 // KvServerMessage
+	ASM_ExecServerControlMessage = 5 // ExecServerControlMessage
+	ASM_InteractionQuery         = 7 // InteractionQuery
+)
+
+// AgentRunRequest (msg 91)
+const (
+	ARR_ConversationState = 1 // ConversationStateStructure
+	ARR_Action            = 2 // ConversationAction
+	ARR_ModelDetails      = 3 // ModelDetails
+	ARR_McpTools          = 4 // McpTools
+	ARR_ConversationId    = 5 // string (optional)
+)
+
+// ConversationStateStructure (msg 83)
+const (
+	CSS_RootPromptMessagesJson = 1  // repeated bytes
+	CSS_TurnsOld               = 2  // repeated bytes (deprecated)
+	CSS_Todos                  = 3  // repeated bytes
+	CSS_PendingToolCalls       = 4  // repeated string
+	CSS_Turns                  = 8  // repeated bytes (CURRENT field for turns)
+	CSS_PreviousWorkspaceUris  = 9  // repeated string
+	CSS_SelfSummaryCount       = 17 // uint32
+	CSS_ReadPaths              = 18 // repeated string
+)
+
+// ConversationAction (msg 54) oneof "action"
+const (
+	CA_UserMessageAction = 1 // UserMessageAction
+)
+
+// UserMessageAction (msg 55)
+const (
+	UMA_UserMessage = 1 // UserMessage
+)
+
+// UserMessage (msg 63)
+const (
+	UM_Text            = 1 // string
+	UM_MessageId       = 2 // string
+	UM_SelectedContext = 3 // SelectedContext (optional)
+)
+
+// SelectedContext
+const (
+	SC_SelectedImages = 1 // repeated SelectedImage
+)
+
+// SelectedImage
+const (
+	SI_BlobId   = 1 // bytes (oneof dataOrBlobId)
+	SI_Uuid     = 2 // string
+	SI_Path     = 3 // string
+	SI_MimeType = 7 // string
+	SI_Data     = 8 // bytes (oneof dataOrBlobId)
+)
+
+// ModelDetails (msg 88)
+const (
+	MD_ModelId         = 1 // string
+	MD_ThinkingDetails = 2 // ThinkingDetails (optional)
+	MD_DisplayModelId  = 3 // string
+	MD_DisplayName     = 4 // string
+)
+
+// McpTools (msg 307)
+const (
+	MT_McpTools = 1 // repeated McpToolDefinition
+)
+
+// McpToolDefinition (msg 306)
+const (
+	MTD_Name               = 1 // string
+	MTD_Description        = 2 // string
+	MTD_InputSchema        = 3 // bytes
+	MTD_ProviderIdentifier = 4 // string
+	MTD_ToolName           = 5 // string
+)
+
+// ConversationTurnStructure (msg 70) oneof "turn"
+const (
+	CTS_AgentConversationTurn = 1 // AgentConversationTurnStructure
+)
+
+// AgentConversationTurnStructure (msg 72)
+const (
+	ACTS_UserMessage = 1 // bytes (serialized UserMessage)
+	ACTS_Steps       = 2 // repeated bytes (serialized ConversationStep)
+)
+
+// ConversationStep (msg 53) oneof "message"
+const (
+	CS_AssistantMessage = 1 // AssistantMessage
+)
+
+// AssistantMessage
+const (
+	AM_Text = 1 // string
+)
+
+// --- Server-side message fields ---
+
+// InteractionUpdate oneof "message"
+const (
+	IU_TextDelta         = 1 // TextDeltaUpdate
+	IU_ThinkingDelta     = 4 // ThinkingDeltaUpdate
+	IU_ThinkingCompleted = 5 // ThinkingCompletedUpdate
+)
+
+// TextDeltaUpdate (msg 92)
+const (
+	TDU_Text = 1 // string
+)
+
+// ThinkingDeltaUpdate (msg 97)
+const (
+	TKD_Text = 1 // string
+)
+
+// KvServerMessage (msg 271)
+const (
+	KSM_Id          = 1 // uint32
+	KSM_GetBlobArgs = 2 // GetBlobArgs
+	KSM_SetBlobArgs = 3 // SetBlobArgs
+)
+
+// GetBlobArgs (msg 267)
+const (
+	GBA_BlobId = 1 // bytes
+)
+
+// SetBlobArgs (msg 269)
+const (
+	SBA_BlobId   = 1 // bytes
+	SBA_BlobData = 2 // bytes
+)
+
+// KvClientMessage (msg 272)
+const (
+	KCM_Id            = 1 // uint32
+	KCM_GetBlobResult = 2 // GetBlobResult
+	KCM_SetBlobResult = 3 // SetBlobResult
+)
+
+// GetBlobResult (msg 268)
+const (
+	GBR_BlobData = 1 // bytes (optional)
+)
+
+// ExecServerMessage
+const (
+	ESM_Id     = 1  // uint32
+	ESM_ExecId = 15 // string
+	// oneof message:
+	ESM_ShellArgs            = 2  // ShellArgs
+	ESM_WriteArgs            = 3  // WriteArgs
+	ESM_DeleteArgs           = 4  // DeleteArgs
+	ESM_GrepArgs             = 5  // GrepArgs
+	ESM_ReadArgs             = 7  // ReadArgs (NOTE: 6 is skipped)
+	ESM_LsArgs               = 8  // LsArgs
+	ESM_DiagnosticsArgs      = 9  // DiagnosticsArgs
+	ESM_RequestContextArgs   = 10 // RequestContextArgs
+	ESM_McpArgs              = 11 // McpArgs
+	ESM_ShellStreamArgs      = 14 // ShellArgs (stream variant)
+	ESM_BackgroundShellSpawn = 16 // BackgroundShellSpawnArgs
+	ESM_FetchArgs            = 20 // FetchArgs
+	ESM_WriteShellStdinArgs  = 23 // WriteShellStdinArgs
+)
+
+// ExecClientMessage
+const (
+	ECM_Id     = 1  // uint32
+	ECM_ExecId = 15 // string
+	// oneof message (mirrors server fields):
+	ECM_ShellResult             = 2
+	ECM_WriteResult             = 3
+	ECM_DeleteResult            = 4
+	ECM_GrepResult              = 5
+	ECM_ReadResult              = 7
+	ECM_LsResult                = 8
+	ECM_DiagnosticsResult       = 9
+	ECM_RequestContextResult    = 10
+	ECM_McpResult               = 11
+	ECM_ShellStream             = 14
+	ECM_BackgroundShellSpawnRes = 16
+	ECM_FetchResult             = 20
+	ECM_WriteShellStdinResult   = 23
+)
+
+// McpArgs
+const (
+	MCA_Name               = 1 // string
+	MCA_Args               = 2 // map
+	MCA_ToolCallId         = 3 // string
+	MCA_ProviderIdentifier = 4 // string
+	MCA_ToolName           = 5 // string
+)
+
+// RequestContextResult oneof "result"
+const (
+	RCR_Success = 1 // RequestContextSuccess
+	RCR_Error   = 2 // RequestContextError
+)
+
+// RequestContextSuccess (msg 337)
+const (
+	RCS_RequestContext = 1 // RequestContext
+)
+
+// RequestContext
+const (
+	RC_Rules = 2 // repeated CursorRule
+	RC_Tools = 7 // repeated McpToolDefinition
+)
+
+// McpResult oneof "result"
+const (
+	MCR_Success  = 1 // McpSuccess
+	MCR_Error    = 2 // McpError
+	MCR_Rejected = 3 // McpRejected
+)
+
+// McpSuccess (msg 290)
+const (
+	MCS_Content = 1 // repeated McpToolResultContentItem
+	MCS_IsError = 2 // bool
+)
+
+// McpToolResultContentItem oneof "content"
+const (
+	MTRCI_Text = 1 // McpTextContent
+)
+
+// McpTextContent (msg 287)
+const (
+	MTC_Text = 1 // string
+)
+
+// McpError (msg 291)
+const (
+	MCE_Error = 1 // string
+)
+
+// --- Rejection messages ---
+
+// ReadRejected: path=1, reason=2
+// ShellRejected: command=1, workingDirectory=2, reason=3, isReadonly=4
+// WriteRejected: path=1, reason=2
+// DeleteRejected: path=1, reason=2
+// LsRejected: path=1, reason=2
+// GrepError: error=1
+// FetchError: url=1, error=2
+// WriteShellStdinError: error=1
+
+// ReadResult oneof: success=1, error=2, rejected=3
+// ShellResult oneof: success=1 (+ various), rejected=5
+// The TS code uses specific result field numbers from the oneof:
+const (
+	RR_Rejected   = 3 // ReadResult.rejected
+	SR_Rejected   = 5 // ShellResult.rejected (from TS: ShellResult has success/various/rejected)
+	WR_Rejected   = 5 // WriteResult.rejected
+	DR_Rejected   = 3 // DeleteResult.rejected
+	LR_Rejected   = 3 // LsResult.rejected
+	GR_Error      = 2 // GrepResult.error
+	FR_Error      = 2 // FetchResult.error
+	BSSR_Rejected = 2 // BackgroundShellSpawnResult.rejected (error field)
+	WSSR_Error    = 2 // WriteShellStdinResult.error
+)
+
+// --- Rejection struct fields ---
+const (
+	REJ_Path        = 1
+	REJ_Reason      = 2
+	SREJ_Command    = 1
+	SREJ_WorkingDir = 2
+	SREJ_Reason     = 3
+	SREJ_IsReadonly = 4
+	GERR_Error      = 1
+	FERR_Url        = 1
+	FERR_Error      = 2
+)
+
+// ReadArgs
+const (
+	RA_Path = 1 // string
+)
+
+// WriteArgs
+const (
+	WA_Path = 1 // string
+)
+
+// DeleteArgs
+const (
+	DA_Path = 1 // string
+)
+
+// LsArgs
+const (
+	LA_Path = 1 // string
+)
+
+// ShellArgs
+const (
+	SHA_Command          = 1 // string
+	SHA_WorkingDirectory = 2 // string
+)
+
+// FetchArgs
+const (
+	FA_Url = 1 // string
+)
diff --git a/internal/auth/cursor/proto/h2stream.go b/internal/auth/cursor/proto/h2stream.go
new file mode 100644
index 0000000000..5275b28344
--- /dev/null
+++ b/internal/auth/cursor/proto/h2stream.go
@@ -0,0 +1,313 @@
+package proto
+
+import (
+	"crypto/tls"
+	"fmt"
+	"io"
+	"net"
+	"sync"
+	"time"
+
+	log "github.com/sirupsen/logrus"
+	"golang.org/x/net/http2"
+	"golang.org/x/net/http2/hpack"
+)
+
+const (
+	defaultInitialWindowSize = 65535 // HTTP/2 default
+	maxFramePayload          = 16384 // HTTP/2 default max frame size
+)
+
+// H2Stream provides bidirectional HTTP/2 streaming for the Connect protocol.
+// Go's net/http does not support full-duplex HTTP/2, so we use the low-level framer.
+type H2Stream struct {
+	framer   *http2.Framer
+	conn     net.Conn
+	streamID uint32
+	mu       sync.Mutex
+	id       string // unique identifier for debugging
+	frameNum int64  // sequential frame counter for debugging
+
+	dataCh chan []byte
+	doneCh chan struct{}
+	err    error
+
+	// Send-side flow control
+	sendWindow int32      // available bytes we can send on this stream
+	connWindow int32      // available bytes on the connection level
+	windowCond *sync.Cond // signaled when window is updated
+	windowMu   sync.Mutex // protects sendWindow, connWindow
+}
+
+// ID returns the unique identifier for this stream (for logging).
+func (s *H2Stream) ID() string { return s.id }
+
+// FrameNum returns the current frame number for debugging.
+func (s *H2Stream) FrameNum() int64 {
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	return s.frameNum
+}
+
+// DialH2Stream establishes a TLS+HTTP/2 connection and opens a new stream.
+func DialH2Stream(host string, headers map[string]string) (*H2Stream, error) {
+	tlsConn, err := tls.Dial("tcp", host+":443", &tls.Config{
+		NextProtos: []string{"h2"},
+	})
+	if err != nil {
+		return nil, fmt.Errorf("h2: TLS dial failed: %w", err)
+	}
+	if tlsConn.ConnectionState().NegotiatedProtocol != "h2" {
+		tlsConn.Close()
+		return nil, fmt.Errorf("h2: server did not negotiate h2")
+	}
+
+	framer := http2.NewFramer(tlsConn, tlsConn)
+
+	// Client connection preface
+	if _, err := tlsConn.Write([]byte(http2.ClientPreface)); err != nil {
+		tlsConn.Close()
+		return nil, fmt.Errorf("h2: preface write failed: %w", err)
+	}
+
+	// Send initial SETTINGS (tell server how much WE can receive)
+	if err := framer.WriteSettings(
+		http2.Setting{ID: http2.SettingInitialWindowSize, Val: 4 * 1024 * 1024},
+		http2.Setting{ID: http2.SettingMaxConcurrentStreams, Val: 100},
+	); err != nil {
+		tlsConn.Close()
+		return nil, fmt.Errorf("h2: settings write failed: %w", err)
+	}
+
+	// Connection-level window update (for receiving)
+	if err := framer.WriteWindowUpdate(0, 3*1024*1024); err != nil {
+		tlsConn.Close()
+		return nil, fmt.Errorf("h2: window update failed: %w", err)
+	}
+
+	// Read and handle initial server frames (SETTINGS, WINDOW_UPDATE).
+	// Track the server's initial window size (how much WE can send).
+	serverInitialWindowSize := int32(defaultInitialWindowSize)
+	connWindowSize := int32(defaultInitialWindowSize) // connection-level send window
+	for i := 0; i < 10; i++ {
+		f, err := framer.ReadFrame()
+		if err != nil {
+			tlsConn.Close()
+			return nil, fmt.Errorf("h2: initial frame read failed: %w", err)
+		}
+		switch sf := f.(type) {
+		case *http2.SettingsFrame:
+			if !sf.IsAck() {
+				sf.ForeachSetting(func(s http2.Setting) error {
+					if s.ID == http2.SettingInitialWindowSize {
+						serverInitialWindowSize = int32(s.Val)
+						log.Debugf("h2: server initial window size: %d", s.Val)
+					}
+					return nil
+				})
+				framer.WriteSettingsAck()
+			} else {
+				goto handshakeDone
+			}
+		case *http2.WindowUpdateFrame:
+			if sf.StreamID == 0 {
+				connWindowSize += int32(sf.Increment)
+				log.Debugf("h2: initial conn window update: +%d, total=%d", sf.Increment, connWindowSize)
+			}
+		default:
+			// unexpected but continue
+		}
+	}
+handshakeDone:
+
+	// Build HEADERS
+	streamID := uint32(1)
+	var hdrBuf []byte
+	enc := hpack.NewEncoder(&sliceWriter{buf: &hdrBuf})
+	enc.WriteField(hpack.HeaderField{Name: ":method", Value: "POST"})
+	enc.WriteField(hpack.HeaderField{Name: ":scheme", Value: "https"})
+	enc.WriteField(hpack.HeaderField{Name: ":authority", Value: host})
+	if p, ok := headers[":path"]; ok {
+		enc.WriteField(hpack.HeaderField{Name: ":path", Value: p})
+	}
+	for k, v := range headers {
+		if len(k) > 0 && k[0] == ':' {
+			continue
+		}
+		enc.WriteField(hpack.HeaderField{Name: k, Value: v})
+	}
+
+	if err := framer.WriteHeaders(http2.HeadersFrameParam{
+		StreamID:      streamID,
+		BlockFragment: hdrBuf,
+		EndStream:     false,
+		EndHeaders:    true,
+	}); err != nil {
+		tlsConn.Close()
+		return nil, fmt.Errorf("h2: headers write failed: %w", err)
+	}
+
+	s := &H2Stream{
+		framer:     framer,
+		conn:       tlsConn,
+		streamID:   streamID,
+		dataCh:     make(chan []byte, 256),
+		doneCh:     make(chan struct{}),
+		id:         fmt.Sprintf("%d-%s", streamID, time.Now().Format("150405.000")),
+		frameNum:   0,
+		sendWindow: serverInitialWindowSize,
+		connWindow: connWindowSize,
+	}
+	s.windowCond = sync.NewCond(&s.windowMu)
+	go s.readLoop()
+	return s, nil
+}
+
+// Write sends a DATA frame on the stream, respecting flow control.
+func (s *H2Stream) Write(data []byte) error {
+	for len(data) > 0 {
+		chunk := data
+		if len(chunk) > maxFramePayload {
+			chunk = data[:maxFramePayload]
+		}
+
+		// Wait for flow control window
+		s.windowMu.Lock()
+		for s.sendWindow <= 0 || s.connWindow <= 0 {
+			s.windowCond.Wait()
+		}
+		// Limit chunk to available window
+		allowed := int(s.sendWindow)
+		if int(s.connWindow) < allowed {
+			allowed = int(s.connWindow)
+		}
+		if len(chunk) > allowed {
+			chunk = chunk[:allowed]
+		}
+		s.sendWindow -= int32(len(chunk))
+		s.connWindow -= int32(len(chunk))
+		s.windowMu.Unlock()
+
+		s.mu.Lock()
+		err := s.framer.WriteData(s.streamID, false, chunk)
+		s.mu.Unlock()
+		if err != nil {
+			return err
+		}
+		data = data[len(chunk):]
+	}
+	return nil
+}
+
+// Data returns the channel of received data chunks.
+func (s *H2Stream) Data() <-chan []byte { return s.dataCh }
+
+// Done returns a channel closed when the stream ends.
+func (s *H2Stream) Done() <-chan struct{} { return s.doneCh }
+
+// Err returns the error (if any) that caused the stream to close.
+// Returns nil for a clean shutdown (EOF / StreamEnded).
+func (s *H2Stream) Err() error { return s.err }
+
+// Close tears down the connection.
+func (s *H2Stream) Close() { + s.conn.Close() + // Unblock any writers waiting on flow control + s.windowCond.Broadcast() +} + +func (s *H2Stream) readLoop() { + defer close(s.doneCh) + defer close(s.dataCh) + + for { + f, err := s.framer.ReadFrame() + if err != nil { + if err != io.EOF { + s.err = err + log.Debugf("h2stream[%s]: readLoop error: %v", s.id, err) + } + return + } + + // Increment frame counter + s.mu.Lock() + s.frameNum++ + s.mu.Unlock() + + switch frame := f.(type) { + case *http2.DataFrame: + if frame.StreamID == s.streamID && len(frame.Data()) > 0 { + cp := make([]byte, len(frame.Data())) + copy(cp, frame.Data()) + s.dataCh <- cp + + // Flow control: send WINDOW_UPDATE for received data + s.mu.Lock() + s.framer.WriteWindowUpdate(0, uint32(len(cp))) + s.framer.WriteWindowUpdate(s.streamID, uint32(len(cp))) + s.mu.Unlock() + } + if frame.StreamEnded() { + return + } + + case *http2.HeadersFrame: + if frame.StreamEnded() { + return + } + + case *http2.RSTStreamFrame: + s.err = fmt.Errorf("h2: RST_STREAM code=%d", frame.ErrCode) + log.Debugf("h2stream[%s]: received RST_STREAM code=%d", s.id, frame.ErrCode) + return + + case *http2.GoAwayFrame: + s.err = fmt.Errorf("h2: GOAWAY code=%d", frame.ErrCode) + return + + case *http2.PingFrame: + if !frame.IsAck() { + s.mu.Lock() + s.framer.WritePing(true, frame.Data) + s.mu.Unlock() + } + + case *http2.SettingsFrame: + if !frame.IsAck() { + // Check for window size changes + frame.ForeachSetting(func(setting http2.Setting) error { + if setting.ID == http2.SettingInitialWindowSize { + s.windowMu.Lock() + delta := int32(setting.Val) - s.sendWindow + s.sendWindow += delta + s.windowMu.Unlock() + s.windowCond.Broadcast() + } + return nil + }) + s.mu.Lock() + s.framer.WriteSettingsAck() + s.mu.Unlock() + } + + case *http2.WindowUpdateFrame: + // Update send-side flow control window + s.windowMu.Lock() + if frame.StreamID == 0 { + s.connWindow += int32(frame.Increment) + } else if frame.StreamID == s.streamID { + 
s.sendWindow += int32(frame.Increment) + } + s.windowMu.Unlock() + s.windowCond.Broadcast() + } + } +} + +type sliceWriter struct{ buf *[]byte } + +func (w *sliceWriter) Write(p []byte) (int, error) { + *w.buf = append(*w.buf, p...) + return len(p), nil +} diff --git a/internal/auth/gemini/gemini_auth.go b/internal/auth/gemini/gemini_auth.go index 2995a1cb5e..5b9ee82d26 100644 --- a/internal/auth/gemini/gemini_auth.go +++ b/internal/auth/gemini/gemini_auth.go @@ -13,12 +13,12 @@ import ( "net/http" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/browser" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" diff --git a/internal/auth/gemini/gemini_token.go b/internal/auth/gemini/gemini_token.go index 6848b708e2..a6ea8c5151 100644 --- a/internal/auth/gemini/gemini_token.go +++ b/internal/auth/gemini/gemini_token.go @@ -10,7 +10,7 @@ import ( "path/filepath" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" log "github.com/sirupsen/logrus" ) diff --git a/internal/auth/gitlab/gitlab.go b/internal/auth/gitlab/gitlab.go new file mode 100644 index 0000000000..c050732f47 --- /dev/null +++ b/internal/auth/gitlab/gitlab.go @@ -0,0 +1,492 @@ +package gitlab + +import ( + "context" + "crypto/rand" + "crypto/sha256" + "encoding/base64" + 
"encoding/json" + "fmt" + "io" + "net" + "net/http" + "net/url" + "strconv" + "strings" + "sync" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + DefaultBaseURL = "https://gitlab.com" + DefaultCallbackPort = 17171 + defaultOAuthScope = "api read_user" +) + +type PKCECodes struct { + CodeVerifier string + CodeChallenge string +} + +type OAuthResult struct { + Code string + State string + Error string +} + +type OAuthServer struct { + server *http.Server + port int + resultChan chan *OAuthResult + errorChan chan error + mu sync.Mutex + running bool +} + +type TokenResponse struct { + AccessToken string `json:"access_token"` + TokenType string `json:"token_type"` + RefreshToken string `json:"refresh_token"` + Scope string `json:"scope"` + CreatedAt int64 `json:"created_at"` + ExpiresIn int `json:"expires_in"` +} + +type User struct { + ID int64 `json:"id"` + Username string `json:"username"` + Name string `json:"name"` + Email string `json:"email"` + PublicEmail string `json:"public_email"` +} + +type PersonalAccessTokenSelf struct { + ID int64 `json:"id"` + Name string `json:"name"` + Scopes []string `json:"scopes"` + UserID int64 `json:"user_id"` +} + +type ModelDetails struct { + ModelProvider string `json:"model_provider"` + ModelName string `json:"model_name"` +} + +type DirectAccessResponse struct { + BaseURL string `json:"base_url"` + Token string `json:"token"` + ExpiresAt int64 `json:"expires_at"` + Headers map[string]string `json:"headers"` + ModelDetails *ModelDetails `json:"model_details,omitempty"` +} + +type DiscoveredModel struct { + ModelProvider string + ModelName string +} + +type AuthClient struct { + httpClient *http.Client +} + +func NewAuthClient(cfg *config.Config) *AuthClient { + client := &http.Client{} + if cfg != nil { + client = util.SetProxy(&cfg.SDKConfig, client) + } + return &AuthClient{httpClient: client} 
+} + +func NormalizeBaseURL(raw string) string { + value := strings.TrimSpace(raw) + if value == "" { + return DefaultBaseURL + } + if !strings.Contains(value, "://") { + value = "https://" + value + } + value = strings.TrimRight(value, "/") + return value +} + +func TokenExpiry(now time.Time, token *TokenResponse) time.Time { + if token == nil { + return time.Time{} + } + if token.CreatedAt > 0 && token.ExpiresIn > 0 { + return time.Unix(token.CreatedAt+int64(token.ExpiresIn), 0).UTC() + } + if token.ExpiresIn > 0 { + return now.UTC().Add(time.Duration(token.ExpiresIn) * time.Second) + } + return time.Time{} +} + +func GeneratePKCECodes() (*PKCECodes, error) { + verifierBytes := make([]byte, 32) + if _, err := rand.Read(verifierBytes); err != nil { + return nil, fmt.Errorf("gitlab pkce generation failed: %w", err) + } + verifier := base64.RawURLEncoding.EncodeToString(verifierBytes) + sum := sha256.Sum256([]byte(verifier)) + challenge := base64.RawURLEncoding.EncodeToString(sum[:]) + return &PKCECodes{ + CodeVerifier: verifier, + CodeChallenge: challenge, + }, nil +} + +func NewOAuthServer(port int) *OAuthServer { + return &OAuthServer{ + port: port, + resultChan: make(chan *OAuthResult, 1), + errorChan: make(chan error, 1), + } +} + +func (s *OAuthServer) Start() error { + s.mu.Lock() + defer s.mu.Unlock() + + if s.running { + return fmt.Errorf("gitlab oauth server already running") + } + if !s.isPortAvailable() { + return fmt.Errorf("port %d is already in use", s.port) + } + + mux := http.NewServeMux() + mux.HandleFunc("/auth/callback", s.handleCallback) + + s.server = &http.Server{ + Addr: fmt.Sprintf(":%d", s.port), + Handler: mux, + ReadTimeout: 10 * time.Second, + WriteTimeout: 10 * time.Second, + } + s.running = true + + go func() { + if err := s.server.ListenAndServe(); err != nil && err != http.ErrServerClosed { + s.errorChan <- err + } + }() + + time.Sleep(100 * time.Millisecond) + return nil +} + +func (s *OAuthServer) Stop(ctx context.Context) error { 
+ s.mu.Lock() + defer s.mu.Unlock() + if !s.running || s.server == nil { + return nil + } + defer func() { + s.running = false + s.server = nil + }() + return s.server.Shutdown(ctx) +} + +func (s *OAuthServer) WaitForCallback(timeout time.Duration) (*OAuthResult, error) { + select { + case result := <-s.resultChan: + return result, nil + case err := <-s.errorChan: + return nil, err + case <-time.After(timeout): + return nil, fmt.Errorf("timeout waiting for OAuth callback") + } +} + +func (s *OAuthServer) handleCallback(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodGet { + http.Error(w, "method not allowed", http.StatusMethodNotAllowed) + return + } + query := r.URL.Query() + if errParam := strings.TrimSpace(query.Get("error")); errParam != "" { + s.sendResult(&OAuthResult{Error: errParam}) + http.Error(w, errParam, http.StatusBadRequest) + return + } + code := strings.TrimSpace(query.Get("code")) + state := strings.TrimSpace(query.Get("state")) + if code == "" || state == "" { + s.sendResult(&OAuthResult{Error: "missing_code_or_state"}) + http.Error(w, "missing code or state", http.StatusBadRequest) + return + } + s.sendResult(&OAuthResult{Code: code, State: state}) + _, _ = w.Write([]byte("GitLab authentication received. 
You can close this tab.")) +} + +func (s *OAuthServer) sendResult(result *OAuthResult) { + select { + case s.resultChan <- result: + default: + log.Debug("gitlab oauth result channel full, dropping callback result") + } +} + +func (s *OAuthServer) isPortAvailable() bool { + listener, err := net.Listen("tcp", fmt.Sprintf(":%d", s.port)) + if err != nil { + return false + } + _ = listener.Close() + return true +} + +func RedirectURL(port int) string { + return fmt.Sprintf("http://localhost:%d/auth/callback", port) +} + +func (c *AuthClient) GenerateAuthURL(baseURL, clientID, redirectURI, state string, pkce *PKCECodes) (string, error) { + if pkce == nil { + return "", fmt.Errorf("gitlab auth URL generation failed: PKCE codes are required") + } + if strings.TrimSpace(clientID) == "" { + return "", fmt.Errorf("gitlab auth URL generation failed: client ID is required") + } + baseURL = NormalizeBaseURL(baseURL) + params := url.Values{ + "client_id": {strings.TrimSpace(clientID)}, + "response_type": {"code"}, + "redirect_uri": {strings.TrimSpace(redirectURI)}, + "scope": {defaultOAuthScope}, + "state": {strings.TrimSpace(state)}, + "code_challenge": {pkce.CodeChallenge}, + "code_challenge_method": {"S256"}, + } + return fmt.Sprintf("%s/oauth/authorize?%s", baseURL, params.Encode()), nil +} + +func (c *AuthClient) ExchangeCodeForTokens(ctx context.Context, baseURL, clientID, clientSecret, redirectURI, code, codeVerifier string) (*TokenResponse, error) { + form := url.Values{ + "grant_type": {"authorization_code"}, + "client_id": {strings.TrimSpace(clientID)}, + "code": {strings.TrimSpace(code)}, + "redirect_uri": {strings.TrimSpace(redirectURI)}, + "code_verifier": {strings.TrimSpace(codeVerifier)}, + } + if secret := strings.TrimSpace(clientSecret); secret != "" { + form.Set("client_secret", secret) + } + return c.postToken(ctx, NormalizeBaseURL(baseURL)+"/oauth/token", form) +} + +func (c *AuthClient) RefreshTokens(ctx context.Context, baseURL, clientID, clientSecret, 
refreshToken string) (*TokenResponse, error) { + form := url.Values{ + "grant_type": {"refresh_token"}, + "refresh_token": {strings.TrimSpace(refreshToken)}, + } + if clientID = strings.TrimSpace(clientID); clientID != "" { + form.Set("client_id", clientID) + } + if secret := strings.TrimSpace(clientSecret); secret != "" { + form.Set("client_secret", secret) + } + return c.postToken(ctx, NormalizeBaseURL(baseURL)+"/oauth/token", form) +} + +func (c *AuthClient) postToken(ctx context.Context, tokenURL string, form url.Values) (*TokenResponse, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, strings.NewReader(form.Encode())) + if err != nil { + return nil, fmt.Errorf("gitlab token request failed: %w", err) + } + req.Header.Set("Content-Type", "application/x-www-form-urlencoded") + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("gitlab token request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("gitlab token response read failed: %w", err) + } + if resp.StatusCode < 200 || resp.StatusCode >= 300 { + return nil, fmt.Errorf("gitlab token request failed with status %d: %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + var token TokenResponse + if err := json.Unmarshal(body, &token); err != nil { + return nil, fmt.Errorf("gitlab token response decode failed: %w", err) + } + return &token, nil +} + +func (c *AuthClient) GetCurrentUser(ctx context.Context, baseURL, token string) (*User, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodGet, NormalizeBaseURL(baseURL)+"/api/v4/user", nil) + if err != nil { + return nil, fmt.Errorf("gitlab user request failed: %w", err) + } + req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(token)) + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil 
{ + return nil, fmt.Errorf("gitlab user request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("gitlab user response read failed: %w", err) + } + if resp.StatusCode < 200 || resp.StatusCode >= 300 { + return nil, fmt.Errorf("gitlab user request failed with status %d: %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + + var user User + if err := json.Unmarshal(body, &user); err != nil { + return nil, fmt.Errorf("gitlab user response decode failed: %w", err) + } + return &user, nil +} + +func (c *AuthClient) GetPersonalAccessTokenSelf(ctx context.Context, baseURL, token string) (*PersonalAccessTokenSelf, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodGet, NormalizeBaseURL(baseURL)+"/api/v4/personal_access_tokens/self", nil) + if err != nil { + return nil, fmt.Errorf("gitlab PAT self request failed: %w", err) + } + req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(token)) + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("gitlab PAT self request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("gitlab PAT self response read failed: %w", err) + } + if resp.StatusCode < 200 || resp.StatusCode >= 300 { + return nil, fmt.Errorf("gitlab PAT self request failed with status %d: %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + + var pat PersonalAccessTokenSelf + if err := json.Unmarshal(body, &pat); err != nil { + return nil, fmt.Errorf("gitlab PAT self response decode failed: %w", err) + } + return &pat, nil +} + +func (c *AuthClient) FetchDirectAccess(ctx context.Context, baseURL, token string) (*DirectAccessResponse, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodPost, 
NormalizeBaseURL(baseURL)+"/api/v4/code_suggestions/direct_access", nil) + if err != nil { + return nil, fmt.Errorf("gitlab direct access request failed: %w", err) + } + req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(token)) + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("gitlab direct access request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("gitlab direct access response read failed: %w", err) + } + if resp.StatusCode < 200 || resp.StatusCode >= 300 { + return nil, fmt.Errorf("gitlab direct access request failed with status %d: %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + + var direct DirectAccessResponse + if err := json.Unmarshal(body, &direct); err != nil { + return nil, fmt.Errorf("gitlab direct access response decode failed: %w", err) + } + if direct.Headers == nil { + direct.Headers = make(map[string]string) + } + return &direct, nil +} + +func ExtractDiscoveredModels(metadata map[string]any) []DiscoveredModel { + if len(metadata) == 0 { + return nil + } + + models := make([]DiscoveredModel, 0, 4) + seen := make(map[string]struct{}) + appendModel := func(provider, name string) { + provider = strings.TrimSpace(provider) + name = strings.TrimSpace(name) + if name == "" { + return + } + key := strings.ToLower(name) + if _, ok := seen[key]; ok { + return + } + seen[key] = struct{}{} + models = append(models, DiscoveredModel{ + ModelProvider: provider, + ModelName: name, + }) + } + + if raw, ok := metadata["model_details"]; ok { + appendDiscoveredModels(raw, appendModel) + } + appendModel(stringValue(metadata["model_provider"]), stringValue(metadata["model_name"])) + + for _, key := range []string{"models", "supported_models", "discovered_models"} { + if raw, ok := metadata[key]; ok { + appendDiscoveredModels(raw, appendModel) + } + } + + return models 
+} + +func appendDiscoveredModels(raw any, appendModel func(provider, name string)) { + switch typed := raw.(type) { + case map[string]any: + appendModel(stringValue(typed["model_provider"]), stringValue(typed["model_name"])) + appendModel(stringValue(typed["provider"]), stringValue(typed["name"])) + if nested, ok := typed["models"]; ok { + appendDiscoveredModels(nested, appendModel) + } + case []any: + for _, item := range typed { + appendDiscoveredModels(item, appendModel) + } + case []string: + for _, item := range typed { + appendModel("", item) + } + case string: + appendModel("", typed) + } +} + +func stringValue(raw any) string { + switch typed := raw.(type) { + case string: + return strings.TrimSpace(typed) + case fmt.Stringer: + return strings.TrimSpace(typed.String()) + case json.Number: + return typed.String() + case int: + return strconv.Itoa(typed) + case int64: + return strconv.FormatInt(typed, 10) + case float64: + return strconv.FormatInt(int64(typed), 10) + default: + return "" + } +} diff --git a/internal/auth/gitlab/gitlab_test.go b/internal/auth/gitlab/gitlab_test.go new file mode 100644 index 0000000000..dde09dd7d4 --- /dev/null +++ b/internal/auth/gitlab/gitlab_test.go @@ -0,0 +1,138 @@ +package gitlab + +import ( + "context" + "encoding/json" + "net/http" + "net/http/httptest" + "net/url" + "strings" + "testing" +) + +func TestAuthClientGenerateAuthURLIncludesPKCE(t *testing.T) { + client := NewAuthClient(nil) + pkce, err := GeneratePKCECodes() + if err != nil { + t.Fatalf("GeneratePKCECodes() error = %v", err) + } + + rawURL, err := client.GenerateAuthURL("https://gitlab.example.com", "client-id", RedirectURL(17171), "state-123", pkce) + if err != nil { + t.Fatalf("GenerateAuthURL() error = %v", err) + } + + parsed, err := url.Parse(rawURL) + if err != nil { + t.Fatalf("Parse(authURL) error = %v", err) + } + if got := parsed.Path; got != "/oauth/authorize" { + t.Fatalf("expected /oauth/authorize path, got %q", got) + } + query := 
parsed.Query() + if got := query.Get("client_id"); got != "client-id" { + t.Fatalf("expected client_id, got %q", got) + } + if got := query.Get("scope"); got != defaultOAuthScope { + t.Fatalf("expected scope %q, got %q", defaultOAuthScope, got) + } + if got := query.Get("code_challenge_method"); got != "S256" { + t.Fatalf("expected PKCE method S256, got %q", got) + } + if got := query.Get("code_challenge"); got == "" { + t.Fatal("expected non-empty code_challenge") + } +} + +func TestAuthClientExchangeCodeForTokens(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.URL.Path != "/oauth/token" { + t.Fatalf("unexpected path %q", r.URL.Path) + } + if err := r.ParseForm(); err != nil { + t.Fatalf("ParseForm() error = %v", err) + } + if got := r.Form.Get("grant_type"); got != "authorization_code" { + t.Fatalf("expected authorization_code grant, got %q", got) + } + if got := r.Form.Get("code_verifier"); got != "verifier-123" { + t.Fatalf("expected code_verifier, got %q", got) + } + _ = json.NewEncoder(w).Encode(map[string]any{ + "access_token": "oauth-access", + "refresh_token": "oauth-refresh", + "token_type": "Bearer", + "scope": "api read_user", + "created_at": 1710000000, + "expires_in": 3600, + }) + })) + defer srv.Close() + + client := NewAuthClient(nil) + token, err := client.ExchangeCodeForTokens(context.Background(), srv.URL, "client-id", "client-secret", RedirectURL(17171), "auth-code", "verifier-123") + if err != nil { + t.Fatalf("ExchangeCodeForTokens() error = %v", err) + } + if token.AccessToken != "oauth-access" { + t.Fatalf("expected access token, got %q", token.AccessToken) + } + if token.RefreshToken != "oauth-refresh" { + t.Fatalf("expected refresh token, got %q", token.RefreshToken) + } +} + +func TestExtractDiscoveredModels(t *testing.T) { + models := ExtractDiscoveredModels(map[string]any{ + "model_details": map[string]any{ + "model_provider": "anthropic", + "model_name": 
"claude-sonnet-4-5", + }, + "supported_models": []any{ + map[string]any{"model_provider": "openai", "model_name": "gpt-4.1"}, + "claude-sonnet-4-5", + }, + }) + if len(models) != 2 { + t.Fatalf("expected 2 unique models, got %d", len(models)) + } + if models[0].ModelName != "claude-sonnet-4-5" { + t.Fatalf("unexpected first model %q", models[0].ModelName) + } + if models[1].ModelName != "gpt-4.1" { + t.Fatalf("unexpected second model %q", models[1].ModelName) + } +} + +func TestFetchDirectAccessDecodesModelDetails(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.URL.Path != "/api/v4/code_suggestions/direct_access" { + t.Fatalf("unexpected path %q", r.URL.Path) + } + if got := r.Header.Get("Authorization"); !strings.Contains(got, "token-123") { + t.Fatalf("expected bearer token, got %q", got) + } + _ = json.NewEncoder(w).Encode(map[string]any{ + "base_url": "https://cloud.gitlab.example.com", + "token": "gateway-token", + "expires_at": 1710003600, + "headers": map[string]string{ + "X-Gitlab-Realm": "saas", + }, + "model_details": map[string]any{ + "model_provider": "anthropic", + "model_name": "claude-sonnet-4-5", + }, + }) + })) + defer srv.Close() + + client := NewAuthClient(nil) + direct, err := client.FetchDirectAccess(context.Background(), srv.URL, "token-123") + if err != nil { + t.Fatalf("FetchDirectAccess() error = %v", err) + } + if direct.ModelDetails == nil || direct.ModelDetails.ModelName != "claude-sonnet-4-5" { + t.Fatalf("expected model details, got %+v", direct.ModelDetails) + } +} diff --git a/internal/auth/iflow/cookie_helpers.go b/internal/auth/iflow/cookie_helpers.go new file mode 100644 index 0000000000..7e0f4264be --- /dev/null +++ b/internal/auth/iflow/cookie_helpers.go @@ -0,0 +1,99 @@ +package iflow + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + "strings" +) + +// NormalizeCookie normalizes raw cookie strings for iFlow authentication flows. 
+func NormalizeCookie(raw string) (string, error) { + trimmed := strings.TrimSpace(raw) + if trimmed == "" { + return "", fmt.Errorf("cookie cannot be empty") + } + + combined := strings.Join(strings.Fields(trimmed), " ") + if !strings.HasSuffix(combined, ";") { + combined += ";" + } + if !strings.Contains(combined, "BXAuth=") { + return "", fmt.Errorf("cookie missing BXAuth field") + } + return combined, nil +} + +// SanitizeIFlowFileName normalizes user identifiers for safe filename usage. +func SanitizeIFlowFileName(raw string) string { + if raw == "" { + return "" + } + cleanEmail := strings.ReplaceAll(raw, "*", "x") + var result strings.Builder + for _, r := range cleanEmail { + if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '_' || r == '@' || r == '.' || r == '-' { + result.WriteRune(r) + } + } + return strings.TrimSpace(result.String()) +} + +// ExtractBXAuth extracts the BXAuth value from a cookie string. +func ExtractBXAuth(cookie string) string { + parts := strings.Split(cookie, ";") + for _, part := range parts { + part = strings.TrimSpace(part) + if strings.HasPrefix(part, "BXAuth=") { + return strings.TrimPrefix(part, "BXAuth=") + } + } + return "" +} + +// CheckDuplicateBXAuth checks if the given BXAuth value already exists in any iflow auth file. +// Returns the path of the existing file if found, empty string otherwise. 
+func CheckDuplicateBXAuth(authDir, bxAuth string) (string, error) { + if bxAuth == "" { + return "", nil + } + + entries, err := os.ReadDir(authDir) + if err != nil { + if os.IsNotExist(err) { + return "", nil + } + return "", fmt.Errorf("read auth dir failed: %w", err) + } + + for _, entry := range entries { + if entry.IsDir() { + continue + } + name := entry.Name() + if !strings.HasPrefix(name, "iflow-") || !strings.HasSuffix(name, ".json") { + continue + } + + filePath := filepath.Join(authDir, name) + data, err := os.ReadFile(filePath) + if err != nil { + continue + } + + var tokenData struct { + Cookie string `json:"cookie"` + } + if err := json.Unmarshal(data, &tokenData); err != nil { + continue + } + + existingBXAuth := ExtractBXAuth(tokenData.Cookie) + if existingBXAuth != "" && existingBXAuth == bxAuth { + return filePath, nil + } + } + + return "", nil +} diff --git a/internal/auth/iflow/iflow_auth.go b/internal/auth/iflow/iflow_auth.go new file mode 100644 index 0000000000..62bf83b6c2 --- /dev/null +++ b/internal/auth/iflow/iflow_auth.go @@ -0,0 +1,535 @@ +package iflow + +import ( + "compress/gzip" + "context" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "os" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + // OAuth endpoints and client metadata are derived from the reference Python implementation. + iFlowOAuthTokenEndpoint = "https://iflow.cn/oauth/token" + iFlowOAuthAuthorizeEndpoint = "https://iflow.cn/oauth" + iFlowUserInfoEndpoint = "https://iflow.cn/api/oauth/getUserInfo" + iFlowSuccessRedirectURL = "https://iflow.cn/oauth/success" + + // Cookie authentication endpoints + iFlowAPIKeyEndpoint = "https://platform.iflow.cn/api/openapi/apikey" + + // Client credentials provided by iFlow for the Code Assist integration. 
+ iFlowOAuthClientID = "10009311001" + // Default client secret (can be overridden via IFLOW_CLIENT_SECRET env var) + defaultIFlowClientSecret = "4Z3YjXycVsQvyGF1etiNlIBB4RsqSDtW" +) + +// getIFlowClientSecret returns the iFlow OAuth client secret. +// It first checks the IFLOW_CLIENT_SECRET environment variable, +// falling back to the default value if not set. +func getIFlowClientSecret() string { + if secret := os.Getenv("IFLOW_CLIENT_SECRET"); secret != "" { + return secret + } + return defaultIFlowClientSecret +} + +// DefaultAPIBaseURL is the canonical chat completions endpoint. +const DefaultAPIBaseURL = "https://apis.iflow.cn/v1" + +// SuccessRedirectURL is exposed for consumers needing the official success page. +const SuccessRedirectURL = iFlowSuccessRedirectURL + +// CallbackPort defines the local port used for OAuth callbacks. +const CallbackPort = 11451 + +// IFlowAuth encapsulates the HTTP client helpers for the OAuth flow. +type IFlowAuth struct { + httpClient *http.Client +} + +// NewIFlowAuth constructs a new IFlowAuth with proxy-aware transport. +func NewIFlowAuth(cfg *config.Config) *IFlowAuth { + client := &http.Client{Timeout: 30 * time.Second} + return &IFlowAuth{httpClient: util.SetProxy(&cfg.SDKConfig, client)} +} + +// AuthorizationURL builds the authorization URL and matching redirect URI. +func (ia *IFlowAuth) AuthorizationURL(state string, port int) (authURL, redirectURI string) { + redirectURI = fmt.Sprintf("http://localhost:%d/oauth2callback", port) + values := url.Values{} + values.Set("loginMethod", "phone") + values.Set("type", "phone") + values.Set("redirect", redirectURI) + values.Set("state", state) + values.Set("client_id", iFlowOAuthClientID) + authURL = fmt.Sprintf("%s?%s", iFlowOAuthAuthorizeEndpoint, values.Encode()) + return authURL, redirectURI +} + +// ExchangeCodeForTokens exchanges an authorization code for access and refresh tokens. 
+func (ia *IFlowAuth) ExchangeCodeForTokens(ctx context.Context, code, redirectURI string) (*IFlowTokenData, error) { + form := url.Values{} + form.Set("grant_type", "authorization_code") + form.Set("code", code) + form.Set("redirect_uri", redirectURI) + form.Set("client_id", iFlowOAuthClientID) + form.Set("client_secret", getIFlowClientSecret()) + + req, err := ia.newTokenRequest(ctx, form) + if err != nil { + return nil, err + } + + return ia.doTokenRequest(ctx, req) +} + +// RefreshTokens exchanges a refresh token for a new access token. +func (ia *IFlowAuth) RefreshTokens(ctx context.Context, refreshToken string) (*IFlowTokenData, error) { + form := url.Values{} + form.Set("grant_type", "refresh_token") + form.Set("refresh_token", refreshToken) + form.Set("client_id", iFlowOAuthClientID) + form.Set("client_secret", getIFlowClientSecret()) + + req, err := ia.newTokenRequest(ctx, form) + if err != nil { + return nil, err + } + + return ia.doTokenRequest(ctx, req) +} + +func (ia *IFlowAuth) newTokenRequest(ctx context.Context, form url.Values) (*http.Request, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodPost, iFlowOAuthTokenEndpoint, strings.NewReader(form.Encode())) + if err != nil { + return nil, fmt.Errorf("iflow token: create request failed: %w", err) + } + + basic := base64.StdEncoding.EncodeToString([]byte(iFlowOAuthClientID + ":" + getIFlowClientSecret())) + req.Header.Set("Content-Type", "application/x-www-form-urlencoded") + req.Header.Set("Accept", "application/json") + req.Header.Set("Authorization", "Basic "+basic) + return req, nil +} + +func (ia *IFlowAuth) doTokenRequest(ctx context.Context, req *http.Request) (*IFlowTokenData, error) { + resp, err := ia.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("iflow token: request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("iflow token: read response failed: %w", err) + 
} + + if resp.StatusCode != http.StatusOK { + log.Debugf("iflow token request failed: status=%d body=%s", resp.StatusCode, string(body)) + return nil, fmt.Errorf("iflow token: %d %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + + var tokenResp IFlowTokenResponse + if err = json.Unmarshal(body, &tokenResp); err != nil { + return nil, fmt.Errorf("iflow token: decode response failed: %w", err) + } + + data := &IFlowTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + TokenType: tokenResp.TokenType, + Scope: tokenResp.Scope, + Expire: time.Now().Add(time.Duration(tokenResp.ExpiresIn) * time.Second).Format(time.RFC3339), + } + + if tokenResp.AccessToken == "" { + log.Debug(string(body)) + return nil, fmt.Errorf("iflow token: missing access token in response") + } + + info, errAPI := ia.FetchUserInfo(ctx, tokenResp.AccessToken) + if errAPI != nil { + return nil, fmt.Errorf("iflow token: fetch user info failed: %w", errAPI) + } + if strings.TrimSpace(info.APIKey) == "" { + return nil, fmt.Errorf("iflow token: empty api key returned") + } + email := strings.TrimSpace(info.Email) + if email == "" { + email = strings.TrimSpace(info.Phone) + } + if email == "" { + return nil, fmt.Errorf("iflow token: missing account email/phone in user info") + } + data.APIKey = info.APIKey + data.Email = email + + return data, nil +} + +// FetchUserInfo retrieves account metadata (including API key) for the provided access token. 
+func (ia *IFlowAuth) FetchUserInfo(ctx context.Context, accessToken string) (*userInfoData, error) { + if strings.TrimSpace(accessToken) == "" { + return nil, fmt.Errorf("iflow api key: access token is empty") + } + + endpoint := fmt.Sprintf("%s?accessToken=%s", iFlowUserInfoEndpoint, url.QueryEscape(accessToken)) + req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil) + if err != nil { + return nil, fmt.Errorf("iflow api key: create request failed: %w", err) + } + req.Header.Set("Accept", "application/json") + + resp, err := ia.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("iflow api key: request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("iflow api key: read response failed: %w", err) + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("iflow api key failed: status=%d body=%s", resp.StatusCode, string(body)) + return nil, fmt.Errorf("iflow api key: %d %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + + var result userInfoResponse + if err = json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("iflow api key: decode body failed: %w", err) + } + + if !result.Success { + return nil, fmt.Errorf("iflow api key: request not successful") + } + + if result.Data.APIKey == "" { + return nil, fmt.Errorf("iflow api key: missing api key in response") + } + + return &result.Data, nil +} + +// CreateTokenStorage converts token data into persistence storage. 
+func (ia *IFlowAuth) CreateTokenStorage(data *IFlowTokenData) *IFlowTokenStorage { + if data == nil { + return nil + } + return &IFlowTokenStorage{ + AccessToken: data.AccessToken, + RefreshToken: data.RefreshToken, + LastRefresh: time.Now().Format(time.RFC3339), + Expire: data.Expire, + APIKey: data.APIKey, + Email: data.Email, + TokenType: data.TokenType, + Scope: data.Scope, + } +} + +// UpdateTokenStorage updates the persisted token storage with latest token data. +func (ia *IFlowAuth) UpdateTokenStorage(storage *IFlowTokenStorage, data *IFlowTokenData) { + if storage == nil || data == nil { + return + } + storage.AccessToken = data.AccessToken + storage.RefreshToken = data.RefreshToken + storage.LastRefresh = time.Now().Format(time.RFC3339) + storage.Expire = data.Expire + if data.APIKey != "" { + storage.APIKey = data.APIKey + } + if data.Email != "" { + storage.Email = data.Email + } + storage.TokenType = data.TokenType + storage.Scope = data.Scope +} + +// IFlowTokenResponse models the OAuth token endpoint response. +type IFlowTokenResponse struct { + AccessToken string `json:"access_token"` + RefreshToken string `json:"refresh_token"` + ExpiresIn int `json:"expires_in"` + TokenType string `json:"token_type"` + Scope string `json:"scope"` +} + +// IFlowTokenData captures processed token details. +type IFlowTokenData struct { + AccessToken string + RefreshToken string + TokenType string + Scope string + Expire string + APIKey string + Email string + Cookie string +} + +// userInfoResponse represents the structure returned by the user info endpoint. 
+type userInfoResponse struct { + Success bool `json:"success"` + Data userInfoData `json:"data"` +} + +type userInfoData struct { + APIKey string `json:"apiKey"` + Email string `json:"email"` + Phone string `json:"phone"` +} + +// iFlowAPIKeyResponse represents the response from the API key endpoint +type iFlowAPIKeyResponse struct { + Success bool `json:"success"` + Code string `json:"code"` + Message string `json:"message"` + Data iFlowKeyData `json:"data"` + Extra interface{} `json:"extra"` +} + +// iFlowKeyData contains the API key information +type iFlowKeyData struct { + HasExpired bool `json:"hasExpired"` + ExpireTime string `json:"expireTime"` + Name string `json:"name"` + APIKey string `json:"apiKey"` + APIKeyMask string `json:"apiKeyMask"` +} + +// iFlowRefreshRequest represents the request body for refreshing API key +type iFlowRefreshRequest struct { + Name string `json:"name"` +} + +// AuthenticateWithCookie performs authentication using browser cookies +func (ia *IFlowAuth) AuthenticateWithCookie(ctx context.Context, cookie string) (*IFlowTokenData, error) { + if strings.TrimSpace(cookie) == "" { + return nil, fmt.Errorf("iflow cookie authentication: cookie is empty") + } + + // First, get initial API key information using GET request to obtain the name + keyInfo, err := ia.fetchAPIKeyInfo(ctx, cookie) + if err != nil { + return nil, fmt.Errorf("iflow cookie authentication: fetch initial API key info failed: %w", err) + } + + // Refresh the API key using POST request + refreshedKeyInfo, err := ia.RefreshAPIKey(ctx, cookie, keyInfo.Name) + if err != nil { + return nil, fmt.Errorf("iflow cookie authentication: refresh API key failed: %w", err) + } + + // Convert to token data format using refreshed key + data := &IFlowTokenData{ + APIKey: refreshedKeyInfo.APIKey, + Expire: refreshedKeyInfo.ExpireTime, + Email: refreshedKeyInfo.Name, + Cookie: cookie, + } + + return data, nil +} + +// fetchAPIKeyInfo retrieves API key information using GET request with 
cookie +func (ia *IFlowAuth) fetchAPIKeyInfo(ctx context.Context, cookie string) (*iFlowKeyData, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodGet, iFlowAPIKeyEndpoint, nil) + if err != nil { + return nil, fmt.Errorf("iflow cookie: create GET request failed: %w", err) + } + + // Set cookie and other headers to mimic browser + req.Header.Set("Cookie", cookie) + req.Header.Set("Accept", "application/json, text/plain, */*") + req.Header.Set("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36") + req.Header.Set("Accept-Language", "zh-CN,zh;q=0.9,en;q=0.8") + req.Header.Set("Accept-Encoding", "gzip, deflate, br") + req.Header.Set("Connection", "keep-alive") + req.Header.Set("Sec-Fetch-Dest", "empty") + req.Header.Set("Sec-Fetch-Mode", "cors") + req.Header.Set("Sec-Fetch-Site", "same-origin") + + resp, err := ia.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("iflow cookie: GET request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + // Handle gzip compression + var reader io.Reader = resp.Body + if resp.Header.Get("Content-Encoding") == "gzip" { + gzipReader, err := gzip.NewReader(resp.Body) + if err != nil { + return nil, fmt.Errorf("iflow cookie: create gzip reader failed: %w", err) + } + defer func() { _ = gzipReader.Close() }() + reader = gzipReader + } + + body, err := io.ReadAll(reader) + if err != nil { + return nil, fmt.Errorf("iflow cookie: read GET response failed: %w", err) + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("iflow cookie GET request failed: status=%d body=%s", resp.StatusCode, string(body)) + return nil, fmt.Errorf("iflow cookie: GET request failed with status %d: %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + + var keyResp iFlowAPIKeyResponse + if err = json.Unmarshal(body, &keyResp); err != nil { + return nil, fmt.Errorf("iflow cookie: decode GET response failed: %w", err) + } + + if 
!keyResp.Success { + return nil, fmt.Errorf("iflow cookie: GET request not successful: %s", keyResp.Message) + } + + // Handle initial response where apiKey field might be apiKeyMask + if keyResp.Data.APIKey == "" && keyResp.Data.APIKeyMask != "" { + keyResp.Data.APIKey = keyResp.Data.APIKeyMask + } + + return &keyResp.Data, nil +} + +// RefreshAPIKey refreshes the API key using POST request +func (ia *IFlowAuth) RefreshAPIKey(ctx context.Context, cookie, name string) (*iFlowKeyData, error) { + if strings.TrimSpace(cookie) == "" { + return nil, fmt.Errorf("iflow cookie refresh: cookie is empty") + } + if strings.TrimSpace(name) == "" { + return nil, fmt.Errorf("iflow cookie refresh: name is empty") + } + + // Prepare request body + refreshReq := iFlowRefreshRequest{ + Name: name, + } + + bodyBytes, err := json.Marshal(refreshReq) + if err != nil { + return nil, fmt.Errorf("iflow cookie refresh: marshal request failed: %w", err) + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, iFlowAPIKeyEndpoint, strings.NewReader(string(bodyBytes))) + if err != nil { + return nil, fmt.Errorf("iflow cookie refresh: create POST request failed: %w", err) + } + + // Set cookie and other headers to mimic browser + req.Header.Set("Cookie", cookie) + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", "application/json, text/plain, */*") + req.Header.Set("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36") + req.Header.Set("Accept-Language", "zh-CN,zh;q=0.9,en;q=0.8") + req.Header.Set("Accept-Encoding", "gzip, deflate, br") + req.Header.Set("Connection", "keep-alive") + req.Header.Set("Origin", "https://platform.iflow.cn") + req.Header.Set("Referer", "https://platform.iflow.cn/") + + resp, err := ia.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("iflow cookie refresh: POST request failed: %w", err) + } + defer func() { _ = resp.Body.Close() }() + + 
// Handle gzip compression + var reader io.Reader = resp.Body + if resp.Header.Get("Content-Encoding") == "gzip" { + gzipReader, err := gzip.NewReader(resp.Body) + if err != nil { + return nil, fmt.Errorf("iflow cookie refresh: create gzip reader failed: %w", err) + } + defer func() { _ = gzipReader.Close() }() + reader = gzipReader + } + + body, err := io.ReadAll(reader) + if err != nil { + return nil, fmt.Errorf("iflow cookie refresh: read POST response failed: %w", err) + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("iflow cookie POST request failed: status=%d body=%s", resp.StatusCode, string(body)) + return nil, fmt.Errorf("iflow cookie refresh: POST request failed with status %d: %s", resp.StatusCode, strings.TrimSpace(string(body))) + } + + var keyResp iFlowAPIKeyResponse + if err = json.Unmarshal(body, &keyResp); err != nil { + return nil, fmt.Errorf("iflow cookie refresh: decode POST response failed: %w", err) + } + + if !keyResp.Success { + return nil, fmt.Errorf("iflow cookie refresh: POST request not successful: %s", keyResp.Message) + } + + return &keyResp.Data, nil +} + +// ShouldRefreshAPIKey checks if the API key needs to be refreshed (within 2 days of expiry) +func ShouldRefreshAPIKey(expireTime string) (bool, time.Duration, error) { + if strings.TrimSpace(expireTime) == "" { + return false, 0, fmt.Errorf("iflow cookie: expire time is empty") + } + + expire, err := time.Parse("2006-01-02 15:04", expireTime) + if err != nil { + return false, 0, fmt.Errorf("iflow cookie: parse expire time failed: %w", err) + } + + now := time.Now() + twoDaysFromNow := now.Add(48 * time.Hour) + + needsRefresh := expire.Before(twoDaysFromNow) + timeUntilExpiry := expire.Sub(now) + + return needsRefresh, timeUntilExpiry, nil +} + +// CreateCookieTokenStorage converts cookie-based token data into persistence storage +func (ia *IFlowAuth) CreateCookieTokenStorage(data *IFlowTokenData) *IFlowTokenStorage { + if data == nil { + return nil + } + + // Only save 
the BXAuth field from the cookie + bxAuth := ExtractBXAuth(data.Cookie) + cookieToSave := "" + if bxAuth != "" { + cookieToSave = "BXAuth=" + bxAuth + ";" + } + + return &IFlowTokenStorage{ + APIKey: data.APIKey, + Email: data.Email, + Expire: data.Expire, + Cookie: cookieToSave, + LastRefresh: time.Now().Format(time.RFC3339), + Type: "iflow", + } +} + +// UpdateCookieTokenStorage updates the persisted token storage with refreshed API key data +func (ia *IFlowAuth) UpdateCookieTokenStorage(storage *IFlowTokenStorage, keyData *iFlowKeyData) { + if storage == nil || keyData == nil { + return + } + + storage.APIKey = keyData.APIKey + storage.Expire = keyData.ExpireTime + storage.LastRefresh = time.Now().Format(time.RFC3339) +} diff --git a/internal/auth/iflow/iflow_token.go b/internal/auth/iflow/iflow_token.go new file mode 100644 index 0000000000..eadb69aa33 --- /dev/null +++ b/internal/auth/iflow/iflow_token.go @@ -0,0 +1,59 @@ +package iflow + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" +) + +// IFlowTokenStorage persists iFlow OAuth credentials alongside the derived API key. +type IFlowTokenStorage struct { + AccessToken string `json:"access_token"` + RefreshToken string `json:"refresh_token"` + LastRefresh string `json:"last_refresh"` + Expire string `json:"expired"` + APIKey string `json:"api_key"` + Email string `json:"email"` + TokenType string `json:"token_type"` + Scope string `json:"scope"` + Cookie string `json:"cookie"` + Type string `json:"type"` + + // Metadata holds arbitrary key-value pairs injected via hooks. + // It is not exported to JSON directly to allow flattening during serialization. + Metadata map[string]any `json:"-"` +} + +// SetMetadata allows external callers to inject metadata into the storage before saving. +func (ts *IFlowTokenStorage) SetMetadata(meta map[string]any) { + ts.Metadata = meta +} + +// SaveTokenToFile serialises the token storage to disk. 
+func (ts *IFlowTokenStorage) SaveTokenToFile(authFilePath string) error { + misc.LogSavingCredentials(authFilePath) + ts.Type = "iflow" + if err := os.MkdirAll(filepath.Dir(authFilePath), 0o700); err != nil { + return fmt.Errorf("iflow token: create directory failed: %w", err) + } + + f, err := os.Create(authFilePath) + if err != nil { + return fmt.Errorf("iflow token: create file failed: %w", err) + } + defer func() { _ = f.Close() }() + + // Merge metadata using helper + data, errMerge := misc.MergeMetadata(ts, ts.Metadata) + if errMerge != nil { + return fmt.Errorf("failed to merge metadata: %w", errMerge) + } + + if err = json.NewEncoder(f).Encode(data); err != nil { + return fmt.Errorf("iflow token: encode token failed: %w", err) + } + return nil +} diff --git a/internal/auth/iflow/oauth_server.go b/internal/auth/iflow/oauth_server.go new file mode 100644 index 0000000000..2a8b7b9f59 --- /dev/null +++ b/internal/auth/iflow/oauth_server.go @@ -0,0 +1,143 @@ +package iflow + +import ( + "context" + "fmt" + "net" + "net/http" + "strings" + "sync" + "time" + + log "github.com/sirupsen/logrus" +) + +const errorRedirectURL = "https://iflow.cn/oauth/error" + +// OAuthResult captures the outcome of the local OAuth callback. +type OAuthResult struct { + Code string + State string + Error string +} + +// OAuthServer provides a minimal HTTP server for handling the iFlow OAuth callback. +type OAuthServer struct { + server *http.Server + port int + result chan *OAuthResult + errChan chan error + mu sync.Mutex + running bool +} + +// NewOAuthServer constructs a new OAuthServer bound to the provided port. +func NewOAuthServer(port int) *OAuthServer { + return &OAuthServer{ + port: port, + result: make(chan *OAuthResult, 1), + errChan: make(chan error, 1), + } +} + +// Start launches the callback listener. 
+func (s *OAuthServer) Start() error { + s.mu.Lock() + defer s.mu.Unlock() + if s.running { + return fmt.Errorf("iflow oauth server already running") + } + if !s.isPortAvailable() { + return fmt.Errorf("port %d is already in use", s.port) + } + + mux := http.NewServeMux() + mux.HandleFunc("/oauth2callback", s.handleCallback) + + s.server = &http.Server{ + Addr: fmt.Sprintf(":%d", s.port), + Handler: mux, + ReadTimeout: 10 * time.Second, + WriteTimeout: 10 * time.Second, + } + + s.running = true + + go func() { + if err := s.server.ListenAndServe(); err != nil && err != http.ErrServerClosed { + s.errChan <- err + } + }() + + time.Sleep(100 * time.Millisecond) + return nil +} + +// Stop gracefully terminates the callback listener. +func (s *OAuthServer) Stop(ctx context.Context) error { + s.mu.Lock() + defer s.mu.Unlock() + if !s.running || s.server == nil { + return nil + } + defer func() { + s.running = false + s.server = nil + }() + return s.server.Shutdown(ctx) +} + +// WaitForCallback blocks until a callback result, server error, or timeout occurs. 
+func (s *OAuthServer) WaitForCallback(timeout time.Duration) (*OAuthResult, error) { + select { + case res := <-s.result: + return res, nil + case err := <-s.errChan: + return nil, err + case <-time.After(timeout): + return nil, fmt.Errorf("timeout waiting for OAuth callback") + } +} + +func (s *OAuthServer) handleCallback(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodGet { + http.Error(w, "method not allowed", http.StatusMethodNotAllowed) + return + } + + query := r.URL.Query() + if errParam := strings.TrimSpace(query.Get("error")); errParam != "" { + s.sendResult(&OAuthResult{Error: errParam}) + http.Redirect(w, r, errorRedirectURL, http.StatusFound) + return + } + + code := strings.TrimSpace(query.Get("code")) + if code == "" { + s.sendResult(&OAuthResult{Error: "missing_code"}) + http.Redirect(w, r, errorRedirectURL, http.StatusFound) + return + } + + state := query.Get("state") + s.sendResult(&OAuthResult{Code: code, State: state}) + http.Redirect(w, r, SuccessRedirectURL, http.StatusFound) +} + +func (s *OAuthServer) sendResult(res *OAuthResult) { + select { + case s.result <- res: + default: + log.Debug("iflow oauth result channel full, dropping result") + } +} + +func (s *OAuthServer) isPortAvailable() bool { + addr := fmt.Sprintf(":%d", s.port) + listener, err := net.Listen("tcp", addr) + if err != nil { + return false + } + _ = listener.Close() + return true +} diff --git a/internal/auth/joycode/joycode_auth.go b/internal/auth/joycode/joycode_auth.go new file mode 100644 index 0000000000..2dc5948e02 --- /dev/null +++ b/internal/auth/joycode/joycode_auth.go @@ -0,0 +1,137 @@ +package joycode + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "time" + + log "github.com/sirupsen/logrus" +) + +const ( + APIBaseURL = "https://joycode-api.jd.com" + UserInfoPath = "/api/saas/user/v1/userInfo" + ModelListPath = "/api/saas/models/v1/modelList" + ChatPath = "/api/saas/openai/v1/chat/completions" + JoyCodeUA = 
"JoyCode/2.4.8 (Windows)" +) + +type JoyCodeAuth struct { + httpClient *http.Client +} + +func NewJoyCodeAuth(httpClient *http.Client) *JoyCodeAuth { + if httpClient == nil { + httpClient = &http.Client{Timeout: 30 * time.Second} + } + return &JoyCodeAuth{httpClient: httpClient} +} + +func (a *JoyCodeAuth) VerifyToken(ctx context.Context, ptKey string) (*JoyCodeTokenData, error) { + for _, loginType := range []string{"", "IDE", "ERP"} { + result, err := a.tryUserInfo(ctx, ptKey, loginType) + if err != nil { + log.Debugf("joycode: loginType=%s verify failed: %v", loginType, err) + continue + } + if result != nil { + return result, nil + } + } + return nil, fmt.Errorf("joycode: all loginType attempts failed") +} + +func (a *JoyCodeAuth) tryUserInfo(ctx context.Context, ptKey, loginType string) (*JoyCodeTokenData, error) { + payload, _ := json.Marshal(map[string]interface{}{}) + req, err := http.NewRequestWithContext(ctx, "POST", APIBaseURL+UserInfoPath, bytes.NewReader(payload)) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", "application/json") + req.Header.Set("ptKey", ptKey) + req.Header.Set("loginType", loginType) + req.Header.Set("User-Agent", JoyCodeUA) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, _ := io.ReadAll(resp.Body) + var result map[string]interface{} + if err := json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("joycode: failed to parse userInfo response: %w", err) + } + + code, _ := result["code"].(float64) + if int(code) != 0 { + msg, _ := result["msg"].(string) + return nil, fmt.Errorf("joycode: userInfo returned code=%v msg=%s", code, msg) + } + + data, _ := result["data"].(map[string]interface{}) + if data == nil { + return nil, fmt.Errorf("joycode: userInfo returned nil data") + } + + userID, _ := data["userId"].(string) + tenant, _ := data["tenant"].(string) + if tenant == "" { + tenant = 
"JD" + } + orgFullName, _ := data["orgFullName"].(string) + returnedPTKey, _ := data["ptKey"].(string) + if returnedPTKey == "" { + returnedPTKey = ptKey + } + effectiveLoginType := loginType + if effectiveLoginType == "" { + effectiveLoginType = "IDE" + } + + return &JoyCodeTokenData{ + PTKey: returnedPTKey, + UserID: userID, + Tenant: tenant, + OrgFullName: orgFullName, + LoginType: effectiveLoginType, + }, nil +} + +func (a *JoyCodeAuth) FetchModelList(ctx context.Context, ptKey string) ([]interface{}, error) { + payload, _ := json.Marshal(map[string]interface{}{}) + req, err := http.NewRequestWithContext(ctx, "POST", APIBaseURL+ModelListPath, bytes.NewReader(payload)) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", "application/json") + req.Header.Set("ptKey", ptKey) + req.Header.Set("User-Agent", JoyCodeUA) + + resp, err := a.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, _ := io.ReadAll(resp.Body) + var result map[string]interface{} + if err := json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("joycode: failed to parse model list response: %w", err) + } + + code, _ := result["code"].(float64) + if int(code) != 0 { + return nil, fmt.Errorf("joycode: model list returned code=%v", code) + } + + data, _ := result["data"].([]interface{}) + return data, nil +} diff --git a/internal/auth/joycode/models.go b/internal/auth/joycode/models.go new file mode 100644 index 0000000000..88d88ae95d --- /dev/null +++ b/internal/auth/joycode/models.go @@ -0,0 +1,12 @@ +package joycode + +import "time" + +type JoyCodeTokenData struct { + PTKey string `json:"ptKey"` + UserID string `json:"userId,omitempty"` + Tenant string `json:"tenant,omitempty"` + OrgFullName string `json:"orgFullName,omitempty"` + LoginType string `json:"loginType,omitempty"` + ExpiresAt time.Time `json:"expires_at,omitempty"` +} diff --git 
a/internal/auth/joycode/oauth_web.go b/internal/auth/joycode/oauth_web.go new file mode 100644 index 0000000000..e179d7692b --- /dev/null +++ b/internal/auth/joycode/oauth_web.go @@ -0,0 +1,369 @@ +package joycode + +import ( + "context" + "crypto/rand" + "encoding/base64" + "encoding/hex" + "encoding/json" + "fmt" + "net/http" + "os" + "path/filepath" + "sync" + "time" + + "github.com/gin-gonic/gin" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +type sessionStatus string + +const ( + jcPending sessionStatus = "pending" + jcWaiting sessionStatus = "waiting" + jcSuccess sessionStatus = "success" + jcFailed sessionStatus = "failed" +) + +type jcWebSession struct { + stateID string + authKey string + port int + status sessionStatus + startedAt time.Time + error string + token *JoyCodeTokenData + cancel context.CancelFunc +} + +type OAuthWebHandler struct { + cfg *config.Config + sessions map[string]*jcWebSession + mu sync.RWMutex + auth *JoyCodeAuth +} + +func NewOAuthWebHandler(cfg *config.Config) *OAuthWebHandler { + return &OAuthWebHandler{ + cfg: cfg, + sessions: make(map[string]*jcWebSession), + auth: NewJoyCodeAuth(nil), + } +} + +func (h *OAuthWebHandler) RegisterRoutes(router gin.IRouter) { + oauth := router.Group("/v0/oauth/joycode") + { + oauth.GET("", h.handleIndex) + oauth.GET("/start", h.handleStart) + oauth.GET("/callback", h.handleCallback) + oauth.GET("/status", h.handleStatus) + } + // JoyCode login page redirects to http://127.0.0.1:{port} with query params + router.GET("/joycode/callback", h.handleCallback) +} + +func generateJCState() (string, error) { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return "", err + } + return base64.RawURLEncoding.EncodeToString(b), nil +} + +func generateJCAuthKey() string { + b := make([]byte, 16) + rand.Read(b) + return hex.EncodeToString(b) +} + +func (h *OAuthWebHandler) 
handleIndex(c *gin.Context) { + c.Header("Content-Type", "text/html; charset=utf-8") + c.String(http.StatusOK, joyCodeLoginPage) +} + +// HandleCallback is the public accessor for handleCallback, used by the root-path handler in server.go. +func (h *OAuthWebHandler) HandleCallback(c *gin.Context) { + h.handleCallback(c) +} + +// HandleRootCallback intercepts root-path requests that contain JoyCode auth parameters. +// JoyCode login redirects to http://127.0.0.1:{port}/?authKey=...&pt_key=... +func (h *OAuthWebHandler) HandleRootCallback(c *gin.Context) { + if c.Request.URL.Path != "/" { + c.Next() + return + } + ptKey := c.Query("pt_key") + if ptKey == "" { + ptKey = c.Query("ptKey") + } + if ptKey == "" && c.Query("authKey") == "" { + c.Next() + return + } + h.handleCallback(c) + c.Abort() +} + +func (h *OAuthWebHandler) handleStart(c *gin.Context) { + stateID, err := generateJCState() + if err != nil { + c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to generate state"}) + return + } + + authKey := generateJCAuthKey() + + port := h.cfg.Port + if port == 0 { + port = 8318 + } + + sess := &jcWebSession{ + stateID: stateID, + authKey: authKey, + port: port, + status: jcWaiting, + startedAt: time.Now(), + } + + h.mu.Lock() + h.sessions[stateID] = sess + h.mu.Unlock() + + loginURL := fmt.Sprintf("https://joycode.jd.com/login/?ideAppName=JoyCode&fromIde=ide&redirect=0&authPort=%d&authKey=%s", port, authKey) + + log.Infof("JoyCode OAuth: session %s started, login URL: %s", stateID, loginURL) + + if c.GetHeader("Accept") == "application/json" { + c.JSON(http.StatusOK, gin.H{"url": loginURL, "state": stateID}) + return + } + + c.Header("Content-Type", "text/html; charset=utf-8") + c.String(http.StatusOK, fmt.Sprintf(joyCodeWaitingPage, loginURL, stateID)) +} + +func (h *OAuthWebHandler) handleCallback(c *gin.Context) { + authKey := c.Query("authKey") + ptKey := c.Query("pt_key") + if ptKey == "" { + ptKey = c.Query("ptKey") + } + + log.Infof("JoyCode OAuth: 
callback received, authKey=%s, ptKey_len=%d", authKey, len(ptKey)) + + if ptKey == "" { + c.JSON(http.StatusBadRequest, gin.H{"error": "missing pt_key parameter"}) + return + } + + h.mu.Lock() + var matchedSess *jcWebSession + + if authKey != "" { + for _, sess := range h.sessions { + if sess.authKey == authKey { + matchedSess = sess + break + } + } + } + + if matchedSess == nil { + var latestSess *jcWebSession + for _, sess := range h.sessions { + if sess.status == jcWaiting { + if latestSess == nil || sess.startedAt.After(latestSess.startedAt) { + latestSess = sess + } + } + } + if latestSess != nil { + matchedSess = latestSess + } + } + + if matchedSess != nil { + matchedSess.status = jcPending + } + h.mu.Unlock() + + if matchedSess != nil { + go h.verifyAndSave(matchedSess, ptKey) + } else { + log.Warn("JoyCode OAuth: no matching session found for callback") + } + + c.Header("Content-Type", "text/html; charset=utf-8") + c.String(http.StatusOK, `Authorization Successful

✓ Authorization Successful

Credential captured, syncing. Please return to the command line.

`) +} + +func (h *OAuthWebHandler) verifyAndSave(sess *jcWebSession, ptKey string) { + ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) + defer cancel() + sess.cancel = cancel + + log.Infof("JoyCode OAuth: verifying token for session %s", sess.stateID) + + tokenData, err := h.auth.VerifyToken(ctx, ptKey) + if err != nil { + h.mu.Lock() + sess.status = jcFailed + sess.error = err.Error() + h.mu.Unlock() + log.Errorf("JoyCode OAuth: token verification failed: %v", err) + return + } + + h.mu.Lock() + sess.status = jcSuccess + sess.token = tokenData + h.mu.Unlock() + + h.saveTokenToFile(tokenData) + log.Infof("JoyCode OAuth: authentication successful for user %s", tokenData.UserID) +} + +func (h *OAuthWebHandler) handleStatus(c *gin.Context) { + stateID := c.Query("state") + if stateID == "" { + c.JSON(http.StatusBadRequest, gin.H{"error": "missing state"}) + return + } + + h.mu.RLock() + sess, ok := h.sessions[stateID] + h.mu.RUnlock() + + if !ok { + c.JSON(http.StatusNotFound, gin.H{"error": "session not found"}) + return + } + + switch sess.status { + case jcSuccess: + msg := "Login successful! Token saved." + if sess.token != nil && sess.token.UserID != "" { + msg = fmt.Sprintf("Login successful! 
User: %s", sess.token.UserID) + } + c.JSON(http.StatusOK, gin.H{"status": "success", "message": msg}) + case jcFailed: + c.JSON(http.StatusOK, gin.H{"status": "failed", "error": sess.error}) + default: + c.JSON(http.StatusOK, gin.H{"status": "pending", "message": "Waiting for browser callback..."}) + } +} + +func (h *OAuthWebHandler) saveTokenToFile(tokenData *JoyCodeTokenData) { + authDir := "" + if h.cfg != nil && h.cfg.AuthDir != "" { + var err error + authDir, err = util.ResolveAuthDir(h.cfg.AuthDir) + if err != nil { + log.Errorf("JoyCode OAuth: failed to resolve auth directory: %v", err) + } + } + if authDir == "" { + home, err := os.UserHomeDir() + if err != nil { + log.Errorf("JoyCode OAuth: failed to get home directory: %v", err) + return + } + authDir = filepath.Join(home, ".cli-proxy-api") + } + if err := os.MkdirAll(authDir, 0700); err != nil { + log.Errorf("JoyCode OAuth: failed to create auth directory: %v", err) + return + } + + fileName := "joycode-token.json" + if tokenData.UserID != "" { + fileName = fmt.Sprintf("joycode-%s.json", tokenData.UserID) + } + + storage := map[string]interface{}{ + "type": "joycode", + "ptKey": tokenData.PTKey, + "userId": tokenData.UserID, + "tenant": tokenData.Tenant, + "orgFullName": tokenData.OrgFullName, + "loginType": tokenData.LoginType, + "last_refresh": time.Now().Format(time.RFC3339), + } + + data, err := json.MarshalIndent(storage, "", " ") + if err != nil { + log.Errorf("JoyCode OAuth: failed to marshal token: %v", err) + return + } + + authFilePath := filepath.Join(authDir, fileName) + if err := os.WriteFile(authFilePath, data, 0600); err != nil { + log.Errorf("JoyCode OAuth: failed to write auth file: %v", err) + return + } + log.Infof("JoyCode OAuth: token saved to %s", authFilePath) +} + +const joyCodeLoginPage = ` +JoyCode Login + +
+

🔑 JoyCode Login

+

Login with your JD account to use JoyCode models through CLIProxyAPI.

+Start Login +
` + +const joyCodeWaitingPage = ` +JoyCode Login - Waiting + +
+

🔑 JoyCode Login

+

Click the button below to open JoyCode login page. After login, credentials will be captured automatically.

+Open JoyCode Login +
⏳ Waiting for login callback...
+
+` diff --git a/internal/auth/kilo/kilo_auth.go b/internal/auth/kilo/kilo_auth.go new file mode 100644 index 0000000000..dc128bf204 --- /dev/null +++ b/internal/auth/kilo/kilo_auth.go @@ -0,0 +1,168 @@ +// Package kilo provides authentication and token management functionality +// for Kilo AI services. +package kilo + +import ( + "context" + "encoding/json" + "fmt" + "net/http" + "time" +) + +const ( + // BaseURL is the base URL for the Kilo AI API. + BaseURL = "https://api.kilo.ai/api" +) + +// DeviceAuthResponse represents the response from initiating device flow. +type DeviceAuthResponse struct { + Code string `json:"code"` + VerificationURL string `json:"verificationUrl"` + ExpiresIn int `json:"expiresIn"` +} + +// DeviceStatusResponse represents the response when polling for device flow status. +type DeviceStatusResponse struct { + Status string `json:"status"` + Token string `json:"token"` + UserEmail string `json:"userEmail"` +} + +// Profile represents the user profile from Kilo AI. +type Profile struct { + Email string `json:"email"` + Orgs []Organization `json:"organizations"` +} + +// Organization represents a Kilo AI organization. +type Organization struct { + ID string `json:"id"` + Name string `json:"name"` +} + +// Defaults represents default settings for an organization or user. +type Defaults struct { + Model string `json:"model"` +} + +// KiloAuth provides methods for handling the Kilo AI authentication flow. +type KiloAuth struct { + client *http.Client +} + +// NewKiloAuth creates a new instance of KiloAuth. +func NewKiloAuth() *KiloAuth { + return &KiloAuth{ + client: &http.Client{Timeout: 30 * time.Second}, + } +} + +// InitiateDeviceFlow starts the device authentication flow. 
+func (k *KiloAuth) InitiateDeviceFlow(ctx context.Context) (*DeviceAuthResponse, error) { + resp, err := k.client.Post(BaseURL+"/device-auth/codes", "application/json", nil) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("failed to initiate device flow: status %d", resp.StatusCode) + } + + var data DeviceAuthResponse + if err := json.NewDecoder(resp.Body).Decode(&data); err != nil { + return nil, err + } + return &data, nil +} + +// PollForToken polls for the device flow completion. +func (k *KiloAuth) PollForToken(ctx context.Context, code string) (*DeviceStatusResponse, error) { + ticker := time.NewTicker(5 * time.Second) + defer ticker.Stop() + + for { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-ticker.C: + resp, err := k.client.Get(BaseURL + "/device-auth/codes/" + code) + if err != nil { + return nil, err + } + var data DeviceStatusResponse + errDecode := json.NewDecoder(resp.Body).Decode(&data) + // Close inside the loop: a defer here would keep every response body open until the function returns. + _ = resp.Body.Close() + if errDecode != nil { + return nil, errDecode + } + + switch data.Status { + case "approved": + return &data, nil + case "denied", "expired": + return nil, fmt.Errorf("device flow %s", data.Status) + case "pending": + continue + default: + return nil, fmt.Errorf("unknown status: %s", data.Status) + } + } + } +} + +// GetProfile fetches the user's profile. 
+func (k *KiloAuth) GetProfile(ctx context.Context, token string) (*Profile, error) { + req, err := http.NewRequestWithContext(ctx, "GET", BaseURL+"/profile", nil) + if err != nil { + return nil, fmt.Errorf("failed to create get profile request: %w", err) + } + req.Header.Set("Authorization", "Bearer "+token) + + resp, err := k.client.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("failed to get profile: status %d", resp.StatusCode) + } + + var profile Profile + if err := json.NewDecoder(resp.Body).Decode(&profile); err != nil { + return nil, err + } + return &profile, nil +} + +// GetDefaults fetches default settings for an organization. +func (k *KiloAuth) GetDefaults(ctx context.Context, token, orgID string) (*Defaults, error) { + url := BaseURL + "/defaults" + if orgID != "" { + url = BaseURL + "/organizations/" + orgID + "/defaults" + } + + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) + if err != nil { + return nil, fmt.Errorf("failed to create get defaults request: %w", err) + } + req.Header.Set("Authorization", "Bearer "+token) + + resp, err := k.client.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("failed to get defaults: status %d", resp.StatusCode) + } + + var defaults Defaults + if err := json.NewDecoder(resp.Body).Decode(&defaults); err != nil { + return nil, err + } + return &defaults, nil +} diff --git a/internal/auth/kilo/kilo_token.go b/internal/auth/kilo/kilo_token.go new file mode 100644 index 0000000000..faf98cc40d --- /dev/null +++ b/internal/auth/kilo/kilo_token.go @@ -0,0 +1,60 @@ +// Package kilo provides authentication and token management functionality +// for Kilo AI services. 
+package kilo + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + log "github.com/sirupsen/logrus" +) + +// KiloTokenStorage stores token information for Kilo AI authentication. +type KiloTokenStorage struct { + // Token is the Kilo access token. + Token string `json:"kilocodeToken"` + + // OrganizationID is the Kilo organization ID. + OrganizationID string `json:"kilocodeOrganizationId"` + + // Model is the default model to use. + Model string `json:"kilocodeModel"` + + // Email is the email address of the authenticated user. + Email string `json:"email"` + + // Type indicates the authentication provider type, always "kilo" for this storage. + Type string `json:"type"` +} + +// SaveTokenToFile serializes the Kilo token storage to a JSON file. +func (ts *KiloTokenStorage) SaveTokenToFile(authFilePath string) error { + misc.LogSavingCredentials(authFilePath) + ts.Type = "kilo" + if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil { + return fmt.Errorf("failed to create directory: %v", err) + } + + f, err := os.Create(authFilePath) + if err != nil { + return fmt.Errorf("failed to create token file: %w", err) + } + defer func() { + if errClose := f.Close(); errClose != nil { + log.Errorf("failed to close file: %v", errClose) + } + }() + + if err = json.NewEncoder(f).Encode(ts); err != nil { + return fmt.Errorf("failed to write token to file: %w", err) + } + return nil +} + +// CredentialFileName returns the filename used to persist Kilo credentials. 
+func CredentialFileName(email string) string { + return fmt.Sprintf("kilo-%s.json", email) +} diff --git a/internal/auth/kimi/kimi.go b/internal/auth/kimi/kimi.go index ccb1a6c2ff..27c5f73b42 100644 --- a/internal/auth/kimi/kimi.go +++ b/internal/auth/kimi/kimi.go @@ -15,8 +15,8 @@ import ( "time" "github.com/google/uuid" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" ) diff --git a/internal/auth/kimi/kimi_proxy_test.go b/internal/auth/kimi/kimi_proxy_test.go index 130f34f52b..a95ba01dba 100644 --- a/internal/auth/kimi/kimi_proxy_test.go +++ b/internal/auth/kimi/kimi_proxy_test.go @@ -4,7 +4,7 @@ import ( "net/http" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func TestNewDeviceFlowClientWithDeviceIDAndProxyURL_OverrideDirectDisablesProxy(t *testing.T) { diff --git a/internal/auth/kimi/token.go b/internal/auth/kimi/token.go index 7320d760ef..347b546cbd 100644 --- a/internal/auth/kimi/token.go +++ b/internal/auth/kimi/token.go @@ -10,7 +10,7 @@ import ( "path/filepath" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" ) // KimiTokenStorage stores OAuth2 token information for Kimi API authentication. diff --git a/internal/auth/kiro/aws.go b/internal/auth/kiro/aws.go new file mode 100644 index 0000000000..572050c5e3 --- /dev/null +++ b/internal/auth/kiro/aws.go @@ -0,0 +1,681 @@ +// Package kiro provides authentication functionality for AWS CodeWhisperer (Kiro) API. +// It includes interfaces and implementations for token storage and authentication methods. 
+package kiro + +import ( + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "net/url" + "os" + "path/filepath" + "strings" + "time" + + log "github.com/sirupsen/logrus" +) + +// PKCECodes holds PKCE verification codes for OAuth2 PKCE flow +type PKCECodes struct { + // CodeVerifier is the cryptographically random string used to correlate + // the authorization request to the token request + CodeVerifier string `json:"code_verifier"` + // CodeChallenge is the SHA256 hash of the code verifier, base64url-encoded + CodeChallenge string `json:"code_challenge"` +} + +// KiroTokenData holds OAuth token information from AWS CodeWhisperer (Kiro) +type KiroTokenData struct { + // AccessToken is the OAuth2 access token for API access + AccessToken string `json:"accessToken"` + // RefreshToken is used to obtain new access tokens + RefreshToken string `json:"refreshToken"` + // ProfileArn is the AWS CodeWhisperer profile ARN + ProfileArn string `json:"profileArn"` + // ExpiresAt is the timestamp when the token expires + ExpiresAt string `json:"expiresAt"` + // AuthMethod indicates the authentication method used (e.g., "builder-id", "social", "idc") + AuthMethod string `json:"authMethod"` + // Provider indicates the OAuth provider (e.g., "AWS", "Google", "Enterprise") + Provider string `json:"provider"` + // ClientID is the OIDC client ID (needed for token refresh) + ClientID string `json:"clientId,omitempty"` + // ClientSecret is the OIDC client secret (needed for token refresh) + ClientSecret string `json:"clientSecret,omitempty"` + // ClientIDHash is the hash of client ID used to locate device registration file + // (Enterprise Kiro IDE stores clientId/clientSecret in ~/.aws/sso/cache/{clientIdHash}.json) + ClientIDHash string `json:"clientIdHash,omitempty"` + // Email is the user's email address (used for file naming) + Email string `json:"email,omitempty"` + // StartURL is the IDC/Identity Center start URL (only for IDC auth method) + StartURL string 
`json:"startUrl,omitempty"` + // Region is the OIDC region for IDC login and token refresh + Region string `json:"region,omitempty"` +} + +// KiroAuthBundle aggregates authentication data after OAuth flow completion +type KiroAuthBundle struct { + // TokenData contains the OAuth tokens from the authentication flow + TokenData KiroTokenData `json:"token_data"` + // LastRefresh is the timestamp of the last token refresh + LastRefresh string `json:"last_refresh"` +} + +// KiroUsageInfo represents usage information from CodeWhisperer API +type KiroUsageInfo struct { + // SubscriptionTitle is the subscription plan name (e.g., "KIRO FREE") + SubscriptionTitle string `json:"subscription_title"` + // CurrentUsage is the current credit usage + CurrentUsage float64 `json:"current_usage"` + // UsageLimit is the maximum credit limit + UsageLimit float64 `json:"usage_limit"` + // NextReset is the timestamp of the next usage reset + NextReset string `json:"next_reset"` +} + +// KiroModel represents a model available through the CodeWhisperer API +type KiroModel struct { + // ModelID is the unique identifier for the model + ModelID string `json:"modelId"` + // ModelName is the human-readable name + ModelName string `json:"modelName"` + // Description is the model description + Description string `json:"description"` + // RateMultiplier is the credit multiplier for this model + RateMultiplier float64 `json:"rateMultiplier"` + // RateUnit is the unit for rate calculation (e.g., "credit") + RateUnit string `json:"rateUnit"` + // MaxInputTokens is the maximum input token limit + MaxInputTokens int `json:"maxInputTokens,omitempty"` +} + +// KiroIDETokenFile is the default path to Kiro IDE's token file +const KiroIDETokenFile = ".aws/sso/cache/kiro-auth-token.json" + +// Default retry configuration for file reading +const ( + defaultTokenReadMaxAttempts = 10 // Maximum retry attempts + defaultTokenReadBaseDelay = 50 * time.Millisecond // Base delay between retries +) + +// 
isTransientFileError checks if the error is a transient file access error +// that may be resolved by retrying (e.g., file locked by another process on Windows). +func isTransientFileError(err error) bool { + if err == nil { + return false + } + + // Check for OS-level file access errors (Windows sharing violation, etc.) + var pathErr *os.PathError + if errors.As(err, &pathErr) { + // Windows sharing violation (ERROR_SHARING_VIOLATION = 32) + // Windows lock violation (ERROR_LOCK_VIOLATION = 33) + errStr := pathErr.Err.Error() + if strings.Contains(errStr, "being used by another process") || + strings.Contains(errStr, "sharing violation") || + strings.Contains(errStr, "lock violation") { + return true + } + } + + // Check error message for common transient patterns + errMsg := strings.ToLower(err.Error()) + transientPatterns := []string{ + "being used by another process", + "sharing violation", + "lock violation", + "access is denied", + "unexpected end of json", + "unexpected eof", + } + for _, pattern := range transientPatterns { + if strings.Contains(errMsg, pattern) { + return true + } + } + + return false +} + +// LoadKiroIDETokenWithRetry loads token data from Kiro IDE's token file with retry logic. +// This handles transient file access errors (e.g., file locked by Kiro IDE during write). 
+// maxAttempts: maximum number of retry attempts (default 10 if <= 0)
+// baseDelay: base delay between retries with exponential backoff (default 50ms if <= 0)
+func LoadKiroIDETokenWithRetry(maxAttempts int, baseDelay time.Duration) (*KiroTokenData, error) {
+	if maxAttempts <= 0 {
+		maxAttempts = defaultTokenReadMaxAttempts
+	}
+	if baseDelay <= 0 {
+		baseDelay = defaultTokenReadBaseDelay
+	}
+
+	var lastErr error
+	for attempt := 0; attempt < maxAttempts; attempt++ {
+		token, err := LoadKiroIDEToken()
+		if err == nil {
+			return token, nil
+		}
+		lastErr = err
+
+		// Only retry for transient errors
+		if !isTransientFileError(err) {
+			return nil, err
+		}
+
+		// Exponential backoff: delay * 2^attempt, capped at 500ms
+		delay := baseDelay * time.Duration(1<<attempt)
+		if delay > 500*time.Millisecond {
+			delay = 500 * time.Millisecond
+		}
+		time.Sleep(delay)
+	}
+
+	return nil, fmt.Errorf("failed to read token file after %d attempts: %w", maxAttempts, lastErr)
+}
+
+// LoadKiroIDEToken loads token data from Kiro IDE's token file.
+// For Enterprise Kiro IDE (IDC auth), it also loads clientId and clientSecret
+// from the device registration file referenced by clientIdHash.
+func LoadKiroIDEToken() (*KiroTokenData, error) { + homeDir, err := os.UserHomeDir() + if err != nil { + return nil, fmt.Errorf("failed to get home directory: %w", err) + } + + tokenPath := filepath.Join(homeDir, KiroIDETokenFile) + data, err := os.ReadFile(tokenPath) + if err != nil { + return nil, fmt.Errorf("failed to read Kiro IDE token file (%s): %w", tokenPath, err) + } + + var token KiroTokenData + if err := json.Unmarshal(data, &token); err != nil { + return nil, fmt.Errorf("failed to parse Kiro IDE token: %w", err) + } + + if token.AccessToken == "" { + return nil, fmt.Errorf("access token is empty in Kiro IDE token file") + } + + // Normalize AuthMethod to lowercase (Kiro IDE uses "IdC" but we expect "idc") + token.AuthMethod = strings.ToLower(token.AuthMethod) + + // For Enterprise Kiro IDE (IDC auth), load clientId and clientSecret from device registration + // The device registration file is located at ~/.aws/sso/cache/{clientIdHash}.json + if token.ClientIDHash != "" && token.ClientID == "" { + if err := loadDeviceRegistration(homeDir, token.ClientIDHash, &token); err != nil { + // Log warning but don't fail - token might still work for some operations + fmt.Printf("warning: failed to load device registration for clientIdHash %s: %v\n", token.ClientIDHash, err) + } + } + + return &token, nil +} + +// loadDeviceRegistration loads clientId and clientSecret from the device registration file. 
+// Enterprise Kiro IDE stores these in ~/.aws/sso/cache/{clientIdHash}.json +func loadDeviceRegistration(homeDir, clientIDHash string, token *KiroTokenData) error { + if clientIDHash == "" { + return fmt.Errorf("clientIdHash is empty") + } + + // Sanitize clientIdHash to prevent path traversal + if strings.Contains(clientIDHash, "/") || strings.Contains(clientIDHash, "\\") || strings.Contains(clientIDHash, "..") { + return fmt.Errorf("invalid clientIdHash: contains path separator") + } + + deviceRegPath := filepath.Join(homeDir, ".aws", "sso", "cache", clientIDHash+".json") + data, err := os.ReadFile(deviceRegPath) + if err != nil { + return fmt.Errorf("failed to read device registration file (%s): %w", deviceRegPath, err) + } + + // Device registration file structure + var deviceReg struct { + ClientID string `json:"clientId"` + ClientSecret string `json:"clientSecret"` + ExpiresAt string `json:"expiresAt"` + } + + if err := json.Unmarshal(data, &deviceReg); err != nil { + return fmt.Errorf("failed to parse device registration: %w", err) + } + + if deviceReg.ClientID == "" || deviceReg.ClientSecret == "" { + return fmt.Errorf("device registration missing clientId or clientSecret") + } + + token.ClientID = deviceReg.ClientID + token.ClientSecret = deviceReg.ClientSecret + + return nil +} + +// LoadKiroTokenFromPath loads token data from a custom path. +// This supports multiple accounts by allowing different token files. +// For Enterprise Kiro IDE (IDC auth), it also loads clientId and clientSecret +// from the device registration file referenced by clientIdHash. 
+func LoadKiroTokenFromPath(tokenPath string) (*KiroTokenData, error) { + homeDir, err := os.UserHomeDir() + if err != nil { + return nil, fmt.Errorf("failed to get home directory: %w", err) + } + + // Expand ~ to home directory + if len(tokenPath) > 0 && tokenPath[0] == '~' { + tokenPath = filepath.Join(homeDir, tokenPath[1:]) + } + + data, err := os.ReadFile(tokenPath) + if err != nil { + return nil, fmt.Errorf("failed to read token file (%s): %w", tokenPath, err) + } + + var token KiroTokenData + if err := json.Unmarshal(data, &token); err != nil { + return nil, fmt.Errorf("failed to parse token file: %w", err) + } + + if token.AccessToken == "" { + return nil, fmt.Errorf("access token is empty in token file") + } + + // Normalize AuthMethod to lowercase (Kiro IDE uses "IdC" but we expect "idc") + token.AuthMethod = strings.ToLower(token.AuthMethod) + + // For Enterprise Kiro IDE (IDC auth), load clientId and clientSecret from device registration + if token.ClientIDHash != "" && token.ClientID == "" { + if err := loadDeviceRegistration(homeDir, token.ClientIDHash, &token); err != nil { + // Log warning but don't fail - token might still work for some operations + fmt.Printf("warning: failed to load device registration for clientIdHash %s: %v\n", token.ClientIDHash, err) + } + } + + return &token, nil +} + +// ListKiroTokenFiles lists all Kiro token files in the cache directory. +// This supports multiple accounts by finding all token files. 
+func ListKiroTokenFiles() ([]string, error) { + homeDir, err := os.UserHomeDir() + if err != nil { + return nil, fmt.Errorf("failed to get home directory: %w", err) + } + + cacheDir := filepath.Join(homeDir, ".aws", "sso", "cache") + + // Check if directory exists + if _, err := os.Stat(cacheDir); os.IsNotExist(err) { + return nil, nil // No token files + } + + entries, err := os.ReadDir(cacheDir) + if err != nil { + return nil, fmt.Errorf("failed to read cache directory: %w", err) + } + + var tokenFiles []string + for _, entry := range entries { + if entry.IsDir() { + continue + } + name := entry.Name() + // Look for kiro token files only (avoid matching unrelated AWS SSO cache files) + if strings.HasSuffix(name, ".json") && strings.HasPrefix(name, "kiro") { + tokenFiles = append(tokenFiles, filepath.Join(cacheDir, name)) + } + } + + return tokenFiles, nil +} + +// LoadAllKiroTokens loads all Kiro tokens from the cache directory. +// This supports multiple accounts. +func LoadAllKiroTokens() ([]*KiroTokenData, error) { + files, err := ListKiroTokenFiles() + if err != nil { + return nil, err + } + + var tokens []*KiroTokenData + for _, file := range files { + token, err := LoadKiroTokenFromPath(file) + if err != nil { + // Skip invalid token files + continue + } + tokens = append(tokens, token) + } + + return tokens, nil +} + +// JWTClaims represents the claims we care about from a JWT token. +// JWT tokens from Kiro/AWS contain user information in the payload. +type JWTClaims struct { + Email string `json:"email,omitempty"` + Sub string `json:"sub,omitempty"` + PreferredUser string `json:"preferred_username,omitempty"` + Name string `json:"name,omitempty"` + Iss string `json:"iss,omitempty"` +} + +// ExtractEmailFromJWT extracts the user's email from a JWT access token. +// JWT tokens typically have format: header.payload.signature +// The payload is base64url-encoded JSON containing user claims. 
+func ExtractEmailFromJWT(accessToken string) string { + if accessToken == "" { + return "" + } + + // JWT format: header.payload.signature + parts := strings.Split(accessToken, ".") + if len(parts) != 3 { + return "" + } + + // Decode the payload (second part) + payload := parts[1] + + // Add padding if needed (base64url requires padding) + switch len(payload) % 4 { + case 2: + payload += "==" + case 3: + payload += "=" + } + + decoded, err := base64.URLEncoding.DecodeString(payload) + if err != nil { + // Try RawURLEncoding (no padding) + decoded, err = base64.RawURLEncoding.DecodeString(parts[1]) + if err != nil { + return "" + } + } + + var claims JWTClaims + if err := json.Unmarshal(decoded, &claims); err != nil { + return "" + } + + // Return email if available + if claims.Email != "" { + return claims.Email + } + + // Fallback to preferred_username (some providers use this) + if claims.PreferredUser != "" && strings.Contains(claims.PreferredUser, "@") { + return claims.PreferredUser + } + + // Fallback to sub if it looks like an email + if claims.Sub != "" && strings.Contains(claims.Sub, "@") { + return claims.Sub + } + + return "" +} + +// SanitizeEmailForFilename sanitizes an email address for use in a filename. +// Replaces special characters with underscores and prevents path traversal attacks. +// Also handles URL-encoded characters to prevent encoded path traversal attempts. +func SanitizeEmailForFilename(email string) string { + if email == "" { + return "" + } + + result := email + + // First, handle URL-encoded path traversal attempts (%2F, %2E, %5C, etc.) + // This prevents encoded characters from bypassing the sanitization. 
+ // Note: We replace % last to catch any remaining encodings including double-encoding (%252F) + result = strings.ReplaceAll(result, "%2F", "_") // / + result = strings.ReplaceAll(result, "%2f", "_") + result = strings.ReplaceAll(result, "%5C", "_") // \ + result = strings.ReplaceAll(result, "%5c", "_") + result = strings.ReplaceAll(result, "%2E", "_") // . + result = strings.ReplaceAll(result, "%2e", "_") + result = strings.ReplaceAll(result, "%00", "_") // null byte + result = strings.ReplaceAll(result, "%", "_") // Catch remaining % to prevent double-encoding attacks + + // Replace characters that are problematic in filenames + // Keep @ and . in middle but replace other special characters + for _, char := range []string{"/", "\\", ":", "*", "?", "\"", "<", ">", "|", " ", "\x00"} { + result = strings.ReplaceAll(result, char, "_") + } + + // Prevent path traversal: replace leading dots in each path component + // This handles cases like "../../../etc/passwd" → "_.._.._.._etc_passwd" + parts := strings.Split(result, "_") + for i, part := range parts { + for strings.HasPrefix(part, ".") { + part = "_" + part[1:] + } + parts[i] = part + } + result = strings.Join(parts, "_") + + return result +} + +// ExtractIDCIdentifier extracts a unique identifier from IDC startUrl. 
+// Examples: +// - "https://d-1234567890.awsapps.com/start" -> "d-1234567890" +// - "https://my-company.awsapps.com/start" -> "my-company" +// - "https://acme-corp.awsapps.com/start" -> "acme-corp" +func ExtractIDCIdentifier(startURL string) string { + if startURL == "" { + return "" + } + + // Remove protocol prefix + url := strings.TrimPrefix(startURL, "https://") + url = strings.TrimPrefix(url, "http://") + + // Extract subdomain (first part before the first dot) + // Format: {identifier}.awsapps.com/start + parts := strings.Split(url, ".") + if len(parts) > 0 && parts[0] != "" { + identifier := parts[0] + // Sanitize for filename safety + identifier = strings.ReplaceAll(identifier, "/", "_") + identifier = strings.ReplaceAll(identifier, "\\", "_") + identifier = strings.ReplaceAll(identifier, ":", "_") + return identifier + } + + return "" +} + +// GenerateTokenFileName generates a unique filename for token storage. +// Priority: email > startUrl identifier (for IDC) > authMethod only +// Email is unique, so no sequence suffix needed. Sequence is only added +// when email is unavailable to prevent filename collisions. +// Format: kiro-{authMethod}-{identifier}[-{seq}].json +func GenerateTokenFileName(tokenData *KiroTokenData) string { + authMethod := tokenData.AuthMethod + if authMethod == "" { + authMethod = "unknown" + } + + // Priority 1: Use email if available (no sequence needed, email is unique) + if tokenData.Email != "" { + // Sanitize email for filename (replace @ and . 
with -) + sanitizedEmail := tokenData.Email + sanitizedEmail = strings.ReplaceAll(sanitizedEmail, "@", "-") + sanitizedEmail = strings.ReplaceAll(sanitizedEmail, ".", "-") + return fmt.Sprintf("kiro-%s-%s.json", authMethod, sanitizedEmail) + } + + // Generate sequence only when email is unavailable + seq := time.Now().UnixNano() % 100000 + + // Priority 2: For IDC, use startUrl identifier with sequence + if authMethod == "idc" && tokenData.StartURL != "" { + identifier := ExtractIDCIdentifier(tokenData.StartURL) + if identifier != "" { + return fmt.Sprintf("kiro-%s-%s-%05d.json", authMethod, identifier, seq) + } + } + + // Priority 3: Fallback to authMethod only with sequence + return fmt.Sprintf("kiro-%s-%05d.json", authMethod, seq) +} + +// DefaultKiroRegion is the fallback region when none is specified. +const DefaultKiroRegion = "us-east-1" + +// GetCodeWhispererLegacyEndpoint returns the legacy CodeWhisperer JSON-RPC endpoint. +// This endpoint supports JSON-RPC style requests with x-amz-target headers. +// The Q endpoint (q.{region}.amazonaws.com) does NOT support JSON-RPC style. +func GetCodeWhispererLegacyEndpoint(region string) string { + if region == "" { + region = DefaultKiroRegion + } + return "https://codewhisperer." + region + ".amazonaws.com" +} + +// ProfileARN represents a parsed AWS CodeWhisperer profile ARN. +// ARN format: arn:partition:service:region:account-id:resource-type/resource-id +// Example: arn:aws:codewhisperer:us-east-1:123456789012:profile/ABCDEFGHIJKL +type ProfileARN struct { + // Raw is the original ARN string + Raw string + // Partition is the AWS partition (aws) + Partition string + // Service is the AWS service name (codewhisperer) + Service string + // Region is the AWS region (us-east-1, ap-southeast-1, etc.) 
+ Region string + // AccountID is the AWS account ID + AccountID string + // ResourceType is the resource type (profile) + ResourceType string + // ResourceID is the resource identifier (e.g., ABCDEFGHIJKL) + ResourceID string +} + +// ParseProfileARN parses an AWS ARN string into a ProfileARN struct. +// Returns nil if the ARN is empty, invalid, or not a codewhisperer ARN. +func ParseProfileARN(arn string) *ProfileARN { + if arn == "" { + return nil + } + // ARN format: arn:partition:service:region:account-id:resource + // Minimum 6 parts separated by ":" + parts := strings.Split(arn, ":") + if len(parts) < 6 { + log.Warnf("invalid ARN format: %s", arn) + return nil + } + // Validate ARN prefix + if parts[0] != "arn" { + return nil + } + // Validate partition + partition := parts[1] + if partition == "" { + return nil + } + // Validate service is codewhisperer + service := parts[2] + if service != "codewhisperer" { + return nil + } + // Validate region format (must contain "-") + region := parts[3] + if region == "" || !strings.Contains(region, "-") { + return nil + } + // Account ID + accountID := parts[4] + + // Parse resource (format: resource-type/resource-id) + // Join remaining parts in case resource contains ":" + resource := strings.Join(parts[5:], ":") + resourceType := "" + resourceID := "" + if idx := strings.Index(resource, "/"); idx > 0 { + resourceType = resource[:idx] + resourceID = resource[idx+1:] + } else { + resourceType = resource + } + + return &ProfileARN{ + Raw: arn, + Partition: partition, + Service: service, + Region: region, + AccountID: accountID, + ResourceType: resourceType, + ResourceID: resourceID, + } +} + +// GetKiroAPIEndpoint returns the Q API endpoint for the specified region. +// If region is empty, defaults to us-east-1. +func GetKiroAPIEndpoint(region string) string { + if region == "" { + region = DefaultKiroRegion + } + return "https://q." 
+ region + ".amazonaws.com" +} + +// GetKiroAPIEndpointFromProfileArn extracts region from profileArn and returns the endpoint. +// Returns default us-east-1 endpoint if region cannot be extracted. +func GetKiroAPIEndpointFromProfileArn(profileArn string) string { + region := ExtractRegionFromProfileArn(profileArn) + return GetKiroAPIEndpoint(region) +} + +// ExtractRegionFromProfileArn extracts the AWS region from a ProfileARN string. +// Returns empty string if ARN is invalid or region cannot be extracted. +func ExtractRegionFromProfileArn(profileArn string) string { + parsed := ParseProfileARN(profileArn) + if parsed == nil { + return "" + } + return parsed.Region +} + +// ExtractRegionFromMetadata extracts API region from auth metadata. +// Priority: api_region > profile_arn > DefaultKiroRegion +func ExtractRegionFromMetadata(metadata map[string]interface{}) string { + if metadata == nil { + return DefaultKiroRegion + } + + // Priority 1: Explicit api_region override + if r, ok := metadata["api_region"].(string); ok && r != "" { + return r + } + + // Priority 2: Extract from ProfileARN + if profileArn, ok := metadata["profile_arn"].(string); ok && profileArn != "" { + if region := ExtractRegionFromProfileArn(profileArn); region != "" { + return region + } + } + + return DefaultKiroRegion +} + +func buildURL(endpoint, path string, queryParams map[string]string) string { + fullURL := fmt.Sprintf("%s/%s", endpoint, path) + if len(queryParams) > 0 { + values := url.Values{} + for key, value := range queryParams { + if value == "" { + continue + } + values.Set(key, value) + } + if encoded := values.Encode(); encoded != "" { + fullURL = fullURL + "?" 
+ encoded + } + } + return fullURL +} diff --git a/internal/auth/kiro/aws_auth.go b/internal/auth/kiro/aws_auth.go new file mode 100644 index 0000000000..5f5abc7923 --- /dev/null +++ b/internal/auth/kiro/aws_auth.go @@ -0,0 +1,326 @@ +// Package kiro provides OAuth2 authentication functionality for AWS CodeWhisperer (Kiro) API. +// This package implements token loading, refresh, and API communication with CodeWhisperer. +package kiro + +import ( + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "os" + "path/filepath" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + pathGetUsageLimits = "getUsageLimits" + pathListAvailableModels = "ListAvailableModels" +) + +// KiroAuth handles AWS CodeWhisperer authentication and API communication. +// It provides methods for loading tokens, refreshing expired tokens, +// and communicating with the CodeWhisperer API. +type KiroAuth struct { + httpClient *http.Client +} + +// NewKiroAuth creates a new Kiro authentication service. +// It initializes the HTTP client with proxy settings from the configuration. +// +// Parameters: +// - cfg: The application configuration containing proxy settings +// +// Returns: +// - *KiroAuth: A new Kiro authentication service instance +func NewKiroAuth(cfg *config.Config) *KiroAuth { + return &KiroAuth{ + httpClient: util.SetProxy(&cfg.SDKConfig, &http.Client{Timeout: 120 * time.Second}), + } +} + +// LoadTokenFromFile loads token data from a file path. +// This method reads and parses the token file, expanding ~ to the home directory. 
+// +// Parameters: +// - tokenFile: Path to the token file (supports ~ expansion) +// +// Returns: +// - *KiroTokenData: The parsed token data +// - error: An error if file reading or parsing fails +func (k *KiroAuth) LoadTokenFromFile(tokenFile string) (*KiroTokenData, error) { + // Expand ~ to home directory + if strings.HasPrefix(tokenFile, "~") { + home, err := os.UserHomeDir() + if err != nil { + return nil, fmt.Errorf("failed to get home directory: %w", err) + } + tokenFile = filepath.Join(home, tokenFile[1:]) + } + + data, err := os.ReadFile(tokenFile) + if err != nil { + return nil, fmt.Errorf("failed to read token file: %w", err) + } + + var tokenData KiroTokenData + if err := json.Unmarshal(data, &tokenData); err != nil { + return nil, fmt.Errorf("failed to parse token file: %w", err) + } + + return &tokenData, nil +} + +// IsTokenExpired checks if the token has expired. +// This method parses the expiration timestamp and compares it with the current time. +// +// Parameters: +// - tokenData: The token data to check +// +// Returns: +// - bool: True if the token has expired, false otherwise +func (k *KiroAuth) IsTokenExpired(tokenData *KiroTokenData) bool { + if tokenData.ExpiresAt == "" { + return true + } + + expiresAt, err := time.Parse(time.RFC3339, tokenData.ExpiresAt) + if err != nil { + // Try alternate format + expiresAt, err = time.Parse("2006-01-02T15:04:05.000Z", tokenData.ExpiresAt) + if err != nil { + return true + } + } + + return time.Now().After(expiresAt) +} + +// makeRequest sends a REST-style GET request to the CodeWhisperer API. 
+// +// Parameters: +// - ctx: The context for the request +// - path: The API path (e.g., "getUsageLimits") +// - tokenData: The token data containing access token, refresh token, and profile ARN +// - queryParams: Query parameters to add to the URL +// +// Returns: +// - []byte: The response body +// - error: An error if the request fails +func (k *KiroAuth) makeRequest(ctx context.Context, path string, tokenData *KiroTokenData, queryParams map[string]string) ([]byte, error) { + // Get endpoint from profileArn (defaults to us-east-1 if empty) + profileArn := queryParams["profileArn"] + endpoint := GetKiroAPIEndpointFromProfileArn(profileArn) + url := buildURL(endpoint, path, queryParams) + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) + if err != nil { + return nil, fmt.Errorf("failed to create request: %w", err) + } + + accountKey := GetAccountKey(tokenData.ClientID, tokenData.RefreshToken) + setRuntimeHeaders(req, tokenData.AccessToken, accountKey) + + resp, err := k.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("request failed: %w", err) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("failed to close response body: %v", errClose) + } + }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("failed to read response: %w", err) + } + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("API error (status %d): %s", resp.StatusCode, string(body)) + } + + return body, nil +} + +// GetUsageLimits retrieves usage information from the CodeWhisperer API. +// This method fetches the current usage statistics and subscription information. 
+// +// Parameters: +// - ctx: The context for the request +// - tokenData: The token data containing access token and profile ARN +// +// Returns: +// - *KiroUsageInfo: The usage information +// - error: An error if the request fails +func (k *KiroAuth) GetUsageLimits(ctx context.Context, tokenData *KiroTokenData) (*KiroUsageInfo, error) { + queryParams := map[string]string{ + "origin": "AI_EDITOR", + "profileArn": tokenData.ProfileArn, + "resourceType": "AGENTIC_REQUEST", + } + + body, err := k.makeRequest(ctx, pathGetUsageLimits, tokenData, queryParams) + if err != nil { + return nil, err + } + + var result struct { + SubscriptionInfo struct { + SubscriptionTitle string `json:"subscriptionTitle"` + } `json:"subscriptionInfo"` + UsageBreakdownList []struct { + CurrentUsageWithPrecision float64 `json:"currentUsageWithPrecision"` + UsageLimitWithPrecision float64 `json:"usageLimitWithPrecision"` + } `json:"usageBreakdownList"` + NextDateReset float64 `json:"nextDateReset"` + } + + if err := json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("failed to parse usage response: %w", err) + } + + usage := &KiroUsageInfo{ + SubscriptionTitle: result.SubscriptionInfo.SubscriptionTitle, + NextReset: fmt.Sprintf("%v", result.NextDateReset), + } + + if len(result.UsageBreakdownList) > 0 { + usage.CurrentUsage = result.UsageBreakdownList[0].CurrentUsageWithPrecision + usage.UsageLimit = result.UsageBreakdownList[0].UsageLimitWithPrecision + } + + return usage, nil +} + +// ListAvailableModels retrieves available models from the CodeWhisperer API. +// This method fetches the list of AI models available for the authenticated user. 
+// +// Parameters: +// - ctx: The context for the request +// - tokenData: The token data containing access token and profile ARN +// +// Returns: +// - []*KiroModel: The list of available models +// - error: An error if the request fails +func (k *KiroAuth) ListAvailableModels(ctx context.Context, tokenData *KiroTokenData) ([]*KiroModel, error) { + queryParams := map[string]string{ + "origin": "AI_EDITOR", + "profileArn": tokenData.ProfileArn, + } + + body, err := k.makeRequest(ctx, pathListAvailableModels, tokenData, queryParams) + if err != nil { + return nil, err + } + + var result struct { + Models []struct { + ModelID string `json:"modelId"` + ModelName string `json:"modelName"` + Description string `json:"description"` + RateMultiplier float64 `json:"rateMultiplier"` + RateUnit string `json:"rateUnit"` + TokenLimits *struct { + MaxInputTokens int `json:"maxInputTokens"` + } `json:"tokenLimits"` + } `json:"models"` + } + + if err := json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("failed to parse models response: %w", err) + } + + models := make([]*KiroModel, 0, len(result.Models)) + for _, m := range result.Models { + maxInputTokens := 0 + if m.TokenLimits != nil { + maxInputTokens = m.TokenLimits.MaxInputTokens + } + models = append(models, &KiroModel{ + ModelID: m.ModelID, + ModelName: m.ModelName, + Description: m.Description, + RateMultiplier: m.RateMultiplier, + RateUnit: m.RateUnit, + MaxInputTokens: maxInputTokens, + }) + } + + return models, nil +} + +// CreateTokenStorage creates a new KiroTokenStorage from token data. +// This method converts the token data into a storage structure suitable for persistence. 
+// +// Parameters: +// - tokenData: The token data to convert +// +// Returns: +// - *KiroTokenStorage: A new token storage instance +func (k *KiroAuth) CreateTokenStorage(tokenData *KiroTokenData) *KiroTokenStorage { + return &KiroTokenStorage{ + AccessToken: tokenData.AccessToken, + RefreshToken: tokenData.RefreshToken, + ProfileArn: tokenData.ProfileArn, + ExpiresAt: tokenData.ExpiresAt, + AuthMethod: tokenData.AuthMethod, + Provider: tokenData.Provider, + LastRefresh: time.Now().Format(time.RFC3339), + ClientID: tokenData.ClientID, + ClientSecret: tokenData.ClientSecret, + Region: tokenData.Region, + StartURL: tokenData.StartURL, + Email: tokenData.Email, + } +} + +// ValidateToken checks if the token is valid by making a test API call. +// This method verifies the token by attempting to fetch usage limits. +// +// Parameters: +// - ctx: The context for the request +// - tokenData: The token data to validate +// +// Returns: +// - error: An error if the token is invalid +func (k *KiroAuth) ValidateToken(ctx context.Context, tokenData *KiroTokenData) error { + _, err := k.GetUsageLimits(ctx, tokenData) + return err +} + +// UpdateTokenStorage updates an existing token storage with new token data. +// This method refreshes the token storage with newly obtained access and refresh tokens. 
+// +// Parameters: +// - storage: The existing token storage to update +// - tokenData: The new token data to apply +func (k *KiroAuth) UpdateTokenStorage(storage *KiroTokenStorage, tokenData *KiroTokenData) { + storage.AccessToken = tokenData.AccessToken + storage.RefreshToken = tokenData.RefreshToken + storage.ProfileArn = tokenData.ProfileArn + storage.ExpiresAt = tokenData.ExpiresAt + storage.AuthMethod = tokenData.AuthMethod + storage.Provider = tokenData.Provider + storage.LastRefresh = time.Now().Format(time.RFC3339) + if tokenData.ClientID != "" { + storage.ClientID = tokenData.ClientID + } + if tokenData.ClientSecret != "" { + storage.ClientSecret = tokenData.ClientSecret + } + if tokenData.Region != "" { + storage.Region = tokenData.Region + } + if tokenData.StartURL != "" { + storage.StartURL = tokenData.StartURL + } + if tokenData.Email != "" { + storage.Email = tokenData.Email + } +} diff --git a/internal/auth/kiro/aws_test.go b/internal/auth/kiro/aws_test.go new file mode 100644 index 0000000000..da20bc42d6 --- /dev/null +++ b/internal/auth/kiro/aws_test.go @@ -0,0 +1,750 @@ +package kiro + +import ( + "encoding/base64" + "encoding/json" + "strings" + "testing" +) + +func TestExtractEmailFromJWT(t *testing.T) { + tests := []struct { + name string + token string + expected string + }{ + { + name: "Empty token", + token: "", + expected: "", + }, + { + name: "Invalid token format", + token: "not.a.valid.jwt", + expected: "", + }, + { + name: "Invalid token - not base64", + token: "xxx.yyy.zzz", + expected: "", + }, + { + name: "Valid JWT with email", + token: createTestJWT(map[string]any{"email": "test@example.com", "sub": "user123"}), + expected: "test@example.com", + }, + { + name: "JWT without email but with preferred_username", + token: createTestJWT(map[string]any{"preferred_username": "user@domain.com", "sub": "user123"}), + expected: "user@domain.com", + }, + { + name: "JWT with email-like sub", + token: createTestJWT(map[string]any{"sub": 
"another@test.com"}), + expected: "another@test.com", + }, + { + name: "JWT without any email fields", + token: createTestJWT(map[string]any{"sub": "user123", "name": "Test User"}), + expected: "", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := ExtractEmailFromJWT(tt.token) + if result != tt.expected { + t.Errorf("ExtractEmailFromJWT() = %q, want %q", result, tt.expected) + } + }) + } +} + +func TestSanitizeEmailForFilename(t *testing.T) { + tests := []struct { + name string + email string + expected string + }{ + { + name: "Empty email", + email: "", + expected: "", + }, + { + name: "Simple email", + email: "user@example.com", + expected: "user@example.com", + }, + { + name: "Email with space", + email: "user name@example.com", + expected: "user_name@example.com", + }, + { + name: "Email with special chars", + email: "user:name@example.com", + expected: "user_name@example.com", + }, + { + name: "Email with multiple special chars", + email: "user/name:test@example.com", + expected: "user_name_test@example.com", + }, + { + name: "Path traversal attempt", + email: "../../../etc/passwd", + expected: "_.__.__._etc_passwd", + }, + { + name: "Path traversal with backslash", + email: `..\..\..\..\windows\system32`, + expected: "_.__.__.__._windows_system32", + }, + { + name: "Null byte injection attempt", + email: "user\x00@evil.com", + expected: "user_@evil.com", + }, + // URL-encoded path traversal tests + { + name: "URL-encoded slash", + email: "user%2Fpath@example.com", + expected: "user_path@example.com", + }, + { + name: "URL-encoded backslash", + email: "user%5Cpath@example.com", + expected: "user_path@example.com", + }, + { + name: "URL-encoded dot", + email: "%2E%2E%2Fetc%2Fpasswd", + expected: "___etc_passwd", + }, + { + name: "URL-encoded null", + email: "user%00@evil.com", + expected: "user_@evil.com", + }, + { + name: "Double URL-encoding attack", + email: "%252F%252E%252E", + expected: "_252F_252E_252E", // % 
replaced with _, remaining chars preserved (safe) + }, + { + name: "Mixed case URL-encoding", + email: "%2f%2F%5c%5C", + expected: "____", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := SanitizeEmailForFilename(tt.email) + if result != tt.expected { + t.Errorf("SanitizeEmailForFilename() = %q, want %q", result, tt.expected) + } + }) + } +} + +// createTestJWT creates a test JWT token with the given claims +func createTestJWT(claims map[string]any) string { + header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"RS256","typ":"JWT"}`)) + + payloadBytes, _ := json.Marshal(claims) + payload := base64.RawURLEncoding.EncodeToString(payloadBytes) + + signature := base64.RawURLEncoding.EncodeToString([]byte("fake-signature")) + + return header + "." + payload + "." + signature +} + +func TestExtractIDCIdentifier(t *testing.T) { + tests := []struct { + name string + startURL string + expected string + }{ + { + name: "Empty URL", + startURL: "", + expected: "", + }, + { + name: "Standard IDC URL with d- prefix", + startURL: "https://d-1234567890.awsapps.com/start", + expected: "d-1234567890", + }, + { + name: "IDC URL with company name", + startURL: "https://my-company.awsapps.com/start", + expected: "my-company", + }, + { + name: "IDC URL with simple name", + startURL: "https://acme-corp.awsapps.com/start", + expected: "acme-corp", + }, + { + name: "IDC URL without https", + startURL: "http://d-9876543210.awsapps.com/start", + expected: "d-9876543210", + }, + { + name: "IDC URL with subdomain only", + startURL: "https://test.awsapps.com/start", + expected: "test", + }, + { + name: "Builder ID URL", + startURL: "https://view.awsapps.com/start", + expected: "view", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := ExtractIDCIdentifier(tt.startURL) + if result != tt.expected { + t.Errorf("ExtractIDCIdentifier() = %q, want %q", result, tt.expected) + } + }) + } +} + +func 
TestGenerateTokenFileName(t *testing.T) { + tests := []struct { + name string + tokenData *KiroTokenData + exact string // exact match (for cases with email) + prefix string // prefix match (for cases without email, where sequence is appended) + }{ + { + name: "IDC with email", + tokenData: &KiroTokenData{ + AuthMethod: "idc", + Email: "user@example.com", + StartURL: "https://d-1234567890.awsapps.com/start", + }, + exact: "kiro-idc-user-example-com.json", + }, + { + name: "IDC without email but with startUrl", + tokenData: &KiroTokenData{ + AuthMethod: "idc", + Email: "", + StartURL: "https://d-1234567890.awsapps.com/start", + }, + prefix: "kiro-idc-d-1234567890-", + }, + { + name: "IDC with company name in startUrl", + tokenData: &KiroTokenData{ + AuthMethod: "idc", + Email: "", + StartURL: "https://my-company.awsapps.com/start", + }, + prefix: "kiro-idc-my-company-", + }, + { + name: "IDC without email and without startUrl", + tokenData: &KiroTokenData{ + AuthMethod: "idc", + Email: "", + StartURL: "", + }, + prefix: "kiro-idc-", + }, + { + name: "Builder ID with email", + tokenData: &KiroTokenData{ + AuthMethod: "builder-id", + Email: "user@gmail.com", + StartURL: "https://view.awsapps.com/start", + }, + exact: "kiro-builder-id-user-gmail-com.json", + }, + { + name: "Builder ID without email", + tokenData: &KiroTokenData{ + AuthMethod: "builder-id", + Email: "", + StartURL: "https://view.awsapps.com/start", + }, + prefix: "kiro-builder-id-", + }, + { + name: "Social auth with email", + tokenData: &KiroTokenData{ + AuthMethod: "google", + Email: "user@gmail.com", + }, + exact: "kiro-google-user-gmail-com.json", + }, + { + name: "Empty auth method", + tokenData: &KiroTokenData{ + AuthMethod: "", + Email: "", + }, + prefix: "kiro-unknown-", + }, + { + name: "Email with special characters", + tokenData: &KiroTokenData{ + AuthMethod: "idc", + Email: "user.name+tag@sub.example.com", + StartURL: "https://d-1234567890.awsapps.com/start", + }, + exact: 
"kiro-idc-user-name+tag-sub-example-com.json", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GenerateTokenFileName(tt.tokenData) + if tt.exact != "" { + if result != tt.exact { + t.Errorf("GenerateTokenFileName() = %q, want %q", result, tt.exact) + } + } else if tt.prefix != "" { + if !strings.HasPrefix(result, tt.prefix) || !strings.HasSuffix(result, ".json") { + t.Errorf("GenerateTokenFileName() = %q, want prefix %q with .json suffix", result, tt.prefix) + } + } + }) + } +} + +func TestParseProfileARN(t *testing.T) { + tests := []struct { + name string + arn string + expected *ProfileARN + }{ + { + name: "Empty ARN", + arn: "", + expected: nil, + }, + { + name: "Invalid format - too few parts", + arn: "arn:aws:codewhisperer", + expected: nil, + }, + { + name: "Invalid prefix - not arn", + arn: "notarn:aws:codewhisperer:us-east-1:123456789012:profile/ABC", + expected: nil, + }, + { + name: "Invalid service - not codewhisperer", + arn: "arn:aws:s3:us-east-1:123456789012:bucket/mybucket", + expected: nil, + }, + { + name: "Invalid region - no hyphen", + arn: "arn:aws:codewhisperer:useast1:123456789012:profile/ABC", + expected: nil, + }, + { + name: "Empty partition", + arn: "arn::codewhisperer:us-east-1:123456789012:profile/ABC", + expected: nil, + }, + { + name: "Empty region", + arn: "arn:aws:codewhisperer::123456789012:profile/ABC", + expected: nil, + }, + { + name: "Valid ARN - us-east-1", + arn: "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABCDEFGHIJKL", + expected: &ProfileARN{ + Raw: "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABCDEFGHIJKL", + Partition: "aws", + Service: "codewhisperer", + Region: "us-east-1", + AccountID: "123456789012", + ResourceType: "profile", + ResourceID: "ABCDEFGHIJKL", + }, + }, + { + name: "Valid ARN - ap-southeast-1", + arn: "arn:aws:codewhisperer:ap-southeast-1:987654321098:profile/ZYXWVUTSRQ", + expected: &ProfileARN{ + Raw: 
"arn:aws:codewhisperer:ap-southeast-1:987654321098:profile/ZYXWVUTSRQ", + Partition: "aws", + Service: "codewhisperer", + Region: "ap-southeast-1", + AccountID: "987654321098", + ResourceType: "profile", + ResourceID: "ZYXWVUTSRQ", + }, + }, + { + name: "Valid ARN - eu-west-1", + arn: "arn:aws:codewhisperer:eu-west-1:111222333444:profile/PROFILE123", + expected: &ProfileARN{ + Raw: "arn:aws:codewhisperer:eu-west-1:111222333444:profile/PROFILE123", + Partition: "aws", + Service: "codewhisperer", + Region: "eu-west-1", + AccountID: "111222333444", + ResourceType: "profile", + ResourceID: "PROFILE123", + }, + }, + { + name: "Valid ARN - aws-cn partition", + arn: "arn:aws-cn:codewhisperer:cn-north-1:123456789012:profile/CHINAID", + expected: &ProfileARN{ + Raw: "arn:aws-cn:codewhisperer:cn-north-1:123456789012:profile/CHINAID", + Partition: "aws-cn", + Service: "codewhisperer", + Region: "cn-north-1", + AccountID: "123456789012", + ResourceType: "profile", + ResourceID: "CHINAID", + }, + }, + { + name: "Valid ARN - resource without slash", + arn: "arn:aws:codewhisperer:us-west-2:123456789012:profile", + expected: &ProfileARN{ + Raw: "arn:aws:codewhisperer:us-west-2:123456789012:profile", + Partition: "aws", + Service: "codewhisperer", + Region: "us-west-2", + AccountID: "123456789012", + ResourceType: "profile", + ResourceID: "", + }, + }, + { + name: "Valid ARN - resource with colon", + arn: "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABC:extra", + expected: &ProfileARN{ + Raw: "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABC:extra", + Partition: "aws", + Service: "codewhisperer", + Region: "us-east-1", + AccountID: "123456789012", + ResourceType: "profile", + ResourceID: "ABC:extra", + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := ParseProfileARN(tt.arn) + if tt.expected == nil { + if result != nil { + t.Errorf("ParseProfileARN(%q) = %+v, want nil", tt.arn, result) + } + return + } + if result == nil { 
+ t.Errorf("ParseProfileARN(%q) = nil, want %+v", tt.arn, tt.expected) + return + } + if result.Raw != tt.expected.Raw { + t.Errorf("Raw = %q, want %q", result.Raw, tt.expected.Raw) + } + if result.Partition != tt.expected.Partition { + t.Errorf("Partition = %q, want %q", result.Partition, tt.expected.Partition) + } + if result.Service != tt.expected.Service { + t.Errorf("Service = %q, want %q", result.Service, tt.expected.Service) + } + if result.Region != tt.expected.Region { + t.Errorf("Region = %q, want %q", result.Region, tt.expected.Region) + } + if result.AccountID != tt.expected.AccountID { + t.Errorf("AccountID = %q, want %q", result.AccountID, tt.expected.AccountID) + } + if result.ResourceType != tt.expected.ResourceType { + t.Errorf("ResourceType = %q, want %q", result.ResourceType, tt.expected.ResourceType) + } + if result.ResourceID != tt.expected.ResourceID { + t.Errorf("ResourceID = %q, want %q", result.ResourceID, tt.expected.ResourceID) + } + }) + } +} + +func TestExtractRegionFromProfileArn(t *testing.T) { + tests := []struct { + name string + profileArn string + expected string + }{ + { + name: "Empty ARN", + profileArn: "", + expected: "", + }, + { + name: "Invalid ARN", + profileArn: "invalid-arn", + expected: "", + }, + { + name: "Valid ARN - us-east-1", + profileArn: "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABC", + expected: "us-east-1", + }, + { + name: "Valid ARN - ap-southeast-1", + profileArn: "arn:aws:codewhisperer:ap-southeast-1:123456789012:profile/ABC", + expected: "ap-southeast-1", + }, + { + name: "Valid ARN - eu-central-1", + profileArn: "arn:aws:codewhisperer:eu-central-1:123456789012:profile/ABC", + expected: "eu-central-1", + }, + { + name: "Non-codewhisperer ARN", + profileArn: "arn:aws:s3:us-east-1:123456789012:bucket/mybucket", + expected: "", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := ExtractRegionFromProfileArn(tt.profileArn) + if result != tt.expected { + 
t.Errorf("ExtractRegionFromProfileArn(%q) = %q, want %q", tt.profileArn, result, tt.expected) + } + }) + } +} + +func TestGetKiroAPIEndpoint(t *testing.T) { + tests := []struct { + name string + region string + expected string + }{ + { + name: "Empty region - defaults to us-east-1", + region: "", + expected: "https://q.us-east-1.amazonaws.com", + }, + { + name: "us-east-1", + region: "us-east-1", + expected: "https://q.us-east-1.amazonaws.com", + }, + { + name: "us-west-2", + region: "us-west-2", + expected: "https://q.us-west-2.amazonaws.com", + }, + { + name: "ap-southeast-1", + region: "ap-southeast-1", + expected: "https://q.ap-southeast-1.amazonaws.com", + }, + { + name: "eu-west-1", + region: "eu-west-1", + expected: "https://q.eu-west-1.amazonaws.com", + }, + { + name: "cn-north-1", + region: "cn-north-1", + expected: "https://q.cn-north-1.amazonaws.com", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GetKiroAPIEndpoint(tt.region) + if result != tt.expected { + t.Errorf("GetKiroAPIEndpoint(%q) = %q, want %q", tt.region, result, tt.expected) + } + }) + } +} + +func TestGetKiroAPIEndpointFromProfileArn(t *testing.T) { + tests := []struct { + name string + profileArn string + expected string + }{ + { + name: "Empty ARN - defaults to us-east-1", + profileArn: "", + expected: "https://q.us-east-1.amazonaws.com", + }, + { + name: "Invalid ARN - defaults to us-east-1", + profileArn: "invalid-arn", + expected: "https://q.us-east-1.amazonaws.com", + }, + { + name: "Valid ARN - us-east-1", + profileArn: "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABC", + expected: "https://q.us-east-1.amazonaws.com", + }, + { + name: "Valid ARN - ap-southeast-1", + profileArn: "arn:aws:codewhisperer:ap-southeast-1:123456789012:profile/ABC", + expected: "https://q.ap-southeast-1.amazonaws.com", + }, + { + name: "Valid ARN - eu-central-1", + profileArn: "arn:aws:codewhisperer:eu-central-1:123456789012:profile/ABC", + expected: 
"https://q.eu-central-1.amazonaws.com", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GetKiroAPIEndpointFromProfileArn(tt.profileArn) + if result != tt.expected { + t.Errorf("GetKiroAPIEndpointFromProfileArn(%q) = %q, want %q", tt.profileArn, result, tt.expected) + } + }) + } +} + +func TestGetCodeWhispererLegacyEndpoint(t *testing.T) { + tests := []struct { + name string + region string + expected string + }{ + { + name: "Empty region - defaults to us-east-1", + region: "", + expected: "https://codewhisperer.us-east-1.amazonaws.com", + }, + { + name: "us-east-1", + region: "us-east-1", + expected: "https://codewhisperer.us-east-1.amazonaws.com", + }, + { + name: "us-west-2", + region: "us-west-2", + expected: "https://codewhisperer.us-west-2.amazonaws.com", + }, + { + name: "ap-northeast-1", + region: "ap-northeast-1", + expected: "https://codewhisperer.ap-northeast-1.amazonaws.com", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GetCodeWhispererLegacyEndpoint(tt.region) + if result != tt.expected { + t.Errorf("GetCodeWhispererLegacyEndpoint(%q) = %q, want %q", tt.region, result, tt.expected) + } + }) + } +} + +func TestExtractRegionFromMetadata(t *testing.T) { + tests := []struct { + name string + metadata map[string]interface{} + expected string + }{ + { + name: "Nil metadata - defaults to us-east-1", + metadata: nil, + expected: "us-east-1", + }, + { + name: "Empty metadata - defaults to us-east-1", + metadata: map[string]interface{}{}, + expected: "us-east-1", + }, + { + name: "Priority 1: api_region override", + metadata: map[string]interface{}{ + "api_region": "eu-west-1", + "profile_arn": "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABC", + }, + expected: "eu-west-1", + }, + { + name: "Priority 2: profile_arn when api_region is empty", + metadata: map[string]interface{}{ + "api_region": "", + "profile_arn": 
"arn:aws:codewhisperer:ap-southeast-1:123456789012:profile/ABC", + }, + expected: "ap-southeast-1", + }, + { + name: "Priority 2: profile_arn when api_region is missing", + metadata: map[string]interface{}{ + "profile_arn": "arn:aws:codewhisperer:eu-central-1:123456789012:profile/ABC", + }, + expected: "eu-central-1", + }, + { + name: "Fallback: default when profile_arn is invalid", + metadata: map[string]interface{}{ + "profile_arn": "invalid-arn", + }, + expected: "us-east-1", + }, + { + name: "Fallback: default when profile_arn is empty", + metadata: map[string]interface{}{ + "profile_arn": "", + }, + expected: "us-east-1", + }, + { + name: "OIDC region is NOT used for API region", + metadata: map[string]interface{}{ + "region": "ap-northeast-2", // OIDC region - should be ignored + }, + expected: "us-east-1", + }, + { + name: "api_region takes precedence over OIDC region", + metadata: map[string]interface{}{ + "api_region": "us-west-2", + "region": "ap-northeast-2", // OIDC region - should be ignored + }, + expected: "us-west-2", + }, + { + name: "Non-string api_region is ignored", + metadata: map[string]interface{}{ + "api_region": 123, // wrong type + "profile_arn": "arn:aws:codewhisperer:ap-south-1:123456789012:profile/ABC", + }, + expected: "ap-south-1", + }, + { + name: "Non-string profile_arn is ignored", + metadata: map[string]interface{}{ + "profile_arn": 123, // wrong type + }, + expected: "us-east-1", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := ExtractRegionFromMetadata(tt.metadata) + if result != tt.expected { + t.Errorf("ExtractRegionFromMetadata(%v) = %q, want %q", tt.metadata, result, tt.expected) + } + }) + } +} diff --git a/internal/auth/kiro/background_refresh.go b/internal/auth/kiro/background_refresh.go new file mode 100644 index 0000000000..af3cef09d2 --- /dev/null +++ b/internal/auth/kiro/background_refresh.go @@ -0,0 +1,247 @@ +package kiro + +import ( + "context" + "log" + "strings" + "sync" + 
"time"
+
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"golang.org/x/sync/semaphore"
+)
+
+type Token struct {
+	ID           string
+	AccessToken  string
+	RefreshToken string
+	ExpiresAt    time.Time
+	LastVerified time.Time
+	ClientID     string
+	ClientSecret string
+	AuthMethod   string
+	Provider     string
+	StartURL     string
+	Region       string
+}
+
+type TokenRepository interface {
+	FindOldestUnverified(limit int) []*Token
+	UpdateToken(token *Token) error
+}
+
+type RefresherOption func(*BackgroundRefresher)
+
+func WithInterval(interval time.Duration) RefresherOption {
+	return func(r *BackgroundRefresher) {
+		r.interval = interval
+	}
+}
+
+func WithBatchSize(size int) RefresherOption {
+	return func(r *BackgroundRefresher) {
+		r.batchSize = size
+	}
+}
+
+func WithConcurrency(concurrency int) RefresherOption {
+	return func(r *BackgroundRefresher) {
+		r.concurrency = concurrency
+	}
+}
+
+type BackgroundRefresher struct {
+	interval         time.Duration
+	batchSize        int
+	concurrency      int
+	tokenRepo        TokenRepository
+	stopCh           chan struct{}
+	wg               sync.WaitGroup
+	oauth            *KiroOAuth
+	ssoClient        *SSOOIDCClient
+	callbackMu       sync.RWMutex // guards concurrent access to the refresh callback
+	onTokenRefreshed func(tokenID string, tokenData *KiroTokenData) // invoked after a token is successfully refreshed
+}
+
+func NewBackgroundRefresher(repo TokenRepository, opts ...RefresherOption) *BackgroundRefresher {
+	r := &BackgroundRefresher{
+		interval:    time.Minute,
+		batchSize:   50,
+		concurrency: 10,
+		tokenRepo:   repo,
+		stopCh:      make(chan struct{}),
+		oauth:       nil, // Lazy init - will be set when config available
+		ssoClient:   nil, // Lazy init - will be set when config available
+	}
+	for _, opt := range opts {
+		opt(r)
+	}
+	return r
+}
+
+// WithConfig sets the configuration for OAuth and SSO clients.
+func WithConfig(cfg *config.Config) RefresherOption { + return func(r *BackgroundRefresher) { + r.oauth = NewKiroOAuth(cfg) + r.ssoClient = NewSSOOIDCClient(cfg) + } +} + +// WithOnTokenRefreshed sets the callback function to be called when a token is successfully refreshed. +// The callback receives the token ID (filename) and the new token data. +// This allows external components (e.g., Watcher) to be notified of token updates. +func WithOnTokenRefreshed(callback func(tokenID string, tokenData *KiroTokenData)) RefresherOption { + return func(r *BackgroundRefresher) { + r.callbackMu.Lock() + r.onTokenRefreshed = callback + r.callbackMu.Unlock() + } +} + +func (r *BackgroundRefresher) Start(ctx context.Context) { + r.wg.Add(1) + go func() { + defer r.wg.Done() + ticker := time.NewTicker(r.interval) + defer ticker.Stop() + + r.refreshBatch(ctx) + + for { + select { + case <-ctx.Done(): + return + case <-r.stopCh: + return + case <-ticker.C: + r.refreshBatch(ctx) + } + } + }() +} + +func (r *BackgroundRefresher) Stop() { + close(r.stopCh) + r.wg.Wait() +} + +func (r *BackgroundRefresher) refreshBatch(ctx context.Context) { + tokens := r.tokenRepo.FindOldestUnverified(r.batchSize) + if len(tokens) == 0 { + return + } + + sem := semaphore.NewWeighted(int64(r.concurrency)) + var wg sync.WaitGroup + + for i, token := range tokens { + if i > 0 { + select { + case <-ctx.Done(): + return + case <-r.stopCh: + return + case <-time.After(100 * time.Millisecond): + } + } + + if err := sem.Acquire(ctx, 1); err != nil { + return + } + + wg.Add(1) + go func(t *Token) { + defer wg.Done() + defer sem.Release(1) + r.refreshSingle(ctx, t) + }(token) + } + + wg.Wait() +} + +func (r *BackgroundRefresher) refreshSingle(ctx context.Context, token *Token) { + // Normalize auth method to lowercase for case-insensitive matching + authMethod := strings.ToLower(token.AuthMethod) + + // Create refresh function based on auth method + refreshFunc := func(ctx context.Context) (*KiroTokenData, 
error) {
+		switch authMethod {
+		case "idc":
+			return r.ssoClient.RefreshTokenWithRegion(
+				ctx,
+				token.ClientID,
+				token.ClientSecret,
+				token.RefreshToken,
+				token.Region,
+				token.StartURL,
+			)
+		case "builder-id":
+			return r.ssoClient.RefreshToken(
+				ctx,
+				token.ClientID,
+				token.ClientSecret,
+				token.RefreshToken,
+			)
+		default:
+			return r.oauth.RefreshTokenWithFingerprint(ctx, token.RefreshToken, token.ID)
+		}
+	}
+
+	// Use graceful degradation for better reliability
+	result := RefreshWithGracefulDegradation(
+		ctx,
+		refreshFunc,
+		token.AccessToken,
+		token.ExpiresAt,
+	)
+
+	if result.Error != nil {
+		log.Printf("failed to refresh token %s: %v", token.ID, result.Error)
+		return
+	}
+
+	newTokenData := result.TokenData
+	if result.UsedFallback {
+		log.Printf("token %s: using existing token as fallback (refresh failed but token still valid)", token.ID)
+		// Don't update the token file if we're using fallback
+		// Just update LastVerified to prevent immediate re-check
+		token.LastVerified = time.Now()
+		return
+	}
+
+	token.AccessToken = newTokenData.AccessToken
+	if newTokenData.RefreshToken != "" {
+		token.RefreshToken = newTokenData.RefreshToken
+	}
+	token.LastVerified = time.Now()
+
+	if newTokenData.ExpiresAt != "" {
+		if expTime, parseErr := time.Parse(time.RFC3339, newTokenData.ExpiresAt); parseErr == nil {
+			token.ExpiresAt = expTime
+		}
+	}
+
+	if err := r.tokenRepo.UpdateToken(token); err != nil {
+		log.Printf("failed to update token %s: %v", token.ID, err)
+		return
+	}
+
+	// Option A: after a successful refresh, fire the callback so the Watcher can update the in-memory Auth object.
+	r.callbackMu.RLock()
+	callback := r.onTokenRefreshed
+	r.callbackMu.RUnlock()
+
+	if callback != nil {
+		// Isolate callback panics with defer/recover so a misbehaving callback cannot crash the whole process.
+		func() {
+			defer func() {
+				if rec := recover(); rec != nil {
+					log.Printf("background refresh: callback panic for token %s: %v", token.ID, rec)
+				}
+			}()
+			log.Printf("background refresh: notifying token refresh callback for %s", token.ID)
+			callback(token.ID, newTokenData)
+		}()
+	}
+} diff --git a/internal/auth/kiro/codewhisperer_client.go b/internal/auth/kiro/codewhisperer_client.go new file mode 100644 index 0000000000..767efe1a45 --- /dev/null +++ b/internal/auth/kiro/codewhisperer_client.go @@ -0,0 +1,153 @@ +// Package kiro provides CodeWhisperer API client for fetching user info. +package kiro + +import ( + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +// CodeWhispererClient handles CodeWhisperer API calls. +type CodeWhispererClient struct { + httpClient *http.Client +} + +// UsageLimitsResponse represents the getUsageLimits API response. +type UsageLimitsResponse struct { + DaysUntilReset *int `json:"daysUntilReset,omitempty"` + NextDateReset *float64 `json:"nextDateReset,omitempty"` + UserInfo *UserInfo `json:"userInfo,omitempty"` + SubscriptionInfo *SubscriptionInfo `json:"subscriptionInfo,omitempty"` + UsageBreakdownList []UsageBreakdown `json:"usageBreakdownList,omitempty"` +} + +// UserInfo contains user information from the API. +type UserInfo struct { + Email string `json:"email,omitempty"` + UserID string `json:"userId,omitempty"` +} + +// SubscriptionInfo contains subscription details. +type SubscriptionInfo struct { + SubscriptionTitle string `json:"subscriptionTitle,omitempty"` + Type string `json:"type,omitempty"` +} + +// UsageBreakdown contains usage details. 
+type UsageBreakdown struct { + UsageLimit *int `json:"usageLimit,omitempty"` + CurrentUsage *int `json:"currentUsage,omitempty"` + UsageLimitWithPrecision *float64 `json:"usageLimitWithPrecision,omitempty"` + CurrentUsageWithPrecision *float64 `json:"currentUsageWithPrecision,omitempty"` + NextDateReset *float64 `json:"nextDateReset,omitempty"` + DisplayName string `json:"displayName,omitempty"` + ResourceType string `json:"resourceType,omitempty"` +} + +// NewCodeWhispererClient creates a new CodeWhisperer client. +func NewCodeWhispererClient(cfg *config.Config, machineID string) *CodeWhispererClient { + client := &http.Client{Timeout: 30 * time.Second} + if cfg != nil { + client = util.SetProxy(&cfg.SDKConfig, client) + } + return &CodeWhispererClient{ + httpClient: client, + } +} + +// GetUsageLimits fetches usage limits and user info from CodeWhisperer API. +// This is the recommended way to get user email after login. +func (c *CodeWhispererClient) GetUsageLimits(ctx context.Context, accessToken, clientID, refreshToken, profileArn string) (*UsageLimitsResponse, error) { + queryParams := map[string]string{ + "origin": "AI_EDITOR", + "resourceType": "AGENTIC_REQUEST", + } + // Determine endpoint based on profileArn region + endpoint := GetKiroAPIEndpointFromProfileArn(profileArn) + if profileArn != "" { + queryParams["profileArn"] = profileArn + } else { + queryParams["isEmailRequired"] = "true" + } + url := buildURL(endpoint, pathGetUsageLimits, queryParams) + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) + if err != nil { + return nil, fmt.Errorf("failed to create request: %w", err) + } + + accountKey := GetAccountKey(clientID, refreshToken) + setRuntimeHeaders(req, accessToken, accountKey) + + log.Debugf("codewhisperer: GET %s", url) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("request failed: %w", err) + } + defer resp.Body.Close() + + body, err := io.ReadAll(resp.Body) + if err != nil { + 
return nil, fmt.Errorf("failed to read response: %w", err) + } + + log.Debugf("codewhisperer: status=%d, body=%s", resp.StatusCode, string(body)) + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("API returned status %d: %s", resp.StatusCode, string(body)) + } + + var result UsageLimitsResponse + if err := json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("failed to parse response: %w", err) + } + + return &result, nil +} + +// FetchUserEmailFromAPI fetches user email using CodeWhisperer getUsageLimits API. +// This is more reliable than JWT parsing as it uses the official API. +func (c *CodeWhispererClient) FetchUserEmailFromAPI(ctx context.Context, accessToken, clientID, refreshToken string) string { + resp, err := c.GetUsageLimits(ctx, accessToken, clientID, refreshToken, "") + if err != nil { + log.Debugf("codewhisperer: failed to get usage limits: %v", err) + return "" + } + + if resp.UserInfo != nil && resp.UserInfo.Email != "" { + log.Debugf("codewhisperer: got email from API: %s", resp.UserInfo.Email) + return resp.UserInfo.Email + } + + log.Debugf("codewhisperer: no email in response") + return "" +} + +// FetchUserEmailWithFallback fetches user email with multiple fallback methods. +// Priority: 1. CodeWhisperer API 2. userinfo endpoint 3. 
JWT parsing +func FetchUserEmailWithFallback(ctx context.Context, cfg *config.Config, accessToken, clientID, refreshToken string) string { + // Method 1: Try CodeWhisperer API (most reliable) + cwClient := NewCodeWhispererClient(cfg, "") + email := cwClient.FetchUserEmailFromAPI(ctx, accessToken, clientID, refreshToken) + if email != "" { + return email + } + + // Method 2: Try SSO OIDC userinfo endpoint + ssoClient := NewSSOOIDCClient(cfg) + email = ssoClient.FetchUserEmail(ctx, accessToken) + if email != "" { + return email + } + + // Method 3: Fallback to JWT parsing + return ExtractEmailFromJWT(accessToken) +} diff --git a/internal/auth/kiro/cooldown.go b/internal/auth/kiro/cooldown.go new file mode 100644 index 0000000000..716135b688 --- /dev/null +++ b/internal/auth/kiro/cooldown.go @@ -0,0 +1,112 @@ +package kiro + +import ( + "sync" + "time" +) + +const ( + CooldownReason429 = "rate_limit_exceeded" + CooldownReasonSuspended = "account_suspended" + CooldownReasonQuotaExhausted = "quota_exhausted" + + DefaultShortCooldown = 1 * time.Minute + MaxShortCooldown = 5 * time.Minute + LongCooldown = 24 * time.Hour +) + +type CooldownManager struct { + mu sync.RWMutex + cooldowns map[string]time.Time + reasons map[string]string +} + +func NewCooldownManager() *CooldownManager { + return &CooldownManager{ + cooldowns: make(map[string]time.Time), + reasons: make(map[string]string), + } +} + +func (cm *CooldownManager) SetCooldown(tokenKey string, duration time.Duration, reason string) { + cm.mu.Lock() + defer cm.mu.Unlock() + cm.cooldowns[tokenKey] = time.Now().Add(duration) + cm.reasons[tokenKey] = reason +} + +func (cm *CooldownManager) IsInCooldown(tokenKey string) bool { + cm.mu.RLock() + defer cm.mu.RUnlock() + endTime, exists := cm.cooldowns[tokenKey] + if !exists { + return false + } + return time.Now().Before(endTime) +} + +func (cm *CooldownManager) GetRemainingCooldown(tokenKey string) time.Duration { + cm.mu.RLock() + defer cm.mu.RUnlock() + endTime, exists 
:= cm.cooldowns[tokenKey] + if !exists { + return 0 + } + remaining := time.Until(endTime) + if remaining < 0 { + return 0 + } + return remaining +} + +func (cm *CooldownManager) GetCooldownReason(tokenKey string) string { + cm.mu.RLock() + defer cm.mu.RUnlock() + return cm.reasons[tokenKey] +} + +func (cm *CooldownManager) ClearCooldown(tokenKey string) { + cm.mu.Lock() + defer cm.mu.Unlock() + delete(cm.cooldowns, tokenKey) + delete(cm.reasons, tokenKey) +} + +func (cm *CooldownManager) CleanupExpired() { + cm.mu.Lock() + defer cm.mu.Unlock() + now := time.Now() + for tokenKey, endTime := range cm.cooldowns { + if now.After(endTime) { + delete(cm.cooldowns, tokenKey) + delete(cm.reasons, tokenKey) + } + } +} + +func (cm *CooldownManager) StartCleanupRoutine(interval time.Duration, stopCh <-chan struct{}) { + ticker := time.NewTicker(interval) + defer ticker.Stop() + for { + select { + case <-ticker.C: + cm.CleanupExpired() + case <-stopCh: + return + } + } +} + +func CalculateCooldownFor429(retryCount int) time.Duration { + duration := DefaultShortCooldown * time.Duration(1<<retryCount) + if duration > MaxShortCooldown { + return MaxShortCooldown + } + return duration +} + +func CalculateCooldownUntilNextDay() time.Duration { + now := time.Now() + nextDay := time.Date(now.Year(), now.Month(), now.Day()+1, 0, 0, 0, 0, now.Location()) + return time.Until(nextDay) +} diff --git a/internal/auth/kiro/cooldown_test.go b/internal/auth/kiro/cooldown_test.go new file mode 100644 index 0000000000..e0b35df4fc --- /dev/null +++ b/internal/auth/kiro/cooldown_test.go @@ -0,0 +1,240 @@ +package kiro + +import ( + "sync" + "testing" + "time" +) + +func TestNewCooldownManager(t *testing.T) { + cm := NewCooldownManager() + if cm == nil { + t.Fatal("expected non-nil CooldownManager") + } + if cm.cooldowns == nil { + t.Error("expected non-nil cooldowns map") + } + if cm.reasons == nil { + t.Error("expected non-nil reasons map") + } +} + +func TestSetCooldown(t *testing.T) { + cm := NewCooldownManager() + 
cm.SetCooldown("token1", 1*time.Minute, CooldownReason429) + + if !cm.IsInCooldown("token1") { + t.Error("expected token to be in cooldown") + } + if cm.GetCooldownReason("token1") != CooldownReason429 { + t.Errorf("expected reason %s, got %s", CooldownReason429, cm.GetCooldownReason("token1")) + } +} + +func TestIsInCooldown_NotSet(t *testing.T) { + cm := NewCooldownManager() + if cm.IsInCooldown("nonexistent") { + t.Error("expected non-existent token to not be in cooldown") + } +} + +func TestIsInCooldown_Expired(t *testing.T) { + cm := NewCooldownManager() + cm.SetCooldown("token1", 1*time.Millisecond, CooldownReason429) + + time.Sleep(10 * time.Millisecond) + + if cm.IsInCooldown("token1") { + t.Error("expected expired cooldown to return false") + } +} + +func TestGetRemainingCooldown(t *testing.T) { + cm := NewCooldownManager() + cm.SetCooldown("token1", 1*time.Second, CooldownReason429) + + remaining := cm.GetRemainingCooldown("token1") + if remaining <= 0 || remaining > 1*time.Second { + t.Errorf("expected remaining cooldown between 0 and 1s, got %v", remaining) + } +} + +func TestGetRemainingCooldown_NotSet(t *testing.T) { + cm := NewCooldownManager() + remaining := cm.GetRemainingCooldown("nonexistent") + if remaining != 0 { + t.Errorf("expected 0 remaining for non-existent, got %v", remaining) + } +} + +func TestGetRemainingCooldown_Expired(t *testing.T) { + cm := NewCooldownManager() + cm.SetCooldown("token1", 1*time.Millisecond, CooldownReason429) + + time.Sleep(10 * time.Millisecond) + + remaining := cm.GetRemainingCooldown("token1") + if remaining != 0 { + t.Errorf("expected 0 remaining for expired, got %v", remaining) + } +} + +func TestGetCooldownReason(t *testing.T) { + cm := NewCooldownManager() + cm.SetCooldown("token1", 1*time.Minute, CooldownReasonSuspended) + + reason := cm.GetCooldownReason("token1") + if reason != CooldownReasonSuspended { + t.Errorf("expected reason %s, got %s", CooldownReasonSuspended, reason) + } +} + +func 
TestGetCooldownReason_NotSet(t *testing.T) { + cm := NewCooldownManager() + reason := cm.GetCooldownReason("nonexistent") + if reason != "" { + t.Errorf("expected empty reason for non-existent, got %s", reason) + } +} + +func TestClearCooldown(t *testing.T) { + cm := NewCooldownManager() + cm.SetCooldown("token1", 1*time.Minute, CooldownReason429) + cm.ClearCooldown("token1") + + if cm.IsInCooldown("token1") { + t.Error("expected cooldown to be cleared") + } + if cm.GetCooldownReason("token1") != "" { + t.Error("expected reason to be cleared") + } +} + +func TestClearCooldown_NonExistent(t *testing.T) { + cm := NewCooldownManager() + cm.ClearCooldown("nonexistent") +} + +func TestCleanupExpired(t *testing.T) { + cm := NewCooldownManager() + cm.SetCooldown("expired1", 1*time.Millisecond, CooldownReason429) + cm.SetCooldown("expired2", 1*time.Millisecond, CooldownReason429) + cm.SetCooldown("active", 1*time.Hour, CooldownReason429) + + time.Sleep(10 * time.Millisecond) + cm.CleanupExpired() + + if cm.GetCooldownReason("expired1") != "" { + t.Error("expected expired1 to be cleaned up") + } + if cm.GetCooldownReason("expired2") != "" { + t.Error("expected expired2 to be cleaned up") + } + if cm.GetCooldownReason("active") != CooldownReason429 { + t.Error("expected active to remain") + } +} + +func TestCalculateCooldownFor429_FirstRetry(t *testing.T) { + duration := CalculateCooldownFor429(0) + if duration != DefaultShortCooldown { + t.Errorf("expected %v for retry 0, got %v", DefaultShortCooldown, duration) + } +} + +func TestCalculateCooldownFor429_Exponential(t *testing.T) { + d1 := CalculateCooldownFor429(1) + d2 := CalculateCooldownFor429(2) + + if d2 <= d1 { + t.Errorf("expected d2 > d1, got d1=%v, d2=%v", d1, d2) + } +} + +func TestCalculateCooldownFor429_MaxCap(t *testing.T) { + duration := CalculateCooldownFor429(10) + if duration > MaxShortCooldown { + t.Errorf("expected max %v, got %v", MaxShortCooldown, duration) + } +} + +func 
TestCalculateCooldownUntilNextDay(t *testing.T) { + duration := CalculateCooldownUntilNextDay() + if duration <= 0 || duration > 24*time.Hour { + t.Errorf("expected duration between 0 and 24h, got %v", duration) + } +} + +func TestCooldownManager_ConcurrentAccess(t *testing.T) { + cm := NewCooldownManager() + const numGoroutines = 50 + const numOperations = 100 + + var wg sync.WaitGroup + wg.Add(numGoroutines) + + for i := 0; i < numGoroutines; i++ { + go func(id int) { + defer wg.Done() + tokenKey := "token" + string(rune('a'+id%10)) + for j := 0; j < numOperations; j++ { + switch j % 6 { + case 0: + cm.SetCooldown(tokenKey, time.Duration(j)*time.Millisecond, CooldownReason429) + case 1: + cm.IsInCooldown(tokenKey) + case 2: + cm.GetRemainingCooldown(tokenKey) + case 3: + cm.GetCooldownReason(tokenKey) + case 4: + cm.ClearCooldown(tokenKey) + case 5: + cm.CleanupExpired() + } + } + }(i) + } + + wg.Wait() +} + +func TestCooldownReasonConstants(t *testing.T) { + if CooldownReason429 != "rate_limit_exceeded" { + t.Errorf("unexpected CooldownReason429: %s", CooldownReason429) + } + if CooldownReasonSuspended != "account_suspended" { + t.Errorf("unexpected CooldownReasonSuspended: %s", CooldownReasonSuspended) + } + if CooldownReasonQuotaExhausted != "quota_exhausted" { + t.Errorf("unexpected CooldownReasonQuotaExhausted: %s", CooldownReasonQuotaExhausted) + } +} + +func TestDefaultConstants(t *testing.T) { + if DefaultShortCooldown != 1*time.Minute { + t.Errorf("unexpected DefaultShortCooldown: %v", DefaultShortCooldown) + } + if MaxShortCooldown != 5*time.Minute { + t.Errorf("unexpected MaxShortCooldown: %v", MaxShortCooldown) + } + if LongCooldown != 24*time.Hour { + t.Errorf("unexpected LongCooldown: %v", LongCooldown) + } +} + +func TestSetCooldown_OverwritesPrevious(t *testing.T) { + cm := NewCooldownManager() + cm.SetCooldown("token1", 1*time.Hour, CooldownReason429) + cm.SetCooldown("token1", 1*time.Minute, CooldownReasonSuspended) + + reason := 
cm.GetCooldownReason("token1") + if reason != CooldownReasonSuspended { + t.Errorf("expected reason to be overwritten to %s, got %s", CooldownReasonSuspended, reason) + } + + remaining := cm.GetRemainingCooldown("token1") + if remaining > 1*time.Minute { + t.Errorf("expected remaining <= 1 minute, got %v", remaining) + } +} diff --git a/internal/auth/kiro/fingerprint.go b/internal/auth/kiro/fingerprint.go new file mode 100644 index 0000000000..97bcdb86cf --- /dev/null +++ b/internal/auth/kiro/fingerprint.go @@ -0,0 +1,278 @@ +package kiro + +import ( + "crypto/sha256" + "encoding/binary" + "encoding/hex" + "fmt" + "math/rand" + "net/http" + "runtime" + "slices" + "sync" + "time" + + "github.com/google/uuid" +) + +// Fingerprint holds multi-dimensional fingerprint data for runtime request disguise. +type Fingerprint struct { + OIDCSDKVersion string // 3.7xx (AWS SDK JS) + RuntimeSDKVersion string // 1.0.x (runtime API) + StreamingSDKVersion string // 1.0.x (streaming API) + OSType string // darwin/windows/linux + OSVersion string + NodeVersion string + KiroVersion string + KiroHash string // SHA256 +} + +// FingerprintConfig holds external fingerprint overrides. +type FingerprintConfig struct { + OIDCSDKVersion string + RuntimeSDKVersion string + StreamingSDKVersion string + OSType string + OSVersion string + NodeVersion string + KiroVersion string + KiroHash string +} + +// FingerprintManager manages per-account fingerprint generation and caching. 
+type FingerprintManager struct { + mu sync.RWMutex + fingerprints map[string]*Fingerprint // tokenKey -> fingerprint + rng *rand.Rand + config *FingerprintConfig // External config (Optional) +} + +var ( + // SDK versions + oidcSDKVersions = []string{ + "3.980.0", "3.975.0", "3.972.0", "3.808.0", + "3.738.0", "3.737.0", "3.736.0", "3.735.0", + } + // SDKVersions for getUsageLimits/ListAvailableModels/GetProfile (runtime API) + runtimeSDKVersions = []string{"1.0.0"} + // SDKVersions for generateAssistantResponse (streaming API) + streamingSDKVersions = []string{"1.0.27"} + // Valid OS types + osTypes = []string{"darwin", "windows", "linux"} + // OS versions + osVersions = map[string][]string{ + "darwin": {"25.2.0", "25.1.0", "25.0.0", "24.5.0", "24.4.0", "24.3.0"}, + "windows": {"10.0.26200", "10.0.26100", "10.0.22631", "10.0.22621", "10.0.19045"}, + "linux": {"6.12.0", "6.11.0", "6.8.0", "6.6.0", "6.5.0", "6.1.0"}, + } + // Node versions + nodeVersions = []string{ + "22.21.1", "22.21.0", "22.20.0", "22.19.0", "22.18.0", + "20.18.0", "20.17.0", "20.16.0", + } + // Kiro IDE versions + kiroVersions = []string{ + "0.10.32", "0.10.16", "0.10.10", + "0.9.47", "0.9.40", "0.9.2", + "0.8.206", "0.8.140", "0.8.135", "0.8.86", + } + // Global singleton + globalFingerprintManager *FingerprintManager + globalFingerprintManagerOnce sync.Once +) + +func GlobalFingerprintManager() *FingerprintManager { + globalFingerprintManagerOnce.Do(func() { + globalFingerprintManager = NewFingerprintManager() + }) + return globalFingerprintManager +} + +func SetGlobalFingerprintConfig(cfg *FingerprintConfig) { + GlobalFingerprintManager().SetConfig(cfg) +} + +// SetConfig applies the config and clears the fingerprint cache. 
+func (fm *FingerprintManager) SetConfig(cfg *FingerprintConfig) { + fm.mu.Lock() + defer fm.mu.Unlock() + fm.config = cfg + // Clear cached fingerprints so they regenerate with the new config + fm.fingerprints = make(map[string]*Fingerprint) +} + +func NewFingerprintManager() *FingerprintManager { + return &FingerprintManager{ + fingerprints: make(map[string]*Fingerprint), + rng: rand.New(rand.NewSource(time.Now().UnixNano())), + } +} + +// GetFingerprint returns the fingerprint for tokenKey, creating one if it doesn't exist. +func (fm *FingerprintManager) GetFingerprint(tokenKey string) *Fingerprint { + fm.mu.RLock() + if fp, exists := fm.fingerprints[tokenKey]; exists { + fm.mu.RUnlock() + return fp + } + fm.mu.RUnlock() + + fm.mu.Lock() + defer fm.mu.Unlock() + + if fp, exists := fm.fingerprints[tokenKey]; exists { + return fp + } + + fp := fm.generateFingerprint(tokenKey) + fm.fingerprints[tokenKey] = fp + return fp +} + +func (fm *FingerprintManager) generateFingerprint(tokenKey string) *Fingerprint { + if fm.config != nil { + return fm.generateFromConfig(tokenKey) + } + return fm.generateRandom(tokenKey) +} + +// generateFromConfig uses config values, falling back to random for empty fields. 
+func (fm *FingerprintManager) generateFromConfig(tokenKey string) *Fingerprint { + cfg := fm.config + + // Helper: config value or random selection + configOrRandom := func(configVal string, choices []string) string { + if configVal != "" { + return configVal + } + return choices[fm.rng.Intn(len(choices))] + } + + osType := cfg.OSType + if osType == "" { + osType = runtime.GOOS + if !slices.Contains(osTypes, osType) { + osType = osTypes[fm.rng.Intn(len(osTypes))] + } + } + + osVersion := cfg.OSVersion + if osVersion == "" { + if versions, ok := osVersions[osType]; ok { + osVersion = versions[fm.rng.Intn(len(versions))] + } + } + + kiroHash := cfg.KiroHash + if kiroHash == "" { + hash := sha256.Sum256([]byte(tokenKey)) + kiroHash = hex.EncodeToString(hash[:]) + } + + return &Fingerprint{ + OIDCSDKVersion: configOrRandom(cfg.OIDCSDKVersion, oidcSDKVersions), + RuntimeSDKVersion: configOrRandom(cfg.RuntimeSDKVersion, runtimeSDKVersions), + StreamingSDKVersion: configOrRandom(cfg.StreamingSDKVersion, streamingSDKVersions), + OSType: osType, + OSVersion: osVersion, + NodeVersion: configOrRandom(cfg.NodeVersion, nodeVersions), + KiroVersion: configOrRandom(cfg.KiroVersion, kiroVersions), + KiroHash: kiroHash, + } +} + +// generateRandom generates a deterministic fingerprint seeded by accountKey hash. 
+func (fm *FingerprintManager) generateRandom(accountKey string) *Fingerprint { + // Use accountKey hash as seed for deterministic random selection + hash := sha256.Sum256([]byte(accountKey)) + seed := int64(binary.BigEndian.Uint64(hash[:8])) + rng := rand.New(rand.NewSource(seed)) + + osType := runtime.GOOS + if !slices.Contains(osTypes, osType) { + osType = osTypes[rng.Intn(len(osTypes))] + } + osVersion := osVersions[osType][rng.Intn(len(osVersions[osType]))] + + return &Fingerprint{ + OIDCSDKVersion: oidcSDKVersions[rng.Intn(len(oidcSDKVersions))], + RuntimeSDKVersion: runtimeSDKVersions[rng.Intn(len(runtimeSDKVersions))], + StreamingSDKVersion: streamingSDKVersions[rng.Intn(len(streamingSDKVersions))], + OSType: osType, + OSVersion: osVersion, + NodeVersion: nodeVersions[rng.Intn(len(nodeVersions))], + KiroVersion: kiroVersions[rng.Intn(len(kiroVersions))], + KiroHash: hex.EncodeToString(hash[:]), + } +} + +// GenerateAccountKey returns a 16-char hex key derived from SHA256(seed). +func GenerateAccountKey(seed string) string { + hash := sha256.Sum256([]byte(seed)) + return hex.EncodeToString(hash[:8]) +} + +// GetAccountKey derives an account key from clientID > refreshToken > random UUID. +func GetAccountKey(clientID, refreshToken string) string { + // 1. Prefer ClientID + if clientID != "" { + return GenerateAccountKey(clientID) + } + + // 2. Fallback to RefreshToken + if refreshToken != "" { + return GenerateAccountKey(refreshToken) + } + + // 3. 
Random fallback + return GenerateAccountKey(uuid.New().String()) +} + +// BuildUserAgent format: aws-sdk-js/{SDKVersion} ua/2.1 os/{OSType}#{OSVersion} lang/js md/nodejs#{NodeVersion} api/codewhispererstreaming#{SDKVersion} m/E KiroIDE-{KiroVersion}-{KiroHash} +func (fp *Fingerprint) BuildUserAgent() string { + return fmt.Sprintf( + "aws-sdk-js/%s ua/2.1 os/%s#%s lang/js md/nodejs#%s api/codewhispererstreaming#%s m/E KiroIDE-%s-%s", + fp.StreamingSDKVersion, + fp.OSType, + fp.OSVersion, + fp.NodeVersion, + fp.StreamingSDKVersion, + fp.KiroVersion, + fp.KiroHash, + ) +} + +// BuildAmzUserAgent format: aws-sdk-js/{SDKVersion} KiroIDE-{KiroVersion}-{KiroHash} +func (fp *Fingerprint) BuildAmzUserAgent() string { + return fmt.Sprintf( + "aws-sdk-js/%s KiroIDE-%s-%s", + fp.StreamingSDKVersion, + fp.KiroVersion, + fp.KiroHash, + ) +} + +func SetOIDCHeaders(req *http.Request) { + fp := GlobalFingerprintManager().GetFingerprint("oidc-session") + req.Header.Set("Content-Type", "application/json") + req.Header.Set("x-amz-user-agent", fmt.Sprintf("aws-sdk-js/%s KiroIDE", fp.OIDCSDKVersion)) + req.Header.Set("User-Agent", fmt.Sprintf( + "aws-sdk-js/%s ua/2.1 os/%s#%s lang/js md/nodejs#%s api/%s#%s m/E KiroIDE", + fp.OIDCSDKVersion, fp.OSType, fp.OSVersion, fp.NodeVersion, "sso-oidc", fp.OIDCSDKVersion)) + req.Header.Set("amz-sdk-invocation-id", uuid.New().String()) + req.Header.Set("amz-sdk-request", "attempt=1; max=4") +} + +func setRuntimeHeaders(req *http.Request, accessToken string, accountKey string) { + fp := GlobalFingerprintManager().GetFingerprint(accountKey) + machineID := fp.KiroHash + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("x-amz-user-agent", fmt.Sprintf("aws-sdk-js/%s KiroIDE-%s-%s", + fp.RuntimeSDKVersion, fp.KiroVersion, machineID)) + req.Header.Set("User-Agent", fmt.Sprintf( + "aws-sdk-js/%s ua/2.1 os/%s#%s lang/js md/nodejs#%s api/codewhispererruntime#%s m/N,E KiroIDE-%s-%s", + fp.RuntimeSDKVersion, fp.OSType, fp.OSVersion, 
fp.NodeVersion, fp.RuntimeSDKVersion, + fp.KiroVersion, machineID)) + req.Header.Set("amz-sdk-invocation-id", uuid.New().String()) + req.Header.Set("amz-sdk-request", "attempt=1; max=1") +} diff --git a/internal/auth/kiro/fingerprint_test.go b/internal/auth/kiro/fingerprint_test.go new file mode 100644 index 0000000000..0ac1b36e0d --- /dev/null +++ b/internal/auth/kiro/fingerprint_test.go @@ -0,0 +1,778 @@ +package kiro + +import ( + "net/http" + "runtime" + "strings" + "sync" + "testing" +) + +func TestNewFingerprintManager(t *testing.T) { + fm := NewFingerprintManager() + if fm == nil { + t.Fatal("expected non-nil FingerprintManager") + } + if fm.fingerprints == nil { + t.Error("expected non-nil fingerprints map") + } + if fm.rng == nil { + t.Error("expected non-nil rng") + } +} + +func TestGetFingerprint_NewToken(t *testing.T) { + fm := NewFingerprintManager() + fp := fm.GetFingerprint("token1") + + if fp == nil { + t.Fatal("expected non-nil Fingerprint") + } + if fp.OIDCSDKVersion == "" { + t.Error("expected non-empty OIDCSDKVersion") + } + if fp.RuntimeSDKVersion == "" { + t.Error("expected non-empty RuntimeSDKVersion") + } + if fp.StreamingSDKVersion == "" { + t.Error("expected non-empty StreamingSDKVersion") + } + if fp.OSType == "" { + t.Error("expected non-empty OSType") + } + if fp.OSVersion == "" { + t.Error("expected non-empty OSVersion") + } + if fp.NodeVersion == "" { + t.Error("expected non-empty NodeVersion") + } + if fp.KiroVersion == "" { + t.Error("expected non-empty KiroVersion") + } + if fp.KiroHash == "" { + t.Error("expected non-empty KiroHash") + } +} + +func TestGetFingerprint_SameTokenReturnsSameFingerprint(t *testing.T) { + fm := NewFingerprintManager() + fp1 := fm.GetFingerprint("token1") + fp2 := fm.GetFingerprint("token1") + + if fp1 != fp2 { + t.Error("expected same fingerprint for same token") + } +} + +func TestGetFingerprint_DifferentTokens(t *testing.T) { + fm := NewFingerprintManager() + fp1 := fm.GetFingerprint("token1") + fp2 
:= fm.GetFingerprint("token2") + + if fp1 == fp2 { + t.Error("expected different fingerprints for different tokens") + } +} + +func TestBuildUserAgent(t *testing.T) { + fm := NewFingerprintManager() + fp := fm.GetFingerprint("token1") + + ua := fp.BuildUserAgent() + if ua == "" { + t.Error("expected non-empty User-Agent") + } + + amzUA := fp.BuildAmzUserAgent() + if amzUA == "" { + t.Error("expected non-empty X-Amz-User-Agent") + } +} + +func TestGetFingerprint_OSVersionMatchesOSType(t *testing.T) { + fm := NewFingerprintManager() + + for i := 0; i < 20; i++ { + fp := fm.GetFingerprint("token" + string(rune('a'+i))) + validVersions := osVersions[fp.OSType] + found := false + for _, v := range validVersions { + if v == fp.OSVersion { + found = true + break + } + } + if !found { + t.Errorf("OS version %s not valid for OS type %s", fp.OSVersion, fp.OSType) + } + } +} + +func TestGenerateFromConfig_OSTypeFromRuntimeGOOS(t *testing.T) { + fm := NewFingerprintManager() + + // Set config with empty OSType to trigger runtime.GOOS fallback + fm.SetConfig(&FingerprintConfig{ + OIDCSDKVersion: "3.738.0", // Set other fields to use config path + }) + + fp := fm.GetFingerprint("test-token") + + // Expected OS type based on runtime.GOOS mapping + var expectedOS string + switch runtime.GOOS { + case "darwin": + expectedOS = "darwin" + case "windows": + expectedOS = "windows" + default: + expectedOS = "linux" + } + + if fp.OSType != expectedOS { + t.Errorf("expected OSType '%s' from runtime.GOOS '%s', got '%s'", + expectedOS, runtime.GOOS, fp.OSType) + } +} + +func TestFingerprintManager_ConcurrentAccess(t *testing.T) { + fm := NewFingerprintManager() + const numGoroutines = 100 + const numOperations = 100 + + var wg sync.WaitGroup + wg.Add(numGoroutines) + + for i := range numGoroutines { + go func(id int) { + defer wg.Done() + for j := range numOperations { + tokenKey := "token" + string(rune('a'+id%26)) + switch j % 2 { + case 0: + fm.GetFingerprint(tokenKey) + case 1: + fp := 
fm.GetFingerprint(tokenKey) + _ = fp.BuildUserAgent() + _ = fp.BuildAmzUserAgent() + } + } + }(i) + } + + wg.Wait() +} + +func TestKiroHashStability(t *testing.T) { + fm := NewFingerprintManager() + + // Same token should always return same hash + fp1 := fm.GetFingerprint("token1") + fp2 := fm.GetFingerprint("token1") + if fp1.KiroHash != fp2.KiroHash { + t.Errorf("same token should have same hash: %s vs %s", fp1.KiroHash, fp2.KiroHash) + } + + // Different tokens should have different hashes + fp3 := fm.GetFingerprint("token2") + if fp1.KiroHash == fp3.KiroHash { + t.Errorf("different tokens should have different hashes") + } +} + +func TestKiroHashFormat(t *testing.T) { + fm := NewFingerprintManager() + fp := fm.GetFingerprint("token1") + + if len(fp.KiroHash) != 64 { + t.Errorf("expected KiroHash length 64 (SHA256 hex), got %d", len(fp.KiroHash)) + } + + for _, c := range fp.KiroHash { + if (c < '0' || c > '9') && (c < 'a' || c > 'f') { + t.Errorf("invalid hex character in KiroHash: %c", c) + } + } +} + +func TestGlobalFingerprintManager(t *testing.T) { + fm1 := GlobalFingerprintManager() + fm2 := GlobalFingerprintManager() + + if fm1 == nil { + t.Fatal("expected non-nil GlobalFingerprintManager") + } + if fm1 != fm2 { + t.Error("expected GlobalFingerprintManager to return same instance") + } +} + +func TestSetOIDCHeaders(t *testing.T) { + req, _ := http.NewRequest("GET", "http://example.com", nil) + SetOIDCHeaders(req) + + if req.Header.Get("Content-Type") != "application/json" { + t.Error("expected Content-Type header to be set") + } + + amzUA := req.Header.Get("x-amz-user-agent") + if amzUA == "" { + t.Error("expected x-amz-user-agent header to be set") + } + if !strings.Contains(amzUA, "aws-sdk-js/") { + t.Errorf("x-amz-user-agent should contain aws-sdk-js: %s", amzUA) + } + if !strings.Contains(amzUA, "KiroIDE") { + t.Errorf("x-amz-user-agent should contain KiroIDE: %s", amzUA) + } + + ua := req.Header.Get("User-Agent") + if ua == "" { + t.Error("expected 
User-Agent header to be set") + } + if !strings.Contains(ua, "api/sso-oidc") { + t.Errorf("User-Agent should contain api name: %s", ua) + } + + if req.Header.Get("amz-sdk-invocation-id") == "" { + t.Error("expected amz-sdk-invocation-id header to be set") + } + if req.Header.Get("amz-sdk-request") != "attempt=1; max=4" { + t.Errorf("unexpected amz-sdk-request header: %s", req.Header.Get("amz-sdk-request")) + } +} + +func TestBuildURL(t *testing.T) { + tests := []struct { + name string + endpoint string + path string + queryParams map[string]string + want string + wantContains []string + }{ + { + name: "no query params", + endpoint: "https://api.example.com", + path: "getUsageLimits", + queryParams: nil, + want: "https://api.example.com/getUsageLimits", + }, + { + name: "empty query params", + endpoint: "https://api.example.com", + path: "getUsageLimits", + queryParams: map[string]string{}, + want: "https://api.example.com/getUsageLimits", + }, + { + name: "single query param", + endpoint: "https://api.example.com", + path: "getUsageLimits", + queryParams: map[string]string{ + "origin": "AI_EDITOR", + }, + want: "https://api.example.com/getUsageLimits?origin=AI_EDITOR", + }, + { + name: "multiple query params", + endpoint: "https://api.example.com", + path: "getUsageLimits", + queryParams: map[string]string{ + "origin": "AI_EDITOR", + "resourceType": "AGENTIC_REQUEST", + "profileArn": "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABCDEF", + }, + wantContains: []string{ + "https://api.example.com/getUsageLimits?", + "origin=AI_EDITOR", + "profileArn=arn%3Aaws%3Acodewhisperer%3Aus-east-1%3A123456789012%3Aprofile%2FABCDEF", + "resourceType=AGENTIC_REQUEST", + }, + }, + { + name: "omit empty params", + endpoint: "https://api.example.com", + path: "getUsageLimits", + queryParams: map[string]string{ + "origin": "AI_EDITOR", + "profileArn": "", + }, + want: "https://api.example.com/getUsageLimits?origin=AI_EDITOR", + }, + } + + for _, tt := range tests { + 
t.Run(tt.name, func(t *testing.T) { + got := buildURL(tt.endpoint, tt.path, tt.queryParams) + if tt.want != "" { + if got != tt.want { + t.Errorf("buildURL() = %v, want %v", got, tt.want) + } + } + if tt.wantContains != nil { + for _, substr := range tt.wantContains { + if !strings.Contains(got, substr) { + t.Errorf("buildURL() = %v, want to contain %v", got, substr) + } + } + } + }) + } +} + +func TestBuildUserAgentFormat(t *testing.T) { + fm := NewFingerprintManager() + fp := fm.GetFingerprint("token1") + + ua := fp.BuildUserAgent() + requiredParts := []string{ + "aws-sdk-js/", + "ua/2.1", + "os/", + "lang/js", + "md/nodejs#", + "api/codewhispererstreaming#", + "m/E", + "KiroIDE-", + } + for _, part := range requiredParts { + if !strings.Contains(ua, part) { + t.Errorf("User-Agent missing required part %q: %s", part, ua) + } + } +} + +func TestBuildAmzUserAgentFormat(t *testing.T) { + fm := NewFingerprintManager() + fp := fm.GetFingerprint("token1") + + amzUA := fp.BuildAmzUserAgent() + requiredParts := []string{ + "aws-sdk-js/", + "KiroIDE-", + } + for _, part := range requiredParts { + if !strings.Contains(amzUA, part) { + t.Errorf("X-Amz-User-Agent missing required part %q: %s", part, amzUA) + } + } + + // Amz-User-Agent should be shorter than User-Agent + ua := fp.BuildUserAgent() + if len(amzUA) >= len(ua) { + t.Error("X-Amz-User-Agent should be shorter than User-Agent") + } +} + +func TestSetRuntimeHeaders(t *testing.T) { + req, _ := http.NewRequest("GET", "http://example.com", nil) + accessToken := "test-access-token-1234567890" + clientID := "test-client-id-12345" + accountKey := GenerateAccountKey(clientID) + fp := GlobalFingerprintManager().GetFingerprint(accountKey) + machineID := fp.KiroHash + + setRuntimeHeaders(req, accessToken, accountKey) + + // Check Authorization header + if req.Header.Get("Authorization") != "Bearer "+accessToken { + t.Errorf("expected Authorization header 'Bearer %s', got '%s'", accessToken, req.Header.Get("Authorization")) + 
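
Reviewer note: the account-key derivation exercised here (`GenerateAccountKey`: hex of the first 8 bytes of SHA-256, i.e. a stable 16-character key) can be reproduced outside the package. `accountKey` below is a local stand-in for illustration, not the patch's exported function:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// accountKey mirrors the patch's GenerateAccountKey: hex-encode the first
// 8 bytes of SHA-256(seed), yielding a deterministic 16-character key.
func accountKey(seed string) string {
	sum := sha256.Sum256([]byte(seed))
	return hex.EncodeToString(sum[:8])
}

func main() {
	k := accountKey("test-client-id-12345")
	// Deterministic: the same seed always yields the same 16-char key.
	fmt.Println(len(k), k == accountKey("test-client-id-12345")) // prints "16 true"
}
```

Because the key is derived rather than random, fingerprints keyed by it survive restarts for the same clientID, which is what the per-account caching in `FingerprintManager` relies on.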
} + + // Check x-amz-user-agent header + amzUA := req.Header.Get("x-amz-user-agent") + if amzUA == "" { + t.Error("expected x-amz-user-agent header to be set") + } + if !strings.Contains(amzUA, "aws-sdk-js/") { + t.Errorf("x-amz-user-agent should contain aws-sdk-js: %s", amzUA) + } + if !strings.Contains(amzUA, "KiroIDE-") { + t.Errorf("x-amz-user-agent should contain KiroIDE: %s", amzUA) + } + if !strings.Contains(amzUA, machineID) { + t.Errorf("x-amz-user-agent should contain machineID: %s", amzUA) + } + + // Check User-Agent header + ua := req.Header.Get("User-Agent") + if ua == "" { + t.Error("expected User-Agent header to be set") + } + if !strings.Contains(ua, "api/codewhispererruntime#") { + t.Errorf("User-Agent should contain api/codewhispererruntime: %s", ua) + } + if !strings.Contains(ua, "m/N,E") { + t.Errorf("User-Agent should contain m/N,E: %s", ua) + } + + // Check amz-sdk-invocation-id (should be a UUID) + invocationID := req.Header.Get("amz-sdk-invocation-id") + if invocationID == "" { + t.Error("expected amz-sdk-invocation-id header to be set") + } + if len(invocationID) != 36 { + t.Errorf("expected amz-sdk-invocation-id to be UUID (36 chars), got %d", len(invocationID)) + } + + // Check amz-sdk-request + if req.Header.Get("amz-sdk-request") != "attempt=1; max=1" { + t.Errorf("unexpected amz-sdk-request header: %s", req.Header.Get("amz-sdk-request")) + } +} + +func TestSDKVersionsAreValid(t *testing.T) { + // Verify all OIDC SDK versions match expected format (3.xxx.x) + for _, v := range oidcSDKVersions { + if !strings.HasPrefix(v, "3.") { + t.Errorf("OIDC SDK version should start with 3.: %s", v) + } + parts := strings.Split(v, ".") + if len(parts) != 3 { + t.Errorf("OIDC SDK version should have 3 parts: %s", v) + } + } + + for _, v := range runtimeSDKVersions { + parts := strings.Split(v, ".") + if len(parts) != 3 { + t.Errorf("Runtime SDK version should have 3 parts: %s", v) + } + } + + for _, v := range streamingSDKVersions { + parts := 
strings.Split(v, ".") + if len(parts) != 3 { + t.Errorf("Streaming SDK version should have 3 parts: %s", v) + } + } +} + +func TestKiroVersionsAreValid(t *testing.T) { + // Verify all Kiro versions match expected format (0.x.xxx) + for _, v := range kiroVersions { + if !strings.HasPrefix(v, "0.") { + t.Errorf("Kiro version should start with 0.: %s", v) + } + parts := strings.Split(v, ".") + if len(parts) != 3 { + t.Errorf("Kiro version should have 3 parts: %s", v) + } + } +} + +func TestNodeVersionsAreValid(t *testing.T) { + // Verify all Node versions match expected format (xx.xx.x) + for _, v := range nodeVersions { + parts := strings.Split(v, ".") + if len(parts) != 3 { + t.Errorf("Node version should have 3 parts: %s", v) + } + // Should be Node 20.x or 22.x + if !strings.HasPrefix(v, "20.") && !strings.HasPrefix(v, "22.") { + t.Errorf("Node version should be 20.x or 22.x LTS: %s", v) + } + } +} + +func TestFingerprintManager_SetConfig(t *testing.T) { + fm := NewFingerprintManager() + + // Without config, should generate random fingerprint + fp1 := fm.GetFingerprint("token1") + if fp1 == nil { + t.Fatal("expected non-nil fingerprint") + } + + // Set config with all fields + cfg := &FingerprintConfig{ + OIDCSDKVersion: "3.999.0", + RuntimeSDKVersion: "9.9.9", + StreamingSDKVersion: "8.8.8", + OSType: "darwin", + OSVersion: "99.0.0", + NodeVersion: "99.99.99", + KiroVersion: "9.9.999", + KiroHash: "customhash123", + } + fm.SetConfig(cfg) + + // After setting config, should use config values + fp2 := fm.GetFingerprint("token2") + if fp2.OIDCSDKVersion != "3.999.0" { + t.Errorf("expected OIDCSDKVersion '3.999.0', got '%s'", fp2.OIDCSDKVersion) + } + if fp2.RuntimeSDKVersion != "9.9.9" { + t.Errorf("expected RuntimeSDKVersion '9.9.9', got '%s'", fp2.RuntimeSDKVersion) + } + if fp2.StreamingSDKVersion != "8.8.8" { + t.Errorf("expected StreamingSDKVersion '8.8.8', got '%s'", fp2.StreamingSDKVersion) + } + if fp2.OSType != "darwin" { + t.Errorf("expected OSType 
'darwin', got '%s'", fp2.OSType) + } + if fp2.OSVersion != "99.0.0" { + t.Errorf("expected OSVersion '99.0.0', got '%s'", fp2.OSVersion) + } + if fp2.NodeVersion != "99.99.99" { + t.Errorf("expected NodeVersion '99.99.99', got '%s'", fp2.NodeVersion) + } + if fp2.KiroVersion != "9.9.999" { + t.Errorf("expected KiroVersion '9.9.999', got '%s'", fp2.KiroVersion) + } + if fp2.KiroHash != "customhash123" { + t.Errorf("expected KiroHash 'customhash123', got '%s'", fp2.KiroHash) + } +} + +func TestFingerprintManager_SetConfig_PartialFields(t *testing.T) { + fm := NewFingerprintManager() + + // Set config with only some fields + cfg := &FingerprintConfig{ + KiroVersion: "1.2.345", + KiroHash: "myhash", + // Other fields empty - should use random + } + fm.SetConfig(cfg) + + fp := fm.GetFingerprint("token1") + + // Configured fields should use config values + if fp.KiroVersion != "1.2.345" { + t.Errorf("expected KiroVersion '1.2.345', got '%s'", fp.KiroVersion) + } + if fp.KiroHash != "myhash" { + t.Errorf("expected KiroHash 'myhash', got '%s'", fp.KiroHash) + } + + // Empty fields should be randomly selected (non-empty) + if fp.OIDCSDKVersion == "" { + t.Error("expected non-empty OIDCSDKVersion") + } + if fp.OSType == "" { + t.Error("expected non-empty OSType") + } + if fp.NodeVersion == "" { + t.Error("expected non-empty NodeVersion") + } +} + +func TestFingerprintManager_SetConfig_ClearsCache(t *testing.T) { + fm := NewFingerprintManager() + + // Get fingerprint before config + fp1 := fm.GetFingerprint("token1") + originalHash := fp1.KiroHash + + // Set config + cfg := &FingerprintConfig{ + KiroHash: "newcustomhash", + } + fm.SetConfig(cfg) + + // Same token should now return different fingerprint (cache cleared) + fp2 := fm.GetFingerprint("token1") + if fp2.KiroHash == originalHash { + t.Error("expected cache to be cleared after SetConfig") + } + if fp2.KiroHash != "newcustomhash" { + t.Errorf("expected KiroHash 'newcustomhash', got '%s'", fp2.KiroHash) + } +} + +func 
TestGenerateAccountKey(t *testing.T) { + tests := []struct { + name string + seed string + check func(t *testing.T, result string) + }{ + { + name: "Empty seed", + seed: "", + check: func(t *testing.T, result string) { + if result == "" { + t.Error("expected non-empty result for empty seed") + } + if len(result) != 16 { + t.Errorf("expected 16 char hex string, got %d chars", len(result)) + } + }, + }, + { + name: "Simple seed", + seed: "test-client-id", + check: func(t *testing.T, result string) { + if len(result) != 16 { + t.Errorf("expected 16 char hex string, got %d chars", len(result)) + } + // Verify it's valid hex + for _, c := range result { + if (c < '0' || c > '9') && (c < 'a' || c > 'f') { + t.Errorf("invalid hex character: %c", c) + } + } + }, + }, + { + name: "Same seed produces same result", + seed: "deterministic-seed", + check: func(t *testing.T, result string) { + result2 := GenerateAccountKey("deterministic-seed") + if result != result2 { + t.Errorf("same seed should produce same result: %s vs %s", result, result2) + } + }, + }, + { + name: "Different seeds produce different results", + seed: "seed-one", + check: func(t *testing.T, result string) { + result2 := GenerateAccountKey("seed-two") + if result == result2 { + t.Errorf("different seeds should produce different results: %s vs %s", result, result2) + } + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GenerateAccountKey(tt.seed) + tt.check(t, result) + }) + } +} + +func TestGetAccountKey(t *testing.T) { + tests := []struct { + name string + clientID string + refreshToken string + check func(t *testing.T, result string) + }{ + { + name: "Priority 1: clientID when both provided", + clientID: "client-id-123", + refreshToken: "refresh-token-456", + check: func(t *testing.T, result string) { + expected := GenerateAccountKey("client-id-123") + if result != expected { + t.Errorf("expected clientID-based key %s, got %s", expected, result) + } + }, + }, + { 
+ name: "Priority 2: refreshToken when clientID is empty", + clientID: "", + refreshToken: "refresh-token-789", + check: func(t *testing.T, result string) { + expected := GenerateAccountKey("refresh-token-789") + if result != expected { + t.Errorf("expected refreshToken-based key %s, got %s", expected, result) + } + }, + }, + { + name: "Priority 3: random when both empty", + clientID: "", + refreshToken: "", + check: func(t *testing.T, result string) { + if len(result) != 16 { + t.Errorf("expected 16 char key, got %d chars", len(result)) + } + // Should be different each time (random UUID) + result2 := GetAccountKey("", "") + if result == result2 { + t.Log("warning: random keys are the same (possible but unlikely)") + } + }, + }, + { + name: "clientID only", + clientID: "solo-client-id", + refreshToken: "", + check: func(t *testing.T, result string) { + expected := GenerateAccountKey("solo-client-id") + if result != expected { + t.Errorf("expected clientID-based key %s, got %s", expected, result) + } + }, + }, + { + name: "refreshToken only", + clientID: "", + refreshToken: "solo-refresh-token", + check: func(t *testing.T, result string) { + expected := GenerateAccountKey("solo-refresh-token") + if result != expected { + t.Errorf("expected refreshToken-based key %s, got %s", expected, result) + } + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := GetAccountKey(tt.clientID, tt.refreshToken) + tt.check(t, result) + }) + } +} + +func TestGetAccountKey_Deterministic(t *testing.T) { + // Verify that GetAccountKey produces deterministic results for same inputs + clientID := "test-client-id-abc" + refreshToken := "test-refresh-token-xyz" + + // Call multiple times with same inputs + results := make([]string, 10) + for i := range 10 { + results[i] = GetAccountKey(clientID, refreshToken) + } + + // All results should be identical + for i := 1; i < 10; i++ { + if results[i] != results[0] { + t.Errorf("GetAccountKey should be 
deterministic: got %s and %s", results[0], results[i]) + } + } +} + +func TestFingerprintDeterministic(t *testing.T) { + // Verify that fingerprints are deterministic based on accountKey + fm := NewFingerprintManager() + + accountKey := GenerateAccountKey("test-client-id") + + // Get fingerprint multiple times + fp1 := fm.GetFingerprint(accountKey) + fp2 := fm.GetFingerprint(accountKey) + + // Should be the same pointer (cached) + if fp1 != fp2 { + t.Error("expected same fingerprint pointer for same key") + } + + // Create new manager and verify same values + fm2 := NewFingerprintManager() + fp3 := fm2.GetFingerprint(accountKey) + + // Values should be identical (deterministic generation) + if fp1.KiroHash != fp3.KiroHash { + t.Errorf("KiroHash should be deterministic: %s vs %s", fp1.KiroHash, fp3.KiroHash) + } + if fp1.OSType != fp3.OSType { + t.Errorf("OSType should be deterministic: %s vs %s", fp1.OSType, fp3.OSType) + } + if fp1.OSVersion != fp3.OSVersion { + t.Errorf("OSVersion should be deterministic: %s vs %s", fp1.OSVersion, fp3.OSVersion) + } + if fp1.KiroVersion != fp3.KiroVersion { + t.Errorf("KiroVersion should be deterministic: %s vs %s", fp1.KiroVersion, fp3.KiroVersion) + } + if fp1.NodeVersion != fp3.NodeVersion { + t.Errorf("NodeVersion should be deterministic: %s vs %s", fp1.NodeVersion, fp3.NodeVersion) + } +} diff --git a/internal/auth/kiro/jitter.go b/internal/auth/kiro/jitter.go new file mode 100644 index 0000000000..fef2aea949 --- /dev/null +++ b/internal/auth/kiro/jitter.go @@ -0,0 +1,174 @@ +package kiro + +import ( + "math/rand" + "sync" + "time" +) + +// Jitter configuration constants +const ( + // JitterPercent is the default percentage of jitter to apply (±30%) + JitterPercent = 0.30 + + // Human-like delay ranges + ShortDelayMin = 50 * time.Millisecond // Minimum for rapid consecutive operations + ShortDelayMax = 200 * time.Millisecond // Maximum for rapid consecutive operations + NormalDelayMin = 1 * time.Second // Minimum for normal 
thinking time + NormalDelayMax = 3 * time.Second // Maximum for normal thinking time + LongDelayMin = 5 * time.Second // Minimum for reading/resting + LongDelayMax = 10 * time.Second // Maximum for reading/resting + + // Probability thresholds for human-like behavior + ShortDelayProbability = 0.20 // 20% chance of short delay (consecutive ops) + LongDelayProbability = 0.05 // 5% chance of long delay (reading/resting) + NormalDelayProbability = 0.75 // 75% chance of normal delay (thinking) +) + +var ( + jitterRand *rand.Rand + jitterRandOnce sync.Once + jitterMu sync.Mutex + lastRequestTime time.Time +) + +// initJitterRand lazily initializes the random number generator for jitter calculations. +// Seeded once from the current time; the generator is not cryptographically secure. +func initJitterRand() { + jitterRandOnce.Do(func() { + jitterRand = rand.New(rand.NewSource(time.Now().UnixNano())) + }) +} + +// RandomDelay generates a random delay between min and max duration. +// Thread-safe implementation using mutex protection. +func RandomDelay(min, max time.Duration) time.Duration { + initJitterRand() + jitterMu.Lock() + defer jitterMu.Unlock() + + if min >= max { + return min + } + + rangeMs := max.Milliseconds() - min.Milliseconds() + if rangeMs <= 0 { + return min // min and max fall within the same millisecond; Int63n panics on a non-positive argument + } + randomMs := jitterRand.Int63n(rangeMs) + return min + time.Duration(randomMs)*time.Millisecond +} + +// JitterDelay adds jitter to a base delay. +// Applies ±jitterPercent variation to the base delay. +// For example, JitterDelay(1*time.Second, 0.30) returns a value between 700ms and 1300ms.
+func JitterDelay(baseDelay time.Duration, jitterPercent float64) time.Duration { + initJitterRand() + jitterMu.Lock() + defer jitterMu.Unlock() + + if jitterPercent <= 0 || jitterPercent > 1 { + jitterPercent = JitterPercent + } + + // Calculate jitter range: base * jitterPercent + jitterRange := float64(baseDelay) * jitterPercent + + // Generate random value in range [-jitterRange, +jitterRange] + jitter := (jitterRand.Float64()*2 - 1) * jitterRange + + result := time.Duration(float64(baseDelay) + jitter) + if result < 0 { + return 0 + } + return result +} + +// JitterDelayDefault applies the default ±30% jitter to a base delay. +func JitterDelayDefault(baseDelay time.Duration) time.Duration { + return JitterDelay(baseDelay, JitterPercent) +} + +// HumanLikeDelay generates a delay that mimics human behavior patterns. +// The delay is selected based on probability distribution: +// - 20% chance: Short delay (50-200ms) - simulates consecutive rapid operations +// - 75% chance: Normal delay (1-3s) - simulates thinking/reading time +// - 5% chance: Long delay (5-10s) - simulates breaks/reading longer content +// +// Returns the delay duration (caller should call time.Sleep with this value). 
+func HumanLikeDelay() time.Duration { + initJitterRand() + jitterMu.Lock() + defer jitterMu.Unlock() + + // Track time since last request for adaptive behavior + now := time.Now() + timeSinceLastRequest := now.Sub(lastRequestTime) + lastRequestTime = now + + // If requests are very close together, use short delay + if timeSinceLastRequest < 500*time.Millisecond && timeSinceLastRequest > 0 { + rangeMs := ShortDelayMax.Milliseconds() - ShortDelayMin.Milliseconds() + randomMs := jitterRand.Int63n(rangeMs) + return ShortDelayMin + time.Duration(randomMs)*time.Millisecond + } + + // Otherwise, use probability-based selection + roll := jitterRand.Float64() + + var min, max time.Duration + switch { + case roll < ShortDelayProbability: + // Short delay - consecutive operations + min, max = ShortDelayMin, ShortDelayMax + case roll < ShortDelayProbability+LongDelayProbability: + // Long delay - reading/resting + min, max = LongDelayMin, LongDelayMax + default: + // Normal delay - thinking time + min, max = NormalDelayMin, NormalDelayMax + } + + rangeMs := max.Milliseconds() - min.Milliseconds() + randomMs := jitterRand.Int63n(rangeMs) + return min + time.Duration(randomMs)*time.Millisecond +} + +// ApplyHumanLikeDelay applies human-like delay by sleeping. +// This is a convenience function that combines HumanLikeDelay with time.Sleep. +func ApplyHumanLikeDelay() { + delay := HumanLikeDelay() + if delay > 0 { + time.Sleep(delay) + } +} + +// ExponentialBackoffWithJitter calculates retry delay using exponential backoff with jitter. +// Formula: min(baseDelay * 2^attempt + jitter, maxDelay) +// This helps prevent thundering herd problem when multiple clients retry simultaneously. 
+func ExponentialBackoffWithJitter(attempt int, baseDelay, maxDelay time.Duration) time.Duration { + if attempt < 0 { + attempt = 0 + } + if attempt > 30 { + attempt = 30 // cap the shift to avoid int64 overflow + } + + // Calculate exponential backoff: baseDelay * 2^attempt + backoff := baseDelay * time.Duration(1<<attempt) + if backoff > maxDelay { + backoff = maxDelay + } + + // Add ±30% jitter + return JitterDelay(backoff, JitterPercent) +} + +// ShouldSkipDelay determines if delay should be skipped based on context. +// Returns true for streaming responses, WebSocket connections, etc. +// This function can be extended to check additional skip conditions. +func ShouldSkipDelay(isStreaming bool) bool { + return isStreaming +} + +// ResetLastRequestTime resets the last request time tracker. +// Useful for testing or when starting a new session. +func ResetLastRequestTime() { + jitterMu.Lock() + defer jitterMu.Unlock() + lastRequestTime = time.Time{} +} diff --git a/internal/auth/kiro/metrics.go b/internal/auth/kiro/metrics.go new file mode 100644 index 0000000000..f9540fc17f --- /dev/null +++ b/internal/auth/kiro/metrics.go @@ -0,0 +1,187 @@ +package kiro + +import ( + "math" + "sync" + "time" +) + +// TokenMetrics holds performance metrics for a single token. +type TokenMetrics struct { + SuccessRate float64 // Success rate (0.0 - 1.0) + AvgLatency float64 // Average latency in milliseconds + QuotaRemaining float64 // Remaining quota (0.0 - 1.0) + LastUsed time.Time // Last usage timestamp + FailCount int // Consecutive failure count + TotalRequests int // Total request count + successCount int // Internal: successful request count + totalLatency float64 // Internal: cumulative latency +} + +// TokenScorer manages token metrics and scoring. +type TokenScorer struct { + mu sync.RWMutex + metrics map[string]*TokenMetrics + + // Scoring weights + successRateWeight float64 + quotaWeight float64 + latencyWeight float64 + lastUsedWeight float64 + failPenaltyMultiplier float64 +} + +// NewTokenScorer creates a new TokenScorer with default weights.
+func NewTokenScorer() *TokenScorer { + return &TokenScorer{ + metrics: make(map[string]*TokenMetrics), + successRateWeight: 0.4, + quotaWeight: 0.25, + latencyWeight: 0.2, + lastUsedWeight: 0.15, + failPenaltyMultiplier: 0.1, + } +} + +// getOrCreateMetrics returns existing metrics or creates new ones. +func (s *TokenScorer) getOrCreateMetrics(tokenKey string) *TokenMetrics { + if m, ok := s.metrics[tokenKey]; ok { + return m + } + m := &TokenMetrics{ + SuccessRate: 1.0, + QuotaRemaining: 1.0, + } + s.metrics[tokenKey] = m + return m +} + +// RecordRequest records the result of a request for a token. +func (s *TokenScorer) RecordRequest(tokenKey string, success bool, latency time.Duration) { + s.mu.Lock() + defer s.mu.Unlock() + + m := s.getOrCreateMetrics(tokenKey) + m.TotalRequests++ + m.LastUsed = time.Now() + m.totalLatency += float64(latency.Milliseconds()) + + if success { + m.successCount++ + m.FailCount = 0 + } else { + m.FailCount++ + } + + // Update derived metrics + if m.TotalRequests > 0 { + m.SuccessRate = float64(m.successCount) / float64(m.TotalRequests) + m.AvgLatency = m.totalLatency / float64(m.TotalRequests) + } +} + +// SetQuotaRemaining updates the remaining quota for a token. +func (s *TokenScorer) SetQuotaRemaining(tokenKey string, quota float64) { + s.mu.Lock() + defer s.mu.Unlock() + + m := s.getOrCreateMetrics(tokenKey) + m.QuotaRemaining = quota +} + +// GetMetrics returns a copy of the metrics for a token. +func (s *TokenScorer) GetMetrics(tokenKey string) *TokenMetrics { + s.mu.RLock() + defer s.mu.RUnlock() + + if m, ok := s.metrics[tokenKey]; ok { + copy := *m + return &copy + } + return nil +} + +// CalculateScore computes the score for a token (higher is better).
+func (s *TokenScorer) CalculateScore(tokenKey string) float64 { + s.mu.RLock() + defer s.mu.RUnlock() + + m, ok := s.metrics[tokenKey] + if !ok { + return 1.0 // New tokens get a high initial score + } + + // Success rate component (0-1) + successScore := m.SuccessRate + + // Quota component (0-1) + quotaScore := m.QuotaRemaining + + // Latency component (normalized, lower is better) + // Using exponential decay: score = e^(-latency/1000) + // 1000ms latency -> ~0.37 score, 100ms -> ~0.90 score + latencyScore := math.Exp(-m.AvgLatency / 1000.0) + if m.TotalRequests == 0 { + latencyScore = 1.0 + } + + // Last used component (prefer tokens not recently used) + // Score increases as time since last use increases + timeSinceUse := time.Since(m.LastUsed).Seconds() + // Normalize: 60 seconds -> ~0.63 score, 0 seconds -> 0 score + lastUsedScore := 1.0 - math.Exp(-timeSinceUse/60.0) + if m.LastUsed.IsZero() { + lastUsedScore = 1.0 + } + + // Calculate weighted score + score := s.successRateWeight*successScore + + s.quotaWeight*quotaScore + + s.latencyWeight*latencyScore + + s.lastUsedWeight*lastUsedScore + + // Apply consecutive failure penalty + if m.FailCount > 0 { + penalty := s.failPenaltyMultiplier * float64(m.FailCount) + score = score * math.Max(0, 1.0-penalty) + } + + return score +} + +// SelectBestToken selects the token with the highest score. +func (s *TokenScorer) SelectBestToken(tokens []string) string { + if len(tokens) == 0 { + return "" + } + if len(tokens) == 1 { + return tokens[0] + } + + bestToken := tokens[0] + bestScore := s.CalculateScore(tokens[0]) + + for _, token := range tokens[1:] { + score := s.CalculateScore(token) + if score > bestScore { + bestScore = score + bestToken = token + } + } + + return bestToken +} + +// ResetMetrics clears all metrics for a token. +func (s *TokenScorer) ResetMetrics(tokenKey string) { + s.mu.Lock() + defer s.mu.Unlock() + delete(s.metrics, tokenKey) +} + +// ResetAllMetrics clears all stored metrics. 
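To make CalculateScore's weighted blend concrete: a token with SuccessRate 0.75, QuotaRemaining 0.5, AvgLatency 1000 ms (latency score e^-1 ≈ 0.368), and 60 s since last use (last-used score 1 - e^-1 ≈ 0.632) scores about 0.4·0.75 + 0.25·0.5 + 0.2·0.368 + 0.15·0.632 ≈ 0.593 with no failure penalty. A standalone sketch of the same arithmetic, using the default weights from NewTokenScorer (`weightedScore` is an illustrative name, not part of the package):

```go
package main

import (
	"fmt"
	"math"
)

// weightedScore mirrors CalculateScore's blend with the default weights
// (0.4 success, 0.25 quota, 0.2 latency, 0.15 last-used, 0.1 per consecutive failure).
func weightedScore(successRate, quota, avgLatencyMs, secsSinceUse float64, failCount int) float64 {
	latencyScore := math.Exp(-avgLatencyMs / 1000.0)      // 1000ms -> ~0.37
	lastUsedScore := 1.0 - math.Exp(-secsSinceUse/60.0)   // 60s idle -> ~0.63
	score := 0.4*successRate + 0.25*quota + 0.2*latencyScore + 0.15*lastUsedScore
	if failCount > 0 {
		score *= math.Max(0, 1.0-0.1*float64(failCount)) // consecutive-failure penalty
	}
	return score
}

func main() {
	fmt.Printf("%.3f\n", weightedScore(0.75, 0.5, 1000, 60, 0)) // ≈ 0.593
}
```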
+func (s *TokenScorer) ResetAllMetrics() { + s.mu.Lock() + defer s.mu.Unlock() + s.metrics = make(map[string]*TokenMetrics) +} diff --git a/internal/auth/kiro/metrics_test.go b/internal/auth/kiro/metrics_test.go new file mode 100644 index 0000000000..ffe2a876a3 --- /dev/null +++ b/internal/auth/kiro/metrics_test.go @@ -0,0 +1,301 @@ +package kiro + +import ( + "sync" + "testing" + "time" +) + +func TestNewTokenScorer(t *testing.T) { + s := NewTokenScorer() + if s == nil { + t.Fatal("expected non-nil TokenScorer") + } + if s.metrics == nil { + t.Error("expected non-nil metrics map") + } + if s.successRateWeight != 0.4 { + t.Errorf("expected successRateWeight 0.4, got %f", s.successRateWeight) + } + if s.quotaWeight != 0.25 { + t.Errorf("expected quotaWeight 0.25, got %f", s.quotaWeight) + } +} + +func TestRecordRequest_Success(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + + m := s.GetMetrics("token1") + if m == nil { + t.Fatal("expected non-nil metrics") + } + if m.TotalRequests != 1 { + t.Errorf("expected TotalRequests 1, got %d", m.TotalRequests) + } + if m.SuccessRate != 1.0 { + t.Errorf("expected SuccessRate 1.0, got %f", m.SuccessRate) + } + if m.FailCount != 0 { + t.Errorf("expected FailCount 0, got %d", m.FailCount) + } + if m.AvgLatency != 100 { + t.Errorf("expected AvgLatency 100, got %f", m.AvgLatency) + } +} + +func TestRecordRequest_Failure(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", false, 200*time.Millisecond) + + m := s.GetMetrics("token1") + if m.SuccessRate != 0.0 { + t.Errorf("expected SuccessRate 0.0, got %f", m.SuccessRate) + } + if m.FailCount != 1 { + t.Errorf("expected FailCount 1, got %d", m.FailCount) + } +} + +func TestRecordRequest_MixedResults(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + s.RecordRequest("token1", true, 100*time.Millisecond) + s.RecordRequest("token1", false, 100*time.Millisecond) + 
s.RecordRequest("token1", true, 100*time.Millisecond) + + m := s.GetMetrics("token1") + if m.TotalRequests != 4 { + t.Errorf("expected TotalRequests 4, got %d", m.TotalRequests) + } + if m.SuccessRate != 0.75 { + t.Errorf("expected SuccessRate 0.75, got %f", m.SuccessRate) + } + if m.FailCount != 0 { + t.Errorf("expected FailCount 0 (reset on success), got %d", m.FailCount) + } +} + +func TestRecordRequest_ConsecutiveFailures(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + s.RecordRequest("token1", false, 100*time.Millisecond) + s.RecordRequest("token1", false, 100*time.Millisecond) + s.RecordRequest("token1", false, 100*time.Millisecond) + + m := s.GetMetrics("token1") + if m.FailCount != 3 { + t.Errorf("expected FailCount 3, got %d", m.FailCount) + } +} + +func TestSetQuotaRemaining(t *testing.T) { + s := NewTokenScorer() + s.SetQuotaRemaining("token1", 0.5) + + m := s.GetMetrics("token1") + if m.QuotaRemaining != 0.5 { + t.Errorf("expected QuotaRemaining 0.5, got %f", m.QuotaRemaining) + } +} + +func TestGetMetrics_NonExistent(t *testing.T) { + s := NewTokenScorer() + m := s.GetMetrics("nonexistent") + if m != nil { + t.Error("expected nil metrics for non-existent token") + } +} + +func TestGetMetrics_ReturnsCopy(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + + m1 := s.GetMetrics("token1") + m1.TotalRequests = 999 + + m2 := s.GetMetrics("token1") + if m2.TotalRequests == 999 { + t.Error("GetMetrics should return a copy") + } +} + +func TestCalculateScore_NewToken(t *testing.T) { + s := NewTokenScorer() + score := s.CalculateScore("newtoken") + if score != 1.0 { + t.Errorf("expected score 1.0 for new token, got %f", score) + } +} + +func TestCalculateScore_PerfectToken(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 50*time.Millisecond) + s.SetQuotaRemaining("token1", 1.0) + + time.Sleep(100 * time.Millisecond) + score := 
s.CalculateScore("token1") + if score < 0.5 || score > 1.0 { + t.Errorf("expected high score for perfect token, got %f", score) + } +} + +func TestCalculateScore_FailedToken(t *testing.T) { + s := NewTokenScorer() + for i := 0; i < 5; i++ { + s.RecordRequest("token1", false, 1000*time.Millisecond) + } + s.SetQuotaRemaining("token1", 0.1) + + score := s.CalculateScore("token1") + if score > 0.5 { + t.Errorf("expected low score for failed token, got %f", score) + } +} + +func TestCalculateScore_FailPenalty(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + scoreNoFail := s.CalculateScore("token1") + + s.RecordRequest("token1", false, 100*time.Millisecond) + s.RecordRequest("token1", false, 100*time.Millisecond) + scoreWithFail := s.CalculateScore("token1") + + if scoreWithFail >= scoreNoFail { + t.Errorf("expected lower score with consecutive failures: noFail=%f, withFail=%f", scoreNoFail, scoreWithFail) + } +} + +func TestSelectBestToken_Empty(t *testing.T) { + s := NewTokenScorer() + best := s.SelectBestToken([]string{}) + if best != "" { + t.Errorf("expected empty string for empty tokens, got %s", best) + } +} + +func TestSelectBestToken_SingleToken(t *testing.T) { + s := NewTokenScorer() + best := s.SelectBestToken([]string{"token1"}) + if best != "token1" { + t.Errorf("expected token1, got %s", best) + } +} + +func TestSelectBestToken_MultipleTokens(t *testing.T) { + s := NewTokenScorer() + + s.RecordRequest("bad", false, 1000*time.Millisecond) + s.RecordRequest("bad", false, 1000*time.Millisecond) + s.SetQuotaRemaining("bad", 0.1) + + s.RecordRequest("good", true, 50*time.Millisecond) + s.SetQuotaRemaining("good", 0.9) + + time.Sleep(50 * time.Millisecond) + + best := s.SelectBestToken([]string{"bad", "good"}) + if best != "good" { + t.Errorf("expected good token to be selected, got %s", best) + } +} + +func TestResetMetrics(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 
100*time.Millisecond) + s.ResetMetrics("token1") + + m := s.GetMetrics("token1") + if m != nil { + t.Error("expected nil metrics after reset") + } +} + +func TestResetAllMetrics(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + s.RecordRequest("token2", true, 100*time.Millisecond) + s.RecordRequest("token3", true, 100*time.Millisecond) + + s.ResetAllMetrics() + + if s.GetMetrics("token1") != nil { + t.Error("expected nil metrics for token1 after reset all") + } + if s.GetMetrics("token2") != nil { + t.Error("expected nil metrics for token2 after reset all") + } +} + +func TestTokenScorer_ConcurrentAccess(t *testing.T) { + s := NewTokenScorer() + const numGoroutines = 50 + const numOperations = 100 + + var wg sync.WaitGroup + wg.Add(numGoroutines) + + for i := 0; i < numGoroutines; i++ { + go func(id int) { + defer wg.Done() + tokenKey := "token" + string(rune('a'+id%10)) + for j := 0; j < numOperations; j++ { + switch j % 6 { + case 0: + s.RecordRequest(tokenKey, j%2 == 0, time.Duration(j)*time.Millisecond) + case 1: + s.SetQuotaRemaining(tokenKey, float64(j%100)/100) + case 2: + s.GetMetrics(tokenKey) + case 3: + s.CalculateScore(tokenKey) + case 4: + s.SelectBestToken([]string{tokenKey, "token_x", "token_y"}) + case 5: + if j%20 == 0 { + s.ResetMetrics(tokenKey) + } + } + } + }(i) + } + + wg.Wait() +} + +func TestAvgLatencyCalculation(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + s.RecordRequest("token1", true, 200*time.Millisecond) + s.RecordRequest("token1", true, 300*time.Millisecond) + + m := s.GetMetrics("token1") + if m.AvgLatency != 200 { + t.Errorf("expected AvgLatency 200, got %f", m.AvgLatency) + } +} + +func TestLastUsedUpdated(t *testing.T) { + s := NewTokenScorer() + before := time.Now() + s.RecordRequest("token1", true, 100*time.Millisecond) + + m := s.GetMetrics("token1") + if m.LastUsed.Before(before) { + t.Error("expected LastUsed to be after test 
start time") + } + if m.LastUsed.After(time.Now()) { + t.Error("expected LastUsed to be before or equal to now") + } +} + +func TestDefaultQuotaForNewToken(t *testing.T) { + s := NewTokenScorer() + s.RecordRequest("token1", true, 100*time.Millisecond) + + m := s.GetMetrics("token1") + if m.QuotaRemaining != 1.0 { + t.Errorf("expected default QuotaRemaining 1.0, got %f", m.QuotaRemaining) + } +} diff --git a/internal/auth/kiro/oauth.go b/internal/auth/kiro/oauth.go new file mode 100644 index 0000000000..4101999a32 --- /dev/null +++ b/internal/auth/kiro/oauth.go @@ -0,0 +1,319 @@ +// Package kiro provides OAuth2 authentication for Kiro using native Google login. +package kiro + +import ( + "context" + "crypto/rand" + "crypto/sha256" + "encoding/base64" + "encoding/json" + "fmt" + "html" + "io" + "net" + "net/http" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + // Kiro auth endpoint + kiroAuthEndpoint = "https://prod.us-east-1.auth.desktop.kiro.dev" + + // Default callback port + defaultCallbackPort = 9876 + + // Auth timeout + authTimeout = 10 * time.Minute +) + +// KiroTokenResponse represents the response from Kiro token endpoint. +type KiroTokenResponse struct { + AccessToken string `json:"accessToken"` + RefreshToken string `json:"refreshToken"` + ProfileArn string `json:"profileArn"` + ExpiresIn int `json:"expiresIn"` +} + +// KiroOAuth handles the OAuth flow for Kiro authentication. +type KiroOAuth struct { + httpClient *http.Client + cfg *config.Config + machineID string + kiroVersion string +} + +// NewKiroOAuth creates a new Kiro OAuth handler. 
+func NewKiroOAuth(cfg *config.Config) *KiroOAuth { + client := &http.Client{Timeout: 30 * time.Second} + if cfg != nil { + client = util.SetProxy(&cfg.SDKConfig, client) + } + fp := GlobalFingerprintManager().GetFingerprint("login") + return &KiroOAuth{ + httpClient: client, + cfg: cfg, + machineID: fp.KiroHash, + kiroVersion: fp.KiroVersion, + } +} + +// generateCodeVerifier generates a random code verifier for PKCE. +func generateCodeVerifier() (string, error) { + b := make([]byte, 32) + if _, err := rand.Read(b); err != nil { + return "", err + } + return base64.RawURLEncoding.EncodeToString(b), nil +} + +// generateCodeChallenge generates the code challenge from verifier. +func generateCodeChallenge(verifier string) string { + h := sha256.Sum256([]byte(verifier)) + return base64.RawURLEncoding.EncodeToString(h[:]) +} + +// generateState generates a random state parameter. +func generateState() (string, error) { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return "", err + } + return base64.RawURLEncoding.EncodeToString(b), nil +} + +// AuthResult contains the authorization code and state from callback. +type AuthResult struct { + Code string + State string + Error string +} + +// startCallbackServer starts a local HTTP server to receive the OAuth callback. 
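generateCodeVerifier/generateCodeChallenge above implement PKCE's S256 method (RFC 7636): the challenge is the unpadded base64url encoding of SHA-256(verifier). A standalone sketch checked against the RFC's Appendix B test vector (`codeChallengeS256` is an illustrative name, not part of the package):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// codeChallengeS256 derives the S256 code challenge from a verifier,
// matching generateCodeChallenge above: BASE64URL(SHA256(verifier)), no padding.
func codeChallengeS256(verifier string) string {
	h := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(h[:])
}

func main() {
	// RFC 7636 Appendix B test vector.
	v := "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
	fmt.Println(codeChallengeS256(v)) // E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
}
```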
+func (o *KiroOAuth) startCallbackServer(ctx context.Context, expectedState string) (string, <-chan AuthResult, error) { + // Try to find an available port - use localhost like Kiro does + listener, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", defaultCallbackPort)) + if err != nil { + // Try with dynamic port (RFC 8252 allows dynamic ports for native apps) + log.Warnf("kiro oauth: default port %d is busy, falling back to dynamic port", defaultCallbackPort) + listener, err = net.Listen("tcp", "localhost:0") + if err != nil { + return "", nil, fmt.Errorf("failed to start callback server: %w", err) + } + } + + port := listener.Addr().(*net.TCPAddr).Port + // Use http scheme for local callback server + redirectURI := fmt.Sprintf("http://localhost:%d/oauth/callback", port) + resultChan := make(chan AuthResult, 1) + + server := &http.Server{ + ReadHeaderTimeout: 10 * time.Second, + } + + mux := http.NewServeMux() + mux.HandleFunc("/oauth/callback", func(w http.ResponseWriter, r *http.Request) { + code := r.URL.Query().Get("code") + state := r.URL.Query().Get("state") + errParam := r.URL.Query().Get("error") + + if errParam != "" { + w.Header().Set("Content-Type", "text/html") + w.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(w, `

<h1>Login Failed</h1>
<p>%s</p>
<p>You can close this window.</p>

`, html.EscapeString(errParam)) + resultChan <- AuthResult{Error: errParam} + return + } + + if state != expectedState { + w.Header().Set("Content-Type", "text/html") + w.WriteHeader(http.StatusBadRequest) + fmt.Fprint(w, `

<h1>Login Failed</h1>
<p>Invalid state parameter</p>
<p>You can close this window.</p>

`) + resultChan <- AuthResult{Error: "state mismatch"} + return + } + + w.Header().Set("Content-Type", "text/html") + fmt.Fprint(w, `

<h1>Login Successful!</h1>
<p>You can close this window and return to the terminal.</p>

`) + resultChan <- AuthResult{Code: code, State: state} + }) + + server.Handler = mux + + go func() { + if err := server.Serve(listener); err != nil && err != http.ErrServerClosed { + log.Debugf("callback server error: %v", err) + } + }() + + go func() { + select { + case <-ctx.Done(): + case <-time.After(authTimeout): + case <-resultChan: + } + _ = server.Shutdown(context.Background()) + }() + + return redirectURI, resultChan, nil +} + +// LoginWithBuilderID performs OAuth login with AWS Builder ID using device code flow. +func (o *KiroOAuth) LoginWithBuilderID(ctx context.Context) (*KiroTokenData, error) { + ssoClient := NewSSOOIDCClient(o.cfg) + return ssoClient.LoginWithBuilderID(ctx) +} + +// LoginWithBuilderIDAuthCode performs OAuth login with AWS Builder ID using authorization code flow. +// This provides a better UX than device code flow as it uses automatic browser callback. +func (o *KiroOAuth) LoginWithBuilderIDAuthCode(ctx context.Context) (*KiroTokenData, error) { + ssoClient := NewSSOOIDCClient(o.cfg) + return ssoClient.LoginWithBuilderIDAuthCode(ctx) +} + +// exchangeCodeForToken exchanges the authorization code for tokens. 
+func (o *KiroOAuth) exchangeCodeForToken(ctx context.Context, code, codeVerifier, redirectURI string) (*KiroTokenData, error) { + payload := map[string]string{ + "code": code, + "code_verifier": codeVerifier, + "redirect_uri": redirectURI, + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, fmt.Errorf("failed to marshal request: %w", err) + } + + tokenURL := kiroAuthEndpoint + "/oauth/token" + req, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, strings.NewReader(string(body))) + if err != nil { + return nil, fmt.Errorf("failed to create request: %w", err) + } + + req.Header.Set("Content-Type", "application/json") + req.Header.Set("User-Agent", fmt.Sprintf("KiroIDE-%s-%s", o.kiroVersion, o.machineID)) + req.Header.Set("Accept", "application/json, text/plain, */*") + + resp, err := o.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("token request failed: %w", err) + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("failed to read response: %w", err) + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("token exchange failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("token exchange failed (status %d)", resp.StatusCode) + } + + var tokenResp KiroTokenResponse + if err := json.Unmarshal(respBody, &tokenResp); err != nil { + return nil, fmt.Errorf("failed to parse token response: %w", err) + } + + // Validate ExpiresIn - use default 1 hour if invalid + expiresIn := tokenResp.ExpiresIn + if expiresIn <= 0 { + expiresIn = 3600 + } + expiresAt := time.Now().Add(time.Duration(expiresIn) * time.Second) + + return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: tokenResp.ProfileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "social", + Provider: "", // Caller should preserve original provider + Region: "us-east-1", + }, nil +} + +// 
RefreshToken refreshes an expired access token. +// Uses KiroIDE-style User-Agent to match official Kiro IDE behavior. +func (o *KiroOAuth) RefreshToken(ctx context.Context, refreshToken string) (*KiroTokenData, error) { + return o.RefreshTokenWithFingerprint(ctx, refreshToken, "") +} + +// RefreshTokenWithFingerprint refreshes an expired access token with a specific fingerprint. +// tokenKey is used to generate a consistent fingerprint for the token. +func (o *KiroOAuth) RefreshTokenWithFingerprint(ctx context.Context, refreshToken, tokenKey string) (*KiroTokenData, error) { + payload := map[string]string{ + "refreshToken": refreshToken, + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, fmt.Errorf("failed to marshal request: %w", err) + } + + refreshURL := kiroAuthEndpoint + "/refreshToken" + req, err := http.NewRequestWithContext(ctx, http.MethodPost, refreshURL, strings.NewReader(string(body))) + if err != nil { + return nil, fmt.Errorf("failed to create request: %w", err) + } + + req.Header.Set("Content-Type", "application/json") + req.Header.Set("User-Agent", fmt.Sprintf("KiroIDE-%s-%s", o.kiroVersion, o.machineID)) + req.Header.Set("Accept", "application/json, text/plain, */*") + + resp, err := o.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("refresh request failed: %w", err) + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("failed to read response: %w", err) + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("token refresh failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("token refresh failed (status %d): %s", resp.StatusCode, string(respBody)) + } + + var tokenResp KiroTokenResponse + if err := json.Unmarshal(respBody, &tokenResp); err != nil { + return nil, fmt.Errorf("failed to parse token response: %w", err) + } + + // Validate ExpiresIn - use default 1 hour if invalid + expiresIn := tokenResp.ExpiresIn + 
if expiresIn <= 0 { + expiresIn = 3600 + } + expiresAt := time.Now().Add(time.Duration(expiresIn) * time.Second) + + return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: tokenResp.ProfileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "social", + Provider: "", // Caller should preserve original provider + Region: "us-east-1", + }, nil +} + +// LoginWithGoogle performs OAuth login with Google using Kiro's social auth. +// This uses a custom protocol handler (kiro://) to receive the callback. +func (o *KiroOAuth) LoginWithGoogle(ctx context.Context) (*KiroTokenData, error) { + socialClient := NewSocialAuthClient(o.cfg) + return socialClient.LoginWithGoogle(ctx) +} + +// LoginWithGitHub performs OAuth login with GitHub using Kiro's social auth. +// This uses a custom protocol handler (kiro://) to receive the callback. +func (o *KiroOAuth) LoginWithGitHub(ctx context.Context) (*KiroTokenData, error) { + socialClient := NewSocialAuthClient(o.cfg) + return socialClient.LoginWithGitHub(ctx) +} diff --git a/internal/auth/kiro/oauth_web.go b/internal/auth/kiro/oauth_web.go new file mode 100644 index 0000000000..4618a838da --- /dev/null +++ b/internal/auth/kiro/oauth_web.go @@ -0,0 +1,975 @@ +// Package kiro provides OAuth Web authentication for Kiro. 
+package kiro + +import ( + "context" + "crypto/rand" + "encoding/base64" + "encoding/json" + "fmt" + "html/template" + "net/http" + "os" + "path/filepath" + "strings" + "sync" + "time" + + "github.com/gin-gonic/gin" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + defaultSessionExpiry = 10 * time.Minute + pollIntervalSeconds = 5 +) + +type authSessionStatus string + +const ( + statusPending authSessionStatus = "pending" + statusSuccess authSessionStatus = "success" + statusFailed authSessionStatus = "failed" +) + +type webAuthSession struct { + stateID string + deviceCode string + userCode string + authURL string + verificationURI string + expiresIn int + interval int + status authSessionStatus + startedAt time.Time + completedAt time.Time + expiresAt time.Time + error string + tokenData *KiroTokenData + ssoClient *SSOOIDCClient + clientID string + clientSecret string + region string + cancelFunc context.CancelFunc + authMethod string // "google", "github", "builder-id", "idc" + startURL string // Used for IDC + codeVerifier string // Used for social auth PKCE + codeChallenge string // Used for social auth PKCE +} + +type OAuthWebHandler struct { + cfg *config.Config + sessions map[string]*webAuthSession + mu sync.RWMutex + onTokenObtained func(*KiroTokenData) +} + +func NewOAuthWebHandler(cfg *config.Config) *OAuthWebHandler { + return &OAuthWebHandler{ + cfg: cfg, + sessions: make(map[string]*webAuthSession), + } +} + +func (h *OAuthWebHandler) SetTokenCallback(callback func(*KiroTokenData)) { + h.onTokenObtained = callback +} + +func (h *OAuthWebHandler) RegisterRoutes(router gin.IRouter) { + oauth := router.Group("/v0/oauth/kiro") + { + oauth.GET("", h.handleSelect) + oauth.GET("/start", h.handleStart) + oauth.GET("/callback", h.handleCallback) + oauth.GET("/social/callback", h.handleSocialCallback) + oauth.GET("/status", h.handleStatus) + 
oauth.POST("/import", h.handleImportToken) + oauth.POST("/refresh", h.handleManualRefresh) + } +} + +func generateStateID() (string, error) { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return "", err + } + return base64.RawURLEncoding.EncodeToString(b), nil +} + +func (h *OAuthWebHandler) handleSelect(c *gin.Context) { + h.renderSelectPage(c) +} + +func (h *OAuthWebHandler) handleStart(c *gin.Context) { + method := c.Query("method") + + if method == "" { + c.Redirect(http.StatusFound, "/v0/oauth/kiro") + return + } + + switch method { + case "google", "github": + // Google/GitHub social login is not supported for third-party apps + // due to AWS Cognito redirect_uri restrictions + h.renderError(c, "Google/GitHub login is not available for third-party applications. Please use AWS Builder ID or import your token from Kiro IDE.") + case "builder-id": + h.startBuilderIDAuth(c) + case "idc": + h.startIDCAuth(c) + default: + h.renderError(c, fmt.Sprintf("Unknown authentication method: %s", method)) + } +} + +func (h *OAuthWebHandler) startSocialAuth(c *gin.Context, method string) { + stateID, err := generateStateID() + if err != nil { + h.renderError(c, "Failed to generate state parameter") + return + } + + codeVerifier, codeChallenge, err := generatePKCE() + if err != nil { + h.renderError(c, "Failed to generate PKCE parameters") + return + } + + socialClient := NewSocialAuthClient(h.cfg) + + var provider string + if method == "google" { + provider = string(ProviderGoogle) + } else { + provider = string(ProviderGitHub) + } + + redirectURI := h.getSocialCallbackURL(c) + authURL := socialClient.buildLoginURL(provider, redirectURI, codeChallenge, stateID) + + ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute) + + session := &webAuthSession{ + stateID: stateID, + authMethod: method, + authURL: authURL, + status: statusPending, + startedAt: time.Now(), + expiresIn: 600, + codeVerifier: codeVerifier, + codeChallenge: 
codeChallenge, + region: "us-east-1", + cancelFunc: cancel, + } + + h.mu.Lock() + h.sessions[stateID] = session + h.mu.Unlock() + + go func() { + <-ctx.Done() + h.mu.Lock() + if session.status == statusPending { + session.status = statusFailed + session.error = "Authentication timed out" + } + h.mu.Unlock() + }() + + c.Redirect(http.StatusFound, authURL) +} + +func (h *OAuthWebHandler) getSocialCallbackURL(c *gin.Context) string { + scheme := "http" + if c.Request.TLS != nil || c.GetHeader("X-Forwarded-Proto") == "https" { + scheme = "https" + } + return fmt.Sprintf("%s://%s/v0/oauth/kiro/social/callback", scheme, c.Request.Host) +} + +func (h *OAuthWebHandler) startBuilderIDAuth(c *gin.Context) { + stateID, err := generateStateID() + if err != nil { + h.renderError(c, "Failed to generate state parameter") + return + } + + region := defaultIDCRegion + startURL := builderIDStartURL + + ssoClient := NewSSOOIDCClient(h.cfg) + + regResp, err := ssoClient.RegisterClientWithRegion(c.Request.Context(), region) + if err != nil { + log.Errorf("OAuth Web: failed to register client: %v", err) + h.renderError(c, fmt.Sprintf("Failed to register client: %v", err)) + return + } + + authResp, err := ssoClient.StartDeviceAuthorizationWithIDC( + c.Request.Context(), + regResp.ClientID, + regResp.ClientSecret, + startURL, + region, + ) + if err != nil { + log.Errorf("OAuth Web: failed to start device authorization: %v", err) + h.renderError(c, fmt.Sprintf("Failed to start device authorization: %v", err)) + return + } + + ctx, cancel := context.WithTimeout(context.Background(), time.Duration(authResp.ExpiresIn)*time.Second) + + session := &webAuthSession{ + stateID: stateID, + deviceCode: authResp.DeviceCode, + userCode: authResp.UserCode, + authURL: authResp.VerificationURIComplete, + verificationURI: authResp.VerificationURI, + expiresIn: authResp.ExpiresIn, + interval: authResp.Interval, + status: statusPending, + startedAt: time.Now(), + ssoClient: ssoClient, + clientID: 
regResp.ClientID, + clientSecret: regResp.ClientSecret, + region: region, + authMethod: "builder-id", + startURL: startURL, + cancelFunc: cancel, + } + + h.mu.Lock() + h.sessions[stateID] = session + h.mu.Unlock() + + go h.pollForToken(ctx, session) + + h.renderStartPage(c, session) +} + +func (h *OAuthWebHandler) startIDCAuth(c *gin.Context) { + startURL := c.Query("startUrl") + region := c.Query("region") + + if startURL == "" { + h.renderError(c, "Missing startUrl parameter for IDC authentication") + return + } + if region == "" { + region = defaultIDCRegion + } + + stateID, err := generateStateID() + if err != nil { + h.renderError(c, "Failed to generate state parameter") + return + } + + ssoClient := NewSSOOIDCClient(h.cfg) + + regResp, err := ssoClient.RegisterClientWithRegion(c.Request.Context(), region) + if err != nil { + log.Errorf("OAuth Web: failed to register client: %v", err) + h.renderError(c, fmt.Sprintf("Failed to register client: %v", err)) + return + } + + authResp, err := ssoClient.StartDeviceAuthorizationWithIDC( + c.Request.Context(), + regResp.ClientID, + regResp.ClientSecret, + startURL, + region, + ) + if err != nil { + log.Errorf("OAuth Web: failed to start device authorization: %v", err) + h.renderError(c, fmt.Sprintf("Failed to start device authorization: %v", err)) + return + } + + ctx, cancel := context.WithTimeout(context.Background(), time.Duration(authResp.ExpiresIn)*time.Second) + + session := &webAuthSession{ + stateID: stateID, + deviceCode: authResp.DeviceCode, + userCode: authResp.UserCode, + authURL: authResp.VerificationURIComplete, + verificationURI: authResp.VerificationURI, + expiresIn: authResp.ExpiresIn, + interval: authResp.Interval, + status: statusPending, + startedAt: time.Now(), + ssoClient: ssoClient, + clientID: regResp.ClientID, + clientSecret: regResp.ClientSecret, + region: region, + authMethod: "idc", + startURL: startURL, + cancelFunc: cancel, + } + + h.mu.Lock() + h.sessions[stateID] = session + 
h.mu.Unlock() + + go h.pollForToken(ctx, session) + + h.renderStartPage(c, session) +} + +func (h *OAuthWebHandler) pollForToken(ctx context.Context, session *webAuthSession) { + defer session.cancelFunc() + + interval := time.Duration(session.interval) * time.Second + if interval < time.Duration(pollIntervalSeconds)*time.Second { + interval = time.Duration(pollIntervalSeconds) * time.Second + } + + ticker := time.NewTicker(interval) + defer ticker.Stop() + + for { + select { + case <-ctx.Done(): + h.mu.Lock() + if session.status == statusPending { + session.status = statusFailed + session.error = "Authentication timed out" + } + h.mu.Unlock() + return + case <-ticker.C: + tokenResp, err := h.ssoClient(session).CreateTokenWithRegion( + ctx, + session.clientID, + session.clientSecret, + session.deviceCode, + session.region, + ) + + if err != nil { + errStr := err.Error() + if errStr == ErrAuthorizationPending.Error() { + continue + } + if errStr == ErrSlowDown.Error() { + interval += 5 * time.Second + ticker.Reset(interval) + continue + } + + h.mu.Lock() + session.status = statusFailed + session.error = errStr + session.completedAt = time.Now() + h.mu.Unlock() + + log.Errorf("OAuth Web: token polling failed: %v", err) + return + } + + expiresAt := time.Now().Add(time.Duration(tokenResp.ExpiresIn) * time.Second) + + // Fetch profileArn for IDC + var profileArn string + if session.authMethod == "idc" { + profileArn = session.ssoClient.FetchProfileArn(ctx, tokenResp.AccessToken, session.clientID, tokenResp.RefreshToken) + } + + email := FetchUserEmailWithFallback(ctx, h.cfg, tokenResp.AccessToken, session.clientID, tokenResp.RefreshToken) + + tokenData := &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: profileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: session.authMethod, + Provider: "AWS", + ClientID: session.clientID, + ClientSecret: session.clientSecret, + Email: email, + Region: 
session.region, + StartURL: session.startURL, + } + + h.mu.Lock() + session.status = statusSuccess + session.completedAt = time.Now() + session.expiresAt = expiresAt + session.tokenData = tokenData + h.mu.Unlock() + + if h.onTokenObtained != nil { + h.onTokenObtained(tokenData) + } + + // Save token to file + h.saveTokenToFile(tokenData) + + log.Infof("OAuth Web: authentication successful for %s", email) + return + } + } +} + +// saveTokenToFile saves the token data to the auth directory +func (h *OAuthWebHandler) saveTokenToFile(tokenData *KiroTokenData) { + // Get auth directory from config or use default + authDir := "" + if h.cfg != nil && h.cfg.AuthDir != "" { + var err error + authDir, err = util.ResolveAuthDir(h.cfg.AuthDir) + if err != nil { + log.Errorf("OAuth Web: failed to resolve auth directory: %v", err) + } + } + + // Fall back to default location + if authDir == "" { + home, err := os.UserHomeDir() + if err != nil { + log.Errorf("OAuth Web: failed to get home directory: %v", err) + return + } + authDir = filepath.Join(home, ".cli-proxy-api") + } + + // Create directory if not exists + if err := os.MkdirAll(authDir, 0700); err != nil { + log.Errorf("OAuth Web: failed to create auth directory: %v", err) + return + } + + // Generate filename using the unified function + fileName := GenerateTokenFileName(tokenData) + + authFilePath := filepath.Join(authDir, fileName) + + // Convert to storage format and save + storage := &KiroTokenStorage{ + Type: "kiro", + AccessToken: tokenData.AccessToken, + RefreshToken: tokenData.RefreshToken, + ProfileArn: tokenData.ProfileArn, + ExpiresAt: tokenData.ExpiresAt, + AuthMethod: tokenData.AuthMethod, + Provider: tokenData.Provider, + LastRefresh: time.Now().Format(time.RFC3339), + ClientID: tokenData.ClientID, + ClientSecret: tokenData.ClientSecret, + Region: tokenData.Region, + StartURL: tokenData.StartURL, + Email: tokenData.Email, + } + + if err := storage.SaveTokenToFile(authFilePath); err != nil { + 
log.Errorf("OAuth Web: failed to save token to file: %v", err) + return + } + + log.Infof("OAuth Web: token saved to %s", authFilePath) +} + +func (h *OAuthWebHandler) ssoClient(session *webAuthSession) *SSOOIDCClient { + return session.ssoClient +} + +func (h *OAuthWebHandler) handleCallback(c *gin.Context) { + stateID := c.Query("state") + errParam := c.Query("error") + + if errParam != "" { + h.renderError(c, errParam) + return + } + + if stateID == "" { + h.renderError(c, "Missing state parameter") + return + } + + h.mu.RLock() + session, exists := h.sessions[stateID] + h.mu.RUnlock() + + if !exists { + h.renderError(c, "Invalid or expired session") + return + } + + if session.status == statusSuccess { + h.renderSuccess(c, session) + } else if session.status == statusFailed { + h.renderError(c, session.error) + } else { + c.Redirect(http.StatusFound, "/v0/oauth/kiro/start") + } +} + +func (h *OAuthWebHandler) handleSocialCallback(c *gin.Context) { + stateID := c.Query("state") + code := c.Query("code") + errParam := c.Query("error") + + if errParam != "" { + h.renderError(c, errParam) + return + } + + if stateID == "" { + h.renderError(c, "Missing state parameter") + return + } + + if code == "" { + h.renderError(c, "Missing authorization code") + return + } + + h.mu.RLock() + session, exists := h.sessions[stateID] + h.mu.RUnlock() + + if !exists { + h.renderError(c, "Invalid or expired session") + return + } + + if session.authMethod != "google" && session.authMethod != "github" { + h.renderError(c, "Invalid session type for social callback") + return + } + + socialClient := NewSocialAuthClient(h.cfg) + redirectURI := h.getSocialCallbackURL(c) + + tokenReq := &CreateTokenRequest{ + Code: code, + CodeVerifier: session.codeVerifier, + RedirectURI: redirectURI, + } + + tokenResp, err := socialClient.CreateToken(c.Request.Context(), tokenReq) + if err != nil { + log.Errorf("OAuth Web: social token exchange failed: %v", err) + h.mu.Lock() + session.status = 
statusFailed + session.error = fmt.Sprintf("Token exchange failed: %v", err) + session.completedAt = time.Now() + h.mu.Unlock() + h.renderError(c, session.error) + return + } + + expiresIn := tokenResp.ExpiresIn + if expiresIn <= 0 { + expiresIn = 3600 + } + expiresAt := time.Now().Add(time.Duration(expiresIn) * time.Second) + + email := ExtractEmailFromJWT(tokenResp.AccessToken) + + var provider string + if session.authMethod == "google" { + provider = string(ProviderGoogle) + } else { + provider = string(ProviderGitHub) + } + + tokenData := &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: tokenResp.ProfileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: session.authMethod, + Provider: provider, + Email: email, + Region: "us-east-1", + } + + h.mu.Lock() + session.status = statusSuccess + session.completedAt = time.Now() + session.expiresAt = expiresAt + session.tokenData = tokenData + h.mu.Unlock() + + if session.cancelFunc != nil { + session.cancelFunc() + } + + if h.onTokenObtained != nil { + h.onTokenObtained(tokenData) + } + + // Save token to file + h.saveTokenToFile(tokenData) + + log.Infof("OAuth Web: social authentication successful for %s via %s", email, provider) + h.renderSuccess(c, session) +} + +func (h *OAuthWebHandler) handleStatus(c *gin.Context) { + stateID := c.Query("state") + if stateID == "" { + c.JSON(http.StatusBadRequest, gin.H{"error": "missing state parameter"}) + return + } + + h.mu.RLock() + session, exists := h.sessions[stateID] + h.mu.RUnlock() + + if !exists { + c.JSON(http.StatusNotFound, gin.H{"error": "session not found"}) + return + } + + response := gin.H{ + "status": string(session.status), + } + + switch session.status { + case statusPending: + elapsed := time.Since(session.startedAt).Seconds() + remaining := float64(session.expiresIn) - elapsed + if remaining < 0 { + remaining = 0 + } + response["remaining_seconds"] = int(remaining) + case 
statusSuccess: + response["completed_at"] = session.completedAt.Format(time.RFC3339) + response["expires_at"] = session.expiresAt.Format(time.RFC3339) + case statusFailed: + response["error"] = session.error + response["failed_at"] = session.completedAt.Format(time.RFC3339) + } + + c.JSON(http.StatusOK, response) +} + +func (h *OAuthWebHandler) renderStartPage(c *gin.Context, session *webAuthSession) { + tmpl, err := template.New("start").Parse(oauthWebStartPageHTML) + if err != nil { + log.Errorf("OAuth Web: failed to parse template: %v", err) + c.String(http.StatusInternalServerError, "Template error") + return + } + + data := map[string]interface{}{ + "AuthURL": session.authURL, + "UserCode": session.userCode, + "ExpiresIn": session.expiresIn, + "StateID": session.stateID, + } + + c.Header("Content-Type", "text/html; charset=utf-8") + if err := tmpl.Execute(c.Writer, data); err != nil { + log.Errorf("OAuth Web: failed to render template: %v", err) + } +} + +func (h *OAuthWebHandler) renderSelectPage(c *gin.Context) { + tmpl, err := template.New("select").Parse(oauthWebSelectPageHTML) + if err != nil { + log.Errorf("OAuth Web: failed to parse select template: %v", err) + c.String(http.StatusInternalServerError, "Template error") + return + } + + c.Header("Content-Type", "text/html; charset=utf-8") + if err := tmpl.Execute(c.Writer, nil); err != nil { + log.Errorf("OAuth Web: failed to render select template: %v", err) + } +} + +func (h *OAuthWebHandler) renderError(c *gin.Context, errMsg string) { + tmpl, err := template.New("error").Parse(oauthWebErrorPageHTML) + if err != nil { + log.Errorf("OAuth Web: failed to parse error template: %v", err) + c.String(http.StatusInternalServerError, "Template error") + return + } + + data := map[string]interface{}{ + "Error": errMsg, + } + + c.Header("Content-Type", "text/html; charset=utf-8") + c.Status(http.StatusBadRequest) + if err := tmpl.Execute(c.Writer, data); err != nil { + log.Errorf("OAuth Web: failed to render 
error template: %v", err) + } +} + +func (h *OAuthWebHandler) renderSuccess(c *gin.Context, session *webAuthSession) { + tmpl, err := template.New("success").Parse(oauthWebSuccessPageHTML) + if err != nil { + log.Errorf("OAuth Web: failed to parse success template: %v", err) + c.String(http.StatusInternalServerError, "Template error") + return + } + + data := map[string]interface{}{ + "ExpiresAt": session.expiresAt.Format(time.RFC3339), + } + + c.Header("Content-Type", "text/html; charset=utf-8") + if err := tmpl.Execute(c.Writer, data); err != nil { + log.Errorf("OAuth Web: failed to render success template: %v", err) + } +} + +func (h *OAuthWebHandler) CleanupExpiredSessions() { + h.mu.Lock() + defer h.mu.Unlock() + + now := time.Now() + for id, session := range h.sessions { + if session.status != statusPending && now.Sub(session.completedAt) > 30*time.Minute { + delete(h.sessions, id) + } else if session.status == statusPending && now.Sub(session.startedAt) > defaultSessionExpiry { + session.cancelFunc() + delete(h.sessions, id) + } + } +} + +func (h *OAuthWebHandler) GetSession(stateID string) (*webAuthSession, bool) { + h.mu.RLock() + defer h.mu.RUnlock() + session, exists := h.sessions[stateID] + return session, exists +} + +// ImportTokenRequest represents the request body for token import +type ImportTokenRequest struct { + RefreshToken string `json:"refreshToken"` +} + +// handleImportToken handles manual refresh token import from Kiro IDE +func (h *OAuthWebHandler) handleImportToken(c *gin.Context) { + var req ImportTokenRequest + if err := c.ShouldBindJSON(&req); err != nil { + c.JSON(http.StatusBadRequest, gin.H{ + "success": false, + "error": "Invalid request body", + }) + return + } + + refreshToken := strings.TrimSpace(req.RefreshToken) + if refreshToken == "" { + c.JSON(http.StatusBadRequest, gin.H{ + "success": false, + "error": "Refresh token is required", + }) + return + } + + // Validate token format + if !strings.HasPrefix(refreshToken, 
"aorAAAAAG") { + c.JSON(http.StatusBadRequest, gin.H{ + "success": false, + "error": "Invalid token format. Token should start with aorAAAAAG...", + }) + return + } + + // Create social auth client to refresh and validate the token + socialClient := NewSocialAuthClient(h.cfg) + + // Refresh the token to validate it and get access token + tokenData, err := socialClient.RefreshSocialToken(c.Request.Context(), refreshToken) + if err != nil { + log.Errorf("OAuth Web: token refresh failed during import: %v", err) + c.JSON(http.StatusBadRequest, gin.H{ + "success": false, + "error": fmt.Sprintf("Token validation failed: %v", err), + }) + return + } + + // Set the original refresh token (the refreshed one might be empty) + if tokenData.RefreshToken == "" { + tokenData.RefreshToken = refreshToken + } + tokenData.AuthMethod = "social" + tokenData.Provider = "imported" + + // Notify callback if set + if h.onTokenObtained != nil { + h.onTokenObtained(tokenData) + } + + // Save token to file + h.saveTokenToFile(tokenData) + + // Generate filename for response using the unified function + fileName := GenerateTokenFileName(tokenData) + + log.Infof("OAuth Web: token imported successfully") + c.JSON(http.StatusOK, gin.H{ + "success": true, + "message": "Token imported successfully", + "fileName": fileName, + }) +} + +// handleManualRefresh handles manual token refresh requests from the web UI. +// This allows users to trigger a token refresh when needed, without waiting +// for the automatic 30-second check and 20-minute-before-expiry refresh cycle. +// Uses the same refresh logic as kiro_executor.Refresh for consistency. 
+func (h *OAuthWebHandler) handleManualRefresh(c *gin.Context) { + authDir := "" + if h.cfg != nil && h.cfg.AuthDir != "" { + var err error + authDir, err = util.ResolveAuthDir(h.cfg.AuthDir) + if err != nil { + log.Errorf("OAuth Web: failed to resolve auth directory: %v", err) + } + } + + if authDir == "" { + home, err := os.UserHomeDir() + if err != nil { + c.JSON(http.StatusInternalServerError, gin.H{ + "success": false, + "error": "Failed to get home directory", + }) + return + } + authDir = filepath.Join(home, ".cli-proxy-api") + } + + // Find all kiro token files in the auth directory + files, err := os.ReadDir(authDir) + if err != nil { + c.JSON(http.StatusInternalServerError, gin.H{ + "success": false, + "error": fmt.Sprintf("Failed to read auth directory: %v", err), + }) + return + } + + var refreshedCount int + var errors []string + + for _, file := range files { + if file.IsDir() { + continue + } + name := file.Name() + if !strings.HasPrefix(name, "kiro-") || !strings.HasSuffix(name, ".json") { + continue + } + + filePath := filepath.Join(authDir, name) + data, err := os.ReadFile(filePath) + if err != nil { + errors = append(errors, fmt.Sprintf("%s: read error - %v", name, err)) + continue + } + + var storage KiroTokenStorage + if err := json.Unmarshal(data, &storage); err != nil { + errors = append(errors, fmt.Sprintf("%s: parse error - %v", name, err)) + continue + } + + if storage.RefreshToken == "" { + errors = append(errors, fmt.Sprintf("%s: no refresh token", name)) + continue + } + + // Refresh token using the same logic as kiro_executor.Refresh + tokenData, err := h.refreshTokenData(c.Request.Context(), &storage) + if err != nil { + errors = append(errors, fmt.Sprintf("%s: refresh failed - %v", name, err)) + continue + } + + // Update storage with new token data + storage.AccessToken = tokenData.AccessToken + if tokenData.RefreshToken != "" { + storage.RefreshToken = tokenData.RefreshToken + } + storage.ExpiresAt = tokenData.ExpiresAt + 
storage.LastRefresh = time.Now().Format(time.RFC3339) + if tokenData.ProfileArn != "" { + storage.ProfileArn = tokenData.ProfileArn + } + + // Write updated token back to file + updatedData, err := json.MarshalIndent(storage, "", " ") + if err != nil { + errors = append(errors, fmt.Sprintf("%s: marshal error - %v", name, err)) + continue + } + + tmpFile := filePath + ".tmp" + if err := os.WriteFile(tmpFile, updatedData, 0600); err != nil { + errors = append(errors, fmt.Sprintf("%s: write error - %v", name, err)) + continue + } + if err := os.Rename(tmpFile, filePath); err != nil { + errors = append(errors, fmt.Sprintf("%s: rename error - %v", name, err)) + continue + } + + log.Infof("OAuth Web: manually refreshed token in %s, expires at %s", name, tokenData.ExpiresAt) + refreshedCount++ + + // Notify callback if set + if h.onTokenObtained != nil { + h.onTokenObtained(tokenData) + } + } + + if refreshedCount == 0 && len(errors) > 0 { + c.JSON(http.StatusBadRequest, gin.H{ + "success": false, + "error": fmt.Sprintf("All refresh attempts failed: %v", errors), + }) + return + } + + response := gin.H{ + "success": true, + "message": fmt.Sprintf("Refreshed %d token(s)", refreshedCount), + "refreshedCount": refreshedCount, + } + if len(errors) > 0 { + response["warnings"] = errors + } + + c.JSON(http.StatusOK, response) +} + +// refreshTokenData refreshes a token using the appropriate method based on auth type. +// This mirrors the logic in kiro_executor.Refresh for consistency. 
+func (h *OAuthWebHandler) refreshTokenData(ctx context.Context, storage *KiroTokenStorage) (*KiroTokenData, error) { + ssoClient := NewSSOOIDCClient(h.cfg) + + switch { + case storage.ClientID != "" && storage.ClientSecret != "" && storage.AuthMethod == "idc" && storage.Region != "": + // IDC refresh with region-specific endpoint + log.Debugf("OAuth Web: using SSO OIDC refresh for IDC (region=%s)", storage.Region) + return ssoClient.RefreshTokenWithRegion(ctx, storage.ClientID, storage.ClientSecret, storage.RefreshToken, storage.Region, storage.StartURL) + + case storage.ClientID != "" && storage.ClientSecret != "" && storage.AuthMethod == "builder-id": + // Builder ID refresh with default endpoint + log.Debugf("OAuth Web: using SSO OIDC refresh for AWS Builder ID") + return ssoClient.RefreshToken(ctx, storage.ClientID, storage.ClientSecret, storage.RefreshToken) + + default: + // Fallback to Kiro's OAuth refresh endpoint (for social auth: Google/GitHub) + log.Debugf("OAuth Web: using Kiro OAuth refresh endpoint") + oauth := NewKiroOAuth(h.cfg) + return oauth.RefreshToken(ctx, storage.RefreshToken) + } +} diff --git a/internal/auth/kiro/oauth_web_templates.go b/internal/auth/kiro/oauth_web_templates.go new file mode 100644 index 0000000000..228677a511 --- /dev/null +++ b/internal/auth/kiro/oauth_web_templates.go @@ -0,0 +1,779 @@ +// Package kiro provides OAuth Web authentication templates. +package kiro + +const ( + oauthWebStartPageHTML = ` + + + + + AWS SSO Authentication + + + +
+

🔐 AWS SSO Authentication

+

Follow the steps below to complete authentication

+ +
+
+ 1 + Click the button below to open the authorization page +
+ + 🚀 Open Authorization Page + +
+ +
+
+ 2 + Enter the verification code below +
+
+
Verification Code
+
{{.UserCode}}
+
+
+ +
+
+ 3 + Complete AWS SSO login +
+

+ Use your AWS SSO account to login and authorize +

+
+ +
+
+
{{.ExpiresIn}}s
+
+ Waiting for authorization... +
+
+ +
+ 💡 Tip: The authorization page will open in a new tab. This page will automatically update once authorization is complete. +
+
+ + + +` + + oauthWebErrorPageHTML = ` + + + + + Authentication Failed + + + +
+

❌ Authentication Failed

+
+

Error:

+

{{.Error}}

+
+ 🔄 Retry +
+ +` + + oauthWebSuccessPageHTML = ` + + + + + Authentication Successful + + + +
+
+

Authentication Successful!

+
+

You can close this window.

+
+
Token expires: {{.ExpiresAt}}
+
+ +` + + oauthWebSelectPageHTML = ` + + + + + Select Authentication Method + + + +
+

🔐 Select Authentication Method

+

Choose how you want to authenticate with Kiro

+ +
+ + 🔶 + AWS Builder ID (Recommended) + + + + +
or
+ + + + + +
+
+ +
+
+ + +
+ + +
Your AWS Identity Center Start URL
+
+ +
+ + +
AWS Region for your Identity Center
+
+ + +
+
+ +
+
+
+ + +
Copy from Kiro IDE: ~/.kiro/kiro-auth-token.json → refreshToken field
+
+ + + +
+
+
+ +
+ ⚠️ Note: Google and GitHub login are not available for third-party applications due to AWS Cognito restrictions. Please use AWS Builder ID or import your token from Kiro IDE. +
+ +
+ 💡 How to get RefreshToken:
+ 1. Open Kiro IDE and login with Google/GitHub
+ 2. Find the token file: ~/.kiro/kiro-auth-token.json
+ 3. Copy the refreshToken value and paste it above +
+
+ + + +` +) diff --git a/internal/auth/kiro/protocol_handler.go b/internal/auth/kiro/protocol_handler.go new file mode 100644 index 0000000000..a1c28a86ab --- /dev/null +++ b/internal/auth/kiro/protocol_handler.go @@ -0,0 +1,725 @@ +// Package kiro provides custom protocol handler registration for Kiro OAuth. +// This enables the CLI to intercept kiro:// URIs for social authentication (Google/GitHub). +package kiro + +import ( + "context" + "fmt" + "html" + "net" + "net/http" + "net/url" + "os" + "os/exec" + "path/filepath" + "runtime" + "strings" + "sync" + "time" + + log "github.com/sirupsen/logrus" +) + +const ( + // KiroProtocol is the custom URI scheme used by Kiro + KiroProtocol = "kiro" + + // KiroAuthority is the URI authority for authentication callbacks + KiroAuthority = "kiro.kiroAgent" + + // KiroAuthPath is the path for successful authentication + KiroAuthPath = "/authenticate-success" + + // KiroRedirectURI is the full redirect URI for social auth + KiroRedirectURI = "kiro://kiro.kiroAgent/authenticate-success" + + // DefaultHandlerPort is the default port for the local callback server + DefaultHandlerPort = 19876 + + // HandlerTimeout is how long to wait for the OAuth callback + HandlerTimeout = 10 * time.Minute +) + +// ProtocolHandler manages the custom kiro:// protocol handler for OAuth callbacks. +type ProtocolHandler struct { + port int + server *http.Server + listener net.Listener + resultChan chan *AuthCallback + stopChan chan struct{} + mu sync.Mutex + running bool +} + +// AuthCallback contains the OAuth callback parameters. +type AuthCallback struct { + Code string + State string + Error string +} + +// NewProtocolHandler creates a new protocol handler. +func NewProtocolHandler() *ProtocolHandler { + return &ProtocolHandler{ + port: DefaultHandlerPort, + resultChan: make(chan *AuthCallback, 1), + stopChan: make(chan struct{}), + } +} + +// Start starts the local callback server that receives redirects from the protocol handler. 
+func (h *ProtocolHandler) Start(ctx context.Context) (int, error) { + h.mu.Lock() + defer h.mu.Unlock() + + if h.running { + return h.port, nil + } + + // Drain any stale results from previous runs + select { + case <-h.resultChan: + default: + } + + // Reset stopChan for reuse - close old channel first to unblock any waiting goroutines + if h.stopChan != nil { + select { + case <-h.stopChan: + // Already closed + default: + close(h.stopChan) + } + } + h.stopChan = make(chan struct{}) + + // Try ports in known range (must match handler script port range) + var listener net.Listener + var err error + portRange := []int{DefaultHandlerPort, DefaultHandlerPort + 1, DefaultHandlerPort + 2, DefaultHandlerPort + 3, DefaultHandlerPort + 4} + + for _, port := range portRange { + listener, err = net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port)) + if err == nil { + break + } + log.Debugf("kiro protocol handler: port %d busy, trying next", port) + } + + if listener == nil { + return 0, fmt.Errorf("failed to start callback server: all ports %d-%d are busy", DefaultHandlerPort, DefaultHandlerPort+4) + } + + h.listener = listener + h.port = listener.Addr().(*net.TCPAddr).Port + + mux := http.NewServeMux() + mux.HandleFunc("/oauth/callback", h.handleCallback) + + h.server = &http.Server{ + Handler: mux, + ReadHeaderTimeout: 10 * time.Second, + } + + go func() { + if err := h.server.Serve(listener); err != nil && err != http.ErrServerClosed { + log.Debugf("kiro protocol handler server error: %v", err) + } + }() + + h.running = true + log.Debugf("kiro protocol handler started on port %d", h.port) + + // Auto-shutdown after context done, timeout, or explicit stop + // Capture references to prevent race with new Start() calls + currentStopChan := h.stopChan + currentServer := h.server + currentListener := h.listener + go func() { + select { + case <-ctx.Done(): + case <-time.After(HandlerTimeout): + case <-currentStopChan: + return // Already stopped, exit goroutine + } + // Only 
stop if this is still the current server/listener instance + h.mu.Lock() + if h.server == currentServer && h.listener == currentListener { + h.mu.Unlock() + h.Stop() + } else { + h.mu.Unlock() + } + }() + + return h.port, nil +} + +// Stop stops the callback server. +func (h *ProtocolHandler) Stop() { + h.mu.Lock() + defer h.mu.Unlock() + + if !h.running { + return + } + + // Signal the auto-shutdown goroutine to exit. + // This select pattern is safe because stopChan is only modified while holding h.mu, + // and we hold the lock here. The select prevents panic from double-close. + select { + case <-h.stopChan: + // Already closed + default: + close(h.stopChan) + } + + if h.server != nil { + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) + defer cancel() + _ = h.server.Shutdown(ctx) + } + + h.running = false + log.Debug("kiro protocol handler stopped") +} + +// WaitForCallback waits for the OAuth callback and returns the result. +func (h *ProtocolHandler) WaitForCallback(ctx context.Context) (*AuthCallback, error) { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-time.After(HandlerTimeout): + return nil, fmt.Errorf("timeout waiting for OAuth callback") + case result := <-h.resultChan: + return result, nil + } +} + +// GetPort returns the port the handler is listening on. +func (h *ProtocolHandler) GetPort() int { + return h.port +} + +// handleCallback processes the OAuth callback from the protocol handler script. 
+func (h *ProtocolHandler) handleCallback(w http.ResponseWriter, r *http.Request) { + code := r.URL.Query().Get("code") + state := r.URL.Query().Get("state") + errParam := r.URL.Query().Get("error") + + result := &AuthCallback{ + Code: code, + State: state, + Error: errParam, + } + + // Send result + select { + case h.resultChan <- result: + default: + // Channel full, ignore duplicate callbacks + } + + // Send success response + w.Header().Set("Content-Type", "text/html; charset=utf-8") + if errParam != "" { + w.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(w, ` + +Login Failed + +

Login Failed

+

Error: %s

+

You can close this window.

+ +`, html.EscapeString(errParam)) + } else { + fmt.Fprint(w, ` + +Login Successful + +

Login Successful!

+

You can close this window and return to the terminal.

+ + +`) + } +} + +// IsProtocolHandlerInstalled checks if the kiro:// protocol handler is installed. +func IsProtocolHandlerInstalled() bool { + switch runtime.GOOS { + case "linux": + return isLinuxHandlerInstalled() + case "windows": + return isWindowsHandlerInstalled() + case "darwin": + return isDarwinHandlerInstalled() + default: + return false + } +} + +// InstallProtocolHandler installs the kiro:// protocol handler for the current platform. +func InstallProtocolHandler(handlerPort int) error { + switch runtime.GOOS { + case "linux": + return installLinuxHandler(handlerPort) + case "windows": + return installWindowsHandler(handlerPort) + case "darwin": + return installDarwinHandler(handlerPort) + default: + return fmt.Errorf("unsupported platform: %s", runtime.GOOS) + } +} + +// UninstallProtocolHandler removes the kiro:// protocol handler. +func UninstallProtocolHandler() error { + switch runtime.GOOS { + case "linux": + return uninstallLinuxHandler() + case "windows": + return uninstallWindowsHandler() + case "darwin": + return uninstallDarwinHandler() + default: + return fmt.Errorf("unsupported platform: %s", runtime.GOOS) + } +} + +// --- Linux Implementation --- + +func getLinuxDesktopPath() string { + homeDir, _ := os.UserHomeDir() + return filepath.Join(homeDir, ".local", "share", "applications", "kiro-oauth-handler.desktop") +} + +func getLinuxHandlerScriptPath() string { + homeDir, _ := os.UserHomeDir() + return filepath.Join(homeDir, ".local", "bin", "kiro-oauth-handler") +} + +func isLinuxHandlerInstalled() bool { + desktopPath := getLinuxDesktopPath() + _, err := os.Stat(desktopPath) + return err == nil +} + +func installLinuxHandler(handlerPort int) error { + // Create directories + homeDir, err := os.UserHomeDir() + if err != nil { + return err + } + + binDir := filepath.Join(homeDir, ".local", "bin") + appDir := filepath.Join(homeDir, ".local", "share", "applications") + + if err := os.MkdirAll(binDir, 0755); err != nil { + return 
fmt.Errorf("failed to create bin directory: %w", err) + } + if err := os.MkdirAll(appDir, 0755); err != nil { + return fmt.Errorf("failed to create applications directory: %w", err) + } + + // Create handler script - tries multiple ports to handle dynamic port allocation + scriptPath := getLinuxHandlerScriptPath() + scriptContent := fmt.Sprintf(`#!/bin/bash +# Kiro OAuth Protocol Handler +# Handles kiro:// URIs - tries CLI first, then forwards to Kiro IDE + +URL="$1" + +# Check curl availability +if ! command -v curl &> /dev/null; then + echo "Error: curl is required for Kiro OAuth handler" >&2 + exit 1 +fi + +# Extract code and state from URL +[[ "$URL" =~ code=([^&]+) ]] && CODE="${BASH_REMATCH[1]}" +[[ "$URL" =~ state=([^&]+) ]] && STATE="${BASH_REMATCH[1]}" +[[ "$URL" =~ error=([^&]+) ]] && ERROR="${BASH_REMATCH[1]}" + +# Try CLI proxy on multiple possible ports (default + dynamic range) +CLI_OK=0 +for PORT in %d %d %d %d %d; do + if [ -n "$ERROR" ]; then + curl -sf --connect-timeout 1 "http://127.0.0.1:$PORT/oauth/callback?error=$ERROR" && CLI_OK=1 && break + elif [ -n "$CODE" ] && [ -n "$STATE" ]; then + curl -sf --connect-timeout 1 "http://127.0.0.1:$PORT/oauth/callback?code=$CODE&state=$STATE" && CLI_OK=1 && break + fi +done + +# If CLI not available, forward to Kiro IDE +if [ $CLI_OK -eq 0 ] && [ -x "/usr/share/kiro/kiro" ]; then + /usr/share/kiro/kiro --open-url "$URL" & +fi +`, handlerPort, handlerPort+1, handlerPort+2, handlerPort+3, handlerPort+4) + + if err := os.WriteFile(scriptPath, []byte(scriptContent), 0755); err != nil { + return fmt.Errorf("failed to write handler script: %w", err) + } + + // Create .desktop file + desktopPath := getLinuxDesktopPath() + desktopContent := fmt.Sprintf(`[Desktop Entry] +Name=Kiro OAuth Handler +Comment=Handle kiro:// protocol for CLI Proxy API authentication +Exec=%s %%u +Type=Application +Terminal=false +NoDisplay=true +MimeType=x-scheme-handler/kiro; +Categories=Utility; +`, scriptPath) + + if err := 
os.WriteFile(desktopPath, []byte(desktopContent), 0644); err != nil { + return fmt.Errorf("failed to write desktop file: %w", err) + } + + // Register handler with xdg-mime + cmd := exec.Command("xdg-mime", "default", "kiro-oauth-handler.desktop", "x-scheme-handler/kiro") + if err := cmd.Run(); err != nil { + log.Warnf("xdg-mime registration failed (may need manual setup): %v", err) + } + + // Update desktop database + cmd = exec.Command("update-desktop-database", appDir) + _ = cmd.Run() // Ignore errors, not critical + + log.Info("Kiro protocol handler installed for Linux") + return nil +} + +func uninstallLinuxHandler() error { + desktopPath := getLinuxDesktopPath() + scriptPath := getLinuxHandlerScriptPath() + + if err := os.Remove(desktopPath); err != nil && !os.IsNotExist(err) { + return fmt.Errorf("failed to remove desktop file: %w", err) + } + if err := os.Remove(scriptPath); err != nil && !os.IsNotExist(err) { + return fmt.Errorf("failed to remove handler script: %w", err) + } + + log.Info("Kiro protocol handler uninstalled") + return nil +} + +// --- Windows Implementation --- + +func isWindowsHandlerInstalled() bool { + // Check registry key existence + cmd := exec.Command("reg", "query", `HKCU\Software\Classes\kiro`, "/ve") + return cmd.Run() == nil +} + +func installWindowsHandler(handlerPort int) error { + homeDir, err := os.UserHomeDir() + if err != nil { + return err + } + + // Create handler script (PowerShell) + scriptDir := filepath.Join(homeDir, ".cliproxyapi") + if err := os.MkdirAll(scriptDir, 0755); err != nil { + return fmt.Errorf("failed to create script directory: %w", err) + } + + scriptPath := filepath.Join(scriptDir, "kiro-oauth-handler.ps1") + scriptContent := fmt.Sprintf(`# Kiro OAuth Protocol Handler for Windows +param([string]$url) + +# Load required assembly for HttpUtility +Add-Type -AssemblyName System.Web + +# Parse URL parameters +$uri = [System.Uri]$url +$query = [System.Web.HttpUtility]::ParseQueryString($uri.Query) +$code = 
$query["code"] +$state = $query["state"] +$errorParam = $query["error"] + +# Try multiple ports (default + dynamic range) +$ports = @(%d, %d, %d, %d, %d) +$success = $false + +foreach ($port in $ports) { + if ($success) { break } + $callbackUrl = "http://127.0.0.1:$port/oauth/callback" + try { + if ($errorParam) { + $fullUrl = $callbackUrl + "?error=" + $errorParam + Invoke-WebRequest -Uri $fullUrl -UseBasicParsing -TimeoutSec 1 -ErrorAction Stop | Out-Null + $success = $true + } elseif ($code -and $state) { + $fullUrl = $callbackUrl + "?code=" + $code + "&state=" + $state + Invoke-WebRequest -Uri $fullUrl -UseBasicParsing -TimeoutSec 1 -ErrorAction Stop | Out-Null + $success = $true + } + } catch { + # Try next port + } +} +`, handlerPort, handlerPort+1, handlerPort+2, handlerPort+3, handlerPort+4) + + if err := os.WriteFile(scriptPath, []byte(scriptContent), 0644); err != nil { + return fmt.Errorf("failed to write handler script: %w", err) + } + + // Create batch wrapper + batchPath := filepath.Join(scriptDir, "kiro-oauth-handler.bat") + batchContent := fmt.Sprintf("@echo off\npowershell -ExecutionPolicy Bypass -File \"%s\" %%1\n", scriptPath) + + if err := os.WriteFile(batchPath, []byte(batchContent), 0644); err != nil { + return fmt.Errorf("failed to write batch wrapper: %w", err) + } + + // Register in Windows registry + commands := [][]string{ + {"reg", "add", `HKCU\Software\Classes\kiro`, "/ve", "/d", "URL:Kiro Protocol", "/f"}, + {"reg", "add", `HKCU\Software\Classes\kiro`, "/v", "URL Protocol", "/d", "", "/f"}, + {"reg", "add", `HKCU\Software\Classes\kiro\shell`, "/f"}, + {"reg", "add", `HKCU\Software\Classes\kiro\shell\open`, "/f"}, + {"reg", "add", `HKCU\Software\Classes\kiro\shell\open\command`, "/ve", "/d", fmt.Sprintf("\"%s\" \"%%1\"", batchPath), "/f"}, + } + + for _, args := range commands { + cmd := exec.Command(args[0], args[1:]...) 
+ if err := cmd.Run(); err != nil { + return fmt.Errorf("failed to run registry command: %w", err) + } + } + + log.Info("Kiro protocol handler installed for Windows") + return nil +} + +func uninstallWindowsHandler() error { + // Remove registry keys + cmd := exec.Command("reg", "delete", `HKCU\Software\Classes\kiro`, "/f") + if err := cmd.Run(); err != nil { + log.Warnf("failed to remove registry key: %v", err) + } + + // Remove scripts + homeDir, _ := os.UserHomeDir() + scriptDir := filepath.Join(homeDir, ".cliproxyapi") + _ = os.Remove(filepath.Join(scriptDir, "kiro-oauth-handler.ps1")) + _ = os.Remove(filepath.Join(scriptDir, "kiro-oauth-handler.bat")) + + log.Info("Kiro protocol handler uninstalled") + return nil +} + +// --- macOS Implementation --- + +func getDarwinAppPath() string { + homeDir, _ := os.UserHomeDir() + return filepath.Join(homeDir, "Applications", "KiroOAuthHandler.app") +} + +func isDarwinHandlerInstalled() bool { + appPath := getDarwinAppPath() + _, err := os.Stat(appPath) + return err == nil +} + +func installDarwinHandler(handlerPort int) error { + // Create app bundle structure + appPath := getDarwinAppPath() + contentsPath := filepath.Join(appPath, "Contents") + macOSPath := filepath.Join(contentsPath, "MacOS") + + if err := os.MkdirAll(macOSPath, 0755); err != nil { + return fmt.Errorf("failed to create app bundle: %w", err) + } + + // Create Info.plist + plistPath := filepath.Join(contentsPath, "Info.plist") + plistContent := ` + + + + CFBundleIdentifier + com.cliproxyapi.kiro-oauth-handler + CFBundleName + KiroOAuthHandler + CFBundleExecutable + kiro-oauth-handler + CFBundleVersion + 1.0 + CFBundleURLTypes + + + CFBundleURLName + Kiro Protocol + CFBundleURLSchemes + + kiro + + + + LSBackgroundOnly + + +` + + if err := os.WriteFile(plistPath, []byte(plistContent), 0644); err != nil { + return fmt.Errorf("failed to write Info.plist: %w", err) + } + + // Create executable script - tries multiple ports to handle dynamic port allocation + 
execPath := filepath.Join(macOSPath, "kiro-oauth-handler") + execContent := fmt.Sprintf(`#!/bin/bash +# Kiro OAuth Protocol Handler for macOS + +URL="$1" + +# Check curl availability (should always exist on macOS) +if [ ! -x /usr/bin/curl ]; then + echo "Error: curl is required for Kiro OAuth handler" >&2 + exit 1 +fi + +# Extract code and state from URL +[[ "$URL" =~ code=([^&]+) ]] && CODE="${BASH_REMATCH[1]}" +[[ "$URL" =~ state=([^&]+) ]] && STATE="${BASH_REMATCH[1]}" +[[ "$URL" =~ error=([^&]+) ]] && ERROR="${BASH_REMATCH[1]}" + +# Try multiple ports (default + dynamic range) +for PORT in %d %d %d %d %d; do + if [ -n "$ERROR" ]; then + /usr/bin/curl -sf --connect-timeout 1 "http://127.0.0.1:$PORT/oauth/callback?error=$ERROR" && exit 0 + elif [ -n "$CODE" ] && [ -n "$STATE" ]; then + /usr/bin/curl -sf --connect-timeout 1 "http://127.0.0.1:$PORT/oauth/callback?code=$CODE&state=$STATE" && exit 0 + fi +done +`, handlerPort, handlerPort+1, handlerPort+2, handlerPort+3, handlerPort+4) + + if err := os.WriteFile(execPath, []byte(execContent), 0755); err != nil { + return fmt.Errorf("failed to write executable: %w", err) + } + + // Register the app with Launch Services + cmd := exec.Command("/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister", + "-f", appPath) + if err := cmd.Run(); err != nil { + log.Warnf("lsregister failed (handler may still work): %v", err) + } + + log.Info("Kiro protocol handler installed for macOS") + return nil +} + +func uninstallDarwinHandler() error { + appPath := getDarwinAppPath() + + // Unregister from Launch Services + cmd := exec.Command("/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister", + "-u", appPath) + _ = cmd.Run() + + // Remove app bundle + if err := os.RemoveAll(appPath); err != nil && !os.IsNotExist(err) { + return fmt.Errorf("failed to remove app bundle: %w", err) + } + + log.Info("Kiro protocol handler uninstalled") + 
return nil +} + +// ParseKiroURI parses a kiro:// URI and extracts the callback parameters. +func ParseKiroURI(rawURI string) (*AuthCallback, error) { + u, err := url.Parse(rawURI) + if err != nil { + return nil, fmt.Errorf("invalid URI: %w", err) + } + + if u.Scheme != KiroProtocol { + return nil, fmt.Errorf("invalid scheme: expected %s, got %s", KiroProtocol, u.Scheme) + } + + if u.Host != KiroAuthority { + return nil, fmt.Errorf("invalid authority: expected %s, got %s", KiroAuthority, u.Host) + } + + query := u.Query() + return &AuthCallback{ + Code: query.Get("code"), + State: query.Get("state"), + Error: query.Get("error"), + }, nil +} + +// GetHandlerInstructions returns platform-specific instructions for manual handler setup. +func GetHandlerInstructions() string { + switch runtime.GOOS { + case "linux": + return `To manually set up the Kiro protocol handler on Linux: + +1. Create ~/.local/share/applications/kiro-oauth-handler.desktop: + [Desktop Entry] + Name=Kiro OAuth Handler + Exec=~/.local/bin/kiro-oauth-handler %u + Type=Application + Terminal=false + MimeType=x-scheme-handler/kiro; + +2. Create ~/.local/bin/kiro-oauth-handler (make it executable): + #!/bin/bash + URL="$1" + # ... (see generated script for full content) + +3. Run: xdg-mime default kiro-oauth-handler.desktop x-scheme-handler/kiro` + + case "windows": + return `To manually set up the Kiro protocol handler on Windows: + +1. Open Registry Editor (regedit.exe) +2. Create key: HKEY_CURRENT_USER\Software\Classes\kiro +3. Set default value to: URL:Kiro Protocol +4. Create string value "URL Protocol" with empty data +5. Create subkey: shell\open\command +6. Set default value to: "C:\path\to\handler.bat" "%1"` + + case "darwin": + return `To manually set up the Kiro protocol handler on macOS: + +1. Create ~/Applications/KiroOAuthHandler.app bundle +2. Add Info.plist with CFBundleURLTypes containing "kiro" scheme +3. Create executable in Contents/MacOS/ +4. 
Run: /System/Library/.../lsregister -f ~/Applications/KiroOAuthHandler.app`
+
+	default:
+		return "Protocol handler setup is not supported on this platform."
+	}
+}
+
+// SetupProtocolHandlerIfNeeded checks and installs the protocol handler if needed.
+func SetupProtocolHandlerIfNeeded(handlerPort int) error {
+	if IsProtocolHandlerInstalled() {
+		log.Debug("Kiro protocol handler already installed")
+		return nil
+	}
+
+	fmt.Println("\n╔══════════════════════════════════════════════════════════╗")
+	fmt.Println("║           Kiro Protocol Handler Setup Required           ║")
+	fmt.Println("╚══════════════════════════════════════════════════════════╝")
+	fmt.Println("\nTo enable Google/GitHub login, we need to install a protocol handler.")
+	fmt.Println("This allows your browser to redirect back to the CLI after authentication.")
+	fmt.Println("\nInstalling protocol handler...")
+
+	if err := InstallProtocolHandler(handlerPort); err != nil {
+		fmt.Printf("\n⚠ Automatic installation failed: %v\n", err)
+		fmt.Println("\nManual setup instructions:")
+		fmt.Println(strings.Repeat("-", 60))
+		fmt.Println(GetHandlerInstructions())
+		return err
+	}
+
+	fmt.Println("\n✓ Protocol handler installed successfully!")
+	return nil
+}
diff --git a/internal/auth/kiro/rate_limiter.go b/internal/auth/kiro/rate_limiter.go
new file mode 100644
index 0000000000..52bb24af70
--- /dev/null
+++ b/internal/auth/kiro/rate_limiter.go
@@ -0,0 +1,316 @@
+package kiro
+
+import (
+	"math"
+	"math/rand"
+	"strings"
+	"sync"
+	"time"
+)
+
+const (
+	DefaultMinTokenInterval  = 1 * time.Second
+	DefaultMaxTokenInterval  = 2 * time.Second
+	DefaultDailyMaxRequests  = 500
+	DefaultJitterPercent     = 0.3
+	DefaultBackoffBase       = 30 * time.Second
+	DefaultBackoffMax        = 5 * time.Minute
+	DefaultBackoffMultiplier = 1.5
+	DefaultSuspendCooldown   = 1 * time.Hour
+)
+
+// TokenState holds the rate-limiting state for a single token.
+type TokenState struct {
+	LastRequest    time.Time
+	RequestCount   int
+	CooldownEnd    time.Time
+	FailCount      int
+	DailyRequests  int
+	DailyResetTime
time.Time
+	IsSuspended    bool
+	SuspendedAt    time.Time
+	SuspendReason  string
+}
+
+// RateLimiter throttles requests on a per-token basis.
+type RateLimiter struct {
+	mu                sync.RWMutex
+	states            map[string]*TokenState
+	minTokenInterval  time.Duration
+	maxTokenInterval  time.Duration
+	dailyMaxRequests  int
+	jitterPercent     float64
+	backoffBase       time.Duration
+	backoffMax        time.Duration
+	backoffMultiplier float64
+	suspendCooldown   time.Duration
+	rng               *rand.Rand
+}
+
+// NewRateLimiter creates a rate limiter with the default configuration.
+func NewRateLimiter() *RateLimiter {
+	return &RateLimiter{
+		states:            make(map[string]*TokenState),
+		minTokenInterval:  DefaultMinTokenInterval,
+		maxTokenInterval:  DefaultMaxTokenInterval,
+		dailyMaxRequests:  DefaultDailyMaxRequests,
+		jitterPercent:     DefaultJitterPercent,
+		backoffBase:       DefaultBackoffBase,
+		backoffMax:        DefaultBackoffMax,
+		backoffMultiplier: DefaultBackoffMultiplier,
+		suspendCooldown:   DefaultSuspendCooldown,
+		rng:               rand.New(rand.NewSource(time.Now().UnixNano())),
+	}
+}
+
+// RateLimiterConfig configures a RateLimiter.
+type RateLimiterConfig struct {
+	MinTokenInterval  time.Duration
+	MaxTokenInterval  time.Duration
+	DailyMaxRequests  int
+	JitterPercent     float64
+	BackoffBase       time.Duration
+	BackoffMax        time.Duration
+	BackoffMultiplier float64
+	SuspendCooldown   time.Duration
+}
+
+// NewRateLimiterWithConfig creates a rate limiter with a custom configuration;
+// zero-valued fields fall back to the defaults.
+func NewRateLimiterWithConfig(cfg RateLimiterConfig) *RateLimiter {
+	rl := NewRateLimiter()
+	if cfg.MinTokenInterval > 0 {
+		rl.minTokenInterval = cfg.MinTokenInterval
+	}
+	if cfg.MaxTokenInterval > 0 {
+		rl.maxTokenInterval = cfg.MaxTokenInterval
+	}
+	if cfg.DailyMaxRequests > 0 {
+		rl.dailyMaxRequests = cfg.DailyMaxRequests
+	}
+	if cfg.JitterPercent > 0 {
+		rl.jitterPercent = cfg.JitterPercent
+	}
+	if cfg.BackoffBase > 0 {
+		rl.backoffBase = cfg.BackoffBase
+	}
+	if cfg.BackoffMax > 0 {
+		rl.backoffMax = cfg.BackoffMax
+	}
+	if cfg.BackoffMultiplier > 0 {
+		rl.backoffMultiplier = cfg.BackoffMultiplier
+	}
+	if cfg.SuspendCooldown > 0 {
+		rl.suspendCooldown = cfg.SuspendCooldown
+	}
+	return rl
+}
+
+// getOrCreateState returns the state for a token, creating it if needed.
+// Callers must hold rl.mu.
+func (rl *RateLimiter) getOrCreateState(tokenKey string) *TokenState {
+	state, exists := rl.states[tokenKey]
+	if !exists {
+		state = &TokenState{
+			DailyResetTime: time.Now().Truncate(24 * time.Hour).Add(24 * time.Hour),
+		}
+		rl.states[tokenKey] = state
+	}
+	return state
+}
+
+// resetDailyIfNeeded resets the daily counter once the reset time has passed.
+// Callers must hold rl.mu.
+func (rl *RateLimiter) resetDailyIfNeeded(state *TokenState) {
+	now := time.Now()
+	if now.After(state.DailyResetTime) {
+		state.DailyRequests = 0
+		state.DailyResetTime = now.Truncate(24 * time.Hour).Add(24 * time.Hour)
+	}
+}
+
+// calculateInterval returns a randomized interval with jitter applied.
+// Callers must hold rl.mu (rl.rng is not safe for concurrent use).
+func (rl *RateLimiter) calculateInterval() time.Duration {
+	baseInterval := rl.minTokenInterval
+	// Guard against Int63n(0) panicking when min and max are configured equal
+	if spread := rl.maxTokenInterval - rl.minTokenInterval; spread > 0 {
+		baseInterval += time.Duration(rl.rng.Int63n(int64(spread)))
+	}
+	jitter := time.Duration(float64(baseInterval) * rl.jitterPercent * (rl.rng.Float64()*2 - 1))
+	return baseInterval + jitter
+}
+
+// WaitForToken blocks until the token is allowed to make its next request
+// (randomized interval with jitter).
+func (rl *RateLimiter) WaitForToken(tokenKey string) {
+	rl.mu.Lock()
+	state := rl.getOrCreateState(tokenKey)
+	rl.resetDailyIfNeeded(state)
+
+	now := time.Now()
+
+	// Wait out any active cooldown
+	if now.Before(state.CooldownEnd) {
+		waitTime := state.CooldownEnd.Sub(now)
+		rl.mu.Unlock()
+		time.Sleep(waitTime)
+		rl.mu.Lock()
+		state = rl.getOrCreateState(tokenKey)
+		now = time.Now()
+	}
+
+	// Enforce a randomized minimum interval since the last request
+	interval := rl.calculateInterval()
+	nextAllowedTime := state.LastRequest.Add(interval)
+
+	if now.Before(nextAllowedTime) {
+		waitTime := nextAllowedTime.Sub(now)
+		rl.mu.Unlock()
+		time.Sleep(waitTime)
+		rl.mu.Lock()
+		state = rl.getOrCreateState(tokenKey)
+	}
+
+	state.LastRequest = time.Now()
+	state.RequestCount++
+	state.DailyRequests++
+	rl.mu.Unlock()
+}
+
+// MarkTokenFailed records a failure and applies exponential backoff.
+func (rl *RateLimiter) MarkTokenFailed(tokenKey string) {
+	rl.mu.Lock()
+	defer rl.mu.Unlock()
+
+	state := rl.getOrCreateState(tokenKey)
+	state.FailCount++
+	state.CooldownEnd =
time.Now().Add(rl.calculateBackoff(state.FailCount))
+}
+
+// MarkTokenSuccess resets the failure count and clears any cooldown.
+func (rl *RateLimiter) MarkTokenSuccess(tokenKey string) {
+	rl.mu.Lock()
+	defer rl.mu.Unlock()
+
+	state := rl.getOrCreateState(tokenKey)
+	state.FailCount = 0
+	state.CooldownEnd = time.Time{}
+}
+
+// CheckAndMarkSuspended scans an error message for suspension indicators and,
+// if one is found, marks the token as suspended. It reports whether the token
+// was suspended.
+func (rl *RateLimiter) CheckAndMarkSuspended(tokenKey string, errorMsg string) bool {
+	suspendKeywords := []string{
+		"suspended",
+		"banned",
+		"disabled",
+		"account has been",
+		"access denied",
+		"rate limit exceeded",
+		"too many requests",
+		"quota exceeded",
+	}
+
+	lowerMsg := strings.ToLower(errorMsg)
+	for _, keyword := range suspendKeywords {
+		if strings.Contains(lowerMsg, keyword) {
+			rl.mu.Lock()
+			defer rl.mu.Unlock()
+
+			state := rl.getOrCreateState(tokenKey)
+			state.IsSuspended = true
+			state.SuspendedAt = time.Now()
+			state.SuspendReason = errorMsg
+			state.CooldownEnd = time.Now().Add(rl.suspendCooldown)
+			return true
+		}
+	}
+	return false
+}
+
+// IsTokenAvailable reports whether the token may be used right now.
+func (rl *RateLimiter) IsTokenAvailable(tokenKey string) bool {
+	// Take the write lock for the whole check: resetDailyIfNeeded may mutate
+	// state, and upgrading from RLock to Lock mid-function leaves a race window.
+	rl.mu.Lock()
+	defer rl.mu.Unlock()
+
+	state, exists := rl.states[tokenKey]
+	if !exists {
+		return true
+	}
+
+	now := time.Now()
+
+	// Suspended tokens become available again once the suspend cooldown elapses
+	if state.IsSuspended {
+		return now.After(state.SuspendedAt.Add(rl.suspendCooldown))
+	}
+
+	// Honor any active failure cooldown
+	if now.Before(state.CooldownEnd) {
+		return false
+	}
+
+	// Enforce the daily request cap
+	rl.resetDailyIfNeeded(state)
+	return state.DailyRequests < rl.dailyMaxRequests
+}
+
+// calculateBackoff returns the exponential backoff duration for the given
+// failure count, with jitter applied.
+func (rl *RateLimiter) calculateBackoff(failCount int) time.Duration {
+	if failCount <= 0 {
+		return 0
+	}
+
+	backoff := float64(rl.backoffBase) * math.Pow(rl.backoffMultiplier, float64(failCount-1))
+
+	// Apply jitter
+	jitter := backoff * rl.jitterPercent *
(rl.rng.Float64()*2 - 1)
+	backoff += jitter
+
+	if time.Duration(backoff) > rl.backoffMax {
+		return rl.backoffMax
+	}
+	return time.Duration(backoff)
+}
+
+// GetTokenState returns a read-only copy of the token's state, or nil if the
+// token is unknown.
+func (rl *RateLimiter) GetTokenState(tokenKey string) *TokenState {
+	rl.mu.RLock()
+	defer rl.mu.RUnlock()
+
+	state, exists := rl.states[tokenKey]
+	if !exists {
+		return nil
+	}
+
+	// Return a copy to prevent external mutation
+	stateCopy := *state
+	return &stateCopy
+}
+
+// ClearTokenState removes all stored state for the token.
+func (rl *RateLimiter) ClearTokenState(tokenKey string) {
+	rl.mu.Lock()
+	defer rl.mu.Unlock()
+	delete(rl.states, tokenKey)
+}
+
+// ResetSuspension clears the token's suspension, cooldown, and failure state.
+func (rl *RateLimiter) ResetSuspension(tokenKey string) {
+	rl.mu.Lock()
+	defer rl.mu.Unlock()
+
+	state, exists := rl.states[tokenKey]
+	if exists {
+		state.IsSuspended = false
+		state.SuspendedAt = time.Time{}
+		state.SuspendReason = ""
+		state.CooldownEnd = time.Time{}
+		state.FailCount = 0
+	}
+}
diff --git a/internal/auth/kiro/rate_limiter_singleton.go b/internal/auth/kiro/rate_limiter_singleton.go
new file mode 100644
index 0000000000..4c02af89c6
--- /dev/null
+++ b/internal/auth/kiro/rate_limiter_singleton.go
@@ -0,0 +1,46 @@
+package kiro
+
+import (
+	"sync"
+	"time"
+
+	log "github.com/sirupsen/logrus"
+)
+
+var (
+	globalRateLimiter     *RateLimiter
+	globalRateLimiterOnce sync.Once
+
+	globalCooldownManager     *CooldownManager
+	globalCooldownManagerOnce sync.Once
+	cooldownStopCh            chan struct{}
+)
+
+// GetGlobalRateLimiter returns the singleton RateLimiter instance.
+func GetGlobalRateLimiter() *RateLimiter {
+	globalRateLimiterOnce.Do(func() {
+		globalRateLimiter = NewRateLimiter()
+		log.Info("kiro: global RateLimiter initialized")
+	})
+	return globalRateLimiter
+}
+
+// GetGlobalCooldownManager returns the singleton CooldownManager instance.
+func GetGlobalCooldownManager() *CooldownManager { + globalCooldownManagerOnce.Do(func() { + globalCooldownManager = NewCooldownManager() + cooldownStopCh = make(chan struct{}) + go globalCooldownManager.StartCleanupRoutine(5*time.Minute, cooldownStopCh) + log.Info("kiro: global CooldownManager initialized with cleanup routine") + }) + return globalCooldownManager +} + +// ShutdownRateLimiters stops the cooldown cleanup routine. +// Should be called during application shutdown. +func ShutdownRateLimiters() { + if cooldownStopCh != nil { + close(cooldownStopCh) + log.Info("kiro: rate limiter cleanup routine stopped") + } +} diff --git a/internal/auth/kiro/rate_limiter_test.go b/internal/auth/kiro/rate_limiter_test.go new file mode 100644 index 0000000000..636413dd3e --- /dev/null +++ b/internal/auth/kiro/rate_limiter_test.go @@ -0,0 +1,304 @@ +package kiro + +import ( + "sync" + "testing" + "time" +) + +func TestNewRateLimiter(t *testing.T) { + rl := NewRateLimiter() + if rl == nil { + t.Fatal("expected non-nil RateLimiter") + } + if rl.states == nil { + t.Error("expected non-nil states map") + } + if rl.minTokenInterval != DefaultMinTokenInterval { + t.Errorf("expected minTokenInterval %v, got %v", DefaultMinTokenInterval, rl.minTokenInterval) + } + if rl.maxTokenInterval != DefaultMaxTokenInterval { + t.Errorf("expected maxTokenInterval %v, got %v", DefaultMaxTokenInterval, rl.maxTokenInterval) + } + if rl.dailyMaxRequests != DefaultDailyMaxRequests { + t.Errorf("expected dailyMaxRequests %d, got %d", DefaultDailyMaxRequests, rl.dailyMaxRequests) + } +} + +func TestNewRateLimiterWithConfig(t *testing.T) { + cfg := RateLimiterConfig{ + MinTokenInterval: 5 * time.Second, + MaxTokenInterval: 15 * time.Second, + DailyMaxRequests: 100, + JitterPercent: 0.2, + BackoffBase: 1 * time.Minute, + BackoffMax: 30 * time.Minute, + BackoffMultiplier: 1.5, + SuspendCooldown: 12 * time.Hour, + } + + rl := NewRateLimiterWithConfig(cfg) + if rl.minTokenInterval != 5*time.Second { + 
t.Errorf("expected minTokenInterval 5s, got %v", rl.minTokenInterval) + } + if rl.maxTokenInterval != 15*time.Second { + t.Errorf("expected maxTokenInterval 15s, got %v", rl.maxTokenInterval) + } + if rl.dailyMaxRequests != 100 { + t.Errorf("expected dailyMaxRequests 100, got %d", rl.dailyMaxRequests) + } +} + +func TestNewRateLimiterWithConfig_PartialConfig(t *testing.T) { + cfg := RateLimiterConfig{ + MinTokenInterval: 5 * time.Second, + } + + rl := NewRateLimiterWithConfig(cfg) + if rl.minTokenInterval != 5*time.Second { + t.Errorf("expected minTokenInterval 5s, got %v", rl.minTokenInterval) + } + if rl.maxTokenInterval != DefaultMaxTokenInterval { + t.Errorf("expected default maxTokenInterval, got %v", rl.maxTokenInterval) + } +} + +func TestGetTokenState_NonExistent(t *testing.T) { + rl := NewRateLimiter() + state := rl.GetTokenState("nonexistent") + if state != nil { + t.Error("expected nil state for non-existent token") + } +} + +func TestIsTokenAvailable_NewToken(t *testing.T) { + rl := NewRateLimiter() + if !rl.IsTokenAvailable("newtoken") { + t.Error("expected new token to be available") + } +} + +func TestMarkTokenFailed(t *testing.T) { + rl := NewRateLimiter() + rl.MarkTokenFailed("token1") + + state := rl.GetTokenState("token1") + if state == nil { + t.Fatal("expected non-nil state") + } + if state.FailCount != 1 { + t.Errorf("expected FailCount 1, got %d", state.FailCount) + } + if state.CooldownEnd.IsZero() { + t.Error("expected non-zero CooldownEnd") + } +} + +func TestMarkTokenSuccess(t *testing.T) { + rl := NewRateLimiter() + rl.MarkTokenFailed("token1") + rl.MarkTokenFailed("token1") + rl.MarkTokenSuccess("token1") + + state := rl.GetTokenState("token1") + if state == nil { + t.Fatal("expected non-nil state") + } + if state.FailCount != 0 { + t.Errorf("expected FailCount 0, got %d", state.FailCount) + } + if !state.CooldownEnd.IsZero() { + t.Error("expected zero CooldownEnd after success") + } +} + +func TestCheckAndMarkSuspended_Suspended(t 
*testing.T) { + rl := NewRateLimiter() + + testCases := []string{ + "Account has been suspended", + "You are banned from this service", + "Account disabled", + "Access denied permanently", + "Rate limit exceeded", + "Too many requests", + "Quota exceeded for today", + } + + for i, msg := range testCases { + tokenKey := "token" + string(rune('a'+i)) + if !rl.CheckAndMarkSuspended(tokenKey, msg) { + t.Errorf("expected suspension detected for: %s", msg) + } + state := rl.GetTokenState(tokenKey) + if !state.IsSuspended { + t.Errorf("expected IsSuspended true for: %s", msg) + } + } +} + +func TestCheckAndMarkSuspended_NotSuspended(t *testing.T) { + rl := NewRateLimiter() + + normalErrors := []string{ + "connection timeout", + "internal server error", + "bad request", + "invalid token format", + } + + for i, msg := range normalErrors { + tokenKey := "token" + string(rune('a'+i)) + if rl.CheckAndMarkSuspended(tokenKey, msg) { + t.Errorf("unexpected suspension for: %s", msg) + } + } +} + +func TestIsTokenAvailable_Suspended(t *testing.T) { + rl := NewRateLimiter() + rl.CheckAndMarkSuspended("token1", "Account suspended") + + if rl.IsTokenAvailable("token1") { + t.Error("expected suspended token to be unavailable") + } +} + +func TestClearTokenState(t *testing.T) { + rl := NewRateLimiter() + rl.MarkTokenFailed("token1") + rl.ClearTokenState("token1") + + state := rl.GetTokenState("token1") + if state != nil { + t.Error("expected nil state after clear") + } +} + +func TestResetSuspension(t *testing.T) { + rl := NewRateLimiter() + rl.CheckAndMarkSuspended("token1", "Account suspended") + rl.ResetSuspension("token1") + + state := rl.GetTokenState("token1") + if state.IsSuspended { + t.Error("expected IsSuspended false after reset") + } + if state.FailCount != 0 { + t.Errorf("expected FailCount 0, got %d", state.FailCount) + } +} + +func TestResetSuspension_NonExistent(t *testing.T) { + rl := NewRateLimiter() + rl.ResetSuspension("nonexistent") +} + +func 
TestCalculateBackoff_ZeroFailCount(t *testing.T) { + rl := NewRateLimiter() + backoff := rl.calculateBackoff(0) + if backoff != 0 { + t.Errorf("expected 0 backoff for 0 fails, got %v", backoff) + } +} + +func TestCalculateBackoff_Exponential(t *testing.T) { + cfg := RateLimiterConfig{ + BackoffBase: 1 * time.Minute, + BackoffMax: 60 * time.Minute, + BackoffMultiplier: 2.0, + JitterPercent: 0.3, + } + rl := NewRateLimiterWithConfig(cfg) + + backoff1 := rl.calculateBackoff(1) + if backoff1 < 40*time.Second || backoff1 > 80*time.Second { + t.Errorf("expected ~1min (with jitter) for fail 1, got %v", backoff1) + } + + backoff2 := rl.calculateBackoff(2) + if backoff2 < 80*time.Second || backoff2 > 160*time.Second { + t.Errorf("expected ~2min (with jitter) for fail 2, got %v", backoff2) + } +} + +func TestCalculateBackoff_MaxCap(t *testing.T) { + cfg := RateLimiterConfig{ + BackoffBase: 1 * time.Minute, + BackoffMax: 10 * time.Minute, + BackoffMultiplier: 2.0, + JitterPercent: 0, + } + rl := NewRateLimiterWithConfig(cfg) + + backoff := rl.calculateBackoff(10) + if backoff > 10*time.Minute { + t.Errorf("expected backoff capped at 10min, got %v", backoff) + } +} + +func TestGetTokenState_ReturnsCopy(t *testing.T) { + rl := NewRateLimiter() + rl.MarkTokenFailed("token1") + + state1 := rl.GetTokenState("token1") + state1.FailCount = 999 + + state2 := rl.GetTokenState("token1") + if state2.FailCount == 999 { + t.Error("GetTokenState should return a copy") + } +} + +func TestRateLimiter_ConcurrentAccess(t *testing.T) { + rl := NewRateLimiter() + const numGoroutines = 50 + const numOperations = 50 + + var wg sync.WaitGroup + wg.Add(numGoroutines) + + for i := 0; i < numGoroutines; i++ { + go func(id int) { + defer wg.Done() + tokenKey := "token" + string(rune('a'+id%10)) + for j := 0; j < numOperations; j++ { + switch j % 6 { + case 0: + rl.IsTokenAvailable(tokenKey) + case 1: + rl.MarkTokenFailed(tokenKey) + case 2: + rl.MarkTokenSuccess(tokenKey) + case 3: + 
rl.GetTokenState(tokenKey) + case 4: + rl.CheckAndMarkSuspended(tokenKey, "test error") + case 5: + rl.ResetSuspension(tokenKey) + } + } + }(i) + } + + wg.Wait() +} + +func TestCalculateInterval_WithinRange(t *testing.T) { + cfg := RateLimiterConfig{ + MinTokenInterval: 10 * time.Second, + MaxTokenInterval: 30 * time.Second, + JitterPercent: 0.3, + } + rl := NewRateLimiterWithConfig(cfg) + + minAllowed := 7 * time.Second + maxAllowed := 40 * time.Second + + for i := 0; i < 100; i++ { + interval := rl.calculateInterval() + if interval < minAllowed || interval > maxAllowed { + t.Errorf("interval %v outside expected range [%v, %v]", interval, minAllowed, maxAllowed) + } + } +} diff --git a/internal/auth/kiro/refresh_manager.go b/internal/auth/kiro/refresh_manager.go new file mode 100644 index 0000000000..7042eb078d --- /dev/null +++ b/internal/auth/kiro/refresh_manager.go @@ -0,0 +1,202 @@ +package kiro + +import ( + "context" + "sync" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +// RefreshManager is a singleton manager for background token refreshing. +type RefreshManager struct { + mu sync.Mutex + refresher *BackgroundRefresher + ctx context.Context + cancel context.CancelFunc + started bool + onTokenRefreshed func(tokenID string, tokenData *KiroTokenData) +} + +var ( + globalRefreshManager *RefreshManager + managerOnce sync.Once +) + +// GetRefreshManager returns the global RefreshManager singleton. +func GetRefreshManager() *RefreshManager { + managerOnce.Do(func() { + globalRefreshManager = &RefreshManager{} + }) + return globalRefreshManager +} + +// Initialize sets up the background refresher. 
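GetRefreshManager above guards construction with sync.Once, so concurrent callers always share one instance. A minimal sketch of the same pattern — the manager type here is a stand-in, not the real RefreshManager:

```go
package main

import (
	"fmt"
	"sync"
)

// manager stands in for the package's RefreshManager.
type manager struct{ started bool }

var (
	instance *manager
	once     sync.Once
)

// getManager constructs the singleton exactly once, even under concurrent callers.
func getManager() *manager {
	once.Do(func() {
		instance = &manager{}
	})
	return instance
}

func main() {
	var wg sync.WaitGroup
	ptrs := make([]*manager, 8)
	for i := range ptrs {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			ptrs[i] = getManager()
		}(i)
	}
	wg.Wait()
	allSame := true
	for _, p := range ptrs {
		if p != ptrs[0] {
			allSame = false
		}
	}
	fmt.Println(allSame) // true: every caller sees the same instance
}
```

sync.Once also publishes the constructed value safely, which is why no extra lock is needed around the returned pointer.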
+func (m *RefreshManager) Initialize(baseDir string, cfg *config.Config) error { + m.mu.Lock() + defer m.mu.Unlock() + + if m.started { + log.Debug("refresh manager: already initialized") + return nil + } + + if baseDir == "" { + log.Warn("refresh manager: base directory not provided, skipping initialization") + return nil + } + + resolvedBaseDir, err := util.ResolveAuthDir(baseDir) + if err != nil { + log.Warnf("refresh manager: failed to resolve auth directory %s: %v", baseDir, err) + } + if resolvedBaseDir != "" { + baseDir = resolvedBaseDir + } + + repo := NewFileTokenRepository(baseDir) + + opts := []RefresherOption{ + WithInterval(time.Minute), + WithBatchSize(50), + WithConcurrency(10), + WithConfig(cfg), + } + + // Pass callback to BackgroundRefresher if already set + if m.onTokenRefreshed != nil { + opts = append(opts, WithOnTokenRefreshed(m.onTokenRefreshed)) + } + + m.refresher = NewBackgroundRefresher(repo, opts...) + + log.Infof("refresh manager: initialized with base directory %s", baseDir) + return nil +} + +// Start begins background token refreshing. +func (m *RefreshManager) Start() { + m.mu.Lock() + defer m.mu.Unlock() + + if m.started { + log.Debug("refresh manager: already started") + return + } + + if m.refresher == nil { + log.Warn("refresh manager: not initialized, cannot start") + return + } + + m.ctx, m.cancel = context.WithCancel(context.Background()) + m.refresher.Start(m.ctx) + m.started = true + + log.Info("refresh manager: background refresh started") +} + +// Stop halts background token refreshing. +func (m *RefreshManager) Stop() { + m.mu.Lock() + defer m.mu.Unlock() + + if !m.started { + return + } + + if m.cancel != nil { + m.cancel() + } + + if m.refresher != nil { + m.refresher.Stop() + } + + m.started = false + log.Info("refresh manager: background refresh stopped") +} + +// IsRunning reports whether background refreshing is active. 
+func (m *RefreshManager) IsRunning() bool { + m.mu.Lock() + defer m.mu.Unlock() + return m.started +} + +// UpdateBaseDir changes the token directory at runtime. +func (m *RefreshManager) UpdateBaseDir(baseDir string) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.refresher != nil && m.refresher.tokenRepo != nil { + if repo, ok := m.refresher.tokenRepo.(*FileTokenRepository); ok { + repo.SetBaseDir(baseDir) + log.Infof("refresh manager: updated base directory to %s", baseDir) + } + } +} + +// SetOnTokenRefreshed registers a callback invoked after a successful token refresh. +// Can be called at any time; supports runtime callback updates. +func (m *RefreshManager) SetOnTokenRefreshed(callback func(tokenID string, tokenData *KiroTokenData)) { + m.mu.Lock() + defer m.mu.Unlock() + + m.onTokenRefreshed = callback + + // Update the refresher's callback in a thread-safe manner if already created + if m.refresher != nil { + m.refresher.callbackMu.Lock() + m.refresher.onTokenRefreshed = callback + m.refresher.callbackMu.Unlock() + } + + log.Debug("refresh manager: token refresh callback registered") +} + +// InitializeAndStart initializes and starts background refreshing (convenience method). +func InitializeAndStart(baseDir string, cfg *config.Config) { + // Initialize global fingerprint config + initGlobalFingerprintConfig(cfg) + + manager := GetRefreshManager() + if err := manager.Initialize(baseDir, cfg); err != nil { + log.Errorf("refresh manager: initialization failed: %v", err) + return + } + manager.Start() +} + +// initGlobalFingerprintConfig loads fingerprint settings from application config. 
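SetOnTokenRefreshed swaps the refresher's callback under its own mutex so registration can happen at runtime while refreshes are in flight. A compact sketch of that guarded-callback pattern — the type and names are illustrative, not the package's API:

```go
package main

import (
	"fmt"
	"sync"
)

// callbackHolder guards a swappable callback with a mutex, mirroring how the
// refresher's onTokenRefreshed callback can be replaced at runtime.
type callbackHolder struct {
	mu sync.Mutex
	fn func(tokenID string)
}

func (h *callbackHolder) set(fn func(tokenID string)) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.fn = fn
}

func (h *callbackHolder) fire(tokenID string) {
	h.mu.Lock()
	fn := h.fn
	h.mu.Unlock() // copy under the lock, invoke outside it
	if fn != nil {
		fn(tokenID)
	}
}

func main() {
	h := &callbackHolder{}
	h.fire("ignored") // safe before any callback is registered

	var got string
	h.set(func(tokenID string) { got = tokenID })
	h.fire("token-1")
	fmt.Println(got) // token-1
}
```

Copying the function pointer before invoking it keeps the mutex from being held during the (possibly slow) callback.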
+func initGlobalFingerprintConfig(cfg *config.Config) { + if cfg == nil || cfg.KiroFingerprint == nil { + return + } + fpCfg := cfg.KiroFingerprint + SetGlobalFingerprintConfig(&FingerprintConfig{ + OIDCSDKVersion: fpCfg.OIDCSDKVersion, + RuntimeSDKVersion: fpCfg.RuntimeSDKVersion, + StreamingSDKVersion: fpCfg.StreamingSDKVersion, + OSType: fpCfg.OSType, + OSVersion: fpCfg.OSVersion, + NodeVersion: fpCfg.NodeVersion, + KiroVersion: fpCfg.KiroVersion, + KiroHash: fpCfg.KiroHash, + }) + log.Debug("kiro: global fingerprint config loaded") +} + +// InitFingerprintConfig initializes the global fingerprint config from application config. +func InitFingerprintConfig(cfg *config.Config) { + initGlobalFingerprintConfig(cfg) +} + +// StopGlobalRefreshManager stops the global refresh manager. +func StopGlobalRefreshManager() { + if globalRefreshManager != nil { + globalRefreshManager.Stop() + } +} diff --git a/internal/auth/kiro/refresh_utils.go b/internal/auth/kiro/refresh_utils.go new file mode 100644 index 0000000000..5abb714cbe --- /dev/null +++ b/internal/auth/kiro/refresh_utils.go @@ -0,0 +1,159 @@ +// Package kiro provides refresh utilities for Kiro token management. +package kiro + +import ( + "context" + "fmt" + "time" + + log "github.com/sirupsen/logrus" +) + +// RefreshResult contains the result of a token refresh attempt. +type RefreshResult struct { + TokenData *KiroTokenData + Error error + UsedFallback bool // True if we used the existing token as fallback +} + +// RefreshWithGracefulDegradation attempts to refresh a token with graceful degradation. +// If refresh fails but the existing access token is still valid, it returns the existing token. +// This matches kiro-openai-gateway's behavior for better reliability. 
+// +// Parameters: +// - ctx: Context for the request +// - refreshFunc: Function to perform the actual refresh +// - existingAccessToken: Current access token (for fallback) +// - expiresAt: Expiration time of the existing token +// +// Returns: +// - RefreshResult containing the new or existing token data +func RefreshWithGracefulDegradation( + ctx context.Context, + refreshFunc func(ctx context.Context) (*KiroTokenData, error), + existingAccessToken string, + expiresAt time.Time, +) RefreshResult { + // Try to refresh the token + newTokenData, err := refreshFunc(ctx) + if err == nil { + return RefreshResult{ + TokenData: newTokenData, + Error: nil, + UsedFallback: false, + } + } + + // Refresh failed - check if we can use the existing token + log.Warnf("kiro: token refresh failed: %v", err) + + // Check if existing token is still valid (not expired) + if existingAccessToken != "" && time.Now().Before(expiresAt) { + remainingTime := time.Until(expiresAt) + log.Warnf("kiro: using existing access token (expires in %v). Will retry refresh later.", remainingTime.Round(time.Second)) + + return RefreshResult{ + TokenData: &KiroTokenData{ + AccessToken: existingAccessToken, + ExpiresAt: expiresAt.Format(time.RFC3339), + }, + Error: nil, + UsedFallback: true, + } + } + + // Token is expired and refresh failed - return the error + return RefreshResult{ + TokenData: nil, + Error: fmt.Errorf("token refresh failed and existing token is expired: %w", err), + UsedFallback: false, + } +} + +// IsTokenExpiringSoon checks if a token is expiring within the given threshold. +// Default threshold is 5 minutes if not specified. +func IsTokenExpiringSoon(expiresAt time.Time, threshold time.Duration) bool { + if threshold == 0 { + threshold = 5 * time.Minute + } + return time.Now().Add(threshold).After(expiresAt) +} + +// IsTokenExpired checks if a token has already expired. 
+func IsTokenExpired(expiresAt time.Time) bool { + return time.Now().After(expiresAt) +} + +// ParseExpiresAt parses an expiration time string in RFC3339 format. +// Returns zero time if parsing fails. +func ParseExpiresAt(expiresAtStr string) time.Time { + if expiresAtStr == "" { + return time.Time{} + } + t, err := time.Parse(time.RFC3339, expiresAtStr) + if err != nil { + log.Debugf("kiro: failed to parse expiresAt '%s': %v", expiresAtStr, err) + return time.Time{} + } + return t +} + +// RefreshConfig contains configuration for token refresh behavior. +type RefreshConfig struct { + // MaxRetries is the maximum number of refresh attempts (default: 1) + MaxRetries int + // RetryDelay is the delay between retry attempts (default: 1 second) + RetryDelay time.Duration + // RefreshThreshold is how early to refresh before expiration (default: 5 minutes) + RefreshThreshold time.Duration + // EnableGracefulDegradation allows using existing token if refresh fails (default: true) + EnableGracefulDegradation bool +} + +// DefaultRefreshConfig returns the default refresh configuration. +func DefaultRefreshConfig() RefreshConfig { + return RefreshConfig{ + MaxRetries: 1, + RetryDelay: time.Second, + RefreshThreshold: 5 * time.Minute, + EnableGracefulDegradation: true, + } +} + +// RefreshWithRetry attempts to refresh a token with retry logic. 
+func RefreshWithRetry( + ctx context.Context, + refreshFunc func(ctx context.Context) (*KiroTokenData, error), + config RefreshConfig, +) (*KiroTokenData, error) { + var lastErr error + + maxAttempts := config.MaxRetries + 1 + if maxAttempts < 1 { + maxAttempts = 1 + } + + for attempt := 1; attempt <= maxAttempts; attempt++ { + tokenData, err := refreshFunc(ctx) + if err == nil { + if attempt > 1 { + log.Infof("kiro: token refresh succeeded on attempt %d", attempt) + } + return tokenData, nil + } + + lastErr = err + log.Warnf("kiro: token refresh attempt %d/%d failed: %v", attempt, maxAttempts, err) + + // Don't sleep after the last attempt + if attempt < maxAttempts { + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-time.After(config.RetryDelay): + } + } + } + + return nil, fmt.Errorf("token refresh failed after %d attempts: %w", maxAttempts, lastErr) +} diff --git a/internal/auth/kiro/social_auth.go b/internal/auth/kiro/social_auth.go new file mode 100644 index 0000000000..d25b18afe3 --- /dev/null +++ b/internal/auth/kiro/social_auth.go @@ -0,0 +1,488 @@ +// Package kiro provides social authentication (Google/GitHub) for Kiro via AuthServiceClient. +package kiro + +import ( + "bufio" + "context" + "crypto/rand" + "crypto/sha256" + "encoding/base64" + "encoding/json" + "fmt" + "html" + "io" + "net" + "net/http" + "net/url" + "os" + "os/exec" + "runtime" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" + "golang.org/x/term" +) + +const ( + // Kiro AuthService endpoint + kiroAuthServiceEndpoint = "https://prod.us-east-1.auth.desktop.kiro.dev" + + // OAuth timeout + socialAuthTimeout = 10 * time.Minute + + // Default callback port for social auth HTTP server + socialAuthCallbackPort = 9876 +) + +// SocialProvider represents the social login provider. 
+type SocialProvider string + +const ( + // ProviderGoogle is Google OAuth provider + ProviderGoogle SocialProvider = "Google" + // ProviderGitHub is GitHub OAuth provider + ProviderGitHub SocialProvider = "Github" + // Note: AWS Builder ID is NOT supported by Kiro's auth service. + // It only supports: Google, Github, Cognito + // AWS Builder ID must use device code flow via SSO OIDC. +) + +// CreateTokenRequest is sent to Kiro's /oauth/token endpoint. +type CreateTokenRequest struct { + Code string `json:"code"` + CodeVerifier string `json:"code_verifier"` + RedirectURI string `json:"redirect_uri"` + InvitationCode string `json:"invitation_code,omitempty"` +} + +// SocialTokenResponse from Kiro's /oauth/token endpoint for social auth. +type SocialTokenResponse struct { + AccessToken string `json:"accessToken"` + RefreshToken string `json:"refreshToken"` + ProfileArn string `json:"profileArn"` + ExpiresIn int `json:"expiresIn"` +} + +// RefreshTokenRequest is sent to Kiro's /refreshToken endpoint. +type RefreshTokenRequest struct { + RefreshToken string `json:"refreshToken"` +} + +// WebCallbackResult contains the OAuth callback result from HTTP server. +type WebCallbackResult struct { + Code string + State string + Error string +} + +// SocialAuthClient handles social authentication with Kiro. +type SocialAuthClient struct { + httpClient *http.Client + cfg *config.Config + protocolHandler *ProtocolHandler + machineID string + kiroVersion string +} + +// NewSocialAuthClient creates a new social auth client. 
+func NewSocialAuthClient(cfg *config.Config) *SocialAuthClient { + client := &http.Client{Timeout: 30 * time.Second} + if cfg != nil { + client = util.SetProxy(&cfg.SDKConfig, client) + } + fp := GlobalFingerprintManager().GetFingerprint("login") + return &SocialAuthClient{ + httpClient: client, + cfg: cfg, + protocolHandler: NewProtocolHandler(), + machineID: fp.KiroHash, + kiroVersion: fp.KiroVersion, + } +} + +// startWebCallbackServer starts a local HTTP server to receive the OAuth callback. +// This is used instead of the kiro:// protocol handler to avoid redirect_mismatch errors. +func (c *SocialAuthClient) startWebCallbackServer(ctx context.Context, expectedState string) (string, <-chan WebCallbackResult, error) { + // Try to find an available port - use localhost like Kiro does + listener, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", socialAuthCallbackPort)) + if err != nil { + // Try with dynamic port (RFC 8252 allows dynamic ports for native apps) + log.Warnf("kiro social auth: default port %d is busy, falling back to dynamic port", socialAuthCallbackPort) + listener, err = net.Listen("tcp", "localhost:0") + if err != nil { + return "", nil, fmt.Errorf("failed to start callback server: %w", err) + } + } + + port := listener.Addr().(*net.TCPAddr).Port + // Use http scheme for local callback server + redirectURI := fmt.Sprintf("http://localhost:%d/oauth/callback", port) + resultChan := make(chan WebCallbackResult, 1) + + server := &http.Server{ + ReadHeaderTimeout: 10 * time.Second, + } + + mux := http.NewServeMux() + mux.HandleFunc("/oauth/callback", func(w http.ResponseWriter, r *http.Request) { + code := r.URL.Query().Get("code") + state := r.URL.Query().Get("state") + errParam := r.URL.Query().Get("error") + + if errParam != "" { + w.Header().Set("Content-Type", "text/html; charset=utf-8") + w.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(w, ` +Login Failed +

<h1>Login Failed</h1>
<p>%s</p>
<p>You can close this window.</p>

`, html.EscapeString(errParam)) + resultChan <- WebCallbackResult{Error: errParam} + return + } + + if state != expectedState { + w.Header().Set("Content-Type", "text/html; charset=utf-8") + w.WriteHeader(http.StatusBadRequest) + fmt.Fprint(w, ` +Login Failed +

<h1>Login Failed</h1>
<p>Invalid state parameter</p>
<p>You can close this window.</p>

`) + resultChan <- WebCallbackResult{Error: "state mismatch"} + return + } + + w.Header().Set("Content-Type", "text/html; charset=utf-8") + fmt.Fprint(w, ` +Login Successful +

<h1>Login Successful!</h1>
<p>You can close this window and return to the terminal.</p>

+`) + resultChan <- WebCallbackResult{Code: code, State: state} + }) + + server.Handler = mux + + go func() { + if err := server.Serve(listener); err != nil && err != http.ErrServerClosed { + log.Debugf("kiro social auth callback server error: %v", err) + } + }() + + go func() { + select { + case <-ctx.Done(): + case <-time.After(socialAuthTimeout): + case <-resultChan: + } + _ = server.Shutdown(context.Background()) + }() + + return redirectURI, resultChan, nil +} + +// generatePKCE generates PKCE code verifier and challenge. +func generatePKCE() (verifier, challenge string, err error) { + // Generate 32 bytes of random data for verifier + b := make([]byte, 32) + if _, err := rand.Read(b); err != nil { + return "", "", fmt.Errorf("failed to generate random bytes: %w", err) + } + verifier = base64.RawURLEncoding.EncodeToString(b) + + // Generate SHA256 hash of verifier for challenge + h := sha256.Sum256([]byte(verifier)) + challenge = base64.RawURLEncoding.EncodeToString(h[:]) + + return verifier, challenge, nil +} + +// generateState generates a random state parameter. +func generateStateParam() (string, error) { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return "", err + } + return base64.RawURLEncoding.EncodeToString(b), nil +} + +// buildLoginURL constructs the Kiro OAuth login URL. +// The login endpoint expects a GET request with query parameters. +// Format: /login?idp=Google&redirect_uri=...&code_challenge=...&code_challenge_method=S256&state=...&prompt=select_account +// The prompt=select_account parameter forces the account selection screen even if already logged in. 
+func (c *SocialAuthClient) buildLoginURL(provider, redirectURI, codeChallenge, state string) string { + return fmt.Sprintf("%s/login?idp=%s&redirect_uri=%s&code_challenge=%s&code_challenge_method=S256&state=%s&prompt=select_account", + kiroAuthServiceEndpoint, + provider, + url.QueryEscape(redirectURI), + codeChallenge, + state, + ) +} + +// CreateToken exchanges the authorization code for tokens. +func (c *SocialAuthClient) CreateToken(ctx context.Context, req *CreateTokenRequest) (*SocialTokenResponse, error) { + body, err := json.Marshal(req) + if err != nil { + return nil, fmt.Errorf("failed to marshal token request: %w", err) + } + + tokenURL := kiroAuthServiceEndpoint + "/oauth/token" + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, strings.NewReader(string(body))) + if err != nil { + return nil, fmt.Errorf("failed to create token request: %w", err) + } + + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("User-Agent", fmt.Sprintf("KiroIDE-%s-%s", c.kiroVersion, c.machineID)) + httpReq.Header.Set("Accept", "application/json, text/plain, */*") + + resp, err := c.httpClient.Do(httpReq) + if err != nil { + return nil, fmt.Errorf("token request failed: %w", err) + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("failed to read token response: %w", err) + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("token exchange failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("token exchange failed (status %d)", resp.StatusCode) + } + + var tokenResp SocialTokenResponse + if err := json.Unmarshal(respBody, &tokenResp); err != nil { + return nil, fmt.Errorf("failed to parse token response: %w", err) + } + + return &tokenResp, nil +} + +// RefreshSocialToken refreshes an expired social auth token. 
+func (c *SocialAuthClient) RefreshSocialToken(ctx context.Context, refreshToken string) (*KiroTokenData, error) { + body, err := json.Marshal(&RefreshTokenRequest{RefreshToken: refreshToken}) + if err != nil { + return nil, fmt.Errorf("failed to marshal refresh request: %w", err) + } + + refreshURL := kiroAuthServiceEndpoint + "/refreshToken" + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, refreshURL, strings.NewReader(string(body))) + if err != nil { + return nil, fmt.Errorf("failed to create refresh request: %w", err) + } + + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("User-Agent", fmt.Sprintf("KiroIDE-%s-%s", c.kiroVersion, c.machineID)) + httpReq.Header.Set("Accept", "application/json, text/plain, */*") + + resp, err := c.httpClient.Do(httpReq) + if err != nil { + return nil, fmt.Errorf("refresh request failed: %w", err) + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("failed to read refresh response: %w", err) + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("token refresh failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("token refresh failed (status %d)", resp.StatusCode) + } + + var tokenResp SocialTokenResponse + if err := json.Unmarshal(respBody, &tokenResp); err != nil { + return nil, fmt.Errorf("failed to parse refresh response: %w", err) + } + + // Validate ExpiresIn - use default 1 hour if invalid + expiresIn := tokenResp.ExpiresIn + if expiresIn <= 0 { + expiresIn = 3600 // Default 1 hour + } + expiresAt := time.Now().Add(time.Duration(expiresIn) * time.Second) + + return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: tokenResp.ProfileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "social", + Provider: "", // Caller should preserve original provider + Region: "us-east-1", + }, nil +} + +// LoginWithSocial 
performs OAuth login with Google or GitHub. +// Uses local HTTP callback server instead of custom protocol handler to avoid redirect_mismatch errors. +func (c *SocialAuthClient) LoginWithSocial(ctx context.Context, provider SocialProvider) (*KiroTokenData, error) { + providerName := string(provider) + + fmt.Println("\n╔══════════════════════════════════════════════════════════╗") + fmt.Printf("║ Kiro Authentication (%s) ║\n", providerName) + fmt.Println("╚══════════════════════════════════════════════════════════╝") + + // Step 1: Start local HTTP callback server (instead of kiro:// protocol handler) + // This avoids redirect_mismatch errors with AWS Cognito + fmt.Println("\nSetting up authentication...") + + // Step 2: Generate PKCE codes + codeVerifier, codeChallenge, err := generatePKCE() + if err != nil { + return nil, fmt.Errorf("failed to generate PKCE: %w", err) + } + + // Step 3: Generate state + state, err := generateStateParam() + if err != nil { + return nil, fmt.Errorf("failed to generate state: %w", err) + } + + // Step 4: Start local HTTP callback server + redirectURI, resultChan, err := c.startWebCallbackServer(ctx, state) + if err != nil { + return nil, fmt.Errorf("failed to start callback server: %w", err) + } + log.Debugf("kiro social auth: callback server started at %s", redirectURI) + + // Step 5: Build the login URL using HTTP redirect URI + authURL := c.buildLoginURL(providerName, redirectURI, codeChallenge, state) + + // Set incognito mode based on config (defaults to true for Kiro, can be overridden with --no-incognito) + // Incognito mode enables multi-account support by bypassing cached sessions + if c.cfg != nil { + browser.SetIncognitoMode(c.cfg.IncognitoBrowser) + if !c.cfg.IncognitoBrowser { + log.Info("kiro: using normal browser mode (--no-incognito). 
Note: You may not be able to select a different account.") + } else { + log.Debug("kiro: using incognito mode for multi-account support") + } + } else { + browser.SetIncognitoMode(true) // Default to incognito if no config + log.Debug("kiro: using incognito mode for multi-account support (default)") + } + + // Step 6: Open browser for user authentication + fmt.Println("\n════════════════════════════════════════════════════════════") + fmt.Printf(" Opening browser for %s authentication...\n", providerName) + fmt.Println("════════════════════════════════════════════════════════════") + fmt.Printf("\n URL: %s\n\n", authURL) + + if err := browser.OpenURL(authURL); err != nil { + log.Warnf("Could not open browser automatically: %v", err) + fmt.Println(" ⚠ Could not open browser automatically.") + fmt.Println(" Please open the URL above in your browser manually.") + } else { + fmt.Println(" (Browser opened automatically)") + } + + fmt.Println("\n Waiting for authentication callback...") + + // Step 7: Wait for callback from HTTP server + select { + case <-ctx.Done(): + return nil, ctx.Err() + case <-time.After(socialAuthTimeout): + return nil, fmt.Errorf("authentication timed out") + case callback := <-resultChan: + if callback.Error != "" { + return nil, fmt.Errorf("authentication error: %s", callback.Error) + } + + // State is already validated by the callback server + if callback.Code == "" { + return nil, fmt.Errorf("no authorization code received") + } + + fmt.Println("\n✓ Authorization received!") + + // Step 8: Exchange code for tokens + fmt.Println("Exchanging code for tokens...") + + tokenReq := &CreateTokenRequest{ + Code: callback.Code, + CodeVerifier: codeVerifier, + RedirectURI: redirectURI, // Use HTTP redirect URI, not kiro:// protocol + } + + tokenResp, err := c.CreateToken(ctx, tokenReq) + if err != nil { + return nil, fmt.Errorf("failed to exchange code for tokens: %w", err) + } + + fmt.Println("\n✓ Authentication successful!") + + // Close the browser window
+ if err := browser.CloseBrowser(); err != nil { + log.Debugf("Failed to close browser: %v", err) + } + + // Validate ExpiresIn - use default 1 hour if invalid + expiresIn := tokenResp.ExpiresIn + if expiresIn <= 0 { + expiresIn = 3600 + } + expiresAt := time.Now().Add(time.Duration(expiresIn) * time.Second) + + // Try to extract email from JWT access token first + email := ExtractEmailFromJWT(tokenResp.AccessToken) + + // If no email in JWT, ask user for account label (only in interactive mode) + if email == "" && isInteractiveTerminal() { + fmt.Print("\n Enter account label for file naming (optional, press Enter to skip): ") + reader := bufio.NewReader(os.Stdin) + var err error + email, err = reader.ReadString('\n') + if err != nil { + log.Debugf("Failed to read account label: %v", err) + } + email = strings.TrimSpace(email) + } + + return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: tokenResp.ProfileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "social", + Provider: providerName, + Email: email, // JWT email or user-provided label + Region: "us-east-1", + }, nil + } +} + +// LoginWithGoogle performs OAuth login with Google. +func (c *SocialAuthClient) LoginWithGoogle(ctx context.Context) (*KiroTokenData, error) { + return c.LoginWithSocial(ctx, ProviderGoogle) +} + +// LoginWithGitHub performs OAuth login with GitHub. +func (c *SocialAuthClient) LoginWithGitHub(ctx context.Context) (*KiroTokenData, error) { + return c.LoginWithSocial(ctx, ProviderGitHub) +} + +// forceDefaultProtocolHandler sets our protocol handler as the default for kiro:// URLs. +// This prevents the "Open with" dialog from appearing on Linux. +// On non-Linux platforms, this is a no-op as they use different mechanisms.
+func forceDefaultProtocolHandler() { + if runtime.GOOS != "linux" { + return // Non-Linux platforms use different handler mechanisms + } + + // Set our handler as default using xdg-mime + cmd := exec.Command("xdg-mime", "default", "kiro-oauth-handler.desktop", "x-scheme-handler/kiro") + if err := cmd.Run(); err != nil { + log.Warnf("Failed to set default protocol handler: %v. You may see a handler selection dialog.", err) + } +} + +// isInteractiveTerminal checks if stdin is connected to an interactive terminal. +// Returns false in CI/automated environments or when stdin is piped. +func isInteractiveTerminal() bool { + return term.IsTerminal(int(os.Stdin.Fd())) +} diff --git a/internal/auth/kiro/sso_oidc.go b/internal/auth/kiro/sso_oidc.go new file mode 100644 index 0000000000..22d8648e4d --- /dev/null +++ b/internal/auth/kiro/sso_oidc.go @@ -0,0 +1,1603 @@ +// Package kiro provides AWS SSO OIDC authentication for Kiro. +package kiro + +import ( + "bufio" + "context" + "crypto/rand" + "crypto/sha256" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "html" + "io" + "net" + "net/http" + "net/url" + "os" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + log "github.com/sirupsen/logrus" +) + +const ( + // AWS SSO OIDC endpoints + ssoOIDCEndpoint = "https://oidc.us-east-1.amazonaws.com" + + // Kiro's start URL for Builder ID + builderIDStartURL = "https://view.awsapps.com/start" + + // Default region for IDC + defaultIDCRegion = "us-east-1" + + // Polling interval + pollInterval = 5 * time.Second + + // Authorization code flow callback + authCodeCallbackPath = "/oauth/callback" + authCodeCallbackPort = 19877 +) + +var ( + ErrAuthorizationPending = errors.New("authorization_pending") + ErrSlowDown = errors.New("slow_down") +) + +type SSOOIDCClient struct { + httpClient *http.Client + cfg *config.Config +} + 
+// NewSSOOIDCClient creates a new SSO OIDC client. +func NewSSOOIDCClient(cfg *config.Config) *SSOOIDCClient { + client := &http.Client{Timeout: 30 * time.Second} + if cfg != nil { + client = util.SetProxy(&cfg.SDKConfig, client) + } + return &SSOOIDCClient{ + httpClient: client, + cfg: cfg, + } +} + +// RegisterClientResponse from AWS SSO OIDC. +type RegisterClientResponse struct { + ClientID string `json:"clientId"` + ClientSecret string `json:"clientSecret"` + ClientIDIssuedAt int64 `json:"clientIdIssuedAt"` + ClientSecretExpiresAt int64 `json:"clientSecretExpiresAt"` +} + +// StartDeviceAuthResponse from AWS SSO OIDC. +type StartDeviceAuthResponse struct { + DeviceCode string `json:"deviceCode"` + UserCode string `json:"userCode"` + VerificationURI string `json:"verificationUri"` + VerificationURIComplete string `json:"verificationUriComplete"` + ExpiresIn int `json:"expiresIn"` + Interval int `json:"interval"` +} + +// CreateTokenResponse from AWS SSO OIDC. +type CreateTokenResponse struct { + AccessToken string `json:"accessToken"` + TokenType string `json:"tokenType"` + ExpiresIn int `json:"expiresIn"` + RefreshToken string `json:"refreshToken"` +} + +// getOIDCEndpoint returns the OIDC endpoint for the given region. +func getOIDCEndpoint(region string) string { + if region == "" { + region = defaultIDCRegion + } + return fmt.Sprintf("https://oidc.%s.amazonaws.com", region) +} + +// promptInput prompts the user for input with an optional default value. 
+func promptInput(prompt, defaultValue string) string { + reader := bufio.NewReader(os.Stdin) + if defaultValue != "" { + fmt.Printf("%s [%s]: ", prompt, defaultValue) + } else { + fmt.Printf("%s: ", prompt) + } + input, err := reader.ReadString('\n') + if err != nil { + log.Warnf("Error reading input: %v", err) + return defaultValue + } + input = strings.TrimSpace(input) + if input == "" { + return defaultValue + } + return input +} + +// promptSelect prompts the user to select from options using number input. +func promptSelect(prompt string, options []string) int { + reader := bufio.NewReader(os.Stdin) + + for { + fmt.Println(prompt) + for i, opt := range options { + fmt.Printf(" %d) %s\n", i+1, opt) + } + fmt.Printf("Enter selection (1-%d): ", len(options)) + + input, err := reader.ReadString('\n') + if err != nil { + log.Warnf("Error reading input: %v", err) + return 0 // Default to first option on error + } + input = strings.TrimSpace(input) + + // Parse the selection + var selection int + if _, err := fmt.Sscanf(input, "%d", &selection); err != nil || selection < 1 || selection > len(options) { + fmt.Printf("Invalid selection '%s'. Please enter a number between 1 and %d.\n\n", input, len(options)) + continue + } + return selection - 1 + } +} + +// RegisterClientWithRegion registers a new OIDC client with AWS using a specific region. 
+func (c *SSOOIDCClient) RegisterClientWithRegion(ctx context.Context, region string) (*RegisterClientResponse, error) { + endpoint := getOIDCEndpoint(region) + + payload := map[string]interface{}{ + "clientName": "Kiro IDE", + "clientType": "public", + "scopes": []string{"codewhisperer:completions", "codewhisperer:analysis", "codewhisperer:conversations", "codewhisperer:transformations", "codewhisperer:taskassist"}, + "grantTypes": []string{"urn:ietf:params:oauth:grant-type:device_code", "refresh_token"}, + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint+"/client/register", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("register client failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("register client failed (status %d)", resp.StatusCode) + } + + var result RegisterClientResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// StartDeviceAuthorizationWithIDC starts the device authorization flow for IDC. 
+func (c *SSOOIDCClient) StartDeviceAuthorizationWithIDC(ctx context.Context, clientID, clientSecret, startURL, region string) (*StartDeviceAuthResponse, error) { + endpoint := getOIDCEndpoint(region) + + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "startUrl": startURL, + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint+"/device_authorization", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("start device auth failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("start device auth failed (status %d)", resp.StatusCode) + } + + var result StartDeviceAuthResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// CreateTokenWithRegion polls for the access token after user authorization using a specific region. 
+func (c *SSOOIDCClient) CreateTokenWithRegion(ctx context.Context, clientID, clientSecret, deviceCode, region string) (*CreateTokenResponse, error) { + endpoint := getOIDCEndpoint(region) + + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "deviceCode": deviceCode, + "grantType": "urn:ietf:params:oauth:grant-type:device_code", + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint+"/token", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + // Check for pending authorization + if resp.StatusCode == http.StatusBadRequest { + var errResp struct { + Error string `json:"error"` + } + if json.Unmarshal(respBody, &errResp) == nil { + if errResp.Error == "authorization_pending" { + return nil, ErrAuthorizationPending + } + if errResp.Error == "slow_down" { + return nil, ErrSlowDown + } + } + log.Debugf("create token failed: %s", string(respBody)) + return nil, fmt.Errorf("create token failed") + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("create token failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("create token failed (status %d)", resp.StatusCode) + } + + var result CreateTokenResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// RefreshTokenWithRegion refreshes an access token using the refresh token with a specific OIDC region. 
+func (c *SSOOIDCClient) RefreshTokenWithRegion(ctx context.Context, clientID, clientSecret, refreshToken, region, startURL string) (*KiroTokenData, error) { + if region == "" { + region = defaultIDCRegion + } + endpoint := getOIDCEndpoint(region) + + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "refreshToken": refreshToken, + "grantType": "refresh_token", + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint+"/token", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Warnf("IDC token refresh failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("token refresh failed (status %d)", resp.StatusCode) + } + + var result CreateTokenResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + expiresAt := time.Now().Add(time.Duration(result.ExpiresIn) * time.Second) + + return &KiroTokenData{ + AccessToken: result.AccessToken, + RefreshToken: result.RefreshToken, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "idc", + Provider: "AWS", + ClientID: clientID, + ClientSecret: clientSecret, + StartURL: startURL, + Region: region, + }, nil +} + +// LoginWithIDC performs the full device code flow for AWS Identity Center (IDC). 
+func (c *SSOOIDCClient) LoginWithIDC(ctx context.Context, startURL, region string) (*KiroTokenData, error) { + fmt.Println("\n╔══════════════════════════════════════════════════════════╗") + fmt.Println("║ Kiro Authentication (AWS Identity Center) ║") + fmt.Println("╚══════════════════════════════════════════════════════════╝") + + // Step 1: Register client with the specified region + fmt.Println("\nRegistering client...") + regResp, err := c.RegisterClientWithRegion(ctx, region) + if err != nil { + return nil, fmt.Errorf("failed to register client: %w", err) + } + log.Debugf("Client registered: %s", regResp.ClientID) + + // Step 2: Start device authorization with IDC start URL + fmt.Println("Starting device authorization...") + authResp, err := c.StartDeviceAuthorizationWithIDC(ctx, regResp.ClientID, regResp.ClientSecret, startURL, region) + if err != nil { + return nil, fmt.Errorf("failed to start device auth: %w", err) + } + + // Step 3: Show user the verification URL + fmt.Printf("\n") + fmt.Println("════════════════════════════════════════════════════════════") + fmt.Printf(" Confirm the following code in the browser:\n") + fmt.Printf(" Code: %s\n", authResp.UserCode) + fmt.Println("════════════════════════════════════════════════════════════") + fmt.Printf("\n Open this URL: %s\n\n", authResp.VerificationURIComplete) + + // Set incognito mode based on config + if c.cfg != nil { + browser.SetIncognitoMode(c.cfg.IncognitoBrowser) + if !c.cfg.IncognitoBrowser { + log.Info("kiro: using normal browser mode (--no-incognito). Note: You may not be able to select a different account.")
+ } else { + log.Debug("kiro: using incognito mode for multi-account support") + } + } else { + browser.SetIncognitoMode(true) + log.Debug("kiro: using incognito mode for multi-account support (default)") + } + + // Open browser + if err := browser.OpenURL(authResp.VerificationURIComplete); err != nil { + log.Warnf("Could not open browser automatically: %v", err) + fmt.Println(" Please open the URL manually in your browser.") + } else { + fmt.Println(" (Browser opened automatically)") + } + + // Step 4: Poll for token + fmt.Println("Waiting for authorization...") + + interval := pollInterval + if authResp.Interval > 0 { + interval = time.Duration(authResp.Interval) * time.Second + } + + deadline := time.Now().Add(time.Duration(authResp.ExpiresIn) * time.Second) + + for time.Now().Before(deadline) { + select { + case <-ctx.Done(): + browser.CloseBrowser() + return nil, ctx.Err() + case <-time.After(interval): + tokenResp, err := c.CreateTokenWithRegion(ctx, regResp.ClientID, regResp.ClientSecret, authResp.DeviceCode, region) + if err != nil { + if errors.Is(err, ErrAuthorizationPending) { + fmt.Print(".") + continue + } + if errors.Is(err, ErrSlowDown) { + interval += 5 * time.Second + continue + } + browser.CloseBrowser() + return nil, fmt.Errorf("token creation failed: %w", err) + } + + fmt.Println("\n\n✓ Authorization successful!") + + // Close the browser window + if err := browser.CloseBrowser(); err != nil { + log.Debugf("Failed to close browser: %v", err) + } + + // Step 5: Get profile ARN from CodeWhisperer API + fmt.Println("Fetching profile information...") + profileArn := c.FetchProfileArn(ctx, tokenResp.AccessToken, regResp.ClientID, tokenResp.RefreshToken) + + // Fetch user email + email := FetchUserEmailWithFallback(ctx, c.cfg, tokenResp.AccessToken, regResp.ClientID, tokenResp.RefreshToken) + if email != "" { + fmt.Printf(" Logged in as: %s\n", email) + } + + expiresAt := time.Now().Add(time.Duration(tokenResp.ExpiresIn) * time.Second) +
+ return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: profileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "idc", + Provider: "AWS", + ClientID: regResp.ClientID, + ClientSecret: regResp.ClientSecret, + Email: email, + StartURL: startURL, + Region: region, + }, nil + } + } + + // Close browser on timeout + if err := browser.CloseBrowser(); err != nil { + log.Debugf("Failed to close browser on timeout: %v", err) + } + return nil, fmt.Errorf("authorization timed out") +} + +// IDCLoginOptions holds optional parameters for IDC login. +type IDCLoginOptions struct { + StartURL string // Pre-configured start URL (skips prompt if set) + Region string // OIDC region for login and token refresh (defaults to us-east-1) + UseDeviceCode bool // Use Device Code flow instead of Auth Code flow +} + +// LoginWithMethodSelection prompts the user to select between Builder ID and IDC, then performs the login. +// Options can be provided to pre-configure IDC parameters (startURL, region). +// If StartURL is provided in opts, IDC flow is used directly without prompting.
+func (c *SSOOIDCClient) LoginWithMethodSelection(ctx context.Context, opts *IDCLoginOptions) (*KiroTokenData, error) { + fmt.Println("\n╔══════════════════════════════════════════════════════════╗") + fmt.Println("║ Kiro Authentication (AWS) ║") + fmt.Println("╚══════════════════════════════════════════════════════════╝") + + // If IDC options with StartURL are provided, skip method selection and use IDC directly + if opts != nil && opts.StartURL != "" { + region := opts.Region + if region == "" { + region = defaultIDCRegion + } + fmt.Printf("\n Using IDC with Start URL: %s\n", opts.StartURL) + fmt.Printf(" Region: %s\n", region) + + if opts.UseDeviceCode { + return c.LoginWithIDCAndOptions(ctx, opts.StartURL, region) + } + return c.LoginWithIDCAuthCode(ctx, opts.StartURL, region) + } + + // Prompt for login method + options := []string{ + "Use with Builder ID (personal AWS account)", + "Use with IDC Account (organization SSO)", + } + selection := promptSelect("\n? Select login method:", options) + + if selection == 0 { + // Builder ID flow - use existing implementation + return c.LoginWithBuilderID(ctx) + } + + // IDC flow - use pre-configured values or prompt + var startURL, region string + + if opts != nil { + startURL = opts.StartURL + region = opts.Region + } + + fmt.Println() + + // Use pre-configured startURL or prompt + if startURL == "" { + startURL = promptInput("? Enter Start URL", "") + if startURL == "" { + return nil, fmt.Errorf("start URL is required for IDC login") + } + } else { + fmt.Printf(" Using pre-configured Start URL: %s\n", startURL) + } + + // Use pre-configured region or prompt + if region == "" { + region = promptInput("? Enter Region", defaultIDCRegion)
+ } else { + fmt.Printf(" Using pre-configured Region: %s\n", region) + } + + if opts != nil && opts.UseDeviceCode { + return c.LoginWithIDCAndOptions(ctx, startURL, region) + } + return c.LoginWithIDCAuthCode(ctx, startURL, region) +} + +// LoginWithIDCAndOptions performs IDC login with the specified region. +func (c *SSOOIDCClient) LoginWithIDCAndOptions(ctx context.Context, startURL, region string) (*KiroTokenData, error) { + return c.LoginWithIDC(ctx, startURL, region) +} + +// RegisterClient registers a new OIDC client with AWS. +func (c *SSOOIDCClient) RegisterClient(ctx context.Context) (*RegisterClientResponse, error) { + payload := map[string]interface{}{ + "clientName": "Kiro IDE", + "clientType": "public", + "scopes": []string{"codewhisperer:completions", "codewhisperer:analysis", "codewhisperer:conversations", "codewhisperer:transformations", "codewhisperer:taskassist"}, + "grantTypes": []string{"urn:ietf:params:oauth:grant-type:device_code", "refresh_token"}, + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, ssoOIDCEndpoint+"/client/register", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("register client failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("register client failed (status %d)", resp.StatusCode) + } + + var result RegisterClientResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// StartDeviceAuthorization starts the device authorization flow.
+func (c *SSOOIDCClient) StartDeviceAuthorization(ctx context.Context, clientID, clientSecret string) (*StartDeviceAuthResponse, error) { + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "startUrl": builderIDStartURL, + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, ssoOIDCEndpoint+"/device_authorization", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("start device auth failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("start device auth failed (status %d)", resp.StatusCode) + } + + var result StartDeviceAuthResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// CreateToken polls for the access token after user authorization. 
+func (c *SSOOIDCClient) CreateToken(ctx context.Context, clientID, clientSecret, deviceCode string) (*CreateTokenResponse, error) { + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "deviceCode": deviceCode, + "grantType": "urn:ietf:params:oauth:grant-type:device_code", + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, ssoOIDCEndpoint+"/token", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + // Check for pending authorization + if resp.StatusCode == http.StatusBadRequest { + var errResp struct { + Error string `json:"error"` + } + if json.Unmarshal(respBody, &errResp) == nil { + if errResp.Error == "authorization_pending" { + return nil, ErrAuthorizationPending + } + if errResp.Error == "slow_down" { + return nil, ErrSlowDown + } + } + log.Debugf("create token failed: %s", string(respBody)) + return nil, fmt.Errorf("create token failed") + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("create token failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("create token failed (status %d)", resp.StatusCode) + } + + var result CreateTokenResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// RefreshToken refreshes an access token using the refresh token. +// It performs a single request; retries are left to the caller.
+func (c *SSOOIDCClient) RefreshToken(ctx context.Context, clientID, clientSecret, refreshToken string) (*KiroTokenData, error) { + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "refreshToken": refreshToken, + "grantType": "refresh_token", + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, ssoOIDCEndpoint+"/token", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Warnf("token refresh failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("token refresh failed (status %d): %s", resp.StatusCode, string(respBody)) + } + + var result CreateTokenResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + expiresAt := time.Now().Add(time.Duration(result.ExpiresIn) * time.Second) + + return &KiroTokenData{ + AccessToken: result.AccessToken, + RefreshToken: result.RefreshToken, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "builder-id", + Provider: "AWS", + ClientID: clientID, + ClientSecret: clientSecret, + Region: defaultIDCRegion, + }, nil +} + +// LoginWithBuilderID performs the full device code flow for AWS Builder ID. 
+func (c *SSOOIDCClient) LoginWithBuilderID(ctx context.Context) (*KiroTokenData, error) { + fmt.Println("\n╔══════════════════════════════════════════════════════════╗") + fmt.Println("║ Kiro Authentication (AWS Builder ID) ║") + fmt.Println("╚══════════════════════════════════════════════════════════╝") + + // Step 1: Register client + fmt.Println("\nRegistering client...") + regResp, err := c.RegisterClient(ctx) + if err != nil { + return nil, fmt.Errorf("failed to register client: %w", err) + } + log.Debugf("Client registered: %s", regResp.ClientID) + + // Step 2: Start device authorization + fmt.Println("Starting device authorization...") + authResp, err := c.StartDeviceAuthorization(ctx, regResp.ClientID, regResp.ClientSecret) + if err != nil { + return nil, fmt.Errorf("failed to start device auth: %w", err) + } + + // Step 3: Show user the verification URL + fmt.Printf("\n") + fmt.Println("════════════════════════════════════════════════════════════") + fmt.Printf(" Open this URL in your browser:\n") + fmt.Printf(" %s\n", authResp.VerificationURIComplete) + fmt.Println("════════════════════════════════════════════════════════════") + fmt.Printf("\n Or go to: %s\n", authResp.VerificationURI) + fmt.Printf(" And enter code: %s\n\n", authResp.UserCode) + + // Set incognito mode based on config (defaults to true for Kiro, can be overridden with --no-incognito) + // Incognito mode enables multi-account support by bypassing cached sessions + if c.cfg != nil { + browser.SetIncognitoMode(c.cfg.IncognitoBrowser) + if !c.cfg.IncognitoBrowser { + log.Info("kiro: using normal browser mode (--no-incognito). Note: You may not be able to select a different account.")
+ } else { + log.Debug("kiro: using incognito mode for multi-account support") + } + } else { + browser.SetIncognitoMode(true) // Default to incognito if no config + log.Debug("kiro: using incognito mode for multi-account support (default)") + } + + // Open browser using cross-platform browser package + if err := browser.OpenURL(authResp.VerificationURIComplete); err != nil { + log.Warnf("Could not open browser automatically: %v", err) + fmt.Println(" Please open the URL manually in your browser.") + } else { + fmt.Println(" (Browser opened automatically)") + } + + // Step 4: Poll for token + fmt.Println("Waiting for authorization...") + + interval := pollInterval + if authResp.Interval > 0 { + interval = time.Duration(authResp.Interval) * time.Second + } + + deadline := time.Now().Add(time.Duration(authResp.ExpiresIn) * time.Second) + + for time.Now().Before(deadline) { + select { + case <-ctx.Done(): + browser.CloseBrowser() // Cleanup on cancel + return nil, ctx.Err() + case <-time.After(interval): + tokenResp, err := c.CreateToken(ctx, regResp.ClientID, regResp.ClientSecret, authResp.DeviceCode) + if err != nil { + if errors.Is(err, ErrAuthorizationPending) { + fmt.Print(".") + continue + } + if errors.Is(err, ErrSlowDown) { + interval += 5 * time.Second + continue + } + // Close browser on error before returning + browser.CloseBrowser() + return nil, fmt.Errorf("token creation failed: %w", err) + } + + fmt.Println("\n\n✓ Authorization successful!") + + // Close the browser window + if err := browser.CloseBrowser(); err != nil { + log.Debugf("Failed to close browser: %v", err) + } + + // Fetch user email (tries CodeWhisperer API first, then userinfo endpoint, then JWT parsing) + email := FetchUserEmailWithFallback(ctx, c.cfg, tokenResp.AccessToken, regResp.ClientID, tokenResp.RefreshToken) + if email != "" { + fmt.Printf(" Logged in as: %s\n", email) + } + + expiresAt := time.Now().Add(time.Duration(tokenResp.ExpiresIn) * time.Second) +
+ return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: "", // Builder ID has no profile + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "builder-id", + Provider: "AWS", + ClientID: regResp.ClientID, + ClientSecret: regResp.ClientSecret, + Email: email, + Region: defaultIDCRegion, + }, nil + } + } + + // Close browser on timeout for better UX + if err := browser.CloseBrowser(); err != nil { + log.Debugf("Failed to close browser on timeout: %v", err) + } + return nil, fmt.Errorf("authorization timed out") +} + +// FetchUserEmail retrieves the user's email from AWS SSO OIDC userinfo endpoint. +// Falls back to JWT parsing if userinfo fails. +func (c *SSOOIDCClient) FetchUserEmail(ctx context.Context, accessToken string) string { + // Method 1: Try userinfo endpoint (standard OIDC) + email := c.tryUserInfoEndpoint(ctx, accessToken) + if email != "" { + return email + } + + // Method 2: Fallback to JWT parsing + return ExtractEmailFromJWT(accessToken) +} + +// tryUserInfoEndpoint attempts to get user info from AWS SSO OIDC userinfo endpoint.
+func (c *SSOOIDCClient) tryUserInfoEndpoint(ctx context.Context, accessToken string) string { + req, err := http.NewRequestWithContext(ctx, http.MethodGet, ssoOIDCEndpoint+"/userinfo", nil) + if err != nil { + return "" + } + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil { + log.Debugf("userinfo request failed: %v", err) + return "" + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + respBody, _ := io.ReadAll(resp.Body) + log.Debugf("userinfo endpoint returned status %d: %s", resp.StatusCode, string(respBody)) + return "" + } + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return "" + } + + log.Debugf("userinfo response: %s", string(respBody)) + + var userInfo struct { + Email string `json:"email"` + Sub string `json:"sub"` + PreferredUsername string `json:"preferred_username"` + Name string `json:"name"` + } + + if err := json.Unmarshal(respBody, &userInfo); err != nil { + return "" + } + + if userInfo.Email != "" { + return userInfo.Email + } + if userInfo.PreferredUsername != "" && strings.Contains(userInfo.PreferredUsername, "@") { + return userInfo.PreferredUsername + } + return "" +} + +// FetchProfileArn fetches the profile ARN from ListAvailableProfiles API. +// This is used to get profileArn for imported accounts that may not have it. 
+func (c *SSOOIDCClient) FetchProfileArn(ctx context.Context, accessToken, clientID, refreshToken string) string { + profileArn := c.tryListAvailableProfiles(ctx, accessToken, clientID, refreshToken) + if profileArn != "" { + return profileArn + } + return c.tryListProfilesLegacy(ctx, accessToken) +} + +func (c *SSOOIDCClient) tryListAvailableProfiles(ctx context.Context, accessToken, clientID, refreshToken string) string { + req, err := http.NewRequestWithContext(ctx, http.MethodPost, GetKiroAPIEndpoint("")+"/ListAvailableProfiles", strings.NewReader("{}")) + if err != nil { + return "" + } + + req.Header.Set("Content-Type", "application/json") + accountKey := GetAccountKey(clientID, refreshToken) + setRuntimeHeaders(req, accessToken, accountKey) + + resp, err := c.httpClient.Do(req) + if err != nil { + log.Debugf("ListAvailableProfiles request failed: %v", err) + return "" + } + defer resp.Body.Close() + + respBody, _ := io.ReadAll(resp.Body) + + if resp.StatusCode != http.StatusOK { + log.Debugf("ListAvailableProfiles failed (status %d): %s", resp.StatusCode, string(respBody)) + return "" + } + + log.Debugf("ListAvailableProfiles response: %s", string(respBody)) + + var result struct { + Profiles []struct { + Arn string `json:"arn"` + ProfileName string `json:"profileName"` + } `json:"profiles"` + NextToken *string `json:"nextToken"` + } + + if err := json.Unmarshal(respBody, &result); err != nil { + log.Debugf("ListAvailableProfiles parse error: %v", err) + return "" + } + + if len(result.Profiles) > 0 { + log.Debugf("Found profile: %s (%s)", result.Profiles[0].ProfileName, result.Profiles[0].Arn) + return result.Profiles[0].Arn + } + + return "" +} + +func (c *SSOOIDCClient) tryListProfilesLegacy(ctx context.Context, accessToken string) string { + payload := map[string]interface{}{ + "origin": "AI_EDITOR", + } + + body, err := json.Marshal(payload) + if err != nil { + return "" + } + + // Use the legacy CodeWhisperer endpoint for JSON-RPC style requests. 
+ // The Q endpoint (q.{region}.amazonaws.com) does NOT support x-amz-target headers. + req, err := http.NewRequestWithContext(ctx, http.MethodPost, GetCodeWhispererLegacyEndpoint(""), strings.NewReader(string(body))) + if err != nil { + return "" + } + + req.Header.Set("Content-Type", "application/x-amz-json-1.0") + req.Header.Set("x-amz-target", "AmazonCodeWhispererService.ListProfiles") + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("Accept", "application/json") + + resp, err := c.httpClient.Do(req) + if err != nil { + return "" + } + defer resp.Body.Close() + + respBody, _ := io.ReadAll(resp.Body) + + if resp.StatusCode != http.StatusOK { + log.Debugf("ListProfiles (legacy) failed (status %d): %s", resp.StatusCode, string(respBody)) + return "" + } + + log.Debugf("ListProfiles (legacy) response: %s", string(respBody)) + + var result struct { + Profiles []struct { + Arn string `json:"arn"` + } `json:"profiles"` + ProfileArn string `json:"profileArn"` + } + + if err := json.Unmarshal(respBody, &result); err != nil { + return "" + } + + if result.ProfileArn != "" { + return result.ProfileArn + } + + if len(result.Profiles) > 0 { + return result.Profiles[0].Arn + } + + return "" +} + +// RegisterClientForAuthCode registers a new OIDC client for authorization code flow. 
+func (c *SSOOIDCClient) RegisterClientForAuthCode(ctx context.Context, redirectURI string) (*RegisterClientResponse, error) { + payload := map[string]interface{}{ + "clientName": "Kiro IDE", + "clientType": "public", + "scopes": []string{"codewhisperer:completions", "codewhisperer:analysis", "codewhisperer:conversations", "codewhisperer:transformations", "codewhisperer:taskassist"}, + "grantTypes": []string{"authorization_code", "refresh_token"}, + "redirectUris": []string{redirectURI}, + "issuerUrl": builderIDStartURL, + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, ssoOIDCEndpoint+"/client/register", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("register client for auth code failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("register client failed (status %d)", resp.StatusCode) + } + + var result RegisterClientResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +func (c *SSOOIDCClient) RegisterClientForAuthCodeWithIDC(ctx context.Context, redirectURI, issuerUrl, region string) (*RegisterClientResponse, error) { + endpoint := getOIDCEndpoint(region) + + payload := map[string]interface{}{ + "clientName": "Kiro IDE", + "clientType": "public", + "scopes": []string{"codewhisperer:completions", "codewhisperer:analysis", "codewhisperer:conversations", "codewhisperer:transformations", "codewhisperer:taskassist"}, + "grantTypes": []string{"authorization_code", "refresh_token"}, + "redirectUris": []string{redirectURI}, + "issuerUrl": issuerUrl, + } + + body, err := json.Marshal(payload) 
+ if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint+"/client/register", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("register client for auth code with IDC failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("register client failed (status %d)", resp.StatusCode) + } + + var result RegisterClientResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// AuthCodeCallbackResult contains the result from authorization code callback. +type AuthCodeCallbackResult struct { + Code string + State string + Error string +} + +// startAuthCodeCallbackServer starts a local HTTP server to receive the authorization code callback. 
+func (c *SSOOIDCClient) startAuthCodeCallbackServer(ctx context.Context, expectedState string) (string, <-chan AuthCodeCallbackResult, error) { + // Try to find an available port + listener, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", authCodeCallbackPort)) + if err != nil { + // Try with dynamic port + log.Warnf("sso oidc: default port %d is busy, falling back to dynamic port", authCodeCallbackPort) + listener, err = net.Listen("tcp", "127.0.0.1:0") + if err != nil { + return "", nil, fmt.Errorf("failed to start callback server: %w", err) + } + } + + port := listener.Addr().(*net.TCPAddr).Port + redirectURI := fmt.Sprintf("http://127.0.0.1:%d%s", port, authCodeCallbackPath) + resultChan := make(chan AuthCodeCallbackResult, 1) + doneChan := make(chan struct{}) + + server := &http.Server{ + ReadHeaderTimeout: 10 * time.Second, + } + + mux := http.NewServeMux() + mux.HandleFunc(authCodeCallbackPath, func(w http.ResponseWriter, r *http.Request) { + code := r.URL.Query().Get("code") + state := r.URL.Query().Get("state") + errParam := r.URL.Query().Get("error") + + // Send response to browser + w.Header().Set("Content-Type", "text/html; charset=utf-8") + if errParam != "" { + w.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(w, ` +Login Failed +

<h1>Login Failed</h1>
<p>Error: %s</p>
<p>You can close this window.</p>
`, html.EscapeString(errParam)) + resultChan <- AuthCodeCallbackResult{Error: errParam} + close(doneChan) + return + } + + if state != expectedState { + w.WriteHeader(http.StatusBadRequest) + fmt.Fprint(w, ` +Login Failed +

<h1>Login Failed</h1>
<p>Invalid state parameter</p>
<p>You can close this window.</p>
`) + resultChan <- AuthCodeCallbackResult{Error: "state mismatch"} + close(doneChan) + return + } + + fmt.Fprint(w, ` +Login Successful +

<h1>Login Successful!</h1>
<p>You can close this window and return to the terminal.</p>
+`) + resultChan <- AuthCodeCallbackResult{Code: code, State: state} + close(doneChan) + }) + + server.Handler = mux + + go func() { + if err := server.Serve(listener); err != nil && err != http.ErrServerClosed { + log.Debugf("auth code callback server error: %v", err) + } + }() + + go func() { + select { + case <-ctx.Done(): + case <-time.After(10 * time.Minute): + case <-doneChan: + } + _ = server.Shutdown(context.Background()) + }() + + return redirectURI, resultChan, nil +} + +// generatePKCEForAuthCode generates PKCE code verifier and challenge for authorization code flow. +func generatePKCEForAuthCode() (verifier, challenge string, err error) { + b := make([]byte, 32) + if _, err := rand.Read(b); err != nil { + return "", "", fmt.Errorf("failed to generate random bytes: %w", err) + } + verifier = base64.RawURLEncoding.EncodeToString(b) + h := sha256.Sum256([]byte(verifier)) + challenge = base64.RawURLEncoding.EncodeToString(h[:]) + return verifier, challenge, nil +} + +// generateStateForAuthCode generates a random state parameter. +func generateStateForAuthCode() (string, error) { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return "", err + } + return base64.RawURLEncoding.EncodeToString(b), nil +} + +// CreateTokenWithAuthCode exchanges authorization code for tokens. 
+func (c *SSOOIDCClient) CreateTokenWithAuthCode(ctx context.Context, clientID, clientSecret, code, codeVerifier, redirectURI string) (*CreateTokenResponse, error) { + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "code": code, + "codeVerifier": codeVerifier, + "redirectUri": redirectURI, + "grantType": "authorization_code", + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, ssoOIDCEndpoint+"/token", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("create token with auth code failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("create token failed (status %d)", resp.StatusCode) + } + + var result CreateTokenResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +func (c *SSOOIDCClient) CreateTokenWithAuthCodeAndRegion(ctx context.Context, clientID, clientSecret, code, codeVerifier, redirectURI, region string) (*CreateTokenResponse, error) { + endpoint := getOIDCEndpoint(region) + + payload := map[string]string{ + "clientId": clientID, + "clientSecret": clientSecret, + "code": code, + "codeVerifier": codeVerifier, + "redirectUri": redirectURI, + "grantType": "authorization_code", + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, err + } + + req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint+"/token", strings.NewReader(string(body))) + if err != nil { + return nil, err + } + SetOIDCHeaders(req) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + 
respBody, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + if resp.StatusCode != http.StatusOK { + log.Debugf("create token with auth code failed (status %d): %s", resp.StatusCode, string(respBody)) + return nil, fmt.Errorf("create token failed (status %d)", resp.StatusCode) + } + + var result CreateTokenResponse + if err := json.Unmarshal(respBody, &result); err != nil { + return nil, err + } + + return &result, nil +} + +// LoginWithBuilderIDAuthCode performs the authorization code flow for AWS Builder ID. +// This provides a better UX than device code flow as it uses automatic browser callback. +func (c *SSOOIDCClient) LoginWithBuilderIDAuthCode(ctx context.Context) (*KiroTokenData, error) { + fmt.Println("\n╔══════════════════════════════════════════════════════════╗") + fmt.Println("║ Kiro Authentication (AWS Builder ID - Auth Code) ║") + fmt.Println("╚══════════════════════════════════════════════════════════╝") + + // Step 1: Generate PKCE and state + codeVerifier, codeChallenge, err := generatePKCEForAuthCode() + if err != nil { + return nil, fmt.Errorf("failed to generate PKCE: %w", err) + } + + state, err := generateStateForAuthCode() + if err != nil { + return nil, fmt.Errorf("failed to generate state: %w", err) + } + + // Step 2: Start callback server + fmt.Println("\nStarting callback server...") + redirectURI, resultChan, err := c.startAuthCodeCallbackServer(ctx, state) + if err != nil { + return nil, fmt.Errorf("failed to start callback server: %w", err) + } + log.Debugf("Callback server started, redirect URI: %s", redirectURI) + + // Step 3: Register client with auth code grant type + fmt.Println("Registering client...") + regResp, err := c.RegisterClientForAuthCode(ctx, redirectURI) + if err != nil { + return nil, fmt.Errorf("failed to register client: %w", err) + } + log.Debugf("Client registered: %s", regResp.ClientID) + + // Step 4: Build authorization URL + scopes := 
"codewhisperer:completions,codewhisperer:analysis,codewhisperer:conversations" + // Use the shared helper so the redirect URI and the colons/commas in scopes are percent-encoded. + authURL := buildAuthorizationURL(ssoOIDCEndpoint, regResp.ClientID, redirectURI, scopes, state, codeChallenge) + + // Step 5: Open browser + fmt.Println("\n════════════════════════════════════════════════════════════") + fmt.Println(" Opening browser for authentication...") + fmt.Println("════════════════════════════════════════════════════════════") + fmt.Printf("\n URL: %s\n\n", authURL) + + // Set incognito mode + if c.cfg != nil { + browser.SetIncognitoMode(c.cfg.IncognitoBrowser) + } else { + browser.SetIncognitoMode(true) + } + + if err := browser.OpenURL(authURL); err != nil { + log.Warnf("Could not open browser automatically: %v", err) + fmt.Println(" ⚠ Could not open browser automatically.") + fmt.Println(" Please open the URL above in your browser manually.") + } else { + fmt.Println(" (Browser opened automatically)") + } + + fmt.Println("\n Waiting for authorization callback...") + + // Step 6: Wait for callback + select { + case <-ctx.Done(): + browser.CloseBrowser() + return nil, ctx.Err() + case <-time.After(10 * time.Minute): + browser.CloseBrowser() + return nil, fmt.Errorf("authorization timed out") + case result := <-resultChan: + if result.Error != "" { + browser.CloseBrowser() + return nil, fmt.Errorf("authorization failed: %s", result.Error) + } + + fmt.Println("\n✓ Authorization received!") + + // Close browser + if err := browser.CloseBrowser(); err != nil { + log.Debugf("Failed to close browser: %v", err) + } + + // Step 7: Exchange code for tokens + fmt.Println("Exchanging code for tokens...") + tokenResp, err := c.CreateTokenWithAuthCode(ctx, regResp.ClientID, regResp.ClientSecret, result.Code, codeVerifier, redirectURI) + if err != nil { + return nil, fmt.Errorf("failed to exchange code for tokens: %w", err) + } + + 
fmt.Println("\n✓ Authentication successful!") + + // Fetch user email (tries CodeWhisperer API first, then userinfo endpoint, then JWT parsing) + email := FetchUserEmailWithFallback(ctx, c.cfg, tokenResp.AccessToken, regResp.ClientID, tokenResp.RefreshToken) + if email != "" { + fmt.Printf(" Logged in as: %s\n", email) + } + + expiresAt := time.Now().Add(time.Duration(tokenResp.ExpiresIn) * time.Second) + + return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: "", // Builder ID has no profile + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "builder-id", + Provider: "AWS", + ClientID: regResp.ClientID, + ClientSecret: regResp.ClientSecret, + Email: email, + Region: defaultIDCRegion, + }, nil + } +} + +func (c *SSOOIDCClient) LoginWithIDCAuthCode(ctx context.Context, startURL, region string) (*KiroTokenData, error) { + fmt.Println("\n╔══════════════════════════════════════════════════════════╗") + fmt.Println("║ Kiro Authentication (AWS IDC - Auth Code) ║") + fmt.Println("╚══════════════════════════════════════════════════════════╝") + + if region == "" { + region = defaultIDCRegion + } + + codeVerifier, codeChallenge, err := generatePKCEForAuthCode() + if err != nil { + return nil, fmt.Errorf("failed to generate PKCE: %w", err) + } + + state, err := generateStateForAuthCode() + if err != nil { + return nil, fmt.Errorf("failed to generate state: %w", err) + } + + fmt.Println("\nStarting callback server...") + redirectURI, resultChan, err := c.startAuthCodeCallbackServer(ctx, state) + if err != nil { + return nil, fmt.Errorf("failed to start callback server: %w", err) + } + log.Debugf("Callback server started, redirect URI: %s", redirectURI) + + fmt.Println("Registering client...") + regResp, err := c.RegisterClientForAuthCodeWithIDC(ctx, redirectURI, startURL, region) + if err != nil { + return nil, fmt.Errorf("failed to register client: %w", err) + } + log.Debugf("Client registered: %s", 
regResp.ClientID) + + endpoint := getOIDCEndpoint(region) + scopes := "codewhisperer:completions,codewhisperer:analysis,codewhisperer:conversations,codewhisperer:transformations,codewhisperer:taskassist" + authURL := buildAuthorizationURL(endpoint, regResp.ClientID, redirectURI, scopes, state, codeChallenge) + + fmt.Println("\n════════════════════════════════════════════════════════════") + fmt.Println(" Opening browser for authentication...") + fmt.Println("════════════════════════════════════════════════════════════") + fmt.Printf("\n URL: %s\n\n", authURL) + + if c.cfg != nil { + browser.SetIncognitoMode(c.cfg.IncognitoBrowser) + } else { + browser.SetIncognitoMode(true) + } + + if err := browser.OpenURL(authURL); err != nil { + log.Warnf("Could not open browser automatically: %v", err) + fmt.Println(" ⚠ Could not open browser automatically.") + fmt.Println(" Please open the URL above in your browser manually.") + } else { + fmt.Println(" (Browser opened automatically)") + } + + fmt.Println("\n Waiting for authorization callback...") + + select { + case <-ctx.Done(): + browser.CloseBrowser() + return nil, ctx.Err() + case <-time.After(10 * time.Minute): + browser.CloseBrowser() + return nil, fmt.Errorf("authorization timed out") + case result := <-resultChan: + if result.Error != "" { + browser.CloseBrowser() + return nil, fmt.Errorf("authorization failed: %s", result.Error) + } + + fmt.Println("\n✓ Authorization received!") + + if err := browser.CloseBrowser(); err != nil { + log.Debugf("Failed to close browser: %v", err) + } + + fmt.Println("Exchanging code for tokens...") + tokenResp, err := c.CreateTokenWithAuthCodeAndRegion(ctx, regResp.ClientID, regResp.ClientSecret, result.Code, codeVerifier, redirectURI, region) + if err != nil { + return nil, fmt.Errorf("failed to exchange code for tokens: %w", err) + } + + fmt.Println("\n✓ Authentication successful!") + + fmt.Println("Fetching profile information...") + profileArn := c.FetchProfileArn(ctx, 
tokenResp.AccessToken, regResp.ClientID, tokenResp.RefreshToken) + + email := FetchUserEmailWithFallback(ctx, c.cfg, tokenResp.AccessToken, regResp.ClientID, tokenResp.RefreshToken) + if email != "" { + fmt.Printf(" Logged in as: %s\n", email) + } + + expiresAt := time.Now().Add(time.Duration(tokenResp.ExpiresIn) * time.Second) + + return &KiroTokenData{ + AccessToken: tokenResp.AccessToken, + RefreshToken: tokenResp.RefreshToken, + ProfileArn: profileArn, + ExpiresAt: expiresAt.Format(time.RFC3339), + AuthMethod: "idc", + Provider: "AWS", + ClientID: regResp.ClientID, + ClientSecret: regResp.ClientSecret, + Email: email, + StartURL: startURL, + Region: region, + }, nil + } +} + +func buildAuthorizationURL(endpoint, clientID, redirectURI, scopes, state, codeChallenge string) string { + params := url.Values{} + params.Set("response_type", "code") + params.Set("client_id", clientID) + params.Set("redirect_uri", redirectURI) + params.Set("scopes", scopes) + params.Set("state", state) + params.Set("code_challenge", codeChallenge) + params.Set("code_challenge_method", "S256") + return fmt.Sprintf("%s/authorize?%s", endpoint, params.Encode()) +} diff --git a/internal/auth/kiro/sso_oidc_test.go b/internal/auth/kiro/sso_oidc_test.go new file mode 100644 index 0000000000..760a6033ad --- /dev/null +++ b/internal/auth/kiro/sso_oidc_test.go @@ -0,0 +1,261 @@ +package kiro + +import ( + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "net/http/httptest" + "net/url" + "strings" + "testing" +) + +type recordingRoundTripper struct { + lastReq *http.Request +} + +func (rt *recordingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) { + rt.lastReq = req + body := `{"nextToken":null,"profiles":[{"arn":"arn:aws:codewhisperer:us-east-1:123456789012:profile/ABC","profileName":"test"}]}` + return &http.Response{ + StatusCode: http.StatusOK, + Body: io.NopCloser(strings.NewReader(body)), + Header: make(http.Header), + }, nil +} + +func 
TestTryListAvailableProfiles_UsesClientIDForAccountKey(t *testing.T) { + rt := &recordingRoundTripper{} + client := &SSOOIDCClient{ + httpClient: &http.Client{Transport: rt}, + } + + profileArn := client.tryListAvailableProfiles(context.Background(), "access-token", "client-id-123", "refresh-token-456") + if profileArn == "" { + t.Fatal("expected profileArn, got empty result") + } + + accountKey := GetAccountKey("client-id-123", "refresh-token-456") + fp := GlobalFingerprintManager().GetFingerprint(accountKey) + expected := fmt.Sprintf("aws-sdk-js/%s KiroIDE-%s-%s", fp.RuntimeSDKVersion, fp.KiroVersion, fp.KiroHash) + got := rt.lastReq.Header.Get("X-Amz-User-Agent") + if got != expected { + t.Errorf("X-Amz-User-Agent = %q, want %q", got, expected) + } +} + +func TestTryListAvailableProfiles_UsesRefreshTokenWhenClientIDMissing(t *testing.T) { + rt := &recordingRoundTripper{} + client := &SSOOIDCClient{ + httpClient: &http.Client{Transport: rt}, + } + + profileArn := client.tryListAvailableProfiles(context.Background(), "access-token", "", "refresh-token-789") + if profileArn == "" { + t.Fatal("expected profileArn, got empty result") + } + + accountKey := GetAccountKey("", "refresh-token-789") + fp := GlobalFingerprintManager().GetFingerprint(accountKey) + expected := fmt.Sprintf("aws-sdk-js/%s KiroIDE-%s-%s", fp.RuntimeSDKVersion, fp.KiroVersion, fp.KiroHash) + got := rt.lastReq.Header.Get("X-Amz-User-Agent") + if got != expected { + t.Errorf("X-Amz-User-Agent = %q, want %q", got, expected) + } +} + +func TestRegisterClientForAuthCodeWithIDC(t *testing.T) { + var capturedReq struct { + Method string + Path string + Headers http.Header + Body map[string]interface{} + } + + mockResp := RegisterClientResponse{ + ClientID: "test-client-id", + ClientSecret: "test-client-secret", + ClientIDIssuedAt: 1700000000, + ClientSecretExpiresAt: 1700086400, + } + + ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + capturedReq.Method = 
r.Method + capturedReq.Path = r.URL.Path + capturedReq.Headers = r.Header.Clone() + + bodyBytes, _ := io.ReadAll(r.Body) + json.Unmarshal(bodyBytes, &capturedReq.Body) + + w.Header().Set("Content-Type", "application/json") + json.NewEncoder(w).Encode(mockResp) + })) + defer ts.Close() + + // Extract host to build a region that resolves to our test server. + // Override getOIDCEndpoint by passing region="" and patching the endpoint. + // Since getOIDCEndpoint builds "https://oidc.{region}.amazonaws.com", we + // instead inject the test server URL directly via a custom HTTP client transport. + client := &SSOOIDCClient{ + httpClient: ts.Client(), + } + + // We need to route the request to our test server. Use a transport that rewrites the URL. + client.httpClient.Transport = &rewriteTransport{ + base: ts.Client().Transport, + targetURL: ts.URL, + } + + resp, err := client.RegisterClientForAuthCodeWithIDC( + context.Background(), + "http://127.0.0.1:19877/oauth/callback", + "https://my-idc-instance.awsapps.com/start", + "us-east-1", + ) + if err != nil { + t.Fatalf("unexpected error: %v", err) + } + + // Verify request method and path + if capturedReq.Method != http.MethodPost { + t.Errorf("method = %q, want POST", capturedReq.Method) + } + if capturedReq.Path != "/client/register" { + t.Errorf("path = %q, want /client/register", capturedReq.Path) + } + + // Verify headers + if ct := capturedReq.Headers.Get("Content-Type"); ct != "application/json" { + t.Errorf("Content-Type = %q, want application/json", ct) + } + ua := capturedReq.Headers.Get("User-Agent") + if !strings.Contains(ua, "KiroIDE") { + t.Errorf("User-Agent %q does not contain KiroIDE", ua) + } + if !strings.Contains(ua, "sso-oidc") { + t.Errorf("User-Agent %q does not contain sso-oidc", ua) + } + xua := capturedReq.Headers.Get("X-Amz-User-Agent") + if !strings.Contains(xua, "KiroIDE") { + t.Errorf("x-amz-user-agent %q does not contain KiroIDE", xua) + } + + // Verify body fields + if v, _ := 
capturedReq.Body["clientName"].(string); v != "Kiro IDE" { + t.Errorf("clientName = %q, want %q", v, "Kiro IDE") + } + if v, _ := capturedReq.Body["clientType"].(string); v != "public" { + t.Errorf("clientType = %q, want %q", v, "public") + } + if v, _ := capturedReq.Body["issuerUrl"].(string); v != "https://my-idc-instance.awsapps.com/start" { + t.Errorf("issuerUrl = %q, want %q", v, "https://my-idc-instance.awsapps.com/start") + } + + // Verify scopes array + scopesRaw, ok := capturedReq.Body["scopes"].([]interface{}) + if !ok || len(scopesRaw) != 5 { + t.Fatalf("scopes: got %v, want 5-element array", capturedReq.Body["scopes"]) + } + expectedScopes := []string{ + "codewhisperer:completions", "codewhisperer:analysis", + "codewhisperer:conversations", "codewhisperer:transformations", + "codewhisperer:taskassist", + } + for i, s := range expectedScopes { + if scopesRaw[i].(string) != s { + t.Errorf("scopes[%d] = %q, want %q", i, scopesRaw[i], s) + } + } + + // Verify grantTypes + grantTypesRaw, ok := capturedReq.Body["grantTypes"].([]interface{}) + if !ok || len(grantTypesRaw) != 2 { + t.Fatalf("grantTypes: got %v, want 2-element array", capturedReq.Body["grantTypes"]) + } + if grantTypesRaw[0].(string) != "authorization_code" || grantTypesRaw[1].(string) != "refresh_token" { + t.Errorf("grantTypes = %v, want [authorization_code, refresh_token]", grantTypesRaw) + } + + // Verify redirectUris + redirectRaw, ok := capturedReq.Body["redirectUris"].([]interface{}) + if !ok || len(redirectRaw) != 1 { + t.Fatalf("redirectUris: got %v, want 1-element array", capturedReq.Body["redirectUris"]) + } + if redirectRaw[0].(string) != "http://127.0.0.1:19877/oauth/callback" { + t.Errorf("redirectUris[0] = %q, want %q", redirectRaw[0], "http://127.0.0.1:19877/oauth/callback") + } + + // Verify response parsing + if resp.ClientID != "test-client-id" { + t.Errorf("ClientID = %q, want %q", resp.ClientID, "test-client-id") + } + if resp.ClientSecret != "test-client-secret" { + 
t.Errorf("ClientSecret = %q, want %q", resp.ClientSecret, "test-client-secret") + } + if resp.ClientIDIssuedAt != 1700000000 { + t.Errorf("ClientIDIssuedAt = %d, want %d", resp.ClientIDIssuedAt, 1700000000) + } + if resp.ClientSecretExpiresAt != 1700086400 { + t.Errorf("ClientSecretExpiresAt = %d, want %d", resp.ClientSecretExpiresAt, 1700086400) + } +} + +// rewriteTransport redirects all requests to the test server URL. +type rewriteTransport struct { + base http.RoundTripper + targetURL string +} + +func (t *rewriteTransport) RoundTrip(req *http.Request) (*http.Response, error) { + target, _ := url.Parse(t.targetURL) + req.URL.Scheme = target.Scheme + req.URL.Host = target.Host + if t.base != nil { + return t.base.RoundTrip(req) + } + return http.DefaultTransport.RoundTrip(req) +} + +func TestBuildAuthorizationURL(t *testing.T) { + scopes := "codewhisperer:completions,codewhisperer:analysis,codewhisperer:conversations,codewhisperer:transformations,codewhisperer:taskassist" + endpoint := "https://oidc.us-east-1.amazonaws.com" + redirectURI := "http://127.0.0.1:19877/oauth/callback" + + authURL := buildAuthorizationURL(endpoint, "test-client-id", redirectURI, scopes, "random-state", "test-challenge") + + // Verify colons and commas in scopes are percent-encoded + if !strings.Contains(authURL, "codewhisperer%3Acompletions") { + t.Errorf("expected colons in scopes to be percent-encoded, got: %s", authURL) + } + if !strings.Contains(authURL, "completions%2Ccodewhisperer") { + t.Errorf("expected commas in scopes to be percent-encoded, got: %s", authURL) + } + + // Parse back and verify all parameters round-trip correctly + parsed, err := url.Parse(authURL) + if err != nil { + t.Fatalf("failed to parse auth URL: %v", err) + } + + if !strings.HasPrefix(authURL, endpoint+"/authorize?") { + t.Errorf("expected URL to start with %s/authorize?, got: %s", endpoint, authURL) + } + + q := parsed.Query() + checks := map[string]string{ + "response_type": "code", + "client_id": 
"test-client-id", + "redirect_uri": redirectURI, + "scopes": scopes, + "state": "random-state", + "code_challenge": "test-challenge", + "code_challenge_method": "S256", + } + for key, want := range checks { + if got := q.Get(key); got != want { + t.Errorf("%s = %q, want %q", key, got, want) + } + } +} diff --git a/internal/auth/kiro/token.go b/internal/auth/kiro/token.go new file mode 100644 index 0000000000..91a4995b33 --- /dev/null +++ b/internal/auth/kiro/token.go @@ -0,0 +1,89 @@ +package kiro + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" +) + +// KiroTokenStorage holds the persistent token data for Kiro authentication. +type KiroTokenStorage struct { + // Type is the provider type for management UI recognition (must be "kiro") + Type string `json:"type"` + // AccessToken is the OAuth2 access token for API access + AccessToken string `json:"access_token"` + // RefreshToken is used to obtain new access tokens + RefreshToken string `json:"refresh_token"` + // ProfileArn is the AWS CodeWhisperer profile ARN + ProfileArn string `json:"profile_arn"` + // ExpiresAt is the timestamp when the token expires + ExpiresAt string `json:"expires_at"` + // AuthMethod indicates the authentication method used + AuthMethod string `json:"auth_method"` + // Provider indicates the OAuth provider + Provider string `json:"provider"` + // LastRefresh is the timestamp of the last token refresh + LastRefresh string `json:"last_refresh"` + // ClientID is the OAuth client ID (required for token refresh) + ClientID string `json:"client_id,omitempty"` + // ClientSecret is the OAuth client secret (required for token refresh) + ClientSecret string `json:"client_secret,omitempty"` + // Region is the OIDC region for IDC login and token refresh + Region string `json:"region,omitempty"` + // StartURL is the AWS Identity Center start URL (for IDC auth) + StartURL string `json:"start_url,omitempty"` + // Email is the user's email address + Email string `json:"email,omitempty"` +} + 
+// SaveTokenToFile persists the token storage to the specified file path. +func (s *KiroTokenStorage) SaveTokenToFile(authFilePath string) error { + dir := filepath.Dir(authFilePath) + if err := os.MkdirAll(dir, 0700); err != nil { + return fmt.Errorf("failed to create directory: %w", err) + } + + data, err := json.MarshalIndent(s, "", " ") + if err != nil { + return fmt.Errorf("failed to marshal token storage: %w", err) + } + + if err := os.WriteFile(authFilePath, data, 0600); err != nil { + return fmt.Errorf("failed to write token file: %w", err) + } + + return nil +} + +// LoadFromFile loads token storage from the specified file path. +func LoadFromFile(authFilePath string) (*KiroTokenStorage, error) { + data, err := os.ReadFile(authFilePath) + if err != nil { + return nil, fmt.Errorf("failed to read token file: %w", err) + } + + var storage KiroTokenStorage + if err := json.Unmarshal(data, &storage); err != nil { + return nil, fmt.Errorf("failed to parse token file: %w", err) + } + + return &storage, nil +} + +// ToTokenData converts storage to KiroTokenData for API use. 
+func (s *KiroTokenStorage) ToTokenData() *KiroTokenData { + return &KiroTokenData{ + AccessToken: s.AccessToken, + RefreshToken: s.RefreshToken, + ProfileArn: s.ProfileArn, + ExpiresAt: s.ExpiresAt, + AuthMethod: s.AuthMethod, + Provider: s.Provider, + ClientID: s.ClientID, + ClientSecret: s.ClientSecret, + Region: s.Region, + StartURL: s.StartURL, + Email: s.Email, + } +} diff --git a/internal/auth/kiro/token_repository.go b/internal/auth/kiro/token_repository.go new file mode 100644 index 0000000000..3ddf620e8f --- /dev/null +++ b/internal/auth/kiro/token_repository.go @@ -0,0 +1,260 @@ +package kiro + +import ( + "context" + "encoding/json" + "fmt" + "io/fs" + "os" + "path/filepath" + "sort" + "strings" + "sync" + "time" + + log "github.com/sirupsen/logrus" +) + +// FileTokenRepository implements the TokenRepository interface, backed by the file system. +type FileTokenRepository struct { + mu sync.RWMutex + baseDir string +} + +// NewFileTokenRepository creates a new file-based token repository. +func NewFileTokenRepository(baseDir string) *FileTokenRepository { + return &FileTokenRepository{ + baseDir: baseDir, + } +} + +// SetBaseDir sets the base directory. +func (r *FileTokenRepository) SetBaseDir(dir string) { + r.mu.Lock() + r.baseDir = strings.TrimSpace(dir) + r.mu.Unlock() +} + +// FindOldestUnverified finds tokens that need refreshing, sorted by last verification time. +func (r *FileTokenRepository) FindOldestUnverified(limit int) []*Token { + r.mu.RLock() + baseDir := r.baseDir + r.mu.RUnlock() + + if baseDir == "" { + log.Debug("token repository: base directory not configured") + return nil + } + + var tokens []*Token + + err := filepath.WalkDir(baseDir, func(path string, d fs.DirEntry, walkErr error) error { + if walkErr != nil { + return nil // ignore errors and keep walking + } + if d.IsDir() { + return nil + } + if !strings.HasSuffix(strings.ToLower(d.Name()), ".json") { + return nil + } + + // Only process kiro token files. + if !strings.HasPrefix(d.Name(), "kiro-") { + return nil + } + + token, err := r.readTokenFile(path) + if err != nil { + log.Debugf("token repository: failed to read token 
file %s: %v", path, err) + return nil + } + + if token != nil && token.RefreshToken != "" { + // Check whether the token needs refreshing (within 5 minutes of expiry). + if token.ExpiresAt.IsZero() || time.Until(token.ExpiresAt) < 5*time.Minute { + tokens = append(tokens, token) + } + } + + return nil + }) + + if err != nil { + log.Warnf("token repository: error walking directory: %v", err) + } + + // Sort by last verification time (oldest first). + sort.Slice(tokens, func(i, j int) bool { + return tokens[i].LastVerified.Before(tokens[j].LastVerified) + }) + + // Cap the number of results. + if limit > 0 && len(tokens) > limit { + tokens = tokens[:limit] + } + + return tokens +} + +// UpdateToken updates a token and persists it to file. +func (r *FileTokenRepository) UpdateToken(token *Token) error { + if token == nil { + return fmt.Errorf("token repository: token is nil") + } + + r.mu.RLock() + baseDir := r.baseDir + r.mu.RUnlock() + + if baseDir == "" { + return fmt.Errorf("token repository: base directory not configured") + } + + // Build the file path. + filePath := filepath.Join(baseDir, token.ID) + if !strings.HasSuffix(filePath, ".json") { + filePath += ".json" + } + + // Read the existing file contents. + existingData := make(map[string]any) + if data, err := os.ReadFile(filePath); err == nil { + _ = json.Unmarshal(data, &existingData) + } + + // Update fields. + existingData["access_token"] = token.AccessToken + existingData["refresh_token"] = token.RefreshToken + existingData["last_refresh"] = time.Now().Format(time.RFC3339) + + if !token.ExpiresAt.IsZero() { + existingData["expires_at"] = token.ExpiresAt.Format(time.RFC3339) + } + + // Preserve existing key fields. + if token.ClientID != "" { + existingData["client_id"] = token.ClientID + } + if token.ClientSecret != "" { + existingData["client_secret"] = token.ClientSecret + } + if token.AuthMethod != "" { + existingData["auth_method"] = token.AuthMethod + } + if token.Region != "" { + existingData["region"] = token.Region + } + if token.StartURL != "" { + existingData["start_url"] = token.StartURL + } + + // Serialize and write to file. + raw, err := json.MarshalIndent(existingData, "", " ") + if err != 
nil { + return fmt.Errorf("token repository: marshal failed: %w", err) + } + + // Atomic write: write to a temporary file first, then rename. + tmpPath := filePath + ".tmp" + if err := os.WriteFile(tmpPath, raw, 0o600); err != nil { + return fmt.Errorf("token repository: write temp file failed: %w", err) + } + if err := os.Rename(tmpPath, filePath); err != nil { + _ = os.Remove(tmpPath) + return fmt.Errorf("token repository: rename failed: %w", err) + } + + log.Debugf("token repository: updated token %s", token.ID) + return nil +} + +// readTokenFile reads a token from the given file. +func (r *FileTokenRepository) readTokenFile(path string) (*Token, error) { + data, err := os.ReadFile(path) + if err != nil { + return nil, err + } + + var metadata map[string]any + if err := json.Unmarshal(data, &metadata); err != nil { + return nil, err + } + + // Check whether this is a kiro token. + tokenType, _ := metadata["type"].(string) + if tokenType != "kiro" { + return nil, nil + } + + // Check auth_method (case-insensitive comparison to handle "IdC", "IDC", "idc", etc.) + authMethod, _ := metadata["auth_method"].(string) + authMethod = strings.ToLower(authMethod) + if authMethod != "idc" && authMethod != "builder-id" { + return nil, nil // only handle IDC and Builder ID tokens + } + + token := &Token{ + ID: filepath.Base(path), + AuthMethod: authMethod, + } + + // Parse individual fields. + token.AccessToken, _ = metadata["access_token"].(string) + token.RefreshToken, _ = metadata["refresh_token"].(string) + token.ClientID, _ = metadata["client_id"].(string) + token.ClientSecret, _ = metadata["client_secret"].(string) + token.Region, _ = metadata["region"].(string) + token.StartURL, _ = metadata["start_url"].(string) + token.Provider, _ = metadata["provider"].(string) + + // Parse time fields. + if expiresAtStr, ok := metadata["expires_at"].(string); ok && expiresAtStr != "" { + if t, err := time.Parse(time.RFC3339, expiresAtStr); err == nil { + token.ExpiresAt = t + } + } + if lastRefreshStr, ok := metadata["last_refresh"].(string); ok && lastRefreshStr != "" { + if t, err := 
time.Parse(time.RFC3339, lastRefreshStr); err == nil { + token.LastVerified = t + } + } + + return token, nil +} + +// ListKiroTokens lists all Kiro tokens (for debugging). +func (r *FileTokenRepository) ListKiroTokens(ctx context.Context) ([]*Token, error) { + r.mu.RLock() + baseDir := r.baseDir + r.mu.RUnlock() + + if baseDir == "" { + return nil, fmt.Errorf("token repository: base directory not configured") + } + + var tokens []*Token + + err := filepath.WalkDir(baseDir, func(path string, d fs.DirEntry, walkErr error) error { + if walkErr != nil { + return nil + } + if d.IsDir() { + return nil + } + if !strings.HasPrefix(d.Name(), "kiro-") || !strings.HasSuffix(d.Name(), ".json") { + return nil + } + + token, err := r.readTokenFile(path) + if err != nil { + return nil + } + if token != nil { + tokens = append(tokens, token) + } + return nil + }) + + return tokens, err +} diff --git a/internal/auth/kiro/usage_checker.go b/internal/auth/kiro/usage_checker.go new file mode 100644 index 0000000000..30ec1c5dc1 --- /dev/null +++ b/internal/auth/kiro/usage_checker.go @@ -0,0 +1,236 @@ +// Package kiro provides authentication functionality for AWS CodeWhisperer (Kiro) API. +// This file implements usage quota checking and monitoring. +package kiro + +import ( + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" +) + +// UsageQuotaResponse represents the API response structure for usage quota checking. +type UsageQuotaResponse struct { + UsageBreakdownList []UsageBreakdownExtended `json:"usageBreakdownList"` + SubscriptionInfo *SubscriptionInfo `json:"subscriptionInfo,omitempty"` + NextDateReset float64 `json:"nextDateReset,omitempty"` +} + +// UsageBreakdownExtended represents detailed usage information for quota checking. 
+// Note: UsageBreakdown is already defined in codewhisperer_client.go +type UsageBreakdownExtended struct { + ResourceType string `json:"resourceType"` + UsageLimitWithPrecision float64 `json:"usageLimitWithPrecision"` + CurrentUsageWithPrecision float64 `json:"currentUsageWithPrecision"` + FreeTrialInfo *FreeTrialInfoExtended `json:"freeTrialInfo,omitempty"` +} + +// FreeTrialInfoExtended represents free trial usage information. +type FreeTrialInfoExtended struct { + FreeTrialStatus string `json:"freeTrialStatus"` + UsageLimitWithPrecision float64 `json:"usageLimitWithPrecision"` + CurrentUsageWithPrecision float64 `json:"currentUsageWithPrecision"` +} + +// QuotaStatus represents the quota status for a token. +type QuotaStatus struct { + TotalLimit float64 + CurrentUsage float64 + RemainingQuota float64 + IsExhausted bool + ResourceType string + NextReset time.Time +} + +// UsageChecker provides methods for checking token quota usage. +type UsageChecker struct { + httpClient *http.Client +} + +// NewUsageChecker creates a new UsageChecker instance. +func NewUsageChecker(cfg *config.Config) *UsageChecker { + return &UsageChecker{ + httpClient: util.SetProxy(&cfg.SDKConfig, &http.Client{Timeout: 30 * time.Second}), + } +} + +// NewUsageCheckerWithClient creates a UsageChecker with a custom HTTP client. +func NewUsageCheckerWithClient(client *http.Client) *UsageChecker { + return &UsageChecker{ + httpClient: client, + } +} + +// CheckUsage retrieves usage limits for the given token. 
+func (c *UsageChecker) CheckUsage(ctx context.Context, tokenData *KiroTokenData) (*UsageQuotaResponse, error) { + if tokenData == nil { + return nil, fmt.Errorf("token data is nil") + } + + if tokenData.AccessToken == "" { + return nil, fmt.Errorf("access token is empty") + } + + queryParams := map[string]string{ + "origin": "AI_EDITOR", + "profileArn": tokenData.ProfileArn, + "resourceType": "AGENTIC_REQUEST", + } + + // Use endpoint from profileArn if available + endpoint := GetKiroAPIEndpointFromProfileArn(tokenData.ProfileArn) + url := buildURL(endpoint, pathGetUsageLimits, queryParams) + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) + if err != nil { + return nil, fmt.Errorf("failed to create request: %w", err) + } + + accountKey := GetAccountKey(tokenData.ClientID, tokenData.RefreshToken) + setRuntimeHeaders(req, tokenData.AccessToken, accountKey) + + resp, err := c.httpClient.Do(req) + if err != nil { + return nil, fmt.Errorf("request failed: %w", err) + } + defer resp.Body.Close() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("failed to read response: %w", err) + } + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("API error (status %d): %s", resp.StatusCode, string(body)) + } + + var result UsageQuotaResponse + if err := json.Unmarshal(body, &result); err != nil { + return nil, fmt.Errorf("failed to parse usage response: %w", err) + } + + return &result, nil +} + +// CheckUsageByAccessToken retrieves usage limits using an access token and profile ARN directly. +func (c *UsageChecker) CheckUsageByAccessToken(ctx context.Context, accessToken, profileArn string) (*UsageQuotaResponse, error) { + tokenData := &KiroTokenData{ + AccessToken: accessToken, + ProfileArn: profileArn, + } + return c.CheckUsage(ctx, tokenData) +} + +// GetRemainingQuota calculates the remaining quota from usage limits. 
+func GetRemainingQuota(usage *UsageQuotaResponse) float64 { + if usage == nil || len(usage.UsageBreakdownList) == 0 { + return 0 + } + + var totalRemaining float64 + for _, breakdown := range usage.UsageBreakdownList { + remaining := breakdown.UsageLimitWithPrecision - breakdown.CurrentUsageWithPrecision + if remaining > 0 { + totalRemaining += remaining + } + + if breakdown.FreeTrialInfo != nil { + freeRemaining := breakdown.FreeTrialInfo.UsageLimitWithPrecision - breakdown.FreeTrialInfo.CurrentUsageWithPrecision + if freeRemaining > 0 { + totalRemaining += freeRemaining + } + } + } + + return totalRemaining +} + +// IsQuotaExhausted checks if the quota is exhausted based on usage limits. +func IsQuotaExhausted(usage *UsageQuotaResponse) bool { + if usage == nil || len(usage.UsageBreakdownList) == 0 { + return true + } + + for _, breakdown := range usage.UsageBreakdownList { + if breakdown.CurrentUsageWithPrecision < breakdown.UsageLimitWithPrecision { + return false + } + + if breakdown.FreeTrialInfo != nil { + if breakdown.FreeTrialInfo.CurrentUsageWithPrecision < breakdown.FreeTrialInfo.UsageLimitWithPrecision { + return false + } + } + } + + return true +} + +// GetQuotaStatus retrieves a comprehensive quota status for a token. 
+func (c *UsageChecker) GetQuotaStatus(ctx context.Context, tokenData *KiroTokenData) (*QuotaStatus, error) { + usage, err := c.CheckUsage(ctx, tokenData) + if err != nil { + return nil, err + } + + status := &QuotaStatus{ + IsExhausted: IsQuotaExhausted(usage), + } + + if len(usage.UsageBreakdownList) > 0 { + breakdown := usage.UsageBreakdownList[0] + status.TotalLimit = breakdown.UsageLimitWithPrecision + status.CurrentUsage = breakdown.CurrentUsageWithPrecision + status.RemainingQuota = breakdown.UsageLimitWithPrecision - breakdown.CurrentUsageWithPrecision + status.ResourceType = breakdown.ResourceType + + if breakdown.FreeTrialInfo != nil { + status.TotalLimit += breakdown.FreeTrialInfo.UsageLimitWithPrecision + status.CurrentUsage += breakdown.FreeTrialInfo.CurrentUsageWithPrecision + freeRemaining := breakdown.FreeTrialInfo.UsageLimitWithPrecision - breakdown.FreeTrialInfo.CurrentUsageWithPrecision + if freeRemaining > 0 { + status.RemainingQuota += freeRemaining + } + } + } + + if usage.NextDateReset > 0 { + status.NextReset = time.Unix(int64(usage.NextDateReset/1000), 0) + } + + return status, nil +} + +// CalculateAvailableCount calculates the available request count based on usage limits. +func CalculateAvailableCount(usage *UsageQuotaResponse) float64 { + return GetRemainingQuota(usage) +} + +// GetUsagePercentage calculates the usage percentage. 
+func GetUsagePercentage(usage *UsageQuotaResponse) float64 { + if usage == nil || len(usage.UsageBreakdownList) == 0 { + return 100.0 + } + + var totalLimit, totalUsage float64 + for _, breakdown := range usage.UsageBreakdownList { + totalLimit += breakdown.UsageLimitWithPrecision + totalUsage += breakdown.CurrentUsageWithPrecision + + if breakdown.FreeTrialInfo != nil { + totalLimit += breakdown.FreeTrialInfo.UsageLimitWithPrecision + totalUsage += breakdown.FreeTrialInfo.CurrentUsageWithPrecision + } + } + + if totalLimit == 0 { + return 100.0 + } + + return (totalUsage / totalLimit) * 100 +} diff --git a/internal/auth/qoder/auth.go b/internal/auth/qoder/auth.go new file mode 100644 index 0000000000..cc2f800770 --- /dev/null +++ b/internal/auth/qoder/auth.go @@ -0,0 +1,197 @@ +// Package qoder provides OAuth2 authentication functionality for the Qoder provider. +package qoder + +import ( + "crypto/rand" + "crypto/sha256" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "strings" + + log "github.com/sirupsen/logrus" +) + +// UserStatusResponse represents the response from the user status endpoint. +type UserStatusResponse struct { + ID string `json:"id"` + Name string `json:"name"` + Email string `json:"email"` +} + +// QoderAuth handles Qoder PKCE + URI-scheme authentication. +type QoderAuth struct { + httpClient *http.Client +} + +// NewQoderAuth creates a new Qoder auth service. +func NewQoderAuth(httpClient *http.Client) *QoderAuth { + if httpClient == nil { + httpClient = &http.Client{} + } + return &QoderAuth{httpClient: httpClient} +} + +// GeneratePKCE generates a PKCE verifier/challenge pair and a nonce. 
+func GeneratePKCE() (nonce, challenge, verifier string, err error) { + // Generate 32-byte random verifier + verifierBytes := make([]byte, 32) + if _, err = rand.Read(verifierBytes); err != nil { + return "", "", "", fmt.Errorf("qoder: generate verifier: %w", err) + } + verifier = base64.RawURLEncoding.EncodeToString(verifierBytes) + + // SHA-256 challenge + challengeHash := sha256.Sum256([]byte(verifier)) + challenge = base64.RawURLEncoding.EncodeToString(challengeHash[:]) + + // Nonce + nonceBytes := make([]byte, 16) + if _, err = rand.Read(nonceBytes); err != nil { + return "", "", "", fmt.Errorf("qoder: generate nonce: %w", err) + } + nonce = fmt.Sprintf("%x", nonceBytes) + + return nonce, challenge, verifier, nil +} + +// BuildAuthURL constructs the Qoder login URL for browser-based authentication. +func BuildAuthURL(nonce, challenge, machineID string) string { + params := url.Values{} + params.Set("nonce", nonce) + params.Set("challenge", challenge) + params.Set("challenge_method", "S256") + params.Set("redirect_uri", RedirectURI) + params.Set("machine_id", machineID) + return AuthBase + SelectAccountsPath + "?" + params.Encode() +} + +// FetchUserStatus retrieves user info using a device token. 
+func (o *QoderAuth) FetchUserStatus(deviceToken string) (*UserStatusResponse, error) { + deviceToken = strings.TrimSpace(deviceToken) + if deviceToken == "" { + return nil, fmt.Errorf("qoder user status: missing device token") + } + reqURL := OpenAPIBase + UserStatusPath + req, err := http.NewRequest(http.MethodGet, reqURL, nil) + if err != nil { + return nil, fmt.Errorf("qoder user status: create request: %w", err) + } + req.Header.Set("Accept", "application/json") + req.Header.Set("Authorization", "Bearer "+deviceToken) + req.Header.Set("Cosy-Version", IDEVersion) + req.Header.Set("Cosy-Clienttype", "0") + + resp, errDo := o.httpClient.Do(req) + if errDo != nil { + return nil, fmt.Errorf("qoder user status: execute request: %w", errDo) + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("qoder user status: close body error: %v", errClose) + } + }() + + if resp.StatusCode < http.StatusOK || resp.StatusCode >= http.StatusMultipleChoices { + bodyBytes, errRead := io.ReadAll(io.LimitReader(resp.Body, 8<<10)) + if errRead != nil { + return nil, fmt.Errorf("qoder user status: read response: %w", errRead) + } + body := strings.TrimSpace(string(bodyBytes)) + if body == "" { + return nil, fmt.Errorf("qoder user status: request failed: status %d", resp.StatusCode) + } + return nil, fmt.Errorf("qoder user status: request failed: status %d: %s", resp.StatusCode, body) + } + + var user UserStatusResponse + if errDecode := json.NewDecoder(resp.Body).Decode(&user); errDecode != nil { + return nil, fmt.Errorf("qoder user status: decode response: %w", errDecode) + } + return &user, nil +} + +// DecodeAuthField decodes the obfuscated auth callback field to extract user info. 
+func DecodeAuthField(authStr string) (map[string]any, error) { + if strings.TrimSpace(authStr) == "" { + return nil, fmt.Errorf("qoder: empty auth field") + } + + // Reverse custom alphabet to standard base64 + var b64 strings.Builder + for _, c := range authStr { + ch := string(c) + if ch == CustomPad { + b64.WriteByte('=') + } else { + idx := strings.Index(CustomAlphabet, ch) + if idx >= 0 { + b64.WriteByte(StdAlphabet[idx]) + } else { + b64.WriteString(ch) + } + } + } + + decoded := b64.String() + + // Find the base64-encoded JSON payload starting with "eyJ" + eqPos := strings.Index(decoded, "=") + var head, tail string + if eqPos < 0 { + head = decoded + tail = "" + } else { + tail = decoded[:eqPos] + head = decoded[eqPos+1:] + } + + eyjPos := strings.Index(head, "eyJ") + var reconstructed string + if eyjPos < 0 { + eyjFull := strings.Index(decoded, "eyJ") + if eyjFull < 0 { + return nil, fmt.Errorf("qoder: no JWT payload found in auth field") + } + reconstructed = decoded[eyjFull:] + } else { + reconstructed = head[eyjPos:] + head[:eyjPos] + tail + "=" + } + + // Try decoding with different padding + for _, pad := range []string{"", "=", "==", "==="} { + raw, errDec := base64.StdEncoding.DecodeString(reconstructed + pad) + if errDec != nil { + raw, errDec = base64.RawStdEncoding.DecodeString(reconstructed + pad) + if errDec != nil { + continue + } + } + var result map[string]any + if errJSON := json.Unmarshal(raw, &result); errJSON != nil { + continue + } + return result, nil + } + + return nil, fmt.Errorf("qoder: failed to decode auth field") +} + +// GenerateMachineID creates a deterministic machine identifier. 
+func GenerateMachineID(hostname, macAddr, system, machine string) string { + raw := fmt.Sprintf("%s-%s-%s-%s", hostname, macAddr, system, machine) + digest := sha256.Sum256([]byte(raw)) + encoded := base64.RawURLEncoding.EncodeToString(digest[:]) + var parts []string + for i := 0; i < len(encoded); i += 22 { + end := i + 22 + if end > len(encoded) { + end = len(encoded) + } + parts = append(parts, encoded[i:end]) + } + return strings.Join(parts, "-") +} diff --git a/internal/auth/qoder/constants.go b/internal/auth/qoder/constants.go new file mode 100644 index 0000000000..65d1a34bc4 --- /dev/null +++ b/internal/auth/qoder/constants.go @@ -0,0 +1,41 @@ +// Package qoder provides OAuth2 authentication functionality for the Qoder provider. +package qoder + +// Qoder login configuration +const ( + CallbackPort = 51122 + AuthBase = "https://qoder.com" + CenterBase = "https://center.qoder.sh" + ChatBase = "https://api3.qoder.sh" + OpenAPIBase = "https://openapi.qoder.sh" + IDEVersion = "0.14.2" + CosyVersion = "1.0.0" + RedirectURI = "qoder://aicoding.aicoding-agent/login-success" +) + +// SelectAccountsPath is the browser login page path. +const SelectAccountsPath = "/device/selectAccounts" + +// ServerPublicKeyPEM is the RSA public key for COSY authentication. +const ServerPublicKeyPEM = `-----BEGIN PUBLIC KEY----- +MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDA8iMH5c02LilrsERw9t6Pv5Nc +4k6Pz1EaDicBMpdpxKduSZu5OANqUq8er4GM95omAGIOPOh+Nx0spthYA2BqGz+l +6HRkPJ7S236FZz73In/KVuLnwI8JJ2CbuJap8kvheCCZpmAWpb/cPx/3Vr/J6I17 +XcW+ML9FoCI6AOvOzwIDAQAB +-----END PUBLIC KEY-----` + +// Custom base64 encoding alphabet used by Qoder body encoding. +const ( + CustomPad = "$" + StdAlphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/" + CustomAlphabet = "_doRTgHZBKcGVjlvpC,@aFSx#DPuNJme&i*MzLOEn)sUrthbf%Y^w.(kIQyXqWA!" 
+) + +// Chat endpoint path +const ( + ChatPath = "/algo/api/v2/service/pro/sse/agent_chat_generation" + ChatQueryExtra = "FetchKeys=llm_model_result&AgentId=agent_common" + ModelListPath = "/algo/api/v2/model/list" + UserPlanPath = "/algo/api/v2/user/plan" + UserStatusPath = "/api/v3/user/status" +) diff --git a/internal/auth/qoder/filename.go b/internal/auth/qoder/filename.go new file mode 100644 index 0000000000..1fab171a91 --- /dev/null +++ b/internal/auth/qoder/filename.go @@ -0,0 +1,16 @@ +package qoder + +import ( + "fmt" + "strings" +) + +// CredentialFileName returns the filename used to persist Qoder credentials. +// It uses the uid as a suffix to disambiguate accounts. +func CredentialFileName(uid string) string { + uid = strings.TrimSpace(uid) + if uid == "" { + return "qoder.json" + } + return fmt.Sprintf("qoder-%s.json", uid) +} diff --git a/internal/auth/qoder/uri_handler_other.go b/internal/auth/qoder/uri_handler_other.go new file mode 100644 index 0000000000..e5fff6b05e --- /dev/null +++ b/internal/auth/qoder/uri_handler_other.go @@ -0,0 +1,13 @@ +//go:build !windows + +package qoder + +// RegisterURIHandler is a no-op on non-Windows platforms. +// On Linux/macOS, the qoder:// protocol would need xdg-open or other platform-specific handling. +// For now, users on non-Windows platforms should paste the callback URL manually. +func RegisterURIHandler(callbackPort int) func() { + return func() {} +} + +// UnregisterURIHandler is a no-op on non-Windows platforms. 
+func UnregisterURIHandler() {} diff --git a/internal/auth/qoder/uri_handler_windows.go b/internal/auth/qoder/uri_handler_windows.go new file mode 100644 index 0000000000..79fb4aad77 --- /dev/null +++ b/internal/auth/qoder/uri_handler_windows.go @@ -0,0 +1,89 @@ +//go:build windows + +package qoder + +import ( + "fmt" + "os" + "os/exec" + "path/filepath" + + log "github.com/sirupsen/logrus" +) + +const vbsFileName = "qoder_login_handler.vbs" + +// RegisterURIHandler registers the qoder:// URI protocol handler on Windows. +// It creates a VBS script that forwards the qoder:// callback URL to the local +// HTTP callback server, then registers the protocol in the Windows registry. +// Returns a cleanup function that should be deferred. +func RegisterURIHandler(callbackPort int) func() { + vbsPath := filepath.Join(os.TempDir(), vbsFileName) + + vbsContent := fmt.Sprintf(`Set objHTTP = CreateObject("MSXML2.XMLHTTP") +On Error Resume Next +url = "http://127.0.0.1:%d/forward?url=" +url = url & UrlEncode(WScript.Arguments(0)) +objHTTP.Open "GET", url, False +objHTTP.send + +Function UrlEncode(str) + Dim result, i, c + result = "" + For i = 1 To Len(str) + c = Mid(str, i, 1) + If c = " " Then + result = result & "+" + ElseIf InStr("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_.~", c) > 0 Then + result = result & c + Else + result = result & "%%" & Right("0" & Hex(Asc(c)), 2) + End If + Next + UrlEncode = result +End Function +`, callbackPort) + + if err := os.WriteFile(vbsPath, []byte(vbsContent), 0o644); err != nil { + log.Errorf("qoder: failed to write VBS handler script: %v", err) + return func() {} + } + + regCmds := [][]string{ + {"reg", "add", `HKCU\Software\Classes\qoder`, "/ve", "/t", "REG_SZ", "/d", "URL:QoderLogin", "/f"}, + {"reg", "add", `HKCU\Software\Classes\qoder`, "/v", "URL Protocol", "/t", "REG_SZ", "/d", "", "/f"}, + {"reg", "add", `HKCU\Software\Classes\qoder\shell`, "/f"}, + {"reg", "add", `HKCU\Software\Classes\qoder\shell\open`, 
"/f"}, + {"reg", "add", `HKCU\Software\Classes\qoder\shell\open\command`, + "/ve", "/t", "REG_SZ", "/d", fmt.Sprintf(`wscript.exe "%s" %%1`, vbsPath), "/f"}, + } + + for _, args := range regCmds { + cmd := exec.Command(args[0], args[1:]...) + cmd.Stdout = nil + cmd.Stderr = nil + _ = cmd.Run() + } + + log.Infof("qoder: registered qoder:// URI handler (VBS: %s)", vbsPath) + + return func() { + UnregisterURIHandler() + } +} + +// UnregisterURIHandler removes the qoder:// URI protocol handler from Windows registry +// and cleans up the temporary VBS script. +func UnregisterURIHandler() { + cmd := exec.Command("reg", "delete", `HKCU\Software\Classes\qoder`, "/f") + cmd.Stdout = nil + cmd.Stderr = nil + _ = cmd.Run() + + vbsPath := filepath.Join(os.TempDir(), vbsFileName) + if _, err := os.Stat(vbsPath); err == nil { + _ = os.Remove(vbsPath) + } + + log.Info("qoder: unregistered qoder:// URI handler") +} diff --git a/internal/auth/vertex/vertex_credentials.go b/internal/auth/vertex/vertex_credentials.go index 9f830994ed..db214bd6e2 100644 --- a/internal/auth/vertex/vertex_credentials.go +++ b/internal/auth/vertex/vertex_credentials.go @@ -8,7 +8,7 @@ import ( "os" "path/filepath" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" log "github.com/sirupsen/logrus" ) diff --git a/internal/browser/browser.go b/internal/browser/browser.go index b24dc5e112..d56f877537 100644 --- a/internal/browser/browser.go +++ b/internal/browser/browser.go @@ -144,3 +144,29 @@ func GetPlatformInfo() map[string]interface{} { return info } + +// incognitoMode controls whether to open URLs in incognito/private mode. +var incognitoMode bool + +// lastBrowserCmd stores the last opened browser process for cleanup. +var lastBrowserCmd *exec.Cmd + +// SetIncognitoMode enables or disables incognito/private browsing mode. 
+func SetIncognitoMode(enabled bool) { + incognitoMode = enabled +} + +// IsIncognitoMode returns whether incognito mode is enabled. +func IsIncognitoMode() bool { + return incognitoMode +} + +// CloseBrowser closes the last opened browser process. +func CloseBrowser() error { + if lastBrowserCmd == nil || lastBrowserCmd.Process == nil { + return nil + } + err := lastBrowserCmd.Process.Kill() + lastBrowserCmd = nil + return err +} diff --git a/internal/cmd/anthropic_login.go b/internal/cmd/anthropic_login.go index f7381461a6..cc1bfc8e7c 100644 --- a/internal/cmd/anthropic_login.go +++ b/internal/cmd/anthropic_login.go @@ -6,9 +6,9 @@ import ( "fmt" "os" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/claude" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/claude" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" log "github.com/sirupsen/logrus" ) diff --git a/internal/cmd/antigravity_login.go b/internal/cmd/antigravity_login.go index 2efbaeee01..f2bd5505a2 100644 --- a/internal/cmd/antigravity_login.go +++ b/internal/cmd/antigravity_login.go @@ -4,8 +4,8 @@ import ( "context" "fmt" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" log "github.com/sirupsen/logrus" ) diff --git a/internal/cmd/auth_manager.go b/internal/cmd/auth_manager.go index 2654717901..7896a7023a 100644 --- a/internal/cmd/auth_manager.go +++ b/internal/cmd/auth_manager.go @@ -1,7 +1,7 @@ package cmd import ( - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" ) // newAuthManager creates a new authentication manager instance with all 
supported diff --git a/internal/cmd/bt_login.go b/internal/cmd/bt_login.go new file mode 100644 index 0000000000..54cb17acff --- /dev/null +++ b/internal/cmd/bt_login.go @@ -0,0 +1,80 @@ +package cmd + +import ( + "context" + "fmt" + "os" + + btauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/bt" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +func DoBTLogin(cfg *config.Config) { + if cfg == nil || len(cfg.BTKey) == 0 { + log.Error("No BT credentials configured. Add bt entries to config.yaml.") + return + } + + store := sdkAuth.GetTokenStore() + successCount := 0 + + for i, entry := range cfg.BTKey { + phone := entry.Phone + password := entry.Password + if phone == "" || password == "" { + log.Warnf("bt[%d]: missing phone or password, skipping", i) + continue + } + + fmt.Printf("Logging in BT account: %s ...\n", phone) + + token, err := btauth.Login(phone, password) + if err != nil { + log.Errorf("bt[%d]: login failed for phone %s: %v", i, phone, err) + continue + } + + authID := fmt.Sprintf("bt-%s.json", phone) + auth := &cliproxyauth.Auth{ + ID: authID, + FileName: authID, + Provider: "bt", + Label: phone, + Storage: token, + Metadata: map[string]any{ + "type": "bt", + "bt_phone": phone, + "bt_password": password, + "uid": token.UID, + "access_key": token.AccessKey, + "serverid": token.ServerID, + }, + } + + if cfg != nil { + if dirSetter, ok := store.(interface{ SetBaseDir(string) }); ok { + dirSetter.SetBaseDir(cfg.AuthDir) + } + } + + savedPath, err := store.Save(context.Background(), auth) + if err != nil { + log.Errorf("bt[%d]: failed to save credentials: %v", i, err) + continue + } + + fmt.Printf("BT authentication saved to %s\n", savedPath) + fmt.Printf("Authenticated as %s (uid: %s)\n", phone, token.UID) + successCount++ + } + + if successCount > 0 { + 
fmt.Printf("\nBT authentication successful! (%d account(s))\n", successCount) + } else { + fmt.Println("\nNo BT accounts were authenticated successfully.") + os.Exit(1) + } +} diff --git a/internal/cmd/codearts_login.go b/internal/cmd/codearts_login.go new file mode 100644 index 0000000000..be5f73f525 --- /dev/null +++ b/internal/cmd/codearts_login.go @@ -0,0 +1,37 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +func DoCodeArtsLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + CallbackPort: options.CallbackPort, + Metadata: map[string]string{}, + } + + record, savedPath, err := manager.Login(context.Background(), "codearts", cfg, authOpts) + if err != nil { + log.Errorf("CodeArts authentication failed: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("CodeArts authentication successful!") +} diff --git a/internal/cmd/codebuddy_ai_login.go b/internal/cmd/codebuddy_ai_login.go new file mode 100644 index 0000000000..33a4056be0 --- /dev/null +++ b/internal/cmd/codebuddy_ai_login.go @@ -0,0 +1,36 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +func DoCodeBuddyAILogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: map[string]string{}, + } + + record, savedPath, err 
:= manager.Login(context.Background(), "codebuddy-ai", cfg, authOpts) + if err != nil { + log.Errorf("CodeBuddy AI authentication failed: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("CodeBuddy AI authentication successful!") +} diff --git a/internal/cmd/codebuddy_login.go b/internal/cmd/codebuddy_login.go new file mode 100644 index 0000000000..dd718851c6 --- /dev/null +++ b/internal/cmd/codebuddy_login.go @@ -0,0 +1,43 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +// DoCodeBuddyLogin triggers the browser OAuth polling flow for CodeBuddy and saves tokens. +// It initiates the OAuth authentication, displays the user code for the user to enter +// at the CodeBuddy verification URL, and waits for authorization before saving the tokens. 
+// +// Parameters: +// - cfg: The application configuration containing proxy and auth directory settings +// - options: Login options including browser behavior settings +func DoCodeBuddyLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: map[string]string{}, + } + + record, savedPath, err := manager.Login(context.Background(), "codebuddy", cfg, authOpts) + if err != nil { + log.Errorf("CodeBuddy authentication failed: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("CodeBuddy authentication successful!") +} diff --git a/internal/cmd/cursor_login.go b/internal/cmd/cursor_login.go new file mode 100644 index 0000000000..7317044767 --- /dev/null +++ b/internal/cmd/cursor_login.go @@ -0,0 +1,37 @@ +package cmd + +import ( + "context" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +// DoCursorLogin triggers the OAuth PKCE flow for Cursor and saves tokens. 
+func DoCursorLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: map[string]string{}, + Prompt: options.Prompt, + } + + record, savedPath, err := manager.Login(context.Background(), "cursor", cfg, authOpts) + if err != nil { + log.Errorf("Cursor authentication failed: %v", err) + return + } + + if savedPath != "" { + log.Infof("Authentication saved to %s", savedPath) + } + if record != nil && record.Label != "" { + log.Infof("Authenticated as %s", record.Label) + } + log.Info("Cursor authentication successful!") +} diff --git a/internal/cmd/github_copilot_login.go b/internal/cmd/github_copilot_login.go new file mode 100644 index 0000000000..f605e87d7e --- /dev/null +++ b/internal/cmd/github_copilot_login.go @@ -0,0 +1,44 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +// DoGitHubCopilotLogin triggers the OAuth device flow for GitHub Copilot and saves tokens. +// It initiates the device flow authentication, displays the user code for the user to enter +// at GitHub's verification URL, and waits for authorization before saving the tokens. 
+// +// Parameters: +// - cfg: The application configuration containing proxy and auth directory settings +// - options: Login options including browser behavior settings +func DoGitHubCopilotLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: map[string]string{}, + Prompt: options.Prompt, + } + + record, savedPath, err := manager.Login(context.Background(), "github-copilot", cfg, authOpts) + if err != nil { + log.Errorf("GitHub Copilot authentication failed: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("GitHub Copilot authentication successful!") +} diff --git a/internal/cmd/gitlab_login.go b/internal/cmd/gitlab_login.go new file mode 100644 index 0000000000..df8e56f54a --- /dev/null +++ b/internal/cmd/gitlab_login.go @@ -0,0 +1,69 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" +) + +func DoGitLabLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + promptFn := options.Prompt + if promptFn == nil { + promptFn = defaultProjectPrompt() + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + CallbackPort: options.CallbackPort, + Metadata: map[string]string{ + "login_mode": "oauth", + }, + Prompt: promptFn, + } + + _, savedPath, err := manager.Login(context.Background(), "gitlab", cfg, authOpts) + if err != nil { + fmt.Printf("GitLab Duo authentication failed: %v\n", err) + return + } + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + fmt.Println("GitLab Duo authentication 
successful!") +} + +func DoGitLabTokenLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + promptFn := options.Prompt + if promptFn == nil { + promptFn = defaultProjectPrompt() + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + Metadata: map[string]string{ + "login_mode": "pat", + }, + Prompt: promptFn, + } + + _, savedPath, err := manager.Login(context.Background(), "gitlab", cfg, authOpts) + if err != nil { + fmt.Printf("GitLab Duo PAT authentication failed: %v\n", err) + return + } + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + fmt.Println("GitLab Duo PAT authentication successful!") +} diff --git a/internal/cmd/iflow_cookie.go b/internal/cmd/iflow_cookie.go new file mode 100644 index 0000000000..61408deac8 --- /dev/null +++ b/internal/cmd/iflow_cookie.go @@ -0,0 +1,98 @@ +package cmd + +import ( + "bufio" + "context" + "fmt" + "os" + "path/filepath" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/iflow" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" +) + +// DoIFlowCookieAuth performs the iFlow cookie-based authentication. 
+func DoIFlowCookieAuth(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + promptFn := options.Prompt + if promptFn == nil { + reader := bufio.NewReader(os.Stdin) + promptFn = func(prompt string) (string, error) { + fmt.Print(prompt) + value, err := reader.ReadString('\n') + if err != nil { + return "", err + } + return strings.TrimSpace(value), nil + } + } + + // Prompt user for cookie + cookie, err := promptForCookie(promptFn) + if err != nil { + fmt.Printf("Failed to get cookie: %v\n", err) + return + } + + // Check for duplicate BXAuth before authentication + bxAuth := iflow.ExtractBXAuth(cookie) + if existingFile, err := iflow.CheckDuplicateBXAuth(cfg.AuthDir, bxAuth); err != nil { + fmt.Printf("Failed to check duplicate: %v\n", err) + return + } else if existingFile != "" { + fmt.Printf("Duplicate BXAuth found, authentication already exists: %s\n", filepath.Base(existingFile)) + return + } + + // Authenticate with cookie + auth := iflow.NewIFlowAuth(cfg) + ctx := context.Background() + + tokenData, err := auth.AuthenticateWithCookie(ctx, cookie) + if err != nil { + fmt.Printf("iFlow cookie authentication failed: %v\n", err) + return + } + + // Create token storage + tokenStorage := auth.CreateCookieTokenStorage(tokenData) + + // Get auth file path using email in filename + authFilePath := getAuthFilePath(cfg, "iflow", tokenData.Email) + + // Save token to file + if err := tokenStorage.SaveTokenToFile(authFilePath); err != nil { + fmt.Printf("Failed to save authentication: %v\n", err) + return + } + + fmt.Printf("Authentication successful! 
 API key: %s\n", tokenData.APIKey)
+	fmt.Printf("Expires at: %s\n", tokenData.Expire)
+	fmt.Printf("Authentication saved to: %s\n", authFilePath)
+}
+
+// promptForCookie prompts the user to enter their iFlow cookie
+func promptForCookie(promptFn func(string) (string, error)) (string, error) {
+	line, err := promptFn("Enter iFlow Cookie (from browser cookies): ")
+	if err != nil {
+		return "", fmt.Errorf("failed to read cookie: %w", err)
+	}
+
+	cookie, err := iflow.NormalizeCookie(line)
+	if err != nil {
+		return "", err
+	}
+
+	return cookie, nil
+}
+
+// getAuthFilePath returns the auth file path for the given provider and email
+func getAuthFilePath(cfg *config.Config, provider, email string) string {
+	fileName := iflow.SanitizeIFlowFileName(email)
+	return filepath.Join(cfg.AuthDir, fmt.Sprintf("%s-%s-%d.json", provider, fileName, time.Now().Unix()))
+}
diff --git a/internal/cmd/iflow_login.go b/internal/cmd/iflow_login.go
new file mode 100644
index 0000000000..9015f5be67
--- /dev/null
+++ b/internal/cmd/iflow_login.go
@@ -0,0 +1,48 @@
+package cmd
+
+import (
+	"context"
+	"errors"
+	"fmt"
+
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth"
+	log "github.com/sirupsen/logrus"
+)
+
+// DoIFlowLogin performs the iFlow OAuth login via the shared authentication manager.
+func DoIFlowLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + + promptFn := options.Prompt + if promptFn == nil { + promptFn = defaultProjectPrompt() + } + + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + CallbackPort: options.CallbackPort, + Metadata: map[string]string{}, + Prompt: promptFn, + } + + _, savedPath, err := manager.Login(context.Background(), "iflow", cfg, authOpts) + if err != nil { + if emailErr, ok := errors.AsType[*sdkAuth.EmailRequiredError](err); ok { + log.Error(emailErr.Error()) + return + } + fmt.Printf("iFlow authentication failed: %v\n", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + + fmt.Println("iFlow authentication successful!") +} diff --git a/internal/cmd/joycode_login.go b/internal/cmd/joycode_login.go new file mode 100644 index 0000000000..f1c7a6e9eb --- /dev/null +++ b/internal/cmd/joycode_login.go @@ -0,0 +1,37 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +func DoJoyCodeLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + CallbackPort: options.CallbackPort, + Metadata: map[string]string{}, + } + + record, savedPath, err := manager.Login(context.Background(), "joycode", cfg, authOpts) + if err != nil { + log.Errorf("JoyCode authentication failed: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("JoyCode authentication successful!") +} diff --git a/internal/cmd/kilo_login.go 
b/internal/cmd/kilo_login.go
new file mode 100644
index 0000000000..4f9b37f25b
--- /dev/null
+++ b/internal/cmd/kilo_login.go
@@ -0,0 +1,59 @@
+package cmd
+
+import (
+	"bufio"
+	"context"
+	"fmt"
+	"os"
+	"strings"
+
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth"
+)
+
+// DoKiloLogin handles the Kilo device flow using the shared authentication manager.
+// It initiates the device-based authentication process for Kilo AI services and saves
+// the authentication tokens to the configured auth directory.
+//
+// Parameters:
+//   - cfg: The application configuration
+//   - options: Login options including browser behavior and prompts
+func DoKiloLogin(cfg *config.Config, options *LoginOptions) {
+	if options == nil {
+		options = &LoginOptions{}
+	}
+
+	manager := newAuthManager()
+
+	promptFn := options.Prompt
+	if promptFn == nil {
+		reader := bufio.NewReader(os.Stdin)
+		promptFn = func(prompt string) (string, error) {
+			fmt.Print(prompt)
+			value, err := reader.ReadString('\n')
+			if err != nil {
+				return "", err
+			}
+			return strings.TrimSpace(value), nil
+		}
+	}
+
+	authOpts := &sdkAuth.LoginOptions{
+		NoBrowser:    options.NoBrowser,
+		CallbackPort: options.CallbackPort,
+		Metadata:     map[string]string{},
+		Prompt:       promptFn,
+	}
+
+	_, savedPath, err := manager.Login(context.Background(), "kilo", cfg, authOpts)
+	if err != nil {
+		fmt.Printf("Kilo authentication failed: %v\n", err)
+		return
+	}
+
+	if savedPath != "" {
+		fmt.Printf("Authentication saved to %s\n", savedPath)
+	}
+
+	fmt.Println("Kilo authentication successful!")
+}
diff --git a/internal/cmd/kimi_login.go b/internal/cmd/kimi_login.go
index eb5f11fb37..ffc470fda0 100644
--- a/internal/cmd/kimi_login.go
+++ b/internal/cmd/kimi_login.go
@@ -4,8 +4,8 @@ import (
 	"context"
 	"fmt"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth"
 	log
"github.com/sirupsen/logrus" ) diff --git a/internal/cmd/kiro_login.go b/internal/cmd/kiro_login.go new file mode 100644 index 0000000000..b2fd6f59e5 --- /dev/null +++ b/internal/cmd/kiro_login.go @@ -0,0 +1,257 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +// DoKiroLogin triggers the Kiro authentication flow with Google OAuth. +// This is the default login method (same as --kiro-google-login). +// +// Parameters: +// - cfg: The application configuration +// - options: Login options including Prompt field +func DoKiroLogin(cfg *config.Config, options *LoginOptions) { + // Use Google login as default + DoKiroGoogleLogin(cfg, options) +} + +// DoKiroGoogleLogin triggers Kiro authentication with Google OAuth. +// This uses a custom protocol handler (kiro://) to receive the callback. +// +// Parameters: +// - cfg: The application configuration +// - options: Login options including prompts +func DoKiroGoogleLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + // Note: Kiro defaults to incognito mode for multi-account support. + // Users can override with --no-incognito if they want to use existing browser sessions. + + manager := newAuthManager() + + // Use KiroAuthenticator with Google login + authenticator := sdkAuth.NewKiroAuthenticator() + record, err := authenticator.LoginWithGoogle(context.Background(), cfg, &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: map[string]string{}, + Prompt: options.Prompt, + }) + if err != nil { + log.Errorf("Kiro Google authentication failed: %v", err) + fmt.Println("\nTroubleshooting:") + fmt.Println("1. Make sure the protocol handler is installed") + fmt.Println("2. Complete the Google login in the browser") + fmt.Println("3. 
If callback fails, try: --kiro-import (after logging in via Kiro IDE)") + return + } + + // Save the auth record + savedPath, err := manager.SaveAuth(record, cfg) + if err != nil { + log.Errorf("Failed to save auth: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("Kiro Google authentication successful!") +} + +// DoKiroAWSLogin triggers Kiro authentication with AWS Builder ID. +// This uses the device code flow for AWS SSO OIDC authentication. +// +// Parameters: +// - cfg: The application configuration +// - options: Login options including prompts +func DoKiroAWSLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + // Note: Kiro defaults to incognito mode for multi-account support. + // Users can override with --no-incognito if they want to use existing browser sessions. + + manager := newAuthManager() + + // Use KiroAuthenticator with AWS Builder ID login (device code flow) + authenticator := sdkAuth.NewKiroAuthenticator() + record, err := authenticator.Login(context.Background(), cfg, &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: map[string]string{}, + Prompt: options.Prompt, + }) + if err != nil { + log.Errorf("Kiro AWS authentication failed: %v", err) + fmt.Println("\nTroubleshooting:") + fmt.Println("1. Make sure you have an AWS Builder ID") + fmt.Println("2. Complete the authorization in the browser") + fmt.Println("3. 
If callback fails, try: --kiro-import (after logging in via Kiro IDE)") + return + } + + // Save the auth record + savedPath, err := manager.SaveAuth(record, cfg) + if err != nil { + log.Errorf("Failed to save auth: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("Kiro AWS authentication successful!") +} + +// DoKiroAWSAuthCodeLogin triggers Kiro authentication with AWS Builder ID using authorization code flow. +// This provides a better UX than device code flow as it uses automatic browser callback. +// +// Parameters: +// - cfg: The application configuration +// - options: Login options including prompts +func DoKiroAWSAuthCodeLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + // Note: Kiro defaults to incognito mode for multi-account support. + // Users can override with --no-incognito if they want to use existing browser sessions. + + manager := newAuthManager() + + // Use KiroAuthenticator with AWS Builder ID login (authorization code flow) + authenticator := sdkAuth.NewKiroAuthenticator() + record, err := authenticator.LoginWithAuthCode(context.Background(), cfg, &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: map[string]string{}, + Prompt: options.Prompt, + }) + if err != nil { + log.Errorf("Kiro AWS authentication (auth code) failed: %v", err) + fmt.Println("\nTroubleshooting:") + fmt.Println("1. Make sure you have an AWS Builder ID") + fmt.Println("2. Complete the authorization in the browser") + fmt.Println("3. 
If callback fails, try: --kiro-aws-login (device code flow)") + return + } + + // Save the auth record + savedPath, err := manager.SaveAuth(record, cfg) + if err != nil { + log.Errorf("Failed to save auth: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("Kiro AWS authentication successful!") +} + +// DoKiroImport imports Kiro token from Kiro IDE's token file. +// This is useful for users who have already logged in via Kiro IDE +// and want to use the same credentials in CLI Proxy API. +// +// Parameters: +// - cfg: The application configuration +// - options: Login options (currently unused for import) +func DoKiroImport(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + manager := newAuthManager() + + // Use ImportFromKiroIDE instead of Login + authenticator := sdkAuth.NewKiroAuthenticator() + record, err := authenticator.ImportFromKiroIDE(context.Background(), cfg) + if err != nil { + log.Errorf("Kiro token import failed: %v", err) + fmt.Println("\nMake sure you have logged in to Kiro IDE first:") + fmt.Println("1. Open Kiro IDE") + fmt.Println("2. Click 'Sign in with Google' (or GitHub)") + fmt.Println("3. Complete the login process") + fmt.Println("4. 
Run this command again") + return + } + + // Save the imported auth record + savedPath, err := manager.SaveAuth(record, cfg) + if err != nil { + log.Errorf("Failed to save auth: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Imported as %s\n", record.Label) + } + fmt.Println("Kiro token import successful!") +} + +func DoKiroIDCLogin(cfg *config.Config, options *LoginOptions, startURL, region, flow string) { + if options == nil { + options = &LoginOptions{} + } + + if startURL == "" { + log.Errorf("Kiro IDC login requires --kiro-idc-start-url") + fmt.Println("\nUsage: --kiro-idc-login --kiro-idc-start-url https://d-xxx.awsapps.com/start") + return + } + + manager := newAuthManager() + + authenticator := sdkAuth.NewKiroAuthenticator() + metadata := map[string]string{ + "start-url": startURL, + "region": region, + "flow": flow, + } + + record, err := authenticator.Login(context.Background(), cfg, &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + Metadata: metadata, + Prompt: options.Prompt, + }) + if err != nil { + log.Errorf("Kiro IDC authentication failed: %v", err) + fmt.Println("\nTroubleshooting:") + fmt.Println("1. Make sure your IDC Start URL is correct") + fmt.Println("2. Complete the authorization in the browser") + fmt.Println("3. 
If auth code flow fails, try: --kiro-idc-flow device") + return + } + + savedPath, err := manager.SaveAuth(record, cfg) + if err != nil { + log.Errorf("Failed to save auth: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("Kiro IDC authentication successful!") +} diff --git a/internal/cmd/login.go b/internal/cmd/login.go index 16af718ebb..a71bb28263 100644 --- a/internal/cmd/login.go +++ b/internal/cmd/login.go @@ -17,12 +17,12 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/gemini" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/gemini" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" ) @@ -333,42 +333,10 @@ func performGeminiCLISetup(ctx context.Context, httpClient *http.Client, storage finalProjectID := projectID if responseProjectID != "" { if explicitProject && !strings.EqualFold(responseProjectID, projectID) { - // Check if this is a free user (gen-lang-client projects or free/legacy tier) - isFreeUser := strings.HasPrefix(projectID, "gen-lang-client-") || - strings.EqualFold(tierID, "FREE") || - strings.EqualFold(tierID, "LEGACY") - - if isFreeUser { - // Interactive prompt for free users - 
fmt.Printf("\nGoogle returned a different project ID:\n") - fmt.Printf(" Requested (frontend): %s\n", projectID) - fmt.Printf(" Returned (backend): %s\n\n", responseProjectID) - fmt.Printf(" Backend project IDs have access to preview models (gemini-3-*).\n") - fmt.Printf(" This is normal for free tier users.\n\n") - fmt.Printf("Which project ID would you like to use?\n") - fmt.Printf(" [1] Backend (recommended): %s\n", responseProjectID) - fmt.Printf(" [2] Frontend: %s\n\n", projectID) - fmt.Printf("Enter choice [1]: ") - - reader := bufio.NewReader(os.Stdin) - choice, _ := reader.ReadString('\n') - choice = strings.TrimSpace(choice) - - if choice == "2" { - log.Infof("Using frontend project ID: %s", projectID) - fmt.Println(". Warning: Frontend project IDs may not have access to preview models.") - finalProjectID = projectID - } else { - log.Infof("Using backend project ID: %s (recommended)", responseProjectID) - finalProjectID = responseProjectID - } - } else { - // Pro users: keep requested project ID (original behavior) - log.Warnf("Gemini onboarding returned project %s instead of requested %s; keeping requested project ID.", responseProjectID, projectID) - } - } else { - finalProjectID = responseProjectID + log.Infof("Gemini onboarding: requested project %s maps to backend project %s", projectID, responseProjectID) + log.Infof("Using backend project ID: %s", responseProjectID) } + finalProjectID = responseProjectID } storage.ProjectID = strings.TrimSpace(finalProjectID) diff --git a/internal/cmd/openai_device_login.go b/internal/cmd/openai_device_login.go index 1b7351e63a..3fa9307b9c 100644 --- a/internal/cmd/openai_device_login.go +++ b/internal/cmd/openai_device_login.go @@ -6,9 +6,9 @@ import ( "fmt" "os" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" log "github.com/sirupsen/logrus" ) diff --git a/internal/cmd/openai_login.go b/internal/cmd/openai_login.go index 783a948400..ee8a025067 100644 --- a/internal/cmd/openai_login.go +++ b/internal/cmd/openai_login.go @@ -6,9 +6,9 @@ import ( "fmt" "os" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" log "github.com/sirupsen/logrus" ) diff --git a/internal/cmd/qoder_login.go b/internal/cmd/qoder_login.go new file mode 100644 index 0000000000..a27bf64d8c --- /dev/null +++ b/internal/cmd/qoder_login.go @@ -0,0 +1,44 @@ +package cmd + +import ( + "context" + "fmt" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + log "github.com/sirupsen/logrus" +) + +// DoQoderLogin triggers the PKCE browser flow for the Qoder provider and saves tokens. 
+func DoQoderLogin(cfg *config.Config, options *LoginOptions) { + if options == nil { + options = &LoginOptions{} + } + + promptFn := options.Prompt + if promptFn == nil { + promptFn = defaultProjectPrompt() + } + + manager := newAuthManager() + authOpts := &sdkAuth.LoginOptions{ + NoBrowser: options.NoBrowser, + CallbackPort: options.CallbackPort, + Metadata: map[string]string{}, + Prompt: promptFn, + } + + record, savedPath, err := manager.Login(context.Background(), "qoder", cfg, authOpts) + if err != nil { + log.Errorf("Qoder authentication failed: %v", err) + return + } + + if savedPath != "" { + fmt.Printf("Authentication saved to %s\n", savedPath) + } + if record != nil && record.Label != "" { + fmt.Printf("Authenticated as %s\n", record.Label) + } + fmt.Println("Qoder authentication successful!") +} diff --git a/internal/cmd/run.go b/internal/cmd/run.go index d8c4f01938..38f189b4a9 100644 --- a/internal/cmd/run.go +++ b/internal/cmd/run.go @@ -10,9 +10,9 @@ import ( "syscall" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy" + "github.com/router-for-me/CLIProxyAPI/v7/internal/api" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy" log "github.com/sirupsen/logrus" ) diff --git a/internal/cmd/vertex_import.go b/internal/cmd/vertex_import.go index 4aa0d74b59..ffb6200b1a 100644 --- a/internal/cmd/vertex_import.go +++ b/internal/cmd/vertex_import.go @@ -9,11 +9,11 @@ import ( "os" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/vertex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/vertex" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" ) diff --git a/internal/config/config.go b/internal/config/config.go index 760d43ec4a..0760f987fb 100644 --- a/internal/config/config.go +++ b/internal/config/config.go @@ -13,7 +13,7 @@ import ( "strings" "syscall" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" log "github.com/sirupsen/logrus" "golang.org/x/crypto/bcrypt" "gopkg.in/yaml.v3" @@ -22,6 +22,7 @@ import ( const ( DefaultPanelGitHubRepository = "https://github.com/router-for-me/Cli-Proxy-API-Management-Center" DefaultPprofAddr = "127.0.0.1:8316" + DefaultAuthDir = "~/.cli-proxy-api" ) // Config represents the application's configuration, loaded from a YAML file. @@ -36,6 +37,9 @@ type Config struct { // TLS config controls HTTPS server settings. TLS TLSConfig `yaml:"tls" json:"tls"` + // Home config enables the Redis-based control plane integration. + Home HomeConfig `yaml:"home" json:"-"` + // RemoteManagement nests management-related options under 'remote-management'. RemoteManagement RemoteManagement `yaml:"remote-management" json:"-"` @@ -65,6 +69,11 @@ type Config struct { // UsageStatisticsEnabled toggles in-memory usage aggregation; when false, usage data is discarded. UsageStatisticsEnabled bool `yaml:"usage-statistics-enabled" json:"usage-statistics-enabled"` + // RedisUsageQueueRetentionSeconds controls how long (in seconds) usage queue items + // are retained in memory for the Redis RESP interface (LPOP/RPOP). + // Default: 60. Max: 3600. + RedisUsageQueueRetentionSeconds int `yaml:"redis-usage-queue-retention-seconds" json:"redis-usage-queue-retention-seconds"` + // DisableCooling disables quota cooldown scheduling when true. 
DisableCooling bool `yaml:"disable-cooling" json:"disable-cooling"` @@ -120,6 +129,20 @@ type Config struct { // Used for services that use Vertex AI-style paths but with simple API key authentication. VertexCompatAPIKey []VertexCompatKey `yaml:"vertex-api-key" json:"vertex-api-key"` + // KiroKey defines a list of Kiro (AWS CodeWhisperer) configurations. + KiroKey []KiroKey `yaml:"kiro" json:"kiro"` + + // BTKey defines a list of BaoTa (BT Panel) AI configurations. + BTKey []BTKey `yaml:"bt" json:"bt"` + + // KiroFingerprint defines a global fingerprint configuration for all Kiro requests. + // When set, all Kiro requests will use this fixed fingerprint instead of random generation. + KiroFingerprint *KiroFingerprintConfig `yaml:"kiro-fingerprint,omitempty" json:"kiro-fingerprint,omitempty"` + + // KiroPreferredEndpoint sets the global default preferred endpoint for all Kiro providers. + // Values: "ide" (default, CodeWhisperer) or "cli" (Amazon Q). + KiroPreferredEndpoint string `yaml:"kiro-preferred-endpoint" json:"kiro-preferred-endpoint"` + // AmpCode contains Amp CLI upstream configuration, management restrictions, and model mappings. AmpCode AmpCode `yaml:"ampcode" json:"ampcode"` @@ -134,6 +157,11 @@ type Config struct { // gemini-api-key, codex-api-key, claude-api-key, openai-compatibility, vertex-api-key, and ampcode. OAuthModelAlias map[string][]OAuthModelAlias `yaml:"oauth-model-alias,omitempty" json:"oauth-model-alias,omitempty"` + // IncognitoBrowser enables opening OAuth URLs in incognito/private browsing mode. + // This is useful when you want to login with a different account without logging out + // from your current session. Default: false. + IncognitoBrowser bool `yaml:"incognito-browser" json:"incognito-browser"` + // Payload defines default and override rules for provider payload parameters. 
Payload PayloadConfig `yaml:"payload" json:"payload"` @@ -206,8 +234,9 @@ type QuotaExceeded struct { // SwitchPreviewModel indicates whether to automatically switch to a preview model when a quota is exceeded. SwitchPreviewModel bool `yaml:"switch-preview-model" json:"switch-preview-model"` - // AntigravityCredits indicates whether to retry Antigravity quota_exhausted 429s once - // on the same credential with enabledCreditTypes=["GOOGLE_ONE_AI"]. + // AntigravityCredits enables credits-based last-resort fallback for Claude models. + // When all free-tier auths are exhausted (429/503), the conductor retries with + // an auth that has available Google One AI credits. AntigravityCredits bool `yaml:"antigravity-credits" json:"antigravity-credits"` } @@ -217,15 +246,11 @@ type RoutingConfig struct { // Supported values: "round-robin" (default), "fill-first". Strategy string `yaml:"strategy,omitempty" json:"strategy,omitempty"` - // ClaudeCodeSessionAffinity enables session-sticky routing for Claude Code clients. - // When enabled, requests with the same session ID (extracted from metadata.user_id) - // are routed to the same auth credential when available. - // Deprecated: Use SessionAffinity instead for universal session support. - ClaudeCodeSessionAffinity bool `yaml:"claude-code-session-affinity,omitempty" json:"claude-code-session-affinity,omitempty"` - // SessionAffinity enables universal session-sticky routing for all clients. // Session IDs are extracted from multiple sources: - // X-Session-ID header, Idempotency-Key, metadata.user_id, conversation_id, or message hash. + // metadata.user_id (Claude Code session format), X-Session-ID, Session_id (Codex), + // X-Amp-Thread-Id (Amp CLI thread), X-Client-Request-Id (PI), metadata.user_id, + // conversation_id, or message hash. // Automatic failover is always enabled when bound auth becomes unavailable. 
SessionAffinity bool `yaml:"session-affinity,omitempty" json:"session-affinity,omitempty"` @@ -392,6 +417,9 @@ type ClaudeKey struct { // ExcludedModels lists model IDs that should be excluded for this provider. ExcludedModels []string `yaml:"excluded-models,omitempty" json:"excluded-models,omitempty"` + // DisableCooling disables auth/model cooldown scheduling for this credential when true. + DisableCooling bool `yaml:"disable-cooling,omitempty" json:"disable-cooling,omitempty"` + // Cloak configures request cloaking for non-Claude-Code clients. Cloak *CloakConfig `yaml:"cloak,omitempty" json:"cloak,omitempty"` @@ -447,6 +475,9 @@ type CodexKey struct { // ExcludedModels lists model IDs that should be excluded for this provider. ExcludedModels []string `yaml:"excluded-models,omitempty" json:"excluded-models,omitempty"` + + // DisableCooling disables auth/model cooldown scheduling for this credential when true. + DisableCooling bool `yaml:"disable-cooling,omitempty" json:"disable-cooling,omitempty"` } func (k CodexKey) GetAPIKey() string { return k.APIKey } @@ -491,6 +522,9 @@ type GeminiKey struct { // ExcludedModels lists model IDs that should be excluded for this provider. ExcludedModels []string `yaml:"excluded-models,omitempty" json:"excluded-models,omitempty"` + + // DisableCooling disables auth/model cooldown scheduling for this credential when true. + DisableCooling bool `yaml:"disable-cooling,omitempty" json:"disable-cooling,omitempty"` } func (k GeminiKey) GetAPIKey() string { return k.APIKey } @@ -518,6 +552,9 @@ type OpenAICompatibility struct { // Higher values are preferred; defaults to 0. Priority int `yaml:"priority,omitempty" json:"priority,omitempty"` + // Disabled prevents this provider from being used for routing. + Disabled bool `yaml:"disabled,omitempty" json:"disabled,omitempty"` + // Prefix optionally namespaces model aliases for this provider (e.g., "teamA/kimi-k2"). 
Prefix string `yaml:"prefix,omitempty" json:"prefix,omitempty"` @@ -532,6 +569,9 @@ type OpenAICompatibility struct { // Headers optionally adds extra HTTP headers for requests sent to this provider. Headers map[string]string `yaml:"headers,omitempty" json:"headers,omitempty"` + + // DisableCooling disables auth/model cooldown scheduling for this provider when true. + DisableCooling bool `yaml:"disable-cooling,omitempty" json:"disable-cooling,omitempty"` } // OpenAICompatibilityAPIKey represents an API key configuration with optional proxy setting. @@ -603,7 +643,10 @@ func LoadConfigOptional(configFile string, optional bool) (*Config, error) { cfg.LogsMaxTotalSizeMB = 0 cfg.ErrorLogsMaxFiles = 10 cfg.UsageStatisticsEnabled = false + cfg.RedisUsageQueueRetentionSeconds = 60 + cfg.IncognitoBrowser = false // Default to normal browser (AWS uses incognito by force) cfg.DisableCooling = false + cfg.DisableImageGeneration = DisableImageGenerationOff cfg.Pprof.Enable = false cfg.Pprof.Addr = DefaultPprofAddr cfg.AmpCode.RestrictManagementToLocalhost = false // Default to false: API key auth is sufficient @@ -664,6 +707,13 @@ func LoadConfigOptional(configFile string, optional bool) (*Config, error) { cfg.ErrorLogsMaxFiles = 10 } + if cfg.RedisUsageQueueRetentionSeconds <= 0 { + cfg.RedisUsageQueueRetentionSeconds = 60 + } else if cfg.RedisUsageQueueRetentionSeconds > 3600 { + log.WithField("value", cfg.RedisUsageQueueRetentionSeconds).Warn("redis-usage-queue-retention-seconds too large; clamping to 3600") + cfg.RedisUsageQueueRetentionSeconds = 3600 + } + if cfg.MaxRetryCredentials < 0 { cfg.MaxRetryCredentials = 0 } @@ -695,6 +745,12 @@ func LoadConfigOptional(configFile string, optional bool) (*Config, error) { // Normalize global OAuth model name aliases. 
cfg.SanitizeOAuthModelAlias() + // Sanitize Kiro keys: trim whitespace from credential fields + cfg.SanitizeKiroKeys() + + // Sanitize BT keys: trim whitespace from credential fields + cfg.SanitizeBTKeys() + // Validate raw payload rules and drop invalid entries. cfg.SanitizePayloadRules() @@ -1921,3 +1977,116 @@ func removeLegacyAuthBlock(root *yaml.Node) { } removeMapKey(root, "auth") } + +// KiroKey represents the configuration for Kiro (AWS CodeWhisperer) authentication. +type KiroKey struct { + // TokenFile is the path to the Kiro token file (default: ~/.aws/sso/cache/kiro-auth-token.json) + TokenFile string `yaml:"token-file,omitempty" json:"token-file,omitempty"` + + // AccessToken is the OAuth access token for direct configuration. + AccessToken string `yaml:"access-token,omitempty" json:"access-token,omitempty"` + + // RefreshToken is the OAuth refresh token for token renewal. + RefreshToken string `yaml:"refresh-token,omitempty" json:"refresh-token,omitempty"` + + // ProfileArn is the AWS CodeWhisperer profile ARN. + ProfileArn string `yaml:"profile-arn,omitempty" json:"profile-arn,omitempty"` + + // Region is the AWS region (default: us-east-1). + Region string `yaml:"region,omitempty" json:"region,omitempty"` + + // StartURL is the IAM Identity Center (IDC) start URL for SSO login. + StartURL string `yaml:"start-url,omitempty" json:"start-url,omitempty"` + + // ProxyURL optionally overrides the global proxy for this configuration. + ProxyURL string `yaml:"proxy-url,omitempty" json:"proxy-url,omitempty"` + + // AgentTaskType sets the Kiro API task type. Known values: "vibe", "dev", "chat". + // Leave empty to let API use defaults. Different values may inject different system prompts. + AgentTaskType string `yaml:"agent-task-type,omitempty" json:"agent-task-type,omitempty"` + + // PreferredEndpoint sets the preferred Kiro API endpoint/quota. + // Values: "codewhisperer" (default, IDE quota) or "amazonq" (CLI quota). 
+ PreferredEndpoint string `yaml:"preferred-endpoint,omitempty" json:"preferred-endpoint,omitempty"` +} + +// BTKey represents the configuration for BaoTa (BT Panel) AI authentication. +// Phone is stored in plaintext; password is stored as base64-encoded. +type BTKey struct { + // Phone is the BaoTa account phone number (plaintext). + Phone string `yaml:"phone" json:"phone"` + + // Password is the BaoTa account password (base64-encoded). + Password string `yaml:"password" json:"password"` + + // Prefix optionally namespaces model aliases for this provider. + Prefix string `yaml:"prefix,omitempty" json:"prefix,omitempty"` + + // ProxyURL optionally overrides the global proxy for this configuration. + ProxyURL string `yaml:"proxy-url,omitempty" json:"proxy-url,omitempty"` + + // Models defines optional model aliases for this credential. + Models []OpenAICompatibilityModel `yaml:"models,omitempty" json:"models,omitempty"` + + // ExcludedModels defines models to exclude from listing. + ExcludedModels []string `yaml:"excluded-models,omitempty" json:"excluded-models,omitempty"` + + // Priority controls selection preference. Higher values are preferred; defaults to 0. + Priority int `yaml:"priority,omitempty" json:"priority,omitempty"` + + // Headers optionally adds extra HTTP headers for requests sent to this provider. + Headers map[string]string `yaml:"headers,omitempty" json:"headers,omitempty"` +} + +// KiroFingerprintConfig defines a global fingerprint configuration for Kiro requests. +// When configured, all Kiro requests will use this fixed fingerprint instead of random generation. +// Empty fields will fall back to random selection from built-in pools. 
+type KiroFingerprintConfig struct { + OIDCSDKVersion string `yaml:"oidc-sdk-version,omitempty" json:"oidc-sdk-version,omitempty"` + RuntimeSDKVersion string `yaml:"runtime-sdk-version,omitempty" json:"runtime-sdk-version,omitempty"` + StreamingSDKVersion string `yaml:"streaming-sdk-version,omitempty" json:"streaming-sdk-version,omitempty"` + OSType string `yaml:"os-type,omitempty" json:"os-type,omitempty"` + OSVersion string `yaml:"os-version,omitempty" json:"os-version,omitempty"` + NodeVersion string `yaml:"node-version,omitempty" json:"node-version,omitempty"` + KiroVersion string `yaml:"kiro-version,omitempty" json:"kiro-version,omitempty"` + KiroHash string `yaml:"kiro-hash,omitempty" json:"kiro-hash,omitempty"` +} + +// SanitizeKiroKeys trims whitespace from Kiro credential fields. +func (cfg *Config) SanitizeKiroKeys() { + if cfg == nil || len(cfg.KiroKey) == 0 { + return + } + for i := range cfg.KiroKey { + entry := &cfg.KiroKey[i] + entry.TokenFile = strings.TrimSpace(entry.TokenFile) + entry.AccessToken = strings.TrimSpace(entry.AccessToken) + entry.RefreshToken = strings.TrimSpace(entry.RefreshToken) + entry.ProfileArn = strings.TrimSpace(entry.ProfileArn) + entry.Region = strings.TrimSpace(entry.Region) + entry.ProxyURL = strings.TrimSpace(entry.ProxyURL) + entry.PreferredEndpoint = strings.TrimSpace(entry.PreferredEndpoint) + } +} + +// SanitizeBTKeys trims whitespace and validates BT credential fields. 
+func (cfg *Config) SanitizeBTKeys() { + if cfg == nil || len(cfg.BTKey) == 0 { + return + } + out := make([]BTKey, 0, len(cfg.BTKey)) + for i := range cfg.BTKey { + entry := cfg.BTKey[i] + entry.Phone = strings.TrimSpace(entry.Phone) + entry.Password = strings.TrimSpace(entry.Password) + entry.Prefix = normalizeModelPrefix(entry.Prefix) + entry.ProxyURL = strings.TrimSpace(entry.ProxyURL) + entry.Headers = NormalizeHeaders(entry.Headers) + entry.ExcludedModels = NormalizeExcludedModels(entry.ExcludedModels) + if entry.Phone == "" || entry.Password == "" { + continue + } + out = append(out, entry) + } + cfg.BTKey = out +} diff --git a/internal/config/disable_image_generation_mode.go b/internal/config/disable_image_generation_mode.go new file mode 100644 index 0000000000..1712638b86 --- /dev/null +++ b/internal/config/disable_image_generation_mode.go @@ -0,0 +1,136 @@ +package config + +import ( + "bytes" + "encoding/json" + "fmt" + "strings" + + "gopkg.in/yaml.v3" +) + +// DisableImageGenerationMode is a tri-state config value for disable-image-generation. 
+// +// It supports: +// - false: enabled +// - true: disabled everywhere (including /v1/images/* endpoints) +// - "chat": disabled for all non-images endpoints, but enabled for /v1/images/generations and /v1/images/edits +type DisableImageGenerationMode int + +const ( + DisableImageGenerationOff DisableImageGenerationMode = iota + DisableImageGenerationAll + DisableImageGenerationChat +) + +func (m DisableImageGenerationMode) String() string { + switch m { + case DisableImageGenerationOff: + return "false" + case DisableImageGenerationAll: + return "true" + case DisableImageGenerationChat: + return "chat" + default: + return "false" + } +} + +func (m DisableImageGenerationMode) MarshalYAML() (any, error) { + switch m { + case DisableImageGenerationAll: + return true, nil + case DisableImageGenerationChat: + return "chat", nil + default: + return false, nil + } +} + +func (m *DisableImageGenerationMode) UnmarshalYAML(value *yaml.Node) error { + mode, err := parseDisableImageGenerationNode(value) + if err != nil { + return err + } + *m = mode + return nil +} + +func (m DisableImageGenerationMode) MarshalJSON() ([]byte, error) { + switch m { + case DisableImageGenerationAll: + return []byte("true"), nil + case DisableImageGenerationChat: + return json.Marshal("chat") + default: + return []byte("false"), nil + } +} + +func (m *DisableImageGenerationMode) UnmarshalJSON(data []byte) error { + mode, err := parseDisableImageGenerationJSON(data) + if err != nil { + return err + } + *m = mode + return nil +} + +func parseDisableImageGenerationNode(value *yaml.Node) (DisableImageGenerationMode, error) { + if value == nil { + return DisableImageGenerationOff, nil + } + + // First try a typed bool decode (covers unquoted true/false and YAML 1.1 bools). 
+ var b bool + if err := value.Decode(&b); err == nil && value.Kind == yaml.ScalarNode && value.ShortTag() == "!!bool" { + if b { + return DisableImageGenerationAll, nil + } + return DisableImageGenerationOff, nil + } + + // Fall back to string decoding (covers quoted "true"/"false" and "chat"). + var s string + if err := value.Decode(&s); err != nil { + return DisableImageGenerationOff, fmt.Errorf("invalid disable-image-generation value") + } + return parseDisableImageGenerationString(s) +} + +func parseDisableImageGenerationJSON(data []byte) (DisableImageGenerationMode, error) { + trimmed := bytes.TrimSpace(data) + if len(trimmed) == 0 || bytes.Equal(trimmed, []byte("null")) { + return DisableImageGenerationOff, nil + } + + // bool + var b bool + if err := json.Unmarshal(trimmed, &b); err == nil { + if b { + return DisableImageGenerationAll, nil + } + return DisableImageGenerationOff, nil + } + + // string + var s string + if err := json.Unmarshal(trimmed, &s); err != nil { + return DisableImageGenerationOff, fmt.Errorf("invalid disable-image-generation value") + } + return parseDisableImageGenerationString(s) +} + +func parseDisableImageGenerationString(s string) (DisableImageGenerationMode, error) { + s = strings.TrimSpace(strings.ToLower(s)) + switch s { + case "", "false", "0", "off", "no": + return DisableImageGenerationOff, nil + case "true", "1", "on", "yes": + return DisableImageGenerationAll, nil + case "chat": + return DisableImageGenerationChat, nil + default: + return DisableImageGenerationOff, fmt.Errorf("invalid disable-image-generation value %q (allowed: true, false, chat)", s) + } +} diff --git a/internal/config/disable_image_generation_mode_test.go b/internal/config/disable_image_generation_mode_test.go new file mode 100644 index 0000000000..433a5cbf96 --- /dev/null +++ b/internal/config/disable_image_generation_mode_test.go @@ -0,0 +1,76 @@ +package config + +import ( + "encoding/json" + "testing" + + "gopkg.in/yaml.v3" +) + +func 
TestDisableImageGenerationMode_UnmarshalYAML(t *testing.T) { + type wrapper struct { + V DisableImageGenerationMode `yaml:"disable-image-generation"` + } + + { + var w wrapper + if err := yaml.Unmarshal([]byte("disable-image-generation: false\n"), &w); err != nil { + t.Fatalf("unmarshal false: %v", err) + } + if w.V != DisableImageGenerationOff { + t.Fatalf("false => %v, want %v", w.V, DisableImageGenerationOff) + } + } + + { + var w wrapper + if err := yaml.Unmarshal([]byte("disable-image-generation: true\n"), &w); err != nil { + t.Fatalf("unmarshal true: %v", err) + } + if w.V != DisableImageGenerationAll { + t.Fatalf("true => %v, want %v", w.V, DisableImageGenerationAll) + } + } + + { + var w wrapper + if err := yaml.Unmarshal([]byte("disable-image-generation: chat\n"), &w); err != nil { + t.Fatalf("unmarshal chat: %v", err) + } + if w.V != DisableImageGenerationChat { + t.Fatalf("chat => %v, want %v", w.V, DisableImageGenerationChat) + } + } +} + +func TestDisableImageGenerationMode_UnmarshalJSON(t *testing.T) { + { + var v DisableImageGenerationMode + if err := json.Unmarshal([]byte("false"), &v); err != nil { + t.Fatalf("unmarshal false: %v", err) + } + if v != DisableImageGenerationOff { + t.Fatalf("false => %v, want %v", v, DisableImageGenerationOff) + } + } + + { + var v DisableImageGenerationMode + if err := json.Unmarshal([]byte("true"), &v); err != nil { + t.Fatalf("unmarshal true: %v", err) + } + if v != DisableImageGenerationAll { + t.Fatalf("true => %v, want %v", v, DisableImageGenerationAll) + } + } + + { + var v DisableImageGenerationMode + if err := json.Unmarshal([]byte(`"chat"`), &v); err != nil { + t.Fatalf("unmarshal chat: %v", err) + } + if v != DisableImageGenerationChat { + t.Fatalf("chat => %v, want %v", v, DisableImageGenerationChat) + } + } +} diff --git a/internal/config/home.go b/internal/config/home.go new file mode 100644 index 0000000000..03c9173239 --- /dev/null +++ b/internal/config/home.go @@ -0,0 +1,9 @@ +package config + +// 
HomeConfig configures the optional "home" control plane integration over Redis protocol. +type HomeConfig struct { + Enabled bool `yaml:"enabled" json:"enabled"` + Host string `yaml:"host" json:"-"` + Port int `yaml:"port" json:"-"` + Password string `yaml:"password" json:"-"` +} diff --git a/internal/config/oauth_model_alias_defaults.go b/internal/config/oauth_model_alias_defaults.go new file mode 100644 index 0000000000..5eda56abe5 --- /dev/null +++ b/internal/config/oauth_model_alias_defaults.go @@ -0,0 +1,61 @@ +package config + +import "strings" + +// defaultKiroAliases returns default oauth-model-alias entries for Kiro. +// These aliases expose standard Claude IDs for Kiro-prefixed upstream models. +func defaultKiroAliases() []OAuthModelAlias { + return []OAuthModelAlias{ + // Sonnet 4.6 + {Name: "kiro-claude-sonnet-4-6", Alias: "claude-sonnet-4-6", Fork: true}, + // Sonnet 4.5 + {Name: "kiro-claude-sonnet-4-5", Alias: "claude-sonnet-4-5-20250929", Fork: true}, + {Name: "kiro-claude-sonnet-4-5", Alias: "claude-sonnet-4-5", Fork: true}, + // Sonnet 4 + {Name: "kiro-claude-sonnet-4", Alias: "claude-sonnet-4-20250514", Fork: true}, + {Name: "kiro-claude-sonnet-4", Alias: "claude-sonnet-4", Fork: true}, + // Opus 4.6 + {Name: "kiro-claude-opus-4-6", Alias: "claude-opus-4-6", Fork: true}, + // Opus 4.5 + {Name: "kiro-claude-opus-4-5", Alias: "claude-opus-4-5-20251101", Fork: true}, + {Name: "kiro-claude-opus-4-5", Alias: "claude-opus-4-5", Fork: true}, + // Haiku 4.5 + {Name: "kiro-claude-haiku-4-5", Alias: "claude-haiku-4-5-20251001", Fork: true}, + {Name: "kiro-claude-haiku-4-5", Alias: "claude-haiku-4-5", Fork: true}, + } +} + +// defaultGitHubCopilotAliases returns default oauth-model-alias entries for +// GitHub Copilot Claude models. It exposes hyphen-style IDs used by clients. 
+func defaultGitHubCopilotAliases() []OAuthModelAlias { + return []OAuthModelAlias{ + {Name: "claude-haiku-4.5", Alias: "claude-haiku-4-5", Fork: true}, + {Name: "claude-opus-4.1", Alias: "claude-opus-4-1", Fork: true}, + {Name: "claude-opus-4.5", Alias: "claude-opus-4-5", Fork: true}, + {Name: "claude-opus-4.6", Alias: "claude-opus-4-6", Fork: true}, + {Name: "claude-sonnet-4.5", Alias: "claude-sonnet-4-5", Fork: true}, + {Name: "claude-sonnet-4.6", Alias: "claude-sonnet-4-6", Fork: true}, + } +} + +// GitHubCopilotAliasesFromModels generates oauth-model-alias entries from a dynamic +// list of model IDs fetched from the Copilot API. It auto-creates aliases for +// models whose ID contains a dot (e.g. "claude-opus-4.6" → "claude-opus-4-6"), +// which is the pattern used by Claude models on Copilot. +func GitHubCopilotAliasesFromModels(modelIDs []string) []OAuthModelAlias { + var aliases []OAuthModelAlias + seen := make(map[string]struct{}) + for _, id := range modelIDs { + if !strings.Contains(id, ".") { + continue + } + hyphenID := strings.ReplaceAll(id, ".", "-") + key := id + "→" + hyphenID + if _, ok := seen[key]; ok { + continue + } + seen[key] = struct{}{} + aliases = append(aliases, OAuthModelAlias{Name: id, Alias: hyphenID, Fork: true}) + } + return aliases +} diff --git a/internal/config/parse.go b/internal/config/parse.go new file mode 100644 index 0000000000..283740e5f0 --- /dev/null +++ b/internal/config/parse.go @@ -0,0 +1,89 @@ +package config + +import ( + "fmt" + "strings" + + log "github.com/sirupsen/logrus" + "golang.org/x/crypto/bcrypt" + "gopkg.in/yaml.v3" +) + +// ParseConfigBytes parses a YAML configuration payload into Config and applies the same +// in-memory normalizations as LoadConfigOptional, without persisting any changes to disk. +func ParseConfigBytes(data []byte) (*Config, error) { + if len(data) == 0 { + return nil, fmt.Errorf("config payload is empty") + } + + var cfg Config + // Keep defaults aligned with LoadConfigOptional. 
+ cfg.Host = "" // Default empty: binds to all interfaces (IPv4 + IPv6) + cfg.LoggingToFile = false + cfg.LogsMaxTotalSizeMB = 0 + cfg.ErrorLogsMaxFiles = 10 + cfg.UsageStatisticsEnabled = false + cfg.RedisUsageQueueRetentionSeconds = 60 + cfg.DisableCooling = false + cfg.DisableImageGeneration = DisableImageGenerationOff + cfg.Pprof.Enable = false + cfg.Pprof.Addr = DefaultPprofAddr + cfg.AmpCode.RestrictManagementToLocalhost = false // Default to false: API key auth is sufficient + cfg.RemoteManagement.PanelGitHubRepository = DefaultPanelGitHubRepository + + if err := yaml.Unmarshal(data, &cfg); err != nil { + return nil, fmt.Errorf("parse config payload: %w", err) + } + + // Hash remote management key if plaintext is detected (nested), but do NOT persist. + if cfg.RemoteManagement.SecretKey != "" && !looksLikeBcrypt(cfg.RemoteManagement.SecretKey) { + hashed, errHash := bcrypt.GenerateFromPassword([]byte(cfg.RemoteManagement.SecretKey), bcrypt.DefaultCost) + if errHash != nil { + return nil, fmt.Errorf("hash remote management key: %w", errHash) + } + cfg.RemoteManagement.SecretKey = string(hashed) + } + + cfg.RemoteManagement.PanelGitHubRepository = strings.TrimSpace(cfg.RemoteManagement.PanelGitHubRepository) + if cfg.RemoteManagement.PanelGitHubRepository == "" { + cfg.RemoteManagement.PanelGitHubRepository = DefaultPanelGitHubRepository + } + + cfg.Pprof.Addr = strings.TrimSpace(cfg.Pprof.Addr) + if cfg.Pprof.Addr == "" { + cfg.Pprof.Addr = DefaultPprofAddr + } + + if cfg.LogsMaxTotalSizeMB < 0 { + cfg.LogsMaxTotalSizeMB = 0 + } + + if cfg.ErrorLogsMaxFiles < 0 { + cfg.ErrorLogsMaxFiles = 10 + } + + if cfg.RedisUsageQueueRetentionSeconds <= 0 { + cfg.RedisUsageQueueRetentionSeconds = 60 + } else if cfg.RedisUsageQueueRetentionSeconds > 3600 { + log.WithField("value", cfg.RedisUsageQueueRetentionSeconds).Warn("redis-usage-queue-retention-seconds too large; clamping to 3600") + cfg.RedisUsageQueueRetentionSeconds = 3600 + } + + if cfg.MaxRetryCredentials < 0 { 
+ cfg.MaxRetryCredentials = 0 + } + + // Apply the same sanitization pipeline. + cfg.SanitizeGeminiKeys() + cfg.SanitizeVertexCompatKeys() + cfg.SanitizeCodexKeys() + cfg.SanitizeCodexHeaderDefaults() + cfg.SanitizeClaudeHeaderDefaults() + cfg.SanitizeClaudeKeys() + cfg.SanitizeOpenAICompatibility() + cfg.OAuthExcludedModels = NormalizeOAuthExcludedModels(cfg.OAuthExcludedModels) + cfg.SanitizeOAuthModelAlias() + cfg.SanitizePayloadRules() + + return &cfg, nil +} diff --git a/internal/config/sdk_config.go b/internal/config/sdk_config.go index aa27526d1e..48c0fe5f17 100644 --- a/internal/config/sdk_config.go +++ b/internal/config/sdk_config.go @@ -9,6 +9,16 @@ type SDKConfig struct { // ProxyURL is the URL of an optional proxy server to use for outbound requests. ProxyURL string `yaml:"proxy-url" json:"proxy-url"` + // DisableImageGeneration controls whether the built-in image_generation tool is injected/allowed. + // + // Supported values: + // - false (default): image_generation is enabled everywhere (normal behavior). + // - true: image_generation is disabled everywhere. The server stops injecting it, removes it from request payloads, + // and returns 404 for /v1/images/generations and /v1/images/edits. + // - "chat": disable image_generation injection for all non-images endpoints (e.g. /v1/responses, /v1/chat/completions), + // while keeping /v1/images/generations and /v1/images/edits enabled and preserving image_generation there. + DisableImageGeneration DisableImageGenerationMode `yaml:"disable-image-generation" json:"disable-image-generation"` + // EnableGeminiCLIEndpoint controls whether Gemini CLI internal endpoints (/v1internal:*) are enabled. // Default is false for safety; when false, /v1internal:* requests are rejected. 
EnableGeminiCLIEndpoint bool `yaml:"enable-gemini-cli-endpoint" json:"enable-gemini-cli-endpoint"` diff --git a/internal/constant/constant.go b/internal/constant/constant.go index 58b388a138..7977f3896c 100644 --- a/internal/constant/constant.go +++ b/internal/constant/constant.go @@ -24,4 +24,16 @@ const ( // Antigravity represents the Antigravity response format identifier. Antigravity = "antigravity" + + // Kiro represents the AWS CodeWhisperer (Kiro) provider identifier. + Kiro = "kiro" + + // Kilo represents the Kilo AI provider identifier. + Kilo = "kilo" + + // CodeArts represents the HuaweiCloud CodeArts IDE provider identifier. + CodeArts = "codearts" + + // JoyCode represents the JD JoyCode provider identifier. + JoyCode = "joycode" ) diff --git a/internal/home/client.go b/internal/home/client.go new file mode 100644 index 0000000000..23082cc69c --- /dev/null +++ b/internal/home/client.go @@ -0,0 +1,393 @@ +package home + +import ( + "context" + "encoding/json" + "errors" + "fmt" + "net/http" + "strings" + "sync/atomic" + "time" + + "github.com/redis/go-redis/v9" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + log "github.com/sirupsen/logrus" +) + +const ( + redisKeyConfig = "config" + redisChannelConfig = "config" + redisKeyModels = "models" + redisKeyUsage = "usage" + redisKeyRequestLog = "request-log" + + homeReconnectInterval = time.Second +) + +var ( + ErrDisabled = errors.New("home client disabled") + ErrNotConnected = errors.New("home not connected") + ErrEmptyResponse = errors.New("home returned empty response") + ErrAuthNotFound = errors.New("home auth not found") + ErrConfigNotFound = errors.New("home config not found") + ErrModelsNotFound = errors.New("home models not found") +) + +type Client struct { + homeCfg config.HomeConfig + + cmd *redis.Client + sub *redis.Client + + heartbeatOK atomic.Bool +} + +func New(homeCfg config.HomeConfig) *Client { + return &Client{homeCfg: homeCfg} +} + +func (c *Client) Enabled() bool { + if c 
== nil { + return false + } + return c.homeCfg.Enabled +} + +func (c *Client) HeartbeatOK() bool { + if c == nil { + return false + } + if !c.Enabled() { + return false + } + return c.heartbeatOK.Load() +} + +func (c *Client) Close() { + if c == nil { + return + } + c.heartbeatOK.Store(false) + if c.cmd != nil { + _ = c.cmd.Close() + } + if c.sub != nil { + _ = c.sub.Close() + } + c.cmd = nil + c.sub = nil +} + +func (c *Client) addr() (string, bool) { + if c == nil { + return "", false + } + host := strings.TrimSpace(c.homeCfg.Host) + if host == "" { + return "", false + } + if c.homeCfg.Port <= 0 { + return "", false + } + return fmt.Sprintf("%s:%d", host, c.homeCfg.Port), true +} + +func (c *Client) ensureClients() error { + if c == nil { + return ErrDisabled + } + if !c.Enabled() { + return ErrDisabled + } + addr, ok := c.addr() + if !ok { + return fmt.Errorf("home: invalid address (host=%q port=%d)", c.homeCfg.Host, c.homeCfg.Port) + } + + if c.cmd == nil { + c.cmd = redis.NewClient(&redis.Options{ + Addr: addr, + Password: c.homeCfg.Password, + }) + } + if c.sub == nil { + c.sub = redis.NewClient(&redis.Options{ + Addr: addr, + Password: c.homeCfg.Password, + }) + } + return nil +} + +func (c *Client) Ping(ctx context.Context) error { + if err := c.ensureClients(); err != nil { + return err + } + if c.cmd == nil { + return ErrNotConnected + } + return c.cmd.Ping(ctx).Err() +} + +func (c *Client) GetConfig(ctx context.Context) ([]byte, error) { + if err := c.ensureClients(); err != nil { + return nil, err + } + raw, err := c.cmd.Get(ctx, redisKeyConfig).Bytes() + if errors.Is(err, redis.Nil) { + return nil, ErrConfigNotFound + } + if err != nil { + return nil, err + } + if len(raw) == 0 { + return nil, ErrEmptyResponse + } + return raw, nil +} + +func (c *Client) GetModels(ctx context.Context) ([]byte, error) { + if err := c.ensureClients(); err != nil { + return nil, err + } + raw, err := c.cmd.Get(ctx, redisKeyModels).Bytes() + if errors.Is(err, redis.Nil) { 
+ return nil, ErrModelsNotFound + } + if err != nil { + return nil, err + } + if len(raw) == 0 { + return nil, ErrEmptyResponse + } + return raw, nil +} + +func headersToLowerMap(headers http.Header) map[string]string { + if len(headers) == 0 { + return nil + } + out := make(map[string]string, len(headers)) + for key, values := range headers { + k := strings.ToLower(strings.TrimSpace(key)) + if k == "" { + continue + } + if len(values) == 0 { + out[k] = "" + continue + } + trimmed := make([]string, 0, len(values)) + for _, v := range values { + trimmed = append(trimmed, strings.TrimSpace(v)) + } + out[k] = strings.Join(trimmed, ", ") + } + if len(out) == 0 { + return nil + } + return out +} + +func newAuthDispatchRequest(requestedModel string, sessionID string, headers http.Header, count int) authDispatchRequest { + if count <= 0 { + count = 1 + } + return authDispatchRequest{ + Type: "auth", + Model: requestedModel, + Count: count, + SessionID: strings.TrimSpace(sessionID), + Headers: headersToLowerMap(headers), + } +} + +func (c *Client) RPopAuth(ctx context.Context, requestedModel string, sessionID string, headers http.Header, count int) ([]byte, error) { + if err := c.ensureClients(); err != nil { + return nil, err + } + requestedModel = strings.TrimSpace(requestedModel) + if requestedModel == "" { + return nil, fmt.Errorf("home: requested model is empty") + } + req := newAuthDispatchRequest(requestedModel, sessionID, headers, count) + keyBytes, err := json.Marshal(&req) + if err != nil { + return nil, err + } + + raw, err := c.cmd.RPop(ctx, string(keyBytes)).Bytes() + if errors.Is(err, redis.Nil) { + return nil, ErrAuthNotFound + } + if err != nil { + return nil, err + } + if len(raw) == 0 { + return nil, ErrEmptyResponse + } + return raw, nil +} + +func (c *Client) GetRefreshAuth(ctx context.Context, authIndex string) ([]byte, error) { + if err := c.ensureClients(); err != nil { + return nil, err + } + authIndex = strings.TrimSpace(authIndex) + if authIndex 
== "" { + return nil, fmt.Errorf("home: auth_index is empty") + } + req := refreshRequest{ + Type: "refresh", + AuthIndex: authIndex, + } + keyBytes, err := json.Marshal(&req) + if err != nil { + return nil, err + } + + raw, err := c.cmd.Get(ctx, string(keyBytes)).Bytes() + if errors.Is(err, redis.Nil) { + return nil, ErrAuthNotFound + } + if err != nil { + return nil, err + } + if len(raw) == 0 { + return nil, ErrEmptyResponse + } + return raw, nil +} + +func (c *Client) LPushUsage(ctx context.Context, payload []byte) error { + if err := c.ensureClients(); err != nil { + return err + } + if len(payload) == 0 { + return nil + } + return c.cmd.LPush(ctx, redisKeyUsage, payload).Err() +} + +func (c *Client) RPushRequestLog(ctx context.Context, payload []byte) error { + if err := c.ensureClients(); err != nil { + return err + } + if len(payload) == 0 { + return nil + } + return c.cmd.RPush(ctx, redisKeyRequestLog, payload).Err() +} + +// StartConfigSubscriber connects to home, fetches config once via GET config, then subscribes to +// the "config" channel to receive runtime config updates. +// +// The subscription connection is treated as the home heartbeat. HeartbeatOK is set to true only +// after the initial GET config succeeds and the SUBSCRIBE connection is established. When the +// subscription ends unexpectedly, HeartbeatOK becomes false and the loop reconnects. 
+func (c *Client) StartConfigSubscriber(ctx context.Context, onConfig func([]byte) error) { + if c == nil { + return + } + if !c.Enabled() { + return + } + if onConfig == nil { + return + } + + for { + if ctx != nil { + select { + case <-ctx.Done(): + c.heartbeatOK.Store(false) + return + default: + } + } + + c.heartbeatOK.Store(false) + c.Close() + + if errEnsure := c.ensureClients(); errEnsure != nil { + log.Warn("unable to connect to home control center, retrying in 1 second") + sleepWithContext(ctx, homeReconnectInterval) + continue + } + + if errPing := c.Ping(ctx); errPing != nil { + log.Warn("unable to connect to home control center, retrying in 1 second") + sleepWithContext(ctx, homeReconnectInterval) + continue + } + + raw, errGet := c.GetConfig(ctx) + if errGet != nil { + log.Warn("unable to fetch config from home control center, retrying in 1 second") + sleepWithContext(ctx, homeReconnectInterval) + continue + } + if errApply := onConfig(raw); errApply != nil { + log.Warn("unable to apply config from home control center, retrying in 1 second") + sleepWithContext(ctx, homeReconnectInterval) + continue + } + + if c.sub == nil { + sleepWithContext(ctx, homeReconnectInterval) + continue + } + + pubsub := c.sub.Subscribe(ctx, redisChannelConfig) + if pubsub == nil { + sleepWithContext(ctx, homeReconnectInterval) + continue + } + + // Ensure the subscription is established before marking heartbeat OK. 
+ if _, errReceive := pubsub.Receive(ctx); errReceive != nil { + _ = pubsub.Close() + sleepWithContext(ctx, homeReconnectInterval) + continue + } + + c.heartbeatOK.Store(true) + + for { + msg, errMsg := pubsub.ReceiveMessage(ctx) + if errMsg != nil { + _ = pubsub.Close() + c.heartbeatOK.Store(false) + sleepWithContext(ctx, homeReconnectInterval) + break + } + if msg == nil { + continue + } + if payload := strings.TrimSpace(msg.Payload); payload != "" { + if errApply := onConfig([]byte(payload)); errApply != nil { + log.Warn("failed to apply config update from home control center, ignoring") + } + } + } + } +} + +func sleepWithContext(ctx context.Context, d time.Duration) { + if d <= 0 { + return + } + timer := time.NewTimer(d) + defer timer.Stop() + if ctx == nil { + <-timer.C + return + } + select { + case <-ctx.Done(): + return + case <-timer.C: + return + } +} diff --git a/internal/home/client_test.go b/internal/home/client_test.go new file mode 100644 index 0000000000..625e77bcac --- /dev/null +++ b/internal/home/client_test.go @@ -0,0 +1,32 @@ +package home + +import ( + "encoding/json" + "net/http" + "testing" +) + +func TestAuthDispatchRequestIncludesCount(t *testing.T) { + req := newAuthDispatchRequest("gpt-5.4", "session-1", http.Header{"Authorization": {"Bearer test"}}, 2) + + raw, err := json.Marshal(&req) + if err != nil { + t.Fatalf("marshal auth dispatch request: %v", err) + } + + var payload map[string]any + if err := json.Unmarshal(raw, &payload); err != nil { + t.Fatalf("unmarshal auth dispatch request: %v", err) + } + if got := int(payload["count"].(float64)); got != 2 { + t.Fatalf("count = %d, want 2", got) + } +} + +func TestAuthDispatchRequestDefaultsCountToOne(t *testing.T) { + req := newAuthDispatchRequest("gpt-5.4", "", nil, 0) + + if req.Count != 1 { + t.Fatalf("count = %d, want 1", req.Count) + } +} diff --git a/internal/home/global.go b/internal/home/global.go new file mode 100644 index 0000000000..a79121a487 --- /dev/null +++ 
b/internal/home/global.go @@ -0,0 +1,25 @@ +package home + +import "sync/atomic" + +var currentClient atomic.Value // *Client + +// SetCurrent sets the active home client used by runtime integrations. +func SetCurrent(client *Client) { + currentClient.Store(client) +} + +// Current returns the active home client instance, if any. +func Current() *Client { + if v := currentClient.Load(); v != nil { + if client, ok := v.(*Client); ok { + return client + } + } + return nil +} + +// ClearCurrent removes the active home client. +func ClearCurrent() { + currentClient.Store((*Client)(nil)) +} diff --git a/internal/home/requests.go b/internal/home/requests.go new file mode 100644 index 0000000000..0757766468 --- /dev/null +++ b/internal/home/requests.go @@ -0,0 +1,14 @@ +package home + +type authDispatchRequest struct { + Type string `json:"type"` + Model string `json:"model"` + Count int `json:"count"` + SessionID string `json:"session_id,omitempty"` + Headers map[string]string `json:"headers,omitempty"` +} + +type refreshRequest struct { + Type string `json:"type"` + AuthIndex string `json:"auth_index"` +} diff --git a/internal/interfaces/types.go b/internal/interfaces/types.go index 9fb1e7f3b8..dfdfc02a84 100644 --- a/internal/interfaces/types.go +++ b/internal/interfaces/types.go @@ -3,7 +3,7 @@ // transformation operations, maintaining compatibility with the SDK translator package. package interfaces -import sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" +import sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" // Backwards compatible aliases for translator function types. 
type TranslateRequestFunc = sdktranslator.RequestTransform diff --git a/internal/logging/gin_logger.go b/internal/logging/gin_logger.go index b94d7afe6d..6e3559b8c3 100644 --- a/internal/logging/gin_logger.go +++ b/internal/logging/gin_logger.go @@ -12,7 +12,7 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" ) @@ -20,13 +20,17 @@ import ( var aiAPIPrefixes = []string{ "/v1/chat/completions", "/v1/completions", + "/v1/images", "/v1/messages", "/v1/responses", "/v1beta/models/", "/api/provider/", } -const skipGinLogKey = "__gin_skip_request_logging__" +const ( + skipGinLogKey = "__gin_skip_request_logging__" + creditsUsedKey = "__antigravity_credits_used__" +) // GinLogrusLogger returns a Gin middleware handler that logs HTTP requests and responses // using logrus. It captures request details including method, path, status code, latency, @@ -78,6 +82,9 @@ func GinLogrusLogger() gin.HandlerFunc { requestID = "--------" } logLine := fmt.Sprintf("%3d | %13v | %15s | %-7s \"%s\"", statusCode, latency, clientIP, method, path) + if creditsUsed(c) { + logLine += " [credits]" + } if errorMessage != "" { logLine = logLine + " | " + errorMessage } @@ -148,3 +155,15 @@ func shouldSkipGinRequestLogging(c *gin.Context) bool { flag, ok := val.(bool) return ok && flag } + +func creditsUsed(c *gin.Context) bool { + if c == nil { + return false + } + val, exists := c.Get(creditsUsedKey) + if !exists { + return false + } + flag, ok := val.(bool) + return ok && flag +} diff --git a/internal/logging/gin_logger_test.go b/internal/logging/gin_logger_test.go index 7de1833865..9bd3ddfba6 100644 --- a/internal/logging/gin_logger_test.go +++ b/internal/logging/gin_logger_test.go @@ -58,3 +58,12 @@ func TestGinLogrusRecoveryHandlesRegularPanic(t *testing.T) { t.Fatalf("expected 500, got %d", recorder.Code) } } + +func TestIsAIAPIPathIncludesImages(t 
*testing.T) { + if !isAIAPIPath("/v1/images/generations") { + t.Fatalf("expected /v1/images/generations to be treated as AI API path") + } + if !isAIAPIPath("/v1/images/edits") { + t.Fatalf("expected /v1/images/edits to be treated as AI API path") + } +} diff --git a/internal/logging/global_logger.go b/internal/logging/global_logger.go index 372222a545..4b4ef62c85 100644 --- a/internal/logging/global_logger.go +++ b/internal/logging/global_logger.go @@ -10,8 +10,8 @@ import ( "sync" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "gopkg.in/natefinch/lumberjack.v2" ) diff --git a/internal/logging/request_logger.go b/internal/logging/request_logger.go index 2db2a504d3..44b2c95264 100644 --- a/internal/logging/request_logger.go +++ b/internal/logging/request_logger.go @@ -8,6 +8,8 @@ import ( "bytes" "compress/flate" "compress/gzip" + "context" + "encoding/json" "fmt" "io" "os" @@ -22,13 +24,23 @@ import ( "github.com/klauspost/compress/zstd" log "github.com/sirupsen/logrus" - "github.com/router-for-me/CLIProxyAPI/v6/internal/buildinfo" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/buildinfo" + "github.com/router-for-me/CLIProxyAPI/v7/internal/home" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" ) var requestLogID atomic.Uint64 +type homeRequestLogClient interface { + HeartbeatOK() bool + RPushRequestLog(ctx context.Context, payload []byte) error +} + +var currentHomeRequestLogClient = func() homeRequestLogClient { + return home.Current() +} + // RequestLogger defines the interface for logging HTTP requests and responses. 
// It provides methods for logging both regular and streaming HTTP request/response cycles. type RequestLogger interface { @@ -148,6 +160,58 @@ type FileRequestLogger struct { // errorLogsMaxFiles limits the number of error log files retained. errorLogsMaxFiles int + + homeEnabled bool +} + +type homeRequestLogPayload struct { + Headers map[string][]string `json:"headers,omitempty"` + RequestLog string `json:"request_log,omitempty"` +} + +func cloneHeaders(headers map[string][]string) map[string][]string { + if len(headers) == 0 { + return nil + } + out := make(map[string][]string, len(headers)) + for key, values := range headers { + if strings.TrimSpace(key) == "" { + continue + } + if values == nil { + out[key] = nil + continue + } + copied := make([]string, len(values)) + copy(copied, values) + out[key] = copied + } + if len(out) == 0 { + return nil + } + return out +} + +func (l *FileRequestLogger) forwardRequestLogToHome(ctx context.Context, headers map[string][]string, logText string) error { + if l == nil || !l.homeEnabled { + return nil + } + client := currentHomeRequestLogClient() + if client == nil || !client.HeartbeatOK() { + return nil + } + payload := homeRequestLogPayload{ + Headers: cloneHeaders(headers), + RequestLog: logText, + } + raw, errMarshal := json.Marshal(&payload) + if errMarshal != nil { + return errMarshal + } + if ctx == nil { + ctx = context.Background() + } + return client.RPushRequestLog(ctx, raw) } // NewFileRequestLogger creates a new file-based request logger. @@ -173,7 +237,17 @@ func NewFileRequestLogger(enabled bool, logsDir string, configDir string, errorL enabled: enabled, logsDir: logsDir, errorLogsMaxFiles: errorLogsMaxFiles, + homeEnabled: false, + } +} + +// SetHomeEnabled toggles home request-log forwarding. +// When enabled, request logs are not written to disk and are instead forwarded to home via Redis RESP. 
+func (l *FileRequestLogger) SetHomeEnabled(enabled bool) { + if l == nil { + return } + l.homeEnabled = enabled } // IsEnabled returns whether request logging is currently enabled. @@ -231,6 +305,38 @@ func (l *FileRequestLogger) logRequest(url, method string, requestHeaders map[st return nil } + if l.homeEnabled && l.enabled { + responseToWrite, decompressErr := l.decompressResponse(responseHeaders, response) + if decompressErr != nil { + responseToWrite = response + } + + var buf bytes.Buffer + writeErr := l.writeNonStreamingLog( + &buf, + url, + method, + requestHeaders, + body, + "", + websocketTimeline, + apiRequest, + apiResponse, + apiWebsocketTimeline, + apiResponseErrors, + statusCode, + responseHeaders, + responseToWrite, + decompressErr, + requestTimestamp, + apiResponseTimestamp, + ) + if writeErr != nil { + return fmt.Errorf("failed to build request log content: %w", writeErr) + } + return l.forwardRequestLogToHome(context.Background(), requestHeaders, buf.String()) + } + // Ensure logs directory exists if errEnsure := l.ensureLogsDir(); errEnsure != nil { return fmt.Errorf("failed to create logs directory: %w", errEnsure) @@ -321,6 +427,14 @@ func (l *FileRequestLogger) LogStreamingRequest(url, method string, headers map[ return &NoOpStreamingLogWriter{}, nil } + if l.homeEnabled { + client := home.Current() + if client == nil || !client.HeartbeatOK() { + return &NoOpStreamingLogWriter{}, nil + } + return newHomeStreamingLogWriter(url, method, headers, body, requestID), nil + } + // Ensure logs directory exists if err := l.ensureLogsDir(); err != nil { return nil, fmt.Errorf("failed to create logs directory: %w", err) @@ -1498,3 +1612,165 @@ func (w *NoOpStreamingLogWriter) SetFirstChunkTimestamp(_ time.Time) {} // Returns: // - error: Always returns nil func (w *NoOpStreamingLogWriter) Close() error { return nil } + +type homeStreamingLogWriter struct { + url string + method string + timestamp time.Time + + requestHeaders map[string][]string + 
requestBody []byte + + chunkChan chan []byte + doneChan chan struct{} + + responseStatus int + statusWritten bool + responseHeaders map[string][]string + responseBody bytes.Buffer + apiRequest []byte + apiResponse []byte + apiWebsocketTime []byte + apiResponseTS time.Time + firstChunkTS time.Time +} + +func newHomeStreamingLogWriter(url, method string, headers map[string][]string, body []byte, _ string) *homeStreamingLogWriter { + requestHeaders := make(map[string][]string, len(headers)) + for key, values := range headers { + headerValues := make([]string, len(values)) + copy(headerValues, values) + requestHeaders[key] = headerValues + } + + writer := &homeStreamingLogWriter{ + url: url, + method: method, + timestamp: time.Now(), + requestHeaders: requestHeaders, + requestBody: append([]byte(nil), body...), + chunkChan: make(chan []byte, 100), + doneChan: make(chan struct{}), + } + + go writer.asyncWriter() + return writer +} + +func (w *homeStreamingLogWriter) asyncWriter() { + defer close(w.doneChan) + for chunk := range w.chunkChan { + if len(chunk) == 0 { + continue + } + _, _ = w.responseBody.Write(chunk) + } +} + +func (w *homeStreamingLogWriter) WriteChunkAsync(chunk []byte) { + if w == nil || w.chunkChan == nil || len(chunk) == 0 { + return + } + select { + case w.chunkChan <- append([]byte(nil), chunk...): + default: + } +} + +func (w *homeStreamingLogWriter) WriteStatus(status int, headers map[string][]string) error { + if w == nil || status == 0 { + return nil + } + w.responseStatus = status + w.statusWritten = true + if headers != nil { + w.responseHeaders = make(map[string][]string, len(headers)) + for key, values := range headers { + copied := make([]string, len(values)) + copy(copied, values) + w.responseHeaders[key] = copied + } + } + return nil +} + +func (w *homeStreamingLogWriter) WriteAPIRequest(apiRequest []byte) error { + if w == nil || len(apiRequest) == 0 { + return nil + } + w.apiRequest = bytes.Clone(apiRequest) + return nil +} + +func (w 
*homeStreamingLogWriter) WriteAPIResponse(apiResponse []byte) error { + if w == nil || len(apiResponse) == 0 { + return nil + } + w.apiResponse = bytes.Clone(apiResponse) + return nil +} + +func (w *homeStreamingLogWriter) WriteAPIWebsocketTimeline(apiWebsocketTimeline []byte) error { + if w == nil || len(apiWebsocketTimeline) == 0 { + return nil + } + w.apiWebsocketTime = bytes.Clone(apiWebsocketTimeline) + return nil +} + +func (w *homeStreamingLogWriter) SetFirstChunkTimestamp(timestamp time.Time) { + if w == nil { + return + } + if !timestamp.IsZero() { + w.firstChunkTS = timestamp + w.apiResponseTS = timestamp + } +} + +func (w *homeStreamingLogWriter) Close() error { + if w == nil { + return nil + } + + client := currentHomeRequestLogClient() + if client == nil || !client.HeartbeatOK() { + return nil + } + + if w.chunkChan != nil { + close(w.chunkChan) + <-w.doneChan + w.chunkChan = nil + } + + responsePayload := w.responseBody.Bytes() + + var buf bytes.Buffer + upstreamTransport := inferUpstreamTransport(w.apiRequest, w.apiResponse, w.apiWebsocketTime, nil) + if errWrite := writeRequestInfoWithBody(&buf, w.url, w.method, w.requestHeaders, w.requestBody, "", w.timestamp, "http", upstreamTransport, true); errWrite != nil { + return errWrite + } + if errWrite := writeAPISection(&buf, "=== API WEBSOCKET TIMELINE ===\n", "=== API WEBSOCKET TIMELINE", w.apiWebsocketTime, time.Time{}); errWrite != nil { + return errWrite + } + if errWrite := writeAPISection(&buf, "=== API REQUEST ===\n", "=== API REQUEST", w.apiRequest, time.Time{}); errWrite != nil { + return errWrite + } + if errWrite := writeAPISection(&buf, "=== API RESPONSE ===\n", "=== API RESPONSE", w.apiResponse, w.apiResponseTS); errWrite != nil { + return errWrite + } + if errWrite := writeResponseSection(&buf, w.responseStatus, w.statusWritten, w.responseHeaders, bytes.NewReader(responsePayload), nil, false); errWrite != nil { + return errWrite + } + + payload := homeRequestLogPayload{ + Headers: 
cloneHeaders(w.requestHeaders), + RequestLog: buf.String(), + } + raw, errMarshal := json.Marshal(&payload) + if errMarshal != nil { + return errMarshal + } + return client.RPushRequestLog(context.Background(), raw) +} diff --git a/internal/logging/request_logger_home_test.go b/internal/logging/request_logger_home_test.go new file mode 100644 index 0000000000..f8cdf1e453 --- /dev/null +++ b/internal/logging/request_logger_home_test.go @@ -0,0 +1,154 @@ +package logging + +import ( + "bytes" + "context" + "encoding/json" + "net/http" + "os" + "testing" + "time" +) + +type stubHomeRequestLogClient struct { + heartbeatOK bool + pushed [][]byte +} + +func (c *stubHomeRequestLogClient) HeartbeatOK() bool { return c.heartbeatOK } + +func (c *stubHomeRequestLogClient) RPushRequestLog(_ context.Context, payload []byte) error { + c.pushed = append(c.pushed, bytes.Clone(payload)) + return nil +} + +func TestFileRequestLogger_HomeEnabled_ForwardsWhenRequestLogEnabled(t *testing.T) { + original := currentHomeRequestLogClient + defer func() { + currentHomeRequestLogClient = original + }() + + stub := &stubHomeRequestLogClient{heartbeatOK: true} + currentHomeRequestLogClient = func() homeRequestLogClient { + return stub + } + + logsDir := t.TempDir() + logger := NewFileRequestLogger(true, logsDir, "", 0) + logger.SetHomeEnabled(true) + + requestHeaders := map[string][]string{ + "Content-Type": {"application/json"}, + "Authorization": {"Bearer secret"}, + } + + errLog := logger.LogRequest( + "/v1/chat/completions", + http.MethodPost, + requestHeaders, + []byte(`{"input":"hello"}`), + http.StatusOK, + map[string][]string{"Content-Type": {"application/json"}}, + []byte(`{"ok":true}`), + nil, + nil, + nil, + nil, + nil, + "req-1", + time.Now(), + time.Now(), + ) + if errLog != nil { + t.Fatalf("LogRequest error: %v", errLog) + } + + entries, errRead := os.ReadDir(logsDir) + if errRead != nil { + t.Fatalf("failed to read logs dir: %v", errRead) + } + if len(entries) != 0 { + 
t.Fatalf("expected no local request log files, got entries: %+v", entries) + } + + if len(stub.pushed) != 1 { + t.Fatalf("home pushed records = %d, want 1", len(stub.pushed)) + } + + var got struct { + Headers map[string][]string `json:"headers"` + RequestLog string `json:"request_log"` + } + if errUnmarshal := json.Unmarshal(stub.pushed[0], &got); errUnmarshal != nil { + t.Fatalf("unmarshal payload: %v payload=%s", errUnmarshal, string(stub.pushed[0])) + } + if got.Headers == nil || got.Headers["Content-Type"][0] != "application/json" { + t.Fatalf("headers.content-type = %+v, want application/json", got.Headers["Content-Type"]) + } + if got.Headers == nil || got.Headers["Authorization"][0] != "Bearer secret" { + t.Fatalf("headers.authorization = %+v, want Bearer secret", got.Headers["Authorization"]) + } + if got.RequestLog == "" { + t.Fatalf("request_log empty, want non-empty") + } +} + +func TestFileRequestLogger_HomeEnabled_DoesNotForwardForcedErrorLogsWhenRequestLogDisabled(t *testing.T) { + original := currentHomeRequestLogClient + defer func() { + currentHomeRequestLogClient = original + }() + + stub := &stubHomeRequestLogClient{heartbeatOK: true} + currentHomeRequestLogClient = func() homeRequestLogClient { + return stub + } + + logsDir := t.TempDir() + logger := NewFileRequestLogger(false, logsDir, "", 0) + logger.SetHomeEnabled(true) + + errLog := logger.LogRequestWithOptions( + "/v1/chat/completions", + http.MethodPost, + map[string][]string{"Content-Type": {"application/json"}}, + []byte(`{"input":"hello"}`), + http.StatusBadGateway, + map[string][]string{"Content-Type": {"application/json"}}, + []byte(`{"error":"upstream failure"}`), + nil, + nil, + nil, + nil, + nil, + true, + "req-2", + time.Now(), + time.Now(), + ) + if errLog != nil { + t.Fatalf("LogRequestWithOptions error: %v", errLog) + } + + if len(stub.pushed) != 0 { + t.Fatalf("home pushed records = %d, want 0", len(stub.pushed)) + } + + entries, errRead := os.ReadDir(logsDir) + if errRead != 
nil { + t.Fatalf("failed to read logs dir: %v", errRead) + } + found := false + for _, entry := range entries { + if entry.IsDir() { + continue + } + if entry.Name() != "" { + found = true + break + } + } + if !found { + t.Fatalf("expected local forced error log file when request-log disabled") + } +} diff --git a/internal/logging/requestmeta.go b/internal/logging/requestmeta.go new file mode 100644 index 0000000000..a28d7c6287 --- /dev/null +++ b/internal/logging/requestmeta.go @@ -0,0 +1,62 @@ +package logging + +import ( + "context" + "sync/atomic" +) + +type endpointKey struct{} +type responseStatusKey struct{} + +type responseStatusHolder struct { + status atomic.Int32 +} + +func WithEndpoint(ctx context.Context, endpoint string) context.Context { + if ctx == nil { + ctx = context.Background() + } + return context.WithValue(ctx, endpointKey{}, endpoint) +} + +func GetEndpoint(ctx context.Context) string { + if ctx == nil { + return "" + } + if endpoint, ok := ctx.Value(endpointKey{}).(string); ok { + return endpoint + } + return "" +} + +func WithResponseStatusHolder(ctx context.Context) context.Context { + if ctx == nil { + ctx = context.Background() + } + if holder, ok := ctx.Value(responseStatusKey{}).(*responseStatusHolder); ok && holder != nil { + return ctx + } + return context.WithValue(ctx, responseStatusKey{}, &responseStatusHolder{}) +} + +func SetResponseStatus(ctx context.Context, status int) { + if ctx == nil || status <= 0 { + return + } + holder, ok := ctx.Value(responseStatusKey{}).(*responseStatusHolder) + if !ok || holder == nil { + return + } + holder.status.Store(int32(status)) +} + +func GetResponseStatus(ctx context.Context) int { + if ctx == nil { + return 0 + } + holder, ok := ctx.Value(responseStatusKey{}).(*responseStatusHolder) + if !ok || holder == nil { + return 0 + } + return int(holder.status.Load()) +} diff --git a/internal/managementasset/embed.go b/internal/managementasset/embed.go new file mode 100644 index 
0000000000..b4be5eafb7 --- /dev/null +++ b/internal/managementasset/embed.go @@ -0,0 +1,41 @@ +package managementasset + +import ( + "embed" + "io/fs" + "net/http" +) + +//go:embed all:web_static +var embeddedWebFS embed.FS + +func GetEmbeddedFileSystem() http.FileSystem { + sub, err := fs.Sub(embeddedWebFS, "web_static") + if err != nil { + return nil + } + return http.FS(sub) +} + +func GetEmbeddedFS() fs.FS { + sub, err := fs.Sub(embeddedWebFS, "web_static") + if err != nil { + return nil + } + return sub +} + +func HasEmbeddedAssets() bool { + fsys := GetEmbeddedFileSystem() + if fsys == nil { + return false + } + f, err := fsys.Open("index.html") + if err != nil { + return false + } + defer func() { + _ = f.Close() + }() + return true +} diff --git a/internal/managementasset/updater.go b/internal/managementasset/updater.go index ae2bc81956..ea7ca3f502 100644 --- a/internal/managementasset/updater.go +++ b/internal/managementasset/updater.go @@ -17,9 +17,9 @@ import ( "sync/atomic" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" log "github.com/sirupsen/logrus" "golang.org/x/sync/singleflight" ) diff --git a/internal/managementasset/web_static/README.md b/internal/managementasset/web_static/README.md new file mode 100644 index 0000000000..551dc8a2fc --- /dev/null +++ b/internal/managementasset/web_static/README.md @@ -0,0 +1,3 @@ +This directory contains embedded web static assets generated from `web/out`. 
+ +To update: `cp -r web/out internal/managementasset/web_static` diff --git a/internal/misc/antigravity_version.go b/internal/misc/antigravity_version.go index 595cfefd96..0d187c254f 100644 --- a/internal/misc/antigravity_version.go +++ b/internal/misc/antigravity_version.go @@ -7,6 +7,7 @@ import ( "errors" "fmt" "net/http" + "strings" "sync" "time" @@ -18,6 +19,8 @@ const ( antigravityFallbackVersion = "1.21.9" antigravityVersionCacheTTL = 6 * time.Hour antigravityFetchTimeout = 10 * time.Second + AntigravityNodeAPIClientUA = "google-api-nodejs-client/10.3.0" + AntigravityGoogAPIClientUA = "gl-node/22.21.1" ) type antigravityRelease struct { @@ -107,6 +110,65 @@ func AntigravityUserAgent() string { return fmt.Sprintf("antigravity/%s darwin/arm64", AntigravityLatestVersion()) } +func antigravityBaseUserAgent(userAgent string) string { + userAgent = strings.TrimSpace(userAgent) + if userAgent == "" { + return AntigravityUserAgent() + } + lower := strings.ToLower(userAgent) + if strings.HasPrefix(lower, "antigravity/") { + if idx := strings.Index(lower, " google-api-nodejs-client/"); idx >= 0 { + trimmed := strings.TrimSpace(userAgent[:idx]) + if trimmed != "" { + return trimmed + } + } + } + return userAgent +} + +// AntigravityRequestUserAgent returns the short Antigravity runtime UA used by +// generate/stream/model-list requests. +func AntigravityRequestUserAgent(userAgent string) string { + return antigravityBaseUserAgent(userAgent) +} + +// AntigravityLoadCodeAssistUserAgent returns the long Antigravity control-plane +// UA used by loadCodeAssist requests. 
+func AntigravityLoadCodeAssistUserAgent(userAgent string) string { + userAgent = strings.TrimSpace(userAgent) + if userAgent == "" { + return AntigravityUserAgent() + " " + AntigravityNodeAPIClientUA + } + lower := strings.ToLower(userAgent) + if !strings.HasPrefix(lower, "antigravity/") { + return userAgent + } + if strings.Contains(lower, "google-api-nodejs-client/") { + return userAgent + } + return antigravityBaseUserAgent(userAgent) + " " + AntigravityNodeAPIClientUA +} + +// AntigravityVersionFromUserAgent extracts the Antigravity version prefix from +// either the short or long Antigravity UA forms. +func AntigravityVersionFromUserAgent(userAgent string) string { + base := antigravityBaseUserAgent(userAgent) + lower := strings.ToLower(base) + if !strings.HasPrefix(lower, "antigravity/") { + return AntigravityLatestVersion() + } + rest := base[len("antigravity/"):] + if idx := strings.IndexAny(rest, " \t"); idx >= 0 { + rest = rest[:idx] + } + rest = strings.TrimSpace(rest) + if rest == "" { + return AntigravityLatestVersion() + } + return rest +} + func fetchAntigravityLatestVersion(ctx context.Context) (string, error) { if ctx == nil { ctx = context.Background() diff --git a/internal/misc/header_utils.go b/internal/misc/header_utils.go index 5752a26956..ac022a9627 100644 --- a/internal/misc/header_utils.go +++ b/internal/misc/header_utils.go @@ -12,7 +12,7 @@ import ( const ( // GeminiCLIVersion is the version string reported in the User-Agent for upstream requests. - GeminiCLIVersion = "0.31.0" + GeminiCLIVersion = "0.34.0" // GeminiCLIApiClientHeader is the value for the X-Goog-Api-Client header sent to the Gemini CLI upstream. 
GeminiCLIApiClientHeader = "google-genai-sdk/1.41.0 gl-node/v22.19.0" @@ -46,7 +46,7 @@ func GeminiCLIUserAgent(model string) string { if model == "" { model = "unknown" } - return fmt.Sprintf("GeminiCLI/%s/%s (%s; %s)", GeminiCLIVersion, model, geminiCLIOS(), geminiCLIArch()) + return fmt.Sprintf("GeminiCLI/%s/%s (%s; %s; terminal)", GeminiCLIVersion, model, geminiCLIOS(), geminiCLIArch()) } // ScrubProxyAndFingerprintHeaders removes all headers that could reveal diff --git a/internal/redisqueue/plugin.go b/internal/redisqueue/plugin.go new file mode 100644 index 0000000000..e5b74cb24b --- /dev/null +++ b/internal/redisqueue/plugin.go @@ -0,0 +1,160 @@ +package redisqueue + +import ( + "context" + "encoding/json" + "strings" + "time" + + internallogging "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + coreusage "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" +) + +func init() { + coreusage.RegisterPlugin(&usageQueuePlugin{}) +} + +type usageQueuePlugin struct{} + +func (p *usageQueuePlugin) HandleUsage(ctx context.Context, record coreusage.Record) { + if p == nil { + return + } + if !Enabled() || !UsageStatisticsEnabled() { + return + } + + timestamp := record.RequestedAt + if timestamp.IsZero() { + timestamp = time.Now() + } + + modelName := strings.TrimSpace(record.Model) + if modelName == "" { + modelName = "unknown" + } + aliasName := strings.TrimSpace(record.Alias) + if aliasName == "" { + aliasName = modelName + } + provider := strings.TrimSpace(record.Provider) + if provider == "" { + provider = "unknown" + } + authType := strings.TrimSpace(record.AuthType) + if authType == "" { + authType = "unknown" + } + apiKey := strings.TrimSpace(record.APIKey) + requestID := strings.TrimSpace(internallogging.GetRequestID(ctx)) + + tokens := tokenStats{ + InputTokens: record.Detail.InputTokens, + OutputTokens: record.Detail.OutputTokens, + ReasoningTokens: record.Detail.ReasoningTokens, + CachedTokens: record.Detail.CachedTokens, + TotalTokens: 
record.Detail.TotalTokens,
+    }
+    if tokens.TotalTokens == 0 {
+        tokens.TotalTokens = tokens.InputTokens + tokens.OutputTokens + tokens.ReasoningTokens
+    }
+    if tokens.TotalTokens == 0 {
+        tokens.TotalTokens = tokens.InputTokens + tokens.OutputTokens + tokens.ReasoningTokens + tokens.CachedTokens
+    }
+
+    failed := record.Failed
+    if !failed {
+        failed = !resolveSuccess(ctx)
+    }
+    fail := resolveFail(ctx, record, failed)
+
+    detail := requestDetail{
+        Timestamp: timestamp,
+        LatencyMs: record.Latency.Milliseconds(),
+        Source:    record.Source,
+        AuthIndex: record.AuthIndex,
+        Tokens:    tokens,
+        Failed:    failed,
+        Fail:      fail,
+    }
+
+    payload, err := json.Marshal(queuedUsageDetail{
+        requestDetail: detail,
+        Provider:      provider,
+        Model:         modelName,
+        Alias:         aliasName,
+        Endpoint:      resolveEndpoint(ctx),
+        AuthType:      authType,
+        APIKey:        apiKey,
+        RequestID:     requestID,
+    })
+    if err != nil {
+        return
+    }
+    Enqueue(payload)
+}
+
+type queuedUsageDetail struct {
+    requestDetail
+    Provider  string `json:"provider"`
+    Model     string `json:"model"`
+    Alias     string `json:"alias"`
+    Endpoint  string `json:"endpoint"`
+    AuthType  string `json:"auth_type"`
+    APIKey    string `json:"api_key"`
+    RequestID string `json:"request_id"`
+}
+
+type requestDetail struct {
+    Timestamp time.Time  `json:"timestamp"`
+    LatencyMs int64      `json:"latency_ms"`
+    Source    string     `json:"source"`
+    AuthIndex string     `json:"auth_index"`
+    Tokens    tokenStats `json:"tokens"`
+    Failed    bool       `json:"failed"`
+    Fail      failDetail `json:"fail"`
+}
+
+type tokenStats struct {
+    InputTokens     int64 `json:"input_tokens"`
+    OutputTokens    int64 `json:"output_tokens"`
+    ReasoningTokens int64 `json:"reasoning_tokens"`
+    CachedTokens    int64 `json:"cached_tokens"`
+    TotalTokens     int64 `json:"total_tokens"`
+}
+
+type failDetail struct {
+    StatusCode int    `json:"status_code"`
+    Body       string `json:"body"`
+}
+
+func resolveFail(ctx context.Context, record coreusage.Record, failed bool) failDetail {
+    fail := failDetail{
+        StatusCode: record.Fail.StatusCode,
+        Body:       strings.TrimSpace(record.Fail.Body),
+    }
+    if !failed {
+        return failDetail{StatusCode: 200}
+    }
+    if fail.StatusCode <= 0 {
+        fail.StatusCode = internallogging.GetResponseStatus(ctx)
+    }
+    if fail.StatusCode <= 0 {
+        fail.StatusCode = 500
+    }
+    return fail
+}
+
+func resolveSuccess(ctx context.Context) bool {
+    status := internallogging.GetResponseStatus(ctx)
+    if status == 0 {
+        return true
+    }
+    return status < httpStatusBadRequest
+}
+
+func resolveEndpoint(ctx context.Context) string {
+    return strings.TrimSpace(internallogging.GetEndpoint(ctx))
+}
+
+const httpStatusBadRequest = 400
diff --git a/internal/redisqueue/plugin_test.go b/internal/redisqueue/plugin_test.go
new file mode 100644
index 0000000000..e2af6af709
--- /dev/null
+++ b/internal/redisqueue/plugin_test.go
@@ -0,0 +1,278 @@
+package redisqueue
+
+import (
+    "context"
+    "encoding/json"
+    "net/http"
+    "net/http/httptest"
+    "testing"
+    "time"
+
+    "github.com/gin-gonic/gin"
+    internallogging "github.com/router-for-me/CLIProxyAPI/v7/internal/logging"
+    coreusage "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage"
+)
+
+func TestUsageQueuePluginPayloadIncludesStableFieldsAndSuccess(t *testing.T) {
+    withEnabledQueue(t, func() {
+        ctx := internallogging.WithRequestID(context.Background(), "ctx-request-id")
+        ctx = internallogging.WithEndpoint(ctx, "POST /v1/chat/completions")
+        ctx = internallogging.WithResponseStatusHolder(ctx)
+        internallogging.SetResponseStatus(ctx, http.StatusOK)
+
+        plugin := &usageQueuePlugin{}
+        plugin.HandleUsage(ctx, coreusage.Record{
+            Provider:    "openai",
+            Model:       "gpt-5.4",
+            Alias:       "client-gpt",
+            APIKey:      "test-key",
+            AuthIndex:   "0",
+            AuthType:    "apikey",
+            Source:      "user@example.com",
+            RequestedAt: time.Date(2026, 4, 25, 0, 0, 0, 0, time.UTC),
+            Latency:     1500 * time.Millisecond,
+            Detail: coreusage.Detail{
+                InputTokens:  10,
+                OutputTokens: 20,
+                TotalTokens:  30,
+            },
+        })
+
+        payload := popSinglePayload(t)
+        requireStringField(t, payload, "provider", "openai")
+        requireStringField(t, payload, "model", "gpt-5.4")
+        requireStringField(t, payload, "alias", "client-gpt")
+        requireStringField(t, payload, "endpoint", "POST /v1/chat/completions")
+        requireStringField(t, payload, "auth_type", "apikey")
+        requireMissingField(t, payload, "user_api_key")
+        requireStringField(t, payload, "request_id", "ctx-request-id")
+        requireBoolField(t, payload, "failed", false)
+        requireFailField(t, payload, http.StatusOK, "")
+    })
+}
+
+func TestUsageQueuePluginPayloadIncludesStableFieldsAndFailureAndGinRequestID(t *testing.T) {
+    withEnabledQueue(t, func() {
+        ctx := internallogging.WithRequestID(context.Background(), "gin-request-id")
+        ctx = internallogging.WithEndpoint(ctx, "GET /v1/responses")
+        ctx = internallogging.WithResponseStatusHolder(ctx)
+        internallogging.SetResponseStatus(ctx, http.StatusInternalServerError)
+
+        plugin := &usageQueuePlugin{}
+        plugin.HandleUsage(ctx, coreusage.Record{
+            Provider:    "openai",
+            Model:       "gpt-5.4-mini",
+            Alias:       "client-mini",
+            APIKey:      "test-key",
+            AuthIndex:   "0",
+            AuthType:    "apikey",
+            Source:      "user@example.com",
+            RequestedAt: time.Date(2026, 4, 25, 0, 0, 0, 0, time.UTC),
+            Latency:     2500 * time.Millisecond,
+            Fail: coreusage.Failure{
+                StatusCode: http.StatusInternalServerError,
+                Body:       "upstream failed",
+            },
+            Detail: coreusage.Detail{
+                InputTokens:  10,
+                OutputTokens: 20,
+                TotalTokens:  30,
+            },
+        })
+
+        payload := popSinglePayload(t)
+        requireStringField(t, payload, "provider", "openai")
+        requireStringField(t, payload, "model", "gpt-5.4-mini")
+        requireStringField(t, payload, "alias", "client-mini")
+        requireStringField(t, payload, "endpoint", "GET /v1/responses")
+        requireStringField(t, payload, "auth_type", "apikey")
+        requireMissingField(t, payload, "user_api_key")
+        requireStringField(t, payload, "request_id", "gin-request-id")
+        requireBoolField(t, payload, "failed", true)
+        requireFailField(t, payload, http.StatusInternalServerError, "upstream failed")
+    })
+}
+
+func TestUsageQueuePluginAsyncIgnoresRecycledGinContext(t *testing.T) {
+    withEnabledQueue(t, func() {
+        ginCtx := newTestGinContext(t, http.MethodPost, "/v1/chat/completions", http.StatusOK)
+        ctx := context.WithValue(context.Background(), "gin", ginCtx)
+        ctx = internallogging.WithRequestID(ctx, "ctx-request-id")
+        ctx = internallogging.WithEndpoint(ctx, "POST /v1/chat/completions")
+        ctx = internallogging.WithResponseStatusHolder(ctx)
+        internallogging.SetResponseStatus(ctx, http.StatusInternalServerError)
+
+        mgr := coreusage.NewManager(16)
+        defer mgr.Stop()
+
+        mgr.Register(pluginFunc(func(_ context.Context, _ coreusage.Record) {
+            ginCtx.Request = httptest.NewRequest(http.MethodGet, "http://example.com/v1/responses", nil)
+            ginCtx.Status(http.StatusOK)
+        }))
+        mgr.Register(&usageQueuePlugin{})
+
+        mgr.Publish(ctx, coreusage.Record{
+            Provider:    "openai",
+            Model:       "gpt-5.4",
+            Alias:       "client-gpt",
+            APIKey:      "test-key",
+            AuthIndex:   "0",
+            AuthType:    "apikey",
+            Source:      "user@example.com",
+            RequestedAt: time.Date(2026, 4, 25, 0, 0, 0, 0, time.UTC),
+            Latency:     1500 * time.Millisecond,
+            Fail: coreusage.Failure{
+                StatusCode: http.StatusBadGateway,
+                Body:       "bad gateway",
+            },
+            Detail: coreusage.Detail{
+                InputTokens:  10,
+                OutputTokens: 20,
+                TotalTokens:  30,
+            },
+        })
+
+        payload := waitForSinglePayload(t, 2*time.Second)
+        requireStringField(t, payload, "endpoint", "POST /v1/chat/completions")
+        requireStringField(t, payload, "alias", "client-gpt")
+        requireMissingField(t, payload, "user_api_key")
+        requireStringField(t, payload, "request_id", "ctx-request-id")
+        requireBoolField(t, payload, "failed", true)
+        requireFailField(t, payload, http.StatusBadGateway, "bad gateway")
+    })
+}
+
+func withEnabledQueue(t *testing.T, fn func()) {
+    t.Helper()
+
+    prevQueueEnabled := Enabled()
+    prevUsageEnabled := UsageStatisticsEnabled()
+
+    SetEnabled(false)
+    SetEnabled(true)
+    SetUsageStatisticsEnabled(true)
+
+    defer func() {
+        SetEnabled(false)
+        SetEnabled(prevQueueEnabled)
+        SetUsageStatisticsEnabled(prevUsageEnabled)
+    }()
+
+    fn()
+}
+
+func newTestGinContext(t *testing.T, method, path string, status int) *gin.Context {
+    t.Helper()
+
+    gin.SetMode(gin.TestMode)
+    recorder := httptest.NewRecorder()
+    ginCtx, _ := gin.CreateTestContext(recorder)
+    ginCtx.Request = httptest.NewRequest(method, "http://example.com"+path, nil)
+    if status != 0 {
+        ginCtx.Status(status)
+    }
+    return ginCtx
+}
+
+func popSinglePayload(t *testing.T) map[string]json.RawMessage {
+    t.Helper()
+
+    items := PopOldest(10)
+    if len(items) != 1 {
+        t.Fatalf("PopOldest() items = %d, want 1", len(items))
+    }
+
+    var payload map[string]json.RawMessage
+    if err := json.Unmarshal(items[0], &payload); err != nil {
+        t.Fatalf("unmarshal payload: %v", err)
+    }
+    return payload
+}
+
+func waitForSinglePayload(t *testing.T, timeout time.Duration) map[string]json.RawMessage {
+    t.Helper()
+
+    deadline := time.Now().Add(timeout)
+    for time.Now().Before(deadline) {
+        items := PopOldest(10)
+        if len(items) == 0 {
+            time.Sleep(10 * time.Millisecond)
+            continue
+        }
+        if len(items) != 1 {
+            t.Fatalf("PopOldest() items = %d, want 1", len(items))
+        }
+        var payload map[string]json.RawMessage
+        if err := json.Unmarshal(items[0], &payload); err != nil {
+            t.Fatalf("unmarshal payload: %v", err)
+        }
+        return payload
+    }
+    t.Fatalf("timeout waiting for queued payload")
+    return nil
+}
+
+func requireStringField(t *testing.T, payload map[string]json.RawMessage, key, want string) {
+    t.Helper()
+
+    raw, ok := payload[key]
+    if !ok {
+        t.Fatalf("payload missing %q", key)
+    }
+    var got string
+    if err := json.Unmarshal(raw, &got); err != nil {
+        t.Fatalf("unmarshal %q: %v", key, err)
+    }
+    if got != want {
+        t.Fatalf("%s = %q, want %q", key, got, want)
+    }
+}
+
+func requireMissingField(t *testing.T, payload map[string]json.RawMessage, key string) {
+    t.Helper()
+
+    if _, ok := payload[key]; ok {
+        t.Fatalf("payload unexpectedly contains %q", key)
+    }
+}
+
+type pluginFunc func(context.Context, coreusage.Record)
+
+func (fn pluginFunc) HandleUsage(ctx context.Context, record coreusage.Record) {
+    fn(ctx, record)
+}
+
+func requireBoolField(t *testing.T, payload map[string]json.RawMessage, key string, want bool) {
+    t.Helper()
+
+    raw, ok := payload[key]
+    if !ok {
+        t.Fatalf("payload missing %q", key)
+    }
+    var got bool
+    if err := json.Unmarshal(raw, &got); err != nil {
+        t.Fatalf("unmarshal %q: %v", key, err)
+    }
+    if got != want {
+        t.Fatalf("%s = %t, want %t", key, got, want)
+    }
+}
+
+func requireFailField(t *testing.T, payload map[string]json.RawMessage, wantStatus int, wantBody string) {
+    t.Helper()
+
+    raw, ok := payload["fail"]
+    if !ok {
+        t.Fatalf("payload missing %q", "fail")
+    }
+    var got struct {
+        StatusCode int    `json:"status_code"`
+        Body       string `json:"body"`
+    }
+    if err := json.Unmarshal(raw, &got); err != nil {
+        t.Fatalf("unmarshal fail: %v", err)
+    }
+    if got.StatusCode != wantStatus || got.Body != wantBody {
+        t.Fatalf("fail = {status_code:%d body:%q}, want {status_code:%d body:%q}", got.StatusCode, got.Body, wantStatus, wantBody)
+    }
+}
diff --git a/internal/redisqueue/queue.go b/internal/redisqueue/queue.go
new file mode 100644
index 0000000000..2fea58391a
--- /dev/null
+++ b/internal/redisqueue/queue.go
@@ -0,0 +1,155 @@
+package redisqueue
+
+import (
+    "sync"
+    "sync/atomic"
+    "time"
+)
+
+const (
+    defaultRetentionSeconds int64 = 60
+    maxRetentionSeconds     int64 = 3600
+)
+
+type queueItem struct {
+    enqueuedAt time.Time
+    payload    []byte
+}
+
+type queue struct {
+    mu    sync.Mutex
+    items []queueItem
+    head  int
+}
+
+var (
+    enabled          atomic.Bool
+    retentionSeconds atomic.Int64
+    global           queue
+)
+
+func init() {
+    retentionSeconds.Store(defaultRetentionSeconds)
+}
+
+func SetEnabled(value bool) {
+    enabled.Store(value)
+    if !value {
+        global.clear()
+    }
+}
+
+func Enabled() bool {
+    return enabled.Load()
+}
+
+func SetRetentionSeconds(value int) {
+    normalized := int64(value)
+    if normalized <= 0 {
+        normalized = defaultRetentionSeconds
+    } else if normalized > maxRetentionSeconds {
+        normalized = maxRetentionSeconds
+    }
+    retentionSeconds.Store(normalized)
+}
+
+func Enqueue(payload []byte) {
+    if !Enabled() {
+        return
+    }
+    if len(payload) == 0 {
+        return
+    }
+    global.enqueue(payload)
+}
+
+func PopOldest(count int) [][]byte {
+    if !Enabled() {
+        return nil
+    }
+    if count <= 0 {
+        return nil
+    }
+    return global.popOldest(count)
+}
+
+func (q *queue) clear() {
+    q.mu.Lock()
+    defer q.mu.Unlock()
+    q.items = nil
+    q.head = 0
+}
+
+func (q *queue) enqueue(payload []byte) {
+    now := time.Now()
+
+    q.mu.Lock()
+    defer q.mu.Unlock()
+
+    q.pruneLocked(now)
+    q.items = append(q.items, queueItem{
+        enqueuedAt: now,
+        payload:    append([]byte(nil), payload...),
+    })
+    q.maybeCompactLocked()
+}
+
+func (q *queue) popOldest(count int) [][]byte {
+    now := time.Now()
+
+    q.mu.Lock()
+    defer q.mu.Unlock()
+
+    q.pruneLocked(now)
+    available := len(q.items) - q.head
+    if available <= 0 {
+        q.items = nil
+        q.head = 0
+        return nil
+    }
+    if count > available {
+        count = available
+    }
+
+    out := make([][]byte, 0, count)
+    for i := 0; i < count; i++ {
+        item := q.items[q.head+i]
+        out = append(out, item.payload)
+    }
+    q.head += count
+    q.maybeCompactLocked()
+    return out
+}
+
+func (q *queue) pruneLocked(now time.Time) {
+    if q.head >= len(q.items) {
+        q.items = nil
+        q.head = 0
+        return
+    }
+
+    windowSeconds := retentionSeconds.Load()
+    if windowSeconds <= 0 {
+        windowSeconds = defaultRetentionSeconds
+    }
+    cutoff := now.Add(-time.Duration(windowSeconds) * time.Second)
+    for q.head < len(q.items) && q.items[q.head].enqueuedAt.Before(cutoff) {
+        q.head++
+    }
+}
+
+func (q *queue) maybeCompactLocked() {
+    if q.head == 0 {
+        return
+    }
+    if q.head >= len(q.items) {
+        q.items = nil
+        q.head = 0
+        return
+    }
+    if q.head < 1024 && q.head*2 < len(q.items) {
+        return
+    }
+    q.items = append([]queueItem(nil), q.items[q.head:]...)
+    q.head = 0
+}
diff --git a/internal/redisqueue/usage_toggle.go b/internal/redisqueue/usage_toggle.go
new file mode 100644
index 0000000000..dddbeca692
--- /dev/null
+++ b/internal/redisqueue/usage_toggle.go
@@ -0,0 +1,16 @@
+package redisqueue
+
+import "sync/atomic"
+
+var usageStatisticsEnabled atomic.Bool
+
+func init() {
+    usageStatisticsEnabled.Store(true)
+}
+
+// SetUsageStatisticsEnabled toggles whether usage records are enqueued into the redisqueue payload buffer.
+// This is controlled by the config field `usage-statistics-enabled` and the corresponding management API.
+func SetUsageStatisticsEnabled(enabled bool) { usageStatisticsEnabled.Store(enabled) }
+
+// UsageStatisticsEnabled reports whether the usage queue plugin should publish records.
+func UsageStatisticsEnabled() bool { return usageStatisticsEnabled.Load() }
diff --git a/internal/registry/kilo_models.go b/internal/registry/kilo_models.go
new file mode 100644
index 0000000000..ac9939dbb7
--- /dev/null
+++ b/internal/registry/kilo_models.go
@@ -0,0 +1,21 @@
+// Package registry provides model definitions for various AI service providers.
+package registry
+
+// GetKiloModels returns the Kilo model definitions
+func GetKiloModels() []*ModelInfo {
+    return []*ModelInfo{
+        // --- Base Models ---
+        {
+            ID:                  "kilo/auto",
+            Object:              "model",
+            Created:             1732752000,
+            OwnedBy:             "kilo",
+            Type:                "kilo",
+            DisplayName:         "Kilo Auto",
+            Description:         "Automatic model selection by Kilo",
+            ContextLength:       200000,
+            MaxCompletionTokens: 64000,
+            Thinking:            &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+        },
+    }
+}
diff --git a/internal/registry/kiro_model_converter.go b/internal/registry/kiro_model_converter.go
new file mode 100644
index 0000000000..fe50a8f306
--- /dev/null
+++ b/internal/registry/kiro_model_converter.go
@@ -0,0 +1,303 @@
+// Package registry provides Kiro model conversion utilities.
+// This file handles converting dynamic Kiro API model lists to the internal ModelInfo format,
+// and merging with static metadata for thinking support and other capabilities.
+package registry
+
+import (
+    "strings"
+    "time"
+)
+
+// KiroAPIModel represents a model from Kiro API response.
+// This is a local copy to avoid import cycles with the kiro package.
+// The structure mirrors kiro.KiroModel for easy data conversion.
+type KiroAPIModel struct {
+    // ModelID is the unique identifier for the model (e.g., "claude-sonnet-4.5")
+    ModelID string
+    // ModelName is the human-readable name
+    ModelName string
+    // Description is the model description
+    Description string
+    // RateMultiplier is the credit multiplier for this model
+    RateMultiplier float64
+    // RateUnit is the unit for rate calculation (e.g., "credit")
+    RateUnit string
+    // MaxInputTokens is the maximum input token limit
+    MaxInputTokens int
+}
+
+// DefaultKiroThinkingSupport defines the default thinking configuration for Kiro models.
+// All Kiro models support thinking with the following budget range.
+var DefaultKiroThinkingSupport = &ThinkingSupport{
+    Min:            1024,  // Minimum thinking budget tokens
+    Max:            32000, // Maximum thinking budget tokens
+    ZeroAllowed:    true,  // Allow disabling thinking with 0
+    DynamicAllowed: true,  // Allow dynamic thinking budget (-1)
+}
+
+// DefaultKiroContextLength is the default context window size for Kiro models.
+const DefaultKiroContextLength = 200000
+
+// DefaultKiroMaxCompletionTokens is the default max completion tokens for Kiro models.
+const DefaultKiroMaxCompletionTokens = 64000
+
+// ConvertKiroAPIModels converts Kiro API models to internal ModelInfo format.
+// It performs the following transformations:
+// - Normalizes model ID (e.g., claude-sonnet-4.5 → kiro-claude-sonnet-4-5)
+// - Adds default thinking support metadata
+// - Sets default context length and max completion tokens if not provided
+//
+// Parameters:
+// - kiroModels: List of models from Kiro API response
+//
+// Returns:
+// - []*ModelInfo: Converted model information list
+func ConvertKiroAPIModels(kiroModels []*KiroAPIModel) []*ModelInfo {
+    if len(kiroModels) == 0 {
+        return nil
+    }
+
+    now := time.Now().Unix()
+    result := make([]*ModelInfo, 0, len(kiroModels))
+
+    for _, km := range kiroModels {
+        // Skip nil models
+        if km == nil {
+            continue
+        }
+
+        // Skip models without valid ID
+        if km.ModelID == "" {
+            continue
+        }
+
+        // Normalize the model ID to kiro-* format
+        normalizedID := normalizeKiroModelID(km.ModelID)
+
+        // Create ModelInfo with converted data
+        info := &ModelInfo{
+            ID:          normalizedID,
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "aws",
+            Type:        "kiro",
+            DisplayName: generateKiroDisplayName(km.ModelName, normalizedID),
+            Description: km.Description,
+            // Use MaxInputTokens from API if available, otherwise use default
+            ContextLength:       getContextLength(km.MaxInputTokens),
+            MaxCompletionTokens: DefaultKiroMaxCompletionTokens,
+            // All Kiro models support thinking
+            Thinking: cloneThinkingSupport(DefaultKiroThinkingSupport),
+        }
+
+        result = append(result, info)
+    }
+
+    return result
+}
+
+// GenerateAgenticVariants creates -agentic variants for each model.
+// Agentic variants are optimized for coding agents with chunked writes.
+//
+// Parameters:
+// - models: Base models to generate variants for
+//
+// Returns:
+// - []*ModelInfo: Combined list of base models and their agentic variants
+func GenerateAgenticVariants(models []*ModelInfo) []*ModelInfo {
+    if len(models) == 0 {
+        return nil
+    }
+
+    // Pre-allocate result with capacity for both base models and variants
+    result := make([]*ModelInfo, 0, len(models)*2)
+
+    for _, model := range models {
+        if model == nil {
+            continue
+        }
+
+        // Add the base model first
+        result = append(result, model)
+
+        // Skip if model already has -agentic suffix
+        if strings.HasSuffix(model.ID, "-agentic") {
+            continue
+        }
+
+        // Skip special models that shouldn't have agentic variants
+        if model.ID == "kiro-auto" {
+            continue
+        }
+
+        // Create agentic variant
+        agenticModel := &ModelInfo{
+            ID:                  model.ID + "-agentic",
+            Object:              model.Object,
+            Created:             model.Created,
+            OwnedBy:             model.OwnedBy,
+            Type:                model.Type,
+            DisplayName:         model.DisplayName + " (Agentic)",
+            Description:         generateAgenticDescription(model.Description),
+            ContextLength:       model.ContextLength,
+            MaxCompletionTokens: model.MaxCompletionTokens,
+            Thinking:            cloneThinkingSupport(model.Thinking),
+        }
+
+        result = append(result, agenticModel)
+    }
+
+    return result
+}
+
+// MergeWithStaticMetadata merges dynamic models with static metadata.
+// Static metadata takes priority for any overlapping fields.
+// This allows manual overrides for specific models while keeping dynamic discovery.
+//
+// Parameters:
+// - dynamicModels: Models from Kiro API (converted to ModelInfo)
+// - staticModels: Predefined model metadata (from GetKiroModels())
+//
+// Returns:
+// - []*ModelInfo: Merged model list with static metadata taking priority
+func MergeWithStaticMetadata(dynamicModels, staticModels []*ModelInfo) []*ModelInfo {
+    if len(dynamicModels) == 0 && len(staticModels) == 0 {
+        return nil
+    }
+
+    // Build a map of static models for quick lookup
+    staticMap := make(map[string]*ModelInfo, len(staticModels))
+    for _, sm := range staticModels {
+        if sm != nil && sm.ID != "" {
+            staticMap[sm.ID] = sm
+        }
+    }
+
+    // Build result, preferring static metadata where available
+    seenIDs := make(map[string]struct{})
+    result := make([]*ModelInfo, 0, len(dynamicModels)+len(staticModels))
+
+    // First, process dynamic models and merge with static if available
+    for _, dm := range dynamicModels {
+        if dm == nil || dm.ID == "" {
+            continue
+        }
+
+        // Skip duplicates
+        if _, seen := seenIDs[dm.ID]; seen {
+            continue
+        }
+        seenIDs[dm.ID] = struct{}{}
+
+        // Check if static metadata exists for this model
+        if sm, exists := staticMap[dm.ID]; exists {
+            // Static metadata takes priority - use static model
+            result = append(result, sm)
+        } else {
+            // No static metadata - use dynamic model
+            result = append(result, dm)
+        }
+    }
+
+    // Add any static models not in dynamic list
+    for _, sm := range staticModels {
+        if sm == nil || sm.ID == "" {
+            continue
+        }
+        if _, seen := seenIDs[sm.ID]; seen {
+            continue
+        }
+        seenIDs[sm.ID] = struct{}{}
+        result = append(result, sm)
+    }
+
+    return result
+}
+
+// normalizeKiroModelID converts Kiro API model IDs to internal format.
+// Transformation rules:
+// - Adds "kiro-" prefix if not present
+// - Replaces dots with hyphens (e.g., 4.5 → 4-5)
+// - Handles special cases like "auto" → "kiro-auto"
+//
+// Examples:
+// - "claude-sonnet-4.5" → "kiro-claude-sonnet-4-5"
+// - "claude-opus-4.5" → "kiro-claude-opus-4-5"
+// - "auto" → "kiro-auto"
+// - "kiro-claude-sonnet-4-5" → "kiro-claude-sonnet-4-5" (unchanged)
+func normalizeKiroModelID(modelID string) string {
+    if modelID == "" {
+        return ""
+    }
+
+    // Trim whitespace
+    modelID = strings.TrimSpace(modelID)
+
+    // Replace dots with hyphens (e.g., 4.5 → 4-5)
+    normalized := strings.ReplaceAll(modelID, ".", "-")
+
+    // Add kiro- prefix if not present
+    if !strings.HasPrefix(normalized, "kiro-") {
+        normalized = "kiro-" + normalized
+    }
+
+    return normalized
+}
+
+// generateKiroDisplayName creates a human-readable display name.
+// Uses the API-provided model name if available, otherwise generates from ID.
+func generateKiroDisplayName(modelName, normalizedID string) string {
+    if modelName != "" {
+        return "Kiro " + modelName
+    }
+
+    // Generate from normalized ID by removing kiro- prefix and formatting
+    displayID := strings.TrimPrefix(normalizedID, "kiro-")
+    // Capitalize first letter of each word
+    words := strings.Split(displayID, "-")
+    for i, word := range words {
+        if len(word) > 0 {
+            words[i] = strings.ToUpper(word[:1]) + word[1:]
+        }
+    }
+    return "Kiro " + strings.Join(words, " ")
+}
+
+// generateAgenticDescription creates description for agentic variants.
+func generateAgenticDescription(baseDescription string) string {
+    if baseDescription == "" {
+        return "Optimized for coding agents with chunked writes"
+    }
+    return baseDescription + " (Agentic mode: chunked writes)"
+}
+
+// getContextLength returns the context length, using default if not provided.
+func getContextLength(maxInputTokens int) int {
+    if maxInputTokens > 0 {
+        return maxInputTokens
+    }
+    return DefaultKiroContextLength
+}
+
+// cloneThinkingSupport creates a deep copy of ThinkingSupport.
+// Returns nil if input is nil.
+func cloneThinkingSupport(ts *ThinkingSupport) *ThinkingSupport {
+    if ts == nil {
+        return nil
+    }
+
+    clone := &ThinkingSupport{
+        Min:            ts.Min,
+        Max:            ts.Max,
+        ZeroAllowed:    ts.ZeroAllowed,
+        DynamicAllowed: ts.DynamicAllowed,
+    }
+
+    // Deep copy Levels slice if present
+    if len(ts.Levels) > 0 {
+        clone.Levels = make([]string, len(ts.Levels))
+        copy(clone.Levels, ts.Levels)
+    }
+
+    return clone
+}
diff --git a/internal/registry/model_definitions.go b/internal/registry/model_definitions.go
index ab7258f845..e8a5f39e72 100644
--- a/internal/registry/model_definitions.go
+++ b/internal/registry/model_definitions.go
@@ -6,6 +6,15 @@ import (
     "strings"
 )
 
+const codexBuiltinImageModelID = "gpt-image-2"
+
+// defaultCopilotClaudeContextLength is the conservative prompt token limit for
+// Claude models accessed via the GitHub Copilot API. Individual accounts are
+// capped at 128K; business accounts at 168K. When the dynamic /models API fetch
+// succeeds, the real per-account limit overrides this value. This constant is
+// only used as a safe fallback.
+const defaultCopilotClaudeContextLength = 128000
+
 // staticModelsJSON mirrors the top-level structure of models.json.
 type staticModelsJSON struct {
     Claude []*ModelInfo `json:"claude"`
@@ -48,22 +57,22 @@ func GetAIStudioModels() []*ModelInfo {
 
 // GetCodexFreeModels returns model definitions for the Codex free plan tier.
 func GetCodexFreeModels() []*ModelInfo {
-    return cloneModelInfos(getModels().CodexFree)
+    return WithCodexBuiltins(cloneModelInfos(getModels().CodexFree))
 }
 
 // GetCodexTeamModels returns model definitions for the Codex team plan tier.
 func GetCodexTeamModels() []*ModelInfo {
-    return cloneModelInfos(getModels().CodexTeam)
+    return WithCodexBuiltins(cloneModelInfos(getModels().CodexTeam))
 }
 
 // GetCodexPlusModels returns model definitions for the Codex plus plan tier.
 func GetCodexPlusModels() []*ModelInfo {
-    return cloneModelInfos(getModels().CodexPlus)
+    return WithCodexBuiltins(cloneModelInfos(getModels().CodexPlus))
 }
 
 // GetCodexProModels returns model definitions for the Codex pro plan tier.
 func GetCodexProModels() []*ModelInfo {
-    return cloneModelInfos(getModels().CodexPro)
+    return WithCodexBuiltins(cloneModelInfos(getModels().CodexPro))
 }
 
 // GetKimiModels returns the standard Kimi (Moonshot AI) model definitions.
@@ -76,6 +85,71 @@ func GetAntigravityModels() []*ModelInfo {
     return cloneModelInfos(getModels().Antigravity)
 }
 
+// WithCodexBuiltins injects hard-coded Codex-only model definitions that should
+// not depend on remote models.json updates. Built-ins replace any matching IDs
+// already present in the provided slice.
+func WithCodexBuiltins(models []*ModelInfo) []*ModelInfo {
+    return upsertModelInfos(models, codexBuiltinImageModelInfo())
+}
+
+func codexBuiltinImageModelInfo() *ModelInfo {
+    return &ModelInfo{
+        ID:          codexBuiltinImageModelID,
+        Object:      "model",
+        Created:     1704067200, // 2024-01-01
+        OwnedBy:     "openai",
+        Type:        "openai",
+        DisplayName: "GPT Image 2",
+        Version:     codexBuiltinImageModelID,
+    }
+}
+
+func upsertModelInfos(models []*ModelInfo, extras ...*ModelInfo) []*ModelInfo {
+    if len(extras) == 0 {
+        return models
+    }
+
+    extraIDs := make(map[string]struct{}, len(extras))
+    extraList := make([]*ModelInfo, 0, len(extras))
+    for _, extra := range extras {
+        if extra == nil {
+            continue
+        }
+        id := strings.TrimSpace(extra.ID)
+        if id == "" {
+            continue
+        }
+        key := strings.ToLower(id)
+        if _, exists := extraIDs[key]; exists {
+            continue
+        }
+        extraIDs[key] = struct{}{}
+        extraList = append(extraList, cloneModelInfo(extra))
+    }
+
+    if len(extraList) == 0 {
+        return models
+    }
+
+    filtered := make([]*ModelInfo, 0, len(models)+len(extraList))
+    for _, model := range models {
+        if model == nil {
+            continue
+        }
+        id := strings.TrimSpace(model.ID)
+        if id == "" {
+            continue
+        }
+        if _, exists := extraIDs[strings.ToLower(id)]; exists {
+            continue
+        }
+        filtered = append(filtered, model)
+    }
+
+    filtered = append(filtered, extraList...)
+    return filtered
+}
+
 // cloneModelInfos returns a shallow copy of the slice with each element deep-cloned.
 func cloneModelInfos(models []*ModelInfo) []*ModelInfo {
     if len(models) == 0 {
@@ -117,8 +191,26 @@ func GetStaticModelDefinitionsByChannel(channel string) []*ModelInfo {
         return GetCodexProModels()
     case "kimi":
         return GetKimiModels()
+    case "github-copilot":
+        return GetGitHubCopilotModels()
+    case "kiro":
+        return GetKiroModels()
+    case "kilo":
+        return GetKiloModels()
+    case "amazonq":
+        return GetAmazonQModels()
     case "antigravity":
         return GetAntigravityModels()
+    case "qoder":
+        return GetQoderModels()
+    case "bt":
+        return GetBTModels()
+    case "codebuddy":
+        return GetCodeBuddyModels()
+    case "codebuddy-ai":
+        return GetCodeBuddyAIModels()
+    case "cursor":
+        return GetCursorModels()
     default:
         return nil
     }
@@ -141,6 +233,14 @@ func LookupStaticModelInfo(modelID string) *ModelInfo {
         data.CodexPro,
         data.Kimi,
         data.Antigravity,
+        GetQoderModels(),
+        GetGitHubCopilotModels(),
+        GetKiroModels(),
+        GetKiloModels(),
+        GetAmazonQModels(),
+        GetCodeBuddyModels(),
+        GetCodeBuddyAIModels(),
+        GetCursorModels(),
     }
     for _, models := range allModels {
         for _, m := range models {
@@ -152,3 +252,1019 @@ func LookupStaticModelInfo(modelID string) *ModelInfo {
     return nil
 }
+
+func GetQoderModels() []*ModelInfo {
+    now := int64(1748044800) // 2025-05-24
+    return []*ModelInfo{
+        {
+            ID:          "auto",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "Auto",
+            Description: "Automatic model selection",
+        },
+        {
+            ID:          "ultimate",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "Ultimate",
+            Description: "Qoder Ultimate tier model",
+        },
+        {
+            ID:          "performance",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "Performance",
+            Description: "Qoder Performance tier model",
+        },
+        {
+            ID:          "efficient",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "Efficient",
+            Description: "Qoder Efficient tier model",
+        },
+        {
+            ID:          "lite",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "Lite",
+            Description: "Qoder Lite tier model",
+        },
+        {
+            ID:          "qmodel",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "Qwen3.6-Plus",
+            Description: "Qwen 3.6 Plus via Qoder",
+        },
+        {
+            ID:          "dmodel",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "DeepSeek-V4-Pro",
+            Description: "DeepSeek V4 Pro via Qoder",
+        },
+        {
+            ID:          "dfmodel",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "DeepSeek-V4-Flash",
+            Description: "DeepSeek V4 Flash via Qoder",
+        },
+        {
+            ID:          "gm51model",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "GLM-5.1",
+            Description: "GLM 5.1 via Qoder",
+        },
+        {
+            ID:          "kmodel",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "Kimi-K2.6",
+            Description: "Kimi K2.6 via Qoder",
+        },
+        {
+            ID:          "mmodel",
+            Object:      "model",
+            Created:     now,
+            OwnedBy:     "qoder",
+            Type:        "qoder",
+            DisplayName: "MiniMax-M2.7",
+            Description: "MiniMax M2.7 via Qoder",
+        },
+    }
+}
+
+func GetCodeBuddyModels() []*ModelInfo {
+    now := int64(1748044800)
+    return []*ModelInfo{
+        {
+            ID: "auto", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "Auto", Description: "Automatic model selection via CodeBuddy",
+            ContextLength: 168000, MaxCompletionTokens: 32000, SupportedEndpoints: []string{"/chat/completions"},
+            SupportedInputModalities: []string{"TEXT", "IMAGE"},
+        },
+        {
+            ID: "hy3-preview", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "Hy3 Preview", Description: "Hunyuan thinking model with enhanced reasoning capabilities via CodeBuddy",
+            ContextLength: 192000, MaxCompletionTokens: 64000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+        },
+        {
+            ID: "glm-5v-turbo", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "GLM-5v Turbo", Description: "Native multimodal model via CodeBuddy",
+            ContextLength: 200000, MaxCompletionTokens: 38000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+            SupportedInputModalities: []string{"TEXT", "IMAGE"},
+        },
+        {
+            ID: "glm-5.1", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "GLM-5.1", Description: "GLM-5.1 via CodeBuddy",
+            ContextLength: 200000, MaxCompletionTokens: 48000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+        },
+        {
+            ID: "glm-5.0-turbo", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "GLM-5.0 Turbo", Description: "GLM-5.0 Turbo via CodeBuddy",
+            ContextLength: 200000, MaxCompletionTokens: 48000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+        },
+        {
+            ID: "kimi-k2.6", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "Kimi K2.6", Description: "Kimi K2.6 via CodeBuddy",
+            ContextLength: 256000, MaxCompletionTokens: 32000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+            SupportedInputModalities: []string{"TEXT", "IMAGE"},
+        },
+        {
+            ID: "kimi-k2.5", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "Kimi K2.5", Description: "Kimi K2.5 via CodeBuddy",
+            ContextLength: 256000, MaxCompletionTokens: 32000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+            SupportedInputModalities: []string{"TEXT", "IMAGE"},
+        },
+        {
+            ID: "minimax-m2.7", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "MiniMax M2.7", Description: "MiniMax M2.7 via CodeBuddy",
+            ContextLength: 200000, MaxCompletionTokens: 48000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+            SupportedInputModalities: []string{"TEXT", "IMAGE"},
+        },
+        {
+            ID: "deepseek-v4-flash", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "DeepSeek V4 Flash", Description: "DeepSeek V4 Flash via CodeBuddy",
+            ContextLength: 1000000, MaxCompletionTokens: 50000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"high", "max"}},
+        },
+        {
+            ID: "deepseek-v3-2-volc", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "DeepSeek V3.2", Description: "DeepSeek V3.2 via CodeBuddy",
+            ContextLength: 96000, MaxCompletionTokens: 32000, SupportedEndpoints: []string{"/chat/completions"},
+            Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+            SupportedInputModalities: []string{"TEXT", "IMAGE"},
+        },
+        {
+            ID: "deepseek-v3-1-volc", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "DeepSeek V3.1 Terminus", Description: "DeepSeek V3.1 Terminus via CodeBuddy",
+            ContextLength: 128000, MaxCompletionTokens: 32000, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "deepseek-r1-0528-lkeap", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "DeepSeek R1 0528", Description: "DeepSeek R1 0528 via CodeBuddy",
+            ContextLength: 112000, MaxCompletionTokens: 16000, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "hunyuan-chat", Object: "model", Created: now, OwnedBy: "tencent",
+            Type: "codebuddy", DisplayName: "Hunyuan Turbos", Description: "Tencent Hunyuan Turbos via CodeBuddy",
+            ContextLength: 128000, MaxCompletionTokens: 8192, SupportedEndpoints: []string{"/chat/completions"},
+        },
+    }
+}
+
+func GetCodeBuddyAIModels() []*ModelInfo {
+    now := int64(1748044800)
+    return []*ModelInfo{
+        {
+            ID: "default-model", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "Default Model", Description: "Default model via CodeBuddy AI",
+            ContextLength: 128000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "glm-5v-turbo", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "GLM-5v Turbo", Description: "GLM-5v Turbo via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "kimi-k2.5", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "Kimi-K2.5", Description: "Kimi K2.5 via CodeBuddy AI",
+            ContextLength: 256000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gpt-5.4", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "GPT-5.4", Description: "GPT-5.4 via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gpt-5.3-codex", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "GPT-5.3-Codex", Description: "GPT-5.3 Codex via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gpt-5.2-codex", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "GPT-5.2-Codex", Description: "GPT-5.2 Codex via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gpt-5.2", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "GPT-5.2", Description: "GPT-5.2 via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gpt-5.1", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "GPT-5.1", Description: "GPT-5.1 via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gpt-5.1-codex-max", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "GPT-5.1-Codex-Max", Description: "GPT-5.1 Codex Max via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gemini-3.0-pro", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "Gemini-3.0-Pro", Description: "Gemini 3.0 Pro via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "gemini-3.0-flash", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "Gemini-3.0-Flash", Description: "Gemini 3.0 Flash via CodeBuddy AI",
+            ContextLength: 200000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "deepseek-v3.2", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "DeepSeek-V3.2", Description: "DeepSeek V3.2 via CodeBuddy AI",
+            ContextLength: 128000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+        {
+            ID: "auto-chat", Object: "model", Created: now, OwnedBy: "codebuddy-ai",
+            Type: "codebuddy-ai", DisplayName: "Auto-Chat", Description: "Auto Chat via CodeBuddy AI",
+            ContextLength: 128000, MaxCompletionTokens: 32768, SupportedEndpoints: []string{"/chat/completions"},
+        },
+    }
+}
+
+func GetBTModels() []*ModelInfo {
+    now := int64(1745548800) // 2025-04-25
+    return []*ModelInfo{
+        {ID: "ernie-4.5-21b-a3b-thinking", Object: "model",
Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "ERNIE 4.5 21B A3B Thinking", Description: "Baidu ERNIE 4.5 thinking model via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192, Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}}}, + {ID: "ernie-x1-turbo-32k-preview", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "ERNIE X1 Turbo 32K Preview", Description: "Baidu ERNIE X1 Turbo via BaoTa", ContextLength: 32768, MaxCompletionTokens: 8192}, + {ID: "ernie-x1.1", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "ERNIE X1.1", Description: "Baidu ERNIE X1.1 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "ernie-5.0", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "ERNIE 5.0", Description: "Baidu ERNIE 5.0 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "ernie-5.0-thinking-preview", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "ERNIE 5.0 Thinking Preview", Description: "Baidu ERNIE 5.0 thinking model via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384, Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}}}, + {ID: "hunyuan-2.0-instruct-20251111", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Hunyuan 2.0 Instruct", Description: "Tencent Hunyuan 2.0 Instruct via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "hunyuan-2.0-thinking-20251109", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Hunyuan 2.0 Thinking", Description: "Tencent Hunyuan 2.0 thinking model via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192, Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}}}, + {ID: "deepseek-r1-250528", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "DeepSeek R1 250528", Description: "DeepSeek R1 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384, Thinking: 
&ThinkingSupport{Levels: []string{"low", "medium", "high"}}}, + {ID: "deepseek-v3-2-251201", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "DeepSeek V3.2 251201", Description: "DeepSeek V3.2 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "glm-4-7-251222", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "GLM-4.7 251222", Description: "Zhipu GLM-4.7 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "doubao-seed-2-0-code-preview-260215", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Doubao Seed 2.0 Code Preview", Description: "ByteDance Doubao Seed 2.0 Code via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "doubao-seed-2-0-mini-260215", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Doubao Seed 2.0 Mini", Description: "ByteDance Doubao Seed 2.0 Mini via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "doubao-seed-2-0-lite-260215", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Doubao Seed 2.0 Lite", Description: "ByteDance Doubao Seed 2.0 Lite via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "doubao-seed-2-0-pro-260215", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Doubao Seed 2.0 Pro", Description: "ByteDance Doubao Seed 2.0 Pro via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "text-embedding-v4", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Text Embedding V4", Description: "Baidu Text Embedding V4 via BaoTa"}, + {ID: "kimi-k2.5", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Kimi K2.5", Description: "Moonshot Kimi K2.5 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "deepseek-v3.2", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "DeepSeek V3.2", Description: "DeepSeek V3.2 via BaoTa", 
ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen-max-2025-01-25", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen Max 2025-01-25", Description: "Alibaba Qwen Max via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "glm-5", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "GLM-5", Description: "Zhipu GLM-5 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "qwen-flash", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen Flash", Description: "Alibaba Qwen Flash via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen-plus", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen Plus", Description: "Alibaba Qwen Plus via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "doubao-seed-1-8-251228", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Doubao Seed 1.8 251228", Description: "ByteDance Doubao Seed 1.8 via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen-plus-2025-12-01", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen Plus 2025-12-01", Description: "Alibaba Qwen Plus via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "deepseek-v4-flash", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "DeepSeek V4 Flash", Description: "DeepSeek V4 Flash via BaoTa", ContextLength: 1000000, MaxCompletionTokens: 384000}, + {ID: "deepseek-v4-pro", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "DeepSeek V4 Pro", Description: "DeepSeek V4 Pro via BaoTa", ContextLength: 1000000, MaxCompletionTokens: 384000}, + {ID: "qwen3.5-plus", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen3.5 Plus", Description: "Alibaba Qwen3.5 Plus via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen3.5-flash", Object: "model", Created: 
now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen3.5 Flash", Description: "Alibaba Qwen3.5 Flash via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen3-coder-flash", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen3 Coder Flash", Description: "Alibaba Qwen3 Coder Flash via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen3-coder-plus", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen3 Coder Plus", Description: "Alibaba Qwen3 Coder Plus via BaoTa", ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "qwen3.6-plus", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen3.6 Plus", Description: "Alibaba Qwen3.6 Plus via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen3-max", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen3 Max", Description: "Alibaba Qwen3 Max via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + {ID: "qwen3-max-2026-01-23", Object: "model", Created: now, OwnedBy: "bt", Type: "bt", DisplayName: "Qwen3 Max 2026-01-23", Description: "Alibaba Qwen3 Max via BaoTa", ContextLength: 128000, MaxCompletionTokens: 8192}, + } +} + +func GetCursorModels() []*ModelInfo { + return []*ModelInfo{ + {ID: "composer-2", Object: "model", OwnedBy: "cursor", Type: "cursor", DisplayName: "Composer 2", ContextLength: 200000, MaxCompletionTokens: 64000, Thinking: &ThinkingSupport{Max: 50000, DynamicAllowed: true}}, + {ID: "claude-4-sonnet", Object: "model", OwnedBy: "cursor", Type: "cursor", DisplayName: "Claude 4 Sonnet", ContextLength: 200000, MaxCompletionTokens: 64000, Thinking: &ThinkingSupport{Max: 50000, DynamicAllowed: true}}, + {ID: "claude-3.5-sonnet", Object: "model", OwnedBy: "cursor", Type: "cursor", DisplayName: "Claude 3.5 Sonnet", ContextLength: 200000, MaxCompletionTokens: 8192}, + {ID: "gpt-4o", Object: "model", OwnedBy: "cursor", Type: "cursor", DisplayName: "GPT-4o", 
ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "cursor-small", Object: "model", OwnedBy: "cursor", Type: "cursor", DisplayName: "Cursor Small", ContextLength: 200000, MaxCompletionTokens: 64000}, + {ID: "gemini-2.5-pro", Object: "model", OwnedBy: "cursor", Type: "cursor", DisplayName: "Gemini 2.5 Pro", ContextLength: 1000000, MaxCompletionTokens: 65536, Thinking: &ThinkingSupport{Max: 50000, DynamicAllowed: true}}, + } +} + +func GetGitHubCopilotModels() []*ModelInfo { + now := int64(1732752000) // 2024-11-27 + copilotClaudeEndpoints := []string{"/chat/completions", "/messages"} + gpt4oEntries := []struct { + ID string + DisplayName string + Description string + }{ + {ID: "gpt-4o-2024-11-20", DisplayName: "GPT-4o (2024-11-20)", Description: "OpenAI GPT-4o 2024-11-20 via GitHub Copilot"}, + {ID: "gpt-4o-2024-08-06", DisplayName: "GPT-4o (2024-08-06)", Description: "OpenAI GPT-4o 2024-08-06 via GitHub Copilot"}, + {ID: "gpt-4o-2024-05-13", DisplayName: "GPT-4o (2024-05-13)", Description: "OpenAI GPT-4o 2024-05-13 via GitHub Copilot"}, + {ID: "gpt-4o", DisplayName: "GPT-4o", Description: "OpenAI GPT-4o via GitHub Copilot"}, + {ID: "gpt-4-o-preview", DisplayName: "GPT-4-o Preview", Description: "OpenAI GPT-4-o Preview via GitHub Copilot"}, + } + + models := []*ModelInfo{ + { + ID: "gpt-4.1", + Object: "model", + Created: now, + OwnedBy: "github-copilot", + Type: "github-copilot", + DisplayName: "GPT-4.1", + Description: "OpenAI GPT-4.1 via GitHub Copilot", + ContextLength: 128000, + MaxCompletionTokens: 16384, + SupportedEndpoints: []string{"/chat/completions", "/responses"}, + }, + } + + for _, entry := range gpt4oEntries { + models = append(models, &ModelInfo{ + ID: entry.ID, + Object: "model", + Created: now, + OwnedBy: "github-copilot", + Type: "github-copilot", + DisplayName: entry.DisplayName, + Description: entry.Description, + ContextLength: 128000, + MaxCompletionTokens: 16384, + SupportedEndpoints: []string{"/chat/completions", "/responses"}, + }) 
+	}
+
+	return append(models, []*ModelInfo{
+		{
+			ID: "gpt-5",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5",
+			Description: "OpenAI GPT-5 via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/chat/completions", "/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "gpt-5-mini",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5 Mini",
+			Description: "OpenAI GPT-5 Mini via GitHub Copilot",
+			ContextLength: 128000,
+			MaxCompletionTokens: 16384,
+			SupportedEndpoints: []string{"/chat/completions", "/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "gpt-5-codex",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5 Codex",
+			Description: "OpenAI GPT-5 Codex via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "gpt-5.1",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.1",
+			Description: "OpenAI GPT-5.1 via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/chat/completions", "/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high"}},
+		},
+		{
+			ID: "gpt-5.1-codex",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.1 Codex",
+			Description: "OpenAI GPT-5.1 Codex via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high"}},
+		},
+		{
+			ID: "gpt-5.1-codex-mini",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.1 Codex Mini",
+			Description: "OpenAI GPT-5.1 Codex Mini via GitHub Copilot",
+			ContextLength: 128000,
+			MaxCompletionTokens: 16384,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high"}},
+		},
+		{
+			ID: "gpt-5.1-codex-max",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.1 Codex Max",
+			Description: "OpenAI GPT-5.1 Codex Max via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high", "xhigh"}},
+		},
+		{
+			ID: "gpt-5.2",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.2",
+			Description: "OpenAI GPT-5.2 via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/chat/completions", "/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high", "xhigh"}},
+		},
+		{
+			ID: "gpt-5.2-codex",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.2 Codex",
+			Description: "OpenAI GPT-5.2 Codex via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high", "xhigh"}},
+		},
+		{
+			ID: "gpt-5.3-codex",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.3 Codex",
+			Description: "OpenAI GPT-5.3 Codex via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high", "xhigh"}},
+		},
+		{
+			ID: "gpt-5.4",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.4",
+			Description: "OpenAI GPT-5.4 via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high", "xhigh"}},
+		},
+		{
+			ID: "gpt-5.4-mini",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "GPT-5.4 mini",
+			Description: "OpenAI GPT-5.4 mini via GitHub Copilot",
+			ContextLength: 200000,
+			MaxCompletionTokens: 32768,
+			SupportedEndpoints: []string{"/responses"},
+			Thinking: &ThinkingSupport{Levels: []string{"none", "low", "medium", "high", "xhigh"}},
+		},
+		{
+			ID: "claude-haiku-4.5",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Claude Haiku 4.5",
+			Description: "Anthropic Claude Haiku 4.5 via GitHub Copilot",
+			ContextLength: defaultCopilotClaudeContextLength,
+			MaxCompletionTokens: 64000,
+			SupportedEndpoints: copilotClaudeEndpoints,
+		},
+		{
+			ID: "claude-opus-4.1",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Claude Opus 4.1",
+			Description: "Anthropic Claude Opus 4.1 via GitHub Copilot",
+			ContextLength: defaultCopilotClaudeContextLength,
+			MaxCompletionTokens: 32000,
+			SupportedEndpoints: copilotClaudeEndpoints,
+		},
+		{
+			ID: "claude-opus-4.5",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Claude Opus 4.5",
+			Description: "Anthropic Claude Opus 4.5 via GitHub Copilot",
+			ContextLength: defaultCopilotClaudeContextLength,
+			MaxCompletionTokens: 64000,
+			SupportedEndpoints: copilotClaudeEndpoints,
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "claude-opus-4.6",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Claude Opus 4.6",
+			Description: "Anthropic Claude Opus 4.6 via GitHub Copilot",
+			ContextLength: defaultCopilotClaudeContextLength,
+			MaxCompletionTokens: 64000,
+			SupportedEndpoints: copilotClaudeEndpoints,
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "claude-sonnet-4",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Claude Sonnet 4",
+			Description: "Anthropic Claude Sonnet 4 via GitHub Copilot",
+			ContextLength: defaultCopilotClaudeContextLength,
+			MaxCompletionTokens: 64000,
+			SupportedEndpoints: copilotClaudeEndpoints,
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "claude-sonnet-4.5",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Claude Sonnet 4.5",
+			Description: "Anthropic Claude Sonnet 4.5 via GitHub Copilot",
+			ContextLength: defaultCopilotClaudeContextLength,
+			MaxCompletionTokens: 64000,
+			SupportedEndpoints: copilotClaudeEndpoints,
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "claude-sonnet-4.6",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Claude Sonnet 4.6",
+			Description: "Anthropic Claude Sonnet 4.6 via GitHub Copilot",
+			ContextLength: defaultCopilotClaudeContextLength,
+			MaxCompletionTokens: 64000,
+			SupportedEndpoints: copilotClaudeEndpoints,
+			Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high"}},
+		},
+		{
+			ID: "gemini-2.5-pro",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Gemini 2.5 Pro",
+			Description: "Google Gemini 2.5 Pro via GitHub Copilot",
+			ContextLength: 1048576,
+			MaxCompletionTokens: 65536,
+			SupportedEndpoints: []string{"/chat/completions"},
+		},
+		{
+			ID: "gemini-3-pro-preview",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Gemini 3 Pro (Preview)",
+			Description: "Google Gemini 3 Pro Preview via GitHub Copilot",
+			ContextLength: 1048576,
+			MaxCompletionTokens: 65536,
+			SupportedEndpoints: []string{"/chat/completions"},
+		},
+		{
+			ID: "gemini-3.1-pro-preview",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Gemini 3.1 Pro (Preview)",
+			Description: "Google Gemini 3.1 Pro Preview via GitHub Copilot",
+			ContextLength: 173000,
+			MaxCompletionTokens: 65536,
+			SupportedEndpoints: []string{"/chat/completions"},
+		},
+		{
+			ID: "gemini-3-flash-preview",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Gemini 3 Flash (Preview)",
+			Description: "Google Gemini 3 Flash Preview via GitHub Copilot",
+			ContextLength: 173000,
+			MaxCompletionTokens: 65536,
+			SupportedEndpoints: []string{"/chat/completions"},
+		},
+		{
+			ID: "grok-code-fast-1",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Grok Code Fast 1",
+			Description: "xAI Grok Code Fast 1 via GitHub Copilot",
+			ContextLength: 128000,
+			MaxCompletionTokens: 16384,
+		},
+		{
+			ID: "oswe-vscode-prime",
+			Object: "model",
+			Created: now,
+			OwnedBy: "github-copilot",
+			Type: "github-copilot",
+			DisplayName: "Raptor mini (Preview)",
+			Description: "Raptor mini via GitHub Copilot",
+			ContextLength: 128000,
+			MaxCompletionTokens: 16384,
+			SupportedEndpoints: []string{"/chat/completions", "/responses"},
+		},
+	}...)
+}
+
+func GetKiroModels() []*ModelInfo {
+	return []*ModelInfo{
+		// --- Base Models ---
+		{
+			ID: "kiro-auto",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Auto",
+			Description: "Automatic model selection by Kiro",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-opus-4-6",
+			Object: "model",
+			Created: 1736899200, // 2025-01-15
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Opus 4.6",
+			Description: "Claude Opus 4.6 via Kiro (2.2x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-sonnet-4-6",
+			Object: "model",
+			Created: 1739836800, // 2025-02-18
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Sonnet 4.6",
+			Description: "Claude Sonnet 4.6 via Kiro (1.3x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-opus-4-5",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Opus 4.5",
+			Description: "Claude Opus 4.5 via Kiro (2.2x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-sonnet-4-5",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Sonnet 4.5",
+			Description: "Claude Sonnet 4.5 via Kiro (1.3x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-sonnet-4",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Sonnet 4",
+			Description: "Claude Sonnet 4 via Kiro (1.3x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-haiku-4-5",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Haiku 4.5",
+			Description: "Claude Haiku 4.5 via Kiro (0.4x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		// --- Third-party models (accessed via Kiro) ---
+		{
+			ID: "kiro-deepseek-3-2",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro DeepSeek 3.2",
+			Description: "DeepSeek 3.2 via Kiro",
+			ContextLength: 128000,
+			MaxCompletionTokens: 32768,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-minimax-m2-1",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro MiniMax M2.1",
+			Description: "MiniMax M2.1 via Kiro",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-qwen3-coder-next",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Qwen3 Coder Next",
+			Description: "Qwen3 Coder Next via Kiro",
+			ContextLength: 128000,
+			MaxCompletionTokens: 32768,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-gpt-4o",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro GPT-4o",
+			Description: "OpenAI GPT-4o via Kiro",
+			ContextLength: 128000,
+			MaxCompletionTokens: 16384,
+		},
+		{
+			ID: "kiro-gpt-4",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro GPT-4",
+			Description: "OpenAI GPT-4 via Kiro",
+			ContextLength: 128000,
+			MaxCompletionTokens: 8192,
+		},
+		{
+			ID: "kiro-gpt-4-turbo",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro GPT-4 Turbo",
+			Description: "OpenAI GPT-4 Turbo via Kiro",
+			ContextLength: 128000,
+			MaxCompletionTokens: 16384,
+		},
+		{
+			ID: "kiro-gpt-3-5-turbo",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro GPT-3.5 Turbo",
+			Description: "OpenAI GPT-3.5 Turbo via Kiro",
+			ContextLength: 16384,
+			MaxCompletionTokens: 4096,
+		},
+		// --- Agentic Variants (Optimized for coding agents with chunked writes) ---
+		{
+			ID: "kiro-claude-opus-4-6-agentic",
+			Object: "model",
+			Created: 1736899200, // 2025-01-15
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Opus 4.6 (Agentic)",
+			Description: "Claude Opus 4.6 optimized for coding agents (chunked writes)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-sonnet-4-6-agentic",
+			Object: "model",
+			Created: 1739836800, // 2025-02-18
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Sonnet 4.6 (Agentic)",
+			Description: "Claude Sonnet 4.6 optimized for coding agents (chunked writes)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-opus-4-5-agentic",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Opus 4.5 (Agentic)",
+			Description: "Claude Opus 4.5 optimized for coding agents (chunked writes)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-sonnet-4-5-agentic",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Sonnet 4.5 (Agentic)",
+			Description: "Claude Sonnet 4.5 optimized for coding agents (chunked writes)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-sonnet-4-agentic",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Sonnet 4 (Agentic)",
+			Description: "Claude Sonnet 4 optimized for coding agents (chunked writes)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-claude-haiku-4-5-agentic",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Claude Haiku 4.5 (Agentic)",
+			Description: "Claude Haiku 4.5 optimized for coding agents (chunked writes)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-deepseek-3-2-agentic",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro DeepSeek 3.2 (Agentic)",
+			Description: "DeepSeek 3.2 optimized for coding agents (chunked writes)",
+			ContextLength: 128000,
+			MaxCompletionTokens: 32768,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-minimax-m2-1-agentic",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro MiniMax M2.1 (Agentic)",
+			Description: "MiniMax M2.1 optimized for coding agents (chunked writes)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+		{
+			ID: "kiro-qwen3-coder-next-agentic",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Kiro Qwen3 Coder Next (Agentic)",
+			Description: "Qwen3 Coder Next optimized for coding agents (chunked writes)",
+			ContextLength: 128000,
+			MaxCompletionTokens: 32768,
+			Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
+		},
+	}
+}
+
+func GetAmazonQModels() []*ModelInfo {
+	return []*ModelInfo{
+		{
+			ID: "amazonq-auto",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro", // Uses Kiro executor - same API
+			DisplayName: "Amazon Q Auto",
+			Description: "Automatic model selection by Amazon Q",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+		},
+		{
+			ID: "amazonq-claude-opus-4.5",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Amazon Q Claude Opus 4.5",
+			Description: "Claude Opus 4.5 via Amazon Q (2.2x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+		},
+		{
+			ID: "amazonq-claude-sonnet-4.5",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Amazon Q Claude Sonnet 4.5",
+			Description: "Claude Sonnet 4.5 via Amazon Q (1.3x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+		},
+		{
+			ID: "amazonq-claude-sonnet-4",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Amazon Q Claude Sonnet 4",
+			Description: "Claude Sonnet 4 via Amazon Q (1.3x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+		},
+		{
+			ID: "amazonq-claude-haiku-4.5",
+			Object: "model",
+			Created: 1732752000,
+			OwnedBy: "aws",
+			Type: "kiro",
+			DisplayName: "Amazon Q Claude Haiku 4.5",
+			Description: "Claude Haiku 4.5 via Amazon Q (0.4x credit)",
+			ContextLength: 200000,
+			MaxCompletionTokens: 64000,
+		},
+	}
+}
diff --git a/internal/registry/model_definitions_test.go b/internal/registry/model_definitions_test.go
new file mode 100644
index 0000000000..bb2fc46046
--- /dev/null
+++ b/internal/registry/model_definitions_test.go
@@ -0,0 +1,94 @@
+package registry
+
+import "testing"
+
+func TestCodexFreeModelsExcludeGPT55(t *testing.T) {
+	model := findModelInfo(GetCodexFreeModels(), "gpt-5.5")
+	if model != nil {
+		t.Fatal("expected codex free tier to NOT include gpt-5.5")
+	}
+}
+
+func TestCodexStaticModelsIncludeGPT55(t *testing.T) {
+	tierModels := map[string][]*ModelInfo{
+		"team": GetCodexTeamModels(),
+		"plus": GetCodexPlusModels(),
+		"pro":  GetCodexProModels(),
+	}
+
+	for tier, models := range tierModels {
+		t.Run(tier, func(t *testing.T) {
+			model := findModelInfo(models, "gpt-5.5")
+			if model == nil {
+				t.Fatalf("expected codex %s tier to include gpt-5.5", tier)
+			}
+			assertGPT55ModelInfo(t, tier, model)
+		})
+	}
+
+	model := LookupStaticModelInfo("gpt-5.5")
+	if model == nil {
+		t.Fatal("expected LookupStaticModelInfo to find gpt-5.5")
+	}
+	assertGPT55ModelInfo(t, "lookup", model)
+}
+
+func findModelInfo(models []*ModelInfo, id string) *ModelInfo {
+	for _, model := range models {
+		if model != nil && model.ID == id {
+			return model
+		}
+	}
+	return nil
+}
+
+func assertGPT55ModelInfo(t *testing.T, source string, model *ModelInfo) {
+	t.Helper()
+
+	if model.ID != "gpt-5.5" {
+		t.Fatalf("%s id mismatch: got %q", source, model.ID)
+	}
+	if model.Object != "model" {
+		t.Fatalf("%s object mismatch: got %q", source, model.Object)
+	}
+	if model.Created != 1776902400 {
+		t.Fatalf("%s created timestamp mismatch: got %d", source, model.Created)
+	}
+	if model.OwnedBy != "openai" {
+		t.Fatalf("%s owned_by mismatch: got %q", source, model.OwnedBy)
+	}
+	if model.Type != "openai" {
+		t.Fatalf("%s type mismatch: got %q", source, model.Type)
+	}
+	if model.DisplayName != "GPT 5.5" {
+		t.Fatalf("%s display name mismatch: got %q", source, model.DisplayName)
+	}
+	if model.Version != "gpt-5.5" {
+		t.Fatalf("%s version mismatch: got %q", source, model.Version)
+	}
+	if model.Description != "Frontier model for complex coding, research, and real-world work." {
+		t.Fatalf("%s description mismatch: got %q", source, model.Description)
+	}
+	if model.ContextLength != 272000 {
+		t.Fatalf("%s context length mismatch: got %d", source, model.ContextLength)
+	}
+	if model.MaxCompletionTokens != 128000 {
+		t.Fatalf("%s max completion tokens mismatch: got %d", source, model.MaxCompletionTokens)
+	}
+	if len(model.SupportedParameters) != 1 || model.SupportedParameters[0] != "tools" {
+		t.Fatalf("%s supported parameters mismatch: got %v", source, model.SupportedParameters)
+	}
+	if model.Thinking == nil {
+		t.Fatalf("%s missing thinking support", source)
+	}
+
+	want := []string{"low", "medium", "high", "xhigh"}
+	if len(model.Thinking.Levels) != len(want) {
+		t.Fatalf("%s thinking level count mismatch: got %d, want %d", source, len(model.Thinking.Levels), len(want))
+	}
+	for i, level := range want {
+		if model.Thinking.Levels[i] != level {
+			t.Fatalf("%s thinking level %d mismatch: got %q, want %q", source, i, model.Thinking.Levels[i], level)
+		}
+	}
+}
diff --git a/internal/registry/model_registry.go b/internal/registry/model_registry.go
index 3f3f530d27..560c8ccc0e 100644
--- a/internal/registry/model_registry.go
+++ b/internal/registry/model_registry.go
@@ -11,7 +11,7 @@ import (
 	"sync"
 	"time"
 
-	misc "github.com/router-for-me/CLIProxyAPI/v6/internal/misc"
+	misc "github.com/router-for-me/CLIProxyAPI/v7/internal/misc"
 
 	log "github.com/sirupsen/logrus"
 )
@@ -56,6 +56,9 @@ type ModelInfo struct {
 	// This is optional and currently used for Gemini thinking budget normalization.
 	Thinking *ThinkingSupport `json:"thinking,omitempty"`
 
+	// SupportedEndpoints lists supported API endpoints (e.g., "/chat/completions", "/responses").
+	SupportedEndpoints []string `json:"supported_endpoints,omitempty"`
+
 	// UserDefined indicates this model was defined through config file's models[]
 	// array (e.g., openai-compatibility.*.models[], *-api-key.models[]).
 // UserDefined models have thinking configuration passed through without validation.
@@ -1141,6 +1144,9 @@ func (r *ModelRegistry) convertModelToMap(model *ModelInfo, handlerType string)
     if len(model.SupportedParameters) > 0 {
         result["supported_parameters"] = append([]string(nil), model.SupportedParameters...)
     }
+    if len(model.SupportedEndpoints) > 0 {
+        result["supported_endpoints"] = model.SupportedEndpoints
+    }
     return result

 case "claude":
diff --git a/internal/registry/model_registry_safety_test.go b/internal/registry/model_registry_safety_test.go
index 5f4f65d298..be5bf7908c 100644
--- a/internal/registry/model_registry_safety_test.go
+++ b/internal/registry/model_registry_safety_test.go
@@ -136,13 +136,13 @@ func TestGetAvailableModelsReturnsClonedSupportedParameters(t *testing.T) {
 }

 func TestLookupModelInfoReturnsCloneForStaticDefinitions(t *testing.T) {
-    first := LookupModelInfo("glm-4.6")
+    first := LookupModelInfo("claude-sonnet-4-6")
     if first == nil || first.Thinking == nil || len(first.Thinking.Levels) == 0 {
         t.Fatalf("expected static model with thinking levels, got %+v", first)
     }
     first.Thinking.Levels[0] = "mutated"

-    second := LookupModelInfo("glm-4.6")
+    second := LookupModelInfo("claude-sonnet-4-6")
     if second == nil || second.Thinking == nil || len(second.Thinking.Levels) == 0 || second.Thinking.Levels[0] == "mutated" {
         t.Fatalf("expected static lookup clone, got %+v", second)
     }
 }
diff --git a/internal/registry/models/models.json b/internal/registry/models/models.json
index 65d8325169..fa56bb42a2 100644
--- a/internal/registry/models/models.json
+++ b/internal/registry/models/models.json
@@ -1292,6 +1292,52 @@
           "xhigh"
         ]
       }
+    },
+    {
+      "id": "gpt-5.5",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "GPT 5.5",
+      "version": "gpt-5.5",
+      "description": "Frontier model for complex coding, research, and real-world work.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
+    },
+    {
+      "id": "codex-auto-review",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "Codex Auto Review",
+      "version": "Codex Auto Review",
+      "description": "Automatic approval review model for Codex.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
     }
   ],
   "codex-team": [
@@ -1387,6 +1433,52 @@
           "xhigh"
         ]
       }
+    },
+    {
+      "id": "gpt-5.5",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "GPT 5.5",
+      "version": "gpt-5.5",
+      "description": "Frontier model for complex coding, research, and real-world work.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
+    },
+    {
+      "id": "codex-auto-review",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "Codex Auto Review",
+      "version": "Codex Auto Review",
+      "description": "Automatic approval review model for Codex.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
     }
   ],
   "codex-plus": [
@@ -1505,6 +1597,52 @@
           "xhigh"
         ]
       }
+    },
+    {
+      "id": "gpt-5.5",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "GPT 5.5",
+      "version": "gpt-5.5",
+      "description": "Frontier model for complex coding, research, and real-world work.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
+    },
+    {
+      "id": "codex-auto-review",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "Codex Auto Review",
+      "version": "Codex Auto Review",
+      "description": "Automatic approval review model for Codex.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
     }
   ],
   "codex-pro": [
@@ -1623,6 +1761,52 @@
           "xhigh"
         ]
       }
+    },
+    {
+      "id": "gpt-5.5",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "GPT 5.5",
+      "version": "gpt-5.5",
+      "description": "Frontier model for complex coding, research, and real-world work.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
+    },
+    {
+      "id": "codex-auto-review",
+      "object": "model",
+      "created": 1776902400,
+      "owned_by": "openai",
+      "type": "openai",
+      "display_name": "Codex Auto Review",
+      "version": "Codex Auto Review",
+      "description": "Automatic approval review model for Codex.",
+      "context_length": 272000,
+      "max_completion_tokens": 128000,
+      "supported_parameters": [
+        "tools"
+      ],
+      "thinking": {
+        "levels": [
+          "low",
+          "medium",
+          "high",
+          "xhigh"
+        ]
+      }
     }
   ],
   "kimi": [
@@ -1670,6 +1854,23 @@
         "zero_allowed": true,
         "dynamic_allowed": true
       }
+    },
+    {
+      "id": "kimi-k2.6",
+      "object": "model",
+      "created": 1776729600,
+      "owned_by": "moonshot",
+      "type": "kimi",
+      "display_name": "Kimi K2.6",
+      "description": "Kimi K2.6 - Latest Moonshot AI coding model with improved capabilities",
+      "context_length": 262144,
+      "max_completion_tokens": 65536,
+      "thinking": {
+        "min": 1024,
+        "max": 32000,
+        "zero_allowed": true,
+        "dynamic_allowed": true
+      }
     }
   ],
   "antigravity": [
diff --git a/internal/runtime/executor/aistudio_executor.go b/internal/runtime/executor/aistudio_executor.go
index f53e3e4d1d..41365b5f7a 100644
--- a/internal/runtime/executor/aistudio_executor.go
+++ b/internal/runtime/executor/aistudio_executor.go
@@ -13,14 +13,14 @@ import (
     "net/url"
     "strings"

-    "github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/util"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/wsrelay"
-    cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-    cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
-    sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/util"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/wsrelay"
+    cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+    cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor"
+    sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator"
     "github.com/tidwall/gjson"
     "github.com/tidwall/sjson"
 )
@@ -284,8 +284,11 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
     processEvent := func(event wsrelay.StreamEvent) bool {
         if event.Err != nil {
             helps.RecordAPIResponseError(ctx, e.cfg, event.Err)
-            reporter.PublishFailure(ctx)
-            out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("wsrelay: %v", event.Err)}
+            reporter.PublishFailure(ctx, event.Err)
+            select {
+            case out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("wsrelay: %v", event.Err)}:
+            case <-ctx.Done():
+            }
             return false
         }
         switch event.Type {
@@ -303,7 +306,11 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
             }
             lines := sdktranslator.TranslateStream(ctx, body.toFormat, opts.SourceFormat, req.Model, opts.OriginalRequest, translatedReq, filtered, &param)
             for i := range lines {
-                out <- cliproxyexecutor.StreamChunk{Payload: ensureColonSpacedJSON(lines[i])}
+                select {
+                case out <- cliproxyexecutor.StreamChunk{Payload: ensureColonSpacedJSON(lines[i])}:
+                case <-ctx.Done():
+                    return false
+                }
             }
             break
         }
@@ -319,14 +326,21 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
             }
             lines := sdktranslator.TranslateStream(ctx, body.toFormat, opts.SourceFormat, req.Model, opts.OriginalRequest, translatedReq, event.Payload, &param)
             for i := range lines {
-                out <- cliproxyexecutor.StreamChunk{Payload: ensureColonSpacedJSON(lines[i])}
+                select {
+                case out <- cliproxyexecutor.StreamChunk{Payload: ensureColonSpacedJSON(lines[i])}:
+                case <-ctx.Done():
+                    return false
+                }
             }
             reporter.Publish(ctx, helps.ParseGeminiUsage(event.Payload))
             return false
         case wsrelay.MessageTypeError:
             helps.RecordAPIResponseError(ctx, e.cfg, event.Err)
-            reporter.PublishFailure(ctx)
-            out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("wsrelay: %v", event.Err)}
+            reporter.PublishFailure(ctx, event.Err)
+            select {
+            case out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("wsrelay: %v", event.Err)}:
+            case <-ctx.Done():
+            }
             return false
         }
         return true
@@ -400,7 +414,10 @@ func (e *AIStudioExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.A
 }

 // Refresh refreshes the authentication credentials (no-op for AI Studio).
-func (e *AIStudioExecutor) Refresh(_ context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) {
+func (e *AIStudioExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) {
+    if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled {
+        return refreshed, err
+    }
     return auth, nil
 }
@@ -428,7 +445,8 @@ func (e *AIStudioExecutor) translateRequest(req cliproxyexecutor.Request, opts c
     }
     payload = fixGeminiImageAspectRatio(baseModel, payload)
     requestedModel := helps.PayloadRequestedModel(opts, req.Model)
-    payload = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", payload, originalTranslated, requestedModel)
+    requestPath := helps.PayloadRequestPath(opts)
+    payload = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", payload, originalTranslated, requestedModel, requestPath)
     payload, _ = sjson.DeleteBytes(payload, "generationConfig.maxOutputTokens")
     payload, _ = sjson.DeleteBytes(payload, "generationConfig.responseMimeType")
     payload, _ = sjson.DeleteBytes(payload, "generationConfig.responseJsonSchema")
diff --git a/internal/runtime/executor/antigravity_executor.go b/internal/runtime/executor/antigravity_executor.go
index 163b2d9279..2f8dff927c 100644
--- a/internal/runtime/executor/antigravity_executor.go
+++ b/internal/runtime/executor/antigravity_executor.go
@@ -23,18 +23,18 @@ import (
     "time"

     "github.com/google/uuid"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/cache"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/misc"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
-    antigravityclaude "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/claude"
-    "github.com/router-for-me/CLIProxyAPI/v6/internal/util"
-    sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth"
-    cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-    cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
-    sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/cache"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/misc"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+    antigravityclaude "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/antigravity/claude"
+    "github.com/router-for-me/CLIProxyAPI/v7/internal/util"
+    sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth"
+    cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+    cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor"
+    sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator"
     log "github.com/sirupsen/logrus"
     "github.com/tidwall/gjson"
     "github.com/tidwall/sjson"
@@ -52,8 +52,8 @@ const (
     defaultAntigravityAgent = "antigravity/1.21.9 darwin/arm64" // fallback only; overridden at runtime by misc.AntigravityUserAgent()
     antigravityAuthType = "antigravity"
     refreshSkew = 3000 * time.Second
-    antigravityCreditsRetryTTL = 5 * time.Hour
-    antigravityCreditsAutoDisableDuration = 5 * time.Hour
+    antigravityCreditsHintRefreshInterval = 10 * time.Minute
+    antigravityCreditsHintRefreshTimeout = 5 * time.Second
     antigravityShortQuotaCooldownThreshold = 5 * time.Minute
     antigravityInstantRetryThreshold = 3 * time.Second
     // systemInstruction = "You are Antigravity, a powerful agentic AI coding assistant designed by the Google Deepmind team working on Advanced Agentic Coding.You are pair programming with a USER to solve their coding task. The task may require creating a new codebase, modifying or debugging an existing codebase, or simply answering a question.**Absolute paths only****Proactiveness**"
@@ -62,8 +62,6 @@ const (
 type antigravity429Category string

 type antigravityCreditsFailureState struct {
-    Count int
-    DisabledUntil time.Time
     PermanentlyDisabled bool
     ExplicitBalanceExhausted bool
 }
@@ -91,28 +89,85 @@ var (
     randSource = rand.New(rand.NewSource(time.Now().UnixNano()))
     randSourceMutex sync.Mutex
     antigravityCreditsFailureByAuth sync.Map
-    antigravityPreferCreditsByModel sync.Map
     antigravityShortCooldownByAuth sync.Map
+    antigravityCreditsBalanceByAuth sync.Map // auth.ID → antigravityCreditsBalance
+    antigravityCreditsHintRefreshByID sync.Map // auth.ID → *antigravityCreditsHintRefreshState
     antigravityQuotaExhaustedKeywords = []string{
         "quota_exhausted",
         "quota exhausted",
     }
-    antigravityCreditsExhaustedKeywords = []string{
-        "google_one_ai",
-        "insufficient credit",
-        "insufficient credits",
-        "not enough credit",
-        "not enough credits",
-        "credit exhausted",
-        "credits exhausted",
-        "credit balance",
-        "minimumcreditamountforusage",
-        "minimum credit amount for usage",
-        "minimum credit",
-        "resource has been exhausted",
-    }
 )
+
+type antigravityCreditsBalance struct {
+    CreditAmount float64
+    MinCreditAmount float64
+    PaidTierID string
+    Known bool
+}
+
+type antigravityCreditsHintRefreshState struct {
+    mu sync.Mutex
+    lastAttempt time.Time
+}
+
+func antigravityAuthHasCredits(auth *cliproxyauth.Auth) bool {
+    if auth == nil || strings.TrimSpace(auth.ID) == "" {
+        return false
+    }
+    if hint, ok := cliproxyauth.GetAntigravityCreditsHint(auth.ID); ok && hint.Known {
+        return hint.Available
+    }
+    val, ok := antigravityCreditsBalanceByAuth.Load(strings.TrimSpace(auth.ID))
+    if !ok {
+        return true // optimistic: assume credits available when balance unknown
+    }
+    bal, valid := val.(antigravityCreditsBalance)
+    if !valid {
+        antigravityCreditsBalanceByAuth.Delete(strings.TrimSpace(auth.ID))
+        return false
+    }
+    if !bal.Known {
+        return false
+    }
+    available := bal.CreditAmount >= bal.MinCreditAmount
+    cliproxyauth.SetAntigravityCreditsHint(strings.TrimSpace(auth.ID), cliproxyauth.AntigravityCreditsHint{
+        Known: true,
+        Available: available,
+        CreditAmount: bal.CreditAmount,
+        MinCreditAmount: bal.MinCreditAmount,
+        PaidTierID: bal.PaidTierID,
+        UpdatedAt: time.Now(),
+    })
+    return available
+}
+
+// parseMetaFloat extracts a float64 from auth.Metadata (handles string and numeric types).
+func parseMetaFloat(metadata map[string]any, key string) (float64, bool) {
+    v, ok := metadata[key]
+    if !ok {
+        return 0, false
+    }
+    switch typed := v.(type) {
+    case float64:
+        return typed, true
+    case int:
+        return float64(typed), true
+    case int64:
+        return float64(typed), true
+    case uint64:
+        return float64(typed), true
+    case json.Number:
+        if f, err := typed.Float64(); err == nil {
+            return f, true
+        }
+    case string:
+        if f, err := strconv.ParseFloat(strings.TrimSpace(typed), 64); err == nil {
+            return f, true
+        }
+    }
+    return 0, false
+}
+
 // AntigravityExecutor proxies requests to the antigravity upstream.
 type AntigravityExecutor struct {
     cfg *config.Config
@@ -189,7 +244,7 @@ func validateAntigravityRequestSignatures(from sdktranslator.Format, rawJSON []b
     if from.String() != "claude" {
         return rawJSON, nil
     }
-    // Always strip thinking blocks with empty signatures (proxy-generated).
+    // Always strip thinking blocks with invalid signatures (empty or non-Claude-format).
     rawJSON = antigravityclaude.StripEmptySignatureThinkingBlocks(rawJSON)
     if cache.SignatureCacheEnabled() {
         return rawJSON, nil
@@ -298,49 +353,46 @@ func decideAntigravity429(body []byte) antigravity429Decision {
         decision.retryAfter = retryAfter
     }

-    lowerBody := strings.ToLower(string(body))
-    for _, keyword := range antigravityQuotaExhaustedKeywords {
-        if strings.Contains(lowerBody, keyword) {
-            decision.kind = antigravity429DecisionFullQuotaExhausted
-            decision.reason = "quota_exhausted"
-            return decision
-        }
-    }
-
     status := strings.TrimSpace(gjson.GetBytes(body, "error.status").String())
     if !strings.EqualFold(status, "RESOURCE_EXHAUSTED") {
         return decision
     }

     details := gjson.GetBytes(body, "error.details")
-    if !details.Exists() || !details.IsArray() {
-        decision.kind = antigravity429DecisionSoftRetry
-        return decision
-    }
-
-    for _, detail := range details.Array() {
-        if detail.Get("@type").String() != "type.googleapis.com/google.rpc.ErrorInfo" {
-            continue
-        }
-        reason := strings.TrimSpace(detail.Get("reason").String())
-        decision.reason = reason
-        switch {
-        case strings.EqualFold(reason, "QUOTA_EXHAUSTED"):
-            decision.kind = antigravity429DecisionFullQuotaExhausted
-            return decision
-        case strings.EqualFold(reason, "RATE_LIMIT_EXCEEDED"):
-            if decision.retryAfter == nil {
-                decision.kind = antigravity429DecisionSoftRetry
-                return decision
-            }
-            switch {
-            case *decision.retryAfter < antigravityInstantRetryThreshold:
-                decision.kind = antigravity429DecisionInstantRetrySameAuth
-            case *decision.retryAfter < antigravityShortQuotaCooldownThreshold:
-                decision.kind = antigravity429DecisionShortCooldownSwitchAuth
-            default:
-                decision.kind = antigravity429DecisionFullQuotaExhausted
-            }
-            return decision
-        }
-    }
+    if details.Exists() && details.IsArray() {
+        for _, detail := range details.Array() {
+            if detail.Get("@type").String() != "type.googleapis.com/google.rpc.ErrorInfo" {
+                continue
+            }
+            reason := strings.TrimSpace(detail.Get("reason").String())
+            decision.reason = reason
+            switch {
+            case strings.EqualFold(reason, "QUOTA_EXHAUSTED"):
+                decision.kind = antigravity429DecisionFullQuotaExhausted
+                return decision
+            case strings.EqualFold(reason, "RATE_LIMIT_EXCEEDED"):
+                if decision.retryAfter == nil {
+                    decision.kind = antigravity429DecisionSoftRetry
+                    return decision
+                }
+                switch {
+                case *decision.retryAfter < antigravityInstantRetryThreshold:
+                    decision.kind = antigravity429DecisionInstantRetrySameAuth
+                case *decision.retryAfter < antigravityShortQuotaCooldownThreshold:
+                    decision.kind = antigravity429DecisionShortCooldownSwitchAuth
+                default:
+                    decision.kind = antigravity429DecisionFullQuotaExhausted
+                }
+                return decision
+            }
+        }
+    }
+
+    lowerBody := strings.ToLower(string(body))
+    for _, keyword := range antigravityQuotaExhaustedKeywords {
+        if strings.Contains(lowerBody, keyword) {
+            decision.kind = antigravity429DecisionFullQuotaExhausted
+            decision.reason = "quota_exhausted"
+            return decision
+        }
+    }
@@ -349,81 +401,10 @@ func decideAntigravity429(body []byte) antigravity429Decision {
     return decision
 }

-func antigravityHasQuotaResetDelayOrModelInfo(body []byte) bool {
-    if len(body) == 0 {
-        return false
-    }
-    details := gjson.GetBytes(body, "error.details")
-    if !details.Exists() || !details.IsArray() {
-        return false
-    }
-    for _, detail := range details.Array() {
-        if detail.Get("@type").String() != "type.googleapis.com/google.rpc.ErrorInfo" {
-            continue
-        }
-        if strings.TrimSpace(detail.Get("metadata.quotaResetDelay").String()) != "" {
-            return true
-        }
-        if strings.TrimSpace(detail.Get("metadata.model").String()) != "" {
-            return true
-        }
-    }
-    return false
-}
-
 func antigravityCreditsRetryEnabled(cfg *config.Config) bool {
     return cfg != nil && cfg.QuotaExceeded.AntigravityCredits
 }

-func antigravityCreditsFailureStateForAuth(auth *cliproxyauth.Auth) (string, antigravityCreditsFailureState, bool) {
-    if auth == nil || strings.TrimSpace(auth.ID) == "" {
-        return "", antigravityCreditsFailureState{}, false
-    }
-    authID := strings.TrimSpace(auth.ID)
-    value, ok := antigravityCreditsFailureByAuth.Load(authID)
-    if !ok {
-        return authID, antigravityCreditsFailureState{}, true
-    }
-    state, ok := value.(antigravityCreditsFailureState)
-    if !ok {
-        antigravityCreditsFailureByAuth.Delete(authID)
-        return authID, antigravityCreditsFailureState{}, true
-    }
-    return authID, state, true
-}
-
-func antigravityCreditsDisabled(auth *cliproxyauth.Auth, now time.Time) bool {
-    authID, state, ok := antigravityCreditsFailureStateForAuth(auth)
-    if !ok {
-        return false
-    }
-    if state.PermanentlyDisabled {
-        return true
-    }
-    if state.DisabledUntil.IsZero() {
-        return false
-    }
-    if state.DisabledUntil.After(now) {
-        return true
-    }
-    antigravityCreditsFailureByAuth.Delete(authID)
-    return false
-}
-
-func recordAntigravityCreditsFailure(auth *cliproxyauth.Auth, now time.Time) {
-    authID, state, ok := antigravityCreditsFailureStateForAuth(auth)
-    if !ok {
-        return
-    }
-    if state.PermanentlyDisabled {
-        antigravityCreditsFailureByAuth.Store(authID, state)
-        return
-    }
-    state.Count++
-    state.DisabledUntil = now.Add(antigravityCreditsAutoDisableDuration)
-    antigravityCreditsFailureByAuth.Store(authID, state)
-}
-
 func clearAntigravityCreditsFailureState(auth *cliproxyauth.Auth) {
     if auth == nil || strings.TrimSpace(auth.ID) == "" {
         return
@@ -440,6 +421,25 @@ func markAntigravityCreditsPermanentlyDisabled(auth *cliproxyauth.Auth) {
         ExplicitBalanceExhausted: true,
     }
     antigravityCreditsFailureByAuth.Store(authID, state)
+    antigravityCreditsBalanceByAuth.Store(authID, antigravityCreditsBalance{
+        CreditAmount: 0,
+        MinCreditAmount: 1,
+        Known: true,
+    })
+    cliproxyauth.SetAntigravityCreditsHint(authID, cliproxyauth.AntigravityCreditsHint{
+        Known: true,
+        Available: false,
+        CreditAmount: 0,
+        MinCreditAmount: 1,
+        UpdatedAt: time.Now(),
+    })
+}
+
+func clearAntigravityCreditsPermanentlyDisabled(auth *cliproxyauth.Auth) {
+    if auth == nil || strings.TrimSpace(auth.ID) == "" {
+        return
+    }
+    antigravityCreditsFailureByAuth.Delete(strings.TrimSpace(auth.ID))
 }

 func antigravityHasExplicitCreditsBalanceExhaustedReason(body []byte) bool {
@@ -462,81 +462,6 @@ func antigravityHasExplicitCreditsBalanceExhaustedReason(body []byte) bool {
     return false
 }

-func antigravityPreferCreditsKey(auth *cliproxyauth.Auth, modelName string) string {
-    if auth == nil {
-        return ""
-    }
-    authID := strings.TrimSpace(auth.ID)
-    modelName = strings.TrimSpace(modelName)
-    if authID == "" || modelName == "" {
-        return ""
-    }
-    return authID + "|" + modelName
-}
-
-func antigravityShouldPreferCredits(auth *cliproxyauth.Auth, modelName string, now time.Time) bool {
-    key := antigravityPreferCreditsKey(auth, modelName)
-    if key == "" {
-        return false
-    }
-    value, ok := antigravityPreferCreditsByModel.Load(key)
-    if !ok {
-        return false
-    }
-    until, ok := value.(time.Time)
-    if !ok || until.IsZero() {
-        antigravityPreferCreditsByModel.Delete(key)
-        return false
-    }
-    if !until.After(now) {
-        antigravityPreferCreditsByModel.Delete(key)
-        return false
-    }
-    return true
-}
-
-func markAntigravityPreferCredits(auth *cliproxyauth.Auth, modelName string, now time.Time, retryAfter *time.Duration) {
-    key := antigravityPreferCreditsKey(auth, modelName)
-    if key == "" {
-        return
-    }
-    until := now.Add(antigravityCreditsRetryTTL)
-    if retryAfter != nil && *retryAfter > 0 {
-        until = now.Add(*retryAfter)
-    }
-    antigravityPreferCreditsByModel.Store(key, until)
-}
-
-func clearAntigravityPreferCredits(auth *cliproxyauth.Auth, modelName string) {
-    key := antigravityPreferCreditsKey(auth, modelName)
-    if key == "" {
-        return
-    }
-    antigravityPreferCreditsByModel.Delete(key)
-}
-
-func shouldMarkAntigravityCreditsExhausted(statusCode int, body []byte, reqErr error) bool {
-    if reqErr != nil || statusCode == 0 {
-        return false
-    }
-    if statusCode >= http.StatusInternalServerError || statusCode == http.StatusRequestTimeout {
-        return false
-    }
-    lowerBody := strings.ToLower(string(body))
-    for _, keyword := range antigravityCreditsExhaustedKeywords {
-        if strings.Contains(lowerBody, keyword) {
-            if keyword == "resource has been exhausted" &&
-                statusCode == http.StatusTooManyRequests &&
-                decideAntigravity429(body).kind == antigravity429DecisionSoftRetry &&
-                !antigravityHasQuotaResetDelayOrModelInfo(body) {
-                return false
-            }
-            return true
-        }
-    }
-    return false
-}
-
 func newAntigravityStatusErr(statusCode int, body []byte) statusErr {
     err := statusErr{code: statusCode, msg: string(body)}
     if statusCode == http.StatusTooManyRequests {
@@ -547,136 +472,13 @@ func newAntigravityStatusErr(statusCode int, body []byte) statusErr {
     return err
 }

-func (e *AntigravityExecutor) attemptCreditsFallback(
-    ctx context.Context,
-    auth *cliproxyauth.Auth,
-    httpClient *http.Client,
-    token string,
-    modelName string,
-    payload []byte,
-    stream bool,
-    alt string,
-    baseURL string,
-    originalBody []byte,
-) (*http.Response, bool) {
-    if !antigravityCreditsRetryEnabled(e.cfg) {
-        return nil, false
-    }
-    if decideAntigravity429(originalBody).kind != antigravity429DecisionFullQuotaExhausted {
-        return nil, false
-    }
-    now := time.Now()
-    if shouldForcePermanentDisableCredits(originalBody) {
-        clearAntigravityPreferCredits(auth, modelName)
-        markAntigravityCreditsPermanentlyDisabled(auth)
-        return nil, false
-    }
-
-    if antigravityHasExplicitCreditsBalanceExhaustedReason(originalBody) {
-        clearAntigravityPreferCredits(auth, modelName)
-        markAntigravityCreditsPermanentlyDisabled(auth)
-        return nil, false
-    }
-
-    if antigravityCreditsDisabled(auth, now) {
-        return nil, false
-    }
-    creditsPayload := injectEnabledCreditTypes(payload)
-    if len(creditsPayload) == 0 {
-        return nil, false
-    }
-
-    httpReq, errReq := e.buildRequest(ctx, auth, token, modelName, creditsPayload, stream, alt, baseURL)
-    if errReq != nil {
-        helps.RecordAPIResponseError(ctx, e.cfg, errReq)
-        clearAntigravityPreferCredits(auth, modelName)
-        recordAntigravityCreditsFailure(auth, now)
-        return nil, true
-    }
-    httpResp, errDo := httpClient.Do(httpReq)
-    if errDo != nil {
-        helps.RecordAPIResponseError(ctx, e.cfg, errDo)
-        clearAntigravityPreferCredits(auth, modelName)
-        recordAntigravityCreditsFailure(auth, now)
-        return nil, true
-    }
-    if httpResp.StatusCode >= http.StatusOK && httpResp.StatusCode < http.StatusMultipleChoices {
-        retryAfter, _ := parseRetryDelay(originalBody)
-        markAntigravityPreferCredits(auth, modelName, now, retryAfter)
-        clearAntigravityCreditsFailureState(auth)
-        return httpResp, true
-    }
-
-    helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone())
-    bodyBytes, errRead := io.ReadAll(httpResp.Body)
-    if errClose := httpResp.Body.Close(); errClose != nil {
-        log.Errorf("antigravity executor: close credits fallback response body error: %v", errClose)
-    }
-    if errRead != nil {
-        helps.RecordAPIResponseError(ctx, e.cfg, errRead)
-        clearAntigravityPreferCredits(auth, modelName)
-        recordAntigravityCreditsFailure(auth, now)
-        return nil, true
-    }
-    helps.AppendAPIResponseChunk(ctx, e.cfg, bodyBytes)
-    if shouldForcePermanentDisableCredits(bodyBytes) {
-        clearAntigravityPreferCredits(auth, modelName)
-        markAntigravityCreditsPermanentlyDisabled(auth)
-        return nil, true
-    }
-
-    if antigravityHasExplicitCreditsBalanceExhaustedReason(bodyBytes) {
-        clearAntigravityPreferCredits(auth, modelName)
-        markAntigravityCreditsPermanentlyDisabled(auth)
-        return nil, true
-    }
-
-    clearAntigravityPreferCredits(auth, modelName)
-    recordAntigravityCreditsFailure(auth, now)
-    return nil, true
-}
-
-func (e *AntigravityExecutor) handleDirectCreditsFailure(ctx context.Context, auth *cliproxyauth.Auth, modelName string, reqErr error) {
-    if reqErr != nil {
-        if shouldForcePermanentDisableCredits(reqErrBody(reqErr)) {
-            clearAntigravityPreferCredits(auth, modelName)
-            markAntigravityCreditsPermanentlyDisabled(auth)
-            return
-        }
-
-        if antigravityHasExplicitCreditsBalanceExhaustedReason(reqErrBody(reqErr)) {
-            clearAntigravityPreferCredits(auth, modelName)
-            markAntigravityCreditsPermanentlyDisabled(auth)
-            return
-        }
-
-        helps.RecordAPIResponseError(ctx, e.cfg, reqErr)
-    }
-    clearAntigravityPreferCredits(auth, modelName)
-    recordAntigravityCreditsFailure(auth, time.Now())
-}
-
-func reqErrBody(reqErr error) []byte {
-    if reqErr == nil {
-        return nil
-    }
-    msg := reqErr.Error()
-    if strings.TrimSpace(msg) == "" {
-        return nil
-    }
-    return []byte(msg)
-}
-
-func shouldForcePermanentDisableCredits(body []byte) bool {
-    return antigravityHasExplicitCreditsBalanceExhaustedReason(body)
-}
-
 // Execute performs a non-streaming request to the Antigravity API.
 func (e *AntigravityExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) {
     if opts.Alt == "responses/compact" {
         return resp, statusErr{code: http.StatusNotImplemented, msg: "/responses/compact not supported"}
     }
     baseModel := thinking.ParseSuffix(req.Model).ModelName
-    if inCooldown, remaining := antigravityIsInShortCooldown(auth, baseModel, time.Now()); inCooldown {
+    if inCooldown, remaining := antigravityIsInShortCooldown(auth, baseModel, time.Now()); inCooldown && !antigravityShouldBypassShortCooldown(ctx, e.cfg) {
         log.Debugf("antigravity executor: auth %s in short cooldown for model %s (%s remaining), returning 429 to switch auth", auth.ID, baseModel, remaining)
         d := remaining
         return resp, statusErr{code: http.StatusTooManyRequests, msg: fmt.Sprintf("auth in short cooldown, %s remaining", remaining), retryAfter: &d}
@@ -719,7 +521,10 @@ func (e *AntigravityExecutor) Execute(ctx context.Context, auth *cliproxyauth.Au
     }

     requestedModel := helps.PayloadRequestedModel(opts, req.Model)
-    translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "antigravity", "request", translated, originalTranslated, requestedModel)
+    requestPath := helps.PayloadRequestPath(opts)
+    translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "antigravity", "request", translated, originalTranslated, requestedModel, requestPath)
+
+    useCredits := cliproxyauth.AntigravityCreditsRequested(ctx) && antigravityCreditsRetryEnabled(e.cfg)

     baseURLs := antigravityBaseURLFallbackOrder(auth)
     httpClient := newAntigravityHTTPClient(ctx, e.cfg, auth, 0)
@@ -733,11 +538,10 @@ attemptLoop:
     for idx, baseURL := range baseURLs {
         requestPayload := translated
-        usedCreditsDirect := false
-        if antigravityCreditsRetryEnabled(e.cfg) && antigravityShouldPreferCredits(auth, baseModel, time.Now()) {
-            if creditsPayload := injectEnabledCreditTypes(translated); len(creditsPayload) > 0 {
-                requestPayload = creditsPayload
-                usedCreditsDirect = true
+        if useCredits {
+            if cp := injectEnabledCreditTypes(translated); len(cp) > 0 {
+                requestPayload = cp
+                helps.MarkCreditsUsed(ctx)
             }
         }
@@ -785,7 +589,6 @@ attemptLoop:
             wait := antigravityInstantRetryDelay(*decision.retryAfter)
             log.Debugf("antigravity executor: instant retry for model %s, waiting %s", baseModel, wait)
             if errWait := antigravityWait(ctx, wait); errWait != nil {
-                return resp, errWait
             }
         }
@@ -794,34 +597,13 @@ attemptLoop:
         case antigravity429DecisionShortCooldownSwitchAuth:
             if decision.retryAfter != nil && *decision.retryAfter > 0 {
                 markAntigravityShortCooldown(auth, baseModel, time.Now(), *decision.retryAfter)
-                log.Debugf("antigravity executor: short quota cooldown (%s) for model %s, recorded cooldown and skipping credits fallback", *decision.retryAfter, baseModel)
+                log.Debugf("antigravity executor: short quota cooldown (%s) for model %s, recorded cooldown", *decision.retryAfter, baseModel)
             }
         case antigravity429DecisionFullQuotaExhausted:
-            if usedCreditsDirect {
-                clearAntigravityPreferCredits(auth, baseModel)
-                recordAntigravityCreditsFailure(auth, time.Now())
-            } else {
-                creditsResp, _ := e.attemptCreditsFallback(ctx, auth, httpClient, token, baseModel, translated, false, opts.Alt, baseURL, bodyBytes)
-                if creditsResp != nil {
-                    helps.RecordAPIResponseMetadata(ctx, e.cfg, creditsResp.StatusCode, creditsResp.Header.Clone())
-                    creditsBody,
errCreditsRead := io.ReadAll(creditsResp.Body) - if errClose := creditsResp.Body.Close(); errClose != nil { - log.Errorf("antigravity executor: close credits success response body error: %v", errClose) - } - if errCreditsRead != nil { - helps.RecordAPIResponseError(ctx, e.cfg, errCreditsRead) - err = errCreditsRead - return resp, err - } - helps.AppendAPIResponseChunk(ctx, e.cfg, creditsBody) - reporter.Publish(ctx, helps.ParseAntigravityUsage(creditsBody)) - var param any - converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, creditsBody, &param) - resp = cliproxyexecutor.Response{Payload: converted, Headers: creditsResp.Header.Clone()} - reporter.EnsurePublished(ctx) - return resp, nil - } + if useCredits && antigravityHasExplicitCreditsBalanceExhaustedReason(bodyBytes) { + markAntigravityCreditsPermanentlyDisabled(auth) } + // No credits logic - just fall through to error return below } } @@ -870,6 +652,10 @@ attemptLoop: return resp, err } + // Success + if useCredits { + clearAntigravityCreditsFailureState(auth) + } reporter.Publish(ctx, helps.ParseAntigravityUsage(bodyBytes)) var param any converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bodyBytes, &param) @@ -895,7 +681,7 @@ attemptLoop: // executeClaudeNonStream performs a claude non-streaming request to the Antigravity API. 
func (e *AntigravityExecutor) executeClaudeNonStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { baseModel := thinking.ParseSuffix(req.Model).ModelName - if inCooldown, remaining := antigravityIsInShortCooldown(auth, baseModel, time.Now()); inCooldown { + if inCooldown, remaining := antigravityIsInShortCooldown(auth, baseModel, time.Now()); inCooldown && !antigravityShouldBypassShortCooldown(ctx, e.cfg) { log.Debugf("antigravity executor: auth %s in short cooldown for model %s (%s remaining), returning 429 to switch auth", auth.ID, baseModel, remaining) d := remaining return resp, statusErr{code: http.StatusTooManyRequests, msg: fmt.Sprintf("auth in short cooldown, %s remaining", remaining), retryAfter: &d} @@ -933,7 +719,10 @@ func (e *AntigravityExecutor) executeClaudeNonStream(ctx context.Context, auth * } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "antigravity", "request", translated, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "antigravity", "request", translated, originalTranslated, requestedModel, requestPath) + + useCredits := cliproxyauth.AntigravityCreditsRequested(ctx) && antigravityCreditsRetryEnabled(e.cfg) baseURLs := antigravityBaseURLFallbackOrder(auth) httpClient := newAntigravityHTTPClient(ctx, e.cfg, auth, 0) @@ -948,11 +737,10 @@ attemptLoop: for idx, baseURL := range baseURLs { requestPayload := translated - usedCreditsDirect := false - if antigravityCreditsRetryEnabled(e.cfg) && antigravityShouldPreferCredits(auth, baseModel, time.Now()) { - if creditsPayload := injectEnabledCreditTypes(translated); len(creditsPayload) > 0 { - requestPayload = creditsPayload - usedCreditsDirect = true + if useCredits { + if cp := 
injectEnabledCreditTypes(translated); len(cp) > 0 { + requestPayload = cp + helps.MarkCreditsUsed(ctx) } } httpReq, errReq := e.buildRequest(ctx, auth, token, baseModel, requestPayload, true, opts.Alt, baseURL) @@ -1014,7 +802,6 @@ attemptLoop: wait := antigravityInstantRetryDelay(*decision.retryAfter) log.Debugf("antigravity executor: instant retry for model %s, waiting %s", baseModel, wait) if errWait := antigravityWait(ctx, wait); errWait != nil { - return resp, errWait } } @@ -1023,25 +810,16 @@ attemptLoop: case antigravity429DecisionShortCooldownSwitchAuth: if decision.retryAfter != nil && *decision.retryAfter > 0 { markAntigravityShortCooldown(auth, baseModel, time.Now(), *decision.retryAfter) - log.Debugf("antigravity executor: short quota cooldown (%s) for model %s, recorded cooldown and skipping credits fallback", *decision.retryAfter, baseModel) + log.Debugf("antigravity executor: short quota cooldown (%s) for model %s, recorded cooldown", *decision.retryAfter, baseModel) } case antigravity429DecisionFullQuotaExhausted: - if usedCreditsDirect { - clearAntigravityPreferCredits(auth, baseModel) - recordAntigravityCreditsFailure(auth, time.Now()) - } else { - creditsResp, _ := e.attemptCreditsFallback(ctx, auth, httpClient, token, baseModel, translated, true, opts.Alt, baseURL, bodyBytes) - if creditsResp != nil { - httpResp = creditsResp - helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) - } + if useCredits && antigravityHasExplicitCreditsBalanceExhaustedReason(bodyBytes) { + markAntigravityCreditsPermanentlyDisabled(auth) } + // No credits logic - just fall through to error return below } } - if httpResp.StatusCode >= http.StatusOK && httpResp.StatusCode < http.StatusMultipleChoices { - goto streamSuccessClaudeNonStream - } lastStatus = httpResp.StatusCode lastBody = append([]byte(nil), bodyBytes...) 
lastErr = nil @@ -1085,7 +863,10 @@ attemptLoop: return resp, err } - streamSuccessClaudeNonStream: + // Stream success + if useCredits { + clearAntigravityCreditsFailureState(auth) + } out := make(chan cliproxyexecutor.StreamChunk) go func(resp *http.Response) { defer close(out) @@ -1117,7 +898,7 @@ attemptLoop: } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) + reporter.PublishFailure(ctx, errScan) out <- cliproxyexecutor.StreamChunk{Err: errScan} } else { reporter.EnsurePublished(ctx) @@ -1360,7 +1141,7 @@ func (e *AntigravityExecutor) ExecuteStream(ctx context.Context, auth *cliproxya baseModel := thinking.ParseSuffix(req.Model).ModelName ctx = context.WithValue(ctx, "alt", "") - if inCooldown, remaining := antigravityIsInShortCooldown(auth, baseModel, time.Now()); inCooldown { + if inCooldown, remaining := antigravityIsInShortCooldown(auth, baseModel, time.Now()); inCooldown && !antigravityShouldBypassShortCooldown(ctx, e.cfg) { log.Debugf("antigravity executor: auth %s in short cooldown for model %s (%s remaining), returning 429 to switch auth", auth.ID, baseModel, remaining) d := remaining return nil, statusErr{code: http.StatusTooManyRequests, msg: fmt.Sprintf("auth in short cooldown, %s remaining", remaining), retryAfter: &d} @@ -1389,6 +1170,7 @@ func (e *AntigravityExecutor) ExecuteStream(ctx context.Context, auth *cliproxya if updatedAuth != nil { auth = updatedAuth } + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true) translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) @@ -1398,7 +1180,10 @@ func (e *AntigravityExecutor) ExecuteStream(ctx context.Context, auth *cliproxya } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "antigravity", "request", translated, originalTranslated, requestedModel) + requestPath := 
helps.PayloadRequestPath(opts) + translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "antigravity", "request", translated, originalTranslated, requestedModel, requestPath) + + useCredits := cliproxyauth.AntigravityCreditsRequested(ctx) && antigravityCreditsRetryEnabled(e.cfg) baseURLs := antigravityBaseURLFallbackOrder(auth) httpClient := newAntigravityHTTPClient(ctx, e.cfg, auth, 0) @@ -1413,11 +1198,10 @@ attemptLoop: for idx, baseURL := range baseURLs { requestPayload := translated - usedCreditsDirect := false - if antigravityCreditsRetryEnabled(e.cfg) && antigravityShouldPreferCredits(auth, baseModel, time.Now()) { - if creditsPayload := injectEnabledCreditTypes(translated); len(creditsPayload) > 0 { - requestPayload = creditsPayload - usedCreditsDirect = true + if useCredits { + if cp := injectEnabledCreditTypes(translated); len(cp) > 0 { + requestPayload = cp + helps.MarkCreditsUsed(ctx) } } httpReq, errReq := e.buildRequest(ctx, auth, token, baseModel, requestPayload, true, opts.Alt, baseURL) @@ -1478,7 +1262,6 @@ attemptLoop: wait := antigravityInstantRetryDelay(*decision.retryAfter) log.Debugf("antigravity executor: instant retry for model %s, waiting %s", baseModel, wait) if errWait := antigravityWait(ctx, wait); errWait != nil { - return nil, errWait } } @@ -1487,25 +1270,16 @@ attemptLoop: case antigravity429DecisionShortCooldownSwitchAuth: if decision.retryAfter != nil && *decision.retryAfter > 0 { markAntigravityShortCooldown(auth, baseModel, time.Now(), *decision.retryAfter) - log.Debugf("antigravity executor: short quota cooldown (%s) for model %s, recorded cooldown and skipping credits fallback", *decision.retryAfter, baseModel) + log.Debugf("antigravity executor: short quota cooldown (%s) for model %s recorded", *decision.retryAfter, baseModel) } case antigravity429DecisionFullQuotaExhausted: - if usedCreditsDirect { - clearAntigravityPreferCredits(auth, baseModel) - recordAntigravityCreditsFailure(auth, time.Now()) - } else { - 
creditsResp, _ := e.attemptCreditsFallback(ctx, auth, httpClient, token, baseModel, translated, true, opts.Alt, baseURL, bodyBytes) - if creditsResp != nil { - httpResp = creditsResp - helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) - } + if useCredits && antigravityHasExplicitCreditsBalanceExhaustedReason(bodyBytes) { + markAntigravityCreditsPermanentlyDisabled(auth) } + // No credits logic - just fall through to error return below } } - if httpResp.StatusCode >= http.StatusOK && httpResp.StatusCode < http.StatusMultipleChoices { - goto streamSuccessExecuteStream - } lastStatus = httpResp.StatusCode lastBody = append([]byte(nil), bodyBytes...) lastErr = nil @@ -1549,7 +1323,10 @@ attemptLoop: return nil, err } - streamSuccessExecuteStream: + // Stream success + if useCredits { + clearAntigravityCreditsFailureState(auth) + } out := make(chan cliproxyexecutor.StreamChunk) go func(resp *http.Response) { defer close(out) @@ -1580,17 +1357,28 @@ attemptLoop: chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(payload), &param) for i := range chunks { - out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]}: + case <-ctx.Done(): + return + } } } tail := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, []byte("[DONE]"), &param) for i := range tail { - out <- cliproxyexecutor.StreamChunk{Payload: tail[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: tail[i]}: + case <-ctx.Done(): + return + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } else { reporter.EnsurePublished(ctx) } @@ 
-1614,6 +1402,9 @@ attemptLoop: // Refresh refreshes the authentication credentials using the refresh token. func (e *AntigravityExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } if auth == nil { return auth, nil } @@ -1792,6 +1583,7 @@ func (e *AntigravityExecutor) ensureAccessToken(ctx context.Context, auth *clipr accessToken := metaStringValue(auth.Metadata, "access_token") expiry := tokenExpiry(auth.Metadata) if accessToken != "" && expiry.After(time.Now().Add(refreshSkew)) { + e.maybeRefreshAntigravityCreditsHint(ctx, auth, accessToken) return accessToken, nil, nil } refreshCtx := context.Background() @@ -1800,6 +1592,18 @@ func (e *AntigravityExecutor) ensureAccessToken(ctx context.Context, auth *clipr refreshCtx = context.WithValue(refreshCtx, "cliproxy.roundtripper", rt) } } + if refreshed, handled, err := helps.RefreshAuthViaHome(refreshCtx, e.cfg, auth); handled { + if err != nil { + return "", nil, err + } + token := metaStringValue(refreshed.Metadata, "access_token") + if strings.TrimSpace(token) == "" { + return "", nil, statusErr{code: http.StatusUnauthorized, msg: "missing access token"} + } + e.maybeRefreshAntigravityCreditsHint(ctx, refreshed, token) + return token, refreshed, nil + } + updated, errRefresh := e.refreshToken(refreshCtx, auth.Clone()) if errRefresh != nil { return "", nil, errRefresh @@ -1807,6 +1611,63 @@ func (e *AntigravityExecutor) ensureAccessToken(ctx context.Context, auth *clipr return metaStringValue(updated.Metadata, "access_token"), updated, nil } +func (e *AntigravityExecutor) maybeRefreshAntigravityCreditsHint(ctx context.Context, auth *cliproxyauth.Auth, accessToken string) { + if e == nil || auth == nil || !antigravityCreditsRetryEnabled(e.cfg) { + return + } + if ctx != nil && ctx.Err() != nil { + return + } + authID := strings.TrimSpace(auth.ID) + if authID == "" 
{ + return + } + if hint, ok := cliproxyauth.GetAntigravityCreditsHint(authID); ok && hint.Known { + return + } + if strings.TrimSpace(accessToken) == "" { + accessToken = metaStringValue(auth.Metadata, "access_token") + } + if strings.TrimSpace(accessToken) == "" { + return + } + + state := &antigravityCreditsHintRefreshState{} + if existing, loaded := antigravityCreditsHintRefreshByID.LoadOrStore(authID, state); loaded { + if cast, ok := existing.(*antigravityCreditsHintRefreshState); ok && cast != nil { + state = cast + } else { + antigravityCreditsHintRefreshByID.Delete(authID) + antigravityCreditsHintRefreshByID.Store(authID, state) + } + } + + now := time.Now() + if !state.mu.TryLock() { + return + } + if !state.lastAttempt.IsZero() && now.Sub(state.lastAttempt) < antigravityCreditsHintRefreshInterval { + state.mu.Unlock() + return + } + state.lastAttempt = now + + refreshCtx := context.Background() + if ctx != nil { + if rt, ok := ctx.Value("cliproxy.roundtripper").(http.RoundTripper); ok && rt != nil { + refreshCtx = context.WithValue(refreshCtx, "cliproxy.roundtripper", rt) + } + } + refreshCtx, cancel := context.WithTimeout(refreshCtx, antigravityCreditsHintRefreshTimeout) + authCopy := auth.Clone() + + go func(state *antigravityCreditsHintRefreshState, auth *cliproxyauth.Auth, token string) { + defer cancel() + defer state.mu.Unlock() + e.updateAntigravityCreditsBalance(refreshCtx, auth, token) + }(state, authCopy, accessToken) +} + func (e *AntigravityExecutor) refreshToken(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { if auth == nil { return nil, statusErr{code: http.StatusUnauthorized, msg: "missing auth"} @@ -1882,6 +1743,7 @@ func (e *AntigravityExecutor) refreshToken(ctx context.Context, auth *cliproxyau if errProject := e.ensureAntigravityProjectID(ctx, auth, tokenResp.AccessToken); errProject != nil { log.Warnf("antigravity executor: ensure project id failed: %v", errProject) } + 
e.updateAntigravityCreditsBalance(ctx, auth, tokenResp.AccessToken) return auth, nil } @@ -1918,6 +1780,107 @@ func (e *AntigravityExecutor) ensureAntigravityProjectID(ctx context.Context, au return nil } +func (e *AntigravityExecutor) updateAntigravityCreditsBalance(ctx context.Context, auth *cliproxyauth.Auth, accessToken string) { + if auth == nil || strings.TrimSpace(auth.ID) == "" { + return + } + token := strings.TrimSpace(accessToken) + if token == "" { + token = metaStringValue(auth.Metadata, "access_token") + } + if token == "" { + return + } + + userAgent := resolveLoadCodeAssistUserAgent(auth) + loadReqBody, errMarshal := json.Marshal(map[string]any{ + "metadata": map[string]string{ + "ide_type": "ANTIGRAVITY", + "ide_version": misc.AntigravityVersionFromUserAgent(userAgent), + "ide_name": "antigravity", + }, + }) + if errMarshal != nil { + log.Debugf("antigravity executor: marshal loadCodeAssist request error: %v", errMarshal) + return + } + baseURL := buildBaseURL(auth) + endpointURL := strings.TrimSuffix(baseURL, "/") + "/v1internal:loadCodeAssist" + httpReq, errReq := http.NewRequestWithContext(ctx, http.MethodPost, endpointURL, bytes.NewReader(loadReqBody)) + if errReq != nil { + log.Debugf("antigravity executor: create loadCodeAssist request error: %v", errReq) + return + } + httpReq.Header.Set("Authorization", "Bearer "+token) + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("User-Agent", userAgent) + httpReq.Header.Set("X-Goog-Api-Client", misc.AntigravityGoogAPIClientUA) + + httpClient := newAntigravityHTTPClient(ctx, e.cfg, auth, 0) + httpResp, errDo := httpClient.Do(httpReq) + if errDo != nil { + log.Debugf("antigravity executor: loadCodeAssist request error: %v", errDo) + return + } + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("antigravity executor: close loadCodeAssist response body error: %v", errClose) + } + }() + + bodyBytes, errRead := io.ReadAll(httpResp.Body) + if 
errRead != nil || httpResp.StatusCode < http.StatusOK || httpResp.StatusCode >= http.StatusMultipleChoices { + log.Debugf("antigravity executor: loadCodeAssist returned status %d, err=%v", httpResp.StatusCode, errRead) + return + } + + authID := strings.TrimSpace(auth.ID) + paidTierID := strings.TrimSpace(gjson.GetBytes(bodyBytes, "paidTier.id").String()) + + credits := gjson.GetBytes(bodyBytes, "paidTier.availableCredits") + if !credits.IsArray() { + cliproxyauth.SetAntigravityCreditsHint(authID, cliproxyauth.AntigravityCreditsHint{ + Known: true, + Available: false, + PaidTierID: paidTierID, + UpdatedAt: time.Now(), + }) + return + } + for _, credit := range credits.Array() { + if !strings.EqualFold(credit.Get("creditType").String(), "GOOGLE_ONE_AI") { + continue + } + creditAmount, errCA := strconv.ParseFloat(strings.TrimSpace(credit.Get("creditAmount").String()), 64) + if errCA != nil { + continue + } + minAmount, errMA := strconv.ParseFloat(strings.TrimSpace(credit.Get("minimumCreditAmountForUsage").String()), 64) + if errMA != nil { + continue + } + bal := antigravityCreditsBalance{ + CreditAmount: creditAmount, + MinCreditAmount: minAmount, + PaidTierID: paidTierID, + Known: true, + } + antigravityCreditsBalanceByAuth.Store(authID, bal) + cliproxyauth.SetAntigravityCreditsHint(authID, cliproxyauth.AntigravityCreditsHint{ + Known: true, + Available: creditAmount >= minAmount, + CreditAmount: creditAmount, + MinCreditAmount: minAmount, + PaidTierID: paidTierID, + UpdatedAt: time.Now(), + }) + if creditAmount >= minAmount { + clearAntigravityCreditsPermanentlyDisabled(auth) + } + return + } +} + func (e *AntigravityExecutor) buildRequest(ctx context.Context, auth *cliproxyauth.Auth, token, modelName string, payload []byte, stream bool, alt, baseURL string) (*http.Request, error) { if token == "" { return nil, statusErr{code: http.StatusUnauthorized, msg: "missing access token"} @@ -2149,19 +2112,28 @@ func resolveHost(base string) string { } func 
resolveUserAgent(auth *cliproxyauth.Auth) string { + return misc.AntigravityRequestUserAgent(antigravityConfiguredUserAgent(auth)) +} + +func resolveLoadCodeAssistUserAgent(auth *cliproxyauth.Auth) string { + return misc.AntigravityLoadCodeAssistUserAgent(antigravityConfiguredUserAgent(auth)) +} + +func antigravityConfiguredUserAgent(auth *cliproxyauth.Auth) string { + raw := "" if auth != nil { if auth.Attributes != nil { if ua := strings.TrimSpace(auth.Attributes["user_agent"]); ua != "" { - return ua + raw = ua } } - if auth.Metadata != nil { + if raw == "" && auth.Metadata != nil { if ua, ok := auth.Metadata["user_agent"].(string); ok && strings.TrimSpace(ua) != "" { - return strings.TrimSpace(ua) + raw = strings.TrimSpace(ua) } } } - return misc.AntigravityUserAgent() + return raw } func antigravityRetryAttempts(auth *cliproxyauth.Auth, cfg *config.Config) int { @@ -2220,6 +2192,10 @@ func antigravityShouldRetrySoftRateLimit(statusCode int, body []byte) bool { return decideAntigravity429(body).kind == antigravity429DecisionSoftRetry } +func antigravityShouldBypassShortCooldown(ctx context.Context, cfg *config.Config) bool { + return cliproxyauth.AntigravityCreditsRequested(ctx) && antigravityCreditsRetryEnabled(cfg) +} + func antigravitySoftRateLimitDelay(attempt int) time.Duration { if attempt < 0 { attempt = 0 @@ -2321,9 +2297,9 @@ var antigravityBaseURLFallbackOrder = func(auth *cliproxyauth.Auth) []string { return []string{base} } return []string{ - antigravityBaseURLProd, antigravityBaseURLDaily, - antigravitySandboxBaseURLDaily, + antigravityBaseURLProd, + // antigravitySandboxBaseURLDaily, } } diff --git a/internal/runtime/executor/antigravity_executor_buildrequest_test.go b/internal/runtime/executor/antigravity_executor_buildrequest_test.go index ed2d79e632..f0711752e4 100644 --- a/internal/runtime/executor/antigravity_executor_buildrequest_test.go +++ b/internal/runtime/executor/antigravity_executor_buildrequest_test.go @@ -6,7 +6,7 @@ import ( "io" 
"testing" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) func TestAntigravityBuildRequest_SanitizesGeminiToolSchema(t *testing.T) { diff --git a/internal/runtime/executor/antigravity_executor_credits_test.go b/internal/runtime/executor/antigravity_executor_credits_test.go index cf968ac794..e16e64434f 100644 --- a/internal/runtime/executor/antigravity_executor_credits_test.go +++ b/internal/runtime/executor/antigravity_executor_credits_test.go @@ -10,16 +10,17 @@ import ( "testing" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" ) func resetAntigravityCreditsRetryState() { antigravityCreditsFailureByAuth = sync.Map{} - antigravityPreferCreditsByModel = sync.Map{} antigravityShortCooldownByAuth = sync.Map{} + antigravityCreditsBalanceByAuth = sync.Map{} + antigravityCreditsHintRefreshByID = sync.Map{} } func TestClassifyAntigravity429(t *testing.T) { @@ -30,6 +31,43 @@ func TestClassifyAntigravity429(t *testing.T) { } }) + t.Run("standard antigravity rate limit with ui message stays rate limited", func(t *testing.T) { + body := []byte(`{ + "error": { + "code": 429, + "message": "You have exhausted your capacity on this model. 
Your quota will reset after 0s.", + "status": "RESOURCE_EXHAUSTED", + "details": [ + { + "@type": "type.googleapis.com/google.rpc.ErrorInfo", + "reason": "RATE_LIMIT_EXCEEDED", + "domain": "cloudcode-pa.googleapis.com", + "metadata": { + "model": "claude-opus-4-6-thinking", + "quotaResetDelay": "479.417207ms", + "quotaResetTimeStamp": "2026-04-20T09:19:49Z", + "uiMessage": "true" + } + }, + { + "@type": "type.googleapis.com/google.rpc.RetryInfo", + "retryDelay": "0.479417207s" + } + ] + } + }`) + if got := classifyAntigravity429(body); got != antigravity429RateLimited { + t.Fatalf("classifyAntigravity429() = %q, want %q", got, antigravity429RateLimited) + } + decision := decideAntigravity429(body) + if decision.kind != antigravity429DecisionInstantRetrySameAuth { + t.Fatalf("decideAntigravity429().kind = %q, want %q", decision.kind, antigravity429DecisionInstantRetrySameAuth) + } + if decision.retryAfter == nil { + t.Fatal("decideAntigravity429().retryAfter = nil") + } + }) + t.Run("structured rate limit", func(t *testing.T) { body := []byte(`{ "error": { @@ -67,8 +105,31 @@ func TestClassifyAntigravity429(t *testing.T) { }) } +func TestAntigravityShouldRetryNoCapacity_Standard503(t *testing.T) { + body := []byte(`{ + "error": { + "code": 503, + "message": "No capacity available for model gemini-3.1-flash-image on the server", + "status": "UNAVAILABLE", + "details": [ + { + "@type": "type.googleapis.com/google.rpc.ErrorInfo", + "reason": "MODEL_CAPACITY_EXHAUSTED", + "domain": "cloudcode-pa.googleapis.com", + "metadata": { + "model": "gemini-3.1-flash-image" + } + } + ] + } + }`) + if !antigravityShouldRetryNoCapacity(http.StatusServiceUnavailable, body) { + t.Fatal("antigravityShouldRetryNoCapacity() = false, want true") + } +} + func TestInjectEnabledCreditTypes(t *testing.T) { - body := []byte(`{"model":"gemini-2.5-flash","request":{}}`) + body := []byte(`{"model":"claude-sonnet-4-6","request":{}}`) got := injectEnabledCreditTypes(body) if got == nil { 
t.Fatal("injectEnabledCreditTypes() returned nil") @@ -82,34 +143,18 @@ func TestInjectEnabledCreditTypes(t *testing.T) { } } -func TestShouldMarkAntigravityCreditsExhausted(t *testing.T) { - t.Run("credit errors are marked", func(t *testing.T) { - for _, body := range [][]byte{ - []byte(`{"error":{"message":"Insufficient GOOGLE_ONE_AI credits"}}`), - []byte(`{"error":{"message":"minimumCreditAmountForUsage requirement not met"}}`), - } { - if !shouldMarkAntigravityCreditsExhausted(http.StatusForbidden, body, nil) { - t.Fatalf("shouldMarkAntigravityCreditsExhausted(%s) = false, want true", string(body)) - } - } - }) - - t.Run("transient 429 resource exhausted is not marked", func(t *testing.T) { - body := []byte(`{"error":{"code":429,"message":"Resource has been exhausted (e.g. check quota).","status":"RESOURCE_EXHAUSTED"}}`) - if shouldMarkAntigravityCreditsExhausted(http.StatusTooManyRequests, body, nil) { - t.Fatalf("shouldMarkAntigravityCreditsExhausted(%s) = true, want false", string(body)) - } - }) - - t.Run("resource exhausted with quota metadata is still marked", func(t *testing.T) { - body := []byte(`{"error":{"code":429,"message":"Resource has been exhausted","status":"RESOURCE_EXHAUSTED","details":[{"@type":"type.googleapis.com/google.rpc.ErrorInfo","metadata":{"quotaResetDelay":"1h","model":"claude-sonnet-4-6"}}]}}`) - if !shouldMarkAntigravityCreditsExhausted(http.StatusTooManyRequests, body, nil) { - t.Fatalf("shouldMarkAntigravityCreditsExhausted(%s) = false, want true", string(body)) - } - }) - - if shouldMarkAntigravityCreditsExhausted(http.StatusServiceUnavailable, []byte(`{"error":{"message":"credits exhausted"}}`), nil) { - t.Fatal("shouldMarkAntigravityCreditsExhausted() = true for 5xx, want false") +func TestParseRetryDelay_HumanReadableDuration(t *testing.T) { + body := []byte(`{"error":{"message":"You have exhausted your capacity on this model. 
Your quota will reset after 1h43m56s."}}`) + retryAfter, err := parseRetryDelay(body) + if err != nil { + t.Fatalf("parseRetryDelay() error = %v", err) + } + if retryAfter == nil { + t.Fatal("parseRetryDelay() returned nil") + } + want := time.Hour + 43*time.Minute + 56*time.Second + if *retryAfter != want { + t.Fatalf("parseRetryDelay() = %v, want %v", *retryAfter, want) } } @@ -147,7 +192,7 @@ func TestAntigravityExecute_RetriesTransient429ResourceExhausted(t *testing.T) { } resp, err := exec.Execute(context.Background(), auth, cliproxyexecutor.Request{ - Model: "gemini-2.5-flash", + Model: "claude-sonnet-4-6", Payload: []byte(`{"request":{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`), }, cliproxyexecutor.Options{ SourceFormat: sdktranslator.FormatAntigravity, @@ -163,32 +208,23 @@ func TestAntigravityExecute_RetriesTransient429ResourceExhausted(t *testing.T) { } } -func TestAntigravityExecute_RetriesQuotaExhaustedWithCredits(t *testing.T) { +func TestAntigravityExecute_CreditsInjectedWhenConductorRequests(t *testing.T) { resetAntigravityCreditsRetryState() t.Cleanup(resetAntigravityCreditsRetryState) - var ( - mu sync.Mutex - requestBodies []string - ) - + var requestBodies []string server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { body, _ := io.ReadAll(r.Body) _ = r.Body.Close() - - mu.Lock() - requestBodies = append(requestBodies, string(body)) - reqNum := len(requestBodies) - mu.Unlock() - - if reqNum == 1 { - w.WriteHeader(http.StatusTooManyRequests) - _, _ = w.Write([]byte(`{"error":{"status":"RESOURCE_EXHAUSTED","message":"QUOTA_EXHAUSTED"}}`)) + if r.URL.Path == "/v1internal:loadCodeAssist" { + w.Header().Set("Content-Type", "application/json") + _, _ = w.Write([]byte(`{"paidTier":{"id":"tier-1","availableCredits":[{"creditType":"GOOGLE_ONE_AI","creditAmount":"25000","minimumCreditAmountForUsage":"50"}]}}`)) return } + requestBodies = append(requestBodies, string(body)) if 
!strings.Contains(string(body), `"enabledCreditTypes":["GOOGLE_ONE_AI"]`) { - t.Fatalf("second request body missing enabledCreditTypes: %s", string(body)) + t.Fatalf("request body missing enabledCreditTypes: %s", string(body)) } w.Header().Set("Content-Type", "application/json") _, _ = w.Write([]byte(`{"response":{"candidates":[{"content":{"role":"model","parts":[{"text":"ok"}]}}],"usageMetadata":{"promptTokenCount":1,"candidatesTokenCount":1,"totalTokenCount":2}}}`)) @@ -199,7 +235,7 @@ func TestAntigravityExecute_RetriesQuotaExhaustedWithCredits(t *testing.T) { QuotaExceeded: config.QuotaExceeded{AntigravityCredits: true}, }) auth := &cliproxyauth.Auth{ - ID: "auth-credits-ok", + ID: "auth-credits-conductor", Attributes: map[string]string{ "base_url": server.URL, }, @@ -210,8 +246,11 @@ func TestAntigravityExecute_RetriesQuotaExhaustedWithCredits(t *testing.T) { }, } - resp, err := exec.Execute(context.Background(), auth, cliproxyexecutor.Request{ - Model: "gemini-2.5-flash", + // Simulate conductor setting credits requested flag in context + ctx := cliproxyauth.WithAntigravityCredits(context.Background()) + + resp, err := exec.Execute(ctx, auth, cliproxyexecutor.Request{ + Model: "claude-sonnet-4-6", Payload: []byte(`{"request":{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`), }, cliproxyexecutor.Options{ SourceFormat: sdktranslator.FormatAntigravity, @@ -222,21 +261,25 @@ func TestAntigravityExecute_RetriesQuotaExhaustedWithCredits(t *testing.T) { if len(resp.Payload) == 0 { t.Fatal("Execute() returned empty payload") } - - mu.Lock() - defer mu.Unlock() - if len(requestBodies) != 2 { - t.Fatalf("request count = %d, want 2", len(requestBodies)) + if len(requestBodies) != 1 { + t.Fatalf("request count = %d, want 1", len(requestBodies)) } } -func TestAntigravityExecute_SkipsCreditsRetryWhenAlreadyExhausted(t *testing.T) { +func TestAntigravityExecute_NoCreditsWithoutConductorFlag(t *testing.T) { resetAntigravityCreditsRetryState() 
t.Cleanup(resetAntigravityCreditsRetryState) - var requestCount int + var requestBodies []string server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - requestCount++ + body, _ := io.ReadAll(r.Body) + _ = r.Body.Close() + if r.URL.Path == "/v1internal:loadCodeAssist" { + w.Header().Set("Content-Type", "application/json") + _, _ = w.Write([]byte(`{"paidTier":{"id":"tier-1","availableCredits":[{"creditType":"GOOGLE_ONE_AI","creditAmount":"25000","minimumCreditAmountForUsage":"50"}]}}`)) + return + } + requestBodies = append(requestBodies, string(body)) w.WriteHeader(http.StatusTooManyRequests) _, _ = w.Write([]byte(`{"error":{"status":"RESOURCE_EXHAUSTED","message":"QUOTA_EXHAUSTED"}}`)) })) @@ -246,7 +289,7 @@ func TestAntigravityExecute_SkipsCreditsRetryWhenAlreadyExhausted(t *testing.T) QuotaExceeded: config.QuotaExceeded{AntigravityCredits: true}, }) auth := &cliproxyauth.Auth{ - ID: "auth-credits-exhausted", + ID: "auth-no-conductor-flag", Attributes: map[string]string{ "base_url": server.URL, }, @@ -256,10 +299,10 @@ func TestAntigravityExecute_SkipsCreditsRetryWhenAlreadyExhausted(t *testing.T) "expired": time.Now().Add(1 * time.Hour).Format(time.RFC3339), }, } - recordAntigravityCreditsFailure(auth, time.Now()) + // No conductor credits flag set in context _, err := exec.Execute(context.Background(), auth, cliproxyexecutor.Request{ - Model: "gemini-2.5-flash", + Model: "claude-sonnet-4-6", Payload: []byte(`{"request":{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`), }, cliproxyexecutor.Options{ SourceFormat: sdktranslator.FormatAntigravity, @@ -267,224 +310,194 @@ func TestAntigravityExecute_SkipsCreditsRetryWhenAlreadyExhausted(t *testing.T) if err == nil { t.Fatal("Execute() error = nil, want 429") } - sErr, ok := err.(statusErr) - if !ok { - t.Fatalf("Execute() error type = %T, want statusErr", err) - } - if got := sErr.StatusCode(); got != http.StatusTooManyRequests { - t.Fatalf("Execute() status code = %d, want %d", got, http.StatusTooManyRequests) + if len(requestBodies) != 1 { + t.Fatalf("request count = %d, want 1", len(requestBodies)) } - if requestCount != 1 { - t.Fatalf("request count = %d, want 1", requestCount) + // Should NOT contain credits since conductor didn't request them + if strings.Contains(requestBodies[0], `"enabledCreditTypes"`) { + t.Fatalf("request should not contain enabledCreditTypes without conductor flag: %s", requestBodies[0]) } } -func TestAntigravityExecute_PrefersCreditsAfterSuccessfulFallback(t *testing.T) { - resetAntigravityCreditsRetryState() - t.Cleanup(resetAntigravityCreditsRetryState) - - var ( - mu sync.Mutex - requestBodies []string - ) +func TestAntigravityAuthHasCredits(t *testing.T) { + t.Run("sufficient balance", func(t *testing.T) { + resetAntigravityCreditsRetryState() + auth := &cliproxyauth.Auth{ID: "test-sufficient"} + antigravityCreditsBalanceByAuth.Store("test-sufficient", antigravityCreditsBalance{ + CreditAmount: 25000, + MinCreditAmount: 50, + Known: true, + }) + if !antigravityAuthHasCredits(auth) { + t.Fatal("antigravityAuthHasCredits() = false, want true") + } + }) - server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - body, _ := io.ReadAll(r.Body) - _ = r.Body.Close() + t.Run("insufficient balance", func(t *testing.T) { + resetAntigravityCreditsRetryState() + auth := &cliproxyauth.Auth{ID: "test-insufficient"} + antigravityCreditsBalanceByAuth.Store("test-insufficient", antigravityCreditsBalance{ + CreditAmount: 30, + MinCreditAmount: 50, + Known: true, + }) + if antigravityAuthHasCredits(auth) { + t.Fatal("antigravityAuthHasCredits() = true, want false") + } + }) - mu.Lock() - requestBodies = append(requestBodies, string(body)) - reqNum := len(requestBodies) - mu.Unlock() + t.Run("no balance stored returns true (optimistic)", func(t *testing.T) { + resetAntigravityCreditsRetryState() + auth := &cliproxyauth.Auth{ID: "test-no-balance"} + if
!antigravityAuthHasCredits(auth) { + t.Fatal("antigravityAuthHasCredits() = false with no balance stored, want true (optimistic default)") + } + }) - switch reqNum { - case 1: - w.WriteHeader(http.StatusTooManyRequests) - _, _ = w.Write([]byte(`{"error":{"status":"RESOURCE_EXHAUSTED","details":[{"@type":"type.googleapis.com/google.rpc.ErrorInfo","reason":"QUOTA_EXHAUSTED"},{"@type":"type.googleapis.com/google.rpc.RetryInfo","retryDelay":"10s"}]}}`)) - case 2, 3: - if !strings.Contains(string(body), `"enabledCreditTypes":["GOOGLE_ONE_AI"]`) { - t.Fatalf("request %d body missing enabledCreditTypes: %s", reqNum, string(body)) - } - w.Header().Set("Content-Type", "application/json") - _, _ = w.Write([]byte(`{"response":{"candidates":[{"content":{"role":"model","parts":[{"text":"OK"}]}}],"usageMetadata":{"promptTokenCount":1,"candidatesTokenCount":1,"totalTokenCount":2}}}`)) - default: - t.Fatalf("unexpected request count %d", reqNum) + t.Run("nil auth returns false", func(t *testing.T) { + if antigravityAuthHasCredits(nil) { + t.Fatal("antigravityAuthHasCredits(nil) = true, want false") } - })) - defer server.Close() + }) - exec := NewAntigravityExecutor(&config.Config{ - QuotaExceeded: config.QuotaExceeded{AntigravityCredits: true}, + t.Run("empty ID returns false", func(t *testing.T) { + auth := &cliproxyauth.Auth{} + if antigravityAuthHasCredits(auth) { + t.Fatal("antigravityAuthHasCredits(empty ID) = true, want false") + } }) - auth := &cliproxyauth.Auth{ - ID: "auth-prefer-credits", - Attributes: map[string]string{ - "base_url": server.URL, - }, - Metadata: map[string]any{ - "access_token": "token", - "project_id": "project-1", - "expired": time.Now().Add(1 * time.Hour).Format(time.RFC3339), - }, - } - request := cliproxyexecutor.Request{ - Model: "gemini-2.5-flash", - Payload: []byte(`{"request":{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`), - } - opts := cliproxyexecutor.Options{SourceFormat: sdktranslator.FormatAntigravity} + t.Run("unknown balance returns false", func(t *testing.T) { + resetAntigravityCreditsRetryState() + auth := &cliproxyauth.Auth{ID: "test-unknown"} + antigravityCreditsBalanceByAuth.Store("test-unknown", antigravityCreditsBalance{ + Known: false, + }) + if antigravityAuthHasCredits(auth) { + t.Fatal("antigravityAuthHasCredits() = true for unknown balance, want false") + } + }) +} - if _, err := exec.Execute(context.Background(), auth, request, opts); err != nil { - t.Fatalf("first Execute() error = %v", err) - } - if _, err := exec.Execute(context.Background(), auth, request, opts); err != nil { - t.Fatalf("second Execute() error = %v", err) - } +type roundTripperFunc func(*http.Request) (*http.Response, error) - mu.Lock() - defer mu.Unlock() - if len(requestBodies) != 3 { - t.Fatalf("request count = %d, want 3", len(requestBodies)) - } - if strings.Contains(requestBodies[0], `"enabledCreditTypes":["GOOGLE_ONE_AI"]`) { - t.Fatalf("first request unexpectedly used credits: %s", requestBodies[0]) - } - if !strings.Contains(requestBodies[1], `"enabledCreditTypes":["GOOGLE_ONE_AI"]`) { - t.Fatalf("fallback request missing credits: %s", requestBodies[1]) - } - if !strings.Contains(requestBodies[2], `"enabledCreditTypes":["GOOGLE_ONE_AI"]`) { - t.Fatalf("preferred request missing credits: %s", requestBodies[2]) - } +func (f roundTripperFunc) RoundTrip(req *http.Request) (*http.Response, error) { + return f(req) } -func TestAntigravityExecute_PreservesBaseURLFallbackAfterCreditsRetryFailure(t *testing.T) { +func TestEnsureAccessToken_WarmTokenLoadsCreditsHint(t *testing.T) { resetAntigravityCreditsRetryState() t.Cleanup(resetAntigravityCreditsRetryState) - var ( - mu sync.Mutex - firstCount int - secondCount int - ) - - firstServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - body, _ := io.ReadAll(r.Body) - _ = r.Body.Close() - - mu.Lock() - firstCount++ - reqNum := firstCount - mu.Unlock() - - switch reqNum { - case 1: -
w.WriteHeader(http.StatusTooManyRequests) - _, _ = w.Write([]byte(`{"error":{"status":"RESOURCE_EXHAUSTED","details":[{"@type":"type.googleapis.com/google.rpc.ErrorInfo","reason":"QUOTA_EXHAUSTED"}]}}`)) - case 2: - if !strings.Contains(string(body), `"enabledCreditTypes":["GOOGLE_ONE_AI"]`) { - t.Fatalf("credits retry missing enabledCreditTypes: %s", string(body)) - } - w.WriteHeader(http.StatusForbidden) - _, _ = w.Write([]byte(`{"error":{"message":"permission denied"}}`)) - default: - t.Fatalf("unexpected first server request count %d", reqNum) - } - })) - defer firstServer.Close() - - secondServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - mu.Lock() - secondCount++ - mu.Unlock() - w.Header().Set("Content-Type", "application/json") - _, _ = w.Write([]byte(`{"response":{"candidates":[{"content":{"role":"model","parts":[{"text":"ok"}]}}],"usageMetadata":{"promptTokenCount":1,"candidatesTokenCount":1,"totalTokenCount":2}}}`)) - })) - defer secondServer.Close() - exec := NewAntigravityExecutor(&config.Config{ QuotaExceeded: config.QuotaExceeded{AntigravityCredits: true}, }) auth := &cliproxyauth.Auth{ - ID: "auth-baseurl-fallback", - Attributes: map[string]string{ - "base_url": firstServer.URL, - }, + ID: "auth-warm-token-credits", Metadata: map[string]any{ "access_token": "token", - "project_id": "project-1", "expired": time.Now().Add(1 * time.Hour).Format(time.RFC3339), }, } + ctx := context.WithValue(context.Background(), "cliproxy.roundtripper", roundTripperFunc(func(req *http.Request) (*http.Response, error) { + if req.URL.String() != "https://cloudcode-pa.googleapis.com/v1internal:loadCodeAssist" { + t.Fatalf("unexpected request url %s", req.URL.String()) + } + return &http.Response{ + StatusCode: http.StatusOK, + Header: make(http.Header), + Body: 
io.NopCloser(strings.NewReader(`{"paidTier":{"id":"tier-1","availableCredits":[{"creditType":"GOOGLE_ONE_AI","creditAmount":"25000","minimumCreditAmountForUsage":"50"}]}}`)), + }, nil + })) - originalOrder := antigravityBaseURLFallbackOrder - defer func() { antigravityBaseURLFallbackOrder = originalOrder }() - antigravityBaseURLFallbackOrder = func(auth *cliproxyauth.Auth) []string { - return []string{firstServer.URL, secondServer.URL} - } - - resp, err := exec.Execute(context.Background(), auth, cliproxyexecutor.Request{ - Model: "gemini-2.5-flash", - Payload: []byte(`{"request":{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`), - }, cliproxyexecutor.Options{ - SourceFormat: sdktranslator.FormatAntigravity, - }) + token, updatedAuth, err := exec.ensureAccessToken(ctx, auth) if err != nil { - t.Fatalf("Execute() error = %v", err) + t.Fatalf("ensureAccessToken() error = %v", err) } - if len(resp.Payload) == 0 { - t.Fatal("Execute() returned empty payload") + if token != "token" { + t.Fatalf("ensureAccessToken() token = %q, want %q", token, "token") + } + if updatedAuth != nil { + t.Fatalf("ensureAccessToken() updatedAuth = %v, want nil", updatedAuth) } - if firstCount != 2 { - t.Fatalf("first server request count = %d, want 2", firstCount) + deadline := time.Now().Add(2 * time.Second) + for time.Now().Before(deadline) && !cliproxyauth.HasKnownAntigravityCreditsHint(auth.ID) { + time.Sleep(10 * time.Millisecond) } - if secondCount != 1 { - t.Fatalf("second server request count = %d, want 1", secondCount) + if !cliproxyauth.HasKnownAntigravityCreditsHint(auth.ID) { + t.Fatal("expected credits hint to be populated for warm token auth") + } + hint, ok := cliproxyauth.GetAntigravityCreditsHint(auth.ID) + if !ok { + t.Fatal("expected credits hint lookup to succeed") + } + if !hint.Available { + t.Fatalf("hint.Available = %v, want true", hint.Available) + } + if hint.CreditAmount != 25000 || hint.MinCreditAmount != 50 { + t.Fatalf("hint amounts = (%v, %v), want (25000, 50)", hint.CreditAmount, hint.MinCreditAmount) } } -func TestAntigravityExecute_DoesNotDirectInjectCreditsWhenFlagDisabled(t *testing.T) { +func TestUpdateAntigravityCreditsBalance_LoadCodeAssistUserAgent(t *testing.T) { resetAntigravityCreditsRetryState() t.Cleanup(resetAntigravityCreditsRetryState) - var requestBodies []string - server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - body, _ := io.ReadAll(r.Body) - _ = r.Body.Close() - requestBodies = append(requestBodies, string(body)) - w.WriteHeader(http.StatusTooManyRequests) - _, _ = w.Write([]byte(`{"error":{"status":"RESOURCE_EXHAUSTED","message":"QUOTA_EXHAUSTED"}}`)) - })) - defer server.Close() - - exec := NewAntigravityExecutor(&config.Config{ - QuotaExceeded: config.QuotaExceeded{AntigravityCredits: false}, - }) + exec := NewAntigravityExecutor(&config.Config{}) + const userAgent = "antigravity/1.23.2 windows/amd64 google-api-nodejs-client/10.3.0" auth := &cliproxyauth.Auth{ - ID: "auth-flag-disabled", - Attributes: map[string]string{ - "base_url": server.URL, - }, - Metadata: map[string]any{ - "access_token": "token", - "project_id": "project-1", - "expired": time.Now().Add(1 * time.Hour).Format(time.RFC3339), - }, + ID: "auth-load-code-assist-ua", + Attributes: map[string]string{"user_agent": userAgent}, } - markAntigravityPreferCredits(auth, "gemini-2.5-flash", time.Now(), nil) + ctx := context.WithValue(context.Background(), "cliproxy.roundtripper", roundTripperFunc(func(req *http.Request) (*http.Response, error) { + if req.URL.String() != "https://cloudcode-pa.googleapis.com/v1internal:loadCodeAssist" { + t.Fatalf("unexpected request url %s", req.URL.String()) + } + if got := req.Header.Get("User-Agent"); got != userAgent { + t.Fatalf("User-Agent = %q, want %q", got, userAgent) + } + if got := req.Header.Get("X-Goog-Api-Client"); got != "gl-node/22.21.1" { + t.Fatalf("X-Goog-Api-Client = %q, want %q", got, "gl-node/22.21.1") + } + body, _ :=
io.ReadAll(req.Body) + _ = req.Body.Close() + if string(body) != `{"metadata":{"ide_name":"antigravity","ide_type":"ANTIGRAVITY","ide_version":"1.23.2"}}` { + t.Fatalf("loadCodeAssist body = %s", string(body)) + } + return &http.Response{ + StatusCode: http.StatusOK, + Header: make(http.Header), + Body: io.NopCloser(strings.NewReader(`{"paidTier":{"id":"tier-1","availableCredits":[{"creditType":"GOOGLE_ONE_AI","creditAmount":"25000","minimumCreditAmountForUsage":"50"}]}}`)), + }, nil + })) - _, err := exec.Execute(context.Background(), auth, cliproxyexecutor.Request{ - Model: "gemini-2.5-flash", - Payload: []byte(`{"request":{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`), - }, cliproxyexecutor.Options{ - SourceFormat: sdktranslator.FormatAntigravity, - }) - if err == nil { - t.Fatal("Execute() error = nil, want 429") - } - if len(requestBodies) != 1 { - t.Fatalf("request count = %d, want 1", len(requestBodies)) - } - if strings.Contains(requestBodies[0], `"enabledCreditTypes":["GOOGLE_ONE_AI"]`) { - t.Fatalf("request unexpectedly used enabledCreditTypes with flag disabled: %s", requestBodies[0]) + exec.updateAntigravityCreditsBalance(ctx, auth, "token") +} + +func TestParseMetaFloat(t *testing.T) { + tests := []struct { + name string + value any + wantVal float64 + wantOK bool + }{ + {"string", "25000", 25000, true}, + {"float64", float64(100), 100, true}, + {"int", int(50), 50, true}, + {"int64", int64(75), 75, true}, + {"empty string", "", 0, false}, + {"invalid string", "abc", 0, false}, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + meta := map[string]any{"key": tt.value} + got, ok := parseMetaFloat(meta, "key") + if ok != tt.wantOK { + t.Fatalf("parseMetaFloat() ok = %v, want %v", ok, tt.wantOK) + } + if ok && got != tt.wantVal { + t.Fatalf("parseMetaFloat() = %f, want %f", got, tt.wantVal) + } + }) } } diff --git a/internal/runtime/executor/antigravity_executor_signature_test.go b/internal/runtime/executor/antigravity_executor_signature_test.go index 226daf5c67..7d84bfe890 100644 --- a/internal/runtime/executor/antigravity_executor_signature_test.go +++ b/internal/runtime/executor/antigravity_executor_signature_test.go @@ -10,10 +10,10 @@ import ( "testing" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/cache" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/cache" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" ) func testGeminiSignaturePayload() string { diff --git a/internal/runtime/executor/bt_executor.go b/internal/runtime/executor/bt_executor.go new file mode 100644 index 0000000000..737cd0f267 --- /dev/null +++ b/internal/runtime/executor/bt_executor.go @@ -0,0 +1,429 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "strings" + "time" + + btauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/bt" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/sjson" +) + +type BTExecutor struct { + cfg
*config.Config +} + +func NewBTExecutor(cfg *config.Config) *BTExecutor { + return &BTExecutor{cfg: cfg} +} + +func (e *BTExecutor) Identifier() string { return "bt" } + +func btCredentials(auth *cliproxyauth.Auth) (uid, accessKey, serverID string) { + uid = metaStringValue(auth.Metadata, "uid") + accessKey = metaStringValue(auth.Metadata, "access_key") + serverID = metaStringValue(auth.Metadata, "serverid") + return +} + +func (e *BTExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + uid, accessKey, serverID := btCredentials(auth) + if uid == "" || accessKey == "" { + return fmt.Errorf("bt executor: missing credentials") + } + req.Header.Set("uid", uid) + req.Header.Set("access-key", accessKey) + req.Header.Set("serverid", serverID) + req.Header.Set("appid", btauth.AppID) + req.Header.Set("Content-Type", "application/json") + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(req, attrs) + return nil +} + +func (e *BTExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("bt executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if err := e.PrepareRequest(httpReq, auth); err != nil { + return nil, err + } + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +func (e *BTExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + endpoint := "/chat/completions" + if opts.Alt == 
"responses/compact" { + to = sdktranslator.FromString("openai-response") + endpoint = "/responses/compact" + } + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalPayload := originalPayloadSource + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, opts.Stream) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, opts.Stream) + requestedModel := helps.PayloadRequestedModel(opts, req.Model) + translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel, "") + if opts.Alt == "responses/compact" { + if updated, errDelete := sjson.DeleteBytes(translated, "stream"); errDelete == nil { + translated = updated + } + } + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return resp, err + } + + uid, accessKey, serverID := btCredentials(auth) + if uid == "" || accessKey == "" { + err = statusErr{code: http.StatusUnauthorized, msg: "bt: missing credentials in auth metadata"} + return resp, err + } + + baseURL := btauth.CloudURL + "/plugin_api/chat/openai/v1" + upstreamURL := strings.TrimSuffix(baseURL, "/") + endpoint + + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, upstreamURL, bytes.NewReader(translated)) + if err != nil { + return resp, err + } + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("uid", uid) + httpReq.Header.Set("access-key", accessKey) + httpReq.Header.Set("serverid", serverID) + httpReq.Header.Set("appid", btauth.AppID) + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + httpReq.Header.Set("User-Agent", "cli-proxy-bt") + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + 
authType, authValue = auth.AccountInfo() + } + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: upstreamURL, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + helps.RecordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("bt executor: close response body error: %v", errClose) + } + }() + helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + helps.AppendAPIResponseChunk(ctx, e.cfg, b) + helps.LogWithRequestID(ctx).Debugf("bt executor: request error, status: %d, message: %s", httpResp.StatusCode, helps.SummarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return resp, err + } + body, err := io.ReadAll(httpResp.Body) + if err != nil { + helps.RecordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + helps.AppendAPIResponseChunk(ctx, e.cfg, body) + reporter.Publish(ctx, helps.ParseOpenAIUsage(body)) + reporter.EnsurePublished(ctx) + var param any + out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, body, ¶m) + resp = cliproxyexecutor.Response{Payload: out, Headers: httpResp.Header.Clone()} + return resp, nil +} + +func (e *BTExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), 
baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalPayload := originalPayloadSource + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) + requestedModel := helps.PayloadRequestedModel(opts, req.Model) + translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel, "") + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return nil, err + } + + translated, _ = sjson.SetBytes(translated, "stream_options.include_usage", true) + + uid, accessKey, serverID := btCredentials(auth) + if uid == "" || accessKey == "" { + return nil, fmt.Errorf("bt: missing credentials in auth metadata") + } + + baseURL := btauth.CloudURL + "/plugin_api/chat/openai/v1" + upstreamURL := strings.TrimSuffix(baseURL, "/") + "/chat/completions" + + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, upstreamURL, bytes.NewReader(translated)) + if err != nil { + return nil, err + } + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("uid", uid) + httpReq.Header.Set("access-key", accessKey) + httpReq.Header.Set("serverid", serverID) + httpReq.Header.Set("appid", btauth.AppID) + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + httpReq.Header.Set("User-Agent", "cli-proxy-bt") + httpReq.Header.Set("Accept", "text/event-stream") + httpReq.Header.Set("Cache-Control", "no-cache") + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + 
authType, authValue = auth.AccountInfo() + } + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: upstreamURL, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + helps.RecordAPIResponseError(ctx, e.cfg, err) + return nil, err + } + helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + helps.AppendAPIResponseChunk(ctx, e.cfg, b) + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("bt executor: close response body error: %v", errClose) + } + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return nil, err + } + out := make(chan cliproxyexecutor.StreamChunk) + go func() { + defer close(out) + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("bt executor: close response body error: %v", errClose) + } + }() + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(nil, 52_428_800) + var param any + for scanner.Scan() { + line := scanner.Bytes() + helps.AppendAPIResponseChunk(ctx, e.cfg, line) + if detail, ok := helps.ParseOpenAIStreamUsage(line); ok { + reporter.Publish(ctx, detail) + } + if len(line) == 0 { + continue + } + if !bytes.HasPrefix(line, []byte("data:")) { + continue + } + chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(line), ¶m) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + } + } + if errScan := scanner.Err(); errScan != nil { + helps.RecordAPIResponseError(ctx, e.cfg, errScan) + reporter.PublishFailure(ctx) + out <- cliproxyexecutor.StreamChunk{Err: 
errScan} + } else { + chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, []byte("data: [DONE]"), ¶m) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + } + } + reporter.EnsurePublished(ctx) + }() + return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil +} + +func (e *BTExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false) + + translated, err := thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return cliproxyexecutor.Response{}, err + } + + enc, err := helps.TokenizerForModel(baseModel) + if err != nil { + return cliproxyexecutor.Response{}, fmt.Errorf("bt executor: tokenizer init failed: %w", err) + } + + count, err := helps.CountOpenAIChatTokens(enc, translated) + if err != nil { + return cliproxyexecutor.Response{}, fmt.Errorf("bt executor: token counting failed: %w", err) + } + + usageJSON := helps.BuildOpenAIUsageJSON(count) + translatedUsage := sdktranslator.TranslateTokenCount(ctx, to, from, count, usageJSON) + return cliproxyexecutor.Response{Payload: translatedUsage}, nil +} + +func (e *BTExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + log.Debugf("bt executor: refresh called") + _ = ctx + return auth, nil +} + +func FetchBTModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *config.Config) []*registry.ModelInfo { + if auth == nil { + return nil + } + + uid, accessKey, serverID := btCredentials(auth) + if uid == "" || accessKey == "" { + log.Debug("bt: missing credentials, skipping dynamic model fetch") + 
return nil + } + + httpClient := helps.NewProxyAwareHTTPClient(ctx, cfg, auth, 15*time.Second) + modelsURL := btauth.CloudURL + "/plugin_api/chat/openai/v1/models" + req, err := http.NewRequestWithContext(ctx, http.MethodGet, modelsURL, nil) + if err != nil { + log.Warnf("bt: failed to create model fetch request: %v", err) + return nil + } + req.Header.Set("uid", uid) + req.Header.Set("access-key", accessKey) + req.Header.Set("serverid", serverID) + req.Header.Set("appid", btauth.AppID) + req.Header.Set("User-Agent", "cli-proxy-bt") + util.ApplyCustomHeadersFromAttrs(req, auth.Attributes) + + resp, err := httpClient.Do(req) + if err != nil { + log.Warnf("bt: fetch models failed: %v", err) + return nil + } + defer func() { + if err := resp.Body.Close(); err != nil { + log.Debugf("bt: close model fetch response error: %v", err) + } + }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + log.Warnf("bt: failed to read models response: %v", err) + return nil + } + + if resp.StatusCode != http.StatusOK { + log.Warnf("bt: fetch models failed: status %d, body: %s", resp.StatusCode, string(body)) + return nil + } + + var result struct { + Data []struct { + ID string `json:"id"` + } `json:"data"` + } + if err := json.Unmarshal(body, &result); err != nil { + log.Warnf("bt: failed to parse models response: %v", err) + return nil + } + + now := time.Now().Unix() + models := make([]*registry.ModelInfo, 0, len(result.Data)) + seen := make(map[string]struct{}) + for _, m := range result.Data { + id := strings.TrimSpace(m.ID) + if id == "" { + continue + } + key := strings.ToLower(id) + if _, exists := seen[key]; exists { + continue + } + seen[key] = struct{}{} + models = append(models, ®istry.ModelInfo{ + ID: id, + Object: "model", + Created: now, + OwnedBy: "bt", + Type: "bt", + DisplayName: id, + UserDefined: false, + Thinking: ®istry.ThinkingSupport{Levels: []string{"low", "medium", "high"}}, + }) + } + return models +} diff --git 
a/internal/runtime/executor/claude_executor.go b/internal/runtime/executor/claude_executor.go index 0311827bae..eb17864d6e 100644 --- a/internal/runtime/executor/claude_executor.go +++ b/internal/runtime/executor/claude_executor.go @@ -11,23 +11,22 @@ import ( "fmt" "io" "net/http" - "net/textproto" "strings" "time" "github.com/andybalholm/brotli" "github.com/google/uuid" "github.com/klauspost/compress/zstd" - claudeauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/claude" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + claudeauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/claude" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -66,14 +65,13 @@ var oauthToolRenameMap = map[string]string{ "notebookedit": "NotebookEdit", } -// oauthToolRenameReverseMap is the 
inverse of oauthToolRenameMap for response decoding. -var oauthToolRenameReverseMap = func() map[string]string { - m := make(map[string]string, len(oauthToolRenameMap)) - for k, v := range oauthToolRenameMap { - m[v] = k - } - return m -}() +// The reverse map is now computed per-request in remapOAuthToolNames so that +// only names the client actually caused us to rewrite are restored on the +// response. A global reverse map — as used previously — corrupted responses +// for clients that sent mixed casing (e.g. Amp CLI sends `Bash` TitleCase +// alongside `glob` lowercase; the request flagged renames via `glob→Glob`, +// then the global reverse map incorrectly rewrote every `Bash` in the +// response to `bash`, causing Amp to reject the tool_use as unknown). // oauthToolsToRemove lists tool names that must be stripped from OAuth requests // even after remapping. Currently empty — all tools are mapped instead of removed. @@ -165,7 +163,8 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r body = applyCloaking(ctx, e.cfg, auth, body, baseModel, apiKey) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body = ensureModelMaxTokens(body, baseModel) // Disable thinking if tool_choice forces tool use (Anthropic API constraint) @@ -192,15 +191,9 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r bodyForTranslation := body bodyForUpstream := body oauthToken := isClaudeOAuthToken(apiKey) - oauthToolNamesRemapped := false - if oauthToken && !auth.ToolPrefixDisabled() { - bodyForUpstream = applyClaudeToolPrefix(body, claudeToolPrefix) - } - // Remap third-party tool names to Claude Code equivalents and 
remove - // tools without official counterparts. This prevents Anthropic from - // fingerprinting the request as third-party via tool naming patterns. + var oauthToolNamesReverseMap map[string]string if oauthToken { - bodyForUpstream, oauthToolNamesRemapped = remapOAuthToolNames(bodyForUpstream) + bodyForUpstream, oauthToolNamesReverseMap = prepareClaudeOAuthToolNamesForUpstream(bodyForUpstream, claudeToolPrefix, auth.ToolPrefixDisabled()) } // Enable cch signing by default for OAuth tokens (not just experimental flag). // Claude Code always computes cch; missing or invalid cch is a detectable fingerprint. @@ -285,6 +278,10 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r } helps.AppendAPIResponseChunk(ctx, e.cfg, data) if stream { + if errValidate := validateClaudeStreamingResponse(data); errValidate != nil { + helps.RecordAPIResponseError(ctx, e.cfg, errValidate) + return resp, errValidate + } lines := bytes.Split(data, []byte("\n")) for _, line := range lines { if detail, ok := helps.ParseClaudeStreamUsage(line); ok { @@ -294,13 +291,7 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r } else { reporter.Publish(ctx, helps.ParseClaudeUsage(data)) } - if isClaudeOAuthToken(apiKey) && !auth.ToolPrefixDisabled() { - data = stripClaudeToolPrefixFromResponse(data, claudeToolPrefix) - } - // Reverse the OAuth tool name remap so the downstream client sees original names. 
- if isClaudeOAuthToken(apiKey) && oauthToolNamesRemapped { - data = reverseRemapOAuthToolNames(data) - } + data = restoreClaudeOAuthToolNamesFromResponse(data, claudeToolPrefix, auth.ToolPrefixDisabled(), oauthToolNamesReverseMap) var param any out := sdktranslator.TranslateNonStream( ctx, @@ -350,7 +341,8 @@ func (e *ClaudeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A body = applyCloaking(ctx, e.cfg, auth, body, baseModel, apiKey) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body = ensureModelMaxTokens(body, baseModel) // Disable thinking if tool_choice forces tool use (Anthropic API constraint) @@ -374,15 +366,9 @@ func (e *ClaudeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A bodyForTranslation := body bodyForUpstream := body oauthToken := isClaudeOAuthToken(apiKey) - oauthToolNamesRemapped := false - if oauthToken && !auth.ToolPrefixDisabled() { - bodyForUpstream = applyClaudeToolPrefix(body, claudeToolPrefix) - } - // Remap third-party tool names to Claude Code equivalents and remove - // tools without official counterparts. This prevents Anthropic from - // fingerprinting the request as third-party via tool naming patterns. + var oauthToolNamesReverseMap map[string]string if oauthToken { - bodyForUpstream, oauthToolNamesRemapped = remapOAuthToolNames(bodyForUpstream) + bodyForUpstream, oauthToolNamesReverseMap = prepareClaudeOAuthToolNamesForUpstream(bodyForUpstream, claudeToolPrefix, auth.ToolPrefixDisabled()) } // Enable cch signing by default for OAuth tokens (not just experimental flag). 
if oauthToken || experimentalCCHSigningEnabled(e.cfg, auth) { @@ -473,22 +459,24 @@ if detail, ok := helps.ParseClaudeStreamUsage(line); ok { reporter.Publish(ctx, detail) } - if isClaudeOAuthToken(apiKey) && !auth.ToolPrefixDisabled() { - line = stripClaudeToolPrefixFromStreamLine(line, claudeToolPrefix) - } - if isClaudeOAuthToken(apiKey) && oauthToolNamesRemapped { - line = reverseRemapOAuthToolNamesFromStreamLine(line) - } + line = restoreClaudeOAuthToolNamesFromStreamLine(line, claudeToolPrefix, auth.ToolPrefixDisabled(), oauthToolNamesReverseMap) // Forward the line as-is to preserve SSE format cloned := make([]byte, len(line)+1) copy(cloned, line) cloned[len(line)] = '\n' - out <- cliproxyexecutor.StreamChunk{Payload: cloned} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: cloned}: + case <-ctx.Done(): + return + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } return } @@ -503,12 +491,7 @@ if detail, ok := helps.ParseClaudeStreamUsage(line); ok { reporter.Publish(ctx, detail) } - if isClaudeOAuthToken(apiKey) && !auth.ToolPrefixDisabled() { - line = stripClaudeToolPrefixFromStreamLine(line, claudeToolPrefix) - } - if isClaudeOAuthToken(apiKey) && oauthToolNamesRemapped { - line = reverseRemapOAuthToolNamesFromStreamLine(line) - } + line = restoreClaudeOAuthToolNamesFromStreamLine(line, claudeToolPrefix, auth.ToolPrefixDisabled(), oauthToolNamesReverseMap) chunks := sdktranslator.TranslateStream( ctx, to, @@ -520,18 +503,83 @@ &param, ) for i 
:= range chunks { - out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]}: + case <-ctx.Done(): + return + } } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } }() return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil } +func validateClaudeStreamingResponse(data []byte) error { + scanner := bufio.NewScanner(bytes.NewReader(data)) + scanner.Buffer(nil, 52_428_800) + + hasData := false + hasMessageStart := false + hasMessageDelta := false + + for scanner.Scan() { + line := bytes.TrimSpace(scanner.Bytes()) + if len(line) == 0 || !bytes.HasPrefix(line, []byte("data:")) { + continue + } + payload := bytes.TrimSpace(line[len("data:"):]) + if len(payload) == 0 || bytes.Equal(payload, []byte("[DONE]")) { + continue + } + hasData = true + if !gjson.ValidBytes(payload) { + return statusErr{code: http.StatusBadGateway, msg: "claude executor: upstream returned malformed stream data"} + } + + root := gjson.ParseBytes(payload) + switch root.Get("type").String() { + case "error": + message := strings.TrimSpace(root.Get("error.message").String()) + if message == "" { + message = strings.TrimSpace(root.Get("error.type").String()) + } + if message == "" { + message = "unknown upstream error" + } + return statusErr{code: http.StatusBadGateway, msg: "claude executor: upstream returned error event: " + message} + case "message_start": + message := root.Get("message") + if strings.TrimSpace(message.Get("id").String()) == "" || strings.TrimSpace(message.Get("model").String()) == "" { + return statusErr{code: http.StatusBadGateway, msg: "claude executor: upstream stream message_start is missing id or model"} + } + 
hasMessageStart = true + case "message_delta": + hasMessageDelta = true + } + } + if errScan := scanner.Err(); errScan != nil { + return errScan + } + if !hasData { + return statusErr{code: http.StatusBadGateway, msg: "claude executor: upstream returned empty stream response"} + } + if !hasMessageStart { + return statusErr{code: http.StatusBadGateway, msg: "claude executor: upstream stream response is missing message_start"} + } + if !hasMessageDelta { + return statusErr{code: http.StatusBadGateway, msg: "claude executor: upstream stream response ended before message completion"} + } + return nil +} + func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { baseModel := thinking.ParseSuffix(req.Model).ModelName @@ -558,12 +606,8 @@ func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut // Extract betas from body and convert to header (for count_tokens too) var extraBetas []string extraBetas, body = extractAndRemoveBetas(body) - if isClaudeOAuthToken(apiKey) && !auth.ToolPrefixDisabled() { - body = applyClaudeToolPrefix(body, claudeToolPrefix) - } - // Remap tool names for OAuth token requests to avoid third-party fingerprinting. 
if isClaudeOAuthToken(apiKey) { - body, _ = remapOAuthToolNames(body) + body, _ = prepareClaudeOAuthToolNamesForUpstream(body, claudeToolPrefix, auth.ToolPrefixDisabled()) } url := fmt.Sprintf("%s/v1/messages/count_tokens?beta=true", baseURL) @@ -647,6 +691,9 @@ func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut func (e *ClaudeExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { log.Debugf("claude executor: refresh called") + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } if auth == nil { return nil, fmt.Errorf("claude executor: auth is nil") } @@ -660,7 +707,7 @@ func (e *ClaudeExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) ( return auth, nil } svc := claudeauth.NewClaudeAuthWithProxyURL(e.cfg, auth.ProxyURL) - td, err := svc.RefreshTokens(ctx, refreshToken) + td, err := svc.RefreshTokensWithRetry(ctx, refreshToken, 3) if err != nil { return nil, err } @@ -911,15 +958,8 @@ func applyClaudeHeaders(r *http.Request, auth *cliproxyauth.Auth, apiKey string, baseBetas += ",interleaved-thinking-2025-05-14" } - hasClaude1MHeader := false - if ginHeaders != nil { - if _, ok := ginHeaders[textproto.CanonicalMIMEHeaderKey("X-CPA-CLAUDE-1M")]; ok { - hasClaude1MHeader = true - } - } - // Merge extra betas from request body and request flags. 
- if len(extraBetas) > 0 || hasClaude1MHeader { + if len(extraBetas) > 0 { existingSet := make(map[string]bool) for _, b := range strings.Split(baseBetas, ",") { betaName := strings.TrimSpace(b) @@ -934,9 +974,6 @@ func applyClaudeHeaders(r *http.Request, auth *cliproxyauth.Auth, apiKey string, existingSet[beta] = true } } - if hasClaude1MHeader && !existingSet["context-1m-2025-08-07"] { - baseBetas += ",context-1m-2025-08-07" - } } r.Header.Set("Anthropic-Beta", baseBetas) @@ -1013,6 +1050,36 @@ func isClaudeOAuthToken(apiKey string) bool { return strings.Contains(apiKey, "sk-ant-oat") } +// prepareClaudeOAuthToolNamesForUpstream applies the Claude OAuth tool-name +// transforms in the same order across request paths. Remap runs before prefixing +// so any future non-empty prefix still composes correctly with the per-request +// reverse map. +func prepareClaudeOAuthToolNamesForUpstream(body []byte, prefix string, prefixDisabled bool) ([]byte, map[string]string) { + body, reverseMap := remapOAuthToolNames(body) + if !prefixDisabled { + body = applyClaudeToolPrefix(body, prefix) + } + return body, reverseMap +} + +// restoreClaudeOAuthToolNamesFromResponse undoes the Claude OAuth tool-name +// transforms for non-stream responses in reverse order. +func restoreClaudeOAuthToolNamesFromResponse(body []byte, prefix string, prefixDisabled bool, reverseMap map[string]string) []byte { + if !prefixDisabled { + body = stripClaudeToolPrefixFromResponse(body, prefix) + } + return reverseRemapOAuthToolNames(body, reverseMap) +} + +// restoreClaudeOAuthToolNamesFromStreamLine undoes the Claude OAuth tool-name +// transforms for SSE lines in reverse order. 
+func restoreClaudeOAuthToolNamesFromStreamLine(line []byte, prefix string, prefixDisabled bool, reverseMap map[string]string) []byte { + if !prefixDisabled { + line = stripClaudeToolPrefixFromStreamLine(line, prefix) + } + return reverseRemapOAuthToolNamesFromStreamLine(line, reverseMap) +} + // remapOAuthToolNames renames third-party tool names to Claude Code equivalents // and removes tools without an official counterpart. This prevents Anthropic from // fingerprinting the request as a third-party client via tool naming patterns. @@ -1020,8 +1087,25 @@ func isClaudeOAuthToken(apiKey string) bool { // It operates on: tools[].name, tool_choice.name, and all tool_use/tool_reference // references in messages. Removed tools' corresponding tool_result blocks are preserved // (they just become orphaned, which is safe for Claude). -func remapOAuthToolNames(body []byte) ([]byte, bool) { - renamed := false +// +// The returned map is keyed on the upstream (TitleCase) name and maps to the +// client-supplied original name. Callers MUST pass this map to the reverse +// functions so only names the client actually caused us to rewrite are restored +// on the response. A global reverse map (the previous implementation) incorrectly +// rewrote names the client originally sent in TitleCase (e.g. Amp CLI's `Bash`) +// when any OTHER tool in the same request triggered a forward rename (e.g. +// Amp's `glob`→`Glob`), because the global reverse map contained `Bash`→`bash` +// regardless of what the client originally sent. +func remapOAuthToolNames(body []byte) ([]byte, map[string]string) { + reverseMap := make(map[string]string, len(oauthToolRenameMap)) + recordRename := func(original, renamed string) { + // Preserve the first-seen original name if the same upstream name is + // produced from multiple call sites; they all map back identically. + if _, exists := reverseMap[renamed]; !exists { + reverseMap[renamed] = original + } + } + // 1. 
Rewrite tools array in a single pass (if present). // IMPORTANT: do not mutate names first and then rebuild from an older gjson // snapshot. gjson results are snapshots of the original bytes; rebuilding from a @@ -1054,7 +1138,7 @@ func remapOAuthToolNames(body []byte) ([]byte, bool) { updatedTool, err := sjson.Set(toolJSON, "name", newName) if err == nil { toolJSON = updatedTool - renamed = true + recordRename(name, newName) } } @@ -1079,7 +1163,7 @@ func remapOAuthToolNames(body []byte) ([]byte, bool) { body, _ = sjson.DeleteBytes(body, "tool_choice") } else if newName, ok := oauthToolRenameMap[tcName]; ok && newName != tcName { body, _ = sjson.SetBytes(body, "tool_choice.name", newName) - renamed = true + recordRename(tcName, newName) } } @@ -1099,14 +1183,14 @@ func remapOAuthToolNames(body []byte) ([]byte, bool) { if newName, ok := oauthToolRenameMap[name]; ok && newName != name { path := fmt.Sprintf("messages.%d.content.%d.name", msgIndex.Int(), contentIndex.Int()) body, _ = sjson.SetBytes(body, path, newName) - renamed = true + recordRename(name, newName) } case "tool_reference": toolName := part.Get("tool_name").String() if newName, ok := oauthToolRenameMap[toolName]; ok && newName != toolName { path := fmt.Sprintf("messages.%d.content.%d.tool_name", msgIndex.Int(), contentIndex.Int()) body, _ = sjson.SetBytes(body, path, newName) - renamed = true + recordRename(toolName, newName) } case "tool_result": // Handle nested tool_reference blocks inside tool_result.content[] @@ -1120,7 +1204,7 @@ func remapOAuthToolNames(body []byte) ([]byte, bool) { if newName, ok := oauthToolRenameMap[nestedToolName]; ok && newName != nestedToolName { nestedPath := fmt.Sprintf("messages.%d.content.%d.content.%d.tool_name", msgIndex.Int(), contentIndex.Int(), nestedIndex.Int()) body, _ = sjson.SetBytes(body, nestedPath, newName) - renamed = true + recordRename(nestedToolName, newName) } } return true @@ -1133,13 +1217,16 @@ func remapOAuthToolNames(body []byte) ([]byte, bool) { 
}) } - return body, renamed + return body, reverseMap } -// reverseRemapOAuthToolNames reverses the tool name mapping for non-stream responses. -// It maps Claude Code TitleCase names back to the original lowercase names so the -// downstream client receives tool names it recognizes. -func reverseRemapOAuthToolNames(body []byte) []byte { +// reverseRemapOAuthToolNames reverses the tool name mapping for non-stream responses +// using the per-request map produced by remapOAuthToolNames. Names the client sent +// that were NOT forward-renamed are passed through unchanged. +func reverseRemapOAuthToolNames(body []byte, reverseMap map[string]string) []byte { + if len(reverseMap) == 0 { + return body + } content := gjson.GetBytes(body, "content") if !content.Exists() || !content.IsArray() { return body @@ -1149,13 +1236,13 @@ func reverseRemapOAuthToolNames(body []byte) []byte { switch partType { case "tool_use": name := part.Get("name").String() - if origName, ok := oauthToolRenameReverseMap[name]; ok { + if origName, ok := reverseMap[name]; ok { path := fmt.Sprintf("content.%d.name", index.Int()) body, _ = sjson.SetBytes(body, path, origName) } case "tool_reference": toolName := part.Get("tool_name").String() - if origName, ok := oauthToolRenameReverseMap[toolName]; ok { + if origName, ok := reverseMap[toolName]; ok { path := fmt.Sprintf("content.%d.tool_name", index.Int()) body, _ = sjson.SetBytes(body, path, origName) } @@ -1165,8 +1252,12 @@ func reverseRemapOAuthToolNames(body []byte) []byte { return body } -// reverseRemapOAuthToolNamesFromStreamLine reverses the tool name mapping for SSE stream lines. -func reverseRemapOAuthToolNamesFromStreamLine(line []byte) []byte { +// reverseRemapOAuthToolNamesFromStreamLine reverses the tool name mapping for SSE +// stream lines, using the per-request reverseMap produced by remapOAuthToolNames. 
+func reverseRemapOAuthToolNamesFromStreamLine(line []byte, reverseMap map[string]string) []byte { + if len(reverseMap) == 0 { + return line + } payload := helps.JSONPayload(line) if len(payload) == 0 || !gjson.ValidBytes(payload) { return line @@ -1184,7 +1275,7 @@ func reverseRemapOAuthToolNamesFromStreamLine(line []byte) []byte { switch blockType { case "tool_use": name := contentBlock.Get("name").String() - if origName, ok := oauthToolRenameReverseMap[name]; ok { + if origName, ok := reverseMap[name]; ok { updated, err = sjson.SetBytes(payload, "content_block.name", origName) if err != nil { return line @@ -1194,7 +1285,7 @@ func reverseRemapOAuthToolNamesFromStreamLine(line []byte) []byte { } case "tool_reference": toolName := contentBlock.Get("tool_name").String() - if origName, ok := oauthToolRenameReverseMap[toolName]; ok { + if origName, ok := reverseMap[toolName]; ok { updated, err = sjson.SetBytes(payload, "content_block.tool_name", origName) if err != nil { return line diff --git a/internal/runtime/executor/claude_executor_test.go b/internal/runtime/executor/claude_executor_test.go index f456064dc6..f5bca55ab7 100644 --- a/internal/runtime/executor/claude_executor_test.go +++ b/internal/runtime/executor/claude_executor_test.go @@ -17,12 +17,12 @@ import ( "github.com/gin-gonic/gin" "github.com/klauspost/compress/zstd" xxHash64 "github.com/pierrec/xxHash/xxHash64" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -936,6 +936,113 @@ func TestClaudeExecutor_GeneratesNewUserIDByDefault(t *testing.T) { } } +func TestClaudeExecutor_ExecuteOpenAINonStreamRejectsEmptyClaudeStream(t *testing.T) { + _, err := executeOpenAIChatCompletionThroughClaude(t, "") + if err == nil { + t.Fatal("Execute error = nil, want empty stream error") + } + assertStatusErr(t, err, http.StatusBadGateway) + if !strings.Contains(err.Error(), "empty stream response") { + t.Fatalf("Execute error = %q, want empty stream response", err.Error()) + } +} + +func TestClaudeExecutor_ExecuteOpenAINonStreamRejectsClaudeErrorEvent(t *testing.T) { + body := `data: {"type":"error","error":{"type":"overloaded_error","message":"upstream overloaded"}}` + "\n" + _, err := executeOpenAIChatCompletionThroughClaude(t, body) + if err == nil { + t.Fatal("Execute error = nil, want upstream error event") + } + assertStatusErr(t, err, http.StatusBadGateway) + if !strings.Contains(err.Error(), "upstream overloaded") { + t.Fatalf("Execute error = %q, want upstream overloaded", err.Error()) + } +} + +func TestClaudeExecutor_ExecuteOpenAINonStreamRejectsIncompleteClaudeStream(t *testing.T) { + body := strings.Join([]string{ + `data: {"type":"message_start","message":{"id":"msg_123","model":"claude-3-5-sonnet-20241022"}}`, + `data: {"type":"message_stop"}`, + ``, + }, "\n") + + _, err := executeOpenAIChatCompletionThroughClaude(t, body) + if err == nil { + t.Fatal("Execute error = nil, want incomplete stream error") + } + assertStatusErr(t, err, http.StatusBadGateway) + if !strings.Contains(err.Error(), "ended before message completion") { + t.Fatalf("Execute error = %q, want 
incomplete stream error", err.Error()) + } +} + +func TestClaudeExecutor_ExecuteOpenAINonStreamConvertsValidClaudeStream(t *testing.T) { + body := strings.Join([]string{ + `event: message_start`, + `data: {"type":"message_start","message":{"id":"msg_123","model":"claude-3-5-sonnet-20241022"}}`, + `event: content_block_delta`, + `data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"ok"}}`, + `event: message_delta`, + `data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},"usage":{"input_tokens":2,"output_tokens":1}}`, + `event: message_stop`, + `data: {"type":"message_stop"}`, + ``, + }, "\n") + + resp, err := executeOpenAIChatCompletionThroughClaude(t, body) + if err != nil { + t.Fatalf("Execute error: %v", err) + } + if got := gjson.GetBytes(resp.Payload, "id").String(); got != "msg_123" { + t.Fatalf("response id = %q, want msg_123; payload=%s", got, string(resp.Payload)) + } + if got := gjson.GetBytes(resp.Payload, "model").String(); got != "claude-3-5-sonnet-20241022" { + t.Fatalf("response model = %q, want claude-3-5-sonnet-20241022", got) + } + if got := gjson.GetBytes(resp.Payload, "choices.0.message.content").String(); got != "ok" { + t.Fatalf("response content = %q, want ok", got) + } + if got := gjson.GetBytes(resp.Payload, "usage.total_tokens").Int(); got != 3 { + t.Fatalf("usage.total_tokens = %d, want 3", got) + } +} + +func executeOpenAIChatCompletionThroughClaude(t *testing.T, upstreamBody string) (cliproxyexecutor.Response, error) { + t.Helper() + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte(upstreamBody)) + })) + defer server.Close() + + executor := NewClaudeExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{Attributes: map[string]string{ + "api_key": "key-123", + "base_url": server.URL, + }} + payload := 
[]byte(`{"model":"claude-3-5-sonnet-20241022","messages":[{"role":"user","content":"hi"}]}`) + + return executor.Execute(context.Background(), auth, cliproxyexecutor.Request{ + Model: "claude-3-5-sonnet-20241022", + Payload: payload, + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) +} + +func assertStatusErr(t *testing.T, err error, want int) { + t.Helper() + + status, ok := err.(interface{ StatusCode() int }) + if !ok { + t.Fatalf("error %T does not expose StatusCode", err) + } + if got := status.StatusCode(); got != want { + t.Fatalf("StatusCode() = %d, want %d", got, want) + } +} + func TestStripClaudeToolPrefixFromResponse_NestedToolReference(t *testing.T) { input := []byte(`{"content":[{"type":"tool_result","tool_use_id":"toolu_123","content":[{"type":"tool_reference","tool_name":"proxy_mcp__nia__manage_resource"}]}]}`) out := stripClaudeToolPrefixFromResponse(input, "proxy_") @@ -1714,7 +1821,27 @@ func TestClaudeExecutor_ExecuteStream_AcceptEncodingOverrideCannotBypassIdentity } } -// Test case 1: String system prompt is preserved and converted to a content block +func expectedClaudeCodeStaticPrompt() string { + return strings.Join([]string{ + helps.ClaudeCodeIntro, + helps.ClaudeCodeSystem, + helps.ClaudeCodeDoingTasks, + helps.ClaudeCodeToneAndStyle, + helps.ClaudeCodeOutputEfficiency, + }, "\n\n") +} + +func expectedForwardedSystemReminder(text string) string { + return fmt.Sprintf(` +As you answer the user's questions, you can use the following context from the system: +%s + +IMPORTANT: this context may or may not be relevant to your tasks. You should not respond to this context unless it is highly relevant to your task. 
+ +`, text) +} + +// Test case 1: String system prompt is preserved by forwarding it to the first user message func TestCheckSystemInstructionsWithMode_StringSystemPreserved(t *testing.T) { payload := []byte(`{"system":"You are a helpful assistant.","messages":[{"role":"user","content":"hi"}]}`) @@ -1733,42 +1860,52 @@ func TestCheckSystemInstructionsWithMode_StringSystemPreserved(t *testing.T) { if !strings.HasPrefix(blocks[0].Get("text").String(), "x-anthropic-billing-header:") { t.Fatalf("blocks[0] should be billing header, got %q", blocks[0].Get("text").String()) } - if blocks[1].Get("text").String() != "You are a Claude agent, built on Anthropic's Claude Agent SDK." { + if blocks[1].Get("text").String() != "You are Claude Code, Anthropic's official CLI for Claude." { t.Fatalf("blocks[1] should be agent block, got %q", blocks[1].Get("text").String()) } - if blocks[2].Get("text").String() != "You are a helpful assistant." { - t.Fatalf("blocks[2] should be user system prompt, got %q", blocks[2].Get("text").String()) + if blocks[2].Get("text").String() != expectedClaudeCodeStaticPrompt() { + t.Fatalf("blocks[2] should be static Claude Code prompt, got %q", blocks[2].Get("text").String()) } - if blocks[2].Get("cache_control.type").String() != "ephemeral" { - t.Fatalf("blocks[2] should have cache_control.type=ephemeral") + if blocks[2].Get("cache_control").Exists() { + t.Fatalf("blocks[2] should not have cache_control, got %s", blocks[2].Get("cache_control").Raw) + } + + if got := gjson.GetBytes(out, "messages.0.content").String(); got != expectedForwardedSystemReminder("You are a helpful assistant.")+"hi" { + t.Fatalf("messages[0].content should include forwarded system prompt, got %q", got) } } -// Test case 2: Strict mode drops the string system prompt +// Test case 2: Strict mode keeps only the injected Claude Code system blocks func TestCheckSystemInstructionsWithMode_StringSystemStrict(t *testing.T) { payload := []byte(`{"system":"You are a helpful 
assistant.","messages":[{"role":"user","content":"hi"}]}`) out := checkSystemInstructionsWithMode(payload, true) blocks := gjson.GetBytes(out, "system").Array() - if len(blocks) != 2 { - t.Fatalf("strict mode should produce 2 blocks, got %d", len(blocks)) + if len(blocks) != 3 { + t.Fatalf("strict mode should produce 3 injected blocks, got %d", len(blocks)) + } + if got := gjson.GetBytes(out, "messages.0.content").String(); got != "hi" { + t.Fatalf("strict mode should not forward system prompt into messages, got %q", got) } } -// Test case 3: Empty string system prompt does not produce a spurious block +// Test case 3: Empty string system prompt does not alter the first user message func TestCheckSystemInstructionsWithMode_EmptyStringSystemIgnored(t *testing.T) { payload := []byte(`{"system":"","messages":[{"role":"user","content":"hi"}]}`) out := checkSystemInstructionsWithMode(payload, false) blocks := gjson.GetBytes(out, "system").Array() - if len(blocks) != 2 { - t.Fatalf("empty string system should produce 2 blocks, got %d", len(blocks)) + if len(blocks) != 3 { + t.Fatalf("empty string system should still produce 3 injected blocks, got %d", len(blocks)) + } + if got := gjson.GetBytes(out, "messages.0.content").String(); got != "hi" { + t.Fatalf("empty string system should not alter messages, got %q", got) } } -// Test case 4: Array system prompt is unaffected by the string handling +// Test case 4: Array system prompt is forwarded to the first user message func TestCheckSystemInstructionsWithMode_ArraySystemStillWorks(t *testing.T) { payload := []byte(`{"system":[{"type":"text","text":"Be concise."}],"messages":[{"role":"user","content":"hi"}]}`) @@ -1778,12 +1915,15 @@ func TestCheckSystemInstructionsWithMode_ArraySystemStillWorks(t *testing.T) { if len(blocks) != 3 { t.Fatalf("expected 3 system blocks, got %d", len(blocks)) } - if blocks[2].Get("text").String() != "Be concise." 
{ - t.Fatalf("blocks[2] should be user system prompt, got %q", blocks[2].Get("text").String()) + if blocks[2].Get("text").String() != expectedClaudeCodeStaticPrompt() { + t.Fatalf("blocks[2] should be static Claude Code prompt, got %q", blocks[2].Get("text").String()) + } + if got := gjson.GetBytes(out, "messages.0.content").String(); got != expectedForwardedSystemReminder("Be concise.")+"hi" { + t.Fatalf("messages[0].content should include forwarded array system prompt, got %q", got) } } -// Test case 5: Special characters in string system prompt survive conversion +// Test case 5: Special characters in string system prompt survive forwarding func TestCheckSystemInstructionsWithMode_StringWithSpecialChars(t *testing.T) { payload := []byte(`{"system":"Use tags & \"quotes\" in output.","messages":[{"role":"user","content":"hi"}]}`) @@ -1793,8 +1933,8 @@ func TestCheckSystemInstructionsWithMode_StringWithSpecialChars(t *testing.T) { if len(blocks) != 3 { t.Fatalf("expected 3 system blocks, got %d", len(blocks)) } - if blocks[2].Get("text").String() != `Use tags & "quotes" in output.` { - t.Fatalf("blocks[2] text mangled, got %q", blocks[2].Get("text").String()) + if got := gjson.GetBytes(out, "messages.0.content").String(); got != expectedForwardedSystemReminder(`Use tags & "quotes" in output.`)+"hi" { + t.Fatalf("forwarded system prompt text mangled, got %q", got) } } @@ -1902,8 +2042,11 @@ func TestApplyCloaking_PreservesConfiguredStrictModeAndSensitiveWordsWhenModeOmi out := applyCloaking(context.Background(), cfg, auth, payload, "claude-3-5-sonnet-20241022", "key-123") blocks := gjson.GetBytes(out, "system").Array() - if len(blocks) != 2 { - t.Fatalf("expected strict mode to keep only injected system blocks, got %d", len(blocks)) + if len(blocks) != 3 { + t.Fatalf("expected strict mode to keep the 3 injected Claude Code system blocks, got %d", len(blocks)) + } + if got := gjson.GetBytes(out, "messages.0.content.#").Int(); got != 1 { + t.Fatalf("strict mode should 
not prepend a forwarded system reminder block, got %d content blocks", got) } if got := gjson.GetBytes(out, "messages.0.content.0.text").String(); !strings.Contains(got, "\u200B") { t.Fatalf("expected configured sensitive word obfuscation to apply, got %q", got) @@ -1953,19 +2096,16 @@ func TestNormalizeClaudeTemperatureForThinking_AfterForcedToolChoiceKeepsOrigina func TestRemapOAuthToolNames_TitleCase_NoReverseNeeded(t *testing.T) { body := []byte(`{"tools":[{"name":"Bash","description":"Run shell commands","input_schema":{"type":"object","properties":{"cmd":{"type":"string"}}}}],"messages":[{"role":"user","content":[{"type":"text","text":"hi"}]}]}`) - out, renamed := remapOAuthToolNames(body) - if renamed { - t.Fatalf("renamed = true, want false") + out, reverseMap := remapOAuthToolNames(body) + if len(reverseMap) != 0 { + t.Fatalf("reverseMap = %v, want empty", reverseMap) } if got := gjson.GetBytes(out, "tools.0.name").String(); got != "Bash" { t.Fatalf("tools.0.name = %q, want %q", got, "Bash") } resp := []byte(`{"content":[{"type":"tool_use","id":"toolu_01","name":"Bash","input":{"cmd":"ls"}}]}`) - reversed := resp - if renamed { - reversed = reverseRemapOAuthToolNames(resp) - } + reversed := reverseRemapOAuthToolNames(resp, reverseMap) if got := gjson.GetBytes(reversed, "content.0.name").String(); got != "Bash" { t.Fatalf("content.0.name = %q, want %q", got, "Bash") } @@ -1974,20 +2114,150 @@ func TestRemapOAuthToolNames_TitleCase_NoReverseNeeded(t *testing.T) { func TestRemapOAuthToolNames_Lowercase_ReverseApplied(t *testing.T) { body := []byte(`{"tools":[{"name":"bash","description":"Run shell commands","input_schema":{"type":"object","properties":{"cmd":{"type":"string"}}}}],"messages":[{"role":"user","content":[{"type":"text","text":"hi"}]}]}`) - out, renamed := remapOAuthToolNames(body) - if !renamed { - t.Fatalf("renamed = false, want true") + out, reverseMap := remapOAuthToolNames(body) + if reverseMap["Bash"] != "bash" { + t.Fatalf("reverseMap = %v, 
want entry Bash->bash", reverseMap) } if got := gjson.GetBytes(out, "tools.0.name").String(); got != "Bash" { t.Fatalf("tools.0.name = %q, want %q", got, "Bash") } resp := []byte(`{"content":[{"type":"tool_use","id":"toolu_01","name":"Bash","input":{"cmd":"ls"}}]}`) - reversed := resp - if renamed { - reversed = reverseRemapOAuthToolNames(resp) - } + reversed := reverseRemapOAuthToolNames(resp, reverseMap) if got := gjson.GetBytes(reversed, "content.0.name").String(); got != "bash" { t.Fatalf("content.0.name = %q, want %q", got, "bash") } } + +// TestRemapOAuthToolNames_MixedCase_OnlyRenamedToolsReversed is the regression +// test for a case where a single request contains both a TitleCase tool (which +// must pass through unchanged) and a lowercase tool that we forward-rename. +// Before the fix, triggering ANY forward rename caused the reverse pass to +// lowercase every TitleCase tool in the response using a global reverse map, +// corrupting tool names the client originally sent in TitleCase (notably Amp +// CLI's `Bash`, which its registry lookup cannot find as `bash`). +func TestRemapOAuthToolNames_MixedCase_OnlyRenamedToolsReversed(t *testing.T) { + body := []byte(`{"tools":[` + + `{"name":"Bash","input_schema":{"type":"object","properties":{"cmd":{"type":"string"}}}},` + + `{"name":"glob","input_schema":{"type":"object","properties":{"filePattern":{"type":"string"}}}}` + + `]}`) + + out, reverseMap := remapOAuthToolNames(body) + + // Forward: TitleCase `Bash` is not a forward-map key, must pass through. + if got := gjson.GetBytes(out, "tools.0.name").String(); got != "Bash" { + t.Fatalf("tools.0.name = %q, want %q (TitleCase tool must not be renamed)", got, "Bash") + } + // Forward: `glob` is a forward-map key, upstream sees `Glob`. + if got := gjson.GetBytes(out, "tools.1.name").String(); got != "Glob" { + t.Fatalf("tools.1.name = %q, want %q", got, "Glob") + } + + // Reverse map records ONLY the rename that happened. 
+ if len(reverseMap) != 1 || reverseMap["Glob"] != "glob" { + t.Fatalf("reverseMap = %v, want {Glob:glob}", reverseMap) + } + + // Upstream responds with a `Bash` tool_use. Since we never renamed `Bash`, + // reverseRemap MUST leave it alone. + bashResp := []byte(`{"content":[{"type":"tool_use","id":"toolu_01","name":"Bash","input":{"cmd":"ls"}}]}`) + reversed := reverseRemapOAuthToolNames(bashResp, reverseMap) + if got := gjson.GetBytes(reversed, "content.0.name").String(); got != "Bash" { + t.Fatalf("content.0.name = %q, want %q (Bash must be preserved; was never forward-renamed)", got, "Bash") + } + + // Upstream responds with a `Glob` tool_use. Since we renamed `glob`→`Glob`, + // reverseRemap MUST restore the original `glob`. + globResp := []byte(`{"content":[{"type":"tool_use","id":"toolu_02","name":"Glob","input":{"filePattern":"**/*.go"}}]}`) + reversed = reverseRemapOAuthToolNames(globResp, reverseMap) + if got := gjson.GetBytes(reversed, "content.0.name").String(); got != "glob" { + t.Fatalf("content.0.name = %q, want %q (Glob must be restored to client's original `glob`)", got, "glob") + } +} + +// TestReverseRemapOAuthToolNamesFromStreamLine_HonorsPerRequestMap guards the +// SSE streaming code path against the same mixed-case bug. +func TestReverseRemapOAuthToolNamesFromStreamLine_HonorsPerRequestMap(t *testing.T) { + reverseMap := map[string]string{"Glob": "glob"} + + // Bash block was never renamed, must pass through as-is. + bashLine := []byte(`data: {"type":"content_block_start","index":0,"content_block":{"type":"tool_use","id":"toolu_01","name":"Bash","input":{}}}`) + out := reverseRemapOAuthToolNamesFromStreamLine(bashLine, reverseMap) + if !bytes.Contains(out, []byte(`"name":"Bash"`)) { + t.Fatalf("Bash should be preserved, got: %s", string(out)) + } + if bytes.Contains(out, []byte(`"name":"bash"`)) { + t.Fatalf("Bash must not be lowercased, got: %s", string(out)) + } + + // Glob block IS in the reverseMap, must be restored to `glob`. 
+ globLine := []byte(`data: {"type":"content_block_start","index":0,"content_block":{"type":"tool_use","id":"toolu_02","name":"Glob","input":{}}}`) + out = reverseRemapOAuthToolNamesFromStreamLine(globLine, reverseMap) + if !bytes.Contains(out, []byte(`"name":"glob"`)) { + t.Fatalf("Glob should be restored to glob, got: %s", string(out)) + } +} + +func TestPrepareClaudeOAuthToolNamesForUpstream_MixedCaseWithPrefix(t *testing.T) { + body := []byte(`{"tools":[` + + `{"name":"Bash","input_schema":{"type":"object","properties":{"cmd":{"type":"string"}}}},` + + `{"name":"glob","input_schema":{"type":"object","properties":{"filePattern":{"type":"string"}}}}` + + `],"messages":[{"role":"assistant","content":[` + + `{"type":"tool_use","id":"toolu_01","name":"Bash","input":{}},` + + `{"type":"tool_use","id":"toolu_02","name":"glob","input":{}}` + + `]}]}`) + + out, reverseMap := prepareClaudeOAuthToolNamesForUpstream(body, "proxy_", false) + + if got := gjson.GetBytes(out, "tools.0.name").String(); got != "proxy_Bash" { + t.Fatalf("tools.0.name = %q, want %q", got, "proxy_Bash") + } + if got := gjson.GetBytes(out, "tools.1.name").String(); got != "proxy_Glob" { + t.Fatalf("tools.1.name = %q, want %q", got, "proxy_Glob") + } + if got := gjson.GetBytes(out, "messages.0.content.0.name").String(); got != "proxy_Bash" { + t.Fatalf("messages.0.content.0.name = %q, want %q", got, "proxy_Bash") + } + if got := gjson.GetBytes(out, "messages.0.content.1.name").String(); got != "proxy_Glob" { + t.Fatalf("messages.0.content.1.name = %q, want %q", got, "proxy_Glob") + } + if len(reverseMap) != 1 || reverseMap["Glob"] != "glob" { + t.Fatalf("reverseMap = %v, want {Glob:glob}", reverseMap) + } +} + +func TestRestoreClaudeOAuthToolNamesFromResponse_MixedCaseWithPrefix(t *testing.T) { + reverseMap := map[string]string{"Glob": "glob"} + resp := []byte(`{"content":[` + + `{"type":"tool_use","id":"toolu_01","name":"proxy_Bash","input":{}},` + + 
`{"type":"tool_use","id":"toolu_02","name":"proxy_Glob","input":{}}` + + `]}`) + + out := restoreClaudeOAuthToolNamesFromResponse(resp, "proxy_", false, reverseMap) + + if got := gjson.GetBytes(out, "content.0.name").String(); got != "Bash" { + t.Fatalf("content.0.name = %q, want %q", got, "Bash") + } + if got := gjson.GetBytes(out, "content.1.name").String(); got != "glob" { + t.Fatalf("content.1.name = %q, want %q", got, "glob") + } +} + +func TestRestoreClaudeOAuthToolNamesFromStreamLine_MixedCaseWithPrefix(t *testing.T) { + reverseMap := map[string]string{"Glob": "glob"} + + bashLine := []byte(`data: {"type":"content_block_start","index":0,"content_block":{"type":"tool_use","id":"toolu_01","name":"proxy_Bash","input":{}}}`) + out := restoreClaudeOAuthToolNamesFromStreamLine(bashLine, "proxy_", false, reverseMap) + if !bytes.Contains(out, []byte(`"name":"Bash"`)) { + t.Fatalf("Bash should be preserved, got: %s", string(out)) + } + if bytes.Contains(out, []byte(`"name":"bash"`)) { + t.Fatalf("Bash must not be lowercased, got: %s", string(out)) + } + + globLine := []byte(`data: {"type":"content_block_start","index":0,"content_block":{"type":"tool_use","id":"toolu_02","name":"proxy_Glob","input":{}}}`) + out = restoreClaudeOAuthToolNamesFromStreamLine(globLine, "proxy_", false, reverseMap) + if !bytes.Contains(out, []byte(`"name":"glob"`)) { + t.Fatalf("Glob should be restored to glob, got: %s", string(out)) + } +} diff --git a/internal/runtime/executor/claude_signing.go b/internal/runtime/executor/claude_signing.go index 697a688265..060e86e846 100644 --- a/internal/runtime/executor/claude_signing.go +++ b/internal/runtime/executor/claude_signing.go @@ -6,8 +6,8 @@ import ( "strings" xxHash64 "github.com/pierrec/xxHash/xxHash64" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyauth 
"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/runtime/executor/codearts_executor.go b/internal/runtime/executor/codearts_executor.go new file mode 100644 index 0000000000..34dd40c019 --- /dev/null +++ b/internal/runtime/executor/codearts_executor.go @@ -0,0 +1,963 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "crypto/rand" + "encoding/hex" + "encoding/json" + "fmt" + "io" + "net/http" + "sort" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codearts" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" +) + +const ( + codeartsChatURL = "https://snap-access.cn-north-4.myhuaweicloud.com/v1/chat/chat" + codeArtsUserAgent = "DevKit-VSCode:huaweicloud.codearts-snap|CodeArts Agent:D1" +) + +// CodeArtsExecutor executes chat completions against the HuaweiCloud CodeArts API. +type CodeArtsExecutor struct { + cfg *config.Config +} + +// NewCodeArtsExecutor constructs a new executor instance. +func NewCodeArtsExecutor(cfg *config.Config) *CodeArtsExecutor { + return &CodeArtsExecutor{cfg: cfg} +} + +// Identifier returns the executor's provider key. +func (e *CodeArtsExecutor) Identifier() string { return "codearts" } + +// PrepareRequest sets CodeArts-specific headers and signs the request. 
+func (e *CodeArtsExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if auth == nil || auth.Metadata == nil { + return fmt.Errorf("codearts: missing auth metadata") + } + + ak, _ := auth.Metadata["ak"].(string) + sk, _ := auth.Metadata["sk"].(string) + securityToken, _ := auth.Metadata["security_token"].(string) + + if ak == "" || sk == "" { + return fmt.Errorf("codearts: missing AK/SK credentials") + } + + var bodyBytes []byte + if req.Body != nil { + bodyBytes, _ = io.ReadAll(req.Body) + req.Body.Close() + req.Body = io.NopCloser(bytes.NewReader(bodyBytes)) + req.ContentLength = int64(len(bodyBytes)) + } + + traceID := generateTraceID() + + req.Header.Set("User-Agent", codeArtsUserAgent) + req.Header.Set("Accept", "text/event-stream") + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Agent-Type", "ChatAgent") + req.Header.Set("Client-Version", "Vscode_26.3.5") + req.Header.Set("Heartbeat-Enable", "true") + req.Header.Set("Ide-Name", "CodeArts Agent") + req.Header.Set("Ide-Version", "1.96.4") + req.Header.Set("Is-Confidential", "false") + req.Header.Set("Plugin-Name", "snap_vscode") + req.Header.Set("Plugin-Version", "26.3.5") + req.Header.Set("X-Language", "zh-cn") + req.Header.Set("X-Snap-Traceid", traceID) + + codearts.SignRequest(req, bodyBytes, ak, sk, securityToken) + + log.Debugf("codearts: signing request url=%s, body_len=%d, ak=%s, headers=%v", + req.URL.String(), len(bodyBytes), ak[:min(4, len(ak))]+"...", req.Header) + return nil +} + +// HttpRequest executes a signed HTTP request to CodeArts. 
+func (e *CodeArtsExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + client := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 5*time.Minute) + + if err := e.PrepareRequest(req, auth); err != nil { + return nil, err + } + + resp, err := client.Do(req) + if err != nil { + return nil, fmt.Errorf("codearts: request failed: %w", err) + } + return resp, nil +} + +// Execute handles non-streaming chat completions. +func (e *CodeArtsExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + parsed := thinking.ParseSuffix(req.Model) + baseModel := parsed.ModelName + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + agentID := codearts.DefaultAgentID + if auth.Attributes != nil { + if aid := strings.TrimSpace(auth.Attributes["agent_id"]); aid != "" { + agentID = aid + } + } + + userID := extractUserID(auth) + + payload := buildCodeArtsPayload(req.Payload, baseModel, agentID, userID, opts) + + httpReq, err := http.NewRequestWithContext(ctx, "POST", codeartsChatURL, bytes.NewReader(payload)) + if err != nil { + return resp, err + } + + httpResp, err := e.HttpRequest(ctx, auth, httpReq) + if err != nil { + return resp, err + } + defer httpResp.Body.Close() + + log.Debugf("codearts: Execute response status=%d, content_type=%s", httpResp.StatusCode, httpResp.Header.Get("Content-Type")) + + if httpResp.StatusCode != 200 { + body, _ := io.ReadAll(httpResp.Body) + return resp, statusErr{ + code: httpResp.StatusCode, + msg: fmt.Sprintf("codearts: API returned %d: %s", httpResp.StatusCode, string(body)), + } + } + + var contentBuilder strings.Builder + var reasoningBuilder strings.Builder + var promptTokens, completionTokens int64 + var respModel string + toolCallsAccumulated := make(map[int]map[string]interface{}) + + scanner := 
bufio.NewScanner(httpResp.Body) + scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) + for scanner.Scan() { + line := scanner.Text() + if strings.HasPrefix(line, ":heartbeat") || line == "" { + continue + } + if !strings.HasPrefix(line, "data: ") && !strings.HasPrefix(line, "data:") { + continue + } + var data string + if strings.HasPrefix(line, "data: ") { + data = strings.TrimPrefix(line, "data: ") + } else { + data = strings.TrimPrefix(line, "data:") + } + if data == "[DONE]" || (gjson.Get(data, "text").String() == "[DONE]") { + break + } + + errorCode := gjson.Get(data, "error_code") + if errorCode.Exists() && errorCode.Int() != 0 { + errMsg := gjson.Get(data, "error_msg").String() + return cliproxyexecutor.Response{}, fmt.Errorf("codearts: error %d: %s", errorCode.Int(), errMsg) + } + + delta := gjson.Get(data, "delta") + if delta.Exists() { + if c := delta.Get("content").String(); c != "" { + contentBuilder.WriteString(c) + } + if r := delta.Get("reasoning_content").String(); r != "" { + reasoningBuilder.WriteString(r) + } + if tcList := delta.Get("tool_calls"); tcList.Exists() && len(tcList.Array()) > 0 { + for _, tc := range tcList.Array() { + idx := int(tc.Get("index").Int()) + if _, exists := toolCallsAccumulated[idx]; !exists { + toolCallsAccumulated[idx] = map[string]interface{}{ + "id": tc.Get("id").String(), + "type": tc.Get("type").String(), + "function": map[string]interface{}{ + "name": tc.Get("function.name").String(), + "arguments": tc.Get("function.arguments").String(), + }, + } + } else { + existing := toolCallsAccumulated[idx] + if id := tc.Get("id").String(); id != "" { + existing["id"] = id + } + fnMap, _ := existing["function"].(map[string]interface{}) + if name := tc.Get("function.name").String(); name != "" { + fnMap["name"] = name + } + if args := tc.Get("function.arguments").String(); args != "" { + fnMap["arguments"] = fnMap["arguments"].(string) + args + } + } + } + } + } + if mn := gjson.Get(data, "model_name").String(); mn != "" 
{
+			respModel = mn
+		}
+		if pt := gjson.Get(data, "prompt_tokens").Int(); pt > 0 {
+			promptTokens = pt
+		}
+		if ct := gjson.Get(data, "completion_tokens").Int(); ct > 0 {
+			completionTokens = ct
+		}
+	}
+
+	var toolCallsList []map[string]interface{}
+	if len(toolCallsAccumulated) > 0 {
+		indices := make([]int, 0, len(toolCallsAccumulated))
+		for k := range toolCallsAccumulated {
+			indices = append(indices, k)
+		}
+		sort.Ints(indices)
+		for _, k := range indices {
+			toolCallsList = append(toolCallsList, toolCallsAccumulated[k])
+		}
+	}
+
+	fullContent := contentBuilder.String()
+	if len(toolCallsList) == 0 && fullContent != "" && strings.Contains(fullContent, "<tool_call_id>") {
+		xmlToolCalls := parseXMLToolCalls(fullContent)
+		if len(xmlToolCalls) > 0 {
+			toolCallsList = xmlToolCalls
+			stripped := stripXMLToolCalls(fullContent)
+			if stripped == "" {
+				fullContent = ""
+			} else {
+				fullContent = stripped
+			}
+		}
+	}
+
+	if respModel == "" {
+		respModel = req.Model
+	}
+
+	from := sdktranslator.FromString("openai")
+	to := sdktranslator.FromString("codearts")
+
+	openAIResp := buildOpenAINonStreamResponse(fullContent, reasoningBuilder.String(), respModel, promptTokens, completionTokens, toolCallsList)
+	var param any
+	translated := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, req.Payload, openAIResp, &param)
+
+	reporter.Publish(ctx, usage.Detail{
+		InputTokens:  promptTokens,
+		OutputTokens: completionTokens,
+	})
+
+	helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{
+		URL:      codeartsChatURL,
+		Method:   "POST",
+		Provider: "codearts",
+		AuthID:   auth.ID,
+	})
+
+	return cliproxyexecutor.Response{Payload: translated}, nil
+}
+
+// ExecuteStream handles streaming chat completions.
+func (e *CodeArtsExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + parsed := thinking.ParseSuffix(req.Model) + baseModel := parsed.ModelName + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + agentID := codearts.DefaultAgentID + if auth.Attributes != nil { + if aid := strings.TrimSpace(auth.Attributes["agent_id"]); aid != "" { + agentID = aid + } + } + + userID := extractUserID(auth) + + payload := buildCodeArtsPayload(req.Payload, baseModel, agentID, userID, opts) + + httpReq, err := http.NewRequestWithContext(ctx, "POST", codeartsChatURL, bytes.NewReader(payload)) + if err != nil { + return nil, err + } + + httpResp, err := e.HttpRequest(ctx, auth, httpReq) + if err != nil { + return nil, err + } + + if httpResp.StatusCode != 200 { + body, _ := io.ReadAll(httpResp.Body) + httpResp.Body.Close() + log.Debugf("codearts: non-200 response status=%d, body=%s", httpResp.StatusCode, string(body)) + return nil, statusErr{ + code: httpResp.StatusCode, + msg: fmt.Sprintf("codearts: API returned %d: %s", httpResp.StatusCode, string(body)), + } + } + + log.Debugf("codearts: stream response status=%d, content_type=%s, content_length=%d", + httpResp.StatusCode, httpResp.Header.Get("Content-Type"), httpResp.ContentLength) + + chunks := make(chan cliproxyexecutor.StreamChunk, 64) + + go func() { + defer close(chunks) + defer httpResp.Body.Close() + + from := sdktranslator.FromString("openai") + to := sdktranslator.FromString("codearts") + var streamParam any + var totalPromptTokens, totalCompletionTokens int64 + var lineCount int + var dataLineCount int + var firstNonEmptyLine string + var accumulatedContent strings.Builder + var hasToolCalls bool + + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) + for scanner.Scan() { + line := 
scanner.Text()
+			lineCount++
+			if strings.HasPrefix(line, ":heartbeat") || line == "" {
+				continue
+			}
+			if firstNonEmptyLine == "" {
+				firstNonEmptyLine = line
+			}
+			var data string
+			if strings.HasPrefix(line, "data: ") {
+				data = strings.TrimPrefix(line, "data: ")
+			} else if strings.HasPrefix(line, "data:") {
+				data = strings.TrimPrefix(line, "data:")
+			} else {
+				log.Debugf("codearts: unexpected SSE line %d: %q", lineCount, line)
+				continue
+			}
+			if data == "[DONE]" || (gjson.Get(data, "text").String() == "[DONE]") {
+				break
+			}
+			dataLineCount++
+
+			result := convertCodeArtsSSEToOpenAI(data, req.Model)
+			if result.Err != nil {
+				log.Warnf("codearts: chunk error: %v", result.Err)
+				continue
+			}
+			if result.Chunk == nil {
+				if pt := gjson.Get(data, "prompt_tokens").Int(); pt > 0 {
+					totalPromptTokens = pt
+				}
+				if ct := gjson.Get(data, "completion_tokens").Int(); ct > 0 {
+					totalCompletionTokens = ct
+				}
+				continue
+			}
+
+			if result.HasToolCalls {
+				hasToolCalls = true
+			} else if result.HasContent {
+				accumulatedContent.WriteString(result.ContentValue)
+			}
+
+			if result.FinishReason == "stop" {
+				if pt := gjson.Get(data, "prompt_tokens").Int(); pt > 0 {
+					totalPromptTokens = pt
+				}
+				if ct := gjson.Get(data, "completion_tokens").Int(); ct > 0 {
+					totalCompletionTokens = ct
+				}
+			}
+
+			translatedChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, req.Payload, result.Chunk, &streamParam)
+			for _, tc := range translatedChunks {
+				if len(tc) > 0 {
+					chunks <- cliproxyexecutor.StreamChunk{Payload: tc}
+				}
+			}
+		}
+
+		if !hasToolCalls && accumulatedContent.Len() > 0 && strings.Contains(accumulatedContent.String(), "<tool_call_id>") {
+			xmlToolCalls := parseXMLToolCalls(accumulatedContent.String())
+			if len(xmlToolCalls) > 0 {
+				hasToolCalls = true
+				for i, tc := range xmlToolCalls {
+					chunk := buildToolCallStreamChunk(req.Model, i, tc)
+					translatedChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, 
req.Payload, chunk, &streamParam) + for _, tChunk := range translatedChunks { + if len(tChunk) > 0 { + chunks <- cliproxyexecutor.StreamChunk{Payload: tChunk} + } + } + } + } + } + + if hasToolCalls { + finishChunk := buildFinishReasonStreamChunk(req.Model, "tool_calls") + translatedChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, req.Payload, finishChunk, &streamParam) + for _, tChunk := range translatedChunks { + if len(tChunk) > 0 { + chunks <- cliproxyexecutor.StreamChunk{Payload: tChunk} + } + } + } + + if dataLineCount == 0 { + log.Warnf("codearts: stream ended with no data lines (total_lines=%d, first_non_empty=%q)", lineCount, firstNonEmptyLine) + } + + if err := scanner.Err(); err != nil { + log.Warnf("codearts: stream scanner error: %v", err) + chunks <- cliproxyexecutor.StreamChunk{Err: err} + } + + reporter.Publish(ctx, usage.Detail{ + InputTokens: totalPromptTokens, + OutputTokens: totalCompletionTokens, + }) + + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: codeartsChatURL, + Method: "POST", + Provider: "codearts", + AuthID: auth.ID, + }) + }() + + return &cliproxyexecutor.StreamResult{ + Headers: httpResp.Header, + Chunks: chunks, + }, nil +} + +// CountTokens is not supported by CodeArts. +func (e *CodeArtsExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + return cliproxyexecutor.Response{}, fmt.Errorf("codearts: token counting not supported") +} + +// Refresh refreshes the CodeArts security token. 
+func (e *CodeArtsExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if auth == nil || auth.Metadata == nil { + return nil, fmt.Errorf("codearts: no metadata to refresh") + } + + currentToken := extractCodeArtsToken(auth) + if currentToken == nil { + return nil, fmt.Errorf("codearts: no valid token data found for refresh") + } + + if !codearts.NeedsRefresh(currentToken) { + return auth, nil + } + + caAuth := codearts.NewCodeArtsAuth(nil) + newToken, err := caAuth.RefreshToken(ctx, currentToken) + if err != nil { + return nil, fmt.Errorf("codearts: refresh failed: %w", err) + } + + updated := auth.Clone() + updated.Metadata["ak"] = newToken.AK + updated.Metadata["sk"] = newToken.SK + updated.Metadata["security_token"] = newToken.SecurityToken + updated.Metadata["expires_at"] = newToken.ExpiresAt.Format(time.RFC3339) + if newToken.XAuthToken != "" { + updated.Metadata["x_auth_token"] = newToken.XAuthToken + } + + log.Infof("codearts: successfully refreshed token, expires at %s", newToken.ExpiresAt.Format(time.RFC3339)) + return updated, nil +} + +// extractCodeArtsToken extracts token data from auth metadata. 
+func extractCodeArtsToken(auth *cliproxyauth.Auth) *codearts.CodeArtsTokenData { + if auth == nil || auth.Metadata == nil { + return nil + } + + ak, _ := auth.Metadata["ak"].(string) + sk, _ := auth.Metadata["sk"].(string) + if ak == "" || sk == "" { + return nil + } + + token := &codearts.CodeArtsTokenData{ + AK: ak, + SK: sk, + SecurityToken: metadataStr(auth.Metadata, "security_token"), + XAuthToken: metadataStr(auth.Metadata, "x_auth_token"), + Email: metadataStr(auth.Metadata, "email"), + } + + if expiresStr := metadataStr(auth.Metadata, "expires_at"); expiresStr != "" { + if t, err := time.Parse(time.RFC3339, expiresStr); err == nil { + token.ExpiresAt = t + } + } + + return token +} + +func metadataStr(m map[string]any, key string) string { + if v, ok := m[key].(string); ok { + return v + } + return "" +} + +func extractUserID(auth *cliproxyauth.Auth) string { + if auth.Metadata != nil { + if uid, ok := auth.Metadata["user_id"].(string); ok { + return uid + } + if did, ok := auth.Metadata["domain_id"].(string); ok { + return did + } + } + return "" +} + +func generateTraceID() string { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return fmt.Sprintf("%032d", time.Now().UnixNano()) + } + return hex.EncodeToString(b) +} + +func generateChatID() string { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return fmt.Sprintf("%032d", time.Now().UnixNano()) + } + return hex.EncodeToString(b) +} + +func generateToolCallID() string { + b := make([]byte, 12) + if _, err := rand.Read(b); err != nil { + return fmt.Sprintf("call_%019d", time.Now().UnixNano()) + } + return "call_" + hex.EncodeToString(b) +} + +const toolsSystemPromptTemplate = "# Available Tools\n\nYou have access to the following tools. 
You MUST respond with tool calls using the exact XML format specified below.\n\n%s\n\n# Tool Call Format\n\nWhen you need to use a tool, you MUST output the tool call in the following XML format:\n\n<tool_call_id>call_<24_hex_chars></tool_call_id>\n<tool_name>function_name_here</tool_name>\n<tool_arguments>\n{\"param1\": \"value1\", \"param2\": \"value2\"}\n</tool_arguments>\n\nRules:\n- Each tool call MUST have a unique tool_call_id starting with \"call_\" followed by 24 random hex characters.\n- tool_arguments MUST be valid JSON matching the function's parameters schema.\n- You may make multiple tool calls in a single response.\n- When you want to call tools, output ONLY the tool call XML blocks, do NOT output any other text.\n- Do NOT wrap tool calls in markdown code blocks.\n- The tool_call_id MUST be unique for each tool call."
+
+func buildToolsSystemPrompt(tools gjson.Result) string {
+	var toolDefs []string
+	for _, tool := range tools.Array() {
+		if tool.Get("type").String() != "function" {
+			continue
+		}
+		fn := tool.Get("function")
+		name := fn.Get("name").String()
+		desc := fn.Get("description").String()
+		params := fn.Get("parameters").Raw
+		if params == "" {
+			params = "{}"
+		}
+		toolDefs = append(toolDefs, fmt.Sprintf("## %s\n%s\nParameters: %s", name, desc, params))
+	}
+	if len(toolDefs) == 0 {
+		return ""
+	}
+	return fmt.Sprintf(toolsSystemPromptTemplate, strings.Join(toolDefs, "\n\n"))
+}
+
+func parseXMLToolCalls(text string) []map[string]interface{} {
+	var results []map[string]interface{}
+	segments := strings.Split(text, "<tool_call_id>")
+	for _, seg := range segments[1:] {
+		idEnd := strings.Index(seg, "</tool_call_id>")
+		if idEnd < 0 {
+			continue
+		}
+		tcID := strings.TrimSpace(seg[:idEnd])
+
+		rest := seg[idEnd+len("</tool_call_id>"):]
+		nameStart := strings.Index(rest, "<tool_name>")
+		if nameStart < 0 {
+			continue
+		}
+		nameStart += len("<tool_name>")
+		nameEnd := strings.Index(rest, "</tool_name>")
+		if nameEnd < 0 || nameEnd < nameStart {
+			continue
+		}
+		tcName := strings.TrimSpace(rest[nameStart:nameEnd])
+
+		argsRest := rest[nameEnd+len("</tool_name>"):]
+		argsStart := strings.Index(argsRest, "<tool_arguments>")
+		if argsStart < 0 {
+			continue
+		}
+		argsStart += len("<tool_arguments>")
+		argsEnd := strings.Index(argsRest, "</tool_arguments>")
+		if argsEnd < 0 || argsEnd < argsStart {
+			continue
+		}
+		argsStr := strings.TrimSpace(argsRest[argsStart:argsEnd])
+
+		if tcID == "" {
+			tcID = generateToolCallID()
+		}
+		results = append(results, map[string]interface{}{
+			"id":   tcID,
+			"type": "function",
+			"function": map[string]interface{}{
+				"name":      tcName,
+				"arguments": argsStr,
+			},
+		})
+	}
+	return results
+}
+
+func stripXMLToolCalls(text string) string {
+	result := text
+	for strings.Contains(result, "<tool_call_id>") && strings.Contains(result, "</tool_arguments>") {
+		start := strings.Index(result, "<tool_call_id>")
+		end := strings.Index(result, "</tool_arguments>") + len("</tool_arguments>")
+		if end <= start {
+			break
+		}
+		result = result[:start] + result[end:]
+	}
+	return strings.TrimSpace(result)
+}
+
+func buildToolCallStreamChunk(model string, index int, toolCall map[string]interface{}) []byte {
+	tc := map[string]interface{}{
+		"index":    index,
+		"id":       toolCall["id"],
+		"type":     "function",
+		"function": toolCall["function"],
+	}
+	chunk := map[string]interface{}{
+		"id":      "chatcmpl-codearts",
+		"object":  "chat.completion.chunk",
+		"created": time.Now().Unix(),
+		"model":   model,
+		"choices": []map[string]interface{}{
+			{
+				"index": 0,
+				"delta": map[string]interface{}{
+					"tool_calls": []map[string]interface{}{tc},
+				},
+			},
+		},
+	}
+	result, _ := json.Marshal(chunk)
+	return result
+}
+
+func buildFinishReasonStreamChunk(model string, finishReason string) []byte {
+	chunk := map[string]interface{}{
+		"id":      "chatcmpl-codearts",
+		"object":  "chat.completion.chunk",
+		"created": time.Now().Unix(),
+		"model":   model,
+		"choices": []map[string]interface{}{
+			{
+				"index":         0,
+				"delta":         map[string]interface{}{},
+				"finish_reason": finishReason,
+			},
+		},
+	}
+	result, _ := json.Marshal(chunk)
+	return result
+}
+
+// buildCodeArtsPayload converts the OpenAI-format payload to CodeArts format.
+func buildCodeArtsPayload(openaiPayload []byte, modelName, agentID, userID string, opts cliproxyexecutor.Options) []byte { + messages := gjson.GetBytes(openaiPayload, "messages") + if !messages.Exists() { + log.Warn("codearts: no messages found in payload") + return openaiPayload + } + + var codeArtsMessages []map[string]string + for _, msg := range messages.Array() { + role := msg.Get("role").String() + content := extractTextContent(msg.Get("content")) + + var formattedContent string + switch role { + case "system": + formattedContent = "[System]\n" + content + case "assistant": + toolCalls := msg.Get("tool_calls") + if toolCalls.Exists() && len(toolCalls.Array()) > 0 { + var parts []string + if content != "" { + parts = append(parts, content) + } + for _, tc := range toolCalls.Array() { + name := tc.Get("function.name").String() + id := tc.Get("id").String() + args := tc.Get("function.arguments").String() + parts = append(parts, fmt.Sprintf("[Tool Call: %s] (id: %s)\n%s", name, id, args)) + } + formattedContent = "[Assistant]\n" + strings.Join(parts, "\n") + } else { + formattedContent = "[Assistant]\n" + content + } + case "tool": + toolName := msg.Get("name").String() + toolID := msg.Get("tool_call_id").String() + if toolName == "" { + toolName = "unknown" + } + formattedContent = fmt.Sprintf("[Tool Result: %s] (id: %s)\n%s", toolName, toolID, content) + case "user": + formattedContent = content + default: + formattedContent = content + } + + codeArtsMessages = append(codeArtsMessages, map[string]string{ + "type": "text", + "content": formattedContent, + }) + } + + taskParameters := map[string]interface{}{ + "is_intent_recognition": false, + "W3_Search": false, + "codebase_search": false, + "related_question": true, + "preferred_language": "zh-cn", + "enable_code_interpreter": false, + "projectLevelPrompt": "", + "contexts": []interface{}{}, + "expert_rules": []interface{}{}, + "ide": "CodeArts Agent", + "routerVersion": "v2", + "isNewClient": true, + 
"features": map[string]interface{}{"support_end_tag": true}, + } + + if tools := gjson.GetBytes(openaiPayload, "tools"); tools.Exists() { + taskParameters["tools"] = tools.Value() + toolsPrompt := buildToolsSystemPrompt(tools) + if toolsPrompt != "" { + hasSystem := false + for i, msg := range codeArtsMessages { + if strings.HasPrefix(msg["content"], "[System]") { + codeArtsMessages[i]["content"] = msg["content"] + "\n\n" + toolsPrompt + hasSystem = true + break + } + } + if !hasSystem { + codeArtsMessages = append( + []map[string]string{{"type": "text", "content": "[System]\n" + toolsPrompt}}, + codeArtsMessages..., + ) + } + } + } + if temp := gjson.GetBytes(openaiPayload, "temperature"); temp.Exists() { + taskParameters["temperature"] = temp.Value() + } + + chatID := generateChatID() + + request := map[string]interface{}{ + "chat_id": chatID, + "messages": codeArtsMessages, + "client": "IDE", + "task": "chat", + "task_parameters": taskParameters, + "batch_task_parameters": []interface{}{}, + "attempt": 1, + "user_id": userID, + "parent_message_id": "", + "is_delta_response": true, + "model_id": modelName, + } + + result, err := json.Marshal(request) + if err != nil { + log.Errorf("codearts: failed to marshal payload: %v", err) + return openaiPayload + } + return result +} + +// convertCodeArtsSSEToOpenAI converts a CodeArts SSE data line to OpenAI SSE format. 
+type codeartsStreamResult struct { + Chunk []byte + HasToolCalls bool + HasContent bool + ContentValue string + FinishReason string + Err error +} + +func convertCodeArtsSSEToOpenAI(data string, model string) codeartsStreamResult { + errorCode := gjson.Get(data, "error_code") + if errorCode.Exists() && errorCode.Int() != 0 { + errMsg := gjson.Get(data, "error_msg").String() + return codeartsStreamResult{Err: fmt.Errorf("CodeArts error %d: %s", errorCode.Int(), errMsg)} + } + + delta := gjson.Get(data, "delta") + if !delta.Exists() { + return codeartsStreamResult{} + } + + contentResult := delta.Get("content") + reasoningResult := delta.Get("reasoning_content") + toolCallsResult := delta.Get("tool_calls") + + contentExists := contentResult.Exists() + contentValue := contentResult.String() + reasoningExists := reasoningResult.Exists() + reasoningValue := reasoningResult.String() + hasToolCalls := toolCallsResult.Exists() && len(toolCallsResult.Array()) > 0 + + openaiDelta := make(map[string]interface{}) + + if contentExists { + openaiDelta["content"] = contentValue + } else if reasoningExists || hasToolCalls { + openaiDelta["content"] = "" + } + + if reasoningExists { + openaiDelta["reasoning_content"] = reasoningValue + } + + if hasToolCalls { + openaiDelta["tool_calls"] = toolCallsResult.Value() + } + + if !contentExists && !reasoningExists && !hasToolCalls { + role := delta.Get("role").String() + if role != "" { + openaiDelta["role"] = role + } + } + + if len(openaiDelta) == 0 { + return codeartsStreamResult{} + } + + finishReason := "" + promptTokens := gjson.Get(data, "prompt_tokens").Int() + completionTokens := gjson.Get(data, "completion_tokens").Int() + totalTokens := gjson.Get(data, "total_tokens").Int() + + if completionTokens > 0 && !contentExists && !reasoningExists && !hasToolCalls { + finishReason = "stop" + } + + respModel := gjson.Get(data, "model_name").String() + if respModel == "" { + respModel = model + } + + chunk := map[string]interface{}{ + 
"id": "chatcmpl-codearts", + "object": "chat.completion.chunk", + "created": time.Now().Unix(), + "model": respModel, + "choices": []map[string]interface{}{ + { + "index": 0, + "delta": openaiDelta, + "finish_reason": nil, + }, + }, + } + + if finishReason != "" { + chunk["choices"].([]map[string]interface{})[0]["finish_reason"] = finishReason + } + + if totalTokens > 0 { + chunk["usage"] = map[string]interface{}{ + "prompt_tokens": promptTokens, + "completion_tokens": completionTokens, + "total_tokens": totalTokens, + } + } + + result, err := json.Marshal(chunk) + if err != nil { + return codeartsStreamResult{} + } + + return codeartsStreamResult{ + Chunk: result, + HasToolCalls: hasToolCalls, + HasContent: contentExists && contentValue != "", + ContentValue: contentValue, + FinishReason: finishReason, + } +} + +// buildOpenAINonStreamResponse builds a complete OpenAI non-stream response. +func buildOpenAINonStreamResponse(content, reasoning, model string, promptTokens, completionTokens int64, toolCalls []map[string]interface{}) []byte { + message := map[string]interface{}{ + "role": "assistant", + } + if content != "" { + message["content"] = content + } else { + message["content"] = nil + } + if reasoning != "" { + message["reasoning_content"] = reasoning + } + + finishReason := "stop" + if len(toolCalls) > 0 { + finishReason = "tool_calls" + message["tool_calls"] = toolCalls + } + + resp := map[string]interface{}{ + "id": "chatcmpl-codearts", + "object": "chat.completion", + "created": time.Now().Unix(), + "model": model, + "choices": []map[string]interface{}{ + { + "index": 0, + "message": message, + "finish_reason": finishReason, + }, + }, + "usage": map[string]interface{}{ + "prompt_tokens": promptTokens, + "completion_tokens": completionTokens, + "total_tokens": promptTokens + completionTokens, + }, + } + + result, _ := json.Marshal(resp) + return result +} diff --git a/internal/runtime/executor/codebuddy_ai_executor.go 
b/internal/runtime/executor/codebuddy_ai_executor.go new file mode 100644 index 0000000000..2029922166 --- /dev/null +++ b/internal/runtime/executor/codebuddy_ai_executor.go @@ -0,0 +1,510 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "errors" + "fmt" + "io" + "net/http" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codebuddy_ai" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" + "github.com/tidwall/sjson" +) + +const ( + codeBuddyAIChatPath = "/v2/chat/completions" + codeBuddyAIAuthType = "codebuddy-ai" +) + +type CodeBuddyAIExecutor struct { + cfg *config.Config +} + +func NewCodeBuddyAIExecutor(cfg *config.Config) *CodeBuddyAIExecutor { + return &CodeBuddyAIExecutor{cfg: cfg} +} + +func (e *CodeBuddyAIExecutor) Identifier() string { return codeBuddyAIAuthType } + +func codeBuddyAICredentials(auth *cliproxyauth.Auth) (accessToken, userID, domain string) { + if auth == nil { + return "", "", "" + } + accessToken = metaStringValue(auth.Metadata, "access_token") + userID = metaStringValue(auth.Metadata, "user_id") + domain = metaStringValue(auth.Metadata, "domain") + if domain == "" { + domain = codebuddy_ai.DefaultDomain + } + return +} + +func (e *CodeBuddyAIExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + accessToken, userID, domain := codeBuddyAICredentials(auth) + if accessToken == "" { + return fmt.Errorf("codebuddy-ai: missing access token") + } + e.applyHeaders(req, accessToken, userID, domain) + return nil +} + +func 
(e *CodeBuddyAIExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("codebuddy-ai executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if err := e.PrepareRequest(httpReq, auth); err != nil { + return nil, err + } + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +func (e *CodeBuddyAIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + accessToken, userID, domain := codeBuddyAICredentials(auth) + if accessToken == "" { + return resp, fmt.Errorf("codebuddy-ai: missing access token") + } + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayloadSource, true) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) + requestedModel := payloadRequestedModel(opts, req.Model) + translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) + translated, _ = sjson.SetBytes(translated, "stream", true) + translated, _ = sjson.SetBytes(translated, "stream_options.include_usage", true) + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return resp, err + } + + url := codebuddy_ai.BaseURL + codeBuddyAIChatPath + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, 
bytes.NewReader(translated)) + if err != nil { + return resp, err + } + e.applyHeaders(httpReq, accessToken, userID, domain) + httpReq.Header.Set("Accept", "text/event-stream") + httpReq.Header.Set("Cache-Control", "no-cache") + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("codebuddy-ai executor: close response body error: %v", errClose) + } + }() + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if !isHTTPSuccess(httpResp.StatusCode) { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + log.Debugf("codebuddy-ai executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return resp, err + } + + body, err := io.ReadAll(httpResp.Body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + appendAPIResponseChunk(ctx, e.cfg, body) + aggregatedBody, usageDetail, err := aggregateOpenAIChatCompletionStream(body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + reporter.publish(ctx, usageDetail) + reporter.ensurePublished(ctx) + + var param any + out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, aggregatedBody, &param) +
resp = cliproxyexecutor.Response{Payload: []byte(out), Headers: httpResp.Header.Clone()} + return resp, nil +} + +func (e *CodeBuddyAIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + accessToken, userID, domain := codeBuddyAICredentials(auth) + if accessToken == "" { + return nil, fmt.Errorf("codebuddy-ai: missing access token") + } + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayloadSource, true) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) + requestedModel := payloadRequestedModel(opts, req.Model) + translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return nil, err + } + + url := codebuddy_ai.BaseURL + codeBuddyAIChatPath + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated)) + if err != nil { + return nil, err + } + e.applyHeaders(httpReq, accessToken, userID, domain) + httpReq.Header.Set("Accept", "text/event-stream") + httpReq.Header.Set("Cache-Control", "no-cache") + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: 
httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return nil, err + } + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if !isHTTPSuccess(httpResp.StatusCode) { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + httpResp.Body.Close() + log.Debugf("codebuddy-ai executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return nil, err + } + + out := make(chan cliproxyexecutor.StreamChunk) + go func() { + defer close(out) + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("codebuddy-ai executor: close stream body error: %v", errClose) + } + }() + + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(nil, maxScannerBufferSize) + var param any + for scanner.Scan() { + line := scanner.Bytes() + appendAPIResponseChunk(ctx, e.cfg, line) + if detail, ok := parseOpenAIStreamUsage(line); ok { + reporter.publish(ctx, detail) + } + if len(line) == 0 { + continue + } + if !bytes.HasPrefix(line, []byte("data:")) { + continue + } + chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(line), &param) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])} + } + } + if errScan := scanner.Err(); errScan != nil { + recordAPIResponseError(ctx, e.cfg, errScan) + reporter.publishFailure(ctx) + out <- cliproxyexecutor.StreamChunk{Err: errScan} + } + reporter.ensurePublished(ctx) + }() + + return &cliproxyexecutor.StreamResult{ + Headers: httpResp.Header.Clone(), + Chunks: out, + },
nil +} + +func (e *CodeBuddyAIExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if auth == nil { + return nil, fmt.Errorf("codebuddy-ai: missing auth") + } + + refreshToken := metaStringValue(auth.Metadata, "refresh_token") + if refreshToken == "" { + log.Debugf("codebuddy-ai executor: no refresh token available, skipping refresh") + return auth, nil + } + + accessToken, userID, domain := codeBuddyAICredentials(auth) + + authSvc := codebuddy_ai.NewCodeBuddyAIAuth(e.cfg) + storage, err := authSvc.RefreshToken(ctx, accessToken, refreshToken, userID, domain) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: token refresh failed: %w", err) + } + + updated := auth.Clone() + updated.Metadata["access_token"] = storage.AccessToken + if storage.RefreshToken != "" { + updated.Metadata["refresh_token"] = storage.RefreshToken + } + updated.Metadata["expires_in"] = storage.ExpiresIn + updated.Metadata["domain"] = storage.Domain + if storage.UserID != "" { + updated.Metadata["user_id"] = storage.UserID + } + now := time.Now() + updated.UpdatedAt = now + updated.LastRefreshedAt = now + + return updated, nil +} + +func (e *CodeBuddyAIExecutor) CountTokens(_ context.Context, _ *cliproxyauth.Auth, _ cliproxyexecutor.Request, _ cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + return cliproxyexecutor.Response{}, fmt.Errorf("codebuddy-ai: count tokens not supported") +} + +func (e *CodeBuddyAIExecutor) applyHeaders(req *http.Request, accessToken, userID, domain string) { + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", "application/json") + req.Header.Set("User-Agent", codebuddy_ai.UserAgent) + req.Header.Set("X-User-Id", userID) + req.Header.Set("X-Domain", domain) + req.Header.Set("X-IDE-Type", "IDE") + req.Header.Set("X-IDE-Name", "CodeBuddy") + req.Header.Set("X-IDE-Version", "1.100.0") + req.Header.Set("X-Product", "cloud") + 
req.Header.Set("X-Product-Version", "1.100.0") +} + +var codeBuddyAIInternalModelPrefixes = []string{ + "completion-", + "codewise-", + "nes-", + "chat-", + "enhance-", +} + +var codeBuddyAIAllowedInternalModels = map[string]bool{ + "o4-mini": true, +} + +func isCodeBuddyAIInternalModel(id string) bool { + for _, prefix := range codeBuddyAIInternalModelPrefixes { + if strings.HasPrefix(id, prefix) { + return !codeBuddyAIAllowedInternalModels[id] + } + } + return false +} + +func FetchCodeBuddyAIModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *config.Config) []*registry.ModelInfo { + accessToken, userID, domain := codeBuddyAICredentials(auth) + if accessToken == "" { + log.Infof("codebuddy-ai: no access token found, using static model list") + return registry.GetCodeBuddyAIModels() + } + + log.Debugf("codebuddy-ai: fetching dynamic models from config API") + + httpClient := newProxyAwareHTTPClient(ctx, cfg, auth, 15*time.Second) + req, err := http.NewRequestWithContext(ctx, http.MethodGet, codebuddy_ai.BaseURL+"/v3/config", nil) + if err != nil { + log.Warnf("codebuddy-ai: failed to create config request: %v", err) + return registry.GetCodeBuddyAIModels() + } + + req.Header.Set("User-Agent", codebuddy_ai.UserAgent) + req.Header.Set("Accept", "application/json, text/plain, */*") + req.Header.Set("X-Requested-With", "XMLHttpRequest") + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("X-User-Id", userID) + req.Header.Set("X-Domain", domain) + req.Header.Set("X-IDE-Type", "CodeBuddyIDE") + req.Header.Set("X-IDE-Name", "CodeBuddyIDE") + req.Header.Set("X-IDE-Version", "4.9.5") + req.Header.Set("X-Product-Version", "4.9.5") + req.Header.Set("X-Env-ID", "production") + req.Header.Set("X-Product", "SaaS") + + resp, err := httpClient.Do(req) + if err != nil { + if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) { + log.Warnf("codebuddy-ai: fetch models canceled: %v", err) + } else { + log.Warnf("codebuddy-ai: 
using static models (config API fetch failed: %v)", err) + } + return registry.GetCodeBuddyAIModels() + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("codebuddy-ai: close config response body error: %v", errClose) + } + }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + log.Warnf("codebuddy-ai: failed to read config response: %v", err) + return registry.GetCodeBuddyAIModels() + } + + if resp.StatusCode != http.StatusOK { + log.Warnf("codebuddy-ai: config API returned status %d", resp.StatusCode) + return registry.GetCodeBuddyAIModels() + } + + modelsResult := gjson.GetBytes(body, "data.models") + if !modelsResult.Exists() || !modelsResult.IsArray() { + log.Warn("codebuddy-ai: config API response missing data.models array") + return registry.GetCodeBuddyAIModels() + } + + var dynamicModels []*registry.ModelInfo + now := time.Now().Unix() + count := 0 + + modelsResult.ForEach(func(key, value gjson.Result) bool { + id := value.Get("id").String() + if id == "" { + return true + } + + if isCodeBuddyAIInternalModel(id) { + return true + } + + name := value.Get("name").String() + if name == "" { + name = id + } + + descEn := value.Get("descriptionEn").String() + descZh := value.Get("descriptionZh").String() + desc := descEn + if desc == "" { + desc = descZh + } + if desc == "" { + desc = name + " via CodeBuddy AI" + } + + maxInputTokens := int(value.Get("maxInputTokens").Int()) + maxOutputTokens := int(value.Get("maxOutputTokens").Int()) + maxAllowedSize := int(value.Get("maxAllowedSize").Int()) + + contextLength := maxInputTokens + if contextLength <= 0 && maxAllowedSize > 0 { + contextLength = maxAllowedSize + } + if contextLength <= 0 { + contextLength = 128000 + } + if maxOutputTokens <= 0 { + maxOutputTokens = 32768 + } + + supportsReasoning := value.Get("supportsReasoning").Bool() + onlyReasoning := value.Get("onlyReasoning").Bool() + + var thinkingSupport *registry.ThinkingSupport + if supportsReasoning || 
onlyReasoning { + thinkingSupport = &registry.ThinkingSupport{ZeroAllowed: true} + reasoningEffort := value.Get("reasoning.effort").String() + if reasoningEffort == "medium" || reasoningEffort == "high" { + thinkingSupport.DynamicAllowed = true + } + } + + dynamicModels = append(dynamicModels, &registry.ModelInfo{ + ID: id, + Object: "model", + Created: now, + OwnedBy: "codebuddy-ai", + Type: "codebuddy-ai", + DisplayName: name, + Description: desc, + ContextLength: contextLength, + MaxCompletionTokens: maxOutputTokens, + Thinking: thinkingSupport, + SupportedEndpoints: []string{"/chat/completions"}, + }) + count++ + return true + }) + + log.Infof("codebuddy-ai: fetched %d models from config API", count) + if count == 0 { + log.Warn("codebuddy-ai: no models parsed from config API, using static fallback") + return registry.GetCodeBuddyAIModels() + } + + return dynamicModels +} diff --git a/internal/runtime/executor/codebuddy_executor.go b/internal/runtime/executor/codebuddy_executor.go new file mode 100644 index 0000000000..938398d39a --- /dev/null +++ b/internal/runtime/executor/codebuddy_executor.go @@ -0,0 +1,797 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codebuddy" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" + "github.com/tidwall/sjson" +) + +const ( + codeBuddyChatPath = "/v2/chat/completions" + codeBuddyAuthType =
"codebuddy" +) + +// CodeBuddyExecutor handles requests to the CodeBuddy API. +type CodeBuddyExecutor struct { + cfg *config.Config +} + +// NewCodeBuddyExecutor creates a new CodeBuddy executor instance. +func NewCodeBuddyExecutor(cfg *config.Config) *CodeBuddyExecutor { + return &CodeBuddyExecutor{cfg: cfg} +} + +// Identifier returns the unique identifier for this executor. +func (e *CodeBuddyExecutor) Identifier() string { return codeBuddyAuthType } + +// codeBuddyCredentials extracts the access token and domain from auth metadata. +func codeBuddyCredentials(auth *cliproxyauth.Auth) (accessToken, userID, domain string) { + if auth == nil { + return "", "", "" + } + accessToken = metaStringValue(auth.Metadata, "access_token") + userID = metaStringValue(auth.Metadata, "user_id") + domain = metaStringValue(auth.Metadata, "domain") + if domain == "" { + domain = codebuddy.DefaultDomain + } + return +} + +// PrepareRequest prepares the HTTP request before execution. +func (e *CodeBuddyExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + accessToken, userID, domain := codeBuddyCredentials(auth) + if accessToken == "" { + return fmt.Errorf("codebuddy: missing access token") + } + e.applyHeaders(req, accessToken, userID, domain) + return nil +} + +// HttpRequest executes a raw HTTP request. +func (e *CodeBuddyExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("codebuddy executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if err := e.PrepareRequest(httpReq, auth); err != nil { + return nil, err + } + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +// Execute performs a non-streaming request. 
+func (e *CodeBuddyExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + accessToken, userID, domain := codeBuddyCredentials(auth) + if accessToken == "" { + return resp, fmt.Errorf("codebuddy: missing access token") + } + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayloadSource, true) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) + requestedModel := payloadRequestedModel(opts, req.Model) + translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) + translated, _ = sjson.SetBytes(translated, "stream", true) + translated, _ = sjson.SetBytes(translated, "stream_options.include_usage", true) + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return resp, err + } + + url := codebuddy.BaseURL + codeBuddyChatPath + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated)) + if err != nil { + return resp, err + } + e.applyHeaders(httpReq, accessToken, userID, domain) + httpReq.Header.Set("Accept", "text/event-stream") + httpReq.Header.Set("Cache-Control", "no-cache") + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: 
httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("codebuddy executor: close response body error: %v", errClose) + } + }() + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if !isHTTPSuccess(httpResp.StatusCode) { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + log.Debugf("codebuddy executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return resp, err + } + + body, err := io.ReadAll(httpResp.Body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + appendAPIResponseChunk(ctx, e.cfg, body) + aggregatedBody, usageDetail, err := aggregateOpenAIChatCompletionStream(body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + reporter.publish(ctx, usageDetail) + reporter.ensurePublished(ctx) + + var param any + out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, aggregatedBody, &param) + resp = cliproxyexecutor.Response{Payload: []byte(out), Headers: httpResp.Header.Clone()} + return resp, nil +} + +// ExecuteStream performs a streaming request.
+func (e *CodeBuddyExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + accessToken, userID, domain := codeBuddyCredentials(auth) + if accessToken == "" { + return nil, fmt.Errorf("codebuddy: missing access token") + } + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayloadSource, true) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) + requestedModel := payloadRequestedModel(opts, req.Model) + translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return nil, err + } + + url := codebuddy.BaseURL + codeBuddyChatPath + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated)) + if err != nil { + return nil, err + } + e.applyHeaders(httpReq, accessToken, userID, domain) + httpReq.Header.Set("Accept", "text/event-stream") + httpReq.Header.Set("Cache-Control", "no-cache") + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + 
AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return nil, err + } + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if !isHTTPSuccess(httpResp.StatusCode) { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + httpResp.Body.Close() + log.Debugf("codebuddy executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return nil, err + } + + out := make(chan cliproxyexecutor.StreamChunk) + go func() { + defer close(out) + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("codebuddy executor: close stream body error: %v", errClose) + } + }() + + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(nil, maxScannerBufferSize) + var param any + for scanner.Scan() { + line := scanner.Bytes() + appendAPIResponseChunk(ctx, e.cfg, line) + if detail, ok := parseOpenAIStreamUsage(line); ok { + reporter.publish(ctx, detail) + } + if len(line) == 0 { + continue + } + if !bytes.HasPrefix(line, []byte("data:")) { + continue + } + raw := bytes.TrimSpace(line[5:]) + if len(raw) > 0 && !bytes.Equal(raw, []byte("[DONE]")) { + if cleaned := cleanDeltaChunk(raw); cleaned == nil { + continue + } else if !bytes.Equal(cleaned, raw) { + line = append([]byte("data: "), cleaned...) 
+ } + } + chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(line), &param) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])} + } + } + if errScan := scanner.Err(); errScan != nil { + recordAPIResponseError(ctx, e.cfg, errScan) + reporter.publishFailure(ctx) + out <- cliproxyexecutor.StreamChunk{Err: errScan} + } + reporter.ensurePublished(ctx) + }() + + return &cliproxyexecutor.StreamResult{ + Headers: httpResp.Header.Clone(), + Chunks: out, + }, nil +} + +// Refresh exchanges the CodeBuddy refresh token for a new access token. +func (e *CodeBuddyExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if auth == nil { + return nil, fmt.Errorf("codebuddy: missing auth") + } + + refreshToken := metaStringValue(auth.Metadata, "refresh_token") + if refreshToken == "" { + log.Debugf("codebuddy executor: no refresh token available, skipping refresh") + return auth, nil + } + + accessToken, userID, domain := codeBuddyCredentials(auth) + + authSvc := codebuddy.NewCodeBuddyAuth(e.cfg) + storage, err := authSvc.RefreshToken(ctx, accessToken, refreshToken, userID, domain) + if err != nil { + return nil, fmt.Errorf("codebuddy: token refresh failed: %w", err) + } + + updated := auth.Clone() + updated.Metadata["access_token"] = storage.AccessToken + if storage.RefreshToken != "" { + updated.Metadata["refresh_token"] = storage.RefreshToken + } + updated.Metadata["expires_in"] = storage.ExpiresIn + updated.Metadata["domain"] = storage.Domain + if storage.UserID != "" { + updated.Metadata["user_id"] = storage.UserID + } + now := time.Now() + updated.UpdatedAt = now + updated.LastRefreshedAt = now + + return updated, nil +} + +// CountTokens is not supported for CodeBuddy.
+func (e *CodeBuddyExecutor) CountTokens(_ context.Context, _ *cliproxyauth.Auth, _ cliproxyexecutor.Request, _ cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + return cliproxyexecutor.Response{}, fmt.Errorf("codebuddy: count tokens not supported") +} + +// applyHeaders sets required headers for CodeBuddy API requests. +func (e *CodeBuddyExecutor) applyHeaders(req *http.Request, accessToken, userID, domain string) { + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", "application/json") + req.Header.Set("User-Agent", codebuddy.UserAgent) + req.Header.Set("X-User-Id", userID) + req.Header.Set("X-Domain", domain) + req.Header.Set("X-Product", "SaaS") + req.Header.Set("X-IDE-Type", "CodeBuddyIDE") + req.Header.Set("X-IDE-Name", "CodeBuddyIDE") + req.Header.Set("X-IDE-Version", "4.9.7") + req.Header.Set("X-Product-Version", "4.9.7") + req.Header.Set("X-Requested-With", "XMLHttpRequest") +} + +type openAIChatStreamChoiceAccumulator struct { + Role string + ContentParts []string + ReasoningParts []string + FinishReason string + ToolCalls map[int]*openAIChatStreamToolCallAccumulator + ToolCallOrder []int + NativeFinishReason any +} + +type openAIChatStreamToolCallAccumulator struct { + ID string + Type string + Name string + Arguments strings.Builder +} + +func aggregateOpenAIChatCompletionStream(raw []byte) ([]byte, usage.Detail, error) { + lines := bytes.Split(raw, []byte("\n")) + var ( + responseID string + model string + created int64 + serviceTier string + systemFP string + usageDetail usage.Detail + choices = map[int]*openAIChatStreamChoiceAccumulator{} + choiceOrder []int + ) + + for _, line := range lines { + line = bytes.TrimSpace(line) + if len(line) == 0 || !bytes.HasPrefix(line, []byte("data:")) { + continue + } + payload := bytes.TrimSpace(line[5:]) + if len(payload) == 0 || bytes.Equal(payload, []byte("[DONE]")) { + continue + } + if 
!gjson.ValidBytes(payload) { + continue + } + + root := gjson.ParseBytes(payload) + if responseID == "" { + responseID = root.Get("id").String() + } + if model == "" { + model = root.Get("model").String() + } + if created == 0 { + created = root.Get("created").Int() + } + if serviceTier == "" { + serviceTier = root.Get("service_tier").String() + } + if systemFP == "" { + systemFP = root.Get("system_fingerprint").String() + } + if detail, ok := parseOpenAIStreamUsage(line); ok { + usageDetail = detail + } + + for _, choiceResult := range root.Get("choices").Array() { + idx := int(choiceResult.Get("index").Int()) + choice := choices[idx] + if choice == nil { + choice = &openAIChatStreamChoiceAccumulator{ToolCalls: map[int]*openAIChatStreamToolCallAccumulator{}} + choices[idx] = choice + choiceOrder = append(choiceOrder, idx) + } + + delta := choiceResult.Get("delta") + if role := delta.Get("role").String(); role != "" { + choice.Role = role + } + if content := delta.Get("content").String(); content != "" { + choice.ContentParts = append(choice.ContentParts, content) + } + if reasoning := delta.Get("reasoning_content").String(); reasoning != "" { + choice.ReasoningParts = append(choice.ReasoningParts, reasoning) + } + if finishReason := choiceResult.Get("finish_reason").String(); finishReason != "" { + choice.FinishReason = finishReason + } + if nativeFinishReason := choiceResult.Get("native_finish_reason"); nativeFinishReason.Exists() { + choice.NativeFinishReason = nativeFinishReason.Value() + } + + for _, toolCallResult := range delta.Get("tool_calls").Array() { + toolIdx := int(toolCallResult.Get("index").Int()) + toolCall := choice.ToolCalls[toolIdx] + if toolCall == nil { + toolCall = &openAIChatStreamToolCallAccumulator{} + choice.ToolCalls[toolIdx] = toolCall + choice.ToolCallOrder = append(choice.ToolCallOrder, toolIdx) + } + if id := toolCallResult.Get("id").String(); id != "" { + toolCall.ID = id + } + if typ := toolCallResult.Get("type").String(); typ != 
"" { + toolCall.Type = typ + } + if name := toolCallResult.Get("function.name").String(); name != "" { + toolCall.Name = name + } + if args := toolCallResult.Get("function.arguments").String(); args != "" { + toolCall.Arguments.WriteString(args) + } + } + } + } + + if responseID == "" && model == "" && len(choiceOrder) == 0 { + return nil, usageDetail, fmt.Errorf("codebuddy: streaming response did not contain any chat completion chunks") + } + + response := map[string]any{ + "id": responseID, + "object": "chat.completion", + "created": created, + "model": model, + "choices": make([]map[string]any, 0, len(choiceOrder)), + "usage": map[string]any{ + "prompt_tokens": usageDetail.InputTokens, + "completion_tokens": usageDetail.OutputTokens, + "total_tokens": usageDetail.TotalTokens, + }, + } + if serviceTier != "" { + response["service_tier"] = serviceTier + } + if systemFP != "" { + response["system_fingerprint"] = systemFP + } + + for _, idx := range choiceOrder { + choice := choices[idx] + message := map[string]any{ + "role": choice.Role, + "content": strings.Join(choice.ContentParts, ""), + } + if message["role"] == "" { + message["role"] = "assistant" + } + if len(choice.ReasoningParts) > 0 { + message["reasoning_content"] = strings.Join(choice.ReasoningParts, "") + } + if len(choice.ToolCallOrder) > 0 { + toolCalls := make([]map[string]any, 0, len(choice.ToolCallOrder)) + for _, toolIdx := range choice.ToolCallOrder { + toolCall := choice.ToolCalls[toolIdx] + toolCallType := toolCall.Type + if toolCallType == "" { + toolCallType = "function" + } + arguments := toolCall.Arguments.String() + if arguments == "" { + arguments = "{}" + } + toolCalls = append(toolCalls, map[string]any{ + "id": toolCall.ID, + "type": toolCallType, + "function": map[string]any{ + "name": toolCall.Name, + "arguments": arguments, + }, + }) + } + message["tool_calls"] = toolCalls + } + + finishReason := choice.FinishReason + if finishReason == "" { + finishReason = "stop" + } + 
choicePayload := map[string]any{ + "index": idx, + "message": message, + "finish_reason": finishReason, + } + if choice.NativeFinishReason != nil { + choicePayload["native_finish_reason"] = choice.NativeFinishReason + } + response["choices"] = append(response["choices"].([]map[string]any), choicePayload) + } + + out, err := json.Marshal(response) + if err != nil { + return nil, usageDetail, fmt.Errorf("codebuddy: failed to encode aggregated response: %w", err) + } + return out, usageDetail, nil +} + +// cleanDeltaChunk processes a single SSE JSON chunk for CodeBuddy streaming. +// It returns: +// - nil: chunk should be dropped (no meaningful content) +// - modified bytes: chunk cleaned up (e.g. empty reasoning_content removed) +// - original bytes: chunk passed through as-is +// +// The CodeBuddy upstream sends reasoning_content:"" alongside non-empty content +// during the thinking-to-content transition. Many clients interpret +// reasoning_content:"" as "thinking ended", then see the next chunk's +// reasoning_content:"" again and think "thinking restarted". By stripping the +// empty reasoning_content field, the client never sees spurious thinking +// transitions.
+func cleanDeltaChunk(raw []byte) []byte { + delta := gjson.GetBytes(raw, "choices.0.delta") + if !delta.Exists() { + return raw + } + finishReason := gjson.GetBytes(raw, "choices.0.finish_reason").String() + if finishReason == "stop" || finishReason == "tool_calls" { + return raw + } + content := delta.Get("content").String() + reasoning := delta.Get("reasoning_content").String() + hasRole := delta.Get("role").Exists() + toolCalls := delta.Get("tool_calls") + hasToolCalls := toolCalls.Exists() && len(toolCalls.Array()) > 0 + if content == "" && reasoning == "" && !hasRole && !hasToolCalls { + return nil + } + if reasoning == "" && content != "" { + if cleaned, err := sjson.DeleteBytes(raw, "choices.0.delta.reasoning_content"); err == nil { + return cleaned + } + } + return raw +} + +var codeBuddyInternalModelPrefixes = []string{ + "completion-", + "codewise-", + "hunyuan-3b", + "hunyuan-7b", + "nes-", + "default-", + "chat-", + "hunyuan-image-", +} + +var codeBuddyAllowedInternalModels = map[string]bool{ + "deepseek-r1-0528": true, + "deepseek-r1-0528-lkeap": true, + "deepseek-v3-0324": true, + "deepseek-v3-0324-lkeap": true, + "deepseek-v3-0324-taco-completion": true, + "hunyuan-2.0-instruct": true, + "hunyuan-chat": true, + "glm-4.6": true, + "glm-4.6v": true, + "glm-4.7": true, + "glm-5.0": true, + "deepseek-v3-1": true, + "deepseek-v3-1-lkeap": true, + "deepseek-v3-1-volc": true, + "kimi-k2-instruct-taiji": true, + "kimi-k2-thinking": true, + "minimax-m2.5": true, +} + +func isCodeBuddyInternalModel(id string) bool { + for _, prefix := range codeBuddyInternalModelPrefixes { + if strings.HasPrefix(id, prefix) { + return !codeBuddyAllowedInternalModels[id] + } + } + return false +} + +func FetchCodeBuddyModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *config.Config) []*registry.ModelInfo { + accessToken, userID, domain := codeBuddyCredentials(auth) + if accessToken == "" { + log.Infof("codebuddy: no access token found, using static model list") + return 
registry.GetCodeBuddyModels() + } + + log.Debugf("codebuddy: fetching dynamic models from config API") + + httpClient := newProxyAwareHTTPClient(ctx, cfg, auth, 15*time.Second) + req, err := http.NewRequestWithContext(ctx, http.MethodGet, codebuddy.BaseURL+"/v3/config", nil) + if err != nil { + log.Warnf("codebuddy: failed to create config request: %v", err) + return registry.GetCodeBuddyModels() + } + + req.Header.Set("User-Agent", codebuddy.UserAgent) + req.Header.Set("Accept", "application/json, text/plain, */*") + req.Header.Set("X-Requested-With", "XMLHttpRequest") + req.Header.Set("X-IDE-Type", "CodeBuddyIDE") + req.Header.Set("X-IDE-Name", "CodeBuddyIDE") + req.Header.Set("X-IDE-Version", "4.9.7") + req.Header.Set("X-Product-Version", "4.9.7") + req.Header.Set("X-Env-ID", "production") + req.Header.Set("Authorization", "Bearer "+accessToken) + req.Header.Set("X-User-Id", userID) + req.Header.Set("X-Domain", domain) + req.Header.Set("X-Product", "SaaS") + req.Header.Set("Connection", "close") + + resp, err := httpClient.Do(req) + if err != nil { + if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) { + log.Warnf("codebuddy: fetch models canceled: %v", err) + } else { + log.Warnf("codebuddy: using static models (config API fetch failed: %v)", err) + } + return registry.GetCodeBuddyModels() + } + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("codebuddy: close config response body error: %v", errClose) + } + }() + + body, err := io.ReadAll(resp.Body) + if err != nil { + log.Warnf("codebuddy: failed to read config response: %v", err) + return registry.GetCodeBuddyModels() + } + + if resp.StatusCode != http.StatusOK { + log.Warnf("codebuddy: config API returned status %d", resp.StatusCode) + return registry.GetCodeBuddyModels() + } + + modelsResult := gjson.GetBytes(body, "data.models") + if !modelsResult.Exists() || !modelsResult.IsArray() { + log.Warn("codebuddy: config API response missing 
data.models array") + return registry.GetCodeBuddyModels() + } + + var dynamicModels []*registry.ModelInfo + now := time.Now().Unix() + count := 0 + + modelsResult.ForEach(func(key, value gjson.Result) bool { + id := value.Get("id").String() + if id == "" { + return true + } + + if isCodeBuddyInternalModel(id) { + return true + } + + name := value.Get("name").String() + if name == "" { + name = id + } + + descZh := value.Get("descriptionZh").String() + descEn := value.Get("descriptionEn").String() + desc := descEn + if desc == "" { + desc = descZh + } + if desc == "" { + desc = name + " via CodeBuddy" + } + + maxInputTokens := int(value.Get("maxInputTokens").Int()) + maxOutputTokens := int(value.Get("maxOutputTokens").Int()) + maxAllowedSize := int(value.Get("maxAllowedSize").Int()) + + contextLength := maxInputTokens + if contextLength <= 0 && maxAllowedSize > 0 { + contextLength = maxAllowedSize + } + if contextLength <= 0 { + contextLength = 128000 + } + if maxOutputTokens <= 0 { + maxOutputTokens = 32768 + } + + supportsReasoning := value.Get("supportsReasoning").Bool() + onlyReasoning := value.Get("onlyReasoning").Bool() + supportsToolCall := value.Get("supportsToolCall").Bool() + supportsImages := value.Get("supportsImages").Bool() + disabledMultimodal := value.Get("disabledMultimodal").Bool() + + _ = supportsToolCall + _ = supportsImages + _ = disabledMultimodal + + var thinkingSupport *registry.ThinkingSupport + if supportsReasoning || onlyReasoning { + thinkingSupport = &registry.ThinkingSupport{ZeroAllowed: true} + reasoningEffort := value.Get("reasoning.effort").String() + if reasoningEffort == "medium" || reasoningEffort == "high" { + thinkingSupport.DynamicAllowed = true + } + } + + displayName := name + + dynamicModels = append(dynamicModels, &registry.ModelInfo{ + ID: id, + Object: "model", + Created: now, + OwnedBy: "tencent", + Type: "codebuddy", + DisplayName: displayName, + Description: desc, + ContextLength: contextLength, + MaxCompletionTokens:
maxOutputTokens, + Thinking: thinkingSupport, + SupportedEndpoints: []string{"/chat/completions"}, + }) + count++ + return true + }) + + log.Infof("codebuddy: fetched %d models from config API", count) + if count == 0 { + log.Warn("codebuddy: no models parsed from config API, using static fallback") + return registry.GetCodeBuddyModels() + } + + return dynamicModels +} diff --git a/internal/runtime/executor/codex_executor.go b/internal/runtime/executor/codex_executor.go index 41b1c32527..a1bbe6b84a 100644 --- a/internal/runtime/executor/codex_executor.go +++ b/internal/runtime/executor/codex_executor.go @@ -11,15 +11,15 @@ import ( "strings" "time" - codexauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + codexauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -30,12 +30,76 
@@ import ( ) const ( - codexUserAgent = "codex-tui/0.118.0 (Mac OS 26.3.1; arm64) iTerm.app/3.6.9 (codex-tui; 0.118.0)" - codexOriginator = "codex-tui" + codexUserAgent = "codex_cli_rs/0.118.0 (Mac OS 26.3.1; arm64) iTerm.app/3.6.9" + codexOriginator = "codex_cli_rs" + codexDefaultImageToolModel = "gpt-image-2" ) var dataTag = []byte("data:") +// Streamed Codex responses may emit response.output_item.done events while leaving +// response.completed.response.output empty. Keep the stream path aligned with the +// already-patched non-stream path by reconstructing response.output from those items. +func collectCodexOutputItemDone(eventData []byte, outputItemsByIndex map[int64][]byte, outputItemsFallback *[][]byte) { + itemResult := gjson.GetBytes(eventData, "item") + if !itemResult.Exists() || itemResult.Type != gjson.JSON { + return + } + outputIndexResult := gjson.GetBytes(eventData, "output_index") + if outputIndexResult.Exists() { + outputItemsByIndex[outputIndexResult.Int()] = []byte(itemResult.Raw) + return + } + *outputItemsFallback = append(*outputItemsFallback, []byte(itemResult.Raw)) +} + +func patchCodexCompletedOutput(eventData []byte, outputItemsByIndex map[int64][]byte, outputItemsFallback [][]byte) []byte { + outputResult := gjson.GetBytes(eventData, "response.output") + shouldPatchOutput := (!outputResult.Exists() || !outputResult.IsArray() || len(outputResult.Array()) == 0) && (len(outputItemsByIndex) > 0 || len(outputItemsFallback) > 0) + if !shouldPatchOutput { + return eventData + } + + indexes := make([]int64, 0, len(outputItemsByIndex)) + for idx := range outputItemsByIndex { + indexes = append(indexes, idx) + } + sort.Slice(indexes, func(i, j int) bool { + return indexes[i] < indexes[j] + }) + + items := make([][]byte, 0, len(outputItemsByIndex)+len(outputItemsFallback)) + for _, idx := range indexes { + items = append(items, outputItemsByIndex[idx]) + } + items = append(items, outputItemsFallback...) 
+ + outputArray := []byte("[]") + if len(items) > 0 { + var buf bytes.Buffer + totalLen := 2 + for _, item := range items { + totalLen += len(item) + } + if len(items) > 1 { + totalLen += len(items) - 1 + } + buf.Grow(totalLen) + buf.WriteByte('[') + for i, item := range items { + if i > 0 { + buf.WriteByte(',') + } + buf.Write(item) + } + buf.WriteByte(']') + outputArray = buf.Bytes() + } + + completedDataPatched, _ := sjson.SetRawBytes(eventData, "response.output", outputArray) + return completedDataPatched +} + // CodexExecutor is a stateless executor for Codex (OpenAI Responses API entrypoint). // If api_key is unavailable on auth, it falls back to legacy via ClientAdapter. type CodexExecutor struct { @@ -109,7 +173,8 @@ func (e *CodexExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) body, _ = sjson.SetBytes(body, "stream", true) body, _ = sjson.DeleteBytes(body, "previous_response_id") @@ -117,6 +182,9 @@ func (e *CodexExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re body, _ = sjson.DeleteBytes(body, "safety_identifier") body, _ = sjson.DeleteBytes(body, "stream_options") body = normalizeCodexInstructions(body) + if e.cfg == nil || e.cfg.DisableImageGeneration == config.DisableImageGenerationOff { + body = ensureImageGenerationTool(body, baseModel, auth) + } url := strings.TrimSuffix(baseURL, "/") + "/responses" httpReq, err := e.cacheHelper(ctx, from, url, req, body) @@ -199,6 +267,7 @@ func (e *CodexExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re if detail, ok := helps.ParseCodexUsage(eventData); 
ok { reporter.Publish(ctx, detail) } + publishCodexImageToolUsage(ctx, reporter, body, eventData) completedData := eventData outputResult := gjson.GetBytes(completedData, "response.output") @@ -259,10 +328,14 @@ func (e *CodexExecutor) executeCompact(ctx context.Context, auth *cliproxyauth.A } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) body, _ = sjson.DeleteBytes(body, "stream") body = normalizeCodexInstructions(body) + if e.cfg == nil || e.cfg.DisableImageGeneration == config.DisableImageGenerationOff { + body = ensureImageGenerationTool(body, baseModel, auth) + } url := strings.TrimSuffix(baseURL, "/") + "/responses/compact" httpReq, err := e.cacheHelper(ctx, from, url, req, body) @@ -350,13 +423,17 @@ func (e *CodexExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.DeleteBytes(body, "previous_response_id") body, _ = sjson.DeleteBytes(body, "prompt_cache_retention") body, _ = sjson.DeleteBytes(body, "safety_identifier") body, _ = sjson.DeleteBytes(body, "stream_options") body, _ = sjson.SetBytes(body, "model", baseModel) body = normalizeCodexInstructions(body) + if e.cfg == nil || e.cfg.DisableImageGeneration == config.DisableImageGenerationOff { + body = ensureImageGenerationTool(body, baseModel, auth) + } url 
:= strings.TrimSuffix(baseURL, "/") + "/responses" httpReq, err := e.cacheHelper(ctx, from, url, req, body) @@ -414,28 +491,44 @@ func (e *CodexExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au scanner := bufio.NewScanner(httpResp.Body) scanner.Buffer(nil, 52_428_800) // 50MB var param any + outputItemsByIndex := make(map[int64][]byte) + var outputItemsFallback [][]byte for scanner.Scan() { line := scanner.Bytes() helps.AppendAPIResponseChunk(ctx, e.cfg, line) + translatedLine := bytes.Clone(line) if bytes.HasPrefix(line, dataTag) { data := bytes.TrimSpace(line[5:]) - if gjson.GetBytes(data, "type").String() == "response.completed" { + switch gjson.GetBytes(data, "type").String() { + case "response.output_item.done": + collectCodexOutputItemDone(data, outputItemsByIndex, &outputItemsFallback) + case "response.completed": if detail, ok := helps.ParseCodexUsage(data); ok { reporter.Publish(ctx, detail) } + publishCodexImageToolUsage(ctx, reporter, body, data) + data = patchCodexCompletedOutput(data, outputItemsByIndex, outputItemsFallback) + translatedLine = append([]byte("data: "), data...) 
} } - chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, originalPayload, body, bytes.Clone(line), &param) + chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, originalPayload, body, translatedLine, &param) for i := range chunks { - out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]}: + case <-ctx.Done(): + return + } } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } }() return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil @@ -600,6 +693,9 @@ func countCodexInputTokens(enc tokenizer.Codec, body []byte) (int64, error) { func (e *CodexExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { log.Debugf("codex executor: refresh called") + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } if auth == nil { return nil, statusErr{code: 500, msg: "codex executor: auth is nil"} } @@ -735,6 +831,7 @@ func newCodexStatusErr(statusCode int, body []byte) statusErr { if isCodexModelCapacityError(body) { errCode = http.StatusTooManyRequests } + body = classifyCodexStatusError(errCode, body) err := statusErr{code: errCode, msg: string(body)} if retryAfter := parseCodexRetryAfter(errCode, body, time.Now()); retryAfter != nil { err.retryAfter = retryAfter @@ -742,6 +839,52 @@ func newCodexStatusErr(statusCode int, body []byte) statusErr { return err } +func classifyCodexStatusError(statusCode int, body []byte) []byte { + code, errType, ok := codexStatusErrorClassification(statusCode, body) + if !ok { + return body + } + message := gjson.GetBytes(body, "error.message").String() + if
message == "" { + message = gjson.GetBytes(body, "message").String() + } + if message == "" { + message = strings.TrimSpace(string(body)) + } + if message == "" { + message = http.StatusText(statusCode) + } + out := []byte(`{"error":{}}`) + out, _ = sjson.SetBytes(out, "error.message", message) + out, _ = sjson.SetBytes(out, "error.type", errType) + out, _ = sjson.SetBytes(out, "error.code", code) + return out +} + +func codexStatusErrorClassification(statusCode int, body []byte) (code string, errType string, ok bool) { + errorMessage := strings.ToLower(strings.TrimSpace(gjson.GetBytes(body, "error.message").String())) + if errorMessage == "" { + errorMessage = strings.ToLower(strings.TrimSpace(gjson.GetBytes(body, "message").String())) + } + lower := strings.ToLower(strings.TrimSpace(string(body))) + upstreamCode := strings.ToLower(strings.TrimSpace(gjson.GetBytes(body, "error.code").String())) + upstreamType := strings.ToLower(strings.TrimSpace(gjson.GetBytes(body, "error.type").String())) + isInvalidRequest := upstreamType == "" || upstreamType == "invalid_request_error" + + switch { + case statusCode == http.StatusRequestEntityTooLarge || upstreamCode == "context_length_exceeded" || upstreamCode == "context_too_large" || isInvalidRequest && (strings.Contains(errorMessage, "context length") || strings.Contains(errorMessage, "context_length") || strings.Contains(errorMessage, "maximum context") || strings.Contains(errorMessage, "too many tokens")): + return "context_too_large", "invalid_request_error", true + case strings.Contains(lower, "invalid signature in thinking block") || strings.Contains(lower, "invalid_encrypted_content"): + return "thinking_signature_invalid", "invalid_request_error", true + case upstreamCode == "previous_response_not_found" || strings.Contains(lower, "previous_response_not_found") || strings.Contains(lower, "previous_response_id") && strings.Contains(lower, "not found"): + return "previous_response_not_found", "invalid_request_error", 
true + case statusCode == http.StatusUnauthorized || upstreamType == "authentication_error" || upstreamCode == "invalid_api_key" || strings.Contains(lower, "invalid or expired token") || strings.Contains(lower, "refresh_token_reused"): + return "auth_unavailable", "authentication_error", true + default: + return "", "", false + } +} + func normalizeCodexInstructions(body []byte) []byte { instructions := gjson.GetBytes(body, "instructions") if !instructions.Exists() || instructions.Type == gjson.Null { @@ -750,6 +893,66 @@ func normalizeCodexInstructions(body []byte) []byte { return body } +var imageGenToolJSON = []byte(`{"type":"image_generation","output_format":"png"}`) +var imageGenToolArrayJSON = []byte(`[{"type":"image_generation","output_format":"png"}]`) + +func isCodexFreePlanAuth(auth *cliproxyauth.Auth) bool { + if auth == nil || auth.Attributes == nil { + return false + } + if !strings.EqualFold(strings.TrimSpace(auth.Provider), "codex") { + return false + } + return strings.EqualFold(strings.TrimSpace(auth.Attributes["plan_type"]), "free") +} + +func ensureImageGenerationTool(body []byte, baseModel string, auth *cliproxyauth.Auth) []byte { + if strings.HasSuffix(baseModel, "spark") { + return body + } + if isCodexFreePlanAuth(auth) { + return body + } + + tools := gjson.GetBytes(body, "tools") + if !tools.Exists() || !tools.IsArray() { + body, _ = sjson.SetRawBytes(body, "tools", imageGenToolArrayJSON) + return body + } + for _, t := range tools.Array() { + if t.Get("type").String() == "image_generation" { + return body + } + } + body, _ = sjson.SetRawBytes(body, "tools.-1", imageGenToolJSON) + return body +} + +func publishCodexImageToolUsage(ctx context.Context, reporter *helps.UsageReporter, body []byte, completedData []byte) { + detail, ok := helps.ParseCodexImageToolUsage(completedData) + if !ok { + return + } + reporter.EnsurePublished(ctx) + reporter.PublishAdditionalModel(ctx, codexImageGenerationToolModel(body), detail) +} + +func 
codexImageGenerationToolModel(body []byte) string { + tools := gjson.GetBytes(body, "tools") + if tools.IsArray() { + for _, tool := range tools.Array() { + if tool.Get("type").String() != "image_generation" { + continue + } + if model := strings.TrimSpace(tool.Get("model").String()); model != "" { + return model + } + break + } + } + return codexDefaultImageToolModel +} + func isCodexModelCapacityError(errorBody []byte) bool { if len(errorBody) == 0 { return false diff --git a/internal/runtime/executor/codex_executor_cache_test.go b/internal/runtime/executor/codex_executor_cache_test.go index 7a24fd9643..cb96a90289 100644 --- a/internal/runtime/executor/codex_executor_cache_test.go +++ b/internal/runtime/executor/codex_executor_cache_test.go @@ -8,15 +8,15 @@ import ( "github.com/gin-gonic/gin" "github.com/google/uuid" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" ) func TestCodexExecutorCacheHelper_OpenAIChatCompletions_StablePromptCacheKeyFromAPIKey(t *testing.T) { recorder := httptest.NewRecorder() ginCtx, _ := gin.CreateTestContext(recorder) - ginCtx.Set("apiKey", "test-api-key") + ginCtx.Set("userApiKey", "test-api-key") ctx := context.WithValue(context.Background(), "gin", ginCtx) executor := &CodexExecutor{} diff --git a/internal/runtime/executor/codex_executor_compact_test.go b/internal/runtime/executor/codex_executor_compact_test.go index 02c6db29fd..549cad9e77 100644 --- a/internal/runtime/executor/codex_executor_compact_test.go +++ b/internal/runtime/executor/codex_executor_compact_test.go @@ -7,10 +7,10 @@ import ( "net/http/httptest" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - cliproxyauth 
"github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" ) diff --git a/internal/runtime/executor/codex_executor_imagegen_test.go b/internal/runtime/executor/codex_executor_imagegen_test.go new file mode 100644 index 0000000000..89d2a1c2a3 --- /dev/null +++ b/internal/runtime/executor/codex_executor_imagegen_test.go @@ -0,0 +1,118 @@ +package executor + +import ( + "testing" + + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/tidwall/gjson" +) + +func TestEnsureImageGenerationTool_NoTools(t *testing.T) { + body := []byte(`{"model":"gpt-5.4","input":"draw a cat"}`) + result := ensureImageGenerationTool(body, "gpt-5.4", nil) + + tools := gjson.GetBytes(result, "tools") + if !tools.IsArray() { + t.Fatalf("expected tools array, got %v", tools.Type) + } + arr := tools.Array() + if len(arr) != 1 { + t.Fatalf("expected 1 tool, got %d", len(arr)) + } + if arr[0].Get("type").String() != "image_generation" { + t.Fatalf("expected type=image_generation, got %s", arr[0].Get("type").String()) + } + if arr[0].Get("output_format").String() != "png" { + t.Fatalf("expected output_format=png, got %s", arr[0].Get("output_format").String()) + } +} + +func TestEnsureImageGenerationTool_ExistingToolsWithoutImageGen(t *testing.T) { + body := []byte(`{"model":"gpt-5.4","tools":[{"type":"function","name":"get_weather","parameters":{}}]}`) + result := ensureImageGenerationTool(body, "gpt-5.4", nil) + + tools := gjson.GetBytes(result, "tools") + arr := tools.Array() + if len(arr) != 2 { 
+ t.Fatalf("expected 2 tools, got %d", len(arr)) + } + if arr[0].Get("type").String() != "function" { + t.Fatalf("expected first tool type=function, got %s", arr[0].Get("type").String()) + } + if arr[1].Get("type").String() != "image_generation" { + t.Fatalf("expected second tool type=image_generation, got %s", arr[1].Get("type").String()) + } +} + +func TestEnsureImageGenerationTool_AlreadyPresent(t *testing.T) { + body := []byte(`{"model":"gpt-5.4","tools":[{"type":"image_generation","output_format":"webp"},{"type":"function","name":"f1"}]}`) + result := ensureImageGenerationTool(body, "gpt-5.4", nil) + + tools := gjson.GetBytes(result, "tools") + arr := tools.Array() + if len(arr) != 2 { + t.Fatalf("expected 2 tools (no duplicate), got %d", len(arr)) + } + if arr[0].Get("output_format").String() != "webp" { + t.Fatalf("expected original output_format=webp preserved, got %s", arr[0].Get("output_format").String()) + } +} + +func TestEnsureImageGenerationTool_EmptyToolsArray(t *testing.T) { + body := []byte(`{"model":"gpt-5.4","tools":[]}`) + result := ensureImageGenerationTool(body, "gpt-5.4", nil) + + tools := gjson.GetBytes(result, "tools") + arr := tools.Array() + if len(arr) != 1 { + t.Fatalf("expected 1 tool, got %d", len(arr)) + } + if arr[0].Get("type").String() != "image_generation" { + t.Fatalf("expected type=image_generation, got %s", arr[0].Get("type").String()) + } +} + +func TestEnsureImageGenerationTool_WebSearchAndImageGen(t *testing.T) { + body := []byte(`{"model":"gpt-5.4","tools":[{"type":"web_search"}]}`) + result := ensureImageGenerationTool(body, "gpt-5.4", nil) + + tools := gjson.GetBytes(result, "tools") + arr := tools.Array() + if len(arr) != 2 { + t.Fatalf("expected 2 tools, got %d", len(arr)) + } + if arr[0].Get("type").String() != "web_search" { + t.Fatalf("expected first tool type=web_search, got %s", arr[0].Get("type").String()) + } + if arr[1].Get("type").String() != "image_generation" { + t.Fatalf("expected second tool 
type=image_generation, got %s", arr[1].Get("type").String()) + } +} + +func TestEnsureImageGenerationTool_GPT53CodexSparkDoesNotInjectTool(t *testing.T) { + body := []byte(`{"model":"gpt-5.3-codex-spark","input":"draw a cat"}`) + result := ensureImageGenerationTool(body, "gpt-5.3-codex-spark", nil) + + if string(result) != string(body) { + t.Fatalf("expected body to be unchanged, got %s", string(result)) + } + if gjson.GetBytes(result, "tools").Exists() { + t.Fatalf("expected no tools for gpt-5.3-codex-spark, got %s", gjson.GetBytes(result, "tools").Raw) + } +} + +func TestEnsureImageGenerationTool_FreeCodexAuthDoesNotInjectTool(t *testing.T) { + body := []byte(`{"model":"gpt-5.4","input":"draw a cat"}`) + freeAuth := &cliproxyauth.Auth{ + Provider: "codex", + Attributes: map[string]string{"plan_type": "free"}, + } + result := ensureImageGenerationTool(body, "gpt-5.4", freeAuth) + + if string(result) != string(body) { + t.Fatalf("expected body to be unchanged, got %s", string(result)) + } + if gjson.GetBytes(result, "tools").Exists() { + t.Fatalf("expected no tools for free codex auth, got %s", gjson.GetBytes(result, "tools").Raw) + } +} diff --git a/internal/runtime/executor/codex_executor_instructions_test.go b/internal/runtime/executor/codex_executor_instructions_test.go index c5dc5aa813..b3c8ac18ac 100644 --- a/internal/runtime/executor/codex_executor_instructions_test.go +++ b/internal/runtime/executor/codex_executor_instructions_test.go @@ -7,10 +7,10 @@ import ( "net/http/httptest" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor 
"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" ) diff --git a/internal/runtime/executor/codex_executor_retry_test.go b/internal/runtime/executor/codex_executor_retry_test.go index 249d40d656..7207d5734c 100644 --- a/internal/runtime/executor/codex_executor_retry_test.go +++ b/internal/runtime/executor/codex_executor_retry_test.go @@ -1,6 +1,7 @@ package executor import ( + "encoding/json" "net/http" "strconv" "testing" @@ -73,6 +74,94 @@ func TestNewCodexStatusErrTreatsCapacityAsRetryableRateLimit(t *testing.T) { } } +func TestNewCodexStatusErrClassifiesKnownCodexFailures(t *testing.T) { + tests := []struct { + name string + statusCode int + body []byte + wantStatus int + wantType string + wantCode string + }{ + { + name: "context length status", + statusCode: http.StatusRequestEntityTooLarge, + body: []byte(`{"error":{"message":"context length exceeded","type":"invalid_request_error","code":"context_length_exceeded"}}`), + wantStatus: http.StatusRequestEntityTooLarge, + wantType: "invalid_request_error", + wantCode: "context_too_large", + }, + { + name: "thinking signature", + statusCode: http.StatusBadRequest, + body: []byte(`{"error":{"message":"Invalid signature in thinking block","type":"invalid_request_error","code":"invalid_request_error"}}`), + wantStatus: http.StatusBadRequest, + wantType: "invalid_request_error", + wantCode: "thinking_signature_invalid", + }, + { + name: "previous response missing", + statusCode: http.StatusBadRequest, + body: []byte(`{"error":{"message":"No response found for previous_response_id resp_123","type":"invalid_request_error","code":"previous_response_not_found"}}`), + wantStatus: http.StatusBadRequest, + wantType: "invalid_request_error", + wantCode: "previous_response_not_found", + }, + { + name: "auth unavailable", + statusCode: http.StatusUnauthorized, + body: []byte(`{"error":{"message":"invalid or expired 
token","type":"authentication_error","code":"invalid_api_key"}}`), + wantStatus: http.StatusUnauthorized, + wantType: "authentication_error", + wantCode: "auth_unavailable", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + err := newCodexStatusErr(tc.statusCode, tc.body) + + if got := err.StatusCode(); got != tc.wantStatus { + t.Fatalf("status code = %d, want %d", got, tc.wantStatus) + } + assertCodexErrorCode(t, err.Error(), tc.wantType, tc.wantCode) + }) + } +} + +func TestNewCodexStatusErrPreservesUnclassifiedErrors(t *testing.T) { + body := []byte(`{"error":{"message":"documentation mentions too many tokens, but this is a billing configuration failure","type":"server_error","code":"billing_config_error"}}`) + + err := newCodexStatusErr(http.StatusBadGateway, body) + + if got := err.StatusCode(); got != http.StatusBadGateway { + t.Fatalf("status code = %d, want %d", got, http.StatusBadGateway) + } + if got := err.Error(); got != string(body) { + t.Fatalf("error body = %s, want original %s", got, string(body)) + } +} + +func assertCodexErrorCode(t *testing.T, raw string, wantType string, wantCode string) { + t.Helper() + + var payload struct { + Error struct { + Type string `json:"type"` + Code string `json:"code"` + } `json:"error"` + } + if err := json.Unmarshal([]byte(raw), &payload); err != nil { + t.Fatalf("error body is not valid JSON: %v; body=%s", err, raw) + } + if payload.Error.Type != wantType { + t.Fatalf("error.type = %q, want %q; body=%s", payload.Error.Type, wantType, raw) + } + if payload.Error.Code != wantCode { + t.Fatalf("error.code = %q, want %q; body=%s", payload.Error.Code, wantCode, raw) + } +} + func itoa(v int64) string { return strconv.FormatInt(v, 10) } diff --git a/internal/runtime/executor/codex_executor_stream_output_test.go b/internal/runtime/executor/codex_executor_stream_output_test.go index 91d9b0761c..b814c3e96d 100644 --- a/internal/runtime/executor/codex_executor_stream_output_test.go +++ 
b/internal/runtime/executor/codex_executor_stream_output_test.go @@ -1,16 +1,17 @@ package executor import ( + "bytes" "context" "net/http" "net/http/httptest" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" ) @@ -44,3 +45,53 @@ func TestCodexExecutorExecute_EmptyStreamCompletionOutputUsesOutputItemDone(t *t t.Fatalf("choices.0.message.content = %q, want %q; payload=%s", gotContent, "ok", string(resp.Payload)) } } + +func TestCodexExecutorExecuteStream_EmptyStreamCompletionOutputUsesOutputItemDone(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("data: {\"type\":\"response.output_item.done\",\"item\":{\"type\":\"message\",\"role\":\"assistant\",\"content\":[{\"type\":\"output_text\",\"text\":\"ok\"}]},\"output_index\":0}\n")) + _, _ = w.Write([]byte("data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\",\"object\":\"response\",\"created_at\":1775555723,\"status\":\"completed\",\"model\":\"gpt-5.4-mini-2026-03-17\",\"output\":[],\"usage\":{\"input_tokens\":8,\"output_tokens\":28,\"total_tokens\":36}}}\n\n")) + })) + defer server.Close() + + executor := NewCodexExecutor(&config.Config{}) + auth := 
&cliproxyauth.Auth{Attributes: map[string]string{ + "base_url": server.URL, + "api_key": "test", + }} + + result, err := executor.ExecuteStream(context.Background(), auth, cliproxyexecutor.Request{ + Model: "gpt-5.4-mini", + Payload: []byte(`{"model":"gpt-5.4-mini","input":"Say ok"}`), + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai-response"), + Stream: true, + }) + if err != nil { + t.Fatalf("ExecuteStream error: %v", err) + } + + var completed []byte + for chunk := range result.Chunks { + if chunk.Err != nil { + t.Fatalf("stream chunk error: %v", chunk.Err) + } + payload := bytes.TrimSpace(chunk.Payload) + if !bytes.HasPrefix(payload, []byte("data:")) { + continue + } + data := bytes.TrimSpace(payload[5:]) + if gjson.GetBytes(data, "type").String() == "response.completed" { + completed = append([]byte(nil), data...) + } + } + + if len(completed) == 0 { + t.Fatal("missing response.completed chunk") + } + + gotContent := gjson.GetBytes(completed, "response.output.0.content.0.text").String() + if gotContent != "ok" { + t.Fatalf("response.output[0].content[0].text = %q, want %q; completed=%s", gotContent, "ok", string(completed)) + } +} diff --git a/internal/runtime/executor/codex_websockets_executor.go b/internal/runtime/executor/codex_websockets_executor.go index 94c9b262e8..2b56f13b1c 100644 --- a/internal/runtime/executor/codex_websockets_executor.go +++ b/internal/runtime/executor/codex_websockets_executor.go @@ -18,15 +18,15 @@ import ( "github.com/gin-gonic/gin" "github.com/google/uuid" "github.com/gorilla/websocket" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor 
"github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -76,6 +76,9 @@ type codexWebsocketSession struct { activeCancel context.CancelFunc readerConn *websocket.Conn + + upstreamDisconnectOnce sync.Once + upstreamDisconnectCh chan error } func NewCodexWebsocketsExecutor(cfg *config.Config) *CodexWebsocketsExecutor { @@ -151,6 +154,22 @@ func (s *codexWebsocketSession) configureConn(conn *websocket.Conn) { }) } +func (s *codexWebsocketSession) notifyUpstreamDisconnect(err error) { + if s == nil { + return + } + s.upstreamDisconnectOnce.Do(func() { + if s.upstreamDisconnectCh == nil { + return + } + select { + case s.upstreamDisconnectCh <- err: + default: + } + close(s.upstreamDisconnectCh) + }) +} + func (e *CodexWebsocketsExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { if ctx == nil { ctx = context.Background() @@ -184,14 +203,15 @@ func (e *CodexWebsocketsExecutor) Execute(ctx context.Context, auth *cliproxyaut } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, 
baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) body, _ = sjson.SetBytes(body, "stream", true) - body, _ = sjson.DeleteBytes(body, "previous_response_id") body, _ = sjson.DeleteBytes(body, "prompt_cache_retention") body, _ = sjson.DeleteBytes(body, "safety_identifier") - if !gjson.GetBytes(body, "instructions").Exists() { - body, _ = sjson.SetBytes(body, "instructions", "") + body = normalizeCodexInstructions(body) + if e.cfg == nil || e.cfg.DisableImageGeneration == config.DisableImageGenerationOff { + body = ensureImageGenerationTool(body, baseModel, auth) } httpURL := strings.TrimSuffix(baseURL, "/") + "/responses" @@ -387,7 +407,12 @@ func (e *CodexWebsocketsExecutor) ExecuteStream(ctx context.Context, auth *clipr } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, body, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, body, requestedModel, requestPath) + body = normalizeCodexInstructions(body) + if e.cfg == nil || e.cfg.DisableImageGeneration == config.DisableImageGenerationOff { + body = ensureImageGenerationTool(body, baseModel, auth) + } httpURL := strings.TrimSuffix(baseURL, "/") + "/responses" wsURL, err := buildCodexResponsesWebsocketURL(httpURL) @@ -555,7 +580,7 @@ func (e *CodexWebsocketsExecutor) ExecuteStream(ctx context.Context, auth *clipr terminateReason = "read_error" terminateErr = errRead helps.RecordAPIWebsocketError(ctx, e.cfg, "read", errRead) - reporter.PublishFailure(ctx) + reporter.PublishFailure(ctx, errRead) _ = send(cliproxyexecutor.StreamChunk{Err: errRead}) return } @@ -565,7 +590,7 @@ func (e 
*CodexWebsocketsExecutor) ExecuteStream(ctx context.Context, auth *clipr terminateReason = "unexpected_binary" terminateErr = err helps.RecordAPIWebsocketError(ctx, e.cfg, "unexpected_binary", err) - reporter.PublishFailure(ctx) + reporter.PublishFailure(ctx, err) if sess != nil { e.invalidateUpstreamConn(sess, conn, "unexpected_binary", err) } @@ -585,7 +610,7 @@ func (e *CodexWebsocketsExecutor) ExecuteStream(ctx context.Context, auth *clipr terminateReason = "upstream_error" terminateErr = wsErr helps.RecordAPIWebsocketError(ctx, e.cfg, "upstream_error", wsErr) - reporter.PublishFailure(ctx) + reporter.PublishFailure(ctx, wsErr) if sess != nil { e.invalidateUpstreamConn(sess, conn, "upstream_error", wsErr) } @@ -769,6 +794,11 @@ func buildCodexResponsesWebsocketURL(httpURL string) (string, error) { parsed.Scheme = "ws" case "https": parsed.Scheme = "wss" + default: + return "", fmt.Errorf("codex websockets executor: unsupported responses websocket URL scheme %q", parsed.Scheme) + } + if strings.TrimSpace(parsed.Host) == "" { + return "", fmt.Errorf("codex websockets executor: responses websocket URL host is empty") } return parsed.String(), nil } @@ -802,6 +832,7 @@ func applyCodexPromptCacheHeaders(from sdktranslator.Format, req cliproxyexecuto if cache.ID != "" { rawJSON, _ = sjson.SetBytes(rawJSON, "prompt_cache_key", cache.ID) + setHeaderCasePreserved(headers, "session_id", cache.ID) headers.Set("Conversation_id", cache.ID) } @@ -821,13 +852,19 @@ func applyCodexWebsocketHeaders(ctx context.Context, headers http.Header, auth * ginHeaders = ginCtx.Request.Header.Clone() } - _, cfgBetaFeatures := codexHeaderDefaults(cfg, auth) + isAPIKey := codexAuthUsesAPIKey(auth) + cfgUserAgent, cfgBetaFeatures := codexHeaderDefaults(cfg, auth) ensureHeaderWithPriority(headers, ginHeaders, "x-codex-beta-features", cfgBetaFeatures, "") misc.EnsureHeader(headers, ginHeaders, "x-codex-turn-state", "") misc.EnsureHeader(headers, ginHeaders, "x-codex-turn-metadata", "") 
misc.EnsureHeader(headers, ginHeaders, "x-client-request-id", "") misc.EnsureHeader(headers, ginHeaders, "x-responsesapi-include-timing-metrics", "") misc.EnsureHeader(headers, ginHeaders, "Version", "") + if isAPIKey { + ensureHeaderWithPriority(headers, ginHeaders, "User-Agent", "", "") + } else { + ensureHeaderWithConfigPrecedence(headers, ginHeaders, "User-Agent", cfgUserAgent, codexUserAgent) + } betaHeader := strings.TrimSpace(headers.Get("OpenAI-Beta")) if betaHeader == "" && ginHeaders != nil { @@ -838,16 +875,9 @@ func applyCodexWebsocketHeaders(ctx context.Context, headers http.Header, auth * } headers.Set("OpenAI-Beta", betaHeader) if strings.Contains(headers.Get("User-Agent"), "Mac OS") { - misc.EnsureHeader(headers, ginHeaders, "Session_id", uuid.NewString()) - } - headers.Del("User-Agent") - - isAPIKey := false - if auth != nil && auth.Attributes != nil { - if v := strings.TrimSpace(auth.Attributes["api_key"]); v != "" { - isAPIKey = true - } + ensureHeaderCasePreserved(headers, ginHeaders, "session_id", "", uuid.NewString()) } + ensureHeaderCasePreserved(headers, ginHeaders, "session_id", "", "") if originator := strings.TrimSpace(ginHeaders.Get("Originator")); originator != "" { headers.Set("Originator", originator) } else if !isAPIKey { @@ -857,7 +887,7 @@ func applyCodexWebsocketHeaders(ctx context.Context, headers http.Header, auth * if auth != nil && auth.Metadata != nil { if accountID, ok := auth.Metadata["account_id"].(string); ok { if trimmed := strings.TrimSpace(accountID); trimmed != "" { - headers.Set("Chatgpt-Account-Id", trimmed) + setHeaderCasePreserved(headers, "ChatGPT-Account-ID", trimmed) } } } @@ -872,6 +902,77 @@ func applyCodexWebsocketHeaders(ctx context.Context, headers http.Header, auth * return headers } +func codexAuthUsesAPIKey(auth *cliproxyauth.Auth) bool { + if auth == nil || auth.Attributes == nil { + return false + } + return strings.TrimSpace(auth.Attributes["api_key"]) != "" +} + +func 
ensureHeaderCasePreserved(target http.Header, source http.Header, key, configValue, fallbackValue string) { + if target == nil { + return + } + if strings.TrimSpace(headerValueCaseInsensitive(target, key)) != "" { + return + } + if source != nil { + if val := strings.TrimSpace(headerValueCaseInsensitive(source, key)); val != "" { + setHeaderCasePreserved(target, key, val) + return + } + } + if val := strings.TrimSpace(configValue); val != "" { + setHeaderCasePreserved(target, key, val) + return + } + if val := strings.TrimSpace(fallbackValue); val != "" { + setHeaderCasePreserved(target, key, val) + } +} + +func setHeaderCasePreserved(headers http.Header, key string, value string) { + if headers == nil { + return + } + key = strings.TrimSpace(key) + value = strings.TrimSpace(value) + if key == "" || value == "" { + return + } + deleteHeaderCaseInsensitive(headers, key) + headers[key] = []string{value} +} + +func headerValueCaseInsensitive(headers http.Header, key string) string { + key = strings.TrimSpace(key) + if headers == nil || key == "" { + return "" + } + if val := strings.TrimSpace(headers.Get(key)); val != "" { + return val + } + for existingKey, values := range headers { + if !strings.EqualFold(existingKey, key) { + continue + } + for _, value := range values { + if trimmed := strings.TrimSpace(value); trimmed != "" { + return trimmed + } + } + } + return "" +} + +func deleteHeaderCaseInsensitive(headers http.Header, key string) { + for existingKey := range headers { + if strings.EqualFold(existingKey, key) { + delete(headers, existingKey) + } + } +} + func codexHeaderDefaults(cfg *config.Config, auth *cliproxyauth.Auth) (string, string) { if cfg == nil || auth == nil { return "", "" @@ -955,25 +1056,55 @@ func parseCodexWebsocketError(payload []byte) (error, bool) { return nil, false } - out := []byte(`{}`) - if errNode := gjson.GetBytes(payload, "error"); errNode.Exists() { - raw := errNode.Raw - if errNode.Type == gjson.String { - raw = errNode.Raw - } 
- out, _ = sjson.SetRawBytes(out, "error", []byte(raw)) - } else { - out, _ = sjson.SetBytes(out, "error.type", "server_error") - out, _ = sjson.SetBytes(out, "error.message", http.StatusText(status)) - } - + out := buildCodexWebsocketErrorPayload(payload, status) headers := parseCodexWebsocketErrorHeaders(payload) + statusError := statusErr{code: status, msg: string(out)} + if retryAfter := parseCodexRetryAfter(status, out, time.Now()); retryAfter != nil { + statusError.retryAfter = retryAfter + } else if isCodexWebsocketConnectionLimitError(payload) { + retryAfter := time.Duration(0) + statusError.retryAfter = &retryAfter + } return statusErrWithHeaders{ - statusErr: statusErr{code: status, msg: string(out)}, + statusErr: statusError, headers: headers, }, true } +func buildCodexWebsocketErrorPayload(payload []byte, status int) []byte { + out := []byte(`{}`) + out, _ = sjson.SetBytes(out, "status", status) + + if bodyNode := gjson.GetBytes(payload, "body"); bodyNode.Exists() { + out, _ = sjson.SetRawBytes(out, "body", []byte(bodyNode.Raw)) + if bodyErrorNode := bodyNode.Get("error"); bodyErrorNode.Exists() { + out, _ = sjson.SetRawBytes(out, "error", []byte(bodyErrorNode.Raw)) + return out + } + } + + if errNode := gjson.GetBytes(payload, "error"); errNode.Exists() { + out, _ = sjson.SetRawBytes(out, "error", []byte(errNode.Raw)) + return out + } + + out, _ = sjson.SetBytes(out, "error.type", "server_error") + out, _ = sjson.SetBytes(out, "error.message", http.StatusText(status)) + return out +} + +func isCodexWebsocketConnectionLimitError(payload []byte) bool { + if len(payload) == 0 { + return false + } + for _, path := range []string{"error.code", "error.type", "body.error.code", "body.error.type", "code", "error"} { + if strings.TrimSpace(gjson.GetBytes(payload, path).String()) == "websocket_connection_limit_reached" { + return true + } + } + return false +} + func parseCodexWebsocketErrorHeaders(payload []byte) http.Header { headersNode := 
gjson.GetBytes(payload, "headers") if !headersNode.Exists() || !headersNode.IsObject() { @@ -1109,11 +1240,22 @@ func (e *CodexWebsocketsExecutor) getOrCreateSession(sessionID string) *codexWeb if sess, ok := store.sessions[sessionID]; ok && sess != nil { return sess } - sess := &codexWebsocketSession{sessionID: sessionID} + sess := &codexWebsocketSession{ + sessionID: sessionID, + upstreamDisconnectCh: make(chan error, 1), + } store.sessions[sessionID] = sess return sess } +func (e *CodexWebsocketsExecutor) UpstreamDisconnectChan(sessionID string) <-chan error { + sess := e.getOrCreateSession(sessionID) + if sess == nil { + return nil + } + return sess.upstreamDisconnectCh +} + func (e *CodexWebsocketsExecutor) ensureUpstreamConn(ctx context.Context, auth *cliproxyauth.Auth, sess *codexWebsocketSession, authID string, wsURL string, headers http.Header) (*websocket.Conn, *http.Response, error) { if sess == nil { return e.dialCodexWebsocket(ctx, auth, wsURL, headers) @@ -1242,6 +1384,7 @@ func (e *CodexWebsocketsExecutor) invalidateUpstreamConn(sess *codexWebsocketSes sess.connMu.Unlock() logCodexWebsocketDisconnected(sessionID, authID, wsURL, reason, err) + sess.notifyUpstreamDisconnect(err) if errClose := conn.Close(); errClose != nil { log.Errorf("codex websockets executor: close websocket error: %v", errClose) } @@ -1480,6 +1623,13 @@ func (e *CodexAutoExecutor) CloseExecutionSession(sessionID string) { e.wsExec.CloseExecutionSession(sessionID) } +func (e *CodexAutoExecutor) UpstreamDisconnectChan(sessionID string) <-chan error { + if e == nil || e.wsExec == nil { + return nil + } + return e.wsExec.UpstreamDisconnectChan(sessionID) +} + func codexWebsocketsEnabled(auth *cliproxyauth.Auth) bool { if auth == nil { return false diff --git a/internal/runtime/executor/codex_websockets_executor_store_test.go b/internal/runtime/executor/codex_websockets_executor_store_test.go index 1a23fa31b5..115ed066d2 100644 --- 
a/internal/runtime/executor/codex_websockets_executor_store_test.go +++ b/internal/runtime/executor/codex_websockets_executor_store_test.go @@ -3,7 +3,7 @@ package executor import ( "testing" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) func TestCodexWebsocketsExecutor_SessionStoreSurvivesExecutorReplacement(t *testing.T) { diff --git a/internal/runtime/executor/codex_websockets_executor_test.go b/internal/runtime/executor/codex_websockets_executor_test.go index dec356de4c..4342ed8882 100644 --- a/internal/runtime/executor/codex_websockets_executor_test.go +++ b/internal/runtime/executor/codex_websockets_executor_test.go @@ -1,15 +1,22 @@ package executor import ( + "bytes" "context" + "errors" "net/http" "net/http/httptest" + "strings" "testing" + "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/gorilla/websocket" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" ) @@ -32,14 +39,138 @@ func TestBuildCodexWebsocketRequestBodyPreservesPreviousResponseID(t *testing.T) } } +func TestCodexWebsocketsExecutePreservesPreviousResponseIDUpstream(t *testing.T) { + upgrader := websocket.Upgrader{CheckOrigin: func(*http.Request) bool { return true }} + capturedPayload := make(chan []byte, 1) + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.URL.Path != "/responses" { + t.Fatalf("request path = %s, 
want /responses", r.URL.Path) + } + conn, err := upgrader.Upgrade(w, r, nil) + if err != nil { + t.Fatalf("upgrade websocket: %v", err) + } + defer func() { _ = conn.Close() }() + + msgType, payload, err := conn.ReadMessage() + if err != nil { + t.Fatalf("read upstream websocket message: %v", err) + } + if msgType != websocket.TextMessage { + t.Fatalf("message type = %d, want text", msgType) + } + capturedPayload <- bytes.Clone(payload) + + completed := []byte(`{"type":"response.completed","response":{"id":"resp-2","output":[],"usage":{"input_tokens":0,"output_tokens":0,"total_tokens":0}}}`) + if errWrite := conn.WriteMessage(websocket.TextMessage, completed); errWrite != nil { + t.Fatalf("write completed websocket message: %v", errWrite) + } + })) + defer server.Close() + + exec := NewCodexWebsocketsExecutor(&config.Config{SDKConfig: config.SDKConfig{DisableImageGeneration: config.DisableImageGenerationAll}}) + auth := &cliproxyauth.Auth{Attributes: map[string]string{"api_key": "sk-test", "base_url": server.URL}} + req := cliproxyexecutor.Request{ + Model: "gpt-5-codex", + Payload: []byte(`{"model":"gpt-5-codex","previous_response_id":"resp-1","input":[{"type":"message","id":"msg-1"}]}`), + } + opts := cliproxyexecutor.Options{SourceFormat: sdktranslator.FromString("codex")} + + if _, err := exec.Execute(context.Background(), auth, req, opts); err != nil { + t.Fatalf("Execute() error = %v", err) + } + + select { + case payload := <-capturedPayload: + if got := gjson.GetBytes(payload, "type").String(); got != "response.create" { + t.Fatalf("upstream type = %s, want response.create; payload=%s", got, payload) + } + if got := gjson.GetBytes(payload, "previous_response_id").String(); got != "resp-1" { + t.Fatalf("upstream previous_response_id = %s, want resp-1; payload=%s", got, payload) + } + case <-time.After(5 * time.Second): + t.Fatal("timed out waiting for upstream websocket payload") + } +} + +func TestCodexWebsocketsUpstreamDisconnectChanSignalsOnInvalidate(t 
*testing.T) { + upgrader := websocket.Upgrader{CheckOrigin: func(*http.Request) bool { return true }} + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + conn, err := upgrader.Upgrade(w, r, nil) + if err != nil { + t.Errorf("upgrade websocket: %v", err) + return + } + defer func() { _ = conn.Close() }() + for { + if _, _, errRead := conn.ReadMessage(); errRead != nil { + return + } + } + })) + defer server.Close() + + wsURL := "ws" + strings.TrimPrefix(server.URL, "http") + conn, _, err := websocket.DefaultDialer.Dial(wsURL, nil) + if err != nil { + t.Fatalf("dial websocket: %v", err) + } + defer func() { _ = conn.Close() }() + + exec := NewCodexWebsocketsExecutor(&config.Config{}) + sessionID := "sess-1" + disconnectCh := exec.UpstreamDisconnectChan(sessionID) + if disconnectCh == nil { + t.Fatal("expected disconnect channel") + } + + sess := exec.getOrCreateSession(sessionID) + if sess == nil { + t.Fatal("expected session") + } + sess.connMu.Lock() + sess.conn = conn + sess.authID = "auth-1" + sess.wsURL = "ws://example.test/responses" + sess.readerConn = conn + sess.connMu.Unlock() + + upstreamErr := errors.New("upstream gone") + exec.invalidateUpstreamConn(sess, conn, "test_invalidate", upstreamErr) + + select { + case errRead, ok := <-disconnectCh: + if !ok { + t.Fatal("expected disconnect channel to deliver error before closing") + } + if errRead == nil || errRead.Error() != upstreamErr.Error() { + t.Fatalf("disconnect error = %v, want %v", errRead, upstreamErr) + } + case <-time.After(5 * time.Second): + t.Fatal("timed out waiting for disconnect signal") + } +} + func TestApplyCodexWebsocketHeadersDefaultsToCurrentResponsesBeta(t *testing.T) { headers := applyCodexWebsocketHeaders(context.Background(), http.Header{}, nil, "", nil) if got := headers.Get("OpenAI-Beta"); got != codexResponsesWebsocketBetaHeaderValue { t.Fatalf("OpenAI-Beta = %s, want %s", got, codexResponsesWebsocketBetaHeaderValue) } - if got := 
headers.Get("User-Agent"); got != "" { - t.Fatalf("User-Agent = %s, want empty", got) + if got := headers.Get("User-Agent"); got != codexUserAgent { + t.Fatalf("User-Agent = %s, want %s", got, codexUserAgent) + } + if !strings.HasPrefix(codexUserAgent, codexOriginator+"/") { + t.Fatalf("default Codex User-Agent = %s, want prefix %s/", codexUserAgent, codexOriginator) + } + if strings.HasPrefix(codexUserAgent, "codex-tui/") { + t.Fatalf("default Codex User-Agent = %s, must not use stale codex-tui prefix", codexUserAgent) + } + if strings.Contains(codexUserAgent, "(codex-tui;") { + t.Fatalf("default Codex User-Agent = %s, must not include stale codex-tui suffix", codexUserAgent) + } + if got := headers.Get("Originator"); got != codexOriginator { + t.Fatalf("Originator = %s, want %s", got, codexOriginator) } if got := headers.Get("Version"); got != "" { t.Fatalf("Version = %q, want empty", got) @@ -62,9 +193,11 @@ func TestApplyCodexWebsocketHeadersPassesThroughClientIdentityHeaders(t *testing } ctx := contextWithGinHeaders(map[string]string{ "Originator": "Codex Desktop", + "User-Agent": "codex_cli_rs/0.1.0", "Version": "0.115.0-alpha.27", "X-Codex-Turn-Metadata": `{"turn_id":"turn-1"}`, "X-Client-Request-Id": "019d2233-e240-7162-992d-38df0a2a0e0d", + "session_id": "sess-client", }) headers := applyCodexWebsocketHeaders(ctx, http.Header{}, auth, "", nil) @@ -72,6 +205,9 @@ func TestApplyCodexWebsocketHeadersPassesThroughClientIdentityHeaders(t *testing if got := headers.Get("Originator"); got != "Codex Desktop" { t.Fatalf("Originator = %s, want %s", got, "Codex Desktop") } + if got := headers.Get("User-Agent"); got != "codex_cli_rs/0.1.0" { + t.Fatalf("User-Agent = %s, want %s", got, "codex_cli_rs/0.1.0") + } if got := headers.Get("Version"); got != "0.115.0-alpha.27" { t.Fatalf("Version = %s, want %s", got, "0.115.0-alpha.27") } @@ -81,6 +217,12 @@ func TestApplyCodexWebsocketHeadersPassesThroughClientIdentityHeaders(t *testing if got := 
headers.Get("X-Client-Request-Id"); got != "019d2233-e240-7162-992d-38df0a2a0e0d" { t.Fatalf("X-Client-Request-Id = %s, want %s", got, "019d2233-e240-7162-992d-38df0a2a0e0d") } + if got := headerValueCaseInsensitive(headers, "session_id"); got != "sess-client" { + t.Fatalf("session_id = %s, want sess-client", got) + } + if _, ok := headers["session_id"]; !ok { + t.Fatalf("expected lowercase session_id header key, got %#v", headers) + } } func TestApplyCodexWebsocketHeadersUsesConfigDefaultsForOAuth(t *testing.T) { @@ -97,8 +239,8 @@ func TestApplyCodexWebsocketHeadersUsesConfigDefaultsForOAuth(t *testing.T) { headers := applyCodexWebsocketHeaders(context.Background(), http.Header{}, auth, "", cfg) - if got := headers.Get("User-Agent"); got != "" { - t.Fatalf("User-Agent = %s, want empty", got) + if got := headers.Get("User-Agent"); got != "my-codex-client/1.0" { + t.Fatalf("User-Agent = %s, want %s", got, "my-codex-client/1.0") } if got := headers.Get("x-codex-beta-features"); got != "feature-a,feature-b" { t.Fatalf("x-codex-beta-features = %s, want %s", got, "feature-a,feature-b") @@ -129,8 +271,8 @@ func TestApplyCodexWebsocketHeadersPrefersExistingHeadersOverClientAndConfig(t * got := applyCodexWebsocketHeaders(ctx, headers, auth, "", cfg) - if gotVal := got.Get("User-Agent"); gotVal != "" { - t.Fatalf("User-Agent = %s, want empty", gotVal) + if gotVal := got.Get("User-Agent"); gotVal != "existing-ua" { + t.Fatalf("User-Agent = %s, want %s", gotVal, "existing-ua") } if gotVal := got.Get("x-codex-beta-features"); gotVal != "existing-beta" { t.Fatalf("x-codex-beta-features = %s, want %s", gotVal, "existing-beta") @@ -155,8 +297,8 @@ func TestApplyCodexWebsocketHeadersConfigUserAgentOverridesClientHeader(t *testi headers := applyCodexWebsocketHeaders(ctx, http.Header{}, auth, "", cfg) - if got := headers.Get("User-Agent"); got != "" { - t.Fatalf("User-Agent = %s, want empty", got) + if got := headers.Get("User-Agent"); got != "config-ua" { + t.Fatalf("User-Agent = 
%s, want %s", got, "config-ua") } if got := headers.Get("x-codex-beta-features"); got != "client-beta" { t.Fatalf("x-codex-beta-features = %s, want %s", got, "client-beta") @@ -183,6 +325,131 @@ func TestApplyCodexWebsocketHeadersIgnoresConfigForAPIKeyAuth(t *testing.T) { if got := headers.Get("x-codex-beta-features"); got != "" { t.Fatalf("x-codex-beta-features = %q, want empty", got) } + if got := headers.Get("Originator"); got != "" { + t.Fatalf("Originator = %s, want empty", got) + } +} + +func TestApplyCodexWebsocketHeadersPreservesExplicitAPIKeyUserAgent(t *testing.T) { + auth := &cliproxyauth.Auth{Provider: "codex", Attributes: map[string]string{"api_key": "sk-test"}} + ctx := contextWithGinHeaders(map[string]string{"User-Agent": "api-key-client/1.0", "Originator": "explicit-origin"}) + + headers := applyCodexWebsocketHeaders(ctx, http.Header{}, auth, "sk-test", nil) + + if got := headers.Get("User-Agent"); got != "api-key-client/1.0" { + t.Fatalf("User-Agent = %s, want api-key-client/1.0", got) + } + if got := headers.Get("Originator"); got != "explicit-origin" { + t.Fatalf("Originator = %s, want explicit-origin", got) + } +} + +func TestApplyCodexPromptCacheHeadersSetsLowercaseSessionAndLegacyConversation(t *testing.T) { + req := cliproxyexecutor.Request{Model: "gpt-5-codex", Payload: []byte(`{"prompt_cache_key":"cache-1"}`)} + + _, headers := applyCodexPromptCacheHeaders("openai-response", req, []byte(`{"model":"gpt-5-codex"}`)) + + if got := headerValueCaseInsensitive(headers, "session_id"); got != "cache-1" { + t.Fatalf("session_id = %s, want cache-1", got) + } + if _, ok := headers["session_id"]; !ok { + t.Fatalf("expected lowercase session_id key, got %#v", headers) + } + if got := headers.Get("Conversation_id"); got != "cache-1" { + t.Fatalf("Conversation_id = %s, want cache-1", got) + } +} + +func TestApplyCodexWebsocketHeadersUsesCanonicalAccountHeader(t *testing.T) { + auth := &cliproxyauth.Auth{Provider: "codex", Metadata: 
map[string]any{"account_id": "acct-1"}} + + headers := applyCodexWebsocketHeaders(context.Background(), http.Header{}, auth, "", nil) + + if got := headerValueCaseInsensitive(headers, "ChatGPT-Account-ID"); got != "acct-1" { + t.Fatalf("ChatGPT-Account-ID = %s, want acct-1", got) + } + values, ok := headers["ChatGPT-Account-ID"] + if !ok { + t.Fatalf("expected exact ChatGPT-Account-ID key, got %#v", headers) + } + if len(values) != 1 || values[0] != "acct-1" { + t.Fatalf("ChatGPT-Account-ID values = %#v, want [acct-1]", values) + } +} + +func TestBuildCodexResponsesWebsocketURLRequiresHTTPURL(t *testing.T) { + if got, err := buildCodexResponsesWebsocketURL("https://example.com/backend/responses"); err != nil || got != "wss://example.com/backend/responses" { + t.Fatalf("https URL = %q, %v; want wss URL", got, err) + } + if _, err := buildCodexResponsesWebsocketURL("ftp://example.com/responses"); err == nil { + t.Fatalf("expected unsupported scheme error") + } + if _, err := buildCodexResponsesWebsocketURL("https:///responses"); err == nil { + t.Fatalf("expected empty host error") + } +} + +func TestParseCodexWebsocketErrorMarksConnectionLimitRetryable(t *testing.T) { + err, ok := parseCodexWebsocketError([]byte(`{"type":"error","status":429,"error":{"code":"websocket_connection_limit_reached","message":"too many websockets"},"headers":{"retry-after":"1"}}`)) + if !ok { + t.Fatalf("expected websocket error") + } + status, ok := err.(interface{ StatusCode() int }) + if !ok || status.StatusCode() != http.StatusTooManyRequests { + t.Fatalf("status = %#v, want 429", err) + } + retryable, ok := err.(interface{ RetryAfter() *time.Duration }) + if !ok || retryable.RetryAfter() == nil { + t.Fatalf("expected retryable websocket connection limit error") + } + if got := *retryable.RetryAfter(); got != 0 { + t.Fatalf("retryAfter = %v, want connection-limit fallback 0", got) + } + withHeaders, ok := err.(interface{ Headers() http.Header }) + if !ok || 
withHeaders.Headers().Get("retry-after") != "1" { + t.Fatalf("headers = %#v, want retry-after", err) + } +} + +func TestParseCodexWebsocketErrorUsesUsageLimitRetryMetadata(t *testing.T) { + err, ok := parseCodexWebsocketError([]byte(`{"type":"error","status":429,"body":{"error":{"type":"usage_limit_reached","message":"usage limit reached","resets_in_seconds":7}}}`)) + if !ok { + t.Fatalf("expected websocket error") + } + + retryable, ok := err.(interface{ RetryAfter() *time.Duration }) + if !ok || retryable.RetryAfter() == nil { + t.Fatalf("expected retryable usage limit websocket error") + } + if got := *retryable.RetryAfter(); got != 7*time.Second { + t.Fatalf("retryAfter = %v, want 7s", got) + } +} + +func TestParseCodexWebsocketErrorPreservesWrappedBodyAndHeaders(t *testing.T) { + err, ok := parseCodexWebsocketError([]byte(`{"type":"error","status":429,"body":{"error":{"code":"websocket_connection_limit_reached","type":"server_error","message":"too many websocket connections"}},"headers":{"x-request-id":"req-1"}}`)) + if !ok { + t.Fatalf("expected websocket error") + } + + parsed := gjson.Parse(err.Error()) + if got := parsed.Get("status").Int(); got != http.StatusTooManyRequests { + t.Fatalf("wrapped status = %d, want 429; payload=%s", got, err.Error()) + } + if got := parsed.Get("body.error.code").String(); got != "websocket_connection_limit_reached" { + t.Fatalf("wrapped body error code = %s, want websocket_connection_limit_reached; payload=%s", got, err.Error()) + } + if got := parsed.Get("error.code").String(); got != "websocket_connection_limit_reached" { + t.Fatalf("surface error code = %s, want websocket_connection_limit_reached; payload=%s", got, err.Error()) + } + retryable, ok := err.(interface{ RetryAfter() *time.Duration }) + if !ok || retryable.RetryAfter() == nil { + t.Fatalf("expected body.error.code websocket connection limit to be retryable") + } + withHeaders, ok := err.(interface{ Headers() http.Header }) + if !ok || 
withHeaders.Headers().Get("x-request-id") != "req-1" { + t.Fatalf("headers = %#v, want x-request-id", err) + } } func TestApplyCodexHeadersUsesConfigUserAgentForOAuth(t *testing.T) { diff --git a/internal/runtime/executor/compat_helpers.go b/internal/runtime/executor/compat_helpers.go new file mode 100644 index 0000000000..b28633235d --- /dev/null +++ b/internal/runtime/executor/compat_helpers.go @@ -0,0 +1,129 @@ +package executor + +import ( + "context" + "net/http" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + "github.com/tidwall/gjson" + "github.com/tiktoken-go/tokenizer" +) + +func newProxyAwareHTTPClient(ctx context.Context, cfg *config.Config, auth *cliproxyauth.Auth, timeout time.Duration) *http.Client { + return helps.NewProxyAwareHTTPClient(ctx, cfg, auth, timeout) +} + +func parseOpenAIUsage(data []byte) usage.Detail { + return helps.ParseOpenAIUsage(data) +} + +func parseOpenAIStreamUsage(line []byte) (usage.Detail, bool) { + return helps.ParseOpenAIStreamUsage(line) +} + +func parseOpenAIResponsesUsage(data []byte) usage.Detail { + return helps.ParseOpenAIUsage(data) +} + +func parseOpenAIResponsesStreamUsage(line []byte) (usage.Detail, bool) { + return helps.ParseOpenAIStreamUsage(line) +} + +func getTokenizer(model string) (tokenizer.Codec, error) { + return helps.TokenizerForModel(model) +} + +func countOpenAIChatTokens(enc tokenizer.Codec, payload []byte) (int64, error) { + return helps.CountOpenAIChatTokens(enc, payload) +} + +func countClaudeChatTokens(enc tokenizer.Codec, payload []byte) (int64, error) { + return helps.CountClaudeChatTokens(enc, payload) +} + +func buildOpenAIUsageJSON(count int64) []byte { + return 
helps.BuildOpenAIUsageJSON(count) +} + +type upstreamRequestLog = helps.UpstreamRequestLog + +func recordAPIRequest(ctx context.Context, cfg *config.Config, info upstreamRequestLog) { + helps.RecordAPIRequest(ctx, cfg, info) +} + +func recordAPIResponseMetadata(ctx context.Context, cfg *config.Config, status int, headers http.Header) { + helps.RecordAPIResponseMetadata(ctx, cfg, status, headers) +} + +func recordAPIResponseError(ctx context.Context, cfg *config.Config, err error) { + helps.RecordAPIResponseError(ctx, cfg, err) +} + +func appendAPIResponseChunk(ctx context.Context, cfg *config.Config, chunk []byte) { + helps.AppendAPIResponseChunk(ctx, cfg, chunk) +} + +func payloadRequestedModel(opts cliproxyexecutor.Options, fallback string) string { + return helps.PayloadRequestedModel(opts, fallback) +} + +func applyPayloadConfigWithRoot(cfg *config.Config, model, protocol, root string, payload, original []byte, requestedModel string) []byte { + return helps.ApplyPayloadConfigWithRoot(cfg, model, protocol, root, payload, original, requestedModel, "") +} + +func summarizeErrorBody(contentType string, body []byte) string { + return helps.SummarizeErrorBody(contentType, body) +} + +func apiKeyFromContext(ctx context.Context) string { + return helps.APIKeyFromContext(ctx) +} + +func tokenizerForModel(model string) (tokenizer.Codec, error) { + return helps.TokenizerForModel(model) +} + +func collectOpenAIContent(content gjson.Result, segments *[]string) { + helps.CollectOpenAIContent(content, segments) +} + +type usageReporter struct { + reporter *helps.UsageReporter +} + +func newUsageReporter(ctx context.Context, provider, model string, auth *cliproxyauth.Auth) *usageReporter { + return &usageReporter{reporter: helps.NewUsageReporter(ctx, provider, model, auth)} +} + +func (r *usageReporter) publish(ctx context.Context, detail usage.Detail) { + if r == nil || r.reporter == nil { + return + } + r.reporter.Publish(ctx, detail) +} + +func (r *usageReporter) 
publishFailure(ctx context.Context) { + if r == nil || r.reporter == nil { + return + } + r.reporter.PublishFailure(ctx) +} + +func (r *usageReporter) trackFailure(ctx context.Context, errPtr *error) { + if r == nil || r.reporter == nil { + return + } + r.reporter.TrackFailure(ctx, errPtr) +} + +func (r *usageReporter) ensurePublished(ctx context.Context) { + if r == nil || r.reporter == nil { + return + } + r.reporter.EnsurePublished(ctx) +} diff --git a/internal/runtime/executor/cursor_executor.go b/internal/runtime/executor/cursor_executor.go new file mode 100644 index 0000000000..eb1748fd1f --- /dev/null +++ b/internal/runtime/executor/cursor_executor.go @@ -0,0 +1,1719 @@ +package executor + +import ( + "bytes" + "context" + "crypto/sha256" + "crypto/tls" + "encoding/base64" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "strings" + "sync" + "time" + + "github.com/google/uuid" + cursorauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/cursor" + cursorproto "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/cursor/proto" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" + "golang.org/x/net/http2" +) + +const ( + cursorAPIURL = "https://api2.cursor.sh" + cursorRunPath = "/agent.v1.AgentService/Run" + cursorModelsPath = "/agent.v1.AgentService/GetUsableModels" + cursorClientVersion = "cli-2026.02.13-41ac335" + cursorAuthType = "cursor" + cursorHeartbeatInterval = 5 * time.Second + cursorSessionTTL = 5 * time.Minute + cursorCheckpointTTL = 30 * time.Minute +) + +// CursorExecutor handles requests to the Cursor API via Connect+Protobuf protocol. 
+type CursorExecutor struct { + cfg *config.Config + mu sync.Mutex + sessions map[string]*cursorSession + checkpoints map[string]*savedCheckpoint // keyed by conversationId +} + +// savedCheckpoint stores the server's conversation_checkpoint_update for reuse. +type savedCheckpoint struct { + data []byte // raw ConversationStateStructure protobuf bytes + blobStore map[string][]byte // blobs referenced by the checkpoint + authID string // auth that produced this checkpoint (checkpoint is auth-specific) + updatedAt time.Time +} + +type cursorSession struct { + stream *cursorproto.H2Stream + blobStore map[string][]byte + mcpTools []cursorproto.McpToolDef + pending []pendingMcpExec + cancel context.CancelFunc // cancels the session-scoped heartbeat (NOT tied to HTTP request) + createdAt time.Time + authID string // auth file ID that created this session (for multi-account isolation) + toolResultCh chan []toolResultInfo // receives tool results from the next HTTP request + resumeOutCh chan cliproxyexecutor.StreamChunk // output channel for resumed response + switchOutput func(ch chan cliproxyexecutor.StreamChunk) // callback to switch output channel +} + +type pendingMcpExec struct { + ExecMsgId uint32 + ExecId string + ToolCallId string + ToolName string + Args string // JSON-encoded args +} + +// NewCursorExecutor constructs a new executor instance. +func NewCursorExecutor(cfg *config.Config) *CursorExecutor { + e := &CursorExecutor{ + cfg: cfg, + sessions: make(map[string]*cursorSession), + checkpoints: make(map[string]*savedCheckpoint), + } + go e.cleanupLoop() + return e +} + +// Identifier implements ProviderExecutor. +func (e *CursorExecutor) Identifier() string { return cursorAuthType } + +// CloseExecutionSession implements ExecutionSessionCloser. 
+func (e *CursorExecutor) CloseExecutionSession(sessionID string) { + e.mu.Lock() + defer e.mu.Unlock() + if sessionID == cliproxyauth.CloseAllExecutionSessionsID { + for k, s := range e.sessions { + s.cancel() + delete(e.sessions, k) + } + return + } + if s, ok := e.sessions[sessionID]; ok { + s.cancel() + delete(e.sessions, sessionID) + } +} + +func (e *CursorExecutor) cleanupLoop() { + ticker := time.NewTicker(1 * time.Minute) + defer ticker.Stop() + for range ticker.C { + e.mu.Lock() + for k, s := range e.sessions { + if time.Since(s.createdAt) > cursorSessionTTL { + s.cancel() + delete(e.sessions, k) + } + } + for k, cp := range e.checkpoints { + if time.Since(cp.updatedAt) > cursorCheckpointTTL { + delete(e.checkpoints, k) + } + } + e.mu.Unlock() + } +} + +// findSessionByConversationLocked searches for a session matching the given +// conversationId regardless of authID. Used to find and clean up stale sessions +// from a previous auth after quota failover. Caller must hold e.mu. +func (e *CursorExecutor) findSessionByConversationLocked(convId string) string { + suffix := ":" + convId + for k := range e.sessions { + if strings.HasSuffix(k, suffix) { + return k + } + } + return "" +} + +// cursorStatusErr implements the StatusError and RetryAfter interfaces so the +// conductor can classify Cursor errors (e.g. 429 → quota cooldown). +type cursorStatusErr struct { + code int + msg string +} + +func (e cursorStatusErr) Error() string { return e.msg } +func (e cursorStatusErr) StatusCode() int { return e.code } +func (e cursorStatusErr) RetryAfter() *time.Duration { return nil } // no retry-after info from Cursor; conductor uses exponential backoff + +// classifyCursorError maps Cursor Connect/H2 errors to HTTP status codes. +// Layer 1: precise match on ConnectError.Code (gRPC standard codes). +// Layer 2: fuzzy string match for H2 frame errors and unknown formats. +// Unclassified errors pass through unchanged. 
+func classifyCursorError(err error) error { + if err == nil { + return nil + } + + // Layer 1: structured ConnectError from ParseConnectEndStream + var ce *cursorproto.ConnectError + if errors.As(err, &ce) { + log.Infof("cursor: Connect error code=%q message=%q", ce.Code, ce.Message) + switch ce.Code { + case "resource_exhausted": + return cursorStatusErr{code: 429, msg: err.Error()} + case "unauthenticated": + return cursorStatusErr{code: 401, msg: err.Error()} + case "permission_denied": + return cursorStatusErr{code: 403, msg: err.Error()} + case "unavailable": + return cursorStatusErr{code: 503, msg: err.Error()} + case "internal": + return cursorStatusErr{code: 500, msg: err.Error()} + default: + // Unknown Connect code — log for observation, treat as 502 + return cursorStatusErr{code: 502, msg: err.Error()} + } + } + + // Layer 2: fuzzy match for H2 errors and unstructured messages + msg := strings.ToLower(err.Error()) + switch { + case strings.Contains(msg, "rate limit") || strings.Contains(msg, "quota") || + strings.Contains(msg, "too many"): + return cursorStatusErr{code: 429, msg: err.Error()} + case strings.Contains(msg, "rst_stream") || strings.Contains(msg, "goaway"): + return cursorStatusErr{code: 502, msg: err.Error()} + } + + return err +} + +// PrepareRequest implements ProviderExecutor (for HttpRequest support). +func (e *CursorExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + token := cursorAccessToken(auth) + if token == "" { + return fmt.Errorf("cursor: access token not found") + } + req.Header.Set("Authorization", "Bearer "+token) + return nil +} + +// HttpRequest injects credentials and executes the request. 
+func (e *CursorExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("cursor: request is nil") + } + if err := e.PrepareRequest(req, auth); err != nil { + return nil, err + } + return http.DefaultClient.Do(req) +} + +// CountTokens estimates token count locally using tiktoken. +func (e *CursorExecutor) CountTokens(_ context.Context, _ *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + defer func() { + if err != nil { + log.Warnf("cursor CountTokens error: %v", err) + } else { + log.Debugf("cursor CountTokens: model=%s result=%s", req.Model, string(resp.Payload)) + } + }() + model := gjson.GetBytes(req.Payload, "model").String() + if model == "" { + model = req.Model + } + + enc, err := getTokenizer(model) + if err != nil { + // Fallback: return zero tokens rather than error (avoids 502) + return cliproxyexecutor.Response{Payload: buildOpenAIUsageJSON(0)}, nil + } + + // Detect format: Claude (/v1/messages) vs OpenAI (/v1/chat/completions) + var count int64 + if gjson.GetBytes(req.Payload, "system").Exists() || opts.SourceFormat.String() == "claude" { + count, _ = countClaudeChatTokens(enc, req.Payload) + } else { + count, _ = countOpenAIChatTokens(enc, req.Payload) + } + + return cliproxyexecutor.Response{Payload: buildOpenAIUsageJSON(count)}, nil +} + +// Refresh attempts to refresh the Cursor access token. 
+func (e *CursorExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + refreshToken := cursorRefreshToken(auth) + if refreshToken == "" { + return nil, fmt.Errorf("cursor: no refresh token available") + } + + tokens, err := cursorauth.RefreshToken(ctx, refreshToken) + if err != nil { + return nil, err + } + + expiresAt := cursorauth.GetTokenExpiry(tokens.AccessToken) + + newAuth := auth.Clone() + newAuth.Metadata["access_token"] = tokens.AccessToken + newAuth.Metadata["refresh_token"] = tokens.RefreshToken + newAuth.Metadata["expires_at"] = expiresAt.Format(time.RFC3339) + return newAuth, nil +} + +// Execute handles non-streaming requests. +func (e *CursorExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + log.Debugf("cursor Execute: model=%s sourceFormat=%s payloadLen=%d", req.Model, opts.SourceFormat, len(req.Payload)) + defer func() { + if r := recover(); r != nil { + log.Errorf("cursor Execute PANIC: %v", r) + err = fmt.Errorf("cursor: internal panic: %v", r) + } + if err != nil { + log.Warnf("cursor Execute error: %v", err) + } + }() + accessToken := cursorAccessToken(auth) + if accessToken == "" { + return resp, fmt.Errorf("cursor: access token not found") + } + + // Translate input to OpenAI format if needed (e.g. 
Claude /v1/messages format) + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + payload := req.Payload + if from.String() != "" && from.String() != "openai" { + payload = sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(payload), false) + } + + parsed := parseOpenAIRequest(payload) + ccSessId := extractClaudeCodeSessionId(req.Payload) + conversationId := deriveConversationId(apiKeyFromContext(ctx), ccSessId, parsed.SystemPrompt) + params := buildRunRequestParams(parsed, conversationId) + + requestBytes := cursorproto.EncodeRunRequest(params) + framedRequest := cursorproto.FrameConnectMessage(requestBytes, 0) + + stream, err := openCursorH2Stream(accessToken) + if err != nil { + return resp, err + } + defer stream.Close() + + // Send the request frame + if err := stream.Write(framedRequest); err != nil { + return resp, fmt.Errorf("cursor: failed to send request: %w", err) + } + + // Start heartbeat + sessionCtx, sessionCancel := context.WithCancel(ctx) + defer sessionCancel() + go cursorH2Heartbeat(sessionCtx, stream) + + // Collect full text from streaming response + var fullText strings.Builder + if streamErr := processH2SessionFrames(sessionCtx, stream, params.BlobStore, nil, + func(text string, isThinking bool) { + fullText.WriteString(text) + }, + nil, + nil, + nil, // tokenUsage - non-streaming + nil, // onCheckpoint - non-streaming doesn't persist + ); streamErr != nil && fullText.Len() == 0 { + return resp, classifyCursorError(fmt.Errorf("cursor: stream error: %w", streamErr)) + } + + id := "chatcmpl-" + uuid.New().String()[:28] + created := time.Now().Unix() + openaiResp := fmt.Sprintf(`{"id":"%s","object":"chat.completion","created":%d,"model":"%s","choices":[{"index":0,"message":{"role":"assistant","content":%s},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}`, + id, created, parsed.Model, jsonString(fullText.String())) + + // Translate response back to source format if 
needed + result := []byte(openaiResp) + if from.String() != "" && from.String() != "openai" { + var param any + result = sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), payload, result, &param) + } + resp.Payload = result + return resp, nil +} + +// ExecuteStream handles streaming requests. +// It supports MCP tool call sessions: when Cursor returns an MCP tool call, +// the H2 stream is kept alive. When Claude Code returns the tool result in +// the next request, the result is sent back on the same stream (session resume). +// This mirrors the activeSessions/resumeWithToolResults pattern in cursor-fetch.ts. +func (e *CursorExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + log.Debugf("cursor ExecuteStream: model=%s sourceFormat=%s payloadLen=%d", req.Model, opts.SourceFormat, len(req.Payload)) + defer func() { + if r := recover(); r != nil { + log.Errorf("cursor ExecuteStream PANIC: %v", r) + err = fmt.Errorf("cursor: internal panic: %v", r) + } + if err != nil { + log.Warnf("cursor ExecuteStream error: %v", err) + } + }() + accessToken := cursorAccessToken(auth) + if accessToken == "" { + return nil, fmt.Errorf("cursor: access token not found") + } + + // Extract session_id from metadata BEFORE translation (translation strips metadata) + ccSessionId := extractClaudeCodeSessionId(req.Payload) + if ccSessionId == "" && len(opts.OriginalRequest) > 0 { + ccSessionId = extractClaudeCodeSessionId(opts.OriginalRequest) + } + + // Translate input to OpenAI format if needed + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + payload := req.Payload + originalPayload := bytes.Clone(req.Payload) + if len(opts.OriginalRequest) > 0 { + originalPayload = bytes.Clone(opts.OriginalRequest) + } + if from.String() != "" && from.String() != "openai" { + log.Debugf("cursor: translating request 
from %s to openai", from) + payload = sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(payload), true) + log.Debugf("cursor: translated payload len=%d", len(payload)) + } + + parsed := parseOpenAIRequest(payload) + log.Debugf("cursor: parsed request: model=%s userText=%d chars, turns=%d, tools=%d, toolResults=%d", + parsed.Model, len(parsed.UserText), len(parsed.Turns), len(parsed.Tools), len(parsed.ToolResults)) + + conversationId := deriveConversationId(apiKeyFromContext(ctx), ccSessionId, parsed.SystemPrompt) + authID := auth.ID // e.g. "cursor.json" or "cursor-account2.json" + log.Debugf("cursor: conversationId=%s authID=%s", conversationId, authID) + + // Session key includes authID (H2 stream is auth-specific, not transferable). + // Checkpoint key uses conversationId only — allows detecting auth migration. + sessionKey := authID + ":" + conversationId + checkpointKey := conversationId + needsTranslate := from.String() != "" && from.String() != "openai" + + // Check if we can resume an existing session with tool results + if len(parsed.ToolResults) > 0 { + e.mu.Lock() + session, hasSession := e.sessions[sessionKey] + if hasSession { + delete(e.sessions, sessionKey) + } + // If no session found for current auth, check for stale sessions from + // a different auth on the same conversation (quota failover scenario). + // Clean them up since the H2 stream belongs to the old account. 
+ if !hasSession { + if oldKey := e.findSessionByConversationLocked(conversationId); oldKey != "" { + oldSession := e.sessions[oldKey] + log.Infof("cursor: cleaning up stale session from auth %s for conv=%s (auth migrated to %s)", oldSession.authID, conversationId, authID) + oldSession.cancel() + if oldSession.stream != nil { + oldSession.stream.Close() + } + delete(e.sessions, oldKey) + } + } + e.mu.Unlock() + + if hasSession && session.stream != nil && session.authID == authID { + log.Debugf("cursor: resuming session %s with %d tool results", sessionKey, len(parsed.ToolResults)) + return e.resumeWithToolResults(ctx, session, parsed, from, to, req, originalPayload, payload, needsTranslate) + } + if hasSession && session.authID != authID { + log.Warnf("cursor: session %s belongs to auth %s, but request is from %s — skipping resume", sessionKey, session.authID, authID) + } + } + + // Clean up any stale session for this key (or from a previous auth on same conversation) + e.mu.Lock() + if old, ok := e.sessions[sessionKey]; ok { + old.cancel() + delete(e.sessions, sessionKey) + } else if oldKey := e.findSessionByConversationLocked(conversationId); oldKey != "" { + old := e.sessions[oldKey] + old.cancel() + if old.stream != nil { + old.stream.Close() + } + delete(e.sessions, oldKey) + } + e.mu.Unlock() + + // Look up saved checkpoint for this conversation (keyed by conversationId only). + // Checkpoint is auth-specific: if auth changed (e.g. quota exhaustion failover), + // the old checkpoint is useless on the new account — discard and flatten. 
+ e.mu.Lock() + saved, hasCheckpoint := e.checkpoints[checkpointKey] + e.mu.Unlock() + + params := buildRunRequestParams(parsed, conversationId) + + if hasCheckpoint && saved.data != nil && saved.authID == authID { + // Same auth — use checkpoint normally + log.Debugf("cursor: using saved checkpoint (%d bytes) for conv=%s auth=%s", len(saved.data), checkpointKey, authID) + params.RawCheckpoint = saved.data + // Merge saved blobStore into params + if params.BlobStore == nil { + params.BlobStore = make(map[string][]byte) + } + for k, v := range saved.blobStore { + if _, exists := params.BlobStore[k]; !exists { + params.BlobStore[k] = v + } + } + } else if hasCheckpoint && saved.data != nil && saved.authID != authID { + // Auth changed (quota failover) — checkpoint is not portable across accounts. + // Discard and flatten conversation history into userText. + log.Infof("cursor: auth migrated (%s → %s) for conv=%s, discarding checkpoint and flattening context", saved.authID, authID, checkpointKey) + e.mu.Lock() + delete(e.checkpoints, checkpointKey) + e.mu.Unlock() + if len(parsed.ToolResults) > 0 || len(parsed.Turns) > 0 { + flattenConversationIntoUserText(parsed) + params = buildRunRequestParams(parsed, conversationId) + } + } else if len(parsed.ToolResults) > 0 || len(parsed.Turns) > 0 { + // Fallback: no checkpoint available (cold resume / proxy restart). + // Flatten the full conversation history (including tool interactions) into userText. + // Cursor's turns encoding is not reliably read by the model, but userText always works. 
+ log.Debugf("cursor: no checkpoint, flattening %d turns + %d tool results into userText", len(parsed.Turns), len(parsed.ToolResults)) + flattenConversationIntoUserText(parsed) + params = buildRunRequestParams(parsed, conversationId) + } + requestBytes := cursorproto.EncodeRunRequest(params) + framedRequest := cursorproto.FrameConnectMessage(requestBytes, 0) + + stream, err := openCursorH2Stream(accessToken) + if err != nil { + return nil, err + } + + if err := stream.Write(framedRequest); err != nil { + stream.Close() + return nil, fmt.Errorf("cursor: failed to send request: %w", err) + } + + // Use a session-scoped context for the heartbeat that is NOT tied to the HTTP request. + // This ensures the heartbeat survives across request boundaries during MCP tool execution. + // Mirrors the TS plugin's setInterval-based heartbeat that lives independently of HTTP responses. + sessionCtx, sessionCancel := context.WithCancel(context.Background()) + go cursorH2Heartbeat(sessionCtx, stream) + + chunks := make(chan cliproxyexecutor.StreamChunk, 64) + chatId := "chatcmpl-" + uuid.New().String()[:28] + created := time.Now().Unix() + + var streamParam any + + // Tool result channel for inline mode. processH2SessionFrames blocks on it + // when mcpArgs is received, while continuing to handle KV/heartbeat. + toolResultCh := make(chan []toolResultInfo, 1) + + // Switchable output: initially writes to `chunks`. After mcpArgs, the + // onMcpExec callback closes `chunks` (ending the first HTTP response), + // then processH2SessionFrames blocks on toolResultCh. When results arrive, + // it switches to `resumeOutCh` (created by resumeWithToolResults). 
+ var outMu sync.Mutex + currentOut := chunks + + emitToOut := func(chunk cliproxyexecutor.StreamChunk) { + outMu.Lock() + out := currentOut + outMu.Unlock() + if out != nil { + out <- chunk + } + } + + // Wrap sendChunk/sendDone to use emitToOut + sendChunkSwitchable := func(delta string, finishReason string) { + fr := "null" + if finishReason != "" { + fr = finishReason + } + openaiJSON := fmt.Sprintf(`{"id":"%s","object":"chat.completion.chunk","created":%d,"model":"%s","choices":[{"index":0,"delta":%s,"finish_reason":%s}]}`, + chatId, created, parsed.Model, delta, fr) + sseLine := []byte("data: " + openaiJSON + "\n") + + if needsTranslate { + translated := sdktranslator.TranslateStream(ctx, to, from, req.Model, originalPayload, payload, sseLine, &streamParam) + for _, t := range translated { + emitToOut(cliproxyexecutor.StreamChunk{Payload: bytes.Clone(t)}) + } + } else { + emitToOut(cliproxyexecutor.StreamChunk{Payload: []byte(openaiJSON)}) + } + } + + sendDoneSwitchable := func() { + if needsTranslate { + done := sdktranslator.TranslateStream(ctx, to, from, req.Model, originalPayload, payload, []byte("data: [DONE]\n"), &streamParam) + for _, d := range done { + emitToOut(cliproxyexecutor.StreamChunk{Payload: bytes.Clone(d)}) + } + } else { + emitToOut(cliproxyexecutor.StreamChunk{Payload: []byte("[DONE]")}) + } + } + + // Pre-response error detection for transparent failover: + // If the stream fails before any chunk is emitted (e.g. quota exceeded), + // ExecuteStream returns an error so the conductor retries with a different auth. 
+ streamErrCh := make(chan error, 1) + firstChunkSent := make(chan struct{}, 1) // buffered: goroutine won't block signaling + + origEmitToOut := emitToOut + emitToOut = func(chunk cliproxyexecutor.StreamChunk) { + select { + case firstChunkSent <- struct{}{}: + default: + } + origEmitToOut(chunk) + } + + go func() { + var resumeOutCh chan cliproxyexecutor.StreamChunk + _ = resumeOutCh + thinkingActive := false + toolCallIndex := 0 + usage := &cursorTokenUsage{} + usage.setInputEstimate(len(payload)) + + streamErr := processH2SessionFrames(sessionCtx, stream, params.BlobStore, params.McpTools, + func(text string, isThinking bool) { + if isThinking { + if !thinkingActive { + thinkingActive = true + sendChunkSwitchable(`{"role":"assistant","content":""}`, "") + } + sendChunkSwitchable(fmt.Sprintf(`{"content":%s}`, jsonString(text)), "") + } else { + if thinkingActive { + thinkingActive = false + sendChunkSwitchable(`{"content":""}`, "") + } + sendChunkSwitchable(fmt.Sprintf(`{"content":%s}`, jsonString(text)), "") + } + }, + func(exec pendingMcpExec) { + if thinkingActive { + thinkingActive = false + sendChunkSwitchable(`{"content":""}`, "") + } + toolCallJSON := fmt.Sprintf(`{"tool_calls":[{"index":%d,"id":"%s","type":"function","function":{"name":"%s","arguments":%s}}]}`, + toolCallIndex, exec.ToolCallId, exec.ToolName, jsonString(exec.Args)) + toolCallIndex++ + sendChunkSwitchable(toolCallJSON, "") + sendChunkSwitchable(`{}`, `"tool_calls"`) + sendDoneSwitchable() + + // Close current output to end the current HTTP SSE response + outMu.Lock() + if currentOut != nil { + close(currentOut) + currentOut = nil + } + outMu.Unlock() + + // Create new resume output channel, reuse the same toolResultCh + resumeOut := make(chan cliproxyexecutor.StreamChunk, 64) + log.Debugf("cursor: saving session %s for MCP tool resume (tool=%s)", sessionKey, exec.ToolName) + e.mu.Lock() + e.sessions[sessionKey] = &cursorSession{ + stream: stream, + blobStore: params.BlobStore, + mcpTools: 
params.McpTools, + pending: []pendingMcpExec{exec}, + cancel: sessionCancel, + createdAt: time.Now(), + authID: authID, + toolResultCh: toolResultCh, // reuse same channel across rounds + resumeOutCh: resumeOut, + switchOutput: func(ch chan cliproxyexecutor.StreamChunk) { + outMu.Lock() + currentOut = ch + // Reset translator state so the new HTTP response gets + // a fresh message_start, content_block_start, etc. + streamParam = nil + // New response needs its own message ID + chatId = "chatcmpl-" + uuid.New().String()[:28] + created = time.Now().Unix() + outMu.Unlock() + }, + } + e.mu.Unlock() + resumeOutCh = resumeOut + + // processH2SessionFrames will now block on toolResultCh (inline wait loop) + // while continuing to handle KV messages + }, + toolResultCh, + usage, + func(cpData []byte) { + // Save checkpoint keyed by conversationId, tagged with authID for migration detection + e.mu.Lock() + e.checkpoints[checkpointKey] = &savedCheckpoint{ + data: cpData, + blobStore: params.BlobStore, + authID: authID, + updatedAt: time.Now(), + } + e.mu.Unlock() + log.Debugf("cursor: saved checkpoint (%d bytes) for conv=%s auth=%s", len(cpData), checkpointKey, authID) + }, + ) + + // processH2SessionFrames returned — stream is done. + // Check if error happened before any chunks were emitted. + if streamErr != nil { + select { + case <-firstChunkSent: + // Chunks were already sent to client — can't transparently retry. + // Next request will failover via conductor's cooldown mechanism. + log.Warnf("cursor: stream error after data sent (auth=%s conv=%s): %v", authID, conversationId, streamErr) + default: + // No data sent yet — propagate error for transparent conductor retry. 
+				log.Warnf("cursor: stream error before data sent (auth=%s conv=%s): %v — signaling retry", authID, conversationId, streamErr)
+				streamErrCh <- streamErr
+				outMu.Lock()
+				if currentOut != nil {
+					close(currentOut)
+					currentOut = nil
+				}
+				outMu.Unlock()
+				sessionCancel()
+				stream.Close()
+				return
+			}
+		}
+
+		if thinkingActive {
+			sendChunkSwitchable(`{"content":""}`, "")
+		}
+		// Build the final stop chunk with token usage at the top level,
+		// alongside the choices array.
+		inputTok, outputTok := usage.get()
+		fr := `"stop"`
+		openaiJSON := fmt.Sprintf(`{"id":"%s","object":"chat.completion.chunk","created":%d,"model":"%s","choices":[{"index":0,"delta":{},"finish_reason":%s}],"usage":{"prompt_tokens":%d,"completion_tokens":%d,"total_tokens":%d}}`,
+			chatId, created, parsed.Model, fr, inputTok, outputTok, inputTok+outputTok)
+		sseLine := []byte("data: " + openaiJSON + "\n")
+		if needsTranslate {
+			translated := sdktranslator.TranslateStream(ctx, to, from, req.Model, originalPayload, payload, sseLine, &streamParam)
+			for _, t := range translated {
+				emitToOut(cliproxyexecutor.StreamChunk{Payload: bytes.Clone(t)})
+			}
+		} else {
+			emitToOut(cliproxyexecutor.StreamChunk{Payload: []byte(openaiJSON)})
+		}
+		sendDoneSwitchable()
+
+		// Close whatever output channel is still active
+		outMu.Lock()
+		if currentOut != nil {
+			close(currentOut)
+			currentOut = nil
+		}
+		outMu.Unlock()
+		sessionCancel()
+		stream.Close()
+	}()
+
+	// Wait for either the first chunk or a pre-response error.
+	// If the stream fails before emitting any data (e.g. quota exceeded),
+	// return an error so the conductor retries with a different auth.
+ select { + case streamErr := <-streamErrCh: + return nil, classifyCursorError(fmt.Errorf("cursor: stream failed before response: %w", streamErr)) + case <-firstChunkSent: + // Data started flowing — return stream to client + return &cliproxyexecutor.StreamResult{Chunks: chunks}, nil + } +} + +// resumeWithToolResults injects tool results into the running processH2SessionFrames +// via the toolResultCh channel. The original goroutine from ExecuteStream is still alive, +// blocking on toolResultCh. Once we send the results, it sends the MCP result to Cursor +// and continues processing the response text — all in the same goroutine that has been +// handling KV messages the whole time. +func (e *CursorExecutor) resumeWithToolResults( + ctx context.Context, + session *cursorSession, + parsed *parsedOpenAIRequest, + from, to sdktranslator.Format, + req cliproxyexecutor.Request, + originalPayload, payload []byte, + needsTranslate bool, +) (*cliproxyexecutor.StreamResult, error) { + log.Debugf("cursor: resumeWithToolResults: injecting %d tool results via channel", len(parsed.ToolResults)) + + if session.toolResultCh == nil { + return nil, fmt.Errorf("cursor: session has no toolResultCh (stale session?)") + } + if session.resumeOutCh == nil { + return nil, fmt.Errorf("cursor: session has no resumeOutCh") + } + + log.Debugf("cursor: resumeWithToolResults: switching output to resumeOutCh and injecting results") + + // Switch the output channel BEFORE injecting results, so that when + // processH2SessionFrames unblocks and starts emitting text, it writes + // to the resumeOutCh which the new HTTP handler is reading from. 
+ if session.switchOutput != nil { + session.switchOutput(session.resumeOutCh) + } + + // Inject tool results — this unblocks the waiting processH2SessionFrames + session.toolResultCh <- parsed.ToolResults + + // Return the resumeOutCh for the new HTTP handler to read from + return &cliproxyexecutor.StreamResult{Chunks: session.resumeOutCh}, nil +} + +// --- H2Stream helpers --- + +func openCursorH2Stream(accessToken string) (*cursorproto.H2Stream, error) { + headers := map[string]string{ + ":path": cursorRunPath, + "content-type": "application/connect+proto", + "connect-protocol-version": "1", + "te": "trailers", + "authorization": "Bearer " + accessToken, + "x-ghost-mode": "true", + "x-cursor-client-version": cursorClientVersion, + "x-cursor-client-type": "cli", + "x-request-id": uuid.New().String(), + } + return cursorproto.DialH2Stream("api2.cursor.sh", headers) +} + +func cursorH2Heartbeat(ctx context.Context, stream *cursorproto.H2Stream) { + ticker := time.NewTicker(cursorHeartbeatInterval) + defer ticker.Stop() + for { + select { + case <-ctx.Done(): + return + case <-ticker.C: + hb := cursorproto.EncodeHeartbeat() + frame := cursorproto.FrameConnectMessage(hb, 0) + if err := stream.Write(frame); err != nil { + return + } + } + } +} + +// --- Response processing --- + +// cursorTokenUsage tracks token counts from Cursor's TokenDeltaUpdate messages. 
+type cursorTokenUsage struct { + mu sync.Mutex + outputTokens int64 + inputTokensEst int64 // estimated from request payload size +} + +func (u *cursorTokenUsage) addOutput(delta int64) { + u.mu.Lock() + defer u.mu.Unlock() + u.outputTokens += delta +} + +func (u *cursorTokenUsage) setInputEstimate(payloadBytes int) { + u.mu.Lock() + defer u.mu.Unlock() + // Rough estimate: ~4 bytes per token for mixed content + u.inputTokensEst = int64(payloadBytes / 4) + if u.inputTokensEst < 1 { + u.inputTokensEst = 1 + } +} + +func (u *cursorTokenUsage) get() (input, output int64) { + u.mu.Lock() + defer u.mu.Unlock() + return u.inputTokensEst, u.outputTokens +} + +func processH2SessionFrames( + ctx context.Context, + stream *cursorproto.H2Stream, + blobStore map[string][]byte, + mcpTools []cursorproto.McpToolDef, + onText func(text string, isThinking bool), + onMcpExec func(exec pendingMcpExec), + toolResultCh <-chan []toolResultInfo, // nil for no tool result injection; non-nil to wait for results + tokenUsage *cursorTokenUsage, // tracks accumulated token usage (may be nil) + onCheckpoint func(data []byte), // called when server sends conversation_checkpoint_update +) error { + var buf bytes.Buffer + rejectReason := "Tool not available in this environment. Use the MCP tools provided instead." 
+ log.Debugf("cursor: processH2SessionFrames started for streamID=%s, waiting for data...", stream.ID()) + for { + select { + case <-ctx.Done(): + log.Debugf("cursor: processH2SessionFrames exiting: context done") + return ctx.Err() + case data, ok := <-stream.Data(): + if !ok { + log.Debugf("cursor: processH2SessionFrames[%s]: exiting: stream data channel closed", stream.ID()) + return stream.Err() // may be RST_STREAM, GOAWAY, or nil for clean close + } + // Log first 20 bytes of raw data for debugging + previewLen := min(20, len(data)) + log.Debugf("cursor: processH2SessionFrames[%s]: received %d bytes from dataCh, first bytes: %x (%q)", stream.ID(), len(data), data[:previewLen], string(data[:previewLen])) + buf.Write(data) + log.Debugf("cursor: processH2SessionFrames[%s]: buf total=%d", stream.ID(), buf.Len()) + + // Process all complete frames + for { + currentBuf := buf.Bytes() + if len(currentBuf) == 0 { + break + } + flags, payload, consumed, ok := cursorproto.ParseConnectFrame(currentBuf) + if !ok { + // Log detailed info about why parsing failed + previewLen := min(20, len(currentBuf)) + log.Debugf("cursor: incomplete frame in buffer, waiting for more data (buf=%d bytes, first bytes: %x = %q)", len(currentBuf), currentBuf[:previewLen], string(currentBuf[:previewLen])) + break + } + buf.Next(consumed) + log.Debugf("cursor: parsed Connect frame flags=0x%02x payload=%d bytes consumed=%d", flags, len(payload), consumed) + + if flags&cursorproto.ConnectEndStreamFlag != 0 { + if err := cursorproto.ParseConnectEndStream(payload); err != nil { + log.Warnf("cursor: connect end stream error: %v", err) + return err // propagate server-side errors (quota, rate limit, etc.) 
+ } + continue + } + + msg, err := cursorproto.DecodeAgentServerMessage(payload) + if err != nil { + log.Debugf("cursor: failed to decode server message: %v", err) + continue + } + + log.Debugf("cursor: decoded server message type=%d", msg.Type) + switch msg.Type { + case cursorproto.ServerMsgTextDelta: + if msg.Text != "" && onText != nil { + onText(msg.Text, false) + } + case cursorproto.ServerMsgThinkingDelta: + if msg.Text != "" && onText != nil { + onText(msg.Text, true) + } + case cursorproto.ServerMsgThinkingCompleted: + // Handled by caller + + case cursorproto.ServerMsgTurnEnded: + log.Debugf("cursor: TurnEnded received, stream will finish") + return nil // clean completion + + case cursorproto.ServerMsgHeartbeat: + // Server heartbeat, ignore silently + continue + + case cursorproto.ServerMsgCheckpoint: + if onCheckpoint != nil && len(msg.CheckpointData) > 0 { + onCheckpoint(msg.CheckpointData) + } + continue + + case cursorproto.ServerMsgTokenDelta: + if tokenUsage != nil && msg.TokenDelta > 0 { + tokenUsage.addOutput(msg.TokenDelta) + } + continue + + case cursorproto.ServerMsgKvGetBlob: + blobKey := cursorproto.BlobIdHex(msg.BlobId) + data := blobStore[blobKey] + resp := cursorproto.EncodeKvGetBlobResult(msg.KvId, data) + stream.Write(cursorproto.FrameConnectMessage(resp, 0)) + + case cursorproto.ServerMsgKvSetBlob: + blobKey := cursorproto.BlobIdHex(msg.BlobId) + blobStore[blobKey] = append([]byte(nil), msg.BlobData...) 
+ resp := cursorproto.EncodeKvSetBlobResult(msg.KvId) + stream.Write(cursorproto.FrameConnectMessage(resp, 0)) + + case cursorproto.ServerMsgExecRequestCtx: + resp := cursorproto.EncodeExecRequestContextResult(msg.ExecMsgId, msg.ExecId, mcpTools) + stream.Write(cursorproto.FrameConnectMessage(resp, 0)) + + case cursorproto.ServerMsgExecMcpArgs: + if onMcpExec != nil { + decodedArgs := decodeMcpArgsToJSON(msg.McpArgs) + toolCallId := msg.McpToolCallId + if toolCallId == "" { + toolCallId = uuid.New().String() + } + log.Debugf("cursor: received mcpArgs from server: execMsgId=%d execId=%q toolName=%s toolCallId=%s", + msg.ExecMsgId, msg.ExecId, msg.McpToolName, toolCallId) + pending := pendingMcpExec{ + ExecMsgId: msg.ExecMsgId, + ExecId: msg.ExecId, + ToolCallId: toolCallId, + ToolName: msg.McpToolName, + Args: decodedArgs, + } + onMcpExec(pending) + + if toolResultCh == nil { + return nil + } + + // Inline mode: wait for tool result while handling KV/heartbeat + log.Debugf("cursor: waiting for tool result on channel (inline mode)...") + var toolResults []toolResultInfo + waitLoop: + for { + select { + case <-ctx.Done(): + return ctx.Err() + case results, ok := <-toolResultCh: + if !ok { + return nil + } + toolResults = results + break waitLoop + case waitData, ok := <-stream.Data(): + if !ok { + return stream.Err() + } + buf.Write(waitData) + for { + cb := buf.Bytes() + if len(cb) == 0 { + break + } + wf, wp, wc, wok := cursorproto.ParseConnectFrame(cb) + if !wok { + break + } + buf.Next(wc) + if wf&cursorproto.ConnectEndStreamFlag != 0 { + continue + } + wmsg, werr := cursorproto.DecodeAgentServerMessage(wp) + if werr != nil { + continue + } + switch wmsg.Type { + case cursorproto.ServerMsgKvGetBlob: + blobKey := cursorproto.BlobIdHex(wmsg.BlobId) + d := blobStore[blobKey] + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeKvGetBlobResult(wmsg.KvId, d), 0)) + case cursorproto.ServerMsgKvSetBlob: + blobKey := cursorproto.BlobIdHex(wmsg.BlobId) + 
blobStore[blobKey] = append([]byte(nil), wmsg.BlobData...) + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeKvSetBlobResult(wmsg.KvId), 0)) + case cursorproto.ServerMsgExecRequestCtx: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecRequestContextResult(wmsg.ExecMsgId, wmsg.ExecId, mcpTools), 0)) + case cursorproto.ServerMsgCheckpoint: + if onCheckpoint != nil && len(wmsg.CheckpointData) > 0 { + onCheckpoint(wmsg.CheckpointData) + } + } + } + case <-stream.Done(): + return stream.Err() + } + } + + // Send MCP result + for _, tr := range toolResults { + if tr.ToolCallId == pending.ToolCallId { + log.Debugf("cursor: sending inline MCP result for tool=%s", pending.ToolName) + resultBytes := cursorproto.EncodeExecMcpResult(pending.ExecMsgId, pending.ExecId, tr.Content, false) + stream.Write(cursorproto.FrameConnectMessage(resultBytes, 0)) + break + } + } + continue + } + + case cursorproto.ServerMsgExecReadArgs: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecReadRejected(msg.ExecMsgId, msg.ExecId, msg.Path, rejectReason), 0)) + case cursorproto.ServerMsgExecWriteArgs: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecWriteRejected(msg.ExecMsgId, msg.ExecId, msg.Path, rejectReason), 0)) + case cursorproto.ServerMsgExecDeleteArgs: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecDeleteRejected(msg.ExecMsgId, msg.ExecId, msg.Path, rejectReason), 0)) + case cursorproto.ServerMsgExecLsArgs: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecLsRejected(msg.ExecMsgId, msg.ExecId, msg.Path, rejectReason), 0)) + case cursorproto.ServerMsgExecGrepArgs: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecGrepError(msg.ExecMsgId, msg.ExecId, rejectReason), 0)) + case cursorproto.ServerMsgExecShellArgs, cursorproto.ServerMsgExecShellStream: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecShellRejected(msg.ExecMsgId, msg.ExecId, 
msg.Command, msg.WorkingDirectory, rejectReason), 0)) + case cursorproto.ServerMsgExecBgShellSpawn: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecBackgroundShellSpawnRejected(msg.ExecMsgId, msg.ExecId, msg.Command, msg.WorkingDirectory, rejectReason), 0)) + case cursorproto.ServerMsgExecFetchArgs: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecFetchError(msg.ExecMsgId, msg.ExecId, msg.Url, rejectReason), 0)) + case cursorproto.ServerMsgExecDiagnostics: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecDiagnosticsResult(msg.ExecMsgId, msg.ExecId), 0)) + case cursorproto.ServerMsgExecWriteShellStdin: + stream.Write(cursorproto.FrameConnectMessage(cursorproto.EncodeExecWriteShellStdinError(msg.ExecMsgId, msg.ExecId, rejectReason), 0)) + } + } + + case <-stream.Done(): + log.Debugf("cursor: processH2SessionFrames exiting: stream done") + return stream.Err() + } + } +} + +// --- OpenAI request parsing --- + +type parsedOpenAIRequest struct { + Model string + Messages []gjson.Result + Tools []gjson.Result + Stream bool + SystemPrompt string + UserText string + Images []cursorproto.ImageData + Turns []cursorproto.TurnData + ToolResults []toolResultInfo +} + +type toolResultInfo struct { + ToolCallId string + Content string +} + +func parseOpenAIRequest(payload []byte) *parsedOpenAIRequest { + p := &parsedOpenAIRequest{ + Model: gjson.GetBytes(payload, "model").String(), + Stream: gjson.GetBytes(payload, "stream").Bool(), + } + + messages := gjson.GetBytes(payload, "messages").Array() + p.Messages = messages + + // Extract system prompt + var systemParts []string + for _, msg := range messages { + if msg.Get("role").String() == "system" { + systemParts = append(systemParts, extractTextContent(msg.Get("content"))) + } + } + if len(systemParts) > 0 { + p.SystemPrompt = strings.Join(systemParts, "\n") + } else { + p.SystemPrompt = "You are a helpful assistant." 
+	}
+
+	// Extract turns, tool results, and last user message
+	var pendingUser string
+	for _, msg := range messages {
+		role := msg.Get("role").String()
+		switch role {
+		case "system":
+			continue
+		case "tool":
+			p.ToolResults = append(p.ToolResults, toolResultInfo{
+				ToolCallId: msg.Get("tool_call_id").String(),
+				Content:    extractTextContent(msg.Get("content")),
+			})
+		case "user":
+			if pendingUser != "" {
+				p.Turns = append(p.Turns, cursorproto.TurnData{UserText: pendingUser})
+			}
+			pendingUser = extractTextContent(msg.Get("content"))
+			p.Images = extractImages(msg.Get("content"))
+		case "assistant":
+			assistantText := extractTextContent(msg.Get("content"))
+			if pendingUser != "" {
+				p.Turns = append(p.Turns, cursorproto.TurnData{
+					UserText:      pendingUser,
+					AssistantText: assistantText,
+				})
+				pendingUser = ""
+			} else if len(p.Turns) > 0 && assistantText != "" {
+				// Assistant message after tool results (no pending user) —
+				// append to the last turn's assistant text to preserve context.
+				last := &p.Turns[len(p.Turns)-1]
+				if last.AssistantText != "" {
+					last.AssistantText += "\n" + assistantText
+				} else {
+					last.AssistantText = assistantText
+				}
+			}
+		}
+	}
+
+	if pendingUser != "" {
+		p.UserText = pendingUser
+	} else if len(p.Turns) > 0 && len(p.ToolResults) == 0 {
+		last := p.Turns[len(p.Turns)-1]
+		p.Turns = p.Turns[:len(p.Turns)-1]
+		p.UserText = last.UserText
+	}
+
+	// Extract tools
+	p.Tools = gjson.GetBytes(payload, "tools").Array()
+
+	return p
+}
+
+// flattenConversationIntoUserText flattens the full conversation history
+// (turns + tool results) into the UserText field as plain text.
+// This is the fallback for cold resume when no checkpoint is available.
+// Cursor reliably reads UserText but ignores structured turns.
+func flattenConversationIntoUserText(parsed *parsedOpenAIRequest) { + var buf strings.Builder + + // Flatten turns into readable context + for _, turn := range parsed.Turns { + if turn.UserText != "" { + buf.WriteString("USER: ") + buf.WriteString(turn.UserText) + buf.WriteString("\n\n") + } + if turn.AssistantText != "" { + buf.WriteString("ASSISTANT: ") + buf.WriteString(turn.AssistantText) + buf.WriteString("\n\n") + } + } + + // Flatten tool results + for _, tr := range parsed.ToolResults { + buf.WriteString("TOOL_RESULT (call_id: ") + buf.WriteString(tr.ToolCallId) + buf.WriteString("): ") + // Truncate very large tool results to avoid overwhelming the context + content := tr.Content + if len(content) > 8000 { + content = content[:8000] + "\n... [truncated]" + } + buf.WriteString(content) + buf.WriteString("\n\n") + } + + if buf.Len() > 0 { + buf.WriteString("The above is the previous conversation context including tool call results.\n") + buf.WriteString("Continue your response based on this context.\n\n") + } + + // Prepend flattened history to the current UserText + if parsed.UserText != "" { + parsed.UserText = buf.String() + "Current request: " + parsed.UserText + } else { + parsed.UserText = buf.String() + "Continue from the conversation above." 
+ } + + // Clear turns and tool results since they're now in UserText + parsed.Turns = nil + parsed.ToolResults = nil +} + +func extractTextContent(content gjson.Result) string { + if content.Type == gjson.String { + return content.String() + } + if content.IsArray() { + var parts []string + for _, part := range content.Array() { + if part.Get("type").String() == "text" { + parts = append(parts, part.Get("text").String()) + } + } + return strings.Join(parts, "") + } + return content.String() +} + +func extractImages(content gjson.Result) []cursorproto.ImageData { + if !content.IsArray() { + return nil + } + var images []cursorproto.ImageData + for _, part := range content.Array() { + if part.Get("type").String() == "image_url" { + url := part.Get("image_url.url").String() + if strings.HasPrefix(url, "data:") { + img := parseDataURL(url) + if img != nil { + images = append(images, *img) + } + } + } + } + return images +} + +func parseDataURL(url string) *cursorproto.ImageData { + // data:image/png;base64,... 
+ if !strings.HasPrefix(url, "data:") { + return nil + } + parts := strings.SplitN(url[5:], ";", 2) + if len(parts) != 2 { + return nil + } + mimeType := parts[0] + if !strings.HasPrefix(parts[1], "base64,") { + return nil + } + encoded := parts[1][7:] + data, err := base64.StdEncoding.DecodeString(encoded) + if err != nil { + // Try RawStdEncoding for unpadded base64 + data, err = base64.RawStdEncoding.DecodeString(encoded) + if err != nil { + return nil + } + } + return &cursorproto.ImageData{ + MimeType: mimeType, + Data: data, + } +} + +func buildRunRequestParams(parsed *parsedOpenAIRequest, conversationId string) *cursorproto.RunRequestParams { + params := &cursorproto.RunRequestParams{ + ModelId: parsed.Model, + SystemPrompt: parsed.SystemPrompt, + UserText: parsed.UserText, + MessageId: uuid.New().String(), + ConversationId: conversationId, + Images: parsed.Images, + Turns: parsed.Turns, + BlobStore: make(map[string][]byte), + } + + // Convert OpenAI tools to McpToolDefs + for _, tool := range parsed.Tools { + fn := tool.Get("function") + params.McpTools = append(params.McpTools, cursorproto.McpToolDef{ + Name: fn.Get("name").String(), + Description: fn.Get("description").String(), + InputSchema: json.RawMessage(fn.Get("parameters").Raw), + }) + } + + return params +} + +// --- Helpers --- + +func cursorAccessToken(auth *cliproxyauth.Auth) string { + if auth == nil || auth.Metadata == nil { + return "" + } + if v, ok := auth.Metadata["access_token"].(string); ok { + return v + } + return "" +} + +func cursorRefreshToken(auth *cliproxyauth.Auth) string { + if auth == nil || auth.Metadata == nil { + return "" + } + if v, ok := auth.Metadata["refresh_token"].(string); ok { + return v + } + return "" +} + +func applyCursorHeaders(req *http.Request, accessToken string) { + req.Header.Set("Content-Type", "application/connect+proto") + req.Header.Set("Connect-Protocol-Version", "1") + req.Header.Set("Te", "trailers") + req.Header.Set("Authorization", "Bearer 
"+accessToken) + req.Header.Set("X-Ghost-Mode", "true") + req.Header.Set("X-Cursor-Client-Version", cursorClientVersion) + req.Header.Set("X-Cursor-Client-Type", "cli") + req.Header.Set("X-Request-Id", uuid.New().String()) +} + +func newH2Client() *http.Client { + return &http.Client{ + Transport: &http2.Transport{ + TLSClientConfig: &tls.Config{}, + }, + } +} + +// extractCCH extracts the cch value from the system prompt's billing header. +func extractCCH(systemPrompt string) string { + idx := strings.Index(systemPrompt, "cch=") + if idx < 0 { + return "" + } + rest := systemPrompt[idx+4:] + end := strings.IndexAny(rest, "; \n") + if end < 0 { + return rest + } + return rest[:end] +} + +// extractClaudeCodeSessionId extracts session_id from Claude Code's metadata.user_id JSON. +// Format: {"metadata":{"user_id":"{\"session_id\":\"xxx\",\"device_id\":\"yyy\"}"}} +func extractClaudeCodeSessionId(payload []byte) string { + userIdStr := gjson.GetBytes(payload, "metadata.user_id").String() + if userIdStr == "" { + return "" + } + // user_id is a JSON string that needs to be parsed again + sid := gjson.Get(userIdStr, "session_id").String() + return sid +} + +// deriveConversationId generates a deterministic conversation_id. +// Priority: session_id (stable across resume) > system prompt hash (fallback). 
+func deriveConversationId(apiKey, sessionId, systemPrompt string) string { + var input string + if sessionId != "" { + // Best: use Claude Code's session_id — stable even across resume + input = "cursor-conv:" + apiKey + ":" + sessionId + } else { + // Fallback: use system prompt content minus volatile cch + stable := systemPrompt + if idx := strings.Index(stable, "cch="); idx >= 0 { + end := strings.IndexAny(stable[idx:], "; \n") + if end > 0 { + stable = stable[:idx] + stable[idx+end:] + } + } + if len(stable) > 500 { + stable = stable[:500] + } + input = "cursor-conv:" + apiKey + ":" + stable + } + h := sha256.Sum256([]byte(input)) + s := hex.EncodeToString(h[:16]) + return fmt.Sprintf("%s-%s-%s-%s-%s", s[:8], s[8:12], s[12:16], s[16:20], s[20:32]) +} + +func deriveSessionKey(clientKey string, model string, messages []gjson.Result) string { + var firstUserContent string + var systemContent string + for _, msg := range messages { + role := msg.Get("role").String() + if role == "user" && firstUserContent == "" { + firstUserContent = extractTextContent(msg.Get("content")) + } else if role == "system" && systemContent == "" { + // System prompt differs per Claude Code session (contains cwd, session_id, etc.) + content := extractTextContent(msg.Get("content")) + if len(content) > 200 { + systemContent = content[:200] + } else { + systemContent = content + } + } + } + // Include client API key + system prompt hash to prevent session collisions: + // - Different users have different API keys + // - Different Claude Code sessions have different system prompts (cwd, tools, etc.) 
+ input := clientKey + ":" + model + ":" + systemContent + ":" + firstUserContent + if len(input) > 500 { + input = input[:500] + } + h := sha256.Sum256([]byte(input)) + return hex.EncodeToString(h[:])[:16] +} + +func sseChunk(id string, created int64, model string, delta string, finishReason string) cliproxyexecutor.StreamChunk { + fr := "null" + if finishReason != "" { + fr = finishReason + } + // Note: the framework's WriteChunk adds "data: " prefix and "\n\n" suffix, + // so we only output the raw JSON here. + data := fmt.Sprintf(`{"id":"%s","object":"chat.completion.chunk","created":%d,"model":"%s","choices":[{"index":0,"delta":%s,"finish_reason":%s}]}`, + id, created, model, delta, fr) + return cliproxyexecutor.StreamChunk{ + Payload: []byte(data), + } +} + +func jsonString(s string) string { + b, _ := json.Marshal(s) + return string(b) +} + +func decodeMcpArgsToJSON(args map[string][]byte) string { + if len(args) == 0 { + return "{}" + } + result := make(map[string]interface{}) + for k, v := range args { + // Try protobuf Value decoding first (matches TS: toJson(ValueSchema, fromBinary(ValueSchema, value))) + if decoded, err := cursorproto.ProtobufValueBytesToJSON(v); err == nil { + result[k] = decoded + } else { + // Fallback: try raw JSON + var jsonVal interface{} + if err := json.Unmarshal(v, &jsonVal); err == nil { + result[k] = jsonVal + } else { + result[k] = string(v) + } + } + } + b, _ := json.Marshal(result) + return string(b) +} + +// --- Model Discovery --- + +// FetchCursorModels retrieves available models from Cursor's API. 
+func FetchCursorModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *config.Config) []*registry.ModelInfo { + accessToken := cursorAccessToken(auth) + if accessToken == "" { + return GetCursorFallbackModels() + } + + ctx, cancel := context.WithTimeout(ctx, 5*time.Second) + defer cancel() + + // GetUsableModels is a unary RPC call (not streaming) + // Send an empty protobuf request + emptyReq := make([]byte, 0) + + h2Req, err := http.NewRequestWithContext(ctx, http.MethodPost, + cursorAPIURL+cursorModelsPath, bytes.NewReader(emptyReq)) + if err != nil { + log.Debugf("cursor: failed to create models request: %v", err) + return GetCursorFallbackModels() + } + + h2Req.Header.Set("Content-Type", "application/proto") + h2Req.Header.Set("Te", "trailers") + h2Req.Header.Set("Authorization", "Bearer "+accessToken) + h2Req.Header.Set("X-Ghost-Mode", "true") + h2Req.Header.Set("X-Cursor-Client-Version", cursorClientVersion) + h2Req.Header.Set("X-Cursor-Client-Type", "cli") + + client := newH2Client() + resp, err := client.Do(h2Req) + if err != nil { + log.Debugf("cursor: models request failed: %v", err) + return GetCursorFallbackModels() + } + defer resp.Body.Close() + + if resp.StatusCode < 200 || resp.StatusCode >= 300 { + log.Debugf("cursor: models request returned status %d", resp.StatusCode) + return GetCursorFallbackModels() + } + + body, err := io.ReadAll(resp.Body) + if err != nil { + return GetCursorFallbackModels() + } + + models := parseModelsResponse(body) + if len(models) == 0 { + return GetCursorFallbackModels() + } + return models +} + +func parseModelsResponse(data []byte) []*registry.ModelInfo { + // Try stripping Connect framing first + if len(data) >= cursorproto.ConnectFrameHeaderSize { + _, payload, _, ok := cursorproto.ParseConnectFrame(data) + if ok { + data = payload + } + } + + // The response is a GetUsableModelsResponse protobuf. + // We need to decode it manually - it contains a repeated "models" field. 
+ // Based on the TS code, the response has a `models` field (repeated) containing + model objects with modelId, displayName, thinkingDetails, etc. + + // For now, we'll try a simple decode approach + var models []*registry.ModelInfo + // Field 1 is likely "models" (repeated submessage) + for len(data) > 0 { + num, typ, n := consumeTag(data) + if n < 0 { + break + } + data = data[n:] + + if typ == 2 { // BytesType (submessage) + val, n := consumeBytes(data) + if n < 0 { + break + } + data = data[n:] + + if num == 1 { // models field + if m := parseModelEntry(val); m != nil { + models = append(models, m) + } + } + } else { + n := consumeFieldValue(num, typ, data) + if n < 0 { + break + } + data = data[n:] + } + } + + return models +} + +func parseModelEntry(data []byte) *registry.ModelInfo { + var modelId, displayName string + var hasThinking bool + + for len(data) > 0 { + num, typ, n := consumeTag(data) + if n < 0 { + break + } + data = data[n:] + + switch typ { + case 2: // BytesType + val, n := consumeBytes(data) + if n < 0 { + return nil + } + data = data[n:] + switch num { + case 1: // modelId + modelId = string(val) + case 2: // thinkingDetails + hasThinking = true + case 3: // displayModelId (use as fallback) + if displayName == "" { + displayName = string(val) + } + case 4: // displayName + displayName = string(val) + case 5: // displayNameShort + if displayName == "" { + displayName = string(val) + } + } + case 0: // VarintType + _, n := consumeVarint(data) + if n < 0 { + return nil + } + data = data[n:] + default: + n := consumeFieldValue(num, typ, data) + if n < 0 { + return nil + } + data = data[n:] + } + } + + if modelId == "" { + return nil + } + if displayName == "" { + displayName = modelId + } + + info := &registry.ModelInfo{ + ID: modelId, + Object: "model", + Created: time.Now().Unix(), + OwnedBy: "cursor", + Type: cursorAuthType, + DisplayName: displayName, + ContextLength: 200000, + MaxCompletionTokens: 64000, + } + if hasThinking { + 
info.Thinking = &registry.ThinkingSupport{ + Max: 50000, + DynamicAllowed: true, + } + } + return info +} + +// GetCursorFallbackModels returns hardcoded fallback models. +func GetCursorFallbackModels() []*registry.ModelInfo { + return []*registry.ModelInfo{ + {ID: "composer-2", Object: "model", OwnedBy: "cursor", Type: cursorAuthType, DisplayName: "Composer 2", ContextLength: 200000, MaxCompletionTokens: 64000, Thinking: &registry.ThinkingSupport{Max: 50000, DynamicAllowed: true}}, + {ID: "claude-4-sonnet", Object: "model", OwnedBy: "cursor", Type: cursorAuthType, DisplayName: "Claude 4 Sonnet", ContextLength: 200000, MaxCompletionTokens: 64000, Thinking: &registry.ThinkingSupport{Max: 50000, DynamicAllowed: true}}, + {ID: "claude-3.5-sonnet", Object: "model", OwnedBy: "cursor", Type: cursorAuthType, DisplayName: "Claude 3.5 Sonnet", ContextLength: 200000, MaxCompletionTokens: 8192}, + {ID: "gpt-4o", Object: "model", OwnedBy: "cursor", Type: cursorAuthType, DisplayName: "GPT-4o", ContextLength: 128000, MaxCompletionTokens: 16384}, + {ID: "cursor-small", Object: "model", OwnedBy: "cursor", Type: cursorAuthType, DisplayName: "Cursor Small", ContextLength: 200000, MaxCompletionTokens: 64000}, + {ID: "gemini-2.5-pro", Object: "model", OwnedBy: "cursor", Type: cursorAuthType, DisplayName: "Gemini 2.5 Pro", ContextLength: 1000000, MaxCompletionTokens: 65536, Thinking: &registry.ThinkingSupport{Max: 50000, DynamicAllowed: true}}, + } +} + +// Low-level protowire helpers (avoid importing protowire in executor) +func consumeTag(b []byte) (num int, typ int, n int) { + v, n := consumeVarint(b) + if n < 0 { + return 0, 0, -1 + } + return int(v >> 3), int(v & 7), n +} + +func consumeVarint(b []byte) (uint64, int) { + var val uint64 + for i := 0; i < len(b) && i < 10; i++ { + val |= uint64(b[i]&0x7f) << (7 * i) + if b[i]&0x80 == 0 { + return val, i + 1 + } + } + return 0, -1 +} + +func consumeBytes(b []byte) ([]byte, int) { + length, n := consumeVarint(b) + if n < 0 || int(length) > 
len(b)-n { + return nil, -1 + } + return b[n : n+int(length)], n + int(length) +} + +func consumeFieldValue(num, typ int, b []byte) int { + switch typ { + case 0: // Varint + _, n := consumeVarint(b) + return n + case 1: // 64-bit + if len(b) < 8 { + return -1 + } + return 8 + case 2: // Length-delimited + _, n := consumeBytes(b) + return n + case 5: // 32-bit + if len(b) < 4 { + return -1 + } + return 4 + default: + return -1 + } +} diff --git a/internal/runtime/executor/gemini_cli_executor.go b/internal/runtime/executor/gemini_cli_executor.go index d2df610966..a298fe8a0e 100644 --- a/internal/runtime/executor/gemini_cli_executor.go +++ b/internal/runtime/executor/gemini_cli_executor.go @@ -16,15 +16,15 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/geminicli" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/geminicli" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log 
"github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -139,7 +139,8 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth basePayload = fixGeminiCLIImageAspectRatio(baseModel, basePayload) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - basePayload = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "gemini", "request", basePayload, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + basePayload = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "gemini", "request", basePayload, originalTranslated, requestedModel, requestPath) action := "generateContent" if req.Metadata != nil { @@ -294,7 +295,8 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut basePayload = fixGeminiCLIImageAspectRatio(baseModel, basePayload) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - basePayload = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "gemini", "request", basePayload, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + basePayload = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, "gemini", "request", basePayload, originalTranslated, requestedModel, requestPath) projectID := resolveGeminiProjectID(auth) @@ -409,28 +411,44 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut if bytes.HasPrefix(line, dataTag) { segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, bytes.Clone(line), &param) for i := range segments { - out <- cliproxyexecutor.StreamChunk{Payload: segments[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: segments[i]}: + case <-ctx.Done(): + return + } } } } segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, []byte("[DONE]"), &param) for i := range segments { - out <- cliproxyexecutor.StreamChunk{Payload: segments[i]} + 
select { + case out <- cliproxyexecutor.StreamChunk{Payload: segments[i]}: + case <-ctx.Done(): + return + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } + return } + reporter.EnsurePublished(ctx) return } data, errRead := io.ReadAll(resp.Body) if errRead != nil { helps.RecordAPIResponseError(ctx, e.cfg, errRead) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errRead} + reporter.PublishFailure(ctx, errRead) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errRead}: + case <-ctx.Done(): + } + return } helps.AppendAPIResponseChunk(ctx, e.cfg, data) @@ -438,12 +456,20 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut var param any segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, data, &param) for i := range segments { - out <- cliproxyexecutor.StreamChunk{Payload: segments[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: segments[i]}: + case <-ctx.Done(): + return + } } segments = sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, []byte("[DONE]"), &param) for i := range segments { - out <- cliproxyexecutor.StreamChunk{Payload: segments[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: segments[i]}: + case <-ctx.Done(): + return + } } }(httpResp, append([]byte(nil), payload...), attemptModel) @@ -573,7 +599,10 @@ func (e *GeminiCLIExecutor) CountTokens(ctx context.Context, auth *cliproxyauth. } // Refresh refreshes the authentication credentials (no-op for Gemini CLI). 
-func (e *GeminiCLIExecutor) Refresh(_ context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { +func (e *GeminiCLIExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } return auth, nil } @@ -583,37 +612,43 @@ func prepareGeminiCLITokenSource(ctx context.Context, cfg *config.Config, auth * return nil, nil, fmt.Errorf("gemini-cli auth metadata missing") } - var base map[string]any - if tokenRaw, ok := metadata["token"].(map[string]any); ok && tokenRaw != nil { - base = cloneMap(tokenRaw) - } else { - base = make(map[string]any) - } + buildToken := func(meta map[string]any) (map[string]any, oauth2.Token) { + var base map[string]any + if tokenRaw, ok := meta["token"].(map[string]any); ok && tokenRaw != nil { + base = cloneMap(tokenRaw) + } else { + base = make(map[string]any) + } - var token oauth2.Token - if len(base) > 0 { - if raw, err := json.Marshal(base); err == nil { - _ = json.Unmarshal(raw, &token) + var token oauth2.Token + if len(base) > 0 { + if raw, err := json.Marshal(base); err == nil { + _ = json.Unmarshal(raw, &token) + } } - } - if token.AccessToken == "" { - token.AccessToken = stringValue(metadata, "access_token") - } - if token.RefreshToken == "" { - token.RefreshToken = stringValue(metadata, "refresh_token") - } - if token.TokenType == "" { - token.TokenType = stringValue(metadata, "token_type") - } - if token.Expiry.IsZero() { - if expiry := stringValue(metadata, "expiry"); expiry != "" { - if ts, err := time.Parse(time.RFC3339, expiry); err == nil { - token.Expiry = ts + if token.AccessToken == "" { + token.AccessToken = stringValue(meta, "access_token") + } + if token.RefreshToken == "" { + token.RefreshToken = stringValue(meta, "refresh_token") + } + if token.TokenType == "" { + token.TokenType = stringValue(meta, "token_type") + } + if token.Expiry.IsZero() { + if expiry := 
stringValue(meta, "expiry"); expiry != "" { + if ts, err := time.Parse(time.RFC3339, expiry); err == nil { + token.Expiry = ts + } } } + + return base, token } + base, token := buildToken(metadata) + conf := &oauth2.Config{ ClientID: geminiOAuthClientID, ClientSecret: geminiOAuthClientSecret, @@ -626,6 +661,29 @@ func prepareGeminiCLITokenSource(ctx context.Context, cfg *config.Config, auth * ctxToken = context.WithValue(ctxToken, oauth2.HTTPClient, httpClient) } + if cfg != nil && cfg.Home.Enabled { + now := time.Now() + if token.AccessToken == "" || (!token.Expiry.IsZero() && token.Expiry.Before(now.Add(30*time.Second))) { + refreshed, handled, errRefresh := helps.RefreshAuthViaHome(ctx, cfg, auth) + if handled { + if errRefresh != nil { + return nil, nil, errRefresh + } + auth = refreshed + metadata = geminiOAuthMetadata(auth) + if metadata == nil { + return nil, nil, fmt.Errorf("gemini-cli auth metadata missing") + } + base, token = buildToken(metadata) + } + } + if token.AccessToken == "" { + return nil, nil, fmt.Errorf("gemini-cli access token missing") + } + updateGeminiCLITokenMetadata(auth, base, &token) + return oauth2.StaticTokenSource(&token), base, nil + } + src := conf.TokenSource(ctxToken, &token) currentToken, err := src.Token() if err != nil { @@ -898,7 +956,14 @@ func parseRetryDelay(errorBody []byte) (*time.Duration, error) { if matches := re.FindStringSubmatch(message); len(matches) > 1 { seconds, err := strconv.Atoi(matches[1]) if err == nil { - return new(time.Duration(seconds) * time.Second), nil + duration := time.Duration(seconds) * time.Second + return &duration, nil + } + } + reHuman := regexp.MustCompile(`after\s+((?:\d+h)?(?:\d+m)?(?:\d+s)?)\.?`) + if matches := reHuman.FindStringSubmatch(strings.ToLower(message)); len(matches) > 1 { + if duration, err := time.ParseDuration(matches[1]); err == nil && duration > 0 { + return &duration, nil } } } diff --git a/internal/runtime/executor/gemini_executor.go 
b/internal/runtime/executor/gemini_executor.go index fb4fbfdaf2..e8fa2e405f 100644 --- a/internal/runtime/executor/gemini_executor.go +++ b/internal/runtime/executor/gemini_executor.go @@ -12,13 +12,13 @@ import ( "net/http" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -132,7 +132,8 @@ func (e *GeminiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r body = fixGeminiImageAspectRatio(baseModel, body) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) action := "generateContent" @@ -239,7 +240,8 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A body = 
fixGeminiImageAspectRatio(baseModel, body) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) baseURL := resolveGeminiBaseURL(auth) @@ -322,17 +324,28 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A } lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(payload), &param) for i := range lines { - out <- cliproxyexecutor.StreamChunk{Payload: lines[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: lines[i]}: + case <-ctx.Done(): + return + } } } lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param) for i := range lines { - out <- cliproxyexecutor.StreamChunk{Payload: lines[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: lines[i]}: + case <-ctx.Done(): + return + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } }() return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil @@ -424,7 +437,10 @@ func (e *GeminiExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut } // Refresh refreshes the authentication credentials (no-op for Gemini API key). 
-func (e *GeminiExecutor) Refresh(_ context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { +func (e *GeminiExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } return auth, nil } diff --git a/internal/runtime/executor/gemini_vertex_executor.go b/internal/runtime/executor/gemini_vertex_executor.go index 50e66219ac..b899524c6a 100644 --- a/internal/runtime/executor/gemini_vertex_executor.go +++ b/internal/runtime/executor/gemini_vertex_executor.go @@ -14,14 +14,14 @@ import ( "strings" "time" - vertexauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/vertex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + vertexauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/vertex" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -294,7 +294,10 @@ func (e *GeminiVertexExecutor) CountTokens(ctx context.Context, auth *cliproxyau } // Refresh 
refreshes the authentication credentials (no-op for Vertex). -func (e *GeminiVertexExecutor) Refresh(_ context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { +func (e *GeminiVertexExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } return auth, nil } @@ -335,8 +338,10 @@ func (e *GeminiVertexExecutor) executeWithServiceAccount(ctx context.Context, au body = fixGeminiImageAspectRatio(baseModel, body) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) + body = helps.StripVertexOpenAIResponsesToolCallIDs(body, from.String()) } action := getVertexAction(baseModel, false) @@ -455,8 +460,10 @@ func (e *GeminiVertexExecutor) executeWithAPIKey(ctx context.Context, auth *clip body = fixGeminiImageAspectRatio(baseModel, body) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) + body = helps.StripVertexOpenAIResponsesToolCallIDs(body, from.String()) action := getVertexAction(baseModel, false) if req.Metadata != nil { @@ -565,8 +572,10 @@ func (e *GeminiVertexExecutor) executeStreamWithServiceAccount(ctx context.Conte body = fixGeminiImageAspectRatio(baseModel, body) requestedModel := 
helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) + body = helps.StripVertexOpenAIResponsesToolCallIDs(body, from.String()) action := getVertexAction(baseModel, true) baseURL := vertexBaseURL(location) @@ -653,17 +662,28 @@ func (e *GeminiVertexExecutor) executeStreamWithServiceAccount(ctx context.Conte } lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param) for i := range lines { - out <- cliproxyexecutor.StreamChunk{Payload: lines[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: lines[i]}: + case <-ctx.Done(): + return + } } } lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param) for i := range lines { - out <- cliproxyexecutor.StreamChunk{Payload: lines[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: lines[i]}: + case <-ctx.Done(): + return + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } }() return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil @@ -694,8 +714,10 @@ func (e *GeminiVertexExecutor) executeStreamWithAPIKey(ctx context.Context, auth body = fixGeminiImageAspectRatio(baseModel, body) requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, 
requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, _ = sjson.SetBytes(body, "model", baseModel) + body = helps.StripVertexOpenAIResponsesToolCallIDs(body, from.String()) action := getVertexAction(baseModel, true) // For API key auth, use simpler URL format without project/location @@ -782,17 +804,28 @@ func (e *GeminiVertexExecutor) executeStreamWithAPIKey(ctx context.Context, auth } lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param) for i := range lines { - out <- cliproxyexecutor.StreamChunk{Payload: lines[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: lines[i]}: + case <-ctx.Done(): + return + } } } lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param) for i := range lines { - out <- cliproxyexecutor.StreamChunk{Payload: lines[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: lines[i]}: + case <-ctx.Done(): + return + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } }() return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil @@ -814,6 +847,7 @@ func (e *GeminiVertexExecutor) countTokensWithServiceAccount(ctx context.Context translatedReq = fixGeminiImageAspectRatio(baseModel, translatedReq) translatedReq, _ = sjson.SetBytes(translatedReq, "model", baseModel) + translatedReq = helps.StripVertexOpenAIResponsesToolCallIDs(translatedReq, from.String()) respCtx := context.WithValue(ctx, "alt", opts.Alt) translatedReq, _ = sjson.DeleteBytes(translatedReq, 
"tools") translatedReq, _ = sjson.DeleteBytes(translatedReq, "generationConfig") @@ -903,6 +937,7 @@ func (e *GeminiVertexExecutor) countTokensWithAPIKey(ctx context.Context, auth * translatedReq = fixGeminiImageAspectRatio(baseModel, translatedReq) translatedReq, _ = sjson.SetBytes(translatedReq, "model", baseModel) + translatedReq = helps.StripVertexOpenAIResponsesToolCallIDs(translatedReq, from.String()) respCtx := context.WithValue(ctx, "alt", opts.Alt) translatedReq, _ = sjson.DeleteBytes(translatedReq, "tools") translatedReq, _ = sjson.DeleteBytes(translatedReq, "generationConfig") diff --git a/internal/runtime/executor/github_copilot_executor.go b/internal/runtime/executor/github_copilot_executor.go new file mode 100644 index 0000000000..1b2844372e --- /dev/null +++ b/internal/runtime/executor/github_copilot_executor.go @@ -0,0 +1,1731 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "fmt" + "io" + "net/http" + "slices" + "strings" + "sync" + "time" + + "github.com/google/uuid" + copilotauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/copilot" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" + "github.com/tidwall/sjson" +) + +const ( + githubCopilotBaseURL = "https://api.githubcopilot.com" + githubCopilotChatPath = "/chat/completions" + githubCopilotResponsesPath = "/responses" + githubCopilotAuthType = "github-copilot" + githubCopilotTokenCacheTTL = 25 * time.Minute + // tokenExpiryBuffer is the time before expiry when we should 
refresh the token. + tokenExpiryBuffer = 5 * time.Minute + // maxScannerBufferSize is the maximum buffer size for SSE scanning (20MB). + maxScannerBufferSize = 20_971_520 + + // Copilot API header values. + copilotUserAgent = "GitHubCopilotChat/0.35.0" + copilotEditorVersion = "vscode/1.107.0" + copilotPluginVersion = "copilot-chat/0.35.0" + copilotIntegrationID = "vscode-chat" + copilotOpenAIIntent = "conversation-edits" + copilotGitHubAPIVer = "2025-04-01" +) + +// GitHubCopilotExecutor handles requests to the GitHub Copilot API. +type GitHubCopilotExecutor struct { + cfg *config.Config + mu sync.RWMutex + cache map[string]*cachedAPIToken +} + +// cachedAPIToken stores a cached Copilot API token with its expiry. +type cachedAPIToken struct { + token string + apiEndpoint string + expiresAt time.Time +} + +// NewGitHubCopilotExecutor constructs a new executor instance. +func NewGitHubCopilotExecutor(cfg *config.Config) *GitHubCopilotExecutor { + return &GitHubCopilotExecutor{ + cfg: cfg, + cache: make(map[string]*cachedAPIToken), + } +} + +// Identifier implements ProviderExecutor. +func (e *GitHubCopilotExecutor) Identifier() string { return githubCopilotAuthType } + +// PrepareRequest implements ProviderExecutor. +func (e *GitHubCopilotExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + ctx := req.Context() + if ctx == nil { + ctx = context.Background() + } + apiToken, _, errToken := e.ensureAPIToken(ctx, auth) + if errToken != nil { + return errToken + } + e.applyHeaders(req, apiToken, nil) + return nil +} + +// HttpRequest injects GitHub Copilot credentials into the request and executes it. 
+func (e *GitHubCopilotExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("github-copilot executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if errPrepare := e.PrepareRequest(httpReq, auth); errPrepare != nil { + return nil, errPrepare + } + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +// Execute handles non-streaming requests to GitHub Copilot. +func (e *GitHubCopilotExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + if nativeExec, nativeAuth, nativeReq, ok, errGateway := e.nativeGateway(ctx, auth, req); errGateway != nil { + return resp, errGateway + } else if ok { + return nativeExec.Execute(ctx, nativeAuth, nativeReq, opts) + } + + apiToken, baseURL, errToken := e.ensureAPIToken(ctx, auth) + if errToken != nil { + return resp, errToken + } + + reporter := newUsageReporter(ctx, e.Identifier(), req.Model, auth) + defer reporter.trackFailure(ctx, &err) + + from := opts.SourceFormat + useResponses := useGitHubCopilotResponsesEndpoint(from, req.Model) + to := sdktranslator.FromString("openai") + if useResponses { + to = sdktranslator.FromString("openai-response") + } + originalPayload := bytes.Clone(req.Payload) + if len(opts.OriginalRequest) > 0 { + originalPayload = bytes.Clone(opts.OriginalRequest) + } + originalTranslated := sdktranslator.TranslateRequest(from, to, req.Model, originalPayload, false) + body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), false) + body = e.normalizeModel(req.Model, body) + body = flattenAssistantContent(body) + body = stripUnsupportedBetas(body) + + // Detect vision content before input normalization removes messages + hasVision := detectVisionContent(body) + + 
thinkingProvider := "openai" + if useResponses { + thinkingProvider = "codex" + } + body, err = thinking.ApplyThinking(body, req.Model, from.String(), thinkingProvider, e.Identifier()) + if err != nil { + return resp, err + } + + if useResponses { + body = normalizeGitHubCopilotResponsesInput(body) + body = normalizeGitHubCopilotResponsesTools(body) + body = applyGitHubCopilotResponsesDefaults(body) + } else { + body = normalizeGitHubCopilotChatTools(body) + } + requestedModel := payloadRequestedModel(opts, req.Model) + body = applyPayloadConfigWithRoot(e.cfg, req.Model, to.String(), "", body, originalTranslated, requestedModel) + body, _ = sjson.SetBytes(body, "stream", false) + + path := githubCopilotChatPath + if useResponses { + path = githubCopilotResponsesPath + } + url := baseURL + path + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body)) + if err != nil { + return resp, err + } + e.applyHeaders(httpReq, apiToken, body) + + // Add Copilot-Vision-Request header if the request contains vision content + if hasVision { + httpReq.Header.Set("Copilot-Vision-Request", "true") + } + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: body, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("github-copilot executor: close response body error: %v", errClose) + } + }() + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, 
httpResp.Header.Clone()) + + if !isHTTPSuccess(httpResp.StatusCode) { + data, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, data) + log.Debugf("github-copilot executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), data)) + err = statusErr{code: httpResp.StatusCode, msg: string(data)} + return resp, err + } + + data, err := io.ReadAll(httpResp.Body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + appendAPIResponseChunk(ctx, e.cfg, data) + + detail := parseOpenAIUsage(data) + if useResponses && detail.TotalTokens == 0 { + detail = parseOpenAIResponsesUsage(data) + } + if detail.TotalTokens > 0 { + reporter.publish(ctx, detail) + } + + var param any + var converted []byte + if useResponses && from.String() == "claude" { + converted = translateGitHubCopilotResponsesNonStreamToClaude(data) + } else { + data = normalizeGitHubCopilotReasoningField(data) + converted = sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, data, &param) + } + resp = cliproxyexecutor.Response{Payload: converted, Headers: httpResp.Header.Clone()} + reporter.ensurePublished(ctx) + return resp, nil +} + +// ExecuteStream handles streaming requests to GitHub Copilot.
+func (e *GitHubCopilotExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + if nativeExec, nativeAuth, nativeReq, ok, errGateway := e.nativeGateway(ctx, auth, req); errGateway != nil { + return nil, errGateway + } else if ok { + return nativeExec.ExecuteStream(ctx, nativeAuth, nativeReq, opts) + } + + apiToken, baseURL, errToken := e.ensureAPIToken(ctx, auth) + if errToken != nil { + return nil, errToken + } + + reporter := newUsageReporter(ctx, e.Identifier(), req.Model, auth) + defer reporter.trackFailure(ctx, &err) + + from := opts.SourceFormat + useResponses := useGitHubCopilotResponsesEndpoint(from, req.Model) + to := sdktranslator.FromString("openai") + if useResponses { + to = sdktranslator.FromString("openai-response") + } + originalPayload := bytes.Clone(req.Payload) + if len(opts.OriginalRequest) > 0 { + originalPayload = bytes.Clone(opts.OriginalRequest) + } + originalTranslated := sdktranslator.TranslateRequest(from, to, req.Model, originalPayload, false) + body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true) + body = e.normalizeModel(req.Model, body) + body = flattenAssistantContent(body) + body = stripUnsupportedBetas(body) + + // Detect vision content before input normalization removes messages + hasVision := detectVisionContent(body) + + thinkingProvider := "openai" + if useResponses { + thinkingProvider = "codex" + } + body, err = thinking.ApplyThinking(body, req.Model, from.String(), thinkingProvider, e.Identifier()) + if err != nil { + return nil, err + } + + if useResponses { + body = normalizeGitHubCopilotResponsesInput(body) + body = normalizeGitHubCopilotResponsesTools(body) + body = applyGitHubCopilotResponsesDefaults(body) + } else { + body = normalizeGitHubCopilotChatTools(body) + } + requestedModel := payloadRequestedModel(opts, req.Model) + body = 
applyPayloadConfigWithRoot(e.cfg, req.Model, to.String(), "", body, originalTranslated, requestedModel) + body, _ = sjson.SetBytes(body, "stream", true) + // Enable stream options for usage stats in stream + if !useResponses { + body, _ = sjson.SetBytes(body, "stream_options.include_usage", true) + } + + path := githubCopilotChatPath + if useResponses { + path = githubCopilotResponsesPath + } + url := baseURL + path + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body)) + if err != nil { + return nil, err + } + e.applyHeaders(httpReq, apiToken, body) + + // Add Copilot-Vision-Request header if the request contains vision content + if hasVision { + httpReq.Header.Set("Copilot-Vision-Request", "true") + } + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: body, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return nil, err + } + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + + if !isHTTPSuccess(httpResp.StatusCode) { + data, readErr := io.ReadAll(httpResp.Body) + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("github-copilot executor: close response body error: %v", errClose) + } + if readErr != nil { + recordAPIResponseError(ctx, e.cfg, readErr) + return nil, readErr + } + appendAPIResponseChunk(ctx, e.cfg, data) + log.Debugf("github-copilot executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), data)) + err = 
statusErr{code: httpResp.StatusCode, msg: string(data)} + return nil, err + } + + out := make(chan cliproxyexecutor.StreamChunk) + + go func() { + defer close(out) + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("github-copilot executor: close response body error: %v", errClose) + } + }() + + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(nil, maxScannerBufferSize) + var param any + + for scanner.Scan() { + line := scanner.Bytes() + appendAPIResponseChunk(ctx, e.cfg, line) + + // Parse SSE data + if bytes.HasPrefix(line, dataTag) { + data := bytes.TrimSpace(line[len(dataTag):]) + if bytes.Equal(data, []byte("[DONE]")) { + continue + } + if detail, ok := parseOpenAIStreamUsage(line); ok { + reporter.publish(ctx, detail) + } else if useResponses { + if detail, ok := parseOpenAIResponsesStreamUsage(line); ok { + reporter.publish(ctx, detail) + } + } + } + + var chunks [][]byte + if useResponses && from.String() == "claude" { + chunks = translateGitHubCopilotResponsesStreamToClaude(bytes.Clone(line), &param) + } else { + // Strip SSE "data: " prefix before reasoning field normalization, + // since normalizeGitHubCopilotReasoningField expects pure JSON. + // Re-wrap with the prefix afterward for the translator. + normalizedLine := bytes.Clone(line) + if bytes.HasPrefix(line, dataTag) { + sseData := bytes.TrimSpace(line[len(dataTag):]) + if !bytes.Equal(sseData, []byte("[DONE]")) && gjson.ValidBytes(sseData) { + normalized := normalizeGitHubCopilotReasoningField(bytes.Clone(sseData)) + if !bytes.Equal(normalized, sseData) { + normalizedLine = append(append([]byte(nil), dataTag...), normalized...)
+ } + } + } + chunks = sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, normalizedLine, &param) + } + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: bytes.Clone(chunks[i])} + } + } + + if errScan := scanner.Err(); errScan != nil { + recordAPIResponseError(ctx, e.cfg, errScan) + reporter.publishFailure(ctx) + out <- cliproxyexecutor.StreamChunk{Err: errScan} + } else { + reporter.ensurePublished(ctx) + } + }() + + return &cliproxyexecutor.StreamResult{ + Headers: httpResp.Header.Clone(), + Chunks: out, + }, nil +} + +// CountTokens estimates token count locally using tiktoken, since the GitHub +// Copilot API does not expose a dedicated token counting endpoint. +func (e *GitHubCopilotExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + if nativeExec, nativeAuth, nativeReq, ok, errGateway := e.nativeGateway(ctx, auth, req); errGateway != nil { + return cliproxyexecutor.Response{}, errGateway + } else if ok { + return nativeExec.CountTokens(ctx, nativeAuth, nativeReq, opts) + } + + baseModel := thinking.ParseSuffix(req.Model).ModelName + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false) + + enc, err := helps.TokenizerForModel(baseModel) + if err != nil { + return cliproxyexecutor.Response{}, fmt.Errorf("github copilot executor: tokenizer init failed: %w", err) + } + + count, err := helps.CountOpenAIChatTokens(enc, translated) + if err != nil { + return cliproxyexecutor.Response{}, fmt.Errorf("github copilot executor: token counting failed: %w", err) + } + + usageJSON := helps.BuildOpenAIUsageJSON(count) + translatedUsage := sdktranslator.TranslateTokenCount(ctx, to, from, count, usageJSON) + return cliproxyexecutor.Response{Payload: translatedUsage}, nil +} + +// Refresh validates
that the GitHub token is still valid. +// GitHub OAuth tokens do not expire on a fixed schedule, so we only validate. +func (e *GitHubCopilotExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if auth == nil { + return nil, statusErr{code: http.StatusUnauthorized, msg: "missing auth"} + } + + // Get the GitHub access token + accessToken := metaStringValue(auth.Metadata, "access_token") + if accessToken == "" { + return auth, nil + } + + // Validate that the token can still obtain a Copilot API token + copilotAuth := copilotauth.NewCopilotAuth(e.cfg) + _, err := copilotAuth.GetCopilotAPIToken(ctx, accessToken) + if err != nil { + return nil, statusErr{code: http.StatusUnauthorized, msg: fmt.Sprintf("github-copilot token validation failed: %v", err)} + } + + return auth, nil +} + +func (e *GitHubCopilotExecutor) nativeGateway( + ctx context.Context, + auth *cliproxyauth.Auth, + req cliproxyexecutor.Request, +) (cliproxyauth.ProviderExecutor, *cliproxyauth.Auth, cliproxyexecutor.Request, bool, error) { + if !githubCopilotUsesAnthropicGateway(req.Model) { + return nil, nil, req, false, nil + } + if auth == nil || metaStringValue(auth.Metadata, "access_token") == "" { + return nil, nil, req, false, nil + } + apiToken, baseURL, err := e.ensureAPIToken(ctx, auth) + if err != nil { + return nil, nil, req, false, err + } + nativeAuth := buildCopilotAnthropicGatewayAuth(auth, apiToken, baseURL, req.Payload) + if nativeAuth == nil { + return nil, nil, req, false, nil + } + return NewClaudeExecutor(e.cfg), nativeAuth, req, true, nil +} + +func githubCopilotUsesAnthropicGateway(model string) bool { + baseModel := strings.ToLower(thinking.ParseSuffix(model).ModelName) + return strings.HasPrefix(baseModel, "claude-") +} + +func buildCopilotAnthropicGatewayAuth(auth *cliproxyauth.Auth, apiToken, baseURL string, body []byte) *cliproxyauth.Auth { + apiToken = strings.TrimSpace(apiToken) + baseURL = strings.TrimRight(strings.TrimSpace(baseURL), "/") + if
apiToken == "" || baseURL == "" { + return nil + } + + nativeAuth := auth.Clone() + if nativeAuth == nil { + nativeAuth = &cliproxyauth.Auth{} + } + nativeAuth.Provider = "claude" + if nativeAuth.Attributes == nil { + nativeAuth.Attributes = make(map[string]string) + } + nativeAuth.Attributes["api_key"] = apiToken + nativeAuth.Attributes["base_url"] = baseURL + nativeAuth.Attributes["header:Content-Type"] = "application/json" + nativeAuth.Attributes["header:Accept"] = "application/json" + nativeAuth.Attributes["header:User-Agent"] = copilotUserAgent + nativeAuth.Attributes["header:Editor-Version"] = copilotEditorVersion + nativeAuth.Attributes["header:Editor-Plugin-Version"] = copilotPluginVersion + nativeAuth.Attributes["header:Openai-Intent"] = copilotOpenAIIntent + nativeAuth.Attributes["header:Copilot-Integration-Id"] = copilotIntegrationID + nativeAuth.Attributes["header:X-Github-Api-Version"] = copilotGitHubAPIVer + nativeAuth.Attributes["header:X-Request-Id"] = uuid.NewString() + if isAgentInitiated(body) { + nativeAuth.Attributes["header:X-Initiator"] = "agent" + } else { + nativeAuth.Attributes["header:X-Initiator"] = "user" + } + if detectVisionContent(body) { + nativeAuth.Attributes["header:Copilot-Vision-Request"] = "true" + } + return nativeAuth +} + +// ensureAPIToken gets or refreshes the Copilot API token. 
+func (e *GitHubCopilotExecutor) ensureAPIToken(ctx context.Context, auth *cliproxyauth.Auth) (string, string, error) { + if auth == nil { + return "", "", statusErr{code: http.StatusUnauthorized, msg: "missing auth"} + } + + // Get the GitHub access token + accessToken := metaStringValue(auth.Metadata, "access_token") + if accessToken == "" { + return "", "", statusErr{code: http.StatusUnauthorized, msg: "missing github access token"} + } + + // Check for cached API token using thread-safe access + e.mu.RLock() + if cached, ok := e.cache[accessToken]; ok && cached.expiresAt.After(time.Now().Add(tokenExpiryBuffer)) { + e.mu.RUnlock() + return cached.token, cached.apiEndpoint, nil + } + e.mu.RUnlock() + + // Get a new Copilot API token + copilotAuth := copilotauth.NewCopilotAuth(e.cfg) + apiToken, err := copilotAuth.GetCopilotAPIToken(ctx, accessToken) + if err != nil { + return "", "", statusErr{code: http.StatusUnauthorized, msg: fmt.Sprintf("failed to get copilot api token: %v", err)} + } + + // Use endpoint from token response, fall back to default + apiEndpoint := githubCopilotBaseURL + if apiToken.Endpoints.API != "" { + apiEndpoint = strings.TrimRight(apiToken.Endpoints.API, "/") + } + + // Cache the token with thread-safe access + expiresAt := time.Now().Add(githubCopilotTokenCacheTTL) + if apiToken.ExpiresAt > 0 { + expiresAt = time.Unix(apiToken.ExpiresAt, 0) + } + e.mu.Lock() + e.cache[accessToken] = &cachedAPIToken{ + token: apiToken.Token, + apiEndpoint: apiEndpoint, + expiresAt: expiresAt, + } + e.mu.Unlock() + + return apiToken.Token, apiEndpoint, nil +} + +// applyHeaders sets the required headers for GitHub Copilot API requests. 
+func (e *GitHubCopilotExecutor) applyHeaders(r *http.Request, apiToken string, body []byte) { + r.Header.Set("Content-Type", "application/json") + r.Header.Set("Authorization", "Bearer "+apiToken) + r.Header.Set("Accept", "application/json") + r.Header.Set("User-Agent", copilotUserAgent) + r.Header.Set("Editor-Version", copilotEditorVersion) + r.Header.Set("Editor-Plugin-Version", copilotPluginVersion) + r.Header.Set("Openai-Intent", copilotOpenAIIntent) + r.Header.Set("Copilot-Integration-Id", copilotIntegrationID) + r.Header.Set("X-Github-Api-Version", copilotGitHubAPIVer) + r.Header.Set("X-Request-Id", uuid.NewString()) + + initiator := "user" + if isAgentInitiated(body) { + initiator = "agent" + } + r.Header.Set("X-Initiator", initiator) +} + +// isAgentInitiated determines whether the current request is agent-initiated +// (tool callbacks, continuations) rather than user-initiated (new user prompt). +// +// GitHub Copilot uses the X-Initiator header for billing: +// - "user" → consumes premium request quota +// - "agent" → free (tool loops, continuations) +// +// The challenge: Claude Code sends tool results as role:"user" messages with +// content type "tool_result". After translation to OpenAI format, the tool_result +// part becomes a separate role:"tool" message, but if the original Claude message +// also contained text content (e.g. skill invocations, attachment descriptions), +// a role:"user" message is emitted AFTER the tool message, making the last message +// appear user-initiated when it's actually part of an agent tool loop. +// +// VSCode Copilot Chat solves this with explicit flags (iterationNumber, +// isContinuation, subAgentInvocationId). Since CPA doesn't have these flags, +// we infer agent status by checking whether the conversation contains prior +// assistant/tool messages — if it does, the current request is a continuation. 
+// +// References: +// - opencode#8030, opencode#15824: same root cause and fix approach +// - vscode-copilot-chat: toolCallingLoop.ts (iterationNumber === 0) +// - pi-ai: github-copilot-headers.ts (last message role check) +func isAgentInitiated(body []byte) bool { + if len(body) == 0 { + return false + } + + // Chat Completions API: check messages array + if messages := gjson.GetBytes(body, "messages"); messages.Exists() && messages.IsArray() { + arr := messages.Array() + if len(arr) == 0 { + return false + } + + lastRole := "" + for i := len(arr) - 1; i >= 0; i-- { + if r := arr[i].Get("role").String(); r != "" { + lastRole = r + break + } + } + + // If last message is assistant or tool, clearly agent-initiated. + if lastRole == "assistant" || lastRole == "tool" { + return true + } + + // If last message is "user", check whether it contains tool results + // (indicating a tool-loop continuation) or if the preceding message + // is an assistant tool_use. This is more precise than checking for + // any prior assistant message, which would false-positive on genuine + // multi-turn follow-ups. 
+ if lastRole == "user" { + // Check if the last user message contains tool_result content + lastContent := arr[len(arr)-1].Get("content") + if lastContent.Exists() && lastContent.IsArray() { + for _, part := range lastContent.Array() { + if part.Get("type").String() == "tool_result" { + return true + } + } + } + // Check if the second-to-last message is an assistant with tool_use + if len(arr) >= 2 { + prev := arr[len(arr)-2] + if prev.Get("role").String() == "assistant" { + prevContent := prev.Get("content") + if prevContent.Exists() && prevContent.IsArray() { + for _, part := range prevContent.Array() { + if part.Get("type").String() == "tool_use" { + return true + } + } + } + } + } + } + + return false + } + + // Responses API: check input array + if inputs := gjson.GetBytes(body, "input"); inputs.Exists() && inputs.IsArray() { + arr := inputs.Array() + if len(arr) == 0 { + return false + } + + // Check last item + last := arr[len(arr)-1] + if role := last.Get("role").String(); role == "assistant" { + return true + } + switch last.Get("type").String() { + case "function_call", "function_call_arguments", "computer_call": + return true + case "function_call_output", "function_call_response", "tool_result", "computer_call_output": + return true + } + + // If last item is user-role, check for prior non-user items + for _, item := range arr { + if role := item.Get("role").String(); role == "assistant" { + return true + } + switch item.Get("type").String() { + case "function_call", "function_call_output", "function_call_response", + "function_call_arguments", "computer_call", "computer_call_output": + return true + } + } + } + + return false +} + +// detectVisionContent checks if the request body contains vision/image content. +// Returns true if the request includes image_url or image type content blocks. 
+func detectVisionContent(body []byte) bool { + // Parse messages array + messagesResult := gjson.GetBytes(body, "messages") + if !messagesResult.Exists() || !messagesResult.IsArray() { + return false + } + + // Check each message for vision content + for _, message := range messagesResult.Array() { + content := message.Get("content") + + // If content is an array, check each content block + if content.IsArray() { + for _, block := range content.Array() { + blockType := block.Get("type").String() + // Check for image_url or image type + if blockType == "image_url" || blockType == "image" { + return true + } + } + } + } + + return false +} + +// normalizeModel strips the suffix (e.g. "(medium)") from the model name +// before sending to GitHub Copilot, as the upstream API does not accept +// suffixed model identifiers. +func (e *GitHubCopilotExecutor) normalizeModel(model string, body []byte) []byte { + baseModel := thinking.ParseSuffix(model).ModelName + if baseModel != model { + body, _ = sjson.SetBytes(body, "model", baseModel) + } + return body +} + +// copilotUnsupportedBetas lists beta headers that are Anthropic-specific and +// must not be forwarded to GitHub Copilot. The context-1m beta enables 1M +// context on Anthropic's API, but Copilot's Claude models are limited to +// ~128K-200K. Passing it through would not enable 1M on Copilot, but stripping +// it from the translated body avoids confusing downstream translators. +var copilotUnsupportedBetas = []string{ + "context-1m-2025-08-07", +} + +// stripUnsupportedBetas removes Anthropic-specific beta entries from the +// translated request body. In OpenAI format the betas may appear under +// "metadata.betas" or a top-level "betas" array; in Claude format they sit at +// "betas". This function checks all known locations. 
+func stripUnsupportedBetas(body []byte) []byte { + betaPaths := []string{"betas", "metadata.betas"} + for _, path := range betaPaths { + arr := gjson.GetBytes(body, path) + if !arr.Exists() || !arr.IsArray() { + continue + } + var filtered []string + changed := false + for _, item := range arr.Array() { + beta := item.String() + if isCopilotUnsupportedBeta(beta) { + changed = true + continue + } + filtered = append(filtered, beta) + } + if !changed { + continue + } + if len(filtered) == 0 { + body, _ = sjson.DeleteBytes(body, path) + } else { + body, _ = sjson.SetBytes(body, path, filtered) + } + } + return body +} + +func isCopilotUnsupportedBeta(beta string) bool { + return slices.Contains(copilotUnsupportedBetas, beta) +} + +// normalizeGitHubCopilotReasoningField maps Copilot's non-standard +// 'reasoning_text' field to the standard OpenAI 'reasoning_content' field +// that the SDK translator expects. This handles both streaming deltas +// (choices[].delta.reasoning_text) and non-streaming messages +// (choices[].message.reasoning_text). The field is only renamed when +// 'reasoning_content' is absent or null, preserving standard responses. +// All choices are processed to support n>1 requests. 
+func normalizeGitHubCopilotReasoningField(data []byte) []byte { + choices := gjson.GetBytes(data, "choices") + if !choices.Exists() || !choices.IsArray() { + return data + } + for i := range choices.Array() { + // Non-streaming: choices[i].message.reasoning_text + msgRT := fmt.Sprintf("choices.%d.message.reasoning_text", i) + msgRC := fmt.Sprintf("choices.%d.message.reasoning_content", i) + if rt := gjson.GetBytes(data, msgRT); rt.Exists() && rt.String() != "" { + if rc := gjson.GetBytes(data, msgRC); !rc.Exists() || rc.Type == gjson.Null || rc.String() == "" { + data, _ = sjson.SetBytes(data, msgRC, rt.String()) + } + } + // Streaming: choices[i].delta.reasoning_text + deltaRT := fmt.Sprintf("choices.%d.delta.reasoning_text", i) + deltaRC := fmt.Sprintf("choices.%d.delta.reasoning_content", i) + if rt := gjson.GetBytes(data, deltaRT); rt.Exists() && rt.String() != "" { + if rc := gjson.GetBytes(data, deltaRC); !rc.Exists() || rc.Type == gjson.Null || rc.String() == "" { + data, _ = sjson.SetBytes(data, deltaRC, rt.String()) + } + } + } + return data +} + +func useGitHubCopilotResponsesEndpoint(sourceFormat sdktranslator.Format, model string) bool { + if sourceFormat.String() == "openai-response" { + return true + } + baseModel := strings.ToLower(thinking.ParseSuffix(model).ModelName) + if info := registry.GetGlobalRegistry().GetModelInfo(baseModel, githubCopilotAuthType); info != nil { + return len(info.SupportedEndpoints) > 0 && !containsEndpoint(info.SupportedEndpoints, githubCopilotChatPath) && containsEndpoint(info.SupportedEndpoints, githubCopilotResponsesPath) + } + if info := lookupGitHubCopilotStaticModelInfo(baseModel); info != nil { + return len(info.SupportedEndpoints) > 0 && !containsEndpoint(info.SupportedEndpoints, githubCopilotChatPath) && containsEndpoint(info.SupportedEndpoints, githubCopilotResponsesPath) + } + return strings.Contains(baseModel, "codex") +} + +func lookupGitHubCopilotStaticModelInfo(model string) *registry.ModelInfo { + for _, 
info := range registry.GetStaticModelDefinitionsByChannel(githubCopilotAuthType) { + if info != nil && strings.EqualFold(info.ID, model) { + return info + } + } + return nil +} + +func containsEndpoint(endpoints []string, endpoint string) bool { + return slices.Contains(endpoints, endpoint) +} + +// flattenAssistantContent converts assistant message content from array format +// to a joined string. GitHub Copilot requires assistant content as a string; +// sending it as an array causes Claude models to re-answer all previous prompts. +func flattenAssistantContent(body []byte) []byte { + messages := gjson.GetBytes(body, "messages") + if !messages.Exists() || !messages.IsArray() { + return body + } + result := body + for i, msg := range messages.Array() { + if msg.Get("role").String() != "assistant" { + continue + } + content := msg.Get("content") + if !content.Exists() || !content.IsArray() { + continue + } + // Skip flattening if the content contains non-text blocks (tool_use, thinking, etc.) 
+ hasNonText := false + for _, part := range content.Array() { + if t := part.Get("type").String(); t != "" && t != "text" { + hasNonText = true + break + } + } + if hasNonText { + continue + } + var textParts []string + for _, part := range content.Array() { + if part.Get("type").String() == "text" { + if t := part.Get("text").String(); t != "" { + textParts = append(textParts, t) + } + } + } + joined := strings.Join(textParts, "") + path := fmt.Sprintf("messages.%d.content", i) + result, _ = sjson.SetBytes(result, path, joined) + } + return result +} + +func normalizeGitHubCopilotChatTools(body []byte) []byte { + tools := gjson.GetBytes(body, "tools") + if tools.Exists() { + filtered := "[]" + if tools.IsArray() { + for _, tool := range tools.Array() { + if tool.Get("type").String() != "function" { + continue + } + filtered, _ = sjson.SetRaw(filtered, "-1", tool.Raw) + } + } + body, _ = sjson.SetRawBytes(body, "tools", []byte(filtered)) + } + + toolChoice := gjson.GetBytes(body, "tool_choice") + if !toolChoice.Exists() { + return body + } + if toolChoice.Type == gjson.String { + switch toolChoice.String() { + case "auto", "none", "required": + return body + } + } + body, _ = sjson.SetBytes(body, "tool_choice", "auto") + return body +} + +func normalizeGitHubCopilotResponsesInput(body []byte) []byte { + body = stripGitHubCopilotResponsesUnsupportedFields(body) + input := gjson.GetBytes(body, "input") + if input.Exists() { + // If input is already a string or array, keep it as-is. + if input.Type == gjson.String || input.IsArray() { + return body + } + // Non-string/non-array input: stringify as fallback. + body, _ = sjson.SetBytes(body, "input", input.Raw) + return body + } + + // Convert Claude messages format to OpenAI Responses API input array. + // This preserves the conversation structure (roles, tool calls, tool results) + // which is critical for multi-turn tool-use conversations. 
+ inputArr := "[]" + + // System messages → developer role + if system := gjson.GetBytes(body, "system"); system.Exists() { + var systemParts []string + if system.IsArray() { + for _, part := range system.Array() { + if txt := part.Get("text").String(); txt != "" { + systemParts = append(systemParts, txt) + } + } + } else if system.Type == gjson.String { + systemParts = append(systemParts, system.String()) + } + if len(systemParts) > 0 { + msg := `{"type":"message","role":"developer","content":[]}` + for _, txt := range systemParts { + part := `{"type":"input_text","text":""}` + part, _ = sjson.Set(part, "text", txt) + msg, _ = sjson.SetRaw(msg, "content.-1", part) + } + inputArr, _ = sjson.SetRaw(inputArr, "-1", msg) + } + } + + // Messages → structured input items + if messages := gjson.GetBytes(body, "messages"); messages.Exists() && messages.IsArray() { + for _, msg := range messages.Array() { + role := msg.Get("role").String() + content := msg.Get("content") + + if !content.Exists() { + continue + } + + // Simple string content + if content.Type == gjson.String { + textType := "input_text" + if role == "assistant" { + textType = "output_text" + } + item := `{"type":"message","role":"","content":[]}` + item, _ = sjson.Set(item, "role", role) + part := fmt.Sprintf(`{"type":"%s","text":""}`, textType) + part, _ = sjson.Set(part, "text", content.String()) + item, _ = sjson.SetRaw(item, "content.-1", part) + inputArr, _ = sjson.SetRaw(inputArr, "-1", item) + continue + } + + if !content.IsArray() { + continue + } + + // Array content: split into message parts vs tool items + var msgParts []string + for _, c := range content.Array() { + cType := c.Get("type").String() + switch cType { + case "text": + textType := "input_text" + if role == "assistant" { + textType = "output_text" + } + part := fmt.Sprintf(`{"type":"%s","text":""}`, textType) + part, _ = sjson.Set(part, "text", c.Get("text").String()) + msgParts = append(msgParts, part) + case "image": + source := 
c.Get("source") + if source.Exists() { + data := source.Get("data").String() + if data == "" { + data = source.Get("base64").String() + } + mediaType := source.Get("media_type").String() + if mediaType == "" { + mediaType = source.Get("mime_type").String() + } + if mediaType == "" { + mediaType = "application/octet-stream" + } + if data != "" { + part := `{"type":"input_image","image_url":""}` + part, _ = sjson.Set(part, "image_url", fmt.Sprintf("data:%s;base64,%s", mediaType, data)) + msgParts = append(msgParts, part) + } + } + case "tool_use": + // Flush any accumulated message parts first + if len(msgParts) > 0 { + item := `{"type":"message","role":"","content":[]}` + item, _ = sjson.Set(item, "role", role) + for _, p := range msgParts { + item, _ = sjson.SetRaw(item, "content.-1", p) + } + inputArr, _ = sjson.SetRaw(inputArr, "-1", item) + msgParts = nil + } + fc := `{"type":"function_call","call_id":"","name":"","arguments":""}` + fc, _ = sjson.Set(fc, "call_id", c.Get("id").String()) + fc, _ = sjson.Set(fc, "name", c.Get("name").String()) + if inputRaw := c.Get("input"); inputRaw.Exists() { + fc, _ = sjson.Set(fc, "arguments", inputRaw.Raw) + } + inputArr, _ = sjson.SetRaw(inputArr, "-1", fc) + case "tool_result": + // Flush any accumulated message parts first + if len(msgParts) > 0 { + item := `{"type":"message","role":"","content":[]}` + item, _ = sjson.Set(item, "role", role) + for _, p := range msgParts { + item, _ = sjson.SetRaw(item, "content.-1", p) + } + inputArr, _ = sjson.SetRaw(inputArr, "-1", item) + msgParts = nil + } + fco := `{"type":"function_call_output","call_id":"","output":""}` + fco, _ = sjson.Set(fco, "call_id", c.Get("tool_use_id").String()) + // Extract output text + resultContent := c.Get("content") + if resultContent.Type == gjson.String { + fco, _ = sjson.Set(fco, "output", resultContent.String()) + } else if resultContent.IsArray() { + var resultParts []string + for _, rc := range resultContent.Array() { + if txt := 
rc.Get("text").String(); txt != "" { + resultParts = append(resultParts, txt) + } + } + fco, _ = sjson.Set(fco, "output", strings.Join(resultParts, "\n")) + } else if resultContent.Exists() { + fco, _ = sjson.Set(fco, "output", resultContent.String()) + } + inputArr, _ = sjson.SetRaw(inputArr, "-1", fco) + case "thinking": + // Skip thinking blocks - not part of the API input + } + } + + // Flush remaining message parts + if len(msgParts) > 0 { + item := `{"type":"message","role":"","content":[]}` + item, _ = sjson.Set(item, "role", role) + for _, p := range msgParts { + item, _ = sjson.SetRaw(item, "content.-1", p) + } + inputArr, _ = sjson.SetRaw(inputArr, "-1", item) + } + } + } + + body, _ = sjson.SetRawBytes(body, "input", []byte(inputArr)) + // Remove messages/system since we've converted them to input + body, _ = sjson.DeleteBytes(body, "messages") + body, _ = sjson.DeleteBytes(body, "system") + return body +} + +func stripGitHubCopilotResponsesUnsupportedFields(body []byte) []byte { + // GitHub Copilot /responses rejects service_tier, so always remove it. + body, _ = sjson.DeleteBytes(body, "service_tier") + return body +} + +// applyGitHubCopilotResponsesDefaults sets required fields for the Responses API +// that both vscode-copilot-chat and pi-ai always include. 
+// +// References: +// - vscode-copilot-chat: src/platform/endpoint/node/responsesApi.ts +// - pi-ai (badlogic/pi-mono): packages/ai/src/providers/openai-responses.ts +func applyGitHubCopilotResponsesDefaults(body []byte) []byte { + // store: false — prevents request/response storage + if !gjson.GetBytes(body, "store").Exists() { + body, _ = sjson.SetBytes(body, "store", false) + } + + // include: ["reasoning.encrypted_content"] — enables reasoning content + // reuse across turns, avoiding redundant computation + if !gjson.GetBytes(body, "include").Exists() { + body, _ = sjson.SetRawBytes(body, "include", []byte(`["reasoning.encrypted_content"]`)) + } + + // If reasoning.effort is set but reasoning.summary is not, default to "auto" + if gjson.GetBytes(body, "reasoning.effort").Exists() && !gjson.GetBytes(body, "reasoning.summary").Exists() { + body, _ = sjson.SetBytes(body, "reasoning.summary", "auto") + } + + return body +} + +func normalizeGitHubCopilotResponsesTools(body []byte) []byte { + tools := gjson.GetBytes(body, "tools") + if tools.Exists() { + filtered := "[]" + if tools.IsArray() { + for _, tool := range tools.Array() { + toolType := tool.Get("type").String() + if isGitHubCopilotResponsesBuiltinTool(toolType) { + filtered, _ = sjson.SetRaw(filtered, "-1", tool.Raw) + continue + } + // Accept OpenAI format (type="function") and Claude format + // (no type field, but has top-level name + input_schema). 
+ if toolType != "" && toolType != "function" { + continue + } + name := tool.Get("name").String() + if name == "" { + name = tool.Get("function.name").String() + } + if name == "" { + continue + } + normalized := `{"type":"function","name":""}` + normalized, _ = sjson.Set(normalized, "name", name) + if desc := tool.Get("description").String(); desc != "" { + normalized, _ = sjson.Set(normalized, "description", desc) + } else if desc = tool.Get("function.description").String(); desc != "" { + normalized, _ = sjson.Set(normalized, "description", desc) + } + if params := tool.Get("parameters"); params.Exists() { + normalized, _ = sjson.SetRaw(normalized, "parameters", params.Raw) + } else if params = tool.Get("function.parameters"); params.Exists() { + normalized, _ = sjson.SetRaw(normalized, "parameters", params.Raw) + } else if params = tool.Get("input_schema"); params.Exists() { + normalized, _ = sjson.SetRaw(normalized, "parameters", params.Raw) + } + filtered, _ = sjson.SetRaw(filtered, "-1", normalized) + } + } + body, _ = sjson.SetRawBytes(body, "tools", []byte(filtered)) + } + + toolChoice := gjson.GetBytes(body, "tool_choice") + if !toolChoice.Exists() { + return body + } + if toolChoice.Type == gjson.String { + switch toolChoice.String() { + case "auto", "none", "required": + return body + default: + body, _ = sjson.SetBytes(body, "tool_choice", "auto") + return body + } + } + if toolChoice.Type == gjson.JSON { + choiceType := toolChoice.Get("type").String() + if isGitHubCopilotResponsesBuiltinTool(choiceType) { + body, _ = sjson.SetRawBytes(body, "tool_choice", []byte(toolChoice.Raw)) + return body + } + if choiceType == "function" { + name := toolChoice.Get("name").String() + if name == "" { + name = toolChoice.Get("function.name").String() + } + if name != "" { + normalized := `{"type":"function","name":""}` + normalized, _ = sjson.Set(normalized, "name", name) + body, _ = sjson.SetRawBytes(body, "tool_choice", []byte(normalized)) + return body + } + } + 
} + body, _ = sjson.SetBytes(body, "tool_choice", "auto") + return body +} + +func isGitHubCopilotResponsesBuiltinTool(toolType string) bool { + switch strings.TrimSpace(toolType) { + case "computer", "computer_use_preview": + return true + default: + return false + } +} + +func collectTextFromNode(node gjson.Result) string { + if !node.Exists() { + return "" + } + if node.Type == gjson.String { + return node.String() + } + if node.IsArray() { + var parts []string + for _, item := range node.Array() { + if item.Type == gjson.String { + if text := item.String(); text != "" { + parts = append(parts, text) + } + continue + } + if text := item.Get("text").String(); text != "" { + parts = append(parts, text) + continue + } + if nested := collectTextFromNode(item.Get("content")); nested != "" { + parts = append(parts, nested) + } + } + return strings.Join(parts, "\n") + } + if node.Type == gjson.JSON { + if text := node.Get("text").String(); text != "" { + return text + } + if nested := collectTextFromNode(node.Get("content")); nested != "" { + return nested + } + return node.Raw + } + return node.String() +} + +type githubCopilotResponsesStreamToolState struct { + Index int + ID string + Name string +} + +type githubCopilotResponsesStreamState struct { + MessageStarted bool + MessageStopSent bool + TextBlockStarted bool + TextBlockIndex int + NextContentIndex int + HasToolUse bool + ReasoningActive bool + ReasoningIndex int + OutputIndexToTool map[int]*githubCopilotResponsesStreamToolState + ItemIDToTool map[string]*githubCopilotResponsesStreamToolState +} + +func translateGitHubCopilotResponsesNonStreamToClaude(data []byte) []byte { + root := gjson.ParseBytes(data) + out := `{"id":"","type":"message","role":"assistant","model":"","content":[],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":0,"output_tokens":0}}` + out, _ = sjson.Set(out, "id", root.Get("id").String()) + out, _ = sjson.Set(out, "model", root.Get("model").String()) + + hasToolUse := false 
+ if output := root.Get("output"); output.Exists() && output.IsArray() { + for _, item := range output.Array() { + switch item.Get("type").String() { + case "reasoning": + var thinkingText string + if summary := item.Get("summary"); summary.Exists() && summary.IsArray() { + var parts []string + for _, part := range summary.Array() { + if txt := part.Get("text").String(); txt != "" { + parts = append(parts, txt) + } + } + thinkingText = strings.Join(parts, "") + } + if thinkingText == "" { + if content := item.Get("content"); content.Exists() && content.IsArray() { + var parts []string + for _, part := range content.Array() { + if txt := part.Get("text").String(); txt != "" { + parts = append(parts, txt) + } + } + thinkingText = strings.Join(parts, "") + } + } + if thinkingText != "" { + block := `{"type":"thinking","thinking":""}` + block, _ = sjson.Set(block, "thinking", thinkingText) + out, _ = sjson.SetRaw(out, "content.-1", block) + } + case "message": + if content := item.Get("content"); content.Exists() && content.IsArray() { + for _, part := range content.Array() { + if part.Get("type").String() != "output_text" { + continue + } + text := part.Get("text").String() + if text == "" { + continue + } + block := `{"type":"text","text":""}` + block, _ = sjson.Set(block, "text", text) + out, _ = sjson.SetRaw(out, "content.-1", block) + } + } + case "function_call": + hasToolUse = true + toolUse := `{"type":"tool_use","id":"","name":"","input":{}}` + toolID := item.Get("call_id").String() + if toolID == "" { + toolID = item.Get("id").String() + } + toolUse, _ = sjson.Set(toolUse, "id", toolID) + toolUse, _ = sjson.Set(toolUse, "name", item.Get("name").String()) + if args := item.Get("arguments").String(); args != "" && gjson.Valid(args) { + argObj := gjson.Parse(args) + if argObj.IsObject() { + toolUse, _ = sjson.SetRaw(toolUse, "input", argObj.Raw) + } + } + out, _ = sjson.SetRaw(out, "content.-1", toolUse) + } + } + } + + inputTokens := 
root.Get("usage.input_tokens").Int() + outputTokens := root.Get("usage.output_tokens").Int() + cachedTokens := root.Get("usage.input_tokens_details.cached_tokens").Int() + if cachedTokens > 0 && inputTokens >= cachedTokens { + inputTokens -= cachedTokens + } + out, _ = sjson.Set(out, "usage.input_tokens", inputTokens) + out, _ = sjson.Set(out, "usage.output_tokens", outputTokens) + if cachedTokens > 0 { + out, _ = sjson.Set(out, "usage.cache_read_input_tokens", cachedTokens) + } + if hasToolUse { + out, _ = sjson.Set(out, "stop_reason", "tool_use") + } else if sr := root.Get("stop_reason").String(); sr == "max_tokens" || sr == "stop" { + out, _ = sjson.Set(out, "stop_reason", sr) + } else { + out, _ = sjson.Set(out, "stop_reason", "end_turn") + } + return []byte(out) +} + +func translateGitHubCopilotResponsesStreamToClaude(line []byte, param *any) [][]byte { + if *param == nil { + *param = &githubCopilotResponsesStreamState{ + TextBlockIndex: -1, + OutputIndexToTool: make(map[int]*githubCopilotResponsesStreamToolState), + ItemIDToTool: make(map[string]*githubCopilotResponsesStreamToolState), + } + } + state := (*param).(*githubCopilotResponsesStreamState) + + if !bytes.HasPrefix(line, dataTag) { + return nil + } + payload := bytes.TrimSpace(line[5:]) + if bytes.Equal(payload, []byte("[DONE]")) { + return nil + } + if !gjson.ValidBytes(payload) { + return nil + } + + event := gjson.GetBytes(payload, "type").String() + results := make([][]byte, 0, 4) + appendResult := func(chunk string) { + results = append(results, []byte(chunk)) + } + ensureMessageStart := func() { + if state.MessageStarted { + return + } + messageStart := `{"type":"message_start","message":{"id":"","type":"message","role":"assistant","model":"","content":[],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":0,"output_tokens":0}}}` + messageStart, _ = sjson.Set(messageStart, "message.id", gjson.GetBytes(payload, "response.id").String()) + messageStart, _ = sjson.Set(messageStart, 
"message.model", gjson.GetBytes(payload, "response.model").String()) + appendResult("event: message_start\ndata: " + messageStart + "\n\n") + state.MessageStarted = true + } + startTextBlockIfNeeded := func() { + if state.TextBlockStarted { + return + } + if state.TextBlockIndex < 0 { + state.TextBlockIndex = state.NextContentIndex + state.NextContentIndex++ + } + contentBlockStart := `{"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}` + contentBlockStart, _ = sjson.Set(contentBlockStart, "index", state.TextBlockIndex) + appendResult("event: content_block_start\ndata: " + contentBlockStart + "\n\n") + state.TextBlockStarted = true + } + stopTextBlockIfNeeded := func() { + if !state.TextBlockStarted { + return + } + contentBlockStop := `{"type":"content_block_stop","index":0}` + contentBlockStop, _ = sjson.Set(contentBlockStop, "index", state.TextBlockIndex) + appendResult("event: content_block_stop\ndata: " + contentBlockStop + "\n\n") + state.TextBlockStarted = false + state.TextBlockIndex = -1 + } + resolveTool := func(itemID string, outputIndex int) *githubCopilotResponsesStreamToolState { + if itemID != "" { + if tool, ok := state.ItemIDToTool[itemID]; ok { + return tool + } + } + if tool, ok := state.OutputIndexToTool[outputIndex]; ok { + if itemID != "" { + state.ItemIDToTool[itemID] = tool + } + return tool + } + return nil + } + + switch event { + case "response.created": + ensureMessageStart() + case "response.output_text.delta": + ensureMessageStart() + startTextBlockIfNeeded() + delta := gjson.GetBytes(payload, "delta").String() + if delta != "" { + contentDelta := `{"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":""}}` + contentDelta, _ = sjson.Set(contentDelta, "index", state.TextBlockIndex) + contentDelta, _ = sjson.Set(contentDelta, "delta.text", delta) + appendResult("event: content_block_delta\ndata: " + contentDelta + "\n\n") + } + case "response.reasoning_summary_part.added": + 
ensureMessageStart() + state.ReasoningActive = true + state.ReasoningIndex = state.NextContentIndex + state.NextContentIndex++ + thinkingStart := `{"type":"content_block_start","index":0,"content_block":{"type":"thinking","thinking":""}}` + thinkingStart, _ = sjson.Set(thinkingStart, "index", state.ReasoningIndex) + appendResult("event: content_block_start\ndata: " + thinkingStart + "\n\n") + case "response.reasoning_summary_text.delta": + if state.ReasoningActive { + delta := gjson.GetBytes(payload, "delta").String() + if delta != "" { + thinkingDelta := `{"type":"content_block_delta","index":0,"delta":{"type":"thinking_delta","thinking":""}}` + thinkingDelta, _ = sjson.Set(thinkingDelta, "index", state.ReasoningIndex) + thinkingDelta, _ = sjson.Set(thinkingDelta, "delta.thinking", delta) + appendResult("event: content_block_delta\ndata: " + thinkingDelta + "\n\n") + } + } + case "response.reasoning_summary_part.done": + if state.ReasoningActive { + thinkingStop := `{"type":"content_block_stop","index":0}` + thinkingStop, _ = sjson.Set(thinkingStop, "index", state.ReasoningIndex) + appendResult("event: content_block_stop\ndata: " + thinkingStop + "\n\n") + state.ReasoningActive = false + } + case "response.output_item.added": + if gjson.GetBytes(payload, "item.type").String() != "function_call" { + break + } + ensureMessageStart() + stopTextBlockIfNeeded() + state.HasToolUse = true + tool := &githubCopilotResponsesStreamToolState{ + Index: state.NextContentIndex, + ID: gjson.GetBytes(payload, "item.call_id").String(), + Name: gjson.GetBytes(payload, "item.name").String(), + } + if tool.ID == "" { + tool.ID = gjson.GetBytes(payload, "item.id").String() + } + state.NextContentIndex++ + outputIndex := int(gjson.GetBytes(payload, "output_index").Int()) + state.OutputIndexToTool[outputIndex] = tool + if itemID := gjson.GetBytes(payload, "item.id").String(); itemID != "" { + state.ItemIDToTool[itemID] = tool + } + contentBlockStart := 
`{"type":"content_block_start","index":0,"content_block":{"type":"tool_use","id":"","name":"","input":{}}}` + contentBlockStart, _ = sjson.Set(contentBlockStart, "index", tool.Index) + contentBlockStart, _ = sjson.Set(contentBlockStart, "content_block.id", tool.ID) + contentBlockStart, _ = sjson.Set(contentBlockStart, "content_block.name", tool.Name) + appendResult("event: content_block_start\ndata: " + contentBlockStart + "\n\n") + case "response.output_item.delta": + item := gjson.GetBytes(payload, "item") + if item.Get("type").String() != "function_call" { + break + } + tool := resolveTool(item.Get("id").String(), int(gjson.GetBytes(payload, "output_index").Int())) + if tool == nil { + break + } + partial := gjson.GetBytes(payload, "delta").String() + if partial == "" { + partial = item.Get("arguments").String() + } + if partial == "" { + break + } + inputDelta := `{"type":"content_block_delta","index":0,"delta":{"type":"input_json_delta","partial_json":""}}` + inputDelta, _ = sjson.Set(inputDelta, "index", tool.Index) + inputDelta, _ = sjson.Set(inputDelta, "delta.partial_json", partial) + appendResult("event: content_block_delta\ndata: " + inputDelta + "\n\n") + case "response.function_call_arguments.delta": + // Copilot sends tool call arguments via this event type (not response.output_item.delta). 
+ // Data format: {"delta":"...", "item_id":"...", "output_index":N, ...} + itemID := gjson.GetBytes(payload, "item_id").String() + outputIndex := int(gjson.GetBytes(payload, "output_index").Int()) + tool := resolveTool(itemID, outputIndex) + if tool == nil { + break + } + partial := gjson.GetBytes(payload, "delta").String() + if partial == "" { + break + } + inputDelta := `{"type":"content_block_delta","index":0,"delta":{"type":"input_json_delta","partial_json":""}}` + inputDelta, _ = sjson.Set(inputDelta, "index", tool.Index) + inputDelta, _ = sjson.Set(inputDelta, "delta.partial_json", partial) + appendResult("event: content_block_delta\ndata: " + inputDelta + "\n\n") + case "response.output_item.done": + if gjson.GetBytes(payload, "item.type").String() != "function_call" { + break + } + tool := resolveTool(gjson.GetBytes(payload, "item.id").String(), int(gjson.GetBytes(payload, "output_index").Int())) + if tool == nil { + break + } + contentBlockStop := `{"type":"content_block_stop","index":0}` + contentBlockStop, _ = sjson.Set(contentBlockStop, "index", tool.Index) + appendResult("event: content_block_stop\ndata: " + contentBlockStop + "\n\n") + case "response.completed": + ensureMessageStart() + stopTextBlockIfNeeded() + if !state.MessageStopSent { + stopReason := "end_turn" + if state.HasToolUse { + stopReason = "tool_use" + } else if sr := gjson.GetBytes(payload, "response.stop_reason").String(); sr == "max_tokens" || sr == "stop" { + stopReason = sr + } + inputTokens := gjson.GetBytes(payload, "response.usage.input_tokens").Int() + outputTokens := gjson.GetBytes(payload, "response.usage.output_tokens").Int() + cachedTokens := gjson.GetBytes(payload, "response.usage.input_tokens_details.cached_tokens").Int() + if cachedTokens > 0 && inputTokens >= cachedTokens { + inputTokens -= cachedTokens + } + messageDelta := `{"type":"message_delta","delta":{"stop_reason":"","stop_sequence":null},"usage":{"input_tokens":0,"output_tokens":0}}` + messageDelta, _ = 
sjson.Set(messageDelta, "delta.stop_reason", stopReason) + messageDelta, _ = sjson.Set(messageDelta, "usage.input_tokens", inputTokens) + messageDelta, _ = sjson.Set(messageDelta, "usage.output_tokens", outputTokens) + if cachedTokens > 0 { + messageDelta, _ = sjson.Set(messageDelta, "usage.cache_read_input_tokens", cachedTokens) + } + appendResult("event: message_delta\ndata: " + messageDelta + "\n\n") + appendResult("event: message_stop\ndata: {\"type\":\"message_stop\"}\n\n") + state.MessageStopSent = true + } + } + + return results +} + +// isHTTPSuccess checks if the status code indicates success (2xx). +func isHTTPSuccess(statusCode int) bool { + return statusCode >= 200 && statusCode < 300 +} + +const ( + // defaultCopilotContextLength is the default context window for unknown Copilot models. + defaultCopilotContextLength = 128000 + // defaultCopilotMaxCompletionTokens is the default max output tokens for unknown Copilot models. + defaultCopilotMaxCompletionTokens = 16384 +) + +// FetchGitHubCopilotModels dynamically fetches available models from the GitHub Copilot API. +// It exchanges the GitHub access token stored in auth.Metadata for a Copilot API token, +// then queries the /models endpoint. Falls back to the static registry on any failure. 
+func FetchGitHubCopilotModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *config.Config) []*registry.ModelInfo { + if auth == nil { + log.Debug("github-copilot: auth is nil, using static models") + return registry.GetGitHubCopilotModels() + } + + accessToken := metaStringValue(auth.Metadata, "access_token") + if accessToken == "" { + log.Debug("github-copilot: no access_token in auth metadata, using static models") + return registry.GetGitHubCopilotModels() + } + + copilotAuth := copilotauth.NewCopilotAuth(cfg) + + entries, err := copilotAuth.ListModelsWithGitHubToken(ctx, accessToken) + if err != nil { + log.Warnf("github-copilot: failed to fetch dynamic models: %v, using static models", err) + return registry.GetGitHubCopilotModels() + } + + if len(entries) == 0 { + log.Debug("github-copilot: API returned no models, using static models") + return registry.GetGitHubCopilotModels() + } + + // Build a lookup from the static definitions so we can enrich dynamic entries + // with known context lengths, thinking support, etc. + staticMap := make(map[string]*registry.ModelInfo) + for _, m := range registry.GetGitHubCopilotModels() { + staticMap[m.ID] = m + } + + now := time.Now().Unix() + models := make([]*registry.ModelInfo, 0, len(entries)) + seen := make(map[string]struct{}, len(entries)) + for _, entry := range entries { + if entry.ID == "" { + continue + } + // Deduplicate model IDs to avoid incorrect reference counting. 
+ if _, dup := seen[entry.ID]; dup { + continue + } + seen[entry.ID] = struct{}{} + + m := &registry.ModelInfo{ + ID: entry.ID, + Object: "model", + Created: now, + OwnedBy: "github-copilot", + Type: "github-copilot", + } + + if entry.Created > 0 { + m.Created = entry.Created + } + if entry.Name != "" { + m.DisplayName = entry.Name + } else { + m.DisplayName = entry.ID + } + + // Merge known metadata from the static fallback list + if static, ok := staticMap[entry.ID]; ok { + if m.DisplayName == entry.ID && static.DisplayName != "" { + m.DisplayName = static.DisplayName + } + m.Description = static.Description + m.ContextLength = static.ContextLength + m.MaxCompletionTokens = static.MaxCompletionTokens + m.SupportedEndpoints = static.SupportedEndpoints + m.Thinking = static.Thinking + } else { + // Sensible defaults for models not in the static list + m.Description = entry.ID + " via GitHub Copilot" + m.ContextLength = defaultCopilotContextLength + m.MaxCompletionTokens = defaultCopilotMaxCompletionTokens + } + + // Override with real limits from the Copilot API when available. + // The API returns per-account limits (individual vs business) under + // capabilities.limits, which are more accurate than our static + // fallback values. We use max_prompt_tokens as ContextLength because + // that's the hard limit the Copilot API enforces on prompt size — + // exceeding it triggers "prompt token count exceeds the limit" errors. 
+ if limits := entry.Limits(); limits != nil { + if limits.MaxPromptTokens > 0 { + m.ContextLength = limits.MaxPromptTokens + } + if limits.MaxOutputTokens > 0 { + m.MaxCompletionTokens = limits.MaxOutputTokens + } + } + + models = append(models, m) + } + + log.Infof("github-copilot: fetched %d models from API", len(models)) + return models +} diff --git a/internal/runtime/executor/github_copilot_executor_test.go b/internal/runtime/executor/github_copilot_executor_test.go new file mode 100644 index 0000000000..e3ed153c19 --- /dev/null +++ b/internal/runtime/executor/github_copilot_executor_test.go @@ -0,0 +1,960 @@ +package executor + +import ( + "context" + "io" + "net/http" + "net/http/httptest" + "strings" + "testing" + "time" + + copilotauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/copilot" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + "github.com/tidwall/gjson" +) + +func TestGitHubCopilotNormalizeModel_StripsSuffix(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + model string + wantModel string + }{ + { + name: "suffix stripped", + model: "claude-opus-4.6(medium)", + wantModel: "claude-opus-4.6", + }, + { + name: "no suffix unchanged", + model: "claude-opus-4.6", + wantModel: "claude-opus-4.6", + }, + { + name: "different suffix stripped", + model: "gpt-4o(high)", + wantModel: "gpt-4o", + }, + { + name: "numeric suffix stripped", + model: "gemini-2.5-pro(8192)", + wantModel: "gemini-2.5-pro", + }, + } + + e := &GitHubCopilotExecutor{} + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + body := []byte(`{"model":"` + tt.model + `","messages":[]}`) + got := e.normalizeModel(tt.model, 
body) + + gotModel := gjson.GetBytes(got, "model").String() + if gotModel != tt.wantModel { + t.Fatalf("normalizeModel() model = %q, want %q", gotModel, tt.wantModel) + } + }) + } +} + +func TestUseGitHubCopilotResponsesEndpoint_OpenAIResponseSource(t *testing.T) { + t.Parallel() + if !useGitHubCopilotResponsesEndpoint(sdktranslator.FromString("openai-response"), "claude-3-5-sonnet") { + t.Fatal("expected openai-response source to use /responses") + } +} + +func TestUseGitHubCopilotResponsesEndpoint_CodexModel(t *testing.T) { + t.Parallel() + if !useGitHubCopilotResponsesEndpoint(sdktranslator.FromString("openai"), "gpt-5-codex") { + t.Fatal("expected codex model to use /responses") + } +} + +func TestUseGitHubCopilotResponsesEndpoint_RegistryResponsesOnlyModel(t *testing.T) { + // Not parallel: shares global model registry with DynamicRegistryWinsOverStatic. + if !useGitHubCopilotResponsesEndpoint(sdktranslator.FromString("openai"), "gpt-5.4") { + t.Fatal("expected responses-only registry model to use /responses") + } + if !useGitHubCopilotResponsesEndpoint(sdktranslator.FromString("openai"), "gpt-5.4-mini") { + t.Fatal("expected responses-only registry model to use /responses") + } +} + +func TestUseGitHubCopilotResponsesEndpoint_DynamicRegistryWinsOverStatic(t *testing.T) { + // Not parallel: mutates global model registry, conflicts with RegistryResponsesOnlyModel. 
+ + reg := registry.GetGlobalRegistry() + clientID := "github-copilot-test-client" + reg.RegisterClient(clientID, "github-copilot", []*registry.ModelInfo{ + { + ID: "gpt-5.4", + SupportedEndpoints: []string{"/chat/completions", "/responses"}, + }, + { + ID: "gpt-5.4-mini", + SupportedEndpoints: []string{"/chat/completions", "/responses"}, + }, + }) + defer reg.UnregisterClient(clientID) + + if useGitHubCopilotResponsesEndpoint(sdktranslator.FromString("openai"), "gpt-5.4") { + t.Fatal("expected dynamic registry definition to take precedence over static fallback") + } + + if useGitHubCopilotResponsesEndpoint(sdktranslator.FromString("openai"), "gpt-5.4-mini") { + t.Fatal("expected dynamic registry definition to take precedence over static fallback") + } +} + +func TestUseGitHubCopilotResponsesEndpoint_DefaultChat(t *testing.T) { + t.Parallel() + if useGitHubCopilotResponsesEndpoint(sdktranslator.FromString("openai"), "claude-3-5-sonnet") { + t.Fatal("expected default openai source with non-codex model to use /chat/completions") + } +} + +func TestNormalizeGitHubCopilotChatTools_KeepFunctionOnly(t *testing.T) { + t.Parallel() + body := []byte(`{"tools":[{"type":"function","function":{"name":"ok"}},{"type":"code_interpreter"}],"tool_choice":"auto"}`) + got := normalizeGitHubCopilotChatTools(body) + tools := gjson.GetBytes(got, "tools").Array() + if len(tools) != 1 { + t.Fatalf("tools len = %d, want 1", len(tools)) + } + if tools[0].Get("type").String() != "function" { + t.Fatalf("tool type = %q, want function", tools[0].Get("type").String()) + } +} + +func TestNormalizeGitHubCopilotChatTools_InvalidToolChoiceDowngradeToAuto(t *testing.T) { + t.Parallel() + body := []byte(`{"tools":[],"tool_choice":{"type":"function","function":{"name":"x"}}}`) + got := normalizeGitHubCopilotChatTools(body) + if gjson.GetBytes(got, "tool_choice").String() != "auto" { + t.Fatalf("tool_choice = %s, want auto", gjson.GetBytes(got, "tool_choice").Raw) + } +} + +func 
TestNormalizeGitHubCopilotResponsesInput_MissingInputExtractedFromSystemAndMessages(t *testing.T) { + t.Parallel() + body := []byte(`{"system":"sys text","messages":[{"role":"user","content":"user text"},{"role":"assistant","content":[{"type":"text","text":"assistant text"}]}]}`) + got := normalizeGitHubCopilotResponsesInput(body) + in := gjson.GetBytes(got, "input") + if !in.IsArray() { + t.Fatalf("input type = %v, want array", in.Type) + } + raw := in.Raw + if !strings.Contains(raw, "sys text") || !strings.Contains(raw, "user text") || !strings.Contains(raw, "assistant text") { + t.Fatalf("input = %s, want structured array with all texts", raw) + } + if gjson.GetBytes(got, "messages").Exists() { + t.Fatal("messages should be removed after conversion") + } + if gjson.GetBytes(got, "system").Exists() { + t.Fatal("system should be removed after conversion") + } +} + +func TestNormalizeGitHubCopilotResponsesInput_NonStringInputStringified(t *testing.T) { + t.Parallel() + body := []byte(`{"input":{"foo":"bar"}}`) + got := normalizeGitHubCopilotResponsesInput(body) + in := gjson.GetBytes(got, "input") + if in.Type != gjson.String { + t.Fatalf("input type = %v, want string", in.Type) + } + if !strings.Contains(in.String(), "foo") { + t.Fatalf("input = %q, want stringified object", in.String()) + } +} + +func TestNormalizeGitHubCopilotResponsesInput_StripsServiceTier(t *testing.T) { + t.Parallel() + body := []byte(`{"input":"user text","service_tier":"default"}`) + got := normalizeGitHubCopilotResponsesInput(body) + + if gjson.GetBytes(got, "service_tier").Exists() { + t.Fatalf("service_tier should be removed, got %s", gjson.GetBytes(got, "service_tier").Raw) + } + if gjson.GetBytes(got, "input").String() != "user text" { + t.Fatalf("input = %q, want %q", gjson.GetBytes(got, "input").String(), "user text") + } +} + +func TestNormalizeGitHubCopilotResponsesTools_FlattenFunctionTools(t *testing.T) { + t.Parallel() + body := 
[]byte(`{"tools":[{"type":"function","function":{"name":"sum","description":"d","parameters":{"type":"object"}}},{"type":"web_search"}]}`) + got := normalizeGitHubCopilotResponsesTools(body) + tools := gjson.GetBytes(got, "tools").Array() + if len(tools) != 1 { + t.Fatalf("tools len = %d, want 1", len(tools)) + } + if tools[0].Get("name").String() != "sum" { + t.Fatalf("tools[0].name = %q, want sum", tools[0].Get("name").String()) + } + if !tools[0].Get("parameters").Exists() { + t.Fatal("expected parameters to be preserved") + } +} + +func TestNormalizeGitHubCopilotResponsesTools_ClaudeFormatTools(t *testing.T) { + t.Parallel() + body := []byte(`{"tools":[{"name":"Bash","description":"Run commands","input_schema":{"type":"object","properties":{"command":{"type":"string"}},"required":["command"]}},{"name":"Read","description":"Read files","input_schema":{"type":"object","properties":{"path":{"type":"string"}}}}]}`) + got := normalizeGitHubCopilotResponsesTools(body) + tools := gjson.GetBytes(got, "tools").Array() + if len(tools) != 2 { + t.Fatalf("tools len = %d, want 2", len(tools)) + } + if tools[0].Get("type").String() != "function" { + t.Fatalf("tools[0].type = %q, want function", tools[0].Get("type").String()) + } + if tools[0].Get("name").String() != "Bash" { + t.Fatalf("tools[0].name = %q, want Bash", tools[0].Get("name").String()) + } + if tools[0].Get("description").String() != "Run commands" { + t.Fatalf("tools[0].description = %q, want 'Run commands'", tools[0].Get("description").String()) + } + if !tools[0].Get("parameters").Exists() { + t.Fatal("expected parameters to be set from input_schema") + } + if tools[0].Get("parameters.properties.command").Exists() != true { + t.Fatal("expected parameters.properties.command to exist") + } + if tools[1].Get("name").String() != "Read" { + t.Fatalf("tools[1].name = %q, want Read", tools[1].Get("name").String()) + } +} + +func TestNormalizeGitHubCopilotResponsesTools_FlattenToolChoiceFunctionObject(t *testing.T) { 
+ t.Parallel() + body := []byte(`{"tool_choice":{"type":"function","function":{"name":"sum"}}}`) + got := normalizeGitHubCopilotResponsesTools(body) + if gjson.GetBytes(got, "tool_choice.type").String() != "function" { + t.Fatalf("tool_choice.type = %q, want function", gjson.GetBytes(got, "tool_choice.type").String()) + } + if gjson.GetBytes(got, "tool_choice.name").String() != "sum" { + t.Fatalf("tool_choice.name = %q, want sum", gjson.GetBytes(got, "tool_choice.name").String()) + } +} + +func TestNormalizeGitHubCopilotResponsesTools_InvalidToolChoiceDowngradeToAuto(t *testing.T) { + t.Parallel() + body := []byte(`{"tool_choice":{"type":"function"}}`) + got := normalizeGitHubCopilotResponsesTools(body) + if gjson.GetBytes(got, "tool_choice").String() != "auto" { + t.Fatalf("tool_choice = %s, want auto", gjson.GetBytes(got, "tool_choice").Raw) + } +} + +func TestTranslateGitHubCopilotResponsesNonStreamToClaude_TextMapping(t *testing.T) { + t.Parallel() + resp := []byte(`{"id":"resp_1","model":"gpt-5-codex","output":[{"type":"message","content":[{"type":"output_text","text":"hello"}]}],"usage":{"input_tokens":3,"output_tokens":5}}`) + out := translateGitHubCopilotResponsesNonStreamToClaude(resp) + if gjson.GetBytes(out, "type").String() != "message" { + t.Fatalf("type = %q, want message", gjson.GetBytes(out, "type").String()) + } + if gjson.GetBytes(out, "content.0.type").String() != "text" { + t.Fatalf("content.0.type = %q, want text", gjson.GetBytes(out, "content.0.type").String()) + } + if gjson.GetBytes(out, "content.0.text").String() != "hello" { + t.Fatalf("content.0.text = %q, want hello", gjson.GetBytes(out, "content.0.text").String()) + } +} + +func TestTranslateGitHubCopilotResponsesNonStreamToClaude_ToolUseMapping(t *testing.T) { + t.Parallel() + resp := []byte(`{"id":"resp_2","model":"gpt-5-codex","output":[{"type":"function_call","id":"fc_1","call_id":"call_1","name":"sum","arguments":"{\"a\":1}"}],"usage":{"input_tokens":1,"output_tokens":2}}`) + out 
:= translateGitHubCopilotResponsesNonStreamToClaude(resp) + if gjson.GetBytes(out, "content.0.type").String() != "tool_use" { + t.Fatalf("content.0.type = %q, want tool_use", gjson.GetBytes(out, "content.0.type").String()) + } + if gjson.GetBytes(out, "content.0.name").String() != "sum" { + t.Fatalf("content.0.name = %q, want sum", gjson.GetBytes(out, "content.0.name").String()) + } + if gjson.GetBytes(out, "stop_reason").String() != "tool_use" { + t.Fatalf("stop_reason = %q, want tool_use", gjson.GetBytes(out, "stop_reason").String()) + } +} + +func TestTranslateGitHubCopilotResponsesStreamToClaude_TextLifecycle(t *testing.T) { + t.Parallel() + var param any + + created := translateGitHubCopilotResponsesStreamToClaude([]byte(`data: {"type":"response.created","response":{"id":"resp_1","model":"gpt-5-codex"}}`), &param) + if len(created) == 0 || !strings.Contains(string(created[0]), "message_start") { + t.Fatalf("created events = %#v, want message_start", created) + } + + delta := translateGitHubCopilotResponsesStreamToClaude([]byte(`data: {"type":"response.output_text.delta","delta":"he"}`), &param) + var joinedDelta string + for _, d := range delta { + joinedDelta += string(d) + } + if !strings.Contains(joinedDelta, "content_block_start") || !strings.Contains(joinedDelta, "text_delta") { + t.Fatalf("delta events = %#v, want content_block_start + text_delta", delta) + } + + completed := translateGitHubCopilotResponsesStreamToClaude([]byte(`data: {"type":"response.completed","response":{"usage":{"input_tokens":7,"output_tokens":9}}}`), &param) + var joinedCompleted string + for _, c := range completed { + joinedCompleted += string(c) + } + if !strings.Contains(joinedCompleted, "message_delta") || !strings.Contains(joinedCompleted, "message_stop") { + t.Fatalf("completed events = %#v, want message_delta + message_stop", completed) + } +} + +// --- Tests for X-Initiator detection logic (Problem L) --- + +func TestApplyHeaders_XInitiator_UserOnly(t *testing.T) { + t.Parallel() + e
:= &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + body := []byte(`{"messages":[{"role":"system","content":"sys"},{"role":"user","content":"hello"}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "user" { + t.Fatalf("X-Initiator = %q, want user", got) + } +} + +func TestApplyHeaders_XInitiator_AgentWhenLastUserButHistoryHasAssistant(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + // When the last role is "user" and the message contains tool_result content, + // the request is a continuation (e.g. Claude tool result translated to a + // synthetic user message). Should be "agent". + body := []byte(`{"messages":[{"role":"user","content":"hello"},{"role":"assistant","content":"I will read the file"},{"role":"user","content":[{"type":"tool_result","tool_use_id":"tu1","content":"file contents..."}]}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "agent" { + t.Fatalf("X-Initiator = %q, want agent (last user contains tool_result)", got) + } +} + +func TestApplyHeaders_XInitiator_AgentWithToolRole(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + // When the last message has role "tool", it's clearly agent-initiated. 
+ body := []byte(`{"messages":[{"role":"user","content":"hello"},{"role":"tool","content":"result"}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "agent" { + t.Fatalf("X-Initiator = %q, want agent (last role is tool)", got) + } +} + +func TestApplyHeaders_XInitiator_InputArrayLastAssistantMessage(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + body := []byte(`{"input":[{"type":"message","role":"user","content":[{"type":"input_text","text":"Hi"}]},{"type":"message","role":"assistant","content":[{"type":"output_text","text":"Hello"}]}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "agent" { + t.Fatalf("X-Initiator = %q, want agent (last role is assistant)", got) + } +} + +func TestApplyHeaders_XInitiator_InputArrayAgentWhenLastUserButHistoryHasAssistant(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + // Responses API: last item is user-role but history contains assistant → agent. 
+ body := []byte(`{"input":[{"type":"message","role":"assistant","content":[{"type":"output_text","text":"I can help"}]},{"type":"message","role":"user","content":[{"type":"input_text","text":"Do X"}]}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "agent" { + t.Fatalf("X-Initiator = %q, want agent (history has assistant)", got) + } +} + +func TestApplyHeaders_XInitiator_InputArrayLastFunctionCallOutput(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + body := []byte(`{"input":[{"type":"message","role":"user","content":[{"type":"input_text","text":"Use tool"}]},{"type":"function_call","call_id":"c1","name":"Read","arguments":"{}"},{"type":"function_call_output","call_id":"c1","output":"ok"}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "agent" { + t.Fatalf("X-Initiator = %q, want agent (last item maps to tool role)", got) + } +} + +func TestApplyHeaders_XInitiator_UserInMultiTurnNoTools(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + // Genuine multi-turn: user → assistant (plain text) → user follow-up. + // No tool messages → should be "user" (not a false-positive). + body := []byte(`{"messages":[{"role":"user","content":"hello"},{"role":"assistant","content":"Hi there!"},{"role":"user","content":"what is 2+2?"}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "user" { + t.Fatalf("X-Initiator = %q, want user (genuine multi-turn, no tools)", got) + } +} + +func TestApplyHeaders_XInitiator_UserFollowUpAfterToolHistory(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + // User follow-up after a completed tool-use conversation. 
+ // The last message is a genuine user question — should be "user", not "agent". + // This aligns with opencode's behavior: only active tool loops are agent-initiated. + body := []byte(`{"messages":[{"role":"user","content":"hello"},{"role":"assistant","content":[{"type":"tool_use","id":"tu1","name":"Read","input":{}}]},{"role":"tool","tool_call_id":"tu1","content":"file data"},{"role":"assistant","content":"I read the file."},{"role":"user","content":"What did we do so far?"}]}`) + e.applyHeaders(req, "token", body) + if got := req.Header.Get("X-Initiator"); got != "user" { + t.Fatalf("X-Initiator = %q, want user (genuine follow-up after tool history)", got) + } +} + +// --- Tests for x-github-api-version header (Problem M) --- + +func TestApplyHeaders_GitHubAPIVersion(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + e.applyHeaders(req, "token", nil) + if got := req.Header.Get("X-Github-Api-Version"); got != "2025-04-01" { + t.Fatalf("X-Github-Api-Version = %q, want 2025-04-01", got) + } +} + +// --- Tests for vision detection (Problem P) --- + +func TestDetectVisionContent_WithImageURL(t *testing.T) { + t.Parallel() + body := []byte(`{"messages":[{"role":"user","content":[{"type":"text","text":"describe"},{"type":"image_url","image_url":{"url":"data:image/png;base64,abc"}}]}]}`) + if !detectVisionContent(body) { + t.Fatal("expected vision content to be detected") + } +} + +func TestDetectVisionContent_WithImageType(t *testing.T) { + t.Parallel() + body := []byte(`{"messages":[{"role":"user","content":[{"type":"image","source":{"data":"abc","media_type":"image/png"}}]}]}`) + if !detectVisionContent(body) { + t.Fatal("expected image type to be detected") + } +} + +func TestDetectVisionContent_NoVision(t *testing.T) { + t.Parallel() + body := []byte(`{"messages":[{"role":"user","content":[{"type":"text","text":"hello"}]}]}`) + if detectVisionContent(body) { + t.Fatal("expected 
no vision content") + } +} + +func TestDetectVisionContent_NoMessages(t *testing.T) { + t.Parallel() + // After Responses API normalization, messages is removed — detection should return false + body := []byte(`{"input":[{"type":"message","role":"user","content":[{"type":"input_text","text":"hello"}]}]}`) + if detectVisionContent(body) { + t.Fatal("expected no vision content when messages field is absent") + } +} + +// --- Tests for applyGitHubCopilotResponsesDefaults --- + +func TestApplyGitHubCopilotResponsesDefaults_SetsAllDefaults(t *testing.T) { + t.Parallel() + body := []byte(`{"input":"hello","reasoning":{"effort":"medium"}}`) + got := applyGitHubCopilotResponsesDefaults(body) + + if gjson.GetBytes(got, "store").Bool() != false { + t.Fatalf("store = %v, want false", gjson.GetBytes(got, "store").Raw) + } + inc := gjson.GetBytes(got, "include") + if !inc.IsArray() || inc.Array()[0].String() != "reasoning.encrypted_content" { + t.Fatalf("include = %s, want [\"reasoning.encrypted_content\"]", inc.Raw) + } + if gjson.GetBytes(got, "reasoning.summary").String() != "auto" { + t.Fatalf("reasoning.summary = %q, want auto", gjson.GetBytes(got, "reasoning.summary").String()) + } +} + +func TestApplyGitHubCopilotResponsesDefaults_DoesNotOverrideExisting(t *testing.T) { + t.Parallel() + body := []byte(`{"input":"hello","store":true,"include":["other"],"reasoning":{"effort":"high","summary":"concise"}}`) + got := applyGitHubCopilotResponsesDefaults(body) + + if gjson.GetBytes(got, "store").Bool() != true { + t.Fatalf("store should not be overridden, got %s", gjson.GetBytes(got, "store").Raw) + } + if gjson.GetBytes(got, "include").Array()[0].String() != "other" { + t.Fatalf("include should not be overridden, got %s", gjson.GetBytes(got, "include").Raw) + } + if gjson.GetBytes(got, "reasoning.summary").String() != "concise" { + t.Fatalf("reasoning.summary should not be overridden, got %q", gjson.GetBytes(got, "reasoning.summary").String()) + } +} + +func 
TestApplyGitHubCopilotResponsesDefaults_NoReasoningEffort(t *testing.T) { + t.Parallel() + body := []byte(`{"input":"hello"}`) + got := applyGitHubCopilotResponsesDefaults(body) + + if gjson.GetBytes(got, "store").Bool() != false { + t.Fatalf("store = %v, want false", gjson.GetBytes(got, "store").Raw) + } + // reasoning.summary should NOT be set when reasoning.effort is absent + if gjson.GetBytes(got, "reasoning.summary").Exists() { + t.Fatalf("reasoning.summary should not be set when reasoning.effort is absent, got %q", gjson.GetBytes(got, "reasoning.summary").String()) + } +} + +// --- Tests for normalizeGitHubCopilotReasoningField --- + +func TestNormalizeReasoningField_NonStreaming(t *testing.T) { + t.Parallel() + data := []byte(`{"choices":[{"message":{"content":"hello","reasoning_text":"I think..."}}]}`) + got := normalizeGitHubCopilotReasoningField(data) + rc := gjson.GetBytes(got, "choices.0.message.reasoning_content").String() + if rc != "I think..." { + t.Fatalf("reasoning_content = %q, want %q", rc, "I think...") + } +} + +func TestNormalizeReasoningField_Streaming(t *testing.T) { + t.Parallel() + data := []byte(`{"choices":[{"delta":{"reasoning_text":"thinking delta"}}]}`) + got := normalizeGitHubCopilotReasoningField(data) + rc := gjson.GetBytes(got, "choices.0.delta.reasoning_content").String() + if rc != "thinking delta" { + t.Fatalf("reasoning_content = %q, want %q", rc, "thinking delta") + } +} + +func TestNormalizeReasoningField_PreservesExistingReasoningContent(t *testing.T) { + t.Parallel() + data := []byte(`{"choices":[{"message":{"reasoning_text":"old","reasoning_content":"existing"}}]}`) + got := normalizeGitHubCopilotReasoningField(data) + rc := gjson.GetBytes(got, "choices.0.message.reasoning_content").String() + if rc != "existing" { + t.Fatalf("reasoning_content = %q, want %q (should not overwrite)", rc, "existing") + } +} + +func TestNormalizeReasoningField_MultiChoice(t *testing.T) { + t.Parallel() + data := 
[]byte(`{"choices":[{"message":{"reasoning_text":"thought-0"}},{"message":{"reasoning_text":"thought-1"}}]}`) + got := normalizeGitHubCopilotReasoningField(data) + rc0 := gjson.GetBytes(got, "choices.0.message.reasoning_content").String() + rc1 := gjson.GetBytes(got, "choices.1.message.reasoning_content").String() + if rc0 != "thought-0" { + t.Fatalf("choices[0].reasoning_content = %q, want %q", rc0, "thought-0") + } + if rc1 != "thought-1" { + t.Fatalf("choices[1].reasoning_content = %q, want %q", rc1, "thought-1") + } +} + +func TestNormalizeReasoningField_NoChoices(t *testing.T) { + t.Parallel() + data := []byte(`{"id":"chatcmpl-123"}`) + got := normalizeGitHubCopilotReasoningField(data) + if string(got) != string(data) { + t.Fatalf("expected no change, got %s", string(got)) + } +} + +func TestApplyHeaders_OpenAIIntentValue(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + req, _ := http.NewRequest(http.MethodPost, "https://example.com", nil) + e.applyHeaders(req, "token", nil) + if got := req.Header.Get("Openai-Intent"); got != "conversation-edits" { + t.Fatalf("Openai-Intent = %q, want conversation-edits", got) + } +} + +// --- Tests for CountTokens (local tiktoken estimation) --- + +func TestCountTokens_ReturnsPositiveCount(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + body := []byte(`{"model":"gpt-4o","messages":[{"role":"user","content":"Hello, world!"}]}`) + resp, err := e.CountTokens(context.Background(), nil, cliproxyexecutor.Request{ + Model: "gpt-4o", + Payload: body, + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("CountTokens() error: %v", err) + } + if len(resp.Payload) == 0 { + t.Fatal("CountTokens() returned empty payload") + } + // The response should contain a positive token count. 
+ tokens := gjson.GetBytes(resp.Payload, "usage.prompt_tokens").Int() + if tokens <= 0 { + t.Fatalf("expected positive token count, got %d", tokens) + } +} + +func TestCountTokens_ClaudeSourceFormatTranslates(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + body := []byte(`{"model":"claude-sonnet-4","messages":[{"role":"user","content":"Tell me a joke"}],"max_tokens":1024}`) + resp, err := e.CountTokens(context.Background(), nil, cliproxyexecutor.Request{ + Model: "claude-sonnet-4", + Payload: body, + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("claude"), + }) + if err != nil { + t.Fatalf("CountTokens() error: %v", err) + } + // Claude source format → should get input_tokens in response + inputTokens := gjson.GetBytes(resp.Payload, "input_tokens").Int() + if inputTokens <= 0 { + // Fallback: check usage.prompt_tokens (depends on translator registration) + promptTokens := gjson.GetBytes(resp.Payload, "usage.prompt_tokens").Int() + if promptTokens <= 0 { + t.Fatalf("expected positive token count, got payload: %s", resp.Payload) + } + } +} + +func TestGitHubCopilotExecute_ClaudeModelUsesNativeGateway(t *testing.T) { + t.Parallel() + + var gotPath string + var gotQuery string + var gotAuth string + var gotAPIVersion string + var gotEditorVersion string + var gotIntent string + var gotInitiator string + var gotBody []byte + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + gotPath = r.URL.Path + gotQuery = r.URL.RawQuery + gotAuth = r.Header.Get("Authorization") + gotAPIVersion = r.Header.Get("X-Github-Api-Version") + gotEditorVersion = r.Header.Get("Editor-Version") + gotIntent = r.Header.Get("Openai-Intent") + gotInitiator = r.Header.Get("X-Initiator") + gotBody, _ = io.ReadAll(r.Body) + w.Header().Set("Content-Type", "application/json") + _, _ = 
w.Write([]byte(`{"id":"msg_1","type":"message","model":"claude-sonnet-4.6","role":"assistant","content":[{"type":"text","text":"ok"}],"usage":{"input_tokens":1,"output_tokens":1}}`)) + })) + defer server.Close() + + e := NewGitHubCopilotExecutor(&config.Config{}) + e.cache["gh-access-token"] = &cachedAPIToken{ + token: "copilot-api-token", + apiEndpoint: server.URL, + expiresAt: time.Now().Add(time.Hour), + } + auth := &cliproxyauth.Auth{Metadata: map[string]any{"access_token": "gh-access-token"}} + payload := []byte(`{"model":"claude-sonnet-4.6","max_tokens":256,"messages":[{"role":"user","content":"hello"}]}`) + + resp, err := e.Execute(context.Background(), auth, cliproxyexecutor.Request{ + Model: "claude-sonnet-4.6", + Payload: payload, + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("claude"), + OriginalRequest: payload, + }) + if err != nil { + t.Fatalf("Execute() error: %v", err) + } + + if gotPath != "/v1/messages" { + t.Fatalf("path = %q, want %q", gotPath, "/v1/messages") + } + if gotQuery != "beta=true" { + t.Fatalf("query = %q, want %q", gotQuery, "beta=true") + } + if gotAuth != "Bearer copilot-api-token" { + t.Fatalf("Authorization = %q, want %q", gotAuth, "Bearer copilot-api-token") + } + if gotAPIVersion != copilotGitHubAPIVer { + t.Fatalf("X-Github-Api-Version = %q, want %q", gotAPIVersion, copilotGitHubAPIVer) + } + if gotEditorVersion != copilotEditorVersion { + t.Fatalf("Editor-Version = %q, want %q", gotEditorVersion, copilotEditorVersion) + } + if gotIntent != copilotOpenAIIntent { + t.Fatalf("Openai-Intent = %q, want %q", gotIntent, copilotOpenAIIntent) + } + if gotInitiator != "user" { + t.Fatalf("X-Initiator = %q, want %q", gotInitiator, "user") + } + if gjson.GetBytes(gotBody, "model").String() != "claude-sonnet-4.6" { + t.Fatalf("upstream model = %q, want %q", gjson.GetBytes(gotBody, "model").String(), "claude-sonnet-4.6") + } + if gjson.GetBytes(resp.Payload, "content.0.text").String() != "ok" { + 
t.Fatalf("response text = %q, want %q", gjson.GetBytes(resp.Payload, "content.0.text").String(), "ok") + } +} + +func TestGitHubCopilotExecuteStream_ClaudeModelUsesNativeGateway(t *testing.T) { + t.Parallel() + + var gotPath string + var gotInitiator string + var gotAPIVersion string + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + gotPath = r.URL.Path + gotInitiator = r.Header.Get("X-Initiator") + gotAPIVersion = r.Header.Get("X-Github-Api-Version") + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("event: message_start\ndata: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_1\",\"type\":\"message\",\"role\":\"assistant\",\"model\":\"claude-sonnet-4.6\",\"content\":[],\"usage\":{\"input_tokens\":1,\"output_tokens\":0}}}\n\n")) + _, _ = w.Write([]byte("event: content_block_start\ndata: {\"type\":\"content_block_start\",\"index\":0,\"content_block\":{\"type\":\"text\",\"text\":\"\"}}\n\n")) + _, _ = w.Write([]byte("event: content_block_delta\ndata: {\"type\":\"content_block_delta\",\"index\":0,\"delta\":{\"type\":\"text_delta\",\"text\":\"ok\"}}\n\n")) + _, _ = w.Write([]byte("event: content_block_stop\ndata: {\"type\":\"content_block_stop\",\"index\":0}\n\n")) + _, _ = w.Write([]byte("event: message_delta\ndata: {\"type\":\"message_delta\",\"delta\":{\"stop_reason\":\"end_turn\"},\"usage\":{\"output_tokens\":1}}\n\n")) + _, _ = w.Write([]byte("event: message_stop\ndata: {\"type\":\"message_stop\"}\n\n")) + })) + defer server.Close() + + e := NewGitHubCopilotExecutor(&config.Config{}) + e.cache["gh-access-token"] = &cachedAPIToken{ + token: "copilot-api-token", + apiEndpoint: server.URL, + expiresAt: time.Now().Add(time.Hour), + } + auth := &cliproxyauth.Auth{Metadata: map[string]any{"access_token": "gh-access-token"}} + payload := 
[]byte(`{"model":"claude-sonnet-4.6","stream":true,"max_tokens":256,"messages":[{"role":"assistant","content":[{"type":"tool_use","id":"toolu_1","name":"Read","input":{"path":"notes.txt"}}]},{"role":"user","content":[{"type":"tool_result","tool_use_id":"toolu_1","content":"file contents"}]}]}`) + + result, err := e.ExecuteStream(context.Background(), auth, cliproxyexecutor.Request{ + Model: "claude-sonnet-4.6", + Payload: payload, + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("claude"), + OriginalRequest: payload, + }) + if err != nil { + t.Fatalf("ExecuteStream() error: %v", err) + } + + var joined strings.Builder + for chunk := range result.Chunks { + if chunk.Err != nil { + t.Fatalf("stream chunk error: %v", chunk.Err) + } + joined.Write(chunk.Payload) + } + + if gotPath != "/v1/messages" { + t.Fatalf("path = %q, want %q", gotPath, "/v1/messages") + } + if gotInitiator != "agent" { + t.Fatalf("X-Initiator = %q, want %q", gotInitiator, "agent") + } + if gotAPIVersion != copilotGitHubAPIVer { + t.Fatalf("X-Github-Api-Version = %q, want %q", gotAPIVersion, copilotGitHubAPIVer) + } + if !strings.Contains(joined.String(), "message_start") || !strings.Contains(joined.String(), "text_delta") { + t.Fatalf("stream = %q, want Claude SSE payload", joined.String()) + } +} + +func TestCountTokens_EmptyPayload(t *testing.T) { + t.Parallel() + e := &GitHubCopilotExecutor{} + resp, err := e.CountTokens(context.Background(), nil, cliproxyexecutor.Request{ + Model: "gpt-4o", + Payload: []byte(`{"model":"gpt-4o","messages":[]}`), + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("CountTokens() error: %v", err) + } + tokens := gjson.GetBytes(resp.Payload, "usage.prompt_tokens").Int() + // Empty messages should return 0 tokens. 
+ if tokens != 0 { + t.Fatalf("expected 0 tokens for empty messages, got %d", tokens) + } +} + +func TestStripUnsupportedBetas_RemovesContext1M(t *testing.T) { + t.Parallel() + + body := []byte(`{"model":"claude-opus-4.6","betas":["interleaved-thinking-2025-05-14","context-1m-2025-08-07","claude-code-20250219"],"messages":[]}`) + result := stripUnsupportedBetas(body) + + betas := gjson.GetBytes(result, "betas") + if !betas.Exists() { + t.Fatal("betas field should still exist after stripping") + } + for _, item := range betas.Array() { + if item.String() == "context-1m-2025-08-07" { + t.Fatal("context-1m-2025-08-07 should have been stripped") + } + } + // Other betas should be preserved + found := false + for _, item := range betas.Array() { + if item.String() == "interleaved-thinking-2025-05-14" { + found = true + } + } + if !found { + t.Fatal("other betas should be preserved") + } +} + +func TestStripUnsupportedBetas_NoBetasField(t *testing.T) { + t.Parallel() + + body := []byte(`{"model":"gpt-4o","messages":[]}`) + result := stripUnsupportedBetas(body) + + // Should be unchanged + if string(result) != string(body) { + t.Fatalf("body should be unchanged when no betas field exists, got %s", string(result)) + } +} + +func TestStripUnsupportedBetas_MetadataBetas(t *testing.T) { + t.Parallel() + + body := []byte(`{"model":"claude-opus-4.6","metadata":{"betas":["context-1m-2025-08-07","other-beta"]},"messages":[]}`) + result := stripUnsupportedBetas(body) + + betas := gjson.GetBytes(result, "metadata.betas") + if !betas.Exists() { + t.Fatal("metadata.betas field should still exist after stripping") + } + for _, item := range betas.Array() { + if item.String() == "context-1m-2025-08-07" { + t.Fatal("context-1m-2025-08-07 should have been stripped from metadata.betas") + } + } + if betas.Array()[0].String() != "other-beta" { + t.Fatal("other betas in metadata.betas should be preserved") + } +} + +func TestStripUnsupportedBetas_AllBetasStripped(t *testing.T) { + 
t.Parallel() + + body := []byte(`{"model":"claude-opus-4.6","betas":["context-1m-2025-08-07"],"messages":[]}`) + result := stripUnsupportedBetas(body) + + betas := gjson.GetBytes(result, "betas") + if betas.Exists() { + t.Fatal("betas field should be deleted when all betas are stripped") + } +} + +func TestCopilotModelEntry_Limits(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + capabilities map[string]any + wantNil bool + wantPrompt int + wantOutput int + wantContext int + }{ + { + name: "nil capabilities", + capabilities: nil, + wantNil: true, + }, + { + name: "no limits key", + capabilities: map[string]any{"family": "claude-opus-4.6"}, + wantNil: true, + }, + { + name: "limits is not a map", + capabilities: map[string]any{"limits": "invalid"}, + wantNil: true, + }, + { + name: "all zero values", + capabilities: map[string]any{ + "limits": map[string]any{ + "max_context_window_tokens": float64(0), + "max_prompt_tokens": float64(0), + "max_output_tokens": float64(0), + }, + }, + wantNil: true, + }, + { + name: "individual account limits (128K prompt)", + capabilities: map[string]any{ + "limits": map[string]any{ + "max_context_window_tokens": float64(144000), + "max_prompt_tokens": float64(128000), + "max_output_tokens": float64(64000), + }, + }, + wantNil: false, + wantPrompt: 128000, + wantOutput: 64000, + wantContext: 144000, + }, + { + name: "business account limits (168K prompt)", + capabilities: map[string]any{ + "limits": map[string]any{ + "max_context_window_tokens": float64(200000), + "max_prompt_tokens": float64(168000), + "max_output_tokens": float64(32000), + }, + }, + wantNil: false, + wantPrompt: 168000, + wantOutput: 32000, + wantContext: 200000, + }, + { + name: "partial limits (only prompt)", + capabilities: map[string]any{ + "limits": map[string]any{ + "max_prompt_tokens": float64(128000), + }, + }, + wantNil: false, + wantPrompt: 128000, + wantOutput: 0, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t 
*testing.T) { + t.Parallel() + entry := copilotauth.CopilotModelEntry{ + ID: "claude-opus-4.6", + Capabilities: tt.capabilities, + } + limits := entry.Limits() + if tt.wantNil { + if limits != nil { + t.Fatalf("expected nil limits, got %+v", limits) + } + return + } + if limits == nil { + t.Fatal("expected non-nil limits, got nil") + } + if limits.MaxPromptTokens != tt.wantPrompt { + t.Errorf("MaxPromptTokens = %d, want %d", limits.MaxPromptTokens, tt.wantPrompt) + } + if limits.MaxOutputTokens != tt.wantOutput { + t.Errorf("MaxOutputTokens = %d, want %d", limits.MaxOutputTokens, tt.wantOutput) + } + if tt.wantContext > 0 && limits.MaxContextWindowTokens != tt.wantContext { + t.Errorf("MaxContextWindowTokens = %d, want %d", limits.MaxContextWindowTokens, tt.wantContext) + } + }) + } +} diff --git a/internal/runtime/executor/gitlab_executor.go b/internal/runtime/executor/gitlab_executor.go new file mode 100644 index 0000000000..585f8d9b94 --- /dev/null +++ b/internal/runtime/executor/gitlab_executor.go @@ -0,0 +1,1374 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/gitlab" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + "github.com/tidwall/gjson" +) + +const ( + gitLabProviderKey = "gitlab" + gitLabAuthMethodOAuth = "oauth" + gitLabAuthMethodPAT = "pat" + gitLabChatEndpoint = "/api/v4/chat/completions" + gitLabCodeSuggestionsEndpoint = "/api/v4/code_suggestions/completions" + 
gitLabSSEStreamingHeader = "X-Supports-Sse-Streaming" + gitLabContext1MBeta = "context-1m-2025-08-07" + gitLabNativeUserAgent = "CLIProxyAPIPlus/GitLab-Duo" +) + +type GitLabExecutor struct { + cfg *config.Config +} + +type gitLabCatalogModel struct { + ID string + DisplayName string + Provider string +} + +type gitLabPrompt struct { + Instruction string + FileName string + ContentAboveCursor string + ChatContext []map[string]any + CodeSuggestionContext []map[string]any +} + +type gitLabOpenAIStreamState struct { + ID string + Model string + Created int64 + LastFullText string + Started bool + Finished bool +} + +var gitLabAgenticCatalog = []gitLabCatalogModel{ + {ID: "duo-chat-gpt-5-1", DisplayName: "GitLab Duo (GPT-5.1)", Provider: "openai"}, + {ID: "duo-chat-opus-4-6", DisplayName: "GitLab Duo (Claude Opus 4.6)", Provider: "anthropic"}, + {ID: "duo-chat-opus-4-5", DisplayName: "GitLab Duo (Claude Opus 4.5)", Provider: "anthropic"}, + {ID: "duo-chat-sonnet-4-6", DisplayName: "GitLab Duo (Claude Sonnet 4.6)", Provider: "anthropic"}, + {ID: "duo-chat-sonnet-4-5", DisplayName: "GitLab Duo (Claude Sonnet 4.5)", Provider: "anthropic"}, + {ID: "duo-chat-gpt-5-mini", DisplayName: "GitLab Duo (GPT-5 Mini)", Provider: "openai"}, + {ID: "duo-chat-gpt-5-2", DisplayName: "GitLab Duo (GPT-5.2)", Provider: "openai"}, + {ID: "duo-chat-gpt-5-2-codex", DisplayName: "GitLab Duo (GPT-5.2 Codex)", Provider: "openai"}, + {ID: "duo-chat-gpt-5-codex", DisplayName: "GitLab Duo (GPT-5 Codex)", Provider: "openai"}, + {ID: "duo-chat-haiku-4-5", DisplayName: "GitLab Duo (Claude Haiku 4.5)", Provider: "anthropic"}, +} + +var gitLabModelAliases = map[string]string{ + "duo-chat-haiku-4-6": "duo-chat-haiku-4-5", +} + +func NewGitLabExecutor(cfg *config.Config) *GitLabExecutor { + return &GitLabExecutor{cfg: cfg} +} + +func (e *GitLabExecutor) Identifier() string { return gitLabProviderKey } + +func (e *GitLabExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req 
cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + if nativeExec, nativeAuth, nativeReq, ok := e.nativeGateway(auth, req); ok { + return nativeExec.Execute(ctx, nativeAuth, nativeReq, opts) + } + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + translated, err := e.translateToOpenAI(req, opts) + if err != nil { + return resp, err + } + prompt := buildGitLabPrompt(translated) + if strings.TrimSpace(prompt.Instruction) == "" && strings.TrimSpace(prompt.ContentAboveCursor) == "" { + err = statusErr{code: http.StatusBadRequest, msg: "gitlab duo executor: request has no usable text content"} + return resp, err + } + + text, err := e.invokeText(ctx, auth, prompt) + if err != nil { + return resp, err + } + + responseModel := gitLabResolvedModel(auth, req.Model) + openAIResponse := buildGitLabOpenAIResponse(responseModel, text, translated) + reporter.publish(ctx, parseOpenAIUsage(openAIResponse)) + reporter.ensurePublished(ctx) + + var param any + out := sdktranslator.TranslateNonStream( + ctx, + sdktranslator.FromString("openai"), + opts.SourceFormat, + req.Model, + opts.OriginalRequest, + translated, + openAIResponse, + &param, + ) + return cliproxyexecutor.Response{Payload: []byte(out), Headers: make(http.Header)}, nil +} + +func (e *GitLabExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + if nativeExec, nativeAuth, nativeReq, ok := e.nativeGateway(auth, req); ok { + return nativeExec.ExecuteStream(ctx, nativeAuth, nativeReq, opts) + } + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + translated, err := e.translateToOpenAI(req, opts) + if err != nil { +
return nil, err + } + prompt := buildGitLabPrompt(translated) + if strings.TrimSpace(prompt.Instruction) == "" && strings.TrimSpace(prompt.ContentAboveCursor) == "" { + return nil, statusErr{code: http.StatusBadRequest, msg: "gitlab duo executor: request has no usable text content"} + } + + if result, streamErr := e.requestCodeSuggestionsStream(ctx, auth, prompt, translated, req, opts, reporter); streamErr == nil { + return result, nil + } else if !shouldFallbackToCodeSuggestions(streamErr) { + return nil, streamErr + } + + text, err := e.invokeText(ctx, auth, prompt) + if err != nil { + return nil, err + } + responseModel := gitLabResolvedModel(auth, req.Model) + openAIResponse := buildGitLabOpenAIResponse(responseModel, text, translated) + reporter.publish(ctx, parseOpenAIUsage(openAIResponse)) + reporter.ensurePublished(ctx) + + out := make(chan cliproxyexecutor.StreamChunk, 8) + go func() { + defer close(out) + var param any + lines := buildGitLabOpenAIStream(responseModel, text) + for _, line := range lines { + chunks := sdktranslator.TranslateStream( + ctx, + sdktranslator.FromString("openai"), + opts.SourceFormat, + req.Model, + opts.OriginalRequest, + translated, + []byte(line), + &param, + ) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])} + } + } + }() + return &cliproxyexecutor.StreamResult{Headers: make(http.Header), Chunks: out}, nil +} + +func (e *GitLabExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if auth == nil { + return nil, fmt.Errorf("gitlab duo executor: auth is nil") + } + baseURL := gitLabBaseURL(auth) + token := gitLabPrimaryToken(auth) + if baseURL == "" || token == "" { + return nil, fmt.Errorf("gitlab duo executor: missing base URL or token") + } + + client := gitlab.NewAuthClient(e.cfg) + method := strings.ToLower(strings.TrimSpace(gitLabMetadataString(auth.Metadata, "auth_method", "auth_kind"))) + if method == "" { + method = gitLabAuthMethodOAuth +
} + + if method == gitLabAuthMethodOAuth { + if refreshed, refreshErr := e.refreshOAuthToken(ctx, client, auth, baseURL); refreshErr == nil && refreshed != nil { + token = refreshed.AccessToken + applyGitLabTokenMetadata(auth.Metadata, refreshed) + } + } + + direct, err := client.FetchDirectAccess(ctx, baseURL, token) + if err != nil && method == gitLabAuthMethodOAuth { + if refreshed, refreshErr := e.refreshOAuthToken(ctx, client, auth, baseURL); refreshErr == nil && refreshed != nil { + token = refreshed.AccessToken + applyGitLabTokenMetadata(auth.Metadata, refreshed) + direct, err = client.FetchDirectAccess(ctx, baseURL, token) + } + } + if err != nil { + return nil, err + } + + if auth.Metadata == nil { + auth.Metadata = make(map[string]any) + } + auth.Metadata["type"] = gitLabProviderKey + auth.Metadata["auth_method"] = method + auth.Metadata["auth_kind"] = gitLabAuthKind(method) + auth.Metadata["base_url"] = gitlab.NormalizeBaseURL(baseURL) + auth.Metadata["last_refresh"] = time.Now().UTC().Format(time.RFC3339) + mergeGitLabDirectAccessMetadata(auth.Metadata, direct) + return auth, nil +} + +func (e *GitLabExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + if nativeExec, nativeAuth, nativeReq, ok := e.nativeGateway(auth, req); ok { + return nativeExec.CountTokens(ctx, nativeAuth, nativeReq, opts) + } + baseModel := thinking.ParseSuffix(req.Model).ModelName + translated := sdktranslator.TranslateRequest(opts.SourceFormat, sdktranslator.FromString("openai"), baseModel, req.Payload, false) + enc, err := tokenizerForModel(baseModel) + if err != nil { + return cliproxyexecutor.Response{}, fmt.Errorf("gitlab duo executor: tokenizer init failed: %w", err) + } + count, err := countOpenAIChatTokens(enc, translated) + if err != nil { + return cliproxyexecutor.Response{}, err + } + return cliproxyexecutor.Response{Payload: buildOpenAIUsageJSON(count), 
Headers: make(http.Header)}, nil +} + +func (e *GitLabExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("gitlab duo executor: request is nil") + } + if nativeExec, nativeAuth := e.nativeGatewayHTTP(auth); nativeExec != nil { + return nativeExec.HttpRequest(ctx, nativeAuth, req) + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if token := gitLabPrimaryToken(auth); token != "" { + httpReq.Header.Set("Authorization", "Bearer "+token) + } + return newProxyAwareHTTPClient(ctx, e.cfg, auth, 0).Do(httpReq) +} + +func (e *GitLabExecutor) translateToOpenAI(req cliproxyexecutor.Request, opts cliproxyexecutor.Options) ([]byte, error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + return sdktranslator.TranslateRequest(opts.SourceFormat, sdktranslator.FromString("openai"), baseModel, req.Payload, opts.Stream), nil +} + +func (e *GitLabExecutor) nativeGateway( + auth *cliproxyauth.Auth, + req cliproxyexecutor.Request, +) (cliproxyauth.ProviderExecutor, *cliproxyauth.Auth, cliproxyexecutor.Request, bool) { + if nativeAuth, ok := buildGitLabAnthropicGatewayAuth(auth, req.Model); ok { + nativeReq := req + nativeReq.Model = gitLabResolvedModel(auth, req.Model) + return NewClaudeExecutor(e.cfg), nativeAuth, nativeReq, true + } + if nativeAuth, ok := buildGitLabOpenAIGatewayAuth(auth, req.Model); ok { + nativeReq := req + nativeReq.Model = gitLabResolvedModel(auth, req.Model) + return NewCodexExecutor(e.cfg), nativeAuth, nativeReq, true + } + return nil, nil, req, false +} + +func (e *GitLabExecutor) nativeGatewayHTTP(auth *cliproxyauth.Auth) (cliproxyauth.ProviderExecutor, *cliproxyauth.Auth) { + if nativeAuth, ok := buildGitLabAnthropicGatewayAuth(auth, ""); ok { + return NewClaudeExecutor(e.cfg), nativeAuth + } + if nativeAuth, ok := buildGitLabOpenAIGatewayAuth(auth, ""); ok { + return NewCodexExecutor(e.cfg), nativeAuth + } + 
return nil, nil +} + +func (e *GitLabExecutor) invokeText(ctx context.Context, auth *cliproxyauth.Auth, prompt gitLabPrompt) (string, error) { + if text, err := e.requestChat(ctx, auth, prompt); err == nil { + return text, nil + } else if !shouldFallbackToCodeSuggestions(err) { + return "", err + } + return e.requestCodeSuggestions(ctx, auth, prompt) +} + +func (e *GitLabExecutor) requestChat(ctx context.Context, auth *cliproxyauth.Auth, prompt gitLabPrompt) (string, error) { + body := map[string]any{ + "content": prompt.Instruction, + "with_clean_history": true, + } + if len(prompt.ChatContext) > 0 { + body["additional_context"] = prompt.ChatContext + } + return e.doJSONTextRequest(ctx, auth, gitLabChatEndpoint, body) +} + +func (e *GitLabExecutor) requestCodeSuggestions(ctx context.Context, auth *cliproxyauth.Auth, prompt gitLabPrompt) (string, error) { + contentAbove := strings.TrimSpace(prompt.ContentAboveCursor) + if contentAbove == "" { + contentAbove = prompt.Instruction + } + body := map[string]any{ + "current_file": map[string]any{ + "file_name": prompt.FileName, + "content_above_cursor": contentAbove, + "content_below_cursor": "", + }, + "intent": "generation", + "generation_type": "small_file", + "user_instruction": prompt.Instruction, + "stream": false, + } + if len(prompt.CodeSuggestionContext) > 0 { + body["context"] = prompt.CodeSuggestionContext + } + return e.doJSONTextRequest(ctx, auth, gitLabCodeSuggestionsEndpoint, body) +} + +func (e *GitLabExecutor) requestCodeSuggestionsStream( + ctx context.Context, + auth *cliproxyauth.Auth, + prompt gitLabPrompt, + translated []byte, + req cliproxyexecutor.Request, + opts cliproxyexecutor.Options, + reporter *usageReporter, +) (*cliproxyexecutor.StreamResult, error) { + contentAbove := strings.TrimSpace(prompt.ContentAboveCursor) + if contentAbove == "" { + contentAbove = prompt.Instruction + } + body := map[string]any{ + "current_file": map[string]any{ + "file_name": prompt.FileName, + 
"content_above_cursor": contentAbove, + "content_below_cursor": "", + }, + "intent": "generation", + "generation_type": "small_file", + "user_instruction": prompt.Instruction, + "stream": true, + } + if len(prompt.CodeSuggestionContext) > 0 { + body["context"] = prompt.CodeSuggestionContext + } + + httpResp, bodyRaw, err := e.doJSONRequest(ctx, auth, gitLabCodeSuggestionsEndpoint, body, "text/event-stream") + if err != nil { + return nil, err + } + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + defer func() { _ = httpResp.Body.Close() }() + respBody, readErr := io.ReadAll(httpResp.Body) + if readErr != nil { + recordAPIResponseError(ctx, e.cfg, readErr) + return nil, readErr + } + appendAPIResponseChunk(ctx, e.cfg, respBody) + return nil, statusErr{code: httpResp.StatusCode, msg: strings.TrimSpace(string(respBody))} + } + + responseModel := gitLabResolvedModel(auth, req.Model) + out := make(chan cliproxyexecutor.StreamChunk, 16) + go func() { + defer close(out) + defer func() { _ = httpResp.Body.Close() }() + + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(nil, 52_428_800) + + var ( + param any + eventName string + state gitLabOpenAIStreamState + ) + for scanner.Scan() { + line := bytes.Clone(scanner.Bytes()) + appendAPIResponseChunk(ctx, e.cfg, line) + trimmed := bytes.TrimSpace(line) + if len(trimmed) == 0 { + continue + } + if bytes.HasPrefix(trimmed, []byte("event:")) { + eventName = strings.TrimSpace(string(trimmed[len("event:"):])) + continue + } + if !bytes.HasPrefix(trimmed, []byte("data:")) { + continue + } + payload := bytes.TrimSpace(trimmed[len("data:"):]) + normalized := normalizeGitLabStreamChunk(eventName, payload, responseModel, &state) + eventName = "" + for _, item := range normalized { + if detail, ok := parseOpenAIStreamUsage(item); ok { + reporter.publish(ctx, detail) + } + chunks := sdktranslator.TranslateStream( + ctx, + sdktranslator.FromString("openai"), + opts.SourceFormat, + req.Model, + opts.OriginalRequest, 
+ translated, + item, + ¶m, + ) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])} + } + } + } + if errScan := scanner.Err(); errScan != nil { + recordAPIResponseError(ctx, e.cfg, errScan) + reporter.publishFailure(ctx) + out <- cliproxyexecutor.StreamChunk{Err: errScan} + return + } + if !state.Finished { + for _, item := range finalizeGitLabStream(responseModel, &state) { + chunks := sdktranslator.TranslateStream( + ctx, + sdktranslator.FromString("openai"), + opts.SourceFormat, + req.Model, + opts.OriginalRequest, + translated, + item, + ¶m, + ) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])} + } + } + } + reporter.ensurePublished(ctx) + }() + + return &cliproxyexecutor.StreamResult{ + Headers: cloneGitLabStreamHeaders(httpResp.Header, bodyRaw), + Chunks: out, + }, nil +} + +func (e *GitLabExecutor) doJSONTextRequest(ctx context.Context, auth *cliproxyauth.Auth, endpoint string, payload map[string]any) (string, error) { + resp, _, err := e.doJSONRequest(ctx, auth, endpoint, payload, "application/json") + if err != nil { + return "", err + } + defer func() { _ = resp.Body.Close() }() + + respBody, err := io.ReadAll(resp.Body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return "", err + } + appendAPIResponseChunk(ctx, e.cfg, respBody) + + if resp.StatusCode < 200 || resp.StatusCode >= 300 { + return "", statusErr{code: resp.StatusCode, msg: strings.TrimSpace(string(respBody))} + } + + text, err := parseGitLabTextResponse(endpoint, respBody) + if err != nil { + return "", err + } + return strings.TrimSpace(text), nil +} + +func (e *GitLabExecutor) doJSONRequest( + ctx context.Context, + auth *cliproxyauth.Auth, + endpoint string, + payload map[string]any, + accept string, +) (*http.Response, []byte, error) { + token := gitLabPrimaryToken(auth) + baseURL := gitLabBaseURL(auth) + if token == "" || baseURL == "" { + return nil, nil, statusErr{code: 
http.StatusUnauthorized, msg: "gitlab duo executor: missing credentials"} + } + + body, err := json.Marshal(payload) + if err != nil { + return nil, nil, fmt.Errorf("gitlab duo executor: marshal request failed: %w", err) + } + + url := strings.TrimRight(baseURL, "/") + endpoint + req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body)) + if err != nil { + return nil, nil, err + } + req.Header.Set("Authorization", "Bearer "+token) + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", accept) + req.Header.Set("User-Agent", "CLIProxyAPI/GitLab-Duo") + applyGitLabRequestHeaders(req, auth) + if strings.EqualFold(accept, "text/event-stream") { + req.Header.Set("Cache-Control", "no-cache") + req.Header.Set(gitLabSSEStreamingHeader, "true") + req.Header.Set("Accept-Encoding", "identity") + } + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: req.Header.Clone(), + Body: body, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + resp, err := httpClient.Do(req) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return nil, body, err + } + recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone()) + return resp, body, nil +} + +func (e *GitLabExecutor) refreshOAuthToken(ctx context.Context, client *gitlab.AuthClient, auth *cliproxyauth.Auth, baseURL string) (*gitlab.TokenResponse, error) { + if auth == nil { + return nil, fmt.Errorf("gitlab duo executor: auth is nil") + } + refreshToken := gitLabMetadataString(auth.Metadata, "refresh_token") + if refreshToken == "" { + return nil, fmt.Errorf("gitlab duo executor: refresh token missing") + } + if 
!gitLabOAuthTokenNeedsRefresh(auth.Metadata) && gitLabPrimaryToken(auth) != "" { + return nil, nil + } + return client.RefreshTokens( + ctx, + baseURL, + gitLabMetadataString(auth.Metadata, "oauth_client_id"), + gitLabMetadataString(auth.Metadata, "oauth_client_secret"), + refreshToken, + ) +} + +func buildGitLabPrompt(payload []byte) gitLabPrompt { + root := gjson.ParseBytes(payload) + prompt := gitLabPrompt{ + FileName: "prompt.txt", + } + + msgs := root.Get("messages") + if msgs.Exists() && msgs.IsArray() { + systemIndex := 0 + contextIndex := 0 + transcript := make([]string, 0, len(msgs.Array())) + var lastUser string + msgs.ForEach(func(_, msg gjson.Result) bool { + role := strings.TrimSpace(msg.Get("role").String()) + if role == "" { + role = "user" + } + content := openAIContentText(msg.Get("content")) + if content == "" { + return true + } + switch role { + case "system": + systemIndex++ + prompt.ChatContext = append(prompt.ChatContext, map[string]any{ + "category": "snippet", + "id": fmt.Sprintf("system-%d", systemIndex), + "content": content, + }) + case "user": + lastUser = content + contextIndex++ + prompt.CodeSuggestionContext = append(prompt.CodeSuggestionContext, map[string]any{ + "type": "snippet", + "name": fmt.Sprintf("user-%d", contextIndex), + "content": content, + }) + transcript = append(transcript, "User:\n"+content) + default: + contextIndex++ + prompt.ChatContext = append(prompt.ChatContext, map[string]any{ + "category": "snippet", + "id": fmt.Sprintf("%s-%d", role, contextIndex), + "content": content, + }) + prompt.CodeSuggestionContext = append(prompt.CodeSuggestionContext, map[string]any{ + "type": "snippet", + "name": fmt.Sprintf("%s-%d", role, contextIndex), + "content": content, + }) + transcript = append(transcript, strings.Title(role)+":\n"+content) + } + return true + }) + prompt.Instruction = strings.TrimSpace(lastUser) + prompt.ContentAboveCursor = truncateGitLabPrompt(strings.Join(transcript, "\n\n"), 12000) + } + + if 
prompt.Instruction == "" { + for _, key := range []string{"prompt", "input", "instructions"} { + if value := strings.TrimSpace(root.Get(key).String()); value != "" { + prompt.Instruction = value + break + } + } + } + if prompt.ContentAboveCursor == "" { + prompt.ContentAboveCursor = prompt.Instruction + } + prompt.Instruction = truncateGitLabPrompt(prompt.Instruction, 4000) + prompt.ContentAboveCursor = truncateGitLabPrompt(prompt.ContentAboveCursor, 12000) + return prompt +} + +func openAIContentText(content gjson.Result) string { + segments := make([]string, 0, 8) + collectOpenAIContent(content, &segments) + return strings.TrimSpace(strings.Join(segments, "\n")) +} + +func truncateGitLabPrompt(value string, limit int) string { + value = strings.TrimSpace(value) + if limit <= 0 || len(value) <= limit { + return value + } + return strings.TrimSpace(value[:limit]) +} + +func parseGitLabTextResponse(endpoint string, body []byte) (string, error) { + if endpoint == gitLabChatEndpoint { + var text string + if err := json.Unmarshal(body, &text); err == nil { + return text, nil + } + if value := strings.TrimSpace(gjson.GetBytes(body, "response").String()); value != "" { + return value, nil + } + } + if value := strings.TrimSpace(gjson.GetBytes(body, "choices.0.text").String()); value != "" { + return value, nil + } + if value := strings.TrimSpace(gjson.GetBytes(body, "response").String()); value != "" { + return value, nil + } + var plain string + if err := json.Unmarshal(body, &plain); err == nil && strings.TrimSpace(plain) != "" { + return plain, nil + } + return "", fmt.Errorf("gitlab duo executor: upstream returned no text payload") +} + +func applyGitLabRequestHeaders(req *http.Request, auth *cliproxyauth.Auth) { + if req == nil { + return + } + if auth != nil { + util.ApplyCustomHeadersFromAttrs(req, auth.Attributes) + } + for key, value := range gitLabGatewayHeaders(auth, "") { + if key == "" || value == "" { + continue + } + req.Header.Set(key, value) + } +} + 
+func gitLabGatewayHeaders(auth *cliproxyauth.Auth, targetProvider string) map[string]string { + out := make(map[string]string) + if auth != nil && auth.Metadata != nil { + raw, ok := auth.Metadata["duo_gateway_headers"] + if ok { + switch typed := raw.(type) { + case map[string]string: + for key, value := range typed { + key = strings.TrimSpace(key) + value = strings.TrimSpace(value) + if key != "" && value != "" { + out[key] = value + } + } + case map[string]any: + for key, value := range typed { + key = strings.TrimSpace(key) + if key == "" { + continue + } + strValue := strings.TrimSpace(fmt.Sprint(value)) + if strValue != "" { + out[key] = strValue + } + } + } + } + } + if _, ok := out["User-Agent"]; !ok { + out["User-Agent"] = gitLabNativeUserAgent + } + if strings.EqualFold(strings.TrimSpace(targetProvider), "openai") { + if _, ok := out["anthropic-beta"]; !ok { + out["anthropic-beta"] = gitLabContext1MBeta + } + } + if len(out) == 0 { + return nil + } + return out +} + +func cloneGitLabStreamHeaders(headers http.Header, _ []byte) http.Header { + cloned := headers.Clone() + if cloned == nil { + cloned = make(http.Header) + } + cloned.Set("Content-Type", "text/event-stream") + return cloned +} + +func normalizeGitLabStreamChunk(eventName string, payload []byte, fallbackModel string, state *gitLabOpenAIStreamState) [][]byte { + payload = bytes.TrimSpace(payload) + if len(payload) == 0 { + return nil + } + if bytes.Equal(payload, []byte("[DONE]")) { + return finalizeGitLabStream(fallbackModel, state) + } + + root := gjson.ParseBytes(payload) + if root.Exists() { + if obj := root.Get("object").String(); obj == "chat.completion.chunk" { + return [][]byte{append([]byte("data: "), bytes.Clone(payload)...)} + } + if root.Get("choices.0.delta").Exists() || root.Get("choices.0.finish_reason").Exists() { + return [][]byte{append([]byte("data: "), bytes.Clone(payload)...)} + } + } + + state.ensureInitialized(fallbackModel, root) + + switch strings.TrimSpace(eventName) { 
+ case "stream_end": + return finalizeGitLabStream(fallbackModel, state) + case "stream_start": + if text := extractGitLabStreamText(root); text != "" { + return state.emitText(text) + } + return nil + } + + if done := root.Get("done"); done.Exists() && done.Bool() { + return finalizeGitLabStream(fallbackModel, state) + } + if finishReason := strings.TrimSpace(root.Get("finish_reason").String()); finishReason != "" { + out := state.emitText(extractGitLabStreamText(root)) + return append(out, state.finish(finishReason)...) + } + + return state.emitText(extractGitLabStreamText(root)) +} + +func extractGitLabStreamText(root gjson.Result) string { + for _, key := range []string{ + "choices.0.delta.content", + "choices.0.text", + "delta.content", + "content_chunk", + "content", + "text", + "response", + "completion", + } { + if value := root.Get(key).String(); strings.TrimSpace(value) != "" { + return value + } + } + return "" +} + +func finalizeGitLabStream(fallbackModel string, state *gitLabOpenAIStreamState) [][]byte { + if state == nil { + return nil + } + state.ensureInitialized(fallbackModel, gjson.Result{}) + return state.finish("stop") +} + +func (s *gitLabOpenAIStreamState) ensureInitialized(fallbackModel string, root gjson.Result) { + if s == nil { + return + } + if s.ID == "" { + s.ID = fmt.Sprintf("gitlab-%d", time.Now().UnixNano()) + } + if s.Created == 0 { + s.Created = time.Now().Unix() + } + if s.Model == "" { + for _, key := range []string{"model.name", "model", "metadata.model_name"} { + if value := strings.TrimSpace(root.Get(key).String()); value != "" { + s.Model = value + break + } + } + } + if s.Model == "" { + s.Model = fallbackModel + } +} + +func (s *gitLabOpenAIStreamState) emitText(text string) [][]byte { + if s == nil { + return nil + } + if strings.TrimSpace(text) == "" { + return nil + } + delta := s.nextDelta(text) + if delta == "" { + return nil + } + out := make([][]byte, 0, 2) + if !s.Started { + out = append(out, 
s.buildChunk(map[string]any{"role": "assistant"}, "")) + s.Started = true + } + out = append(out, s.buildChunk(map[string]any{"content": delta}, "")) + return out +} + +func (s *gitLabOpenAIStreamState) finish(reason string) [][]byte { + if s == nil || s.Finished { + return nil + } + if !s.Started { + s.Started = true + } + s.Finished = true + return [][]byte{ + s.buildChunk(map[string]any{}, reason), + []byte("data: [DONE]"), + } +} + +func (s *gitLabOpenAIStreamState) nextDelta(text string) string { + if s == nil { + return text + } + if strings.TrimSpace(text) == "" { + return "" + } + if s.LastFullText == "" { + s.LastFullText = text + return text + } + if text == s.LastFullText { + return "" + } + if strings.HasPrefix(text, s.LastFullText) { + delta := text[len(s.LastFullText):] + s.LastFullText = text + return delta + } + s.LastFullText += text + return text +} + +func (s *gitLabOpenAIStreamState) buildChunk(delta map[string]any, finishReason string) []byte { + payload := map[string]any{ + "id": s.ID, + "object": "chat.completion.chunk", + "created": s.Created, + "model": s.Model, + "choices": []map[string]any{{ + "index": 0, + "delta": delta, + }}, + } + if finishReason != "" { + payload["choices"] = []map[string]any{{ + "index": 0, + "delta": delta, + "finish_reason": finishReason, + }} + } + raw, _ := json.Marshal(payload) + return append([]byte("data: "), raw...) 
+} + +func shouldFallbackToCodeSuggestions(err error) bool { + if err == nil { + return false + } + status, ok := err.(interface{ StatusCode() int }) + if !ok { + return false + } + switch status.StatusCode() { + case http.StatusForbidden, http.StatusNotFound, http.StatusMethodNotAllowed, http.StatusNotImplemented: + return true + default: + return false + } +} + +func buildGitLabOpenAIResponse(model, text string, translatedReq []byte) []byte { + promptTokens, completionTokens := gitLabUsage(model, translatedReq, text) + payload := map[string]any{ + "id": fmt.Sprintf("gitlab-%d", time.Now().UnixNano()), + "object": "chat.completion", + "created": time.Now().Unix(), + "model": model, + "choices": []map[string]any{{ + "index": 0, + "message": map[string]any{ + "role": "assistant", + "content": text, + }, + "finish_reason": "stop", + }}, + "usage": map[string]any{ + "prompt_tokens": promptTokens, + "completion_tokens": completionTokens, + "total_tokens": promptTokens + completionTokens, + }, + } + raw, _ := json.Marshal(payload) + return raw +} + +func buildGitLabOpenAIStream(model, text string) []string { + now := time.Now().Unix() + id := fmt.Sprintf("gitlab-%d", time.Now().UnixNano()) + chunks := []map[string]any{ + { + "id": id, + "object": "chat.completion.chunk", + "created": now, + "model": model, + "choices": []map[string]any{{ + "index": 0, + "delta": map[string]any{"role": "assistant"}, + }}, + }, + { + "id": id, + "object": "chat.completion.chunk", + "created": now, + "model": model, + "choices": []map[string]any{{ + "index": 0, + "delta": map[string]any{"content": text}, + }}, + }, + { + "id": id, + "object": "chat.completion.chunk", + "created": now, + "model": model, + "choices": []map[string]any{{ + "index": 0, + "delta": map[string]any{}, + "finish_reason": "stop", + }}, + }, + } + lines := make([]string, 0, len(chunks)+1) + for _, chunk := range chunks { + raw, _ := json.Marshal(chunk) + lines = append(lines, "data: "+string(raw)) + } + lines = 
append(lines, "data: [DONE]") + return lines +} + +func gitLabUsage(model string, translatedReq []byte, text string) (int64, int64) { + enc, err := tokenizerForModel(model) + if err != nil { + return 0, 0 + } + promptTokens, err := countOpenAIChatTokens(enc, translatedReq) + if err != nil { + promptTokens = 0 + } + completionCount, err := enc.Count(strings.TrimSpace(text)) + if err != nil { + return promptTokens, 0 + } + return promptTokens, int64(completionCount) +} + +func buildGitLabAnthropicGatewayAuth(auth *cliproxyauth.Auth, requestedModel string) (*cliproxyauth.Auth, bool) { + if !gitLabUsesAnthropicGateway(auth, requestedModel) { + return nil, false + } + baseURL := gitLabAnthropicGatewayBaseURL(auth) + token := gitLabMetadataString(auth.Metadata, "duo_gateway_token") + if baseURL == "" || token == "" { + return nil, false + } + + nativeAuth := auth.Clone() + nativeAuth.Provider = "claude" + if nativeAuth.Attributes == nil { + nativeAuth.Attributes = make(map[string]string) + } + nativeAuth.Attributes["api_key"] = token + nativeAuth.Attributes["base_url"] = baseURL + nativeAuth.Attributes["gitlab_duo_force_context_1m"] = "true" + for key, value := range gitLabGatewayHeaders(auth, "anthropic") { + if key == "" || value == "" { + continue + } + nativeAuth.Attributes["header:"+key] = value + } + return nativeAuth, true +} + +func buildGitLabOpenAIGatewayAuth(auth *cliproxyauth.Auth, requestedModel string) (*cliproxyauth.Auth, bool) { + if !gitLabUsesOpenAIGateway(auth, requestedModel) { + return nil, false + } + baseURL := gitLabOpenAIGatewayBaseURL(auth) + token := gitLabMetadataString(auth.Metadata, "duo_gateway_token") + if baseURL == "" || token == "" { + return nil, false + } + + nativeAuth := auth.Clone() + nativeAuth.Provider = "codex" + if nativeAuth.Attributes == nil { + nativeAuth.Attributes = make(map[string]string) + } + nativeAuth.Attributes["api_key"] = token + nativeAuth.Attributes["base_url"] = baseURL + for key, value := range 
gitLabGatewayHeaders(auth, "openai") { + if key == "" || value == "" { + continue + } + nativeAuth.Attributes["header:"+key] = value + } + return nativeAuth, true +} + +func gitLabUsesAnthropicGateway(auth *cliproxyauth.Auth, requestedModel string) bool { + if auth == nil || auth.Metadata == nil { + return false + } + provider := gitLabGatewayProvider(auth, requestedModel) + return provider == "anthropic" && + gitLabMetadataString(auth.Metadata, "duo_gateway_base_url") != "" && + gitLabMetadataString(auth.Metadata, "duo_gateway_token") != "" +} + +func gitLabUsesOpenAIGateway(auth *cliproxyauth.Auth, requestedModel string) bool { + if auth == nil || auth.Metadata == nil { + return false + } + provider := gitLabGatewayProvider(auth, requestedModel) + return provider == "openai" && + gitLabMetadataString(auth.Metadata, "duo_gateway_base_url") != "" && + gitLabMetadataString(auth.Metadata, "duo_gateway_token") != "" +} + +func gitLabGatewayProvider(auth *cliproxyauth.Auth, requestedModel string) string { + modelName := strings.TrimSpace(gitLabResolvedModel(auth, requestedModel)) + if provider := inferGitLabProviderFromModel(modelName); provider != "" { + return provider + } + if auth == nil || auth.Metadata == nil { + return "" + } + provider := strings.ToLower(gitLabMetadataString(auth.Metadata, "model_provider")) + if provider == "" { + provider = inferGitLabProviderFromModel(gitLabMetadataString(auth.Metadata, "model_name")) + } + return provider +} + +func inferGitLabProviderFromModel(model string) string { + model = strings.ToLower(strings.TrimSpace(model)) + switch { + case strings.Contains(model, "claude"): + return "anthropic" + case strings.Contains(model, "gpt"), strings.Contains(model, "o1"), strings.Contains(model, "o3"), strings.Contains(model, "o4"): + return "openai" + default: + return "" + } +} + +func gitLabAnthropicGatewayBaseURL(auth *cliproxyauth.Auth) string { + raw := strings.TrimSpace(gitLabMetadataString(auth.Metadata, "duo_gateway_base_url")) 
+ if raw == "" { + return "" + } + base, err := url.Parse(raw) + if err != nil { + return strings.TrimRight(raw, "/") + } + path := strings.TrimRight(base.EscapedPath(), "/") + switch { + case strings.HasSuffix(path, "/ai/v1/proxy/anthropic"), strings.HasSuffix(path, "/v1/proxy/anthropic"): + return strings.TrimRight(base.String(), "/") + case path == "/ai": + base.Path = "/ai/v1/proxy/anthropic" + case path != "": + base.Path = strings.TrimRight(path, "/") + "/v1/proxy/anthropic" + case strings.Contains(strings.ToLower(base.Host), "gitlab.com"): + base.Path = "/ai/v1/proxy/anthropic" + default: + base.Path = "/v1/proxy/anthropic" + } + return strings.TrimRight(base.String(), "/") +} + +func gitLabOpenAIGatewayBaseURL(auth *cliproxyauth.Auth) string { + raw := strings.TrimSpace(gitLabMetadataString(auth.Metadata, "duo_gateway_base_url")) + if raw == "" { + return "" + } + base, err := url.Parse(raw) + if err != nil { + return strings.TrimRight(raw, "/") + } + path := strings.TrimRight(base.EscapedPath(), "/") + switch { + case strings.HasSuffix(path, "/ai/v1/proxy/openai/v1"), strings.HasSuffix(path, "/v1/proxy/openai/v1"): + return strings.TrimRight(base.String(), "/") + case path == "/ai": + base.Path = "/ai/v1/proxy/openai/v1" + case path != "": + base.Path = strings.TrimRight(path, "/") + "/v1/proxy/openai/v1" + case strings.Contains(strings.ToLower(base.Host), "gitlab.com"): + base.Path = "/ai/v1/proxy/openai/v1" + default: + base.Path = "/v1/proxy/openai/v1" + } + return strings.TrimRight(base.String(), "/") +} + +func gitLabPrimaryToken(auth *cliproxyauth.Auth) string { + if auth == nil || auth.Metadata == nil { + return "" + } + if token := gitLabMetadataString(auth.Metadata, "access_token"); token != "" { + return token + } + return gitLabMetadataString(auth.Metadata, "personal_access_token") +} + +func gitLabBaseURL(auth *cliproxyauth.Auth) string { + if auth == nil || auth.Metadata == nil { + return "" + } + return 
gitlab.NormalizeBaseURL(gitLabMetadataString(auth.Metadata, "base_url")) +} + +func gitLabResolvedModel(auth *cliproxyauth.Auth, requested string) string { + requested = strings.TrimSpace(thinking.ParseSuffix(requested).ModelName) + if requested != "" && !strings.EqualFold(requested, "gitlab-duo") { + if mapped, ok := gitLabModelAliases[strings.ToLower(requested)]; ok && strings.TrimSpace(mapped) != "" { + return mapped + } + return requested + } + if auth != nil && auth.Metadata != nil { + for _, model := range gitlab.ExtractDiscoveredModels(auth.Metadata) { + if name := strings.TrimSpace(model.ModelName); name != "" { + return name + } + } + } + if requested != "" { + return requested + } + return "gitlab-duo" +} + +func gitLabMetadataString(metadata map[string]any, keys ...string) string { + for _, key := range keys { + if metadata == nil { + return "" + } + if value, ok := metadata[key].(string); ok { + if trimmed := strings.TrimSpace(value); trimmed != "" { + return trimmed + } + } + } + return "" +} + +func gitLabOAuthTokenNeedsRefresh(metadata map[string]any) bool { + expiry := gitLabMetadataString(metadata, "oauth_expires_at") + if expiry == "" { + return true + } + ts, err := time.Parse(time.RFC3339, expiry) + if err != nil { + return true + } + return time.Until(ts) <= 5*time.Minute +} + +func applyGitLabTokenMetadata(metadata map[string]any, tokenResp *gitlab.TokenResponse) { + if metadata == nil || tokenResp == nil { + return + } + if accessToken := strings.TrimSpace(tokenResp.AccessToken); accessToken != "" { + metadata["access_token"] = accessToken + } + if refreshToken := strings.TrimSpace(tokenResp.RefreshToken); refreshToken != "" { + metadata["refresh_token"] = refreshToken + } + if tokenType := strings.TrimSpace(tokenResp.TokenType); tokenType != "" { + metadata["token_type"] = tokenType + } + if scope := strings.TrimSpace(tokenResp.Scope); scope != "" { + metadata["scope"] = scope + } + if expiry := gitlab.TokenExpiry(time.Now(), tokenResp); 
!expiry.IsZero() { + metadata["oauth_expires_at"] = expiry.Format(time.RFC3339) + } +} + +func mergeGitLabDirectAccessMetadata(metadata map[string]any, direct *gitlab.DirectAccessResponse) { + if metadata == nil || direct == nil { + return + } + if base := strings.TrimSpace(direct.BaseURL); base != "" { + metadata["duo_gateway_base_url"] = base + } + if token := strings.TrimSpace(direct.Token); token != "" { + metadata["duo_gateway_token"] = token + } + if direct.ExpiresAt > 0 { + expiry := time.Unix(direct.ExpiresAt, 0).UTC() + metadata["duo_gateway_expires_at"] = expiry.Format(time.RFC3339) + if ttl := expiry.Sub(time.Now().UTC()); ttl > 0 { + interval := int(ttl.Seconds()) / 2 + switch { + case interval < 60: + interval = 60 + case interval > 240: + interval = 240 + } + metadata["refresh_interval_seconds"] = interval + } + } + if len(direct.Headers) > 0 { + headers := make(map[string]string, len(direct.Headers)) + for key, value := range direct.Headers { + key = strings.TrimSpace(key) + value = strings.TrimSpace(value) + if key == "" || value == "" { + continue + } + headers[key] = value + } + if len(headers) > 0 { + metadata["duo_gateway_headers"] = headers + } + } + if direct.ModelDetails != nil { + modelDetails := map[string]any{} + if provider := strings.TrimSpace(direct.ModelDetails.ModelProvider); provider != "" { + modelDetails["model_provider"] = provider + metadata["model_provider"] = provider + } + if model := strings.TrimSpace(direct.ModelDetails.ModelName); model != "" { + modelDetails["model_name"] = model + metadata["model_name"] = model + } + if len(modelDetails) > 0 { + metadata["model_details"] = modelDetails + } + } +} + +func gitLabAuthKind(method string) string { + switch strings.ToLower(strings.TrimSpace(method)) { + case gitLabAuthMethodPAT: + return "personal_access_token" + default: + return "oauth" + } +} + +func GitLabModelsFromAuth(auth *cliproxyauth.Auth) []*registry.ModelInfo { + models := make([]*registry.ModelInfo, 0, 
len(gitLabAgenticCatalog)+4) + seen := make(map[string]struct{}, len(gitLabAgenticCatalog)+4) + addModel := func(id, displayName, provider string) { + id = strings.TrimSpace(id) + if id == "" { + return + } + key := strings.ToLower(id) + if _, ok := seen[key]; ok { + return + } + seen[key] = struct{}{} + models = append(models, &registry.ModelInfo{ + ID: id, + Object: "model", + Created: time.Now().Unix(), + OwnedBy: "gitlab", + Type: "gitlab", + DisplayName: displayName, + Description: provider, + UserDefined: true, + }) + } + + addModel("gitlab-duo", "GitLab Duo", "gitlab") + for _, model := range gitLabAgenticCatalog { + addModel(model.ID, model.DisplayName, model.Provider) + } + for alias, upstream := range gitLabModelAliases { + target := strings.TrimSpace(upstream) + displayName := "GitLab Duo Alias" + provider := strings.TrimSpace(inferGitLabProviderFromModel(target)) + if provider != "" { + displayName = fmt.Sprintf("GitLab Duo Alias (%s)", provider) + } + addModel(alias, displayName, provider) + } + if auth == nil { + return models + } + for _, model := range gitlab.ExtractDiscoveredModels(auth.Metadata) { + name := strings.TrimSpace(model.ModelName) + if name == "" { + continue + } + displayName := "GitLab Duo" + if provider := strings.TrimSpace(model.ModelProvider); provider != "" { + displayName = fmt.Sprintf("GitLab Duo (%s)", provider) + } + addModel(name, displayName, strings.TrimSpace(model.ModelProvider)) + } + return models +} diff --git a/internal/runtime/executor/gitlab_executor_test.go b/internal/runtime/executor/gitlab_executor_test.go new file mode 100644 index 0000000000..625275cfc9 --- /dev/null +++ b/internal/runtime/executor/gitlab_executor_test.go @@ -0,0 +1,539 @@ +package executor + +import ( + "context" + "encoding/json" + "io" + "net/http" + "net/http/httptest" + "strings" + "testing" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator" + cliproxyauth
"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + "github.com/tidwall/gjson" +) + +func TestGitLabExecutorExecuteUsesChatEndpoint(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.URL.Path != gitLabChatEndpoint { + t.Fatalf("unexpected path %q", r.URL.Path) + } + _, _ = w.Write([]byte(`"chat response"`)) + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: map[string]any{ + "base_url": srv.URL, + "access_token": "oauth-access", + "model_name": "claude-sonnet-4-5", + }, + } + req := cliproxyexecutor.Request{ + Model: "gitlab-duo", + Payload: []byte(`{"model":"gitlab-duo","messages":[{"role":"user","content":"hello"}]}`), + } + + resp, err := exec.Execute(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("Execute() error = %v", err) + } + if got := gjson.GetBytes(resp.Payload, "choices.0.message.content").String(); got != "chat response" { + t.Fatalf("expected chat response, got %q", got) + } + if got := gjson.GetBytes(resp.Payload, "model").String(); got != "claude-sonnet-4-5" { + t.Fatalf("expected resolved model, got %q", got) + } +} + +func TestGitLabExecutorExecuteFallsBackToCodeSuggestions(t *testing.T) { + chatCalls := 0 + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + switch r.URL.Path { + case gitLabChatEndpoint: + chatCalls++ + http.Error(w, "feature unavailable", http.StatusForbidden) + case gitLabCodeSuggestionsEndpoint: + _ = json.NewEncoder(w).Encode(map[string]any{ + "choices": []map[string]any{{ + "text": "fallback response", + }}, + }) + default: + t.Fatalf("unexpected path %q", r.URL.Path) + } + })) 
+ defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: map[string]any{ + "base_url": srv.URL, + "personal_access_token": "glpat-token", + "auth_method": "pat", + }, + } + req := cliproxyexecutor.Request{ + Model: "gitlab-duo", + Payload: []byte(`{"model":"gitlab-duo","messages":[{"role":"user","content":"write code"}]}`), + } + + resp, err := exec.Execute(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("Execute() error = %v", err) + } + if chatCalls != 1 { + t.Fatalf("expected chat endpoint to be tried once, got %d", chatCalls) + } + if got := gjson.GetBytes(resp.Payload, "choices.0.message.content").String(); got != "fallback response" { + t.Fatalf("expected fallback response, got %q", got) + } +} + +func TestGitLabExecutorExecuteUsesAnthropicGateway(t *testing.T) { + var gotAuthHeader, gotRealmHeader string + var gotPath string + var gotModel string + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + gotPath = r.URL.Path + gotAuthHeader = r.Header.Get("Authorization") + gotRealmHeader = r.Header.Get("X-Gitlab-Realm") + gotModel = gjson.GetBytes(readBody(t, r), "model").String() + w.Header().Set("Content-Type", "application/json") + _, _ = w.Write([]byte(`{"id":"msg_1","type":"message","role":"assistant","model":"claude-sonnet-4-5","content":[{"type":"tool_use","id":"toolu_1","name":"Bash","input":{"cmd":"ls"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":11,"output_tokens":4}}`)) + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: map[string]any{ + "duo_gateway_base_url": srv.URL, + "duo_gateway_token": "gateway-token", + "duo_gateway_headers": map[string]string{"X-Gitlab-Realm": "saas"}, + "model_provider": "anthropic", + "model_name": 
"claude-sonnet-4-5", + }, + } + req := cliproxyexecutor.Request{ + Model: "gitlab-duo", + Payload: []byte(`{ + "model":"gitlab-duo", + "messages":[{"role":"user","content":[{"type":"text","text":"list files"}]}], + "tools":[{"name":"Bash","description":"run bash","input_schema":{"type":"object","properties":{"cmd":{"type":"string"}},"required":["cmd"]}}], + "max_tokens":128 + }`), + } + + resp, err := exec.Execute(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("claude"), + }) + if err != nil { + t.Fatalf("Execute() error = %v", err) + } + if gotPath != "/v1/proxy/anthropic/v1/messages" { + t.Fatalf("Path = %q, want %q", gotPath, "/v1/proxy/anthropic/v1/messages") + } + if gotAuthHeader != "Bearer gateway-token" { + t.Fatalf("Authorization = %q, want Bearer gateway-token", gotAuthHeader) + } + if gotRealmHeader != "saas" { + t.Fatalf("X-Gitlab-Realm = %q, want saas", gotRealmHeader) + } + if gotModel != "claude-sonnet-4-5" { + t.Fatalf("model = %q, want claude-sonnet-4-5", gotModel) + } + if got := gjson.GetBytes(resp.Payload, "content.0.type").String(); got != "tool_use" { + t.Fatalf("expected tool_use response, got %q", got) + } + if got := gjson.GetBytes(resp.Payload, "content.0.name").String(); got != "Bash" { + t.Fatalf("expected tool name Bash, got %q", got) + } +} + +func TestGitLabExecutorExecuteUsesOpenAIGateway(t *testing.T) { + var gotAuthHeader, gotRealmHeader string + var gotPath string + var gotModel string + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + gotPath = r.URL.Path + gotAuthHeader = r.Header.Get("Authorization") + gotRealmHeader = r.Header.Get("X-Gitlab-Realm") + gotModel = gjson.GetBytes(readBody(t, r), "model").String() + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("data: {\"type\":\"response.created\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"gpt-5-codex\"}}\n\n")) + _, _ = 
w.Write([]byte("data: {\"type\":\"response.output_text.delta\",\"delta\":\"hello from openai gateway\"}\n\n")) + _, _ = w.Write([]byte("data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"gpt-5-codex\",\"output\":[{\"type\":\"message\",\"id\":\"msg_1\",\"role\":\"assistant\",\"content\":[{\"type\":\"output_text\",\"text\":\"hello from openai gateway\"}]}],\"usage\":{\"input_tokens\":11,\"output_tokens\":4,\"total_tokens\":15}}}\n\n")) + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: map[string]any{ + "duo_gateway_base_url": srv.URL, + "duo_gateway_token": "gateway-token", + "duo_gateway_headers": map[string]string{"X-Gitlab-Realm": "saas"}, + "model_provider": "openai", + "model_name": "gpt-5-codex", + }, + } + req := cliproxyexecutor.Request{ + Model: "gitlab-duo", + Payload: []byte(`{"model":"gitlab-duo","messages":[{"role":"user","content":"hello"}]}`), + } + + resp, err := exec.Execute(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("Execute() error = %v", err) + } + if gotPath != "/v1/proxy/openai/v1/responses" { + t.Fatalf("Path = %q, want %q", gotPath, "/v1/proxy/openai/v1/responses") + } + if gotAuthHeader != "Bearer gateway-token" { + t.Fatalf("Authorization = %q, want Bearer gateway-token", gotAuthHeader) + } + if gotRealmHeader != "saas" { + t.Fatalf("X-Gitlab-Realm = %q, want saas", gotRealmHeader) + } + if gotModel != "gpt-5-codex" { + t.Fatalf("model = %q, want gpt-5-codex", gotModel) + } + if got := gjson.GetBytes(resp.Payload, "choices.0.message.content").String(); got != "hello from openai gateway" { + t.Fatalf("expected openai gateway response, got %q payload=%s", got, string(resp.Payload)) + } +} + +func TestGitLabExecutorExecuteUsesRequestedModelToSelectOpenAIGateway(t *testing.T) { + var gotAuthHeader, 
gotRealmHeader, gotBetaHeader, gotUserAgent string + var gotPath string + var gotModel string + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + gotPath = r.URL.Path + gotAuthHeader = r.Header.Get("Authorization") + gotRealmHeader = r.Header.Get("X-Gitlab-Realm") + gotBetaHeader = r.Header.Get("anthropic-beta") + gotUserAgent = r.Header.Get("User-Agent") + gotModel = gjson.GetBytes(readBody(t, r), "model").String() + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("data: {\"type\":\"response.created\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"duo-chat-gpt-5-codex\"}}\n\n")) + _, _ = w.Write([]byte("data: {\"type\":\"response.output_text.delta\",\"delta\":\"hello from explicit openai model\"}\n\n")) + _, _ = w.Write([]byte("data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"duo-chat-gpt-5-codex\",\"output\":[{\"type\":\"message\",\"id\":\"msg_1\",\"role\":\"assistant\",\"content\":[{\"type\":\"output_text\",\"text\":\"hello from explicit openai model\"}]}],\"usage\":{\"input_tokens\":11,\"output_tokens\":4,\"total_tokens\":15}}}\n\n")) + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: map[string]any{ + "duo_gateway_base_url": srv.URL, + "duo_gateway_token": "gateway-token", + "duo_gateway_headers": map[string]string{"X-Gitlab-Realm": "saas"}, + "model_provider": "anthropic", + "model_name": "claude-sonnet-4-5", + }, + } + req := cliproxyexecutor.Request{ + Model: "duo-chat-gpt-5-codex", + Payload: []byte(`{"model":"duo-chat-gpt-5-codex","messages":[{"role":"user","content":"hello"}]}`), + } + + resp, err := exec.Execute(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("Execute() error = %v", err) + } + if gotPath != 
"/v1/proxy/openai/v1/responses" { + t.Fatalf("Path = %q, want %q", gotPath, "/v1/proxy/openai/v1/responses") + } + if gotAuthHeader != "Bearer gateway-token" { + t.Fatalf("Authorization = %q, want Bearer gateway-token", gotAuthHeader) + } + if gotRealmHeader != "saas" { + t.Fatalf("X-Gitlab-Realm = %q, want saas", gotRealmHeader) + } + if gotBetaHeader != gitLabContext1MBeta { + t.Fatalf("anthropic-beta = %q, want %q", gotBetaHeader, gitLabContext1MBeta) + } + if gotUserAgent != gitLabNativeUserAgent { + t.Fatalf("User-Agent = %q, want %q", gotUserAgent, gitLabNativeUserAgent) + } + if gotModel != "duo-chat-gpt-5-codex" { + t.Fatalf("model = %q, want duo-chat-gpt-5-codex", gotModel) + } + if got := gjson.GetBytes(resp.Payload, "choices.0.message.content").String(); got != "hello from explicit openai model" { + t.Fatalf("expected explicit openai model response, got %q payload=%s", got, string(resp.Payload)) + } +} + +func TestGitLabExecutorRefreshUpdatesMetadata(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + switch r.URL.Path { + case "/oauth/token": + _ = json.NewEncoder(w).Encode(map[string]any{ + "access_token": "oauth-refreshed", + "refresh_token": "oauth-refresh", + "token_type": "Bearer", + "scope": "api read_user", + "created_at": 1710000000, + "expires_in": 3600, + }) + case "/api/v4/code_suggestions/direct_access": + _ = json.NewEncoder(w).Encode(map[string]any{ + "base_url": "https://cloud.gitlab.example.com", + "token": "gateway-token", + "expires_at": 1710003600, + "headers": map[string]string{"X-Gitlab-Realm": "saas"}, + "model_details": map[string]any{ + "model_provider": "anthropic", + "model_name": "claude-sonnet-4-5", + }, + }) + default: + t.Fatalf("unexpected path %q", r.URL.Path) + } + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + ID: "gitlab-auth.json", + Provider: "gitlab", + Metadata: map[string]any{ + "base_url": srv.URL, + 
"access_token": "oauth-access", + "refresh_token": "oauth-refresh", + "oauth_client_id": "client-id", + "auth_method": "oauth", + "oauth_expires_at": "2000-01-01T00:00:00Z", + }, + } + + updated, err := exec.Refresh(context.Background(), auth) + if err != nil { + t.Fatalf("Refresh() error = %v", err) + } + if got := updated.Metadata["access_token"]; got != "oauth-refreshed" { + t.Fatalf("expected refreshed access token, got %#v", got) + } + if got := updated.Metadata["model_name"]; got != "claude-sonnet-4-5" { + t.Fatalf("expected refreshed model metadata, got %#v", got) + } +} + +func TestGitLabExecutorExecuteStreamUsesCodeSuggestionsSSE(t *testing.T) { + var gotAccept, gotStreamingHeader, gotEncoding string + var gotStreamFlag bool + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.URL.Path != gitLabCodeSuggestionsEndpoint { + t.Fatalf("unexpected path %q", r.URL.Path) + } + gotAccept = r.Header.Get("Accept") + gotStreamingHeader = r.Header.Get(gitLabSSEStreamingHeader) + gotEncoding = r.Header.Get("Accept-Encoding") + gotStreamFlag = gjson.GetBytes(readBody(t, r), "stream").Bool() + + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("event: stream_start\n")) + _, _ = w.Write([]byte("data: {\"model\":{\"name\":\"claude-sonnet-4-5\"}}\n\n")) + _, _ = w.Write([]byte("event: content_chunk\n")) + _, _ = w.Write([]byte("data: {\"content\":\"hello\"}\n\n")) + _, _ = w.Write([]byte("event: content_chunk\n")) + _, _ = w.Write([]byte("data: {\"content\":\" world\"}\n\n")) + _, _ = w.Write([]byte("event: stream_end\n")) + _, _ = w.Write([]byte("data: {}\n\n")) + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: map[string]any{ + "base_url": srv.URL, + "access_token": "oauth-access", + "model_name": "claude-sonnet-4-5", + }, + } + req := cliproxyexecutor.Request{ + Model: "gitlab-duo", + Payload: 
[]byte(`{"model":"gitlab-duo","stream":true,"messages":[{"role":"user","content":"hello"}]}`), + } + + result, err := exec.ExecuteStream(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("ExecuteStream() error = %v", err) + } + + lines := collectStreamLines(t, result) + if gotAccept != "text/event-stream" { + t.Fatalf("Accept = %q, want text/event-stream", gotAccept) + } + if gotStreamingHeader != "true" { + t.Fatalf("%s = %q, want true", gitLabSSEStreamingHeader, gotStreamingHeader) + } + if gotEncoding != "identity" { + t.Fatalf("Accept-Encoding = %q, want identity", gotEncoding) + } + if !gotStreamFlag { + t.Fatalf("expected upstream request to set stream=true") + } + if len(lines) < 4 { + t.Fatalf("expected translated stream chunks, got %d", len(lines)) + } + if !strings.Contains(strings.Join(lines, "\n"), `"content":"hello"`) { + t.Fatalf("expected hello delta in stream, got %q", strings.Join(lines, "\n")) + } + if !strings.Contains(strings.Join(lines, "\n"), `"content":" world"`) { + t.Fatalf("expected world delta in stream, got %q", strings.Join(lines, "\n")) + } + last := lines[len(lines)-1] + if last != "data: [DONE]" && !strings.Contains(last, `"finish_reason":"stop"`) { + t.Fatalf("expected stream terminator, got %q", last) + } +} + +func TestGitLabExecutorExecuteStreamFallsBackToSyntheticChat(t *testing.T) { + chatCalls := 0 + streamCalls := 0 + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + switch r.URL.Path { + case gitLabCodeSuggestionsEndpoint: + streamCalls++ + http.Error(w, "feature unavailable", http.StatusForbidden) + case gitLabChatEndpoint: + chatCalls++ + _, _ = w.Write([]byte(`"chat fallback response"`)) + default: + t.Fatalf("unexpected path %q", r.URL.Path) + } + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: 
map[string]any{ + "base_url": srv.URL, + "access_token": "oauth-access", + "model_name": "claude-sonnet-4-5", + }, + } + req := cliproxyexecutor.Request{ + Model: "gitlab-duo", + Payload: []byte(`{"model":"gitlab-duo","stream":true,"messages":[{"role":"user","content":"hello"}]}`), + } + + result, err := exec.ExecuteStream(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + }) + if err != nil { + t.Fatalf("ExecuteStream() error = %v", err) + } + + lines := collectStreamLines(t, result) + if streamCalls != 1 { + t.Fatalf("expected streaming endpoint once, got %d", streamCalls) + } + if chatCalls != 1 { + t.Fatalf("expected chat fallback once, got %d", chatCalls) + } + if !strings.Contains(strings.Join(lines, "\n"), `"content":"chat fallback response"`) { + t.Fatalf("expected fallback content in stream, got %q", strings.Join(lines, "\n")) + } +} + +func TestGitLabExecutorExecuteStreamUsesAnthropicGateway(t *testing.T) { + var gotPath, gotBetaHeader, gotUserAgent string + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + gotPath = r.URL.Path + gotBetaHeader = r.Header.Get("Anthropic-Beta") + gotUserAgent = r.Header.Get("User-Agent") + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("event: message_start\n")) + _, _ = w.Write([]byte("data: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_1\",\"type\":\"message\",\"role\":\"assistant\",\"model\":\"claude-sonnet-4-5\",\"content\":[],\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"input_tokens\":0,\"output_tokens\":0}}}\n\n")) + _, _ = w.Write([]byte("event: content_block_start\n")) + _, _ = w.Write([]byte("data: {\"type\":\"content_block_start\",\"index\":0,\"content_block\":{\"type\":\"text\",\"text\":\"\"}}\n\n")) + _, _ = w.Write([]byte("event: content_block_delta\n")) + _, _ = w.Write([]byte("data: 
{\"type\":\"content_block_delta\",\"index\":0,\"delta\":{\"type\":\"text_delta\",\"text\":\"hello from gateway\"}}\n\n")) + _, _ = w.Write([]byte("event: message_delta\n")) + _, _ = w.Write([]byte("data: {\"type\":\"message_delta\",\"delta\":{\"stop_reason\":\"end_turn\",\"stop_sequence\":null},\"usage\":{\"input_tokens\":10,\"output_tokens\":3}}\n\n")) + _, _ = w.Write([]byte("event: message_stop\n")) + _, _ = w.Write([]byte("data: {\"type\":\"message_stop\"}\n\n")) + })) + defer srv.Close() + + exec := NewGitLabExecutor(&config.Config{}) + auth := &cliproxyauth.Auth{ + Provider: "gitlab", + Metadata: map[string]any{ + "duo_gateway_base_url": srv.URL, + "duo_gateway_token": "gateway-token", + "duo_gateway_headers": map[string]string{"X-Gitlab-Realm": "saas"}, + "model_provider": "anthropic", + "model_name": "claude-sonnet-4-5", + }, + } + req := cliproxyexecutor.Request{ + Model: "gitlab-duo", + Payload: []byte(`{"model":"gitlab-duo","messages":[{"role":"user","content":[{"type":"text","text":"hello"}]}],"max_tokens":64}`), + } + + result, err := exec.ExecuteStream(context.Background(), auth, req, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("claude"), + }) + if err != nil { + t.Fatalf("ExecuteStream() error = %v", err) + } + + lines := collectStreamLines(t, result) + if gotPath != "/v1/proxy/anthropic/v1/messages" { + t.Fatalf("Path = %q, want %q", gotPath, "/v1/proxy/anthropic/v1/messages") + } + if !strings.Contains(gotBetaHeader, gitLabContext1MBeta) { + t.Fatalf("Anthropic-Beta = %q, want to contain %q", gotBetaHeader, gitLabContext1MBeta) + } + if gotUserAgent != gitLabNativeUserAgent { + t.Fatalf("User-Agent = %q, want %q", gotUserAgent, gitLabNativeUserAgent) + } + if !strings.Contains(strings.Join(lines, "\n"), "hello from gateway") { + t.Fatalf("expected anthropic gateway stream, got %q", strings.Join(lines, "\n")) + } +} + +func collectStreamLines(t *testing.T, result *cliproxyexecutor.StreamResult) []string { + t.Helper() + lines 
:= make([]string, 0, 8) + for chunk := range result.Chunks { + if chunk.Err != nil { + t.Fatalf("unexpected stream error: %v", chunk.Err) + } + lines = append(lines, string(chunk.Payload)) + } + return lines +} + +func readBody(t *testing.T, r *http.Request) []byte { + t.Helper() + defer func() { _ = r.Body.Close() }() + body, err := io.ReadAll(r.Body) + if err != nil { + t.Fatalf("ReadAll() error = %v", err) + } + return body +} diff --git a/internal/runtime/executor/helps/claude_device_profile.go b/internal/runtime/executor/helps/claude_device_profile.go index 154901b53b..09f04929fe 100644 --- a/internal/runtime/executor/helps/claude_device_profile.go +++ b/internal/runtime/executor/helps/claude_device_profile.go @@ -11,8 +11,8 @@ import ( "sync" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) const ( diff --git a/internal/runtime/executor/helps/home_refresh.go b/internal/runtime/executor/helps/home_refresh.go new file mode 100644 index 0000000000..e52fdd2435 --- /dev/null +++ b/internal/runtime/executor/helps/home_refresh.go @@ -0,0 +1,91 @@ +package helps + +import ( + "context" + "encoding/json" + "fmt" + "net/http" + "strings" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/home" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +type homeStatusErr struct { + code int + msg string +} + +func (e homeStatusErr) Error() string { + if e.msg != "" { + return e.msg + } + return fmt.Sprintf("status %d", e.code) +} + +func (e homeStatusErr) StatusCode() int { return e.code } + +type homeErrorEnvelope struct { + Error *homeErrorDetail `json:"error"` +} + +type homeErrorDetail struct { + Type string `json:"type"` + Message string `json:"message"` + 
Code string `json:"code,omitempty"` +} + +// RefreshAuthViaHome replaces local refresh logic when home control plane integration is enabled. +// It returns (updatedAuth, true, nil) when home refresh succeeds; (nil, true, err) when home is +// enabled but refresh fails; and (nil, false, nil) when home is disabled. +func RefreshAuthViaHome(ctx context.Context, cfg *config.Config, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, bool, error) { + if cfg == nil || !cfg.Home.Enabled { + return nil, false, nil + } + if ctx == nil { + ctx = context.Background() + } + if auth == nil { + return nil, true, homeStatusErr{code: http.StatusInternalServerError, msg: "home refresh: auth is nil"} + } + + client := home.Current() + if client == nil || !client.HeartbeatOK() { + return nil, true, homeStatusErr{code: http.StatusServiceUnavailable, msg: "home control center unavailable"} + } + + authIndex := strings.TrimSpace(auth.Index) + if authIndex == "" { + authIndex = strings.TrimSpace(auth.EnsureIndex()) + } + if authIndex == "" { + return nil, true, homeStatusErr{code: http.StatusBadGateway, msg: "home refresh: auth_index is empty"} + } + + raw, err := client.GetRefreshAuth(ctx, authIndex) + if err != nil { + return nil, true, homeStatusErr{code: http.StatusBadGateway, msg: err.Error()} + } + + var env homeErrorEnvelope + if errUnmarshal := json.Unmarshal(raw, &env); errUnmarshal == nil && env.Error != nil { + code := strings.TrimSpace(env.Error.Type) + if code == "" { + code = strings.TrimSpace(env.Error.Code) + } + msg := strings.TrimSpace(env.Error.Message) + if msg == "" { + msg = "home returned error" + } + return nil, true, homeStatusErr{code: http.StatusBadGateway, msg: msg} + } + + var updated cliproxyauth.Auth + if errUnmarshal := json.Unmarshal(raw, &updated); errUnmarshal != nil { + return nil, true, homeStatusErr{code: http.StatusBadGateway, msg: "home returned invalid auth payload"} + } + updated.Index = authIndex + updated.EnsureIndex() + return &updated, true, nil +} 
diff --git a/internal/runtime/executor/helps/logging_helpers.go b/internal/runtime/executor/helps/logging_helpers.go index 767c882016..fa7143347e 100644 --- a/internal/runtime/executor/helps/logging_helpers.go +++ b/internal/runtime/executor/helps/logging_helpers.go @@ -12,9 +12,9 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" ) @@ -24,6 +24,7 @@ const ( apiRequestKey = "API_REQUEST" apiResponseKey = "API_RESPONSE" apiWebsocketTimelineKey = "API_WEBSOCKET_TIMELINE" + creditsUsedKey = "__antigravity_credits_used__" ) // UpstreamRequestLog captures the outbound upstream request details for logging. @@ -568,3 +569,24 @@ func LogWithRequestID(ctx context.Context) *log.Entry { } return log.WithField("request_id", requestID) } + +// MarkCreditsUsed flags the request as having used AI credits for billing. +func MarkCreditsUsed(ctx context.Context) { + ginCtx := ginContextFrom(ctx) + if ginCtx != nil { + ginCtx.Set(creditsUsedKey, true) + } +} + +// CreditsUsed returns true if the request used AI credits. 
+func CreditsUsed(ctx context.Context) bool { + ginCtx := ginContextFrom(ctx) + if ginCtx != nil { + if val, exists := ginCtx.Get(creditsUsedKey); exists { + if b, ok := val.(bool); ok { + return b + } + } + } + return false +} diff --git a/internal/runtime/executor/helps/payload_helpers.go b/internal/runtime/executor/helps/payload_helpers.go index 73514c2dd1..af69a488c3 100644 --- a/internal/runtime/executor/helps/payload_helpers.go +++ b/internal/runtime/executor/helps/payload_helpers.go @@ -4,9 +4,9 @@ import ( "encoding/json" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -16,139 +16,167 @@ import ( // and restricts matches to the given protocol when supplied. Defaults are checked // against the original payload when provided. requestedModel carries the client-visible // model name before alias resolution so payload rules can target aliases precisely. -func ApplyPayloadConfigWithRoot(cfg *config.Config, model, protocol, root string, payload, original []byte, requestedModel string) []byte { +// requestPath is the inbound HTTP request path (when available) used for endpoint-scoped gates. 
+func ApplyPayloadConfigWithRoot(cfg *config.Config, model, protocol, root string, payload, original []byte, requestedModel string, requestPath string) []byte { if cfg == nil || len(payload) == 0 { return payload } - rules := cfg.Payload - if len(rules.Default) == 0 && len(rules.DefaultRaw) == 0 && len(rules.Override) == 0 && len(rules.OverrideRaw) == 0 && len(rules.Filter) == 0 { - return payload - } - model = strings.TrimSpace(model) - requestedModel = strings.TrimSpace(requestedModel) - if model == "" && requestedModel == "" { - return payload - } - candidates := payloadModelCandidates(model, requestedModel) out := payload - source := original - if len(source) == 0 { - source = payload - } - appliedDefaults := make(map[string]struct{}) - // Apply default rules: first write wins per field across all matching rules. - for i := range rules.Default { - rule := &rules.Default[i] - if !payloadModelRulesMatch(rule.Models, protocol, candidates) { - continue - } - for path, value := range rule.Params { - fullPath := buildPayloadPath(root, path) - if fullPath == "" { - continue - } - if gjson.GetBytes(source, fullPath).Exists() { - continue - } - if _, ok := appliedDefaults[fullPath]; ok { - continue - } - updated, errSet := sjson.SetBytes(out, fullPath, value) - if errSet != nil { - continue - } - out = updated - appliedDefaults[fullPath] = struct{}{} + + // Apply disable-image-generation filtering before payload rules so config payload + // overrides can explicitly re-enable image_generation when desired. + if cfg.DisableImageGeneration != config.DisableImageGenerationOff { + if cfg.DisableImageGeneration != config.DisableImageGenerationChat || !isImagesEndpointRequestPath(requestPath) { + out = removeToolTypeFromPayloadWithRoot(out, root, "image_generation") + out = removeToolChoiceFromPayloadWithRoot(out, root, "image_generation") } } - // Apply default raw rules: first write wins per field across all matching rules. 
- for i := range rules.DefaultRaw { - rule := &rules.DefaultRaw[i] - if !payloadModelRulesMatch(rule.Models, protocol, candidates) { - continue - } - for path, value := range rule.Params { - fullPath := buildPayloadPath(root, path) - if fullPath == "" { - continue + + rules := cfg.Payload + hasPayloadRules := len(rules.Default) != 0 || len(rules.DefaultRaw) != 0 || len(rules.Override) != 0 || len(rules.OverrideRaw) != 0 || len(rules.Filter) != 0 + if hasPayloadRules { + model = strings.TrimSpace(model) + requestedModel = strings.TrimSpace(requestedModel) + if model != "" || requestedModel != "" { + candidates := payloadModelCandidates(model, requestedModel) + source := original + if len(source) == 0 { + source = payload } - if gjson.GetBytes(source, fullPath).Exists() { - continue + appliedDefaults := make(map[string]struct{}) + // Apply default rules: first write wins per field across all matching rules. + for i := range rules.Default { + rule := &rules.Default[i] + if !payloadModelRulesMatch(rule.Models, protocol, candidates) { + continue + } + for path, value := range rule.Params { + fullPath := buildPayloadPath(root, path) + if fullPath == "" { + continue + } + if gjson.GetBytes(source, fullPath).Exists() { + continue + } + if _, ok := appliedDefaults[fullPath]; ok { + continue + } + updated, errSet := sjson.SetBytes(out, fullPath, value) + if errSet != nil { + continue + } + out = updated + appliedDefaults[fullPath] = struct{}{} + } } - if _, ok := appliedDefaults[fullPath]; ok { - continue + // Apply default raw rules: first write wins per field across all matching rules. 
+			for i := range rules.DefaultRaw {
+				rule := &rules.DefaultRaw[i]
+				if !payloadModelRulesMatch(rule.Models, protocol, candidates) {
+					continue
+				}
+				for path, value := range rule.Params {
+					fullPath := buildPayloadPath(root, path)
+					if fullPath == "" {
+						continue
+					}
+					if gjson.GetBytes(source, fullPath).Exists() {
+						continue
+					}
+					if _, ok := appliedDefaults[fullPath]; ok {
+						continue
+					}
+					rawValue, ok := payloadRawValue(value)
+					if !ok {
+						continue
+					}
+					updated, errSet := sjson.SetRawBytes(out, fullPath, rawValue)
+					if errSet != nil {
+						continue
+					}
+					out = updated
+					appliedDefaults[fullPath] = struct{}{}
+				}
 			}
-			rawValue, ok := payloadRawValue(value)
-			if !ok {
-				continue
-			}
-			updated, errSet := sjson.SetRawBytes(out, fullPath, rawValue)
-			if errSet != nil {
-				continue
+			// Apply override rules: last write wins per field across all matching rules.
+			for i := range rules.Override {
+				rule := &rules.Override[i]
+				if !payloadModelRulesMatch(rule.Models, protocol, candidates) {
+					continue
+				}
+				for path, value := range rule.Params {
+					fullPath := buildPayloadPath(root, path)
+					if fullPath == "" {
+						continue
+					}
+					updated, errSet := sjson.SetBytes(out, fullPath, value)
+					if errSet != nil {
+						continue
+					}
+					out = updated
+				}
 			}
-			out = updated
-			appliedDefaults[fullPath] = struct{}{}
-		}
-	}
-	// Apply override rules: last write wins per field across all matching rules.
-	for i := range rules.Override {
-		rule := &rules.Override[i]
-		if !payloadModelRulesMatch(rule.Models, protocol, candidates) {
-			continue
-		}
-		for path, value := range rule.Params {
-			fullPath := buildPayloadPath(root, path)
-			if fullPath == "" {
-				continue
+			// Apply override raw rules: last write wins per field across all matching rules.
+			for i := range rules.OverrideRaw {
+				rule := &rules.OverrideRaw[i]
+				if !payloadModelRulesMatch(rule.Models, protocol, candidates) {
+					continue
+				}
+				for path, value := range rule.Params {
+					fullPath := buildPayloadPath(root, path)
+					if fullPath == "" {
+						continue
+					}
+					rawValue, ok := payloadRawValue(value)
+					if !ok {
+						continue
+					}
+					updated, errSet := sjson.SetRawBytes(out, fullPath, rawValue)
+					if errSet != nil {
+						continue
+					}
+					out = updated
+				}
 			}
-			updated, errSet := sjson.SetBytes(out, fullPath, value)
-			if errSet != nil {
-				continue
+			// Apply filter rules: remove matching paths from payload.
+			for i := range rules.Filter {
+				rule := &rules.Filter[i]
+				if !payloadModelRulesMatch(rule.Models, protocol, candidates) {
+					continue
+				}
+				for _, path := range rule.Params {
+					fullPath := buildPayloadPath(root, path)
+					if fullPath == "" {
+						continue
+					}
+					updated, errDel := sjson.DeleteBytes(out, fullPath)
+					if errDel != nil {
+						continue
+					}
+					out = updated
+				}
 			}
-			out = updated
 		}
 	}
-	// Apply override raw rules: last write wins per field across all matching rules.
-	for i := range rules.OverrideRaw {
-		rule := &rules.OverrideRaw[i]
-		if !payloadModelRulesMatch(rule.Models, protocol, candidates) {
-			continue
-		}
-		for path, value := range rule.Params {
-			fullPath := buildPayloadPath(root, path)
-			if fullPath == "" {
-				continue
-			}
-			rawValue, ok := payloadRawValue(value)
-			if !ok {
-				continue
-			}
-			updated, errSet := sjson.SetRawBytes(out, fullPath, rawValue)
-			if errSet != nil {
-				continue
-			}
-			out = updated
-		}
+	return out
+}
+
+func isImagesEndpointRequestPath(path string) bool {
+	path = strings.TrimSpace(path)
+	if path == "" {
+		return false
 	}
-	// Apply filter rules: remove matching paths from payload.
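Aside: the precedence the rule engine above implements is defaults fill only fields the client did not send (first write wins), overrides always replace (last write wins), and filters delete. A stand-alone sketch of that ordering, assuming flat string keys for brevity (`applyRules` is a hypothetical helper; the real code walks gjson/sjson dotted paths over raw JSON bytes):

```go
package main

import "fmt"

// applyRules mirrors the payload rule precedence in toy form:
// defaults never clobber client-sent fields, overrides always win,
// and filters drop fields entirely.
func applyRules(payload, defaults, overrides map[string]any, filters []string) map[string]any {
	out := make(map[string]any, len(payload))
	for k, v := range payload {
		out[k] = v
	}
	// Defaults: only fill fields absent from the client payload.
	for k, v := range defaults {
		if _, ok := payload[k]; !ok {
			out[k] = v
		}
	}
	// Overrides: unconditionally replace.
	for k, v := range overrides {
		out[k] = v
	}
	// Filters: remove matching keys.
	for _, k := range filters {
		delete(out, k)
	}
	return out
}

func main() {
	got := applyRules(
		map[string]any{"temperature": 0.9},
		map[string]any{"temperature": 0.2, "top_p": 0.95}, // temperature ignored: client set it
		map[string]any{"max_tokens": 1024},
		[]string{"seed"},
	)
	fmt.Println(got["temperature"], got["top_p"], got["max_tokens"]) // prints: 0.9 0.95 1024
}
```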
-	for i := range rules.Filter {
-		rule := &rules.Filter[i]
-		if !payloadModelRulesMatch(rule.Models, protocol, candidates) {
-			continue
-		}
-		for _, path := range rule.Params {
-			fullPath := buildPayloadPath(root, path)
-			if fullPath == "" {
-				continue
-			}
-			updated, errDel := sjson.DeleteBytes(out, fullPath)
-			if errDel != nil {
-				continue
-			}
-			out = updated
-		}
+	if path == "/v1/images/generations" || path == "/v1/images/edits" {
+		return true
 	}
-	return out
+	// Be tolerant of prefix routers that may report a longer matched route.
+	if strings.HasSuffix(path, "/v1/images/generations") || strings.HasSuffix(path, "/v1/images/edits") {
+		return true
+	}
+	if strings.HasSuffix(path, "/images/generations") || strings.HasSuffix(path, "/images/edits") {
+		return true
+	}
+	return false
 }
 
 func payloadModelRulesMatch(rules []config.PayloadModelRule, protocol string, models []string) bool {
@@ -226,6 +254,95 @@ func buildPayloadPath(root, path string) string {
 	return r + "." + p
 }
 
+func removeToolTypeFromPayloadWithRoot(payload []byte, root string, toolType string) []byte {
+	if len(payload) == 0 {
+		return payload
+	}
+	toolType = strings.TrimSpace(toolType)
+	if toolType == "" {
+		return payload
+	}
+	toolsPath := buildPayloadPath(root, "tools")
+	return removeToolTypeFromToolsArray(payload, toolsPath, toolType)
+}
+
+func removeToolChoiceFromPayloadWithRoot(payload []byte, root string, toolType string) []byte {
+	if len(payload) == 0 {
+		return payload
+	}
+	toolType = strings.TrimSpace(toolType)
+	if toolType == "" {
+		return payload
+	}
+	toolChoicePath := buildPayloadPath(root, "tool_choice")
+	return removeToolChoiceFromPayload(payload, toolChoicePath, toolType)
+}
+
+func removeToolChoiceFromPayload(payload []byte, toolChoicePath string, toolType string) []byte {
+	choice := gjson.GetBytes(payload, toolChoicePath)
+	if !choice.Exists() {
+		return payload
+	}
+	if choice.Type == gjson.String {
+		if strings.EqualFold(strings.TrimSpace(choice.String()), toolType) {
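Aside: the tool-stripping helpers introduced in this hunk rebuild the `tools` array without the banned entry and return the payload unchanged when nothing matched. The same semantics in a minimal stdlib sketch (the `stripToolType` name and map-based shape are illustrative; the real helper uses gjson/sjson so unknown fields survive byte-for-byte):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripToolType rebuilds the tools slice without entries of the given type.
// The second return value reports whether anything was removed, so callers
// can keep the original payload untouched when nothing matched.
func stripToolType(tools []map[string]any, toolType string) ([]map[string]any, bool) {
	filtered := make([]map[string]any, 0, len(tools))
	removed := false
	for _, tool := range tools {
		if t, _ := tool["type"].(string); t == toolType {
			removed = true
			continue
		}
		filtered = append(filtered, tool)
	}
	return filtered, removed
}

func main() {
	var tools []map[string]any
	_ = json.Unmarshal([]byte(`[{"type":"image_generation"},{"type":"function","name":"f1"}]`), &tools)
	filtered, removed := stripToolType(tools, "image_generation")
	fmt.Println(removed, len(filtered), filtered[0]["type"]) // prints: true 1 function
}
```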
+			updated, errDel := sjson.DeleteBytes(payload, toolChoicePath)
+			if errDel == nil {
+				return updated
+			}
+		}
+		return payload
+	}
+	if choice.Type != gjson.JSON {
+		return payload
+	}
+	choiceType := strings.TrimSpace(choice.Get("type").String())
+	if strings.EqualFold(choiceType, toolType) {
+		updated, errDel := sjson.DeleteBytes(payload, toolChoicePath)
+		if errDel == nil {
+			return updated
+		}
+		return payload
+	}
+	if strings.EqualFold(choiceType, "tool") {
+		name := strings.TrimSpace(choice.Get("name").String())
+		if strings.EqualFold(name, toolType) {
+			updated, errDel := sjson.DeleteBytes(payload, toolChoicePath)
+			if errDel == nil {
+				return updated
+			}
+		}
+	}
+	return payload
+}
+
+func removeToolTypeFromToolsArray(payload []byte, toolsPath string, toolType string) []byte {
+	tools := gjson.GetBytes(payload, toolsPath)
+	if !tools.Exists() || !tools.IsArray() {
+		return payload
+	}
+	removed := false
+	filtered := []byte(`[]`)
+	for _, tool := range tools.Array() {
+		if tool.Get("type").String() == toolType {
+			removed = true
+			continue
+		}
+		updated, errSet := sjson.SetRawBytes(filtered, "-1", []byte(tool.Raw))
+		if errSet != nil {
+			continue
+		}
+		filtered = updated
+	}
+	if !removed {
+		return payload
+	}
+	updated, errSet := sjson.SetRawBytes(payload, toolsPath, filtered)
+	if errSet != nil {
+		return payload
+	}
+	return updated
+}
+
 func payloadRawValue(value any) ([]byte, bool) {
 	if value == nil {
 		return nil, false
@@ -273,6 +390,24 @@ func PayloadRequestedModel(opts cliproxyexecutor.Options, fallback string) strin
 	}
 }
 
+func PayloadRequestPath(opts cliproxyexecutor.Options) string {
+	if len(opts.Metadata) == 0 {
+		return ""
+	}
+	raw, ok := opts.Metadata[cliproxyexecutor.RequestPathMetadataKey]
+	if !ok || raw == nil {
+		return ""
+	}
+	switch v := raw.(type) {
+	case string:
+		return strings.TrimSpace(v)
+	case []byte:
+		return strings.TrimSpace(string(v))
+	default:
+		return ""
+	}
+}
+
 // matchModelPattern performs simple wildcard matching where '*' matches zero or more characters.
 // Examples:
 //
diff --git a/internal/runtime/executor/helps/payload_helpers_disable_image_generation_test.go b/internal/runtime/executor/helps/payload_helpers_disable_image_generation_test.go
new file mode 100644
index 0000000000..0faf012b35
--- /dev/null
+++ b/internal/runtime/executor/helps/payload_helpers_disable_image_generation_test.go
@@ -0,0 +1,134 @@
+package helps
+
+import (
+	"testing"
+
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/tidwall/gjson"
+)
+
+func TestApplyPayloadConfigWithRoot_DisableImageGeneration_RemovesToolsEntry(t *testing.T) {
+	cfg := &config.Config{
+		SDKConfig: config.SDKConfig{DisableImageGeneration: config.DisableImageGenerationAll},
+	}
+	payload := []byte(`{"tools":[{"type":"image_generation","output_format":"png"},{"type":"function","name":"f1"}]}`)
+
+	out := ApplyPayloadConfigWithRoot(cfg, "gpt-5.4", "openai-response", "", payload, nil, "", "")
+
+	tools := gjson.GetBytes(out, "tools")
+	if !tools.Exists() || !tools.IsArray() {
+		t.Fatalf("expected tools array, got %v", tools.Type)
+	}
+	arr := tools.Array()
+	if len(arr) != 1 {
+		t.Fatalf("expected 1 tool after removal, got %d", len(arr))
+	}
+	if got := arr[0].Get("type").String(); got != "function" {
+		t.Fatalf("expected remaining tool type=function, got %q", got)
+	}
+}
+
+func TestApplyPayloadConfigWithRoot_DisableImageGeneration_RemovesToolsEntryWithRoot(t *testing.T) {
+	cfg := &config.Config{
+		SDKConfig: config.SDKConfig{DisableImageGeneration: config.DisableImageGenerationAll},
+	}
+	payload := []byte(`{"request":{"tools":[{"type":"image_generation"},{"type":"web_search"}]}}`)
+
+	out := ApplyPayloadConfigWithRoot(cfg, "gpt-5.4", "gemini-cli", "request", payload, nil, "", "")
+
+	tools := gjson.GetBytes(out, "request.tools")
+	if !tools.Exists() || !tools.IsArray() {
+		t.Fatalf("expected request.tools array, got %v", tools.Type)
+	}
+	arr := tools.Array()
+	if len(arr) != 1 {
+		t.Fatalf("expected 1 tool after removal, got %d", len(arr))
+	}
+	if got := arr[0].Get("type").String(); got != "web_search" {
+		t.Fatalf("expected remaining tool type=web_search, got %q", got)
+	}
+}
+
+func TestApplyPayloadConfigWithRoot_DisableImageGeneration_RemovesToolChoiceByType(t *testing.T) {
+	cfg := &config.Config{
+		SDKConfig: config.SDKConfig{DisableImageGeneration: config.DisableImageGenerationAll},
+	}
+	payload := []byte(`{"tools":[{"type":"image_generation"},{"type":"function","name":"f1"}],"tool_choice":{"type":"image_generation"}}`)
+
+	out := ApplyPayloadConfigWithRoot(cfg, "gpt-5.4", "openai-response", "", payload, nil, "", "")
+
+	if gjson.GetBytes(out, "tool_choice").Exists() {
+		t.Fatalf("expected tool_choice to be removed")
+	}
+}
+
+func TestApplyPayloadConfigWithRoot_DisableImageGeneration_RemovesToolChoiceByNameWithRoot(t *testing.T) {
+	cfg := &config.Config{
+		SDKConfig: config.SDKConfig{DisableImageGeneration: config.DisableImageGenerationAll},
+	}
+	payload := []byte(`{"request":{"tools":[{"type":"image_generation"},{"type":"web_search"}],"tool_choice":{"type":"tool","name":"image_generation"}}}`)
+
+	out := ApplyPayloadConfigWithRoot(cfg, "gpt-5.4", "gemini-cli", "request", payload, nil, "", "")
+
+	if gjson.GetBytes(out, "request.tool_choice").Exists() {
+		t.Fatalf("expected request.tool_choice to be removed")
+	}
+}
+
+func TestApplyPayloadConfigWithRoot_DisableImageGenerationChat_KeepsImageGenerationOnImagesEndpoints(t *testing.T) {
+	cfg := &config.Config{
+		SDKConfig: config.SDKConfig{DisableImageGeneration: config.DisableImageGenerationChat},
+	}
+	payload := []byte(`{"tools":[{"type":"image_generation"},{"type":"function","name":"f1"}],"tool_choice":{"type":"image_generation"}}`)
+
+	out := ApplyPayloadConfigWithRoot(cfg, "gpt-5.4", "openai-response", "", payload, nil, "", "/v1/images/generations")
+
+	tools := gjson.GetBytes(out, "tools")
+	if !tools.Exists() || !tools.IsArray() {
+		t.Fatalf("expected tools array, got %v", tools.Type)
+	}
+	arr := tools.Array()
+	if len(arr) != 2 {
+		t.Fatalf("expected 2 tools (no removal), got %d", len(arr))
+	}
+	if !gjson.GetBytes(out, "tool_choice").Exists() {
+		t.Fatalf("expected tool_choice to be kept on images endpoint")
+	}
+}
+
+func TestApplyPayloadConfigWithRoot_DisableImageGeneration_PayloadOverrideCanRestoreImageGeneration(t *testing.T) {
+	cfg := &config.Config{
+		SDKConfig: config.SDKConfig{DisableImageGeneration: config.DisableImageGenerationAll},
+		Payload: config.PayloadConfig{
+			OverrideRaw: []config.PayloadRule{
+				{
+					Models: []config.PayloadModelRule{
+						{Name: "gpt-5.4", Protocol: "openai-response"},
+					},
+					Params: map[string]any{
+						"tools":       `[{"type":"image_generation"},{"type":"function","name":"f1"}]`,
+						"tool_choice": `{"type":"image_generation"}`,
+					},
+				},
+			},
+		},
+	}
+	payload := []byte(`{"tools":[{"type":"image_generation"},{"type":"function","name":"f1"}],"tool_choice":{"type":"image_generation"}}`)
+
+	out := ApplyPayloadConfigWithRoot(cfg, "gpt-5.4", "openai-response", "", payload, nil, "", "")
+
+	tools := gjson.GetBytes(out, "tools")
+	if !tools.Exists() || !tools.IsArray() {
+		t.Fatalf("expected tools array, got %v", tools.Type)
+	}
+	arr := tools.Array()
+	if len(arr) != 2 {
+		t.Fatalf("expected 2 tools after payload override, got %d", len(arr))
+	}
+	if got := arr[0].Get("type").String(); got != "image_generation" {
+		t.Fatalf("expected first tool type=image_generation, got %q", got)
+	}
+	if !gjson.GetBytes(out, "tool_choice").Exists() {
+		t.Fatalf("expected tool_choice to be restored by payload override")
+	}
+}
diff --git a/internal/runtime/executor/helps/proxy_helpers.go b/internal/runtime/executor/helps/proxy_helpers.go
index 022bc65c17..91fdc9be49 100644
--- a/internal/runtime/executor/helps/proxy_helpers.go
+++ b/internal/runtime/executor/helps/proxy_helpers.go
@@ -6,9 +6,9 @@ import (
 	"strings"
 	"time"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil"
 	log "github.com/sirupsen/logrus"
 )
 
diff --git a/internal/runtime/executor/helps/proxy_helpers_test.go b/internal/runtime/executor/helps/proxy_helpers_test.go
index 3311716765..fb57b6b745 100644
--- a/internal/runtime/executor/helps/proxy_helpers_test.go
+++ b/internal/runtime/executor/helps/proxy_helpers_test.go
@@ -5,9 +5,9 @@ import (
 	"net/http"
 	"testing"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config"
 )
 
 func TestNewProxyAwareHTTPClientDirectBypassesGlobalProxy(t *testing.T) {
diff --git a/internal/runtime/executor/helps/thinking_providers.go b/internal/runtime/executor/helps/thinking_providers.go
index bbd019624d..a776136fde 100644
--- a/internal/runtime/executor/helps/thinking_providers.go
+++ b/internal/runtime/executor/helps/thinking_providers.go
@@ -1,11 +1,11 @@
 package helps
 
 import (
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/antigravity"
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/claude"
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/codex"
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/gemini"
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/geminicli"
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/kimi"
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/openai"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/antigravity"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/claude"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/codex"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/gemini"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/geminicli"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/kimi"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/openai"
 )
diff --git a/internal/runtime/executor/helps/token_helpers.go b/internal/runtime/executor/helps/token_helpers.go
index 92b8ba8dfb..c102e5f1d2 100644
--- a/internal/runtime/executor/helps/token_helpers.go
+++ b/internal/runtime/executor/helps/token_helpers.go
@@ -234,3 +234,127 @@ func addIfNotEmpty(segments *[]string, value string) {
 		*segments = append(*segments, trimmed)
 	}
 }
+
+// CountClaudeChatTokens approximates prompt tokens for Claude API chat payloads.
+func CountClaudeChatTokens(enc tokenizer.Codec, payload []byte) (int64, error) {
+	if enc == nil {
+		return 0, fmt.Errorf("encoder is nil")
+	}
+	if len(payload) == 0 {
+		return 0, nil
+	}
+
+	root := gjson.ParseBytes(payload)
+	segments := make([]string, 0, 32)
+	imageTokens := 0
+
+	collectClaudeContent(root.Get("system"), &segments, &imageTokens)
+	collectClaudeMessages(root.Get("messages"), &segments, &imageTokens)
+	collectClaudeTools(root.Get("tools"), &segments)
+
+	joined := strings.TrimSpace(strings.Join(segments, "\n"))
+	if joined == "" {
+		return int64(imageTokens), nil
+	}
+	count, err := enc.Count(joined)
+	if err != nil {
+		return 0, err
+	}
+	return int64(count + imageTokens), nil
+}
+
+func CollectOpenAIContent(content gjson.Result, segments *[]string) {
+	collectOpenAIContent(content, segments)
+}
+
+func collectClaudeMessages(messages gjson.Result, segments *[]string, imageTokens *int) {
+	if !messages.Exists() || !messages.IsArray() {
+		return
+	}
+	messages.ForEach(func(_, message gjson.Result) bool {
+		addIfNotEmpty(segments, message.Get("role").String())
+		collectClaudeContent(message.Get("content"), segments, imageTokens)
+		return true
+	})
+}
+
+func collectClaudeContent(content gjson.Result, segments *[]string, imageTokens *int) {
+	if !content.Exists() {
+		return
+	}
+	if content.Type == gjson.String {
+		addIfNotEmpty(segments, content.String())
+		return
+	}
+	if content.IsArray() {
+		content.ForEach(func(_, part gjson.Result) bool {
+			partType := part.Get("type").String()
+			switch partType {
+			case "text":
+				addIfNotEmpty(segments, part.Get("text").String())
+			case "image":
+				source := part.Get("source")
+				width := source.Get("width").Float()
+				height := source.Get("height").Float()
+				if imageTokens != nil {
+					*imageTokens += estimateImageTokens(width, height)
+				}
+			case "tool_use":
+				addIfNotEmpty(segments, part.Get("id").String())
+				addIfNotEmpty(segments, part.Get("name").String())
+				if input := part.Get("input"); input.Exists() {
+					addIfNotEmpty(segments, input.Raw)
+				}
+			case "tool_result":
+				addIfNotEmpty(segments, part.Get("tool_use_id").String())
+				collectClaudeContent(part.Get("content"), segments, imageTokens)
+			case "thinking":
+				addIfNotEmpty(segments, part.Get("thinking").String())
+			default:
+				if part.Type == gjson.String {
+					addIfNotEmpty(segments, part.String())
+				} else if part.Type == gjson.JSON {
+					addIfNotEmpty(segments, part.Raw)
+				}
+			}
+			return true
+		})
+		return
+	}
+	if content.Type == gjson.JSON {
+		addIfNotEmpty(segments, content.Raw)
+	}
+}
+
+func collectClaudeTools(tools gjson.Result, segments *[]string) {
+	if !tools.Exists() || !tools.IsArray() {
+		return
+	}
+	tools.ForEach(func(_, tool gjson.Result) bool {
+		addIfNotEmpty(segments, tool.Get("name").String())
+		addIfNotEmpty(segments, tool.Get("description").String())
+		if inputSchema := tool.Get("input_schema"); inputSchema.Exists() {
+			addIfNotEmpty(segments, inputSchema.Raw)
+		}
+		return true
+	})
+}
+
+// estimateImageTokens calculates estimated tokens for an image based on dimensions.
+// Based on Claude's image token calculation: tokens ≈ (width * height) / 750
+// Minimum 85 tokens, maximum 1590 tokens (for 1568x1568 images).
+func estimateImageTokens(width, height float64) int {
+	if width <= 0 || height <= 0 {
+		// No valid dimensions, use default estimate (medium-sized image).
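Aside: the image-token clamp used by the estimator is simple enough to verify in isolation. This stand-alone copy reproduces the formula from the hunk above (width*height/750, floored at 85, capped at 1590, with 1000 as the fallback for unknown dimensions):

```go
package main

import "fmt"

// estimateImageTokens mirrors the helper from token_helpers.go:
// tokens ≈ width*height/750, clamped to [85, 1590], defaulting to 1000
// when no valid dimensions are available.
func estimateImageTokens(width, height float64) int {
	if width <= 0 || height <= 0 {
		return 1000 // unknown size: assume a medium image
	}
	tokens := int(width * height / 750)
	if tokens < 85 {
		return 85
	}
	if tokens > 1590 {
		return 1590
	}
	return tokens
}

func main() {
	fmt.Println(
		estimateImageTokens(0, 0),       // fallback
		estimateImageTokens(100, 100),   // floored
		estimateImageTokens(1000, 750),  // linear region
		estimateImageTokens(1568, 1568), // capped
	) // prints: 1000 85 1000 1590
}
```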
+		return 1000
+	}
+
+	tokens := int(width * height / 750)
+	if tokens < 85 {
+		return 85
+	}
+	if tokens > 1590 {
+		return 1590
+	}
+	return tokens
+}
diff --git a/internal/runtime/executor/helps/usage_helpers.go b/internal/runtime/executor/helps/usage_helpers.go
index 8da8fd1e7a..dd76362e10 100644
--- a/internal/runtime/executor/helps/usage_helpers.go
+++ b/internal/runtime/executor/helps/usage_helpers.go
@@ -3,14 +3,15 @@ package helps
 import (
 	"bytes"
 	"context"
+	"errors"
 	"fmt"
 	"strings"
 	"sync"
 	"time"
 
 	"github.com/gin-gonic/gin"
-	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/usage"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
@@ -18,8 +19,10 @@
 type UsageReporter struct {
 	provider    string
 	model       string
+	alias       string
 	authID      string
 	authIndex   string
+	authType    string
 	apiKey      string
 	source      string
 	requestedAt time.Time
@@ -28,12 +31,18 @@
 func NewUsageReporter(ctx context.Context, provider, model string, auth *cliproxyauth.Auth) *UsageReporter {
 	apiKey := APIKeyFromContext(ctx)
+	alias := usage.RequestedModelAliasFromContext(ctx)
+	if alias == "" {
+		alias = model
+	}
 	reporter := &UsageReporter{
 		provider:    provider,
 		model:       model,
+		alias:       strings.TrimSpace(alias),
 		requestedAt: time.Now(),
 		apiKey:      apiKey,
 		source:      resolveUsageSource(auth, apiKey),
+		authType:    resolveUsageAuthType(auth),
 	}
 	if auth != nil {
 		reporter.authID = auth.ID
@@ -43,11 +52,34 @@
 }
 
 func (r *UsageReporter) Publish(ctx context.Context, detail usage.Detail) {
-	r.publishWithOutcome(ctx, detail, false)
+	r.publishWithOutcome(ctx, detail, false, usage.Failure{})
+}
+
+func (r *UsageReporter) PublishAdditionalModel(ctx context.Context, model string, detail usage.Detail) {
+	record, ok := r.buildAdditionalModelRecord(model, detail)
+	if !ok {
+		return
+	}
+	usage.PublishRecord(ctx, record)
+}
+
+func (r *UsageReporter) buildAdditionalModelRecord(model string, detail usage.Detail) (usage.Record, bool) {
+	if r == nil {
+		return usage.Record{}, false
+	}
+	model = strings.TrimSpace(model)
+	if model == "" {
+		return usage.Record{}, false
+	}
+	detail = normalizeUsageDetailTotal(detail)
+	if !hasNonZeroTokenUsage(detail) {
+		return usage.Record{}, false
+	}
+	return r.buildRecordForModel(model, detail, false, usage.Failure{}), true
 }
 
-func (r *UsageReporter) PublishFailure(ctx context.Context) {
-	r.publishWithOutcome(ctx, usage.Detail{}, true)
+func (r *UsageReporter) PublishFailure(ctx context.Context, errs ...error) {
+	r.publishWithOutcome(ctx, usage.Detail{}, true, failFromErrors(errs...))
 }
 
 func (r *UsageReporter) TrackFailure(ctx context.Context, errPtr *error) {
@@ -55,23 +87,36 @@
 		return
 	}
 	if *errPtr != nil {
-		r.PublishFailure(ctx)
+		r.PublishFailure(ctx, *errPtr)
 	}
 }
 
-func (r *UsageReporter) publishWithOutcome(ctx context.Context, detail usage.Detail, failed bool) {
+func (r *UsageReporter) publishWithOutcome(ctx context.Context, detail usage.Detail, failed bool, fail usage.Failure) {
 	if r == nil {
 		return
 	}
+	detail = normalizeUsageDetailTotal(detail)
+	r.once.Do(func() {
+		usage.PublishRecord(ctx, r.buildRecord(detail, failed, fail))
+	})
+}
+
+func normalizeUsageDetailTotal(detail usage.Detail) usage.Detail {
 	if detail.TotalTokens == 0 {
 		total := detail.InputTokens + detail.OutputTokens + detail.ReasoningTokens
 		if total > 0 {
 			detail.TotalTokens = total
 		}
 	}
-	r.once.Do(func() {
-		usage.PublishRecord(ctx, r.buildRecord(detail, failed))
-	})
+	return detail
+}
+
+func hasNonZeroTokenUsage(detail usage.Detail) bool {
+	return detail.InputTokens != 0 ||
+		detail.OutputTokens != 0 ||
+		detail.ReasoningTokens != 0 ||
+		detail.CachedTokens != 0 ||
+		detail.TotalTokens != 0
 }
 
 // ensurePublished guarantees that a usage record is emitted exactly once.
@@ -83,28 +128,59 @@ func (r *UsageReporter) EnsurePublished(ctx context.Context) {
 		return
 	}
 	r.once.Do(func() {
-		usage.PublishRecord(ctx, r.buildRecord(usage.Detail{}, false))
+		usage.PublishRecord(ctx, r.buildRecord(usage.Detail{}, false, usage.Failure{}))
 	})
 }
 
-func (r *UsageReporter) buildRecord(detail usage.Detail, failed bool) usage.Record {
+func (r *UsageReporter) buildRecord(detail usage.Detail, failed bool, failures ...usage.Failure) usage.Record {
+	var fail usage.Failure
+	if len(failures) > 0 {
+		fail = failures[0]
+	}
+	if r == nil {
+		return usage.Record{Detail: detail, Failed: failed, Fail: fail}
+	}
+	return r.buildRecordForModel(r.model, detail, failed, fail)
+}
+
+func (r *UsageReporter) buildRecordForModel(model string, detail usage.Detail, failed bool, fail usage.Failure) usage.Record {
 	if r == nil {
-		return usage.Record{Detail: detail, Failed: failed}
+		return usage.Record{Model: model, Detail: detail, Failed: failed, Fail: fail}
 	}
 	return usage.Record{
 		Provider:    r.provider,
-		Model:       r.model,
+		Model:       model,
+		Alias:       r.alias,
 		Source:      r.source,
 		APIKey:      r.apiKey,
 		AuthID:      r.authID,
 		AuthIndex:   r.authIndex,
+		AuthType:    r.authType,
 		RequestedAt: r.requestedAt,
 		Latency:     r.latency(),
 		Failed:      failed,
+		Fail:        fail,
 		Detail:      detail,
 	}
 }
 
+func failFromErrors(errs ...error) usage.Failure {
+	for _, err := range errs {
+		if err == nil {
+			continue
+		}
+		fail := usage.Failure{
+			Body: strings.TrimSpace(err.Error()),
+		}
+		var se interface{ StatusCode() int }
+		if errors.As(err, &se) && se != nil {
+			fail.StatusCode = se.StatusCode()
+		}
+		return fail
+	}
+	return usage.Failure{}
+}
+
 func (r *UsageReporter) latency() time.Duration {
 	if r == nil || r.requestedAt.IsZero() {
 		return 0
@@ -124,7 +200,7 @@ func APIKeyFromContext(ctx context.Context) string {
 	if !ok || ginCtx == nil {
 		return ""
 	}
-	if v, exists := ginCtx.Get("apiKey"); exists {
+	if v, exists := ginCtx.Get("userApiKey"); exists {
 		switch value := v.(type) {
 		case string:
 			return value
@@ -181,30 +257,58 @@ func resolveUsageSource(auth *cliproxyauth.Auth, ctxAPIKey string) string {
 	return ""
 }
 
+func resolveUsageAuthType(auth *cliproxyauth.Auth) string {
+	if auth == nil {
+		return ""
+	}
+	kind, _ := auth.AccountInfo()
+	kind = strings.TrimSpace(kind)
+	if kind == "api_key" {
+		return "apikey"
+	}
+	return kind
+}
+
 func ParseCodexUsage(data []byte) (usage.Detail, bool) {
 	usageNode := gjson.ParseBytes(data).Get("response.usage")
-	if !usageNode.Exists() {
+	if !hasOpenAIStyleUsageTokenFields(usageNode) {
 		return usage.Detail{}, false
 	}
-	detail := usage.Detail{
-		InputTokens:  usageNode.Get("input_tokens").Int(),
-		OutputTokens: usageNode.Get("output_tokens").Int(),
-		TotalTokens:  usageNode.Get("total_tokens").Int(),
-	}
-	if cached := usageNode.Get("input_tokens_details.cached_tokens"); cached.Exists() {
-		detail.CachedTokens = cached.Int()
-	}
-	if reasoning := usageNode.Get("output_tokens_details.reasoning_tokens"); reasoning.Exists() {
-		detail.ReasoningTokens = reasoning.Int()
+	return parseOpenAIStyleUsageNode(usageNode), true
+}
+
+func ParseCodexImageToolUsage(data []byte) (usage.Detail, bool) {
+	usageNode := gjson.ParseBytes(data).Get("response.tool_usage.image_gen")
+	if !hasOpenAIStyleUsageTokenFields(usageNode) {
+		return usage.Detail{}, false
 	}
-	return detail, true
+	return parseOpenAIStyleUsageNode(usageNode), true
 }
 
 func ParseOpenAIUsage(data []byte) usage.Detail {
 	usageNode := gjson.ParseBytes(data).Get("usage")
-	if !usageNode.Exists() {
+	if !hasOpenAIStyleUsageTokenFields(usageNode) {
 		return usage.Detail{}
 	}
+	return parseOpenAIStyleUsageNode(usageNode)
+}
+
+func hasOpenAIStyleUsageTokenFields(usageNode gjson.Result) bool {
+	if !usageNode.Exists() || !usageNode.IsObject() {
+		return false
+	}
+	return usageNode.Get("prompt_tokens").Exists() ||
+		usageNode.Get("input_tokens").Exists() ||
+		usageNode.Get("completion_tokens").Exists() ||
+		usageNode.Get("output_tokens").Exists() ||
+		usageNode.Get("total_tokens").Exists() ||
+		usageNode.Get("prompt_tokens_details.cached_tokens").Exists() ||
+		usageNode.Get("input_tokens_details.cached_tokens").Exists() ||
+		usageNode.Get("completion_tokens_details.reasoning_tokens").Exists() ||
+		usageNode.Get("output_tokens_details.reasoning_tokens").Exists()
+}
+
+func parseOpenAIStyleUsageNode(usageNode gjson.Result) usage.Detail {
 	inputNode := usageNode.Get("prompt_tokens")
 	if !inputNode.Exists() {
 		inputNode = usageNode.Get("input_tokens")
@@ -241,21 +345,10 @@ func ParseOpenAIStreamUsage(line []byte) (usage.Detail, bool) {
 		return usage.Detail{}, false
 	}
 	usageNode := gjson.GetBytes(payload, "usage")
-	if !usageNode.Exists() {
+	if !hasOpenAIStyleUsageTokenFields(usageNode) {
 		return usage.Detail{}, false
 	}
-	detail := usage.Detail{
-		InputTokens:  usageNode.Get("prompt_tokens").Int(),
-		OutputTokens: usageNode.Get("completion_tokens").Int(),
-		TotalTokens:  usageNode.Get("total_tokens").Int(),
-	}
-	if cached := usageNode.Get("prompt_tokens_details.cached_tokens"); cached.Exists() {
-		detail.CachedTokens = cached.Int()
-	}
-	if reasoning := usageNode.Get("completion_tokens_details.reasoning_tokens"); reasoning.Exists() {
-		detail.ReasoningTokens = reasoning.Int()
-	}
-	return detail, true
+	return parseOpenAIStyleUsageNode(usageNode), true
 }
 
 func ParseClaudeUsage(data []byte) usage.Detail {
@@ -311,12 +404,22 @@ func parseGeminiFamilyUsageDetail(node gjson.Result) usage.Detail {
 	return detail
 }
 
+func hasGeminiFamilyUsageTokenFields(node gjson.Result) bool {
+	return node.Get("promptTokenCount").Exists() ||
+		node.Get("candidatesTokenCount").Exists() ||
+		node.Get("thoughtsTokenCount").Exists() ||
+		node.Get("totalTokenCount").Exists() ||
+		node.Get("cachedContentTokenCount").Exists()
+}
+
 func ParseGeminiCLIUsage(data []byte) usage.Detail {
 	usageNode := gjson.ParseBytes(data)
-	node := usageNode.Get("response.usageMetadata")
-	if !node.Exists() {
-		node = usageNode.Get("response.usage_metadata")
-	}
+	node := firstExistingUsageNode(usageNode,
+		"response.usageMetadata",
+		"response.usage_metadata",
+		"usageMetadata",
+		"usage_metadata",
+	)
 	if !node.Exists() {
 		return usage.Detail{}
 	}
@@ -355,16 +458,32 @@ func ParseGeminiCLIStreamUsage(line []byte) (usage.Detail, bool) {
 	if len(payload) == 0 || !gjson.ValidBytes(payload) {
 		return usage.Detail{}, false
 	}
-	node := gjson.GetBytes(payload, "response.usageMetadata")
+	root := gjson.ParseBytes(payload)
+	node := firstExistingUsageNode(root,
+		"response.usageMetadata",
+		"response.usage_metadata",
+		"usageMetadata",
+		"usage_metadata",
+	)
 	if !node.Exists() {
-		node = gjson.GetBytes(payload, "usage_metadata")
+		return usage.Detail{}, false
 	}
-	if !node.Exists() {
+	if !hasGeminiFamilyUsageTokenFields(node) {
 		return usage.Detail{}, false
 	}
 	return parseGeminiFamilyUsageDetail(node), true
 }
 
+func firstExistingUsageNode(root gjson.Result, paths ...string) gjson.Result {
+	for _, path := range paths {
+		node := root.Get(path)
+		if node.Exists() {
+			return node
+		}
+	}
+	return gjson.Result{}
+}
+
 func ParseAntigravityUsage(data []byte) usage.Detail {
 	usageNode := gjson.ParseBytes(data)
 	node := usageNode.Get("response.usageMetadata")
diff --git a/internal/runtime/executor/helps/usage_helpers_test.go b/internal/runtime/executor/helps/usage_helpers_test.go
index 1a5648e89b..bd0a9c21ba 100644
--- a/internal/runtime/executor/helps/usage_helpers_test.go
+++ b/internal/runtime/executor/helps/usage_helpers_test.go
@@ -1,10 +1,11 @@
 package helps
 
 import (
+	"context"
 	"testing"
 	"time"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/usage"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage"
 )
 
 func TestParseOpenAIUsageChatCompletions(t *testing.T) {
@@ -47,6 +48,88 @@ func TestParseOpenAIUsageResponses(t *testing.T) {
 	}
 }
 
+func TestParseOpenAIUsageIgnoresNullUsage(t *testing.T) {
+	data := []byte(`{"usage":null}`)
+	detail := ParseOpenAIUsage(data)
+	if detail != (usage.Detail{}) {
+		t.Fatalf("detail = %+v, want zero detail", detail)
+	}
+}
+
+func TestParseOpenAIStreamUsageIgnoresNullUsage(t *testing.T) {
+	line := []byte(`data: {"id":"chunk_1","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"hi"},"finish_reason":null}],"usage":null}`)
+	if detail, ok := ParseOpenAIStreamUsage(line); ok {
+		t.Fatalf("ParseOpenAIStreamUsage() = (%+v, true), want false for null usage", detail)
+	}
+}
+
+func TestParseOpenAIStreamUsageResponsesFields(t *testing.T) {
+	line := []byte(`data: {"id":"chunk_1","object":"chat.completion.chunk","choices":[],"usage":{"input_tokens":8,"output_tokens":5,"total_tokens":13,"input_tokens_details":{"cached_tokens":3},"output_tokens_details":{"reasoning_tokens":2}}}`)
+	detail, ok := ParseOpenAIStreamUsage(line)
+	if !ok {
+		t.Fatal("ParseOpenAIStreamUsage() ok = false, want true")
+	}
+	if detail.InputTokens != 8 {
+		t.Fatalf("input tokens = %d, want %d", detail.InputTokens, 8)
+	}
+	if detail.OutputTokens != 5 {
+		t.Fatalf("output tokens = %d, want %d", detail.OutputTokens, 5)
+	}
+	if detail.TotalTokens != 13 {
+		t.Fatalf("total tokens = %d, want %d", detail.TotalTokens, 13)
+	}
+	if detail.CachedTokens != 3 {
+		t.Fatalf("cached tokens = %d, want %d", detail.CachedTokens, 3)
+	}
+	if detail.ReasoningTokens != 2 {
+		t.Fatalf("reasoning tokens = %d, want %d", detail.ReasoningTokens, 2)
+	}
+}
+
+func TestParseGeminiCLIUsage_TopLevelUsageMetadata(t *testing.T) {
+	data := []byte(`{"usageMetadata":{"promptTokenCount":11,"candidatesTokenCount":7,"thoughtsTokenCount":3,"totalTokenCount":21,"cachedContentTokenCount":5}}`)
+	detail := ParseGeminiCLIUsage(data)
+	if detail.InputTokens != 11 {
+		t.Fatalf("input tokens = %d, want %d", detail.InputTokens, 11)
+	}
+	if detail.OutputTokens != 7 {
+		t.Fatalf("output tokens = %d, want %d", detail.OutputTokens, 7)
+	}
+	if detail.ReasoningTokens != 3 {
+		t.Fatalf("reasoning tokens = %d, want %d", detail.ReasoningTokens, 3)
+	}
+	if detail.TotalTokens != 21 {
+		t.Fatalf("total tokens = %d, want %d", detail.TotalTokens, 21)
+	}
+	if detail.CachedTokens != 5 {
+		t.Fatalf("cached tokens = %d, want %d", detail.CachedTokens, 5)
+	}
+}
+
+func TestParseGeminiCLIStreamUsage_ResponseSnakeCaseUsageMetadata(t *testing.T) {
+	line := []byte(`data: {"response":{"usage_metadata":{"promptTokenCount":13,"candidatesTokenCount":2,"totalTokenCount":15}}}`)
+	detail, ok := ParseGeminiCLIStreamUsage(line)
+	if !ok {
+		t.Fatal("ParseGeminiCLIStreamUsage() ok = false, want true")
+	}
+	if detail.InputTokens != 13 {
+		t.Fatalf("input tokens = %d, want %d", detail.InputTokens, 13)
+	}
+	if detail.OutputTokens != 2 {
+		t.Fatalf("output tokens = %d, want %d", detail.OutputTokens, 2)
+	}
+	if detail.TotalTokens != 15 {
+		t.Fatalf("total tokens = %d, want %d", detail.TotalTokens, 15)
+	}
+}
+
+func TestParseGeminiCLIStreamUsage_IgnoresTrafficTypeOnlyUsageMetadata(t *testing.T) {
+	line := []byte(`data: {"response":{"usageMetadata":{"trafficType":"ON_DEMAND"}}}`)
+	if detail, ok := ParseGeminiCLIStreamUsage(line); ok {
+		t.Fatalf("ParseGeminiCLIStreamUsage() = (%+v, true), want false for traffic-only usage metadata", detail)
+	}
+}
+
 func TestUsageReporterBuildRecordIncludesLatency(t *testing.T) {
 	reporter := &UsageReporter{
 		provider:    "openai",
@@ -62,3 +145,34 @@
 		t.Fatalf("latency = %v, want <= 3s", record.Latency)
 	}
 }
+
+func TestUsageReporterBuildRecordIncludesRequestedModelAlias(t *testing.T) {
+	ctx := usage.WithRequestedModelAlias(context.Background(), "client-gpt")
+	reporter := NewUsageReporter(ctx, "openai", "gpt-5.4", nil)
+
+	record := reporter.buildRecord(usage.Detail{TotalTokens: 3}, false)
+	if record.Model != "gpt-5.4" {
+		t.Fatalf("model = %q, want %q", record.Model, "gpt-5.4")
+	}
+	if record.Alias != "client-gpt" {
+		t.Fatalf("alias = %q, want %q", record.Alias, "client-gpt")
+	}
+}
+
+func TestUsageReporterBuildAdditionalModelRecordSkipsZeroTokens(t *testing.T) {
+	reporter := &UsageReporter{
+		provider:    "codex",
+		model:       "gpt-5.4",
+		requestedAt: time.Now(),
+	}
+
+	if _, ok := reporter.buildAdditionalModelRecord("gpt-image-2", usage.Detail{}); ok {
+		t.Fatalf("expected all-zero token usage to be skipped")
+	}
+	if _, ok := reporter.buildAdditionalModelRecord("gpt-image-2", usage.Detail{InputTokens: 2}); !ok {
+		t.Fatalf("expected non-zero input token usage to be recorded")
+	}
+	if _, ok := reporter.buildAdditionalModelRecord("gpt-image-2", usage.Detail{CachedTokens: 2}); !ok {
+		t.Fatalf("expected non-zero cached token usage to be recorded")
+	}
+}
diff --git a/internal/runtime/executor/helps/utls_client.go b/internal/runtime/executor/helps/utls_client.go
index 39512a58de..29174e47b6 100644
--- a/internal/runtime/executor/helps/utls_client.go
+++ b/internal/runtime/executor/helps/utls_client.go
@@ -8,9 +8,9 @@ import (
 	"time"
 
 	tls "github.com/refraction-networking/utls"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil"
 	log "github.com/sirupsen/logrus"
 	"golang.org/x/net/http2"
 	"golang.org/x/net/proxy"
diff --git a/internal/runtime/executor/helps/vertex_payload_helpers.go b/internal/runtime/executor/helps/vertex_payload_helpers.go
new file mode 100644
index 0000000000..4c84fae45e
--- /dev/null
+++ b/internal/runtime/executor/helps/vertex_payload_helpers.go
@@ -0,0 +1,43 @@
+package helps
+
+import (
+	"fmt"
+	"strings"
+
+	"github.com/tidwall/gjson"
+	"github.com/tidwall/sjson"
+)
+
+// StripVertexOpenAIResponsesToolCallIDs removes OpenAI Responses call IDs that
+// Vertex rejects in Gemini functionCall/functionResponse
payloads. +func StripVertexOpenAIResponsesToolCallIDs(payload []byte, sourceFormat string) []byte { + if !strings.EqualFold(strings.TrimSpace(sourceFormat), "openai-response") { + return payload + } + + contents := gjson.GetBytes(payload, "contents") + if !contents.IsArray() { + return payload + } + + out := payload + for contentIndex, content := range contents.Array() { + parts := content.Get("parts") + if !parts.IsArray() { + continue + } + for partIndex, part := range parts.Array() { + if part.Get("functionCall.id").Exists() { + if updated, errDelete := sjson.DeleteBytes(out, fmt.Sprintf("contents.%d.parts.%d.functionCall.id", contentIndex, partIndex)); errDelete == nil { + out = updated + } + } + if part.Get("functionResponse.id").Exists() { + if updated, errDelete := sjson.DeleteBytes(out, fmt.Sprintf("contents.%d.parts.%d.functionResponse.id", contentIndex, partIndex)); errDelete == nil { + out = updated + } + } + } + } + return out +} diff --git a/internal/runtime/executor/iflow_executor.go b/internal/runtime/executor/iflow_executor.go new file mode 100644 index 0000000000..8fd03c6794 --- /dev/null +++ b/internal/runtime/executor/iflow_executor.go @@ -0,0 +1,585 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "crypto/hmac" + "crypto/sha256" + "encoding/hex" + "fmt" + "io" + "net/http" + "strings" + "time" + + "github.com/google/uuid" + iflowauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/iflow" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + 
"github.com/tidwall/gjson" + "github.com/tidwall/sjson" +) + +const ( + iflowDefaultEndpoint = "/chat/completions" + iflowUserAgent = "iFlow-Cli" +) + +// IFlowExecutor executes OpenAI-compatible chat completions against the iFlow API using API keys derived from OAuth. +type IFlowExecutor struct { + cfg *config.Config +} + +// NewIFlowExecutor constructs a new executor instance. +func NewIFlowExecutor(cfg *config.Config) *IFlowExecutor { return &IFlowExecutor{cfg: cfg} } + +// Identifier returns the provider key. +func (e *IFlowExecutor) Identifier() string { return "iflow" } + +// PrepareRequest injects iFlow credentials into the outgoing HTTP request. +func (e *IFlowExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + apiKey, _ := iflowCreds(auth) + if strings.TrimSpace(apiKey) != "" { + req.Header.Set("Authorization", "Bearer "+apiKey) + } + return nil +} + +// HttpRequest injects iFlow credentials into the request and executes it. +func (e *IFlowExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("iflow executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if err := e.PrepareRequest(httpReq, auth); err != nil { + return nil, err + } + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +// Execute performs a non-streaming chat completion request. 
+func (e *IFlowExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + if opts.Alt == "responses/compact" { + return resp, statusErr{code: http.StatusNotImplemented, msg: "/responses/compact not supported"} + } + baseModel := thinking.ParseSuffix(req.Model).ModelName + + apiKey, baseURL := iflowCreds(auth) + if strings.TrimSpace(apiKey) == "" { + err = fmt.Errorf("iflow executor: missing api key") + return resp, err + } + if baseURL == "" { + baseURL = iflowauth.DefaultAPIBaseURL + } + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalPayload := originalPayloadSource + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false) + body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false) + body, _ = sjson.SetBytes(body, "model", baseModel) + + body, err = thinking.ApplyThinking(body, req.Model, from.String(), "iflow", e.Identifier()) + if err != nil { + return resp, err + } + + body = preserveReasoningContentInMessages(body) + requestedModel := helps.PayloadRequestedModel(opts, req.Model) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, "") + + endpoint := strings.TrimSuffix(baseURL, "/") + iflowDefaultEndpoint + + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(body)) + if err != nil { + return resp, err + } + applyIFlowHeaders(httpReq, apiKey, false) + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + var authID, authLabel, 
authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: endpoint, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: body, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + helps.RecordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("iflow executor: close response body error: %v", errClose) + } + }() + helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + helps.AppendAPIResponseChunk(ctx, e.cfg, b) + helps.LogWithRequestID(ctx).Debugf("request error, error status: %d error message: %s", httpResp.StatusCode, helps.SummarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return resp, err + } + + data, err := io.ReadAll(httpResp.Body) + if err != nil { + helps.RecordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + helps.AppendAPIResponseChunk(ctx, e.cfg, data) + reporter.Publish(ctx, helps.ParseOpenAIUsage(data)) + // Ensure usage is recorded even if upstream omits usage metadata. + reporter.EnsurePublished(ctx) + + var param any + // Note: TranslateNonStream uses req.Model (original with suffix) to preserve + // the original model name in the response for client compatibility. 
+	out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
+	resp = cliproxyexecutor.Response{Payload: out, Headers: httpResp.Header.Clone()}
+	return resp, nil
+}
+
+// ExecuteStream performs a streaming chat completion request.
+func (e *IFlowExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) {
+	if opts.Alt == "responses/compact" {
+		return nil, statusErr{code: http.StatusNotImplemented, msg: "/responses/compact not supported"}
+	}
+	baseModel := thinking.ParseSuffix(req.Model).ModelName
+
+	apiKey, baseURL := iflowCreds(auth)
+	if strings.TrimSpace(apiKey) == "" {
+		err = fmt.Errorf("iflow executor: missing api key")
+		return nil, err
+	}
+	if baseURL == "" {
+		baseURL = iflowauth.DefaultAPIBaseURL
+	}
+
+	reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth)
+	defer reporter.TrackFailure(ctx, &err)
+
+	from := opts.SourceFormat
+	to := sdktranslator.FromString("openai")
+	originalPayloadSource := req.Payload
+	if len(opts.OriginalRequest) > 0 {
+		originalPayloadSource = opts.OriginalRequest
+	}
+	originalPayload := originalPayloadSource
+	originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
+	body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
+	body, _ = sjson.SetBytes(body, "model", baseModel)
+
+	body, err = thinking.ApplyThinking(body, req.Model, from.String(), "iflow", e.Identifier())
+	if err != nil {
+		return nil, err
+	}
+
+	body = preserveReasoningContentInMessages(body)
+	// Ensure tools array exists to avoid provider quirks observed in some upstreams.
+ toolsResult := gjson.GetBytes(body, "tools") + if toolsResult.Exists() && toolsResult.IsArray() && len(toolsResult.Array()) == 0 { + body = ensureToolsArray(body) + } + requestedModel := helps.PayloadRequestedModel(opts, req.Model) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, "") + + endpoint := strings.TrimSuffix(baseURL, "/") + iflowDefaultEndpoint + + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(body)) + if err != nil { + return nil, err + } + applyIFlowHeaders(httpReq, apiKey, true) + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: endpoint, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: body, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + helps.RecordAPIResponseError(ctx, e.cfg, err) + return nil, err + } + + helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + data, _ := io.ReadAll(httpResp.Body) + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("iflow executor: close response body error: %v", errClose) + } + helps.AppendAPIResponseChunk(ctx, e.cfg, data) + helps.LogWithRequestID(ctx).Debugf("request error, error status: %d error message: %s", httpResp.StatusCode, helps.SummarizeErrorBody(httpResp.Header.Get("Content-Type"), data)) + err = statusErr{code: httpResp.StatusCode, msg: 
string(data)}
+		return nil, err
+	}
+
+	out := make(chan cliproxyexecutor.StreamChunk)
+	go func() {
+		defer close(out)
+		defer func() {
+			if errClose := httpResp.Body.Close(); errClose != nil {
+				log.Errorf("iflow executor: close response body error: %v", errClose)
+			}
+		}()
+
+		scanner := bufio.NewScanner(httpResp.Body)
+		scanner.Buffer(nil, 52_428_800) // 50MB
+		var param any
+		for scanner.Scan() {
+			line := scanner.Bytes()
+			helps.AppendAPIResponseChunk(ctx, e.cfg, line)
+			if detail, ok := helps.ParseOpenAIStreamUsage(line); ok {
+				reporter.Publish(ctx, detail)
+			}
+			chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param)
+			for i := range chunks {
+				out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]}
+			}
+		}
+		if errScan := scanner.Err(); errScan != nil {
+			helps.RecordAPIResponseError(ctx, e.cfg, errScan)
+			reporter.PublishFailure(ctx)
+			out <- cliproxyexecutor.StreamChunk{Err: errScan}
+		}
+		// Guarantee a usage record exists even if the stream never emitted usage data.
+ reporter.EnsurePublished(ctx) + }() + + return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil +} + +func (e *IFlowExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false) + + enc, err := helps.TokenizerForModel(baseModel) + if err != nil { + return cliproxyexecutor.Response{}, fmt.Errorf("iflow executor: tokenizer init failed: %w", err) + } + + count, err := helps.CountOpenAIChatTokens(enc, body) + if err != nil { + return cliproxyexecutor.Response{}, fmt.Errorf("iflow executor: token counting failed: %w", err) + } + + usageJSON := helps.BuildOpenAIUsageJSON(count) + translated := sdktranslator.TranslateTokenCount(ctx, to, from, count, usageJSON) + return cliproxyexecutor.Response{Payload: translated}, nil +} + +// Refresh refreshes OAuth tokens or cookie-based API keys and updates the stored API key. 
+func (e *IFlowExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + log.Debugf("iflow executor: refresh called") + if auth == nil { + return nil, fmt.Errorf("iflow executor: auth is nil") + } + + // Check if this is cookie-based authentication + var cookie string + var email string + if auth.Metadata != nil { + if v, ok := auth.Metadata["cookie"].(string); ok { + cookie = strings.TrimSpace(v) + } + if v, ok := auth.Metadata["email"].(string); ok { + email = strings.TrimSpace(v) + } + } + + // If cookie is present, use cookie-based refresh + if cookie != "" && email != "" { + return e.refreshCookieBased(ctx, auth, cookie, email) + } + + // Otherwise, use OAuth-based refresh + return e.refreshOAuthBased(ctx, auth) +} + +// refreshCookieBased refreshes API key using browser cookie +func (e *IFlowExecutor) refreshCookieBased(ctx context.Context, auth *cliproxyauth.Auth, cookie, email string) (*cliproxyauth.Auth, error) { + log.Debugf("iflow executor: checking refresh need for cookie-based API key for user: %s", email) + + // Get current expiry time from metadata + var currentExpire string + if auth.Metadata != nil { + if v, ok := auth.Metadata["expired"].(string); ok { + currentExpire = strings.TrimSpace(v) + } + } + + // Check if refresh is needed + needsRefresh, _, err := iflowauth.ShouldRefreshAPIKey(currentExpire) + if err != nil { + log.Warnf("iflow executor: failed to check refresh need: %v", err) + // If we can't check, continue with refresh anyway as a safety measure + } else if !needsRefresh { + log.Debugf("iflow executor: no refresh needed for user: %s", email) + return auth, nil + } + + log.Infof("iflow executor: refreshing cookie-based API key for user: %s", email) + + svc := iflowauth.NewIFlowAuth(e.cfg) + keyData, err := svc.RefreshAPIKey(ctx, cookie, email) + if err != nil { + log.Errorf("iflow executor: cookie-based API key refresh failed: %v", err) + return nil, err + } + + if auth.Metadata == nil { + 
auth.Metadata = make(map[string]any) + } + auth.Metadata["api_key"] = keyData.APIKey + auth.Metadata["expired"] = keyData.ExpireTime + auth.Metadata["type"] = "iflow" + auth.Metadata["last_refresh"] = time.Now().Format(time.RFC3339) + auth.Metadata["cookie"] = cookie + auth.Metadata["email"] = email + + log.Infof("iflow executor: cookie-based API key refreshed successfully, new expiry: %s", keyData.ExpireTime) + + if auth.Attributes == nil { + auth.Attributes = make(map[string]string) + } + auth.Attributes["api_key"] = keyData.APIKey + + return auth, nil +} + +// refreshOAuthBased refreshes tokens using OAuth refresh token +func (e *IFlowExecutor) refreshOAuthBased(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + refreshToken := "" + oldAccessToken := "" + if auth.Metadata != nil { + if v, ok := auth.Metadata["refresh_token"].(string); ok { + refreshToken = strings.TrimSpace(v) + } + if v, ok := auth.Metadata["access_token"].(string); ok { + oldAccessToken = strings.TrimSpace(v) + } + } + if refreshToken == "" { + return auth, nil + } + + // Log the old access token (masked) before refresh + if oldAccessToken != "" { + log.Debugf("iflow executor: refreshing access token, old: %s", util.HideAPIKey(oldAccessToken)) + } + + svc := iflowauth.NewIFlowAuth(e.cfg) + tokenData, err := svc.RefreshTokens(ctx, refreshToken) + if err != nil { + log.Errorf("iflow executor: token refresh failed: %v", err) + return nil, err + } + + if auth.Metadata == nil { + auth.Metadata = make(map[string]any) + } + auth.Metadata["access_token"] = tokenData.AccessToken + if tokenData.RefreshToken != "" { + auth.Metadata["refresh_token"] = tokenData.RefreshToken + } + if tokenData.APIKey != "" { + auth.Metadata["api_key"] = tokenData.APIKey + } + auth.Metadata["expired"] = tokenData.Expire + auth.Metadata["type"] = "iflow" + auth.Metadata["last_refresh"] = time.Now().Format(time.RFC3339) + + // Log the new access token (masked) after successful refresh + 
log.Debugf("iflow executor: token refresh successful, new: %s", util.HideAPIKey(tokenData.AccessToken)) + + if auth.Attributes == nil { + auth.Attributes = make(map[string]string) + } + if tokenData.APIKey != "" { + auth.Attributes["api_key"] = tokenData.APIKey + } + + return auth, nil +} + +func applyIFlowHeaders(r *http.Request, apiKey string, stream bool) { + r.Header.Set("Content-Type", "application/json") + r.Header.Set("Authorization", "Bearer "+apiKey) + r.Header.Set("User-Agent", iflowUserAgent) + + // Generate session-id + sessionID := "session-" + generateUUID() + r.Header.Set("session-id", sessionID) + + // Generate timestamp and signature + timestamp := time.Now().UnixMilli() + r.Header.Set("x-iflow-timestamp", fmt.Sprintf("%d", timestamp)) + + signature := createIFlowSignature(iflowUserAgent, sessionID, timestamp, apiKey) + if signature != "" { + r.Header.Set("x-iflow-signature", signature) + } + + if stream { + r.Header.Set("Accept", "text/event-stream") + } else { + r.Header.Set("Accept", "application/json") + } +} + +// createIFlowSignature generates HMAC-SHA256 signature for iFlow API requests. +// The signature payload format is: userAgent:sessionId:timestamp +func createIFlowSignature(userAgent, sessionID string, timestamp int64, apiKey string) string { + if apiKey == "" { + return "" + } + payload := fmt.Sprintf("%s:%s:%d", userAgent, sessionID, timestamp) + h := hmac.New(sha256.New, []byte(apiKey)) + h.Write([]byte(payload)) + return hex.EncodeToString(h.Sum(nil)) +} + +// generateUUID generates a random UUID v4 string. 
+func generateUUID() string { + return uuid.New().String() +} + +func iflowCreds(a *cliproxyauth.Auth) (apiKey, baseURL string) { + if a == nil { + return "", "" + } + if a.Attributes != nil { + if v := strings.TrimSpace(a.Attributes["api_key"]); v != "" { + apiKey = v + } + if v := strings.TrimSpace(a.Attributes["base_url"]); v != "" { + baseURL = v + } + } + if apiKey == "" && a.Metadata != nil { + if v, ok := a.Metadata["api_key"].(string); ok { + apiKey = strings.TrimSpace(v) + } + } + if baseURL == "" && a.Metadata != nil { + if v, ok := a.Metadata["base_url"].(string); ok { + baseURL = strings.TrimSpace(v) + } + } + return apiKey, baseURL +} + +func ensureToolsArray(body []byte) []byte { + placeholder := `[{"type":"function","function":{"name":"noop","description":"Placeholder tool to stabilise streaming","parameters":{"type":"object"}}}]` + updated, err := sjson.SetRawBytes(body, "tools", []byte(placeholder)) + if err != nil { + return body + } + return updated +} + +// preserveReasoningContentInMessages checks if reasoning_content from assistant messages +// is preserved in conversation history for iFlow models that support thinking. +// This is helpful for multi-turn conversations where the model may benefit from seeing +// its previous reasoning to maintain coherent thought chains. +// +// For GLM-4.6/4.7 and MiniMax M2/M2.1, it is recommended to include the full assistant +// response (including reasoning_content) in message history for better context continuity. 
+func preserveReasoningContentInMessages(body []byte) []byte { + model := strings.ToLower(gjson.GetBytes(body, "model").String()) + + // Only apply to models that support thinking with history preservation + needsPreservation := strings.HasPrefix(model, "glm-4") || strings.HasPrefix(model, "minimax-m2") + + if !needsPreservation { + return body + } + + messages := gjson.GetBytes(body, "messages") + if !messages.Exists() || !messages.IsArray() { + return body + } + + // Check if any assistant message already has reasoning_content preserved + hasReasoningContent := false + messages.ForEach(func(_, msg gjson.Result) bool { + role := msg.Get("role").String() + if role == "assistant" { + rc := msg.Get("reasoning_content") + if rc.Exists() && rc.String() != "" { + hasReasoningContent = true + return false // stop iteration + } + } + return true + }) + + // If reasoning content is already present, the messages are properly formatted + // No need to modify - the client has correctly preserved reasoning in history + if hasReasoningContent { + log.Debugf("iflow executor: reasoning_content found in message history for %s", model) + } + + return body +} diff --git a/internal/runtime/executor/iflow_executor_test.go b/internal/runtime/executor/iflow_executor_test.go new file mode 100644 index 0000000000..93188a0a04 --- /dev/null +++ b/internal/runtime/executor/iflow_executor_test.go @@ -0,0 +1,67 @@ +package executor + +import ( + "testing" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" +) + +func TestIFlowExecutorParseSuffix(t *testing.T) { + tests := []struct { + name string + model string + wantBase string + wantLevel string + }{ + {"no suffix", "glm-4", "glm-4", ""}, + {"glm with suffix", "glm-4.1-flash(high)", "glm-4.1-flash", "high"}, + {"minimax no suffix", "minimax-m2", "minimax-m2", ""}, + {"minimax with suffix", "minimax-m2.1(medium)", "minimax-m2.1", "medium"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := 
thinking.ParseSuffix(tt.model) + if result.ModelName != tt.wantBase { + t.Errorf("ParseSuffix(%q).ModelName = %q, want %q", tt.model, result.ModelName, tt.wantBase) + } + }) + } +} + +func TestPreserveReasoningContentInMessages(t *testing.T) { + tests := []struct { + name string + input []byte + want []byte // nil means output should equal input + }{ + { + "non-glm model passthrough", + []byte(`{"model":"gpt-4","messages":[]}`), + nil, + }, + { + "glm model with empty messages", + []byte(`{"model":"glm-4","messages":[]}`), + nil, + }, + { + "glm model preserves existing reasoning_content", + []byte(`{"model":"glm-4","messages":[{"role":"assistant","content":"hi","reasoning_content":"thinking..."}]}`), + nil, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := preserveReasoningContentInMessages(tt.input) + want := tt.want + if want == nil { + want = tt.input + } + if string(got) != string(want) { + t.Errorf("preserveReasoningContentInMessages() = %s, want %s", got, want) + } + }) + } +} diff --git a/internal/runtime/executor/joycode_executor.go b/internal/runtime/executor/joycode_executor.go new file mode 100644 index 0000000000..322340f687 --- /dev/null +++ b/internal/runtime/executor/joycode_executor.go @@ -0,0 +1,285 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "crypto/rand" + "encoding/hex" + "encoding/json" + "fmt" + "io" + "net/http" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/joycode" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + sdktranslator 
"github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" +) + +const ( + joycodeChatURL = "https://joycode-api.jd.com/api/saas/openai/v1/chat/completions" +) + +type JoyCodeExecutor struct { + cfg *config.Config +} + +func NewJoyCodeExecutor(cfg *config.Config) *JoyCodeExecutor { + return &JoyCodeExecutor{cfg: cfg} +} + +func (e *JoyCodeExecutor) Identifier() string { return "joycode" } + +func (e *JoyCodeExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if auth == nil || auth.Metadata == nil { + return fmt.Errorf("joycode: missing auth metadata") + } + + ptKey, _ := auth.Metadata["ptKey"].(string) + if ptKey == "" { + return fmt.Errorf("joycode: missing ptKey credential") + } + + req.Header.Set("Content-Type", "application/json; charset=UTF-8") + req.Header.Set("ptKey", ptKey) + req.Header.Set("loginType", "") + req.Header.Set("User-Agent", joycode.JoyCodeUA) + req.Header.Set("Accept", "application/json") + req.Header.Set("x-ms-client-request-id", generateJoyCodeRequestID()) + + return nil +} + +func (e *JoyCodeExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + client := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 5*time.Minute) + + if err := e.PrepareRequest(req, auth); err != nil { + return nil, err + } + + resp, err := client.Do(req) + if err != nil { + return nil, fmt.Errorf("joycode: request failed: %w", err) + } + return resp, nil +} + +func (e *JoyCodeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + parsed := thinking.ParseSuffix(req.Model) + baseModel := parsed.ModelName + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + payload := buildJoyCodePayload(req.Payload, baseModel, auth) + + httpReq, err := 
http.NewRequestWithContext(ctx, "POST", joycodeChatURL, bytes.NewReader(payload))
+	if err != nil {
+		return resp, err
+	}
+
+	httpResp, err := e.HttpRequest(ctx, auth, httpReq)
+	if err != nil {
+		return resp, err
+	}
+	defer httpResp.Body.Close()
+
+	if httpResp.StatusCode != 200 {
+		body, _ := io.ReadAll(httpResp.Body)
+		return resp, statusErr{
+			code: httpResp.StatusCode,
+			msg:  fmt.Sprintf("joycode: API returned %d: %s", httpResp.StatusCode, string(body)),
+		}
+	}
+
+	body, _ := io.ReadAll(httpResp.Body)
+
+	from := sdktranslator.FromString("openai")
+	to := sdktranslator.FromString("joycode")
+
+	var param any
+	translated := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, req.Payload, body, &param)
+
+	promptTokens := gjson.GetBytes(body, "usage.prompt_tokens").Int()
+	completionTokens := gjson.GetBytes(body, "usage.completion_tokens").Int()
+
+	reporter.Publish(ctx, usage.Detail{
+		InputTokens:  promptTokens,
+		OutputTokens: completionTokens,
+	})
+
+	helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{
+		URL:      joycodeChatURL,
+		Method:   "POST",
+		Provider: "joycode",
+		AuthID:   auth.ID,
+	})
+
+	return cliproxyexecutor.Response{Payload: translated}, nil
+}
+
+func (e *JoyCodeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) {
+	parsed := thinking.ParseSuffix(req.Model)
+	baseModel := parsed.ModelName
+
+	reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth)
+	defer reporter.TrackFailure(ctx, &err)
+
+	payload := buildJoyCodePayload(req.Payload, baseModel, auth)
+
+	httpReq, err := http.NewRequestWithContext(ctx, "POST", joycodeChatURL, bytes.NewReader(payload))
+	if err != nil {
+		return nil, err
+	}
+
+	httpResp, err := e.HttpRequest(ctx, auth, httpReq)
+	if err != nil {
+		return nil, err
+	}
+
+	if httpResp.StatusCode != 200 {
+		body, _ := io.ReadAll(httpResp.Body)
+		
httpResp.Body.Close() + return nil, statusErr{ + code: httpResp.StatusCode, + msg: fmt.Sprintf("joycode: API returned %d: %s", httpResp.StatusCode, string(body)), + } + } + + chunks := make(chan cliproxyexecutor.StreamChunk, 64) + + go func() { + defer close(chunks) + defer httpResp.Body.Close() + + from := sdktranslator.FromString("openai") + to := sdktranslator.FromString("joycode") + var streamParam any + var totalPromptTokens, totalCompletionTokens int64 + + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) + for scanner.Scan() { + line := scanner.Text() + if line == "" { + continue + } + + var data string + if strings.HasPrefix(line, "data: ") { + data = strings.TrimPrefix(line, "data: ") + } else if strings.HasPrefix(line, "data:") { + data = strings.TrimPrefix(line, "data:") + } else { + continue + } + + if data == "[DONE]" { + break + } + + if pt := gjson.Get(data, "usage.prompt_tokens").Int(); pt > 0 { + totalPromptTokens = pt + } + if ct := gjson.Get(data, "usage.completion_tokens").Int(); ct > 0 { + totalCompletionTokens = ct + } + + translatedChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, req.Payload, []byte(data), &streamParam) + for _, tc := range translatedChunks { + if len(tc) > 0 { + chunks <- cliproxyexecutor.StreamChunk{Payload: tc} + } + } + } + + if err := scanner.Err(); err != nil { + log.Warnf("joycode: stream scanner error: %v", err) + chunks <- cliproxyexecutor.StreamChunk{Err: err} + } + + reporter.Publish(ctx, usage.Detail{ + InputTokens: totalPromptTokens, + OutputTokens: totalCompletionTokens, + }) + + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: joycodeChatURL, + Method: "POST", + Provider: "joycode", + AuthID: auth.ID, + }) + }() + + return &cliproxyexecutor.StreamResult{ + Headers: httpResp.Header, + Chunks: chunks, + }, nil +} + +func (e *JoyCodeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req 
cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + return cliproxyexecutor.Response{}, fmt.Errorf("joycode: token counting not supported") +} + +func (e *JoyCodeExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + return auth, nil +} + +func buildJoyCodePayload(openaiPayload []byte, modelName string, auth *cliproxyauth.Auth) []byte { + var payload map[string]interface{} + if err := json.Unmarshal(openaiPayload, &payload); err != nil { + log.Warnf("joycode: failed to parse payload, passing through: %v", err) + return openaiPayload + } + + payload["model"] = modelName + payload["stream_options"] = map[string]interface{}{"include_usage": true} + + if _, ok := payload["thinking"]; !ok { + payload["thinking"] = map[string]interface{}{"type": "disabled"} + } + + tenant := "" + userId := "" + if auth != nil && auth.Metadata != nil { + if t, ok := auth.Metadata["tenant"].(string); ok { + tenant = t + } + if u, ok := auth.Metadata["userId"].(string); ok { + userId = u + } + } + payload["tenant"] = tenant + payload["userId"] = userId + payload["client"] = "JoyCode" + payload["clientVersion"] = "2.4.8" + payload["language"] = "text" + payload["scene"] = "chat" + payload["source"] = "joyCoderFe" + + result, err := json.Marshal(payload) + if err != nil { + log.Errorf("joycode: failed to marshal payload: %v", err) + return openaiPayload + } + result = util.CleanupOrphanedRequiredInTools(result) + return result +} + +func generateJoyCodeRequestID() string { + b := make([]byte, 16) + if _, err := rand.Read(b); err != nil { + return fmt.Sprintf("%032d", time.Now().UnixNano()) + } + return hex.EncodeToString(b) +} diff --git a/internal/runtime/executor/joycode_models.go b/internal/runtime/executor/joycode_models.go new file mode 100644 index 0000000000..0fef4b61e1 --- /dev/null +++ b/internal/runtime/executor/joycode_models.go @@ -0,0 +1,81 @@ +package executor + +import ( + "context" + 
"encoding/json" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/joycode" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" +) + +func FetchJoyCodeModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *config.Config) []*registry.ModelInfo { + ptKey, _ := auth.Metadata["ptKey"].(string) + if ptKey == "" { + log.Debug("joycode: no ptKey found, using static models") + return getStaticJoyCodeModels() + } + + jcAuth := joycode.NewJoyCodeAuth(nil) + + modelData, err := jcAuth.FetchModelList(ctx, ptKey) + if err != nil { + log.Warnf("joycode: failed to fetch model list: %v, using static models", err) + return getStaticJoyCodeModels() + } + + now := time.Now().Unix() + var models []*registry.ModelInfo + + raw, _ := json.Marshal(modelData) + result := gjson.ParseBytes(raw) + result.ForEach(func(key, value gjson.Result) bool { + if value.Get("hidden").Bool() { + return true + } + + modelID := value.Get("chatApiModel").String() + if modelID == "" { + modelID = value.Get("label").String() + } + if modelID == "" { + return true + } + + models = append(models, ®istry.ModelInfo{ + ID: modelID, + Object: "model", + Created: now, + OwnedBy: "joycode", + Type: "joycode", + DisplayName: value.Get("label").String(), + }) + return true + }) + + if len(models) == 0 { + log.Warn("joycode: model list returned no visible models, using static models") + return getStaticJoyCodeModels() + } + + log.Infof("joycode: fetched %d models from API", len(models)) + return models +} + +func getStaticJoyCodeModels() []*registry.ModelInfo { + now := time.Now().Unix() + return []*registry.ModelInfo{ + { + ID: "JoyAI-Code", + Object: "model", + Created: now, + OwnedBy: "joycode", + Type: "joycode", + DisplayName: "JoyAI Code", + }, + } +} diff --git 
a/internal/runtime/executor/kilo_executor.go b/internal/runtime/executor/kilo_executor.go new file mode 100644 index 0000000000..ef7dd95d07 --- /dev/null +++ b/internal/runtime/executor/kilo_executor.go @@ -0,0 +1,460 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "errors" + "fmt" + "io" + "net/http" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" +) + +// KiloExecutor handles requests to Kilo API. +type KiloExecutor struct { + cfg *config.Config +} + +// NewKiloExecutor creates a new Kilo executor instance. +func NewKiloExecutor(cfg *config.Config) *KiloExecutor { + return &KiloExecutor{cfg: cfg} +} + +// Identifier returns the unique identifier for this executor. +func (e *KiloExecutor) Identifier() string { return "kilo" } + +// PrepareRequest prepares the HTTP request before execution. +func (e *KiloExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + accessToken, _ := kiloCredentials(auth) + if strings.TrimSpace(accessToken) == "" { + return fmt.Errorf("kilo: missing access token") + } + + req.Header.Set("Authorization", "Bearer "+accessToken) + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(req, attrs) + return nil +} + +// HttpRequest executes a raw HTTP request. 
+func (e *KiloExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("kilo executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if err := e.PrepareRequest(httpReq, auth); err != nil { + return nil, err + } + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +// Execute performs a non-streaming request. +func (e *KiloExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + accessToken, orgID := kiloCredentials(auth) + if accessToken == "" { + return resp, fmt.Errorf("kilo: missing access token") + } + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + endpoint := "/api/openrouter/chat/completions" + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalPayload := originalPayloadSource + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, opts.Stream) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, opts.Stream) + requestedModel := payloadRequestedModel(opts, req.Model) + translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return resp, err + } + + url := "https://api.kilo.ai" + endpoint + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated)) + if err != nil { + return 
resp, err + } + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("Authorization", "Bearer "+accessToken) + if orgID != "" { + httpReq.Header.Set("X-Kilocode-OrganizationID", orgID) + } + httpReq.Header.Set("User-Agent", "cli-proxy-kilo") + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + defer httpResp.Body.Close() + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return resp, err + } + + body, err := io.ReadAll(httpResp.Body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + appendAPIResponseChunk(ctx, e.cfg, body) + reporter.publish(ctx, parseOpenAIUsage(body)) + reporter.ensurePublished(ctx) + + var param any + out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, body, &param) + resp = cliproxyexecutor.Response{Payload: []byte(out)} + return resp, nil +} + +// ExecuteStream performs a streaming request. 
+func (e *KiloExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + + reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.trackFailure(ctx, &err) + + accessToken, orgID := kiloCredentials(auth) + if accessToken == "" { + return nil, fmt.Errorf("kilo: missing access token") + } + + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + endpoint := "/api/openrouter/chat/completions" + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalPayload := originalPayloadSource + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true) + translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) + requestedModel := payloadRequestedModel(opts, req.Model) + translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) + + translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) + if err != nil { + return nil, err + } + + url := "https://api.kilo.ai" + endpoint + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated)) + if err != nil { + return nil, err + } + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("Authorization", "Bearer "+accessToken) + if orgID != "" { + httpReq.Header.Set("X-Kilocode-OrganizationID", orgID) + } + httpReq.Header.Set("User-Agent", "cli-proxy-kilo") + httpReq.Header.Set("Accept", "text/event-stream") + httpReq.Header.Set("Cache-Control", "no-cache") + + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + + var authID, 
authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: translated, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return nil, err + } + + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + httpResp.Body.Close() + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return nil, err + } + + out := make(chan cliproxyexecutor.StreamChunk) + go func() { + defer close(out) + defer httpResp.Body.Close() + + scanner := bufio.NewScanner(httpResp.Body) + scanner.Buffer(nil, 52_428_800) + var param any + for scanner.Scan() { + line := scanner.Bytes() + appendAPIResponseChunk(ctx, e.cfg, line) + if detail, ok := parseOpenAIStreamUsage(line); ok { + reporter.publish(ctx, detail) + } + if len(line) == 0 { + continue + } + if !bytes.HasPrefix(line, []byte("data:")) { + continue + } + chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(line), &param) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])} + } + } + if errScan := scanner.Err(); errScan != nil { + recordAPIResponseError(ctx, e.cfg, errScan) + reporter.publishFailure(ctx) + out <- cliproxyexecutor.StreamChunk{Err: errScan} + } + reporter.ensurePublished(ctx) + }() + + return &cliproxyexecutor.StreamResult{ + Headers: httpResp.Header.Clone(), + Chunks: out, + }, nil +} + +// Refresh 
validates the Kilo token. +func (e *KiloExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + if auth == nil { + return nil, fmt.Errorf("missing auth") + } + return auth, nil +} + +// CountTokens returns the token count for the given request. +func (e *KiloExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + return cliproxyexecutor.Response{}, fmt.Errorf("kilo: count tokens not supported") +} + +// kiloCredentials extracts access token and other info from auth. +func kiloCredentials(auth *cliproxyauth.Auth) (accessToken, orgID string) { + if auth == nil { + return "", "" + } + + // Prefer kilocode specific keys, then fall back to generic keys. + // Check metadata first, then attributes. + if auth.Metadata != nil { + if token, ok := auth.Metadata["kilocodeToken"].(string); ok && token != "" { + accessToken = token + } else if token, ok := auth.Metadata["access_token"].(string); ok && token != "" { + accessToken = token + } + + if org, ok := auth.Metadata["kilocodeOrganizationId"].(string); ok && org != "" { + orgID = org + } else if org, ok := auth.Metadata["organization_id"].(string); ok && org != "" { + orgID = org + } + } + + if accessToken == "" && auth.Attributes != nil { + if token := auth.Attributes["kilocodeToken"]; token != "" { + accessToken = token + } else if token := auth.Attributes["access_token"]; token != "" { + accessToken = token + } + } + + if orgID == "" && auth.Attributes != nil { + if org := auth.Attributes["kilocodeOrganizationId"]; org != "" { + orgID = org + } else if org := auth.Attributes["organization_id"]; org != "" { + orgID = org + } + } + + return accessToken, orgID +} + +// FetchKiloModels fetches models from Kilo API. 
+func FetchKiloModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *config.Config) []*registry.ModelInfo { + accessToken, orgID := kiloCredentials(auth) + if accessToken == "" { + log.Infof("kilo: no access token found, skipping dynamic model fetch (using static kilo/auto)") + return registry.GetKiloModels() + } + + log.Debugf("kilo: fetching dynamic models (orgID: %s)", orgID) + + httpClient := newProxyAwareHTTPClient(ctx, cfg, auth, 0) + req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://api.kilo.ai/api/openrouter/models", nil) + if err != nil { + log.Warnf("kilo: failed to create model fetch request: %v", err) + return registry.GetKiloModels() + } + + req.Header.Set("Authorization", "Bearer "+accessToken) + if orgID != "" { + req.Header.Set("X-Kilocode-OrganizationID", orgID) + } + req.Header.Set("User-Agent", "cli-proxy-kilo") + + resp, err := httpClient.Do(req) + if err != nil { + if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) { + log.Warnf("kilo: fetch models canceled: %v", err) + } else { + log.Warnf("kilo: using static models (API fetch failed: %v)", err) + } + return registry.GetKiloModels() + } + defer resp.Body.Close() + + body, err := io.ReadAll(resp.Body) + if err != nil { + log.Warnf("kilo: failed to read models response: %v", err) + return registry.GetKiloModels() + } + + if resp.StatusCode != http.StatusOK { + log.Warnf("kilo: fetch models failed: status %d, body: %s", resp.StatusCode, string(body)) + return registry.GetKiloModels() + } + + result := gjson.GetBytes(body, "data") + if !result.Exists() { + // Try root if data field is missing + result = gjson.ParseBytes(body) + if !result.IsArray() { + log.Debugf("kilo: response body: %s", string(body)) + log.Warn("kilo: invalid API response format (expected array or data field with array)") + return registry.GetKiloModels() + } + } + + var dynamicModels []*registry.ModelInfo + now := time.Now().Unix() + count := 0 + totalCount := 0 + + 
result.ForEach(func(key, value gjson.Result) bool { + totalCount++ + id := value.Get("id").String() + pIdxResult := value.Get("preferredIndex") + preferredIndex := pIdxResult.Int() + + // Filter models where preferredIndex > 0 (Kilo-curated models) + if preferredIndex <= 0 { + return true + } + + // Check if it's free. We look for :free suffix, is_free flag, or zero pricing. + isFree := strings.HasSuffix(id, ":free") || id == "giga-potato" || value.Get("is_free").Bool() + if !isFree { + // Check pricing as fallback + promptPricing := value.Get("pricing.prompt").String() + if promptPricing == "0" || promptPricing == "0.0" { + isFree = true + } + } + + if !isFree { + log.Debugf("kilo: skipping curated paid model: %s", id) + return true + } + + log.Debugf("kilo: found curated model: %s (preferredIndex: %d)", id, preferredIndex) + + dynamicModels = append(dynamicModels, &registry.ModelInfo{ + ID: id, + DisplayName: value.Get("name").String(), + ContextLength: int(value.Get("context_length").Int()), + OwnedBy: "kilo", + Type: "kilo", + Object: "model", + Created: now, + }) + count++ + return true + }) + + log.Infof("kilo: fetched %d models from API, %d curated free (preferredIndex > 0)", totalCount, count) + if count == 0 && totalCount > 0 { + log.Warn("kilo: no curated free models found (check API response fields)") + } + + staticModels := registry.GetKiloModels() + // Always include kilo/auto (first static model); the full slice expression caps + // capacity at 1 so append allocates a new array instead of clobbering staticModels[1:]. + allModels := append(staticModels[:1:1], dynamicModels...) 
+ + return allModels +} diff --git a/internal/runtime/executor/kimi_executor.go b/internal/runtime/executor/kimi_executor.go index 931e3a569f..6cfaec2052 100644 --- a/internal/runtime/executor/kimi_executor.go +++ b/internal/runtime/executor/kimi_executor.go @@ -13,14 +13,14 @@ import ( "strings" "time" - kimiauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/kimi" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + kimiauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kimi" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -108,7 +108,8 @@ func (e *KimiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, err = 
normalizeKimiToolMessageLinks(body) if err != nil { return resp, err @@ -217,7 +218,8 @@ func (e *KimiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut return nil, fmt.Errorf("kimi executor: failed to set stream_options in payload: %w", err) } requestedModel := helps.PayloadRequestedModel(opts, req.Model) - body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, requestPath) body, err = normalizeKimiToolMessageLinks(body) if err != nil { return nil, err @@ -288,17 +290,28 @@ func (e *KimiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut } chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param) for i := range chunks { - out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]}: + case <-ctx.Done(): + return + } } } doneChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param) for i := range doneChunks { - out <- cliproxyexecutor.StreamChunk{Payload: doneChunks[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: doneChunks[i]}: + case <-ctx.Done(): + return + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } }() return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil @@ -320,7 +333,17 @@ func normalizeKimiToolMessageLinks(body []byte) ([]byte, error) { return body, nil } - out := body + msgs := 
messages.Array() + out, dropped, err := filterKimiEmptyAssistantMessages(body, msgs) + if err != nil { + return body, err + } + if dropped > 0 { + log.WithField("dropped_assistant_messages", dropped).Debug("kimi executor: dropped empty assistant messages") + } + + messages = gjson.GetBytes(out, "messages") + msgs = messages.Array() pending := make([]string, 0) patched := 0 patchedReasoning := 0 @@ -338,7 +361,6 @@ func normalizeKimiToolMessageLinks(body []byte) ([]byte, error) { } } - msgs := messages.Array() for msgIdx := range msgs { msg := msgs[msgIdx] role := strings.TrimSpace(msg.Get("role").String()) @@ -426,6 +448,96 @@ func normalizeKimiToolMessageLinks(body []byte) ([]byte, error) { return out, nil } +func filterKimiEmptyAssistantMessages(body []byte, msgs []gjson.Result) ([]byte, int, error) { + kept := make([]string, 0, len(msgs)) + dropped := 0 + for _, msg := range msgs { + if shouldDropKimiAssistantMessage(msg) { + dropped++ + continue + } + kept = append(kept, msg.Raw) + } + if dropped == 0 { + return body, 0, nil + } + + rawMessages := []byte("[" + strings.Join(kept, ",") + "]") + out, err := sjson.SetRawBytes(body, "messages", rawMessages) + if err != nil { + return body, 0, fmt.Errorf("kimi executor: failed to drop empty assistant messages: %w", err) + } + return out, dropped, nil +} + +func shouldDropKimiAssistantMessage(msg gjson.Result) bool { + if strings.TrimSpace(msg.Get("role").String()) != "assistant" { + return false + } + if hasKimiToolCalls(msg) || hasKimiLegacyFunctionCall(msg) || hasKimiAssistantReasoning(msg) { + return false + } + return isKimiAssistantContentEmpty(msg.Get("content")) +} + +func hasKimiToolCalls(msg gjson.Result) bool { + toolCalls := msg.Get("tool_calls") + return toolCalls.Exists() && toolCalls.IsArray() && len(toolCalls.Array()) > 0 +} + +func hasKimiLegacyFunctionCall(msg gjson.Result) bool { + functionCall := msg.Get("function_call") + if !functionCall.Exists() || functionCall.Type == gjson.Null { + return 
false + } + if functionCall.IsObject() && strings.TrimSpace(functionCall.Raw) == "{}" { + return false + } + return strings.TrimSpace(functionCall.Raw) != "" +} + +func hasKimiAssistantReasoning(msg gjson.Result) bool { + reasoning := msg.Get("reasoning_content") + return reasoning.Exists() && strings.TrimSpace(reasoning.String()) != "" +} + +func isKimiAssistantContentEmpty(content gjson.Result) bool { + if !content.Exists() || content.Type == gjson.Null { + return true + } + if content.Type == gjson.String { + return strings.TrimSpace(content.String()) == "" + } + if !content.IsArray() { + return false + } + for _, part := range content.Array() { + if !isKimiAssistantContentPartEmpty(part) { + return false + } + } + return true +} + +func isKimiAssistantContentPartEmpty(part gjson.Result) bool { + if !part.Exists() || part.Type == gjson.Null { + return true + } + if part.Type == gjson.String { + return strings.TrimSpace(part.String()) == "" + } + if !part.IsObject() { + return false + } + if text := part.Get("text"); text.Exists() { + return strings.TrimSpace(text.String()) == "" + } + if strings.TrimSpace(part.Get("type").String()) == "text" { + return true + } + return strings.TrimSpace(part.Raw) == "{}" +} + func fallbackAssistantReasoning(msg gjson.Result, hasLatest bool, latest string) string { if hasLatest && strings.TrimSpace(latest) != "" { return latest @@ -457,6 +569,9 @@ func fallbackAssistantReasoning(msg gjson.Result, hasLatest bool, latest string) // Refresh refreshes the Kimi token using the refresh token. 
func (e *KimiExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { log.Debugf("kimi executor: refresh called") + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } if auth == nil { return nil, fmt.Errorf("kimi executor: auth is nil") } diff --git a/internal/runtime/executor/kimi_executor_test.go b/internal/runtime/executor/kimi_executor_test.go index 210ddb0ef9..f3de70f1bd 100644 --- a/internal/runtime/executor/kimi_executor_test.go +++ b/internal/runtime/executor/kimi_executor_test.go @@ -203,3 +203,70 @@ func TestNormalizeKimiToolMessageLinks_RepairsIDsAndReasoningTogether(t *testing t.Fatalf("messages.2.reasoning_content = %q, want %q", got, "r1") } } + +func TestNormalizeKimiToolMessageLinks_DropsEmptyAssistantWithoutToolLink(t *testing.T) { + body := []byte(`{ + "messages":[ + {"role":"user","content":"start"}, + {"role":"assistant","content":""}, + {"role":"assistant","content":" "}, + {"role":"assistant","content":"","tool_calls":null}, + {"role":"assistant","content":[{"type":"text","text":" "}]}, + {"role":"assistant"}, + {"role":"assistant","content":"keep"}, + {"role":"user","content":"next"} + ] + }`) + + out, err := normalizeKimiToolMessageLinks(body) + if err != nil { + t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err) + } + + messages := gjson.GetBytes(out, "messages").Array() + if len(messages) != 3 { + t.Fatalf("messages length = %d, want 3, raw = %s", len(messages), gjson.GetBytes(out, "messages").Raw) + } + if got := messages[0].Get("content").String(); got != "start" { + t.Fatalf("messages.0.content = %q, want %q", got, "start") + } + if got := messages[1].Get("content").String(); got != "keep" { + t.Fatalf("messages.1.content = %q, want %q", got, "keep") + } + if got := messages[2].Get("content").String(); got != "next" { + t.Fatalf("messages.2.content = %q, want %q", got, "next") + } +} + +func 
TestNormalizeKimiToolMessageLinks_PreservesAssistantWithToolLinkOrReasoning(t *testing.T) { + body := []byte(`{ + "messages":[ + {"role":"assistant","content":"","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]}, + {"role":"assistant","content":"","function_call":{"name":"legacy_call","arguments":"{}"}}, + {"role":"assistant","content":"","reasoning_content":"thought"}, + {"role":"assistant","content":[{"type":"text","text":" visible "}]} + ] + }`) + + out, err := normalizeKimiToolMessageLinks(body) + if err != nil { + t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err) + } + + messages := gjson.GetBytes(out, "messages").Array() + if len(messages) != 4 { + t.Fatalf("messages length = %d, want 4, raw = %s", len(messages), gjson.GetBytes(out, "messages").Raw) + } + if !messages[0].Get("tool_calls").Exists() { + t.Fatalf("messages.0.tool_calls should exist") + } + if !messages[1].Get("function_call").Exists() { + t.Fatalf("messages.1.function_call should exist") + } + if got := messages[2].Get("reasoning_content").String(); got != "thought" { + t.Fatalf("messages.2.reasoning_content = %q, want %q", got, "thought") + } + if got := messages[3].Get("content.0.text").String(); got != " visible " { + t.Fatalf("messages.3.content.0.text = %q, want %q", got, " visible ") + } +} diff --git a/internal/runtime/executor/kiro_executor.go b/internal/runtime/executor/kiro_executor.go new file mode 100644 index 0000000000..89f8ae7f84 --- /dev/null +++ b/internal/runtime/executor/kiro_executor.go @@ -0,0 +1,4706 @@ +package executor + +import ( + "bufio" + "bytes" + "context" + "encoding/base64" + "encoding/binary" + "encoding/json" + "errors" + "fmt" + "io" + "net" + "net/http" + "os" + "path/filepath" + "strings" + "sync" + "sync/atomic" + "syscall" + "time" + + "github.com/google/uuid" + kiroauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kiro" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + 
kiroclaude "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/claude" + kirocommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/common" + kiroopenai "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/openai" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" +) + +const ( + // Kiro API common constants + kiroContentType = "application/json" + kiroAcceptStream = "*/*" + + // Event Stream frame size constants for boundary protection + // AWS Event Stream binary format: prelude (12 bytes) + headers + payload + message_crc (4 bytes) + // Prelude consists of: total_length (4) + headers_length (4) + prelude_crc (4) + minEventStreamFrameSize = 16 // Minimum: 4(total_len) + 4(headers_len) + 4(prelude_crc) + 4(message_crc) + maxEventStreamMsgSize = 10 << 20 // Maximum message length: 10MB + + // Event Stream error type constants + ErrStreamFatal = "fatal" // Connection/authentication errors, not recoverable + ErrStreamMalformed = "malformed" // Format errors, data cannot be parsed + + // kiroIDEAgentMode is the agent mode header value for Kiro IDE requests + kiroIDEAgentMode = "vibe" + + // Socket retry configuration constants + // Maximum number of retry attempts for socket/network errors + kiroSocketMaxRetries = 3 + // Base delay between retry attempts (uses exponential backoff: delay * 2^attempt) + kiroSocketBaseRetryDelay = 1 * time.Second + // Maximum delay between retry attempts (cap for exponential backoff) + kiroSocketMaxRetryDelay = 30 * time.Second + // First token timeout for streaming responses (how long to wait for first response) + kiroFirstTokenTimeout = 15 * time.Second + // 
Streaming read timeout (how long to wait between chunks) + kiroStreamingReadTimeout = 300 * time.Second +) + +// retryableHTTPStatusCodes defines HTTP status codes that are considered retryable. +// Based on kiro2Api reference: 502 (Bad Gateway), 503 (Service Unavailable), 504 (Gateway Timeout) +var retryableHTTPStatusCodes = map[int]bool{ + 502: true, // Bad Gateway - upstream server error + 503: true, // Service Unavailable - server temporarily overloaded + 504: true, // Gateway Timeout - upstream server timeout +} + +// Real-time usage estimation configuration +// These control how often usage updates are sent during streaming +var ( + usageUpdateCharThreshold = 5000 // Send usage update every 5000 characters + usageUpdateTimeInterval = 15 * time.Second // Or every 15 seconds, whichever comes first +) + +// endpointAliases maps user preference values to canonical endpoint names. +var endpointAliases = map[string]string{ + "codewhisperer": "codewhisperer", + "ide": "codewhisperer", + "amazonq": "amazonq", + "q": "amazonq", + "cli": "amazonq", +} + +func enqueueTranslatedSSE(out chan<- cliproxyexecutor.StreamChunk, chunk []byte) { + if len(chunk) == 0 { + return + } + out <- cliproxyexecutor.StreamChunk{Payload: append(bytes.Clone(chunk), '\n', '\n')} +} + +// retryConfig holds configuration for socket retry logic. +// Based on kiro2Api Python implementation patterns. +type retryConfig struct { + MaxRetries int // Maximum number of retry attempts + BaseDelay time.Duration // Base delay between retries (exponential backoff) + MaxDelay time.Duration // Maximum delay cap + RetryableErrors []string // List of retryable error patterns + RetryableStatus map[int]bool // HTTP status codes to retry + FirstTokenTmout time.Duration // Timeout for first token in streaming + StreamReadTmout time.Duration // Timeout between stream chunks +} + +// defaultRetryConfig returns the default retry configuration for Kiro socket operations. 
+func defaultRetryConfig() retryConfig { + return retryConfig{ + MaxRetries: kiroSocketMaxRetries, + BaseDelay: kiroSocketBaseRetryDelay, + MaxDelay: kiroSocketMaxRetryDelay, + RetryableStatus: retryableHTTPStatusCodes, + RetryableErrors: []string{ + "connection reset", + "connection refused", + "broken pipe", + "EOF", + "timeout", + "temporary failure", + "no such host", + "network is unreachable", + "i/o timeout", + }, + FirstTokenTmout: kiroFirstTokenTimeout, + StreamReadTmout: kiroStreamingReadTimeout, + } +} + +// isRetryableError checks if an error is retryable based on error type and message. +// Returns true for network timeouts, connection resets, and temporary failures. +// Based on kiro2Api's retry logic patterns. +func isRetryableError(err error) bool { + if err == nil { + return false + } + + // Check for context cancellation - not retryable + if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) { + return false + } + + // Check for net.Error (timeout, temporary) + var netErr net.Error + if errors.As(err, &netErr) { + if netErr.Timeout() { + log.Debugf("kiro: isRetryableError: network timeout detected") + return true + } + // Note: Temporary() is deprecated but still useful for some error types + } + + // Check for specific syscall errors (connection reset, broken pipe, etc.) 
+ var syscallErr syscall.Errno + if errors.As(err, &syscallErr) { + switch syscallErr { + case syscall.ECONNRESET: // Connection reset by peer + log.Debugf("kiro: isRetryableError: ECONNRESET detected") + return true + case syscall.ECONNREFUSED: // Connection refused + log.Debugf("kiro: isRetryableError: ECONNREFUSED detected") + return true + case syscall.EPIPE: // Broken pipe + log.Debugf("kiro: isRetryableError: EPIPE (broken pipe) detected") + return true + case syscall.ETIMEDOUT: // Connection timed out + log.Debugf("kiro: isRetryableError: ETIMEDOUT detected") + return true + case syscall.ENETUNREACH: // Network is unreachable + log.Debugf("kiro: isRetryableError: ENETUNREACH detected") + return true + case syscall.EHOSTUNREACH: // No route to host + log.Debugf("kiro: isRetryableError: EHOSTUNREACH detected") + return true + } + } + + // Check for net.OpError wrapping other errors + var opErr *net.OpError + if errors.As(err, &opErr) { + log.Debugf("kiro: isRetryableError: net.OpError detected, op=%s", opErr.Op) + // Recursively check the wrapped error + if opErr.Err != nil { + return isRetryableError(opErr.Err) + } + return true + } + + // Check error message for retryable patterns + errMsg := strings.ToLower(err.Error()) + cfg := defaultRetryConfig() + for _, pattern := range cfg.RetryableErrors { + if strings.Contains(errMsg, pattern) { + log.Debugf("kiro: isRetryableError: pattern '%s' matched in error: %s", pattern, errMsg) + return true + } + } + + // Check for EOF which may indicate connection was closed + if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) { + log.Debugf("kiro: isRetryableError: EOF/UnexpectedEOF detected") + return true + } + + return false +} + +// isRetryableHTTPStatus checks if an HTTP status code is retryable. +// Based on kiro2Api: 502, 503, 504 are retryable server errors. 
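Returning to the Event Stream framing constants defined near the top of this file: an AWS Event Stream frame opens with a 12-byte big-endian prelude (total_length, headers_length, prelude CRC) and closes with a 4-byte message CRC. A minimal sketch of decoding that prelude with the same boundary checks (16-byte minimum frame, 10MB maximum) might look like this; it is illustrative only, not the executor's actual parser, and it does not validate the CRCs.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const (
	minFrameSize = 16       // total_len(4) + headers_len(4) + prelude_crc(4) + message_crc(4)
	maxMsgSize   = 10 << 20 // 10MB ceiling on a single message
)

// prelude holds the fixed 12-byte header of an AWS Event Stream frame.
type prelude struct {
	TotalLen   uint32 // length of the whole frame, including CRCs
	HeadersLen uint32 // length of the headers block
	PreludeCRC uint32 // CRC32 over the first 8 bytes (not checked here)
}

// parsePrelude decodes the big-endian prelude and enforces the frame
// boundary limits before any payload would be read.
func parsePrelude(b []byte) (prelude, error) {
	if len(b) < 12 {
		return prelude{}, fmt.Errorf("short prelude: %d bytes", len(b))
	}
	p := prelude{
		TotalLen:   binary.BigEndian.Uint32(b[0:4]),
		HeadersLen: binary.BigEndian.Uint32(b[4:8]),
		PreludeCRC: binary.BigEndian.Uint32(b[8:12]),
	}
	if p.TotalLen < minFrameSize || p.TotalLen > maxMsgSize {
		return prelude{}, fmt.Errorf("frame length %d out of bounds", p.TotalLen)
	}
	return p, nil
}

func main() {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint32(buf[0:4], 64) // total_length
	binary.BigEndian.PutUint32(buf[4:8], 16) // headers_length
	p, err := parsePrelude(buf)
	fmt.Println(p.TotalLen, p.HeadersLen, err) // 64 16 <nil>
}
```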
+func isRetryableHTTPStatus(statusCode int) bool { + return retryableHTTPStatusCodes[statusCode] +} + +// calculateRetryDelay calculates the delay for the next retry attempt using exponential backoff. +// delay = min(baseDelay * 2^attempt, maxDelay) +// Adds ±30% jitter to prevent thundering herd. +func calculateRetryDelay(attempt int, cfg retryConfig) time.Duration { + return kiroauth.ExponentialBackoffWithJitter(attempt, cfg.BaseDelay, cfg.MaxDelay) +} + +// logRetryAttempt logs a retry attempt with relevant context. +func logRetryAttempt(attempt, maxRetries int, reason string, delay time.Duration, endpoint string) { + log.Warnf("kiro: retry attempt %d/%d for %s, waiting %v before next attempt (endpoint: %s)", + attempt+1, maxRetries, reason, delay, endpoint) +} + +// kiroHTTPClientPool provides a shared HTTP client with connection pooling for Kiro API. +// This reduces connection overhead and improves performance for concurrent requests. +// Based on kiro2Api's connection pooling pattern. +var ( + kiroHTTPClientPool *http.Client + kiroHTTPClientPoolOnce sync.Once +) + +// getKiroPooledHTTPClient returns a shared HTTP client with optimized connection pooling. +// The client is lazily initialized on first use and reused across requests. 
+// This is especially beneficial for: +// - Reducing TCP handshake overhead +// - Enabling HTTP/2 multiplexing +// - Better handling of keep-alive connections +func getKiroPooledHTTPClient() *http.Client { + kiroHTTPClientPoolOnce.Do(func() { + transport := &http.Transport{ + // Connection pool settings + MaxIdleConns: 100, // Max idle connections across all hosts + MaxIdleConnsPerHost: 20, // Max idle connections per host + MaxConnsPerHost: 50, // Max total connections per host + IdleConnTimeout: 90 * time.Second, // How long idle connections stay in pool + + // Timeouts for connection establishment + DialContext: (&net.Dialer{ + Timeout: 30 * time.Second, // TCP connection timeout + KeepAlive: 30 * time.Second, // TCP keep-alive interval + }).DialContext, + + // TLS handshake timeout + TLSHandshakeTimeout: 10 * time.Second, + + // Response header timeout + ResponseHeaderTimeout: 30 * time.Second, + + // Expect 100-continue timeout + ExpectContinueTimeout: 1 * time.Second, + + // Enable HTTP/2 when available + ForceAttemptHTTP2: true, + } + + kiroHTTPClientPool = &http.Client{ + Transport: transport, + // No global timeout - let individual requests set their own timeouts via context + } + + log.Debugf("kiro: initialized pooled HTTP client (MaxIdleConns=%d, MaxIdleConnsPerHost=%d, MaxConnsPerHost=%d)", + transport.MaxIdleConns, transport.MaxIdleConnsPerHost, transport.MaxConnsPerHost) + }) + + return kiroHTTPClientPool +} + +// newKiroHTTPClientWithPooling creates an HTTP client that uses connection pooling when appropriate. +// It respects proxy configuration from auth or config, falling back to the pooled client. +// This provides the best of both worlds: custom proxy support + connection reuse. 
+func newKiroHTTPClientWithPooling(ctx context.Context, cfg *config.Config, auth *cliproxyauth.Auth, timeout time.Duration) *http.Client { + // Check if a proxy is configured - if so, we need a custom client + var proxyURL string + if auth != nil { + proxyURL = strings.TrimSpace(auth.ProxyURL) + } + if proxyURL == "" && cfg != nil { + proxyURL = strings.TrimSpace(cfg.ProxyURL) + } + + // If proxy is configured, use the existing proxy-aware client (doesn't pool) + if proxyURL != "" { + log.Debugf("kiro: using proxy-aware HTTP client (proxy=%s)", proxyURL) + return newProxyAwareHTTPClient(ctx, cfg, auth, timeout) + } + + // No proxy - use pooled client for better performance + pooledClient := getKiroPooledHTTPClient() + + // If timeout is specified, we need to wrap the pooled transport with timeout + if timeout > 0 { + return &http.Client{ + Transport: pooledClient.Transport, + Timeout: timeout, + } + } + + return pooledClient +} + +// kiroEndpointConfig bundles endpoint URL with its compatible Origin and AmzTarget values. +// This solves the "triple mismatch" problem where different endpoints require matching +// Origin and X-Amz-Target header values. +// +// Based on reference implementations: +// - amq2api-main: Uses Amazon Q endpoint with CLI origin and AmazonQDeveloperStreamingService target +// - AIClient-2-API: Uses CodeWhisperer endpoint with AI_EDITOR origin and AmazonCodeWhispererStreamingService target +type kiroEndpointConfig struct { + URL string // Endpoint URL + Origin string // Request Origin: "CLI" for Amazon Q quota, "AI_EDITOR" for Kiro IDE quota + AmzTarget string // X-Amz-Target header value + Name string // Endpoint name for logging +} + +// kiroDefaultRegion is the default AWS region for Kiro API endpoints. +// Used when no region is specified in auth metadata. +const kiroDefaultRegion = "us-east-1" + +// extractRegionFromProfileARN extracts the AWS region from a ProfileARN. 
+// ARN format: arn:aws:codewhisperer:REGION:ACCOUNT:profile/PROFILE_ID +// Returns empty string if region cannot be extracted. +func extractRegionFromProfileARN(profileArn string) string { + if profileArn == "" { + return "" + } + parts := strings.Split(profileArn, ":") + if len(parts) >= 4 && parts[3] != "" { + return parts[3] + } + return "" +} + +// buildKiroEndpointConfigs creates endpoint configurations for the specified region. +// This enables dynamic region support for Enterprise/IdC users in non-us-east-1 regions. +// +// Uses Q endpoint (q.{region}.amazonaws.com) as primary for ALL auth types: +// - Works universally across all AWS regions (CodeWhisperer endpoint only exists in us-east-1) +// - Uses /generateAssistantResponse path with AI_EDITOR origin +// - Does NOT require X-Amz-Target header +// +// The AmzTarget field is kept for backward compatibility but should be empty +// to indicate that the header should NOT be set. +func buildKiroEndpointConfigs(region string) []kiroEndpointConfig { + if region == "" { + region = kiroDefaultRegion + } + return []kiroEndpointConfig{ + { + // Primary: Q endpoint - works for all regions and auth types + URL: fmt.Sprintf("https://q.%s.amazonaws.com/generateAssistantResponse", region), + Origin: "AI_EDITOR", + AmzTarget: "", // Empty = don't set X-Amz-Target header + Name: "AmazonQ", + }, + { + // Fallback: CodeWhisperer endpoint (legacy, only works in us-east-1) + URL: fmt.Sprintf("https://codewhisperer.%s.amazonaws.com/generateAssistantResponse", region), + Origin: "AI_EDITOR", + AmzTarget: "AmazonCodeWhispererStreamingService.GenerateAssistantResponse", + Name: "CodeWhisperer", + }, + } +} + +// resolveKiroAPIRegion determines the AWS region for Kiro API calls. +// Region priority: +// 1. auth.Metadata["api_region"] - explicit API region override +// 2. ProfileARN region - extracted from arn:aws:service:REGION:account:resource +// 3. 
kiroDefaultRegion (us-east-1) - fallback +// Note: OIDC "region" is NOT used - it's for token refresh, not API calls +func resolveKiroAPIRegion(auth *cliproxyauth.Auth) string { + if auth == nil || auth.Metadata == nil { + return kiroDefaultRegion + } + // Priority 1: Explicit api_region override + if r, ok := auth.Metadata["api_region"].(string); ok && r != "" { + log.Debugf("kiro: using region %s (source: api_region)", r) + return r + } + // Priority 2: Extract from ProfileARN + if profileArn, ok := auth.Metadata["profile_arn"].(string); ok && profileArn != "" { + if arnRegion := extractRegionFromProfileARN(profileArn); arnRegion != "" { + log.Debugf("kiro: using region %s (source: profile_arn)", arnRegion) + return arnRegion + } + } + // Note: OIDC "region" field is NOT used for API endpoint + // Kiro API only exists in us-east-1, while OIDC region can vary (e.g., ap-northeast-2) + // Using OIDC region for API calls causes DNS failures + log.Debugf("kiro: using region %s (source: default)", kiroDefaultRegion) + return kiroDefaultRegion +} + +// kiroEndpointConfigs is kept for backward compatibility with default us-east-1 region. +// Prefer using buildKiroEndpointConfigs(region) for dynamic region support. +var kiroEndpointConfigs = buildKiroEndpointConfigs(kiroDefaultRegion) + +// getKiroEndpointConfigs returns the list of Kiro API endpoint configurations to try in order. +// Supports dynamic region based on auth metadata "api_region", "profile_arn", or "region" field. +// Supports reordering based on "preferred_endpoint" in auth metadata/attributes. +// +// Region priority: +// 1. auth.Metadata["api_region"] - explicit API region override +// 2. ProfileARN region - extracted from arn:aws:service:REGION:account:resource +// 3. 
kiroDefaultRegion (us-east-1) - fallback +// Note: OIDC "region" is NOT used - it's for token refresh, not API calls +func getKiroEndpointConfigs(auth *cliproxyauth.Auth) []kiroEndpointConfig { + if auth == nil { + return kiroEndpointConfigs + } + + region := resolveKiroAPIRegion(auth) + log.Debugf("kiro: using region %s", region) + + configs := buildKiroEndpointConfigs(region) + + preference := getAuthValue(auth, "preferred_endpoint") + if preference == "" { + return configs + } + + targetName, ok := endpointAliases[preference] + if !ok { + return configs + } + + var preferred, others []kiroEndpointConfig + for _, cfg := range configs { + if strings.ToLower(cfg.Name) == targetName { + preferred = append(preferred, cfg) + } else { + others = append(others, cfg) + } + } + + if len(preferred) == 0 { + return configs + } + return append(preferred, others...) +} + +// KiroExecutor handles requests to AWS CodeWhisperer (Kiro) API. +type KiroExecutor struct { + cfg *config.Config + refreshMu sync.Mutex // Serializes token refresh operations to prevent race conditions + profileArnMu sync.Mutex // Serializes profileArn fetches to prevent concurrent map writes +} + +// buildKiroPayloadForFormat builds the Kiro API payload based on the source format. +// This is critical because OpenAI and Claude formats have different tool structures: +// - OpenAI: tools[].function.name, tools[].function.description +// - Claude: tools[].name, tools[].description +// headers parameter allows checking Anthropic-Beta header for thinking mode detection. +// Returns the serialized JSON payload and a boolean indicating whether thinking mode was injected. 
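The region resolution described above bottoms out in simple ARN string parsing: the region is the fourth colon-separated field of the profile ARN. A self-contained sketch, using a made-up ARN (the account and profile IDs are placeholders):

```go
package main

import (
	"fmt"
	"strings"
)

// regionFromARN mirrors the parsing idea behind extractRegionFromProfileARN:
// an ARN has the shape arn:aws:SERVICE:REGION:ACCOUNT:RESOURCE, so the
// region is field index 3 after splitting on ":".
func regionFromARN(arn string) string {
	parts := strings.Split(arn, ":")
	if len(parts) >= 4 && parts[3] != "" {
		return parts[3]
	}
	return "" // malformed or region-less ARN
}

func main() {
	fmt.Println(regionFromARN("arn:aws:codewhisperer:eu-west-1:123456789012:profile/EXAMPLE"))
	// eu-west-1
}
```

Returning "" on malformed input is what lets the caller fall through to the next priority (and ultimately to the us-east-1 default).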
+func buildKiroPayloadForFormat(body []byte, modelID, profileArn, origin string, isAgentic, isChatOnly bool, sourceFormat sdktranslator.Format, headers http.Header) ([]byte, bool) { + switch sourceFormat.String() { + case "openai": + log.Debugf("kiro: using OpenAI payload builder for source format: %s", sourceFormat.String()) + return kiroopenai.BuildKiroPayloadFromOpenAI(body, modelID, profileArn, origin, isAgentic, isChatOnly, headers, nil) + case "kiro": + // Body is already in Kiro format — pass through directly + log.Debugf("kiro: body already in Kiro format, passing through directly") + return body, false + default: + // Default to Claude format + log.Debugf("kiro: using Claude payload builder for source format: %s", sourceFormat.String()) + return kiroclaude.BuildKiroPayload(body, modelID, profileArn, origin, isAgentic, isChatOnly, headers, nil) + } +} + +// NewKiroExecutor creates a new Kiro executor instance. +func NewKiroExecutor(cfg *config.Config) *KiroExecutor { + return &KiroExecutor{cfg: cfg} +} + +// Identifier returns the unique identifier for this executor. +func (e *KiroExecutor) Identifier() string { return "kiro" } + +// applyDynamicFingerprint applies account-specific fingerprint headers to the request. +func applyDynamicFingerprint(req *http.Request, auth *cliproxyauth.Auth) { + accountKey := getAccountKey(auth) + fp := kiroauth.GlobalFingerprintManager().GetFingerprint(accountKey) + + req.Header.Set("User-Agent", fp.BuildUserAgent()) + req.Header.Set("X-Amz-User-Agent", fp.BuildAmzUserAgent()) + req.Header.Set("x-amzn-kiro-agent-mode", kiroIDEAgentMode) + req.Header.Set("x-amzn-codewhisperer-optout", "true") + + keyPrefix := accountKey + if len(keyPrefix) > 8 { + keyPrefix = keyPrefix[:8] + } + log.Debugf("kiro: using dynamic fingerprint for account %s (SDK:%s, OS:%s/%s, Kiro:%s)", + keyPrefix+"...", fp.StreamingSDKVersion, fp.OSType, fp.OSVersion, fp.KiroVersion) +} + +// PrepareRequest prepares the HTTP request before execution. 
+func (e *KiroExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + accessToken, _ := kiroCredentials(auth) + if strings.TrimSpace(accessToken) == "" { + return statusErr{code: http.StatusUnauthorized, msg: "missing access token"} + } + + // Apply dynamic fingerprint-based headers + applyDynamicFingerprint(req, auth) + + req.Header.Set("Amz-Sdk-Request", "attempt=1; max=3") + req.Header.Set("Amz-Sdk-Invocation-Id", uuid.New().String()) + req.Header.Set("Authorization", "Bearer "+accessToken) + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(req, attrs) + return nil +} + +// HttpRequest injects Kiro credentials into the request and executes it. +func (e *KiroExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("kiro executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if errPrepare := e.PrepareRequest(httpReq, auth); errPrepare != nil { + return nil, errPrepare + } + httpClient := newKiroHTTPClientWithPooling(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +// getAccountKey returns a stable account key for fingerprint lookup and rate limiting. 
+// Fallback order: +// 1) client_id / refresh_token (best account identity) +// 2) auth.ID (stable local auth record) +// 3) profile_arn (stable AWS profile identity) +// 4) access_token (least preferred but deterministic) +// 5) fixed anonymous seed +func getAccountKey(auth *cliproxyauth.Auth) string { + var clientID, refreshToken, profileArn string + if auth != nil && auth.Metadata != nil { + clientID, _ = auth.Metadata["client_id"].(string) + refreshToken, _ = auth.Metadata["refresh_token"].(string) + profileArn, _ = auth.Metadata["profile_arn"].(string) + } + if clientID != "" || refreshToken != "" { + return kiroauth.GetAccountKey(clientID, refreshToken) + } + if auth != nil && auth.ID != "" { + return kiroauth.GenerateAccountKey(auth.ID) + } + if profileArn != "" { + return kiroauth.GenerateAccountKey(profileArn) + } + if accessToken, _ := kiroCredentials(auth); accessToken != "" { + return kiroauth.GenerateAccountKey(accessToken) + } + return kiroauth.GenerateAccountKey("kiro-anonymous") +} + +// getAuthValue looks up a value by key in auth Metadata, then Attributes. +func getAuthValue(auth *cliproxyauth.Auth, key string) string { + if auth == nil { + return "" + } + if auth.Metadata != nil { + if v, ok := auth.Metadata[key].(string); ok && v != "" { + return strings.ToLower(strings.TrimSpace(v)) + } + } + if auth.Attributes != nil { + if v := auth.Attributes[key]; v != "" { + return strings.ToLower(strings.TrimSpace(v)) + } + } + return "" +} + +// Execute sends the request to Kiro API and returns the response. +// Supports automatic token refresh on 401/403 errors. 
+func (e *KiroExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + accessToken, profileArn := kiroCredentials(auth) + if accessToken == "" { + return resp, fmt.Errorf("kiro: access token not found in auth") + } + + // Rate limiting: get token key for tracking + tokenKey := getAccountKey(auth) + rateLimiter := kiroauth.GetGlobalRateLimiter() + cooldownMgr := kiroauth.GetGlobalCooldownManager() + + // Check if token is in cooldown period + if cooldownMgr.IsInCooldown(tokenKey) { + remaining := cooldownMgr.GetRemainingCooldown(tokenKey) + reason := cooldownMgr.GetCooldownReason(tokenKey) + log.Warnf("kiro: token %s is in cooldown (reason: %s), remaining: %v", tokenKey, reason, remaining) + return resp, fmt.Errorf("kiro: token is in cooldown for %v (reason: %s)", remaining, reason) + } + + // Wait for rate limiter before proceeding + log.Debugf("kiro: waiting for rate limiter for token %s", tokenKey) + rateLimiter.WaitForToken(tokenKey) + log.Debugf("kiro: rate limiter cleared for token %s", tokenKey) + + // Check if token is expired before making request (covers both normal and web_search paths) + if e.isTokenExpired(accessToken) { + log.Infof("kiro: access token expired, attempting recovery") + + // Plan B: first try reloading the token from file (the background refresher may have already updated it) + reloadedAuth, reloadErr := e.reloadAuthFromFile(auth) + if reloadErr == nil && reloadedAuth != nil { + // The file holds a newer token; use it + auth = reloadedAuth + accessToken, profileArn = kiroCredentials(auth) + log.Infof("kiro: recovered token from file (background refresh), expires_at: %v", auth.Metadata["expires_at"]) + } else { + // The token in the file is expired too; perform an active refresh + log.Debugf("kiro: file reload failed (%v), attempting active refresh", reloadErr) + refreshedAuth, refreshErr := e.Refresh(ctx, auth) + if refreshErr != nil { + log.Warnf("kiro: pre-request token refresh failed: %v", refreshErr) + } else if refreshedAuth != nil { + auth = 
refreshedAuth + // Persist the refreshed auth to file so subsequent requests use it + if persistErr := e.persistRefreshedAuth(auth); persistErr != nil { + log.Warnf("kiro: failed to persist refreshed auth: %v", persistErr) + } + accessToken, profileArn = kiroCredentials(auth) + log.Infof("kiro: token refreshed successfully before request") + } + } + } + + // Check for pure web_search request + // Route to MCP endpoint instead of normal Kiro API + if kiroclaude.HasWebSearchTool(req.Payload) { + log.Infof("kiro: detected pure web_search request (non-stream), routing to MCP endpoint") + return e.handleWebSearch(ctx, auth, req, opts, accessToken, profileArn) + } + + reporter := newUsageReporter(ctx, e.Identifier(), req.Model, auth) + defer reporter.trackFailure(ctx, &err) + + from := opts.SourceFormat + to := sdktranslator.FromString("kiro") + body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true) + + kiroModelID := e.mapModelToKiro(req.Model) + + // Fetch profileArn if missing (for imported accounts from Kiro IDE) + if profileArn == "" { + if fetched := e.fetchAndSaveProfileArn(ctx, auth, accessToken); fetched != "" { + profileArn = fetched + } + } + + // Determine agentic mode and effective profile ARN using helper functions + isAgentic, isChatOnly := determineAgenticMode(req.Model) + effectiveProfileArn := getEffectiveProfileArnWithWarning(auth, profileArn) + + // Execute with retry on 401/403 and 429 (quota exhausted) + // Note: currentOrigin and kiroPayload are built inside executeWithRetry for each endpoint + resp, err = e.executeWithRetry(ctx, auth, req, opts, accessToken, effectiveProfileArn, nil, body, from, to, reporter, "", kiroModelID, isAgentic, isChatOnly, tokenKey) + return resp, err +} + +// executeWithRetry performs the actual HTTP request with automatic retry on auth errors. 
+// Supports automatic fallback between endpoints with different quotas: +// - Amazon Q endpoint (CLI origin) uses Amazon Q Developer quota +// - CodeWhisperer endpoint (AI_EDITOR origin) uses Kiro IDE quota +// Also supports multi-endpoint fallback similar to Antigravity implementation. +// tokenKey is used for rate limiting and cooldown tracking. +func (e *KiroExecutor) executeWithRetry(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options, accessToken, profileArn string, kiroPayload, body []byte, from, to sdktranslator.Format, reporter *usageReporter, currentOrigin, kiroModelID string, isAgentic, isChatOnly bool, tokenKey string) (cliproxyexecutor.Response, error) { + var resp cliproxyexecutor.Response + maxRetries := 2 // Allow retries for token refresh + endpoint fallback + rateLimiter := kiroauth.GetGlobalRateLimiter() + cooldownMgr := kiroauth.GetGlobalCooldownManager() + endpointConfigs := getKiroEndpointConfigs(auth) + var last429Err error + + for endpointIdx := 0; endpointIdx < len(endpointConfigs); endpointIdx++ { + endpointConfig := endpointConfigs[endpointIdx] + url := endpointConfig.URL + // Use this endpoint's compatible Origin (critical for avoiding 403 errors) + currentOrigin = endpointConfig.Origin + + // Rebuild payload with the correct origin for this endpoint + // Each endpoint requires its matching Origin value in the request body + kiroPayload, _ = buildKiroPayloadForFormat(body, kiroModelID, profileArn, currentOrigin, isAgentic, isChatOnly, from, opts.Headers) + + log.Debugf("kiro: trying endpoint %d/%d: %s (Name: %s, Origin: %s)", + endpointIdx+1, len(endpointConfigs), url, endpointConfig.Name, currentOrigin) + + for attempt := 0; attempt <= maxRetries; attempt++ { + // Apply human-like delay before first request (not on retries) + // This mimics natural user behavior patterns + if attempt == 0 && endpointIdx == 0 { + kiroauth.ApplyHumanLikeDelay() + } + + httpReq, err := 
http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(kiroPayload)) + if err != nil { + return resp, err + } + + httpReq.Header.Set("Content-Type", kiroContentType) + httpReq.Header.Set("Accept", kiroAcceptStream) + // Only set X-Amz-Target if specified (Q endpoint doesn't require it) + if endpointConfig.AmzTarget != "" { + httpReq.Header.Set("X-Amz-Target", endpointConfig.AmzTarget) + } + // Kiro-specific headers + httpReq.Header.Set("x-amzn-kiro-agent-mode", kiroIDEAgentMode) + httpReq.Header.Set("x-amzn-codewhisperer-optout", "true") + + // Apply dynamic fingerprint-based headers + applyDynamicFingerprint(httpReq, auth) + + httpReq.Header.Set("Amz-Sdk-Request", "attempt=1; max=3") + httpReq.Header.Set("Amz-Sdk-Invocation-Id", uuid.New().String()) + + // Bearer token authentication for all auth types (Builder ID, IDC, social, etc.) + httpReq.Header.Set("Authorization", "Bearer "+accessToken) + + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: kiroPayload, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newKiroHTTPClientWithPooling(ctx, e.cfg, auth, 120*time.Second) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + // Check for context cancellation first - client disconnected, not a server error + // Use 499 (Client Closed Request - nginx convention) instead of 500 + if errors.Is(err, context.Canceled) { + log.Debugf("kiro: request canceled by client (context.Canceled)") + return resp, statusErr{code: 499, msg: "client canceled request"} + } + + // Check for context deadline 
exceeded - request timed out + // Return 504 Gateway Timeout instead of 500 + if errors.Is(err, context.DeadlineExceeded) { + log.Debugf("kiro: request timed out (context.DeadlineExceeded)") + return resp, statusErr{code: http.StatusGatewayTimeout, msg: "upstream request timed out"} + } + + recordAPIResponseError(ctx, e.cfg, err) + + // Enhanced socket retry: Check if error is retryable (network timeout, connection reset, etc.) + retryCfg := defaultRetryConfig() + if isRetryableError(err) && attempt < retryCfg.MaxRetries { + delay := calculateRetryDelay(attempt, retryCfg) + logRetryAttempt(attempt, retryCfg.MaxRetries, fmt.Sprintf("socket error: %v", err), delay, endpointConfig.Name) + time.Sleep(delay) + continue + } + + return resp, err + } + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + + // Handle 429 errors (quota exhausted) - try next endpoint + // Each endpoint has its own quota pool, so we can try different endpoints + if httpResp.StatusCode == 429 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + // Record failure and set cooldown for 429 + rateLimiter.MarkTokenFailed(tokenKey) + cooldownDuration := kiroauth.CalculateCooldownFor429(attempt) + cooldownMgr.SetCooldown(tokenKey, cooldownDuration, kiroauth.CooldownReason429) + log.Warnf("kiro: rate limit hit (429), token %s set to cooldown for %v", tokenKey, cooldownDuration) + + // Preserve last 429 so callers can correctly backoff when all endpoints are exhausted + last429Err = statusErr{code: httpResp.StatusCode, msg: string(respBody)} + + log.Warnf("kiro: %s endpoint quota exhausted (429), will try next endpoint, body: %s", + endpointConfig.Name, summarizeErrorBody(httpResp.Header.Get("Content-Type"), respBody)) + + // Break inner retry loop to try next endpoint (which has different quota) + break + } + + // Handle 5xx server errors with exponential backoff retry + // Enhanced: Use retryConfig 
for consistent retry behavior + if httpResp.StatusCode >= 500 && httpResp.StatusCode < 600 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + retryCfg := defaultRetryConfig() + // Check if this specific 5xx code is retryable (502, 503, 504) + if isRetryableHTTPStatus(httpResp.StatusCode) && attempt < retryCfg.MaxRetries { + delay := calculateRetryDelay(attempt, retryCfg) + logRetryAttempt(attempt, retryCfg.MaxRetries, fmt.Sprintf("HTTP %d", httpResp.StatusCode), delay, endpointConfig.Name) + time.Sleep(delay) + continue + } else if attempt < maxRetries { + // Fallback for other 5xx errors (500, 501, etc.) + backoff := time.Duration(1<<attempt) * time.Second + if backoff > 30*time.Second { + backoff = 30 * time.Second + } + log.Warnf("kiro: server error %d, retrying in %v (attempt %d/%d)", httpResp.StatusCode, backoff, attempt+1, maxRetries) + time.Sleep(backoff) + continue + } + log.Errorf("kiro: server error %d after %d retries", httpResp.StatusCode, maxRetries) + return resp, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + // Handle 401 errors with token refresh and retry + // 401 = Unauthorized (token expired/invalid) - refresh token + if httpResp.StatusCode == 401 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + log.Warnf("kiro: received 401 error, attempting token refresh") + refreshedAuth, refreshErr := e.Refresh(ctx, auth) + if refreshErr != nil { + log.Errorf("kiro: token refresh failed: %v", refreshErr) + return resp, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + if refreshedAuth != nil { + auth = refreshedAuth + // Persist the refreshed auth to file so subsequent requests use it + if persistErr := e.persistRefreshedAuth(auth); persistErr != nil { + log.Warnf("kiro: failed to persist refreshed auth: %v", persistErr) + // Continue anyway - the token is valid for this request + } + accessToken, profileArn =
kiroCredentials(auth) + // Rebuild payload with new profile ARN if changed + kiroPayload, _ = buildKiroPayloadForFormat(body, kiroModelID, profileArn, currentOrigin, isAgentic, isChatOnly, from, opts.Headers) + if attempt < maxRetries { + log.Infof("kiro: token refreshed successfully, retrying request (attempt %d/%d)", attempt+1, maxRetries+1) + continue + } + log.Infof("kiro: token refreshed successfully, no retries remaining") + } + + log.Warnf("kiro request error, status: 401, body: %s", summarizeErrorBody(httpResp.Header.Get("Content-Type"), respBody)) + return resp, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + // Handle 402 errors - Monthly Limit Reached + if httpResp.StatusCode == 402 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + log.Warnf("kiro: received 402 (monthly limit). Upstream body: %s", string(respBody)) + + // Return upstream error body directly + return resp, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + // Handle 403 errors - Access Denied / Token Expired + // Do NOT switch endpoints for 403 errors + if httpResp.StatusCode == 403 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + // Log the 403 error details for debugging + log.Warnf("kiro: received 403 error (attempt %d/%d), body: %s", attempt+1, maxRetries+1, summarizeErrorBody(httpResp.Header.Get("Content-Type"), respBody)) + + respBodyStr := string(respBody) + + // Check for SUSPENDED status - return immediately without retry + if strings.Contains(respBodyStr, "SUSPENDED") || strings.Contains(respBodyStr, "TEMPORARILY_SUSPENDED") { + // Set long cooldown for suspended accounts + rateLimiter.CheckAndMarkSuspended(tokenKey, respBodyStr) + cooldownMgr.SetCooldown(tokenKey, kiroauth.LongCooldown, kiroauth.CooldownReasonSuspended) + log.Errorf("kiro: account is suspended, token %s set to cooldown for %v", 
tokenKey, kiroauth.LongCooldown) + return resp, statusErr{code: httpResp.StatusCode, msg: "account suspended: " + string(respBody)} + } + + // Check if this looks like a token-related 403 (some APIs return 403 for expired tokens) + isTokenRelated := strings.Contains(respBodyStr, "token") || + strings.Contains(respBodyStr, "expired") || + strings.Contains(respBodyStr, "invalid") || + strings.Contains(respBodyStr, "unauthorized") + + if isTokenRelated && attempt < maxRetries { + log.Warnf("kiro: 403 appears token-related, attempting token refresh") + refreshedAuth, refreshErr := e.Refresh(ctx, auth) + if refreshErr != nil { + log.Errorf("kiro: token refresh failed: %v", refreshErr) + // Token refresh failed - return error immediately + return resp, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + if refreshedAuth != nil { + auth = refreshedAuth + // Persist the refreshed auth to file so subsequent requests use it + if persistErr := e.persistRefreshedAuth(auth); persistErr != nil { + log.Warnf("kiro: failed to persist refreshed auth: %v", persistErr) + // Continue anyway - the token is valid for this request + } + accessToken, profileArn = kiroCredentials(auth) + kiroPayload, _ = buildKiroPayloadForFormat(body, kiroModelID, profileArn, currentOrigin, isAgentic, isChatOnly, from, opts.Headers) + log.Infof("kiro: token refreshed for 403, retrying request") + continue + } + } + + // For non-token 403 or after max retries, return error immediately + // Do NOT switch endpoints for 403 errors + log.Warnf("kiro: 403 error, returning immediately (no endpoint switch)") + return resp, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + log.Debugf("kiro request error, status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: 
httpResp.StatusCode, msg: string(b)} + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("response body close error: %v", errClose) + } + return resp, err + } + + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("response body close error: %v", errClose) + } + }() + + content, toolUses, usageInfo, stopReason, err := e.parseEventStream(httpResp.Body) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + + // Fallback for usage if missing from upstream + + // 1. Estimate InputTokens if missing + if usageInfo.InputTokens == 0 { + if enc, encErr := getTokenizer(req.Model); encErr == nil { + if inp, countErr := countOpenAIChatTokens(enc, opts.OriginalRequest); countErr == nil { + usageInfo.InputTokens = inp + } + } + } + + // 2. Estimate OutputTokens if missing and content is available + if usageInfo.OutputTokens == 0 && len(content) > 0 { + // Use tiktoken for more accurate output token calculation + if enc, encErr := getTokenizer(req.Model); encErr == nil { + if tokenCount, countErr := enc.Count(content); countErr == nil { + usageInfo.OutputTokens = int64(tokenCount) + } + } + // Fallback to character count estimation if tiktoken fails + if usageInfo.OutputTokens == 0 { + usageInfo.OutputTokens = int64(len(content) / 4) + if usageInfo.OutputTokens == 0 { + usageInfo.OutputTokens = 1 + } + } + } + + // 3. 
Update TotalTokens + usageInfo.TotalTokens = usageInfo.InputTokens + usageInfo.OutputTokens + + appendAPIResponseChunk(ctx, e.cfg, []byte(content)) + reporter.publish(ctx, usageInfo) + + // Record success for rate limiting + rateLimiter.MarkTokenSuccess(tokenKey) + log.Debugf("kiro: request successful, token %s marked as success", tokenKey) + + // Build response in Claude format for Kiro translator + // stopReason is extracted from upstream response by parseEventStream + requestedModel := payloadRequestedModel(opts, req.Model) + kiroResponse := kiroclaude.BuildClaudeResponse(content, toolUses, requestedModel, usageInfo, stopReason) + out := sdktranslator.TranslateNonStream(ctx, to, from, requestedModel, bytes.Clone(opts.OriginalRequest), body, kiroResponse, nil) + resp = cliproxyexecutor.Response{Payload: []byte(out)} + return resp, nil + } + // Inner retry loop exhausted for this endpoint; control reaches here via the + // 429 break above, so fall through and try the next endpoint (separate quota pool). + } + + // All endpoints exhausted + if last429Err != nil { + return resp, last429Err + } + return resp, fmt.Errorf("kiro: all endpoints exhausted") +} + +// ExecuteStream handles streaming requests to Kiro API. +// Supports automatic token refresh on 401/403 errors and quota fallback on 429.
+func (e *KiroExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + accessToken, profileArn := kiroCredentials(auth) + if accessToken == "" { + return nil, fmt.Errorf("kiro: access token not found in auth") + } + + // Rate limiting: get token key for tracking + tokenKey := getAccountKey(auth) + rateLimiter := kiroauth.GetGlobalRateLimiter() + cooldownMgr := kiroauth.GetGlobalCooldownManager() + + // Check if token is in cooldown period + if cooldownMgr.IsInCooldown(tokenKey) { + remaining := cooldownMgr.GetRemainingCooldown(tokenKey) + reason := cooldownMgr.GetCooldownReason(tokenKey) + log.Warnf("kiro: token %s is in cooldown (reason: %s), remaining: %v", tokenKey, reason, remaining) + return nil, fmt.Errorf("kiro: token is in cooldown for %v (reason: %s)", remaining, reason) + } + + // Wait for rate limiter before proceeding + log.Debugf("kiro: stream waiting for rate limiter for token %s", tokenKey) + rateLimiter.WaitForToken(tokenKey) + log.Debugf("kiro: stream rate limiter cleared for token %s", tokenKey) + + // Check if token is expired before making request (covers both normal and web_search paths) + if e.isTokenExpired(accessToken) { + log.Infof("kiro: access token expired, attempting recovery before stream request") + + // Option B: try reloading the token from file first (the background refresher may already have updated it) + reloadedAuth, reloadErr := e.reloadAuthFromFile(auth) + if reloadErr == nil && reloadedAuth != nil { + // The file holds a newer token; use it + auth = reloadedAuth + accessToken, profileArn = kiroCredentials(auth) + log.Infof("kiro: recovered token from file (background refresh) for stream, expires_at: %v", auth.Metadata["expires_at"]) + } else { + // The token in the file is expired as well; perform an active refresh + log.Debugf("kiro: file reload failed (%v), attempting active refresh for stream", reloadErr) + refreshedAuth, refreshErr := e.Refresh(ctx, auth) + if refreshErr != nil { + log.Warnf("kiro: pre-request token refresh
failed: %v", refreshErr) + } else if refreshedAuth != nil { + auth = refreshedAuth + // Persist the refreshed auth to file so subsequent requests use it + if persistErr := e.persistRefreshedAuth(auth); persistErr != nil { + log.Warnf("kiro: failed to persist refreshed auth: %v", persistErr) + } + accessToken, profileArn = kiroCredentials(auth) + log.Infof("kiro: token refreshed successfully before stream request") + } + } + } + + // Check for pure web_search request + // Route to MCP endpoint instead of normal Kiro API + if kiroclaude.HasWebSearchTool(req.Payload) { + log.Infof("kiro: detected pure web_search request, routing to MCP endpoint") + streamWebSearch, errWebSearch := e.handleWebSearchStream(ctx, auth, req, opts, accessToken, profileArn) + if errWebSearch != nil { + return nil, errWebSearch + } + return &cliproxyexecutor.StreamResult{Chunks: streamWebSearch}, nil + } + + reporter := newUsageReporter(ctx, e.Identifier(), req.Model, auth) + defer reporter.trackFailure(ctx, &err) + + from := opts.SourceFormat + to := sdktranslator.FromString("kiro") + body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true) + + kiroModelID := e.mapModelToKiro(req.Model) + + // Fetch profileArn if missing (for imported accounts from Kiro IDE) + if profileArn == "" { + if fetched := e.fetchAndSaveProfileArn(ctx, auth, accessToken); fetched != "" { + profileArn = fetched + } + } + + // Determine agentic mode and effective profile ARN using helper functions + isAgentic, isChatOnly := determineAgenticMode(req.Model) + effectiveProfileArn := getEffectiveProfileArnWithWarning(auth, profileArn) + + // Execute stream with retry on 401/403 and 429 (quota exhausted) + // Note: currentOrigin and kiroPayload are built inside executeStreamWithRetry for each endpoint + streamKiro, errStreamKiro := e.executeStreamWithRetry(ctx, auth, req, opts, accessToken, effectiveProfileArn, nil, body, from, reporter, "", kiroModelID, isAgentic, isChatOnly, tokenKey) + 
if errStreamKiro != nil { + return nil, errStreamKiro + } + return &cliproxyexecutor.StreamResult{Chunks: streamKiro}, nil +} + +// executeStreamWithRetry performs the streaming HTTP request with automatic retry on auth errors. +// Supports automatic fallback between endpoints with different quotas: +// - Amazon Q endpoint (CLI origin) uses Amazon Q Developer quota +// - CodeWhisperer endpoint (AI_EDITOR origin) uses Kiro IDE quota +// Also supports multi-endpoint fallback similar to Antigravity implementation. +// tokenKey is used for rate limiting and cooldown tracking. +func (e *KiroExecutor) executeStreamWithRetry(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options, accessToken, profileArn string, kiroPayload, body []byte, from sdktranslator.Format, reporter *usageReporter, currentOrigin, kiroModelID string, isAgentic, isChatOnly bool, tokenKey string) (<-chan cliproxyexecutor.StreamChunk, error) { + maxRetries := 2 // Allow retries for token refresh + endpoint fallback + rateLimiter := kiroauth.GetGlobalRateLimiter() + cooldownMgr := kiroauth.GetGlobalCooldownManager() + endpointConfigs := getKiroEndpointConfigs(auth) + var last429Err error + + for endpointIdx := 0; endpointIdx < len(endpointConfigs); endpointIdx++ { + endpointConfig := endpointConfigs[endpointIdx] + url := endpointConfig.URL + // Use this endpoint's compatible Origin (critical for avoiding 403 errors) + currentOrigin = endpointConfig.Origin + + // Rebuild payload with the correct origin for this endpoint + // Each endpoint requires its matching Origin value in the request body + kiroPayload, thinkingEnabled := buildKiroPayloadForFormat(body, kiroModelID, profileArn, currentOrigin, isAgentic, isChatOnly, from, opts.Headers) + + log.Debugf("kiro: stream trying endpoint %d/%d: %s (Name: %s, Origin: %s)", + endpointIdx+1, len(endpointConfigs), url, endpointConfig.Name, currentOrigin) + + for attempt := 0; attempt <= maxRetries; attempt++ { + 
// Apply human-like delay before first streaming request (not on retries) + // This mimics natural user behavior patterns + // Note: Delay is NOT applied during streaming response - only before initial request + if attempt == 0 && endpointIdx == 0 { + kiroauth.ApplyHumanLikeDelay() + } + + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(kiroPayload)) + if err != nil { + return nil, err + } + + httpReq.Header.Set("Content-Type", kiroContentType) + httpReq.Header.Set("Accept", kiroAcceptStream) + // Only set X-Amz-Target if specified (Q endpoint doesn't require it) + if endpointConfig.AmzTarget != "" { + httpReq.Header.Set("X-Amz-Target", endpointConfig.AmzTarget) + } + // Kiro-specific headers + httpReq.Header.Set("x-amzn-kiro-agent-mode", kiroIDEAgentMode) + httpReq.Header.Set("x-amzn-codewhisperer-optout", "true") + + // Apply dynamic fingerprint-based headers + applyDynamicFingerprint(httpReq, auth) + + httpReq.Header.Set("Amz-Sdk-Request", "attempt=1; max=3") + httpReq.Header.Set("Amz-Sdk-Invocation-Id", uuid.New().String()) + + // Bearer token authentication for all auth types (Builder ID, IDC, social, etc.) 
+ httpReq.Header.Set("Authorization", "Bearer "+accessToken) + + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(httpReq, attrs) + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + recordAPIRequest(ctx, e.cfg, upstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: kiroPayload, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := newKiroHTTPClientWithPooling(ctx, e.cfg, auth, 0) + httpResp, err := httpClient.Do(httpReq) + if err != nil { + recordAPIResponseError(ctx, e.cfg, err) + + // Enhanced socket retry for streaming: Check if error is retryable (network timeout, connection reset, etc.) + retryCfg := defaultRetryConfig() + if isRetryableError(err) && attempt < retryCfg.MaxRetries { + delay := calculateRetryDelay(attempt, retryCfg) + logRetryAttempt(attempt, retryCfg.MaxRetries, fmt.Sprintf("stream socket error: %v", err), delay, endpointConfig.Name) + time.Sleep(delay) + continue + } + + return nil, err + } + recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + + // Handle 429 errors (quota exhausted) - try next endpoint + // Each endpoint has its own quota pool, so we can try different endpoints + if httpResp.StatusCode == 429 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + // Record failure and set cooldown for 429 + rateLimiter.MarkTokenFailed(tokenKey) + cooldownDuration := kiroauth.CalculateCooldownFor429(attempt) + cooldownMgr.SetCooldown(tokenKey, cooldownDuration, kiroauth.CooldownReason429) + log.Warnf("kiro: stream rate limit hit (429), token %s set to cooldown for %v", tokenKey, cooldownDuration) + + // Preserve last 429 so callers can 
correctly backoff when all endpoints are exhausted + last429Err = statusErr{code: httpResp.StatusCode, msg: string(respBody)} + + log.Warnf("kiro: stream %s endpoint quota exhausted (429), will try next endpoint, body: %s", + endpointConfig.Name, summarizeErrorBody(httpResp.Header.Get("Content-Type"), respBody)) + + // Break inner retry loop to try next endpoint (which has different quota) + break + } + + // Handle 5xx server errors with exponential backoff retry + // Enhanced: Use retryConfig for consistent retry behavior + if httpResp.StatusCode >= 500 && httpResp.StatusCode < 600 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + retryCfg := defaultRetryConfig() + // Check if this specific 5xx code is retryable (502, 503, 504) + if isRetryableHTTPStatus(httpResp.StatusCode) && attempt < retryCfg.MaxRetries { + delay := calculateRetryDelay(attempt, retryCfg) + logRetryAttempt(attempt, retryCfg.MaxRetries, fmt.Sprintf("stream HTTP %d", httpResp.StatusCode), delay, endpointConfig.Name) + time.Sleep(delay) + continue + } else if attempt < maxRetries { + // Fallback for other 5xx errors (500, 501, etc.) 
+ backoff := time.Duration(1<<attempt) * time.Second + if backoff > 30*time.Second { + backoff = 30 * time.Second + } + log.Warnf("kiro: stream server error %d, retrying in %v (attempt %d/%d)", httpResp.StatusCode, backoff, attempt+1, maxRetries) + time.Sleep(backoff) + continue + } + log.Errorf("kiro: stream server error %d after %d retries", httpResp.StatusCode, maxRetries) + return nil, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + // Handle 400 errors - Credential/Validation issues + // Do NOT switch endpoints - return error immediately + if httpResp.StatusCode == 400 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + log.Warnf("kiro: received 400 error (attempt %d/%d), body: %s", attempt+1, maxRetries+1, summarizeErrorBody(httpResp.Header.Get("Content-Type"), respBody)) + + // 400 errors indicate request validation issues - return immediately without retry + return nil, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + // Handle 401 errors with token refresh and retry + // 401 = Unauthorized (token expired/invalid) - refresh token + if httpResp.StatusCode == 401 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + log.Warnf("kiro: stream received 401 error, attempting token refresh") + refreshedAuth, refreshErr := e.Refresh(ctx, auth) + if refreshErr != nil { + log.Errorf("kiro: token refresh failed: %v", refreshErr) + return nil, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + if refreshedAuth != nil { + auth = refreshedAuth + // Persist the refreshed auth to file so subsequent requests use it + if persistErr := e.persistRefreshedAuth(auth); persistErr != nil { + log.Warnf("kiro: failed to persist refreshed auth: %v", persistErr) + // Continue anyway - the token is valid for this request + } + accessToken, profileArn = kiroCredentials(auth) + // Rebuild payload with new profile ARN if changed +
kiroPayload, _ = buildKiroPayloadForFormat(body, kiroModelID, profileArn, currentOrigin, isAgentic, isChatOnly, from, opts.Headers) + if attempt < maxRetries { + log.Infof("kiro: token refreshed successfully, retrying stream request (attempt %d/%d)", attempt+1, maxRetries+1) + continue + } + log.Infof("kiro: token refreshed successfully, no retries remaining") + } + + log.Warnf("kiro stream error, status: 401, body: %s", string(respBody)) + return nil, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + // Handle 402 errors - Monthly Limit Reached + if httpResp.StatusCode == 402 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + log.Warnf("kiro: stream received 402 (monthly limit). Upstream body: %s", string(respBody)) + + // Return upstream error body directly + return nil, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + // Handle 403 errors - Access Denied / Token Expired + // Do NOT switch endpoints for 403 errors + if httpResp.StatusCode == 403 { + respBody, _ := io.ReadAll(httpResp.Body) + _ = httpResp.Body.Close() + appendAPIResponseChunk(ctx, e.cfg, respBody) + + // Log the 403 error details for debugging + log.Warnf("kiro: stream received 403 error (attempt %d/%d), body: %s", attempt+1, maxRetries+1, string(respBody)) + + respBodyStr := string(respBody) + + // Check for SUSPENDED status - return immediately without retry + if strings.Contains(respBodyStr, "SUSPENDED") || strings.Contains(respBodyStr, "TEMPORARILY_SUSPENDED") { + // Set long cooldown for suspended accounts + rateLimiter.CheckAndMarkSuspended(tokenKey, respBodyStr) + cooldownMgr.SetCooldown(tokenKey, kiroauth.LongCooldown, kiroauth.CooldownReasonSuspended) + log.Errorf("kiro: stream account is suspended, token %s set to cooldown for %v", tokenKey, kiroauth.LongCooldown) + return nil, statusErr{code: httpResp.StatusCode, msg: "account suspended: " + string(respBody)} + } + + // Check if 
this looks like a token-related 403 (some APIs return 403 for expired tokens) + isTokenRelated := strings.Contains(respBodyStr, "token") || + strings.Contains(respBodyStr, "expired") || + strings.Contains(respBodyStr, "invalid") || + strings.Contains(respBodyStr, "unauthorized") + + if isTokenRelated && attempt < maxRetries { + log.Warnf("kiro: 403 appears token-related, attempting token refresh") + refreshedAuth, refreshErr := e.Refresh(ctx, auth) + if refreshErr != nil { + log.Errorf("kiro: token refresh failed: %v", refreshErr) + // Token refresh failed - return error immediately + return nil, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + if refreshedAuth != nil { + auth = refreshedAuth + // Persist the refreshed auth to file so subsequent requests use it + if persistErr := e.persistRefreshedAuth(auth); persistErr != nil { + log.Warnf("kiro: failed to persist refreshed auth: %v", persistErr) + // Continue anyway - the token is valid for this request + } + accessToken, profileArn = kiroCredentials(auth) + kiroPayload, _ = buildKiroPayloadForFormat(body, kiroModelID, profileArn, currentOrigin, isAgentic, isChatOnly, from, opts.Headers) + log.Infof("kiro: token refreshed for 403, retrying stream request") + continue + } + } + + // For non-token 403 or after max retries, return error immediately + // Do NOT switch endpoints for 403 errors + log.Warnf("kiro: 403 error, returning immediately (no endpoint switch)") + return nil, statusErr{code: httpResp.StatusCode, msg: string(respBody)} + } + + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + appendAPIResponseChunk(ctx, e.cfg, b) + log.Debugf("kiro stream error, status: %d, body: %s", httpResp.StatusCode, string(b)) + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("response body close error: %v", errClose) + } + return nil, statusErr{code: httpResp.StatusCode, msg: string(b)} + } + + out := make(chan 
cliproxyexecutor.StreamChunk) + + // Record success immediately since connection was established successfully + // Streaming errors will be handled separately + rateLimiter.MarkTokenSuccess(tokenKey) + log.Debugf("kiro: stream request successful, token %s marked as success", tokenKey) + + go func(resp *http.Response, thinkingEnabled bool) { + defer close(out) + defer func() { + if r := recover(); r != nil { + log.Errorf("kiro: panic in stream handler: %v", r) + out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("internal error: %v", r)} + } + }() + defer func() { + if errClose := resp.Body.Close(); errClose != nil { + log.Errorf("response body close error: %v", errClose) + } + }() + + // Kiro API always returns thinking tags regardless of request parameters + // So we always enable thinking parsing for Kiro responses + log.Debugf("kiro: stream thinkingEnabled = %v (always true for Kiro)", thinkingEnabled) + + e.streamToChannel(ctx, resp.Body, out, from, payloadRequestedModel(opts, req.Model), opts.OriginalRequest, body, reporter, thinkingEnabled) + }(httpResp, thinkingEnabled) + + return out, nil + } + // Inner retry loop exhausted for this endpoint; control reaches here via the + // 429 break above, so fall through and try the next endpoint (separate quota pool). + } + + // All endpoints exhausted + if last429Err != nil { + return nil, last429Err + } + return nil, fmt.Errorf("kiro: stream all endpoints exhausted") +} + +// kiroCredentials extracts access token and profile ARN from auth.
+func kiroCredentials(auth *cliproxyauth.Auth) (accessToken, profileArn string) { + if auth == nil { + return "", "" + } + + // Try Metadata first (wrapper format) + if auth.Metadata != nil { + if token, ok := auth.Metadata["access_token"].(string); ok { + accessToken = token + } + if arn, ok := auth.Metadata["profile_arn"].(string); ok { + profileArn = arn + } + } + + // Try Attributes + if accessToken == "" && auth.Attributes != nil { + accessToken = auth.Attributes["access_token"] + profileArn = auth.Attributes["profile_arn"] + } + + // Try direct fields from flat JSON format (new AWS Builder ID format) + if accessToken == "" && auth.Metadata != nil { + if token, ok := auth.Metadata["accessToken"].(string); ok { + accessToken = token + } + if arn, ok := auth.Metadata["profileArn"].(string); ok { + profileArn = arn + } + } + + return accessToken, profileArn +} + +// findRealThinkingEndTag finds the real end tag, skipping false positives. +// Returns -1 if no real end tag is found. +// +// Real tags from Kiro API have specific characteristics: +// - Usually preceded by newline (.\n) +// - Usually followed by newline (\n\n) +// - Not inside code blocks or inline code +// +// False positives (discussion text) have characteristics: +// - In the middle of a sentence +// - Preceded by discussion words like "标签", "tag", "returns" +// - Inside code blocks or inline code +// +// Parameters: +// - content: the content to search in +// - alreadyInCodeBlock: whether we're already inside a code block from previous chunks +// - alreadyInInlineCode: whether we're already inside inline code from previous chunks +func findRealThinkingEndTag(content string, alreadyInCodeBlock, alreadyInInlineCode bool) int { + searchStart := 0 + for { + endIdx := strings.Index(content[searchStart:], kirocommon.ThinkingEndTag) + if endIdx < 0 { + return -1 + } + endIdx += searchStart // Adjust to absolute position + + textBeforeEnd := content[:endIdx] + textAfterEnd := 
content[endIdx+len(kirocommon.ThinkingEndTag):] + + // Check 1: Is it inside inline code? + // Count backticks in current content and add state from previous chunks + backtickCount := strings.Count(textBeforeEnd, "`") + effectiveInInlineCode := alreadyInInlineCode + if backtickCount%2 == 1 { + effectiveInInlineCode = !effectiveInInlineCode + } + if effectiveInInlineCode { + log.Debugf("kiro: found thinking end tag inside inline code at pos %d, skipping", endIdx) + searchStart = endIdx + len(kirocommon.ThinkingEndTag) + continue + } + + // Check 2: Is it inside a code block? + // Count fences in current content and add state from previous chunks + fenceCount := strings.Count(textBeforeEnd, "```") + altFenceCount := strings.Count(textBeforeEnd, "~~~") + effectiveInCodeBlock := alreadyInCodeBlock + if fenceCount%2 == 1 || altFenceCount%2 == 1 { + effectiveInCodeBlock = !effectiveInCodeBlock + } + if effectiveInCodeBlock { + log.Debugf("kiro: found thinking end tag inside code block at pos %d, skipping", endIdx) + searchStart = endIdx + len(kirocommon.ThinkingEndTag) + continue + } + + // Check 3: Real tags are usually preceded by newline or at start + // and followed by newline or at end. Check the format. + charBeforeTag := byte(0) + if endIdx > 0 { + charBeforeTag = content[endIdx-1] + } + charAfterTag := byte(0) + if len(textAfterEnd) > 0 { + charAfterTag = textAfterEnd[0] + } + + // Real end tag format: preceded by newline OR end of sentence (. ! ?) + // and followed by newline OR end of content + isPrecededByNewlineOrSentenceEnd := charBeforeTag == '\n' || charBeforeTag == '.' || + charBeforeTag == '!' || charBeforeTag == '?'
|| charBeforeTag == 0 + isFollowedByNewlineOrEnd := charAfterTag == '\n' || charAfterTag == 0 + + // If the tag has proper formatting (newline before/after), it's likely real + if isPrecededByNewlineOrSentenceEnd && isFollowedByNewlineOrEnd { + log.Debugf("kiro: found properly formatted thinking end tag at pos %d", endIdx) + return endIdx + } + + // Check 4: Is the tag preceded by discussion keywords on the same line? + lastNewlineIdx := strings.LastIndex(textBeforeEnd, "\n") + lineBeforeTag := textBeforeEnd + if lastNewlineIdx >= 0 { + lineBeforeTag = textBeforeEnd[lastNewlineIdx+1:] + } + lineBeforeTagLower := strings.ToLower(lineBeforeTag) + + // Discussion patterns - if found, this is likely discussion text + // Note: patterns built from the tag constants; an empty pattern here would + // match every line via strings.Contains + discussionPatterns := []string{ + "标签", "返回", "输出", "包含", "使用", "解析", "转换", "生成", // Chinese + "tag", "return", "output", "contain", "use", "parse", "emit", "convert", "generate", // English + strings.ToLower(kirocommon.ThinkingStartTag + kirocommon.ThinkingEndTag), // discussing both tags together + "`" + strings.ToLower(kirocommon.ThinkingEndTag) + "`", // end tag explicitly in inline code + } + isDiscussion := false + for _, pattern := range discussionPatterns { + if strings.Contains(lineBeforeTagLower, pattern) { + isDiscussion = true + break + } + } + if isDiscussion { + log.Debugf("kiro: found thinking end tag after discussion text at pos %d, skipping", endIdx) + searchStart = endIdx + len(kirocommon.ThinkingEndTag) + continue + } + + // Check 5: Is there text immediately after the tag on the same line?
+ // Real end tags don't have text immediately after on the same line + if len(textAfterEnd) > 0 && charAfterTag != '\n' && charAfterTag != 0 { + // Find the next newline + nextNewline := strings.Index(textAfterEnd, "\n") + var textOnSameLine string + if nextNewline >= 0 { + textOnSameLine = textAfterEnd[:nextNewline] + } else { + textOnSameLine = textAfterEnd + } + // If there's non-whitespace text on the same line after the tag, it's discussion + if strings.TrimSpace(textOnSameLine) != "" { + log.Debugf("kiro: found thinking end tag with text after it on same line at pos %d, skipping", endIdx) + searchStart = endIdx + len(kirocommon.ThinkingEndTag) + continue + } + } + + // Check 6: Is there another thinking start tag after this end tag? + if strings.Contains(textAfterEnd, kirocommon.ThinkingStartTag) { + nextStartIdx := strings.Index(textAfterEnd, kirocommon.ThinkingStartTag) + textBeforeNextStart := textAfterEnd[:nextStartIdx] + nextBacktickCount := strings.Count(textBeforeNextStart, "`") + nextFenceCount := strings.Count(textBeforeNextStart, "```") + nextAltFenceCount := strings.Count(textBeforeNextStart, "~~~") + + // If the next start tag is NOT in code, then this end tag is likely discussion text + if nextBacktickCount%2 == 0 && nextFenceCount%2 == 0 && nextAltFenceCount%2 == 0 { + log.Debugf("kiro: found thinking end tag followed by a start tag at pos %d, likely discussion text, skipping", endIdx) + searchStart = endIdx + len(kirocommon.ThinkingEndTag) + continue + } + } + + // This looks like a real end tag + return endIdx + } +} + +// determineAgenticMode determines if the model is an agentic or chat-only variant. +// Returns (isAgentic, isChatOnly) based on model name suffixes. +func determineAgenticMode(model string) (isAgentic, isChatOnly bool) { + isAgentic = strings.HasSuffix(model, "-agentic") + isChatOnly = strings.HasSuffix(model, "-chat") + return isAgentic, isChatOnly +} + +// getEffectiveProfileArnWithWarning suppresses profileArn for builder-id and AWS SSO OIDC auth.
+// Builder-id users (auth_method == "builder-id") and AWS SSO OIDC users (auth_type == "aws_sso_oidc") +// don't need profileArn — sending it causes 403 errors. +// For all other auth methods (e.g. social auth), profileArn is returned as-is, +// with a warning logged if it is empty. +func getEffectiveProfileArnWithWarning(auth *cliproxyauth.Auth, profileArn string) string { + if auth != nil && auth.Metadata != nil { + // Check 1: auth_method field, skip for builder-id only + if authMethod, ok := auth.Metadata["auth_method"].(string); ok && authMethod == "builder-id" { + return "" + } + // Check 2: auth_type field (from kiro-cli tokens) + if authType, ok := auth.Metadata["auth_type"].(string); ok && authType == "aws_sso_oidc" { + return "" // AWS SSO OIDC - don't include profileArn + } + } + // For social auth and IDC, profileArn is required + if profileArn == "" { + log.Warnf("kiro: profile ARN not found in auth, API calls may fail") + } + return profileArn +} + +// mapModelToKiro maps external model names to Kiro model IDs. +// Supports both Kiro and Amazon Q prefixes since they use the same API. +// Agentic variants (-agentic suffix) map to the same backend model IDs. 
+func (e *KiroExecutor) mapModelToKiro(model string) string { + modelMap := map[string]string{ + // Amazon Q format (amazonq- prefix) - same API as Kiro + "amazonq-auto": "auto", + "amazonq-claude-opus-4-6": "claude-opus-4.6", + "amazonq-claude-sonnet-4-6": "claude-sonnet-4.6", + "amazonq-claude-opus-4-5": "claude-opus-4.5", + "amazonq-claude-sonnet-4-5": "claude-sonnet-4.5", + "amazonq-claude-sonnet-4-5-20250929": "claude-sonnet-4.5", + "amazonq-claude-sonnet-4": "claude-sonnet-4", + "amazonq-claude-sonnet-4-20250514": "claude-sonnet-4", + "amazonq-claude-haiku-4-5": "claude-haiku-4.5", + // Kiro format (kiro- prefix) - valid model names that should be preserved + "kiro-claude-opus-4-6": "claude-opus-4.6", + "kiro-claude-sonnet-4-6": "claude-sonnet-4.6", + "kiro-claude-opus-4-5": "claude-opus-4.5", + "kiro-claude-sonnet-4-5": "claude-sonnet-4.5", + "kiro-claude-sonnet-4-5-20250929": "claude-sonnet-4.5", + "kiro-claude-sonnet-4": "claude-sonnet-4", + "kiro-claude-sonnet-4-20250514": "claude-sonnet-4", + "kiro-claude-haiku-4-5": "claude-haiku-4.5", + "kiro-auto": "auto", + // Native format (no prefix) - used by Kiro IDE directly + "claude-opus-4-6": "claude-opus-4.6", + "claude-opus-4.6": "claude-opus-4.6", + "claude-sonnet-4-6": "claude-sonnet-4.6", + "claude-sonnet-4.6": "claude-sonnet-4.6", + "claude-opus-4-5": "claude-opus-4.5", + "claude-opus-4.5": "claude-opus-4.5", + "claude-haiku-4-5": "claude-haiku-4.5", + "claude-haiku-4.5": "claude-haiku-4.5", + "claude-sonnet-4-5": "claude-sonnet-4.5", + "claude-sonnet-4-5-20250929": "claude-sonnet-4.5", + "claude-sonnet-4.5": "claude-sonnet-4.5", + "claude-sonnet-4": "claude-sonnet-4", + "claude-sonnet-4-20250514": "claude-sonnet-4", + "auto": "auto", + // Agentic variants (same backend model IDs, but with special system prompt) + "claude-opus-4.6-agentic": "claude-opus-4.6", + "claude-sonnet-4.6-agentic": "claude-sonnet-4.6", + "claude-opus-4.5-agentic": "claude-opus-4.5", + "claude-sonnet-4.5-agentic": 
"claude-sonnet-4.5", + "claude-sonnet-4-agentic": "claude-sonnet-4", + "claude-haiku-4.5-agentic": "claude-haiku-4.5", + "kiro-claude-opus-4-6-agentic": "claude-opus-4.6", + "kiro-claude-sonnet-4-6-agentic": "claude-sonnet-4.6", + "kiro-claude-opus-4-5-agentic": "claude-opus-4.5", + "kiro-claude-sonnet-4-5-agentic": "claude-sonnet-4.5", + "kiro-claude-sonnet-4-agentic": "claude-sonnet-4", + "kiro-claude-haiku-4-5-agentic": "claude-haiku-4.5", + } + if kiroID, ok := modelMap[model]; ok { + return kiroID + } + + // Smart fallback: try to infer model type from name patterns + modelLower := strings.ToLower(model) + + // Check for Haiku variants + if strings.Contains(modelLower, "haiku") { + log.Debugf("kiro: unknown Haiku model '%s', mapping to claude-haiku-4.5", model) + return "claude-haiku-4.5" + } + + // Check for Sonnet variants + if strings.Contains(modelLower, "sonnet") { + // Check for specific version patterns + if strings.Contains(modelLower, "3-7") || strings.Contains(modelLower, "3.7") { + log.Debugf("kiro: unknown Sonnet 3.7 model '%s', mapping to claude-3-7-sonnet-20250219", model) + return "claude-3-7-sonnet-20250219" + } + if strings.Contains(modelLower, "4-6") || strings.Contains(modelLower, "4.6") { + log.Debugf("kiro: unknown Sonnet 4.6 model '%s', mapping to claude-sonnet-4.6", model) + return "claude-sonnet-4.6" + } + if strings.Contains(modelLower, "4-5") || strings.Contains(modelLower, "4.5") { + log.Debugf("kiro: unknown Sonnet 4.5 model '%s', mapping to claude-sonnet-4.5", model) + return "claude-sonnet-4.5" + } + // Default to Sonnet 4 + log.Debugf("kiro: unknown Sonnet model '%s', mapping to claude-sonnet-4", model) + return "claude-sonnet-4" + } + + // Check for Opus variants + if strings.Contains(modelLower, "opus") { + if strings.Contains(modelLower, "4-6") || strings.Contains(modelLower, "4.6") { + log.Debugf("kiro: unknown Opus 4.6 model '%s', mapping to claude-opus-4.6", model) + return "claude-opus-4.6" + } + log.Debugf("kiro: unknown 
Opus model '%s', mapping to claude-opus-4.5", model) + return "claude-opus-4.5" + } + + // Final fallback to Sonnet 4.5 (most commonly used model) + log.Warnf("kiro: unknown model '%s', falling back to claude-sonnet-4.5", model) + return "claude-sonnet-4.5" +} + +// EventStreamError represents an Event Stream processing error +type EventStreamError struct { + Type string // "fatal", "malformed" + Message string + Cause error +} + +func (e *EventStreamError) Error() string { + if e.Cause != nil { + return fmt.Sprintf("event stream %s: %s: %v", e.Type, e.Message, e.Cause) + } + return fmt.Sprintf("event stream %s: %s", e.Type, e.Message) +} + +// eventStreamMessage represents a parsed AWS Event Stream message +type eventStreamMessage struct { + EventType string // Event type from headers (e.g., "assistantResponseEvent") + Payload []byte // JSON payload of the message +} + +// NOTE: Request building functions moved to internal/translator/kiro/claude/kiro_claude_request.go +// The executor now uses kiroclaude.BuildKiroPayload() instead + +// parseEventStream parses AWS Event Stream binary format. +// Extracts text content, tool uses, and stop_reason from the response. +// Supports embedded [Called ...] tool calls and input buffering for toolUseEvent. 
+// Returns: content, toolUses, usageInfo, stopReason, error +func (e *KiroExecutor) parseEventStream(body io.Reader) (string, []kiroclaude.KiroToolUse, usage.Detail, string, error) { + var content strings.Builder + var toolUses []kiroclaude.KiroToolUse + var usageInfo usage.Detail + var stopReason string // Extracted from upstream response + reader := bufio.NewReader(body) + + // Tool use state tracking for input buffering and deduplication + processedIDs := make(map[string]bool) + var currentToolUse *kiroclaude.ToolUseState + + // Upstream usage tracking - Kiro API returns credit usage and context percentage + var upstreamContextPercentage float64 // Context usage percentage from upstream (e.g., 78.56) + + for { + msg, eventErr := e.readEventStreamMessage(reader) + if eventErr != nil { + log.Errorf("kiro: parseEventStream error: %v", eventErr) + return content.String(), toolUses, usageInfo, stopReason, eventErr + } + if msg == nil { + // Normal end of stream (EOF) + break + } + + eventType := msg.EventType + payload := msg.Payload + if len(payload) == 0 { + continue + } + + var event map[string]interface{} + if err := json.Unmarshal(payload, &event); err != nil { + log.Debugf("kiro: skipping malformed event: %v", err) + continue + } + + // Check for error/exception events in the payload (Kiro API may return errors with HTTP 200) + // These can appear as top-level fields or nested within the event + if errType, hasErrType := event["_type"].(string); hasErrType { + // AWS-style error: {"_type": "com.amazon.aws.codewhisperer#ValidationException", "message": "..."} + errMsg := "" + if msg, ok := event["message"].(string); ok { + errMsg = msg + } + log.Errorf("kiro: received AWS error in event stream: type=%s, message=%s", errType, errMsg) + return "", nil, usageInfo, stopReason, fmt.Errorf("kiro API error: %s - %s", errType, errMsg) + } + if errType, hasErrType := event["type"].(string); hasErrType && (errType == "error" || errType == "exception") { + // Generic 
error event + errMsg := "" + if msg, ok := event["message"].(string); ok { + errMsg = msg + } else if errObj, ok := event["error"].(map[string]interface{}); ok { + if msg, ok := errObj["message"].(string); ok { + errMsg = msg + } + } + log.Errorf("kiro: received error event in stream: type=%s, message=%s", errType, errMsg) + return "", nil, usageInfo, stopReason, fmt.Errorf("kiro API error: %s", errMsg) + } + + // Extract stop_reason from various event formats + // Kiro/Amazon Q API may include stop_reason in different locations + if sr := kirocommon.GetString(event, "stop_reason"); sr != "" { + stopReason = sr + log.Debugf("kiro: parseEventStream found stop_reason (top-level): %s", stopReason) + } + if sr := kirocommon.GetString(event, "stopReason"); sr != "" { + stopReason = sr + log.Debugf("kiro: parseEventStream found stopReason (top-level): %s", stopReason) + } + + // Handle different event types + switch eventType { + case "followupPromptEvent": + // Filter out followupPrompt events - these are UI suggestions, not content + log.Debugf("kiro: parseEventStream ignoring followupPrompt event") + continue + + case "assistantResponseEvent": + if assistantResp, ok := event["assistantResponseEvent"].(map[string]interface{}); ok { + if contentText, ok := assistantResp["content"].(string); ok { + content.WriteString(contentText) + } + // Extract stop_reason from assistantResponseEvent + if sr := kirocommon.GetString(assistantResp, "stop_reason"); sr != "" { + stopReason = sr + log.Debugf("kiro: parseEventStream found stop_reason in assistantResponseEvent: %s", stopReason) + } + if sr := kirocommon.GetString(assistantResp, "stopReason"); sr != "" { + stopReason = sr + log.Debugf("kiro: parseEventStream found stopReason in assistantResponseEvent: %s", stopReason) + } + // Extract tool uses from response + if toolUsesRaw, ok := assistantResp["toolUses"].([]interface{}); ok { + for _, tuRaw := range toolUsesRaw { + if tu, ok := tuRaw.(map[string]interface{}); ok { + 
toolUseID := kirocommon.GetStringValue(tu, "toolUseId") + // Check for duplicate + if processedIDs[toolUseID] { + log.Debugf("kiro: skipping duplicate tool use from assistantResponse: %s", toolUseID) + continue + } + processedIDs[toolUseID] = true + + toolUse := kiroclaude.KiroToolUse{ + ToolUseID: toolUseID, + Name: kirocommon.GetStringValue(tu, "name"), + } + if input, ok := tu["input"].(map[string]interface{}); ok { + toolUse.Input = input + } + toolUses = append(toolUses, toolUse) + } + } + } + } + // Also try direct format + if contentText, ok := event["content"].(string); ok { + content.WriteString(contentText) + } + // Direct tool uses + if toolUsesRaw, ok := event["toolUses"].([]interface{}); ok { + for _, tuRaw := range toolUsesRaw { + if tu, ok := tuRaw.(map[string]interface{}); ok { + toolUseID := kirocommon.GetStringValue(tu, "toolUseId") + // Check for duplicate + if processedIDs[toolUseID] { + log.Debugf("kiro: skipping duplicate direct tool use: %s", toolUseID) + continue + } + processedIDs[toolUseID] = true + + toolUse := kiroclaude.KiroToolUse{ + ToolUseID: toolUseID, + Name: kirocommon.GetStringValue(tu, "name"), + } + if input, ok := tu["input"].(map[string]interface{}); ok { + toolUse.Input = input + } + toolUses = append(toolUses, toolUse) + } + } + } + + case "toolUseEvent": + // Handle dedicated tool use events with input buffering + completedToolUses, newState := kiroclaude.ProcessToolUseEvent(event, currentToolUse, processedIDs) + currentToolUse = newState + toolUses = append(toolUses, completedToolUses...) 
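The `toolUseEvent` branch above accumulates partial tool input across stream events and deduplicates by `toolUseId` (via `kiroclaude.ProcessToolUseEvent` and the `processedIDs` map). A minimal standalone sketch of that buffer-then-dedupe pattern, using a hypothetical simplified event shape rather than the real kiroclaude types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// toolEvent is a simplified stand-in for a streamed toolUseEvent fragment.
type toolEvent struct {
	ToolUseID string
	Name      string
	Input     string // partial JSON, streamed in pieces
	Stop      bool   // true on the final fragment
}

// collectToolUse buffers input fragments per tool-use ID and emits each
// completed call exactly once, mirroring the processedIDs dedup above.
func collectToolUse(events []toolEvent) []map[string]interface{} {
	processed := map[string]bool{}
	buffers := map[string]*strings.Builder{}
	names := map[string]string{}
	var done []map[string]interface{}
	for _, ev := range events {
		if processed[ev.ToolUseID] {
			continue // fragment for an already-completed call
		}
		b, ok := buffers[ev.ToolUseID]
		if !ok {
			b = &strings.Builder{}
			buffers[ev.ToolUseID] = b
			names[ev.ToolUseID] = ev.Name
		}
		b.WriteString(ev.Input)
		if ev.Stop {
			processed[ev.ToolUseID] = true
			var input map[string]interface{}
			if err := json.Unmarshal([]byte(b.String()), &input); err != nil {
				input = map[string]interface{}{} // fall back to empty input on bad JSON
			}
			done = append(done, map[string]interface{}{
				"id": ev.ToolUseID, "name": names[ev.ToolUseID], "input": input,
			})
		}
	}
	return done
}

func main() {
	calls := collectToolUse([]toolEvent{
		{ToolUseID: "t1", Name: "read_file", Input: `{"path":`},
		{ToolUseID: "t1", Input: `"main.go"}`, Stop: true},
		{ToolUseID: "t1", Input: `{}`, Stop: true}, // duplicate, ignored
	})
	fmt.Println(len(calls), calls[0]["name"])
}
```

The real executor additionally repairs truncated JSON (`kiroclaude.RepairJSON`) before unmarshalling, which this sketch omits.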
+ + case "supplementaryWebLinksEvent": + if inputTokens, ok := event["inputTokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + } + if outputTokens, ok := event["outputTokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + } + + case "messageStopEvent", "message_stop": + // Handle message stop events which may contain stop_reason + if sr := kirocommon.GetString(event, "stop_reason"); sr != "" { + stopReason = sr + log.Debugf("kiro: parseEventStream found stop_reason in messageStopEvent: %s", stopReason) + } + if sr := kirocommon.GetString(event, "stopReason"); sr != "" { + stopReason = sr + log.Debugf("kiro: parseEventStream found stopReason in messageStopEvent: %s", stopReason) + } + + case "messageMetadataEvent", "metadataEvent": + // Handle message metadata events which contain token counts + // Official format: { tokenUsage: { outputTokens, totalTokens, uncachedInputTokens, cacheReadInputTokens, cacheWriteInputTokens, contextUsagePercentage } } + var metadata map[string]interface{} + if m, ok := event["messageMetadataEvent"].(map[string]interface{}); ok { + metadata = m + } else if m, ok := event["metadataEvent"].(map[string]interface{}); ok { + metadata = m + } else { + metadata = event // event itself might be the metadata + } + + // Check for nested tokenUsage object (official format) + if tokenUsage, ok := metadata["tokenUsage"].(map[string]interface{}); ok { + // outputTokens - precise output token count + if outputTokens, ok := tokenUsage["outputTokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + log.Infof("kiro: parseEventStream found precise outputTokens in tokenUsage: %d", usageInfo.OutputTokens) + } + // totalTokens - precise total token count + if totalTokens, ok := tokenUsage["totalTokens"].(float64); ok { + usageInfo.TotalTokens = int64(totalTokens) + log.Infof("kiro: parseEventStream found precise totalTokens in tokenUsage: %d", usageInfo.TotalTokens) + } + // uncachedInputTokens - input 
tokens not from cache + if uncachedInputTokens, ok := tokenUsage["uncachedInputTokens"].(float64); ok { + usageInfo.InputTokens = int64(uncachedInputTokens) + log.Infof("kiro: parseEventStream found uncachedInputTokens in tokenUsage: %d", usageInfo.InputTokens) + } + // cacheReadInputTokens - tokens read from cache + if cacheReadTokens, ok := tokenUsage["cacheReadInputTokens"].(float64); ok { + // Add to input tokens if we have uncached tokens, otherwise use as input + if usageInfo.InputTokens > 0 { + usageInfo.InputTokens += int64(cacheReadTokens) + } else { + usageInfo.InputTokens = int64(cacheReadTokens) + } + log.Debugf("kiro: parseEventStream found cacheReadInputTokens in tokenUsage: %d", int64(cacheReadTokens)) + } + // contextUsagePercentage - can be used as fallback for input token estimation + if ctxPct, ok := tokenUsage["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: parseEventStream found contextUsagePercentage in tokenUsage: %.2f%%", ctxPct) + } + } + + // Fallback: check for direct fields in metadata (legacy format) + if usageInfo.InputTokens == 0 { + if inputTokens, ok := metadata["inputTokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + log.Debugf("kiro: parseEventStream found inputTokens in messageMetadataEvent: %d", usageInfo.InputTokens) + } + } + if usageInfo.OutputTokens == 0 { + if outputTokens, ok := metadata["outputTokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + log.Debugf("kiro: parseEventStream found outputTokens in messageMetadataEvent: %d", usageInfo.OutputTokens) + } + } + if usageInfo.TotalTokens == 0 { + if totalTokens, ok := metadata["totalTokens"].(float64); ok { + usageInfo.TotalTokens = int64(totalTokens) + log.Debugf("kiro: parseEventStream found totalTokens in messageMetadataEvent: %d", usageInfo.TotalTokens) + } + } + + case "usageEvent", "usage": + // Handle dedicated usage events + if inputTokens, ok := 
event["inputTokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + log.Debugf("kiro: parseEventStream found inputTokens in usageEvent: %d", usageInfo.InputTokens) + } + if outputTokens, ok := event["outputTokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + log.Debugf("kiro: parseEventStream found outputTokens in usageEvent: %d", usageInfo.OutputTokens) + } + if totalTokens, ok := event["totalTokens"].(float64); ok { + usageInfo.TotalTokens = int64(totalTokens) + log.Debugf("kiro: parseEventStream found totalTokens in usageEvent: %d", usageInfo.TotalTokens) + } + // Also check nested usage object + if usageObj, ok := event["usage"].(map[string]interface{}); ok { + if inputTokens, ok := usageObj["input_tokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + } else if inputTokens, ok := usageObj["prompt_tokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + } + if outputTokens, ok := usageObj["output_tokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + } else if outputTokens, ok := usageObj["completion_tokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + } + if totalTokens, ok := usageObj["total_tokens"].(float64); ok { + usageInfo.TotalTokens = int64(totalTokens) + } + log.Debugf("kiro: parseEventStream found usage object: input=%d, output=%d, total=%d", + usageInfo.InputTokens, usageInfo.OutputTokens, usageInfo.TotalTokens) + } + + case "metricsEvent": + // Handle metrics events which may contain usage data + if metrics, ok := event["metricsEvent"].(map[string]interface{}); ok { + if inputTokens, ok := metrics["inputTokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + } + if outputTokens, ok := metrics["outputTokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + } + log.Debugf("kiro: parseEventStream found metricsEvent: input=%d, output=%d", + usageInfo.InputTokens, usageInfo.OutputTokens) + } + + case 
"meteringEvent": + // Handle metering events from Kiro API (usage billing information) + // Official format: { unit: string, unitPlural: string, usage: number } + if metering, ok := event["meteringEvent"].(map[string]interface{}); ok { + unit := "" + if u, ok := metering["unit"].(string); ok { + unit = u + } + usageVal := 0.0 + if u, ok := metering["usage"].(float64); ok { + usageVal = u + } + log.Infof("kiro: parseEventStream received meteringEvent: usage=%.2f %s", usageVal, unit) + // Store metering info for potential billing/statistics purposes + // Note: This is separate from token counts - it's AWS billing units + } else { + // Try direct fields + unit := "" + if u, ok := event["unit"].(string); ok { + unit = u + } + usageVal := 0.0 + if u, ok := event["usage"].(float64); ok { + usageVal = u + } + if unit != "" || usageVal > 0 { + log.Infof("kiro: parseEventStream received meteringEvent (direct): usage=%.2f %s", usageVal, unit) + } + } + + case "contextUsageEvent": + // Handle context usage events from Kiro API + // Format: {"contextUsageEvent": {"contextUsagePercentage": 0.53}} + if ctxUsage, ok := event["contextUsageEvent"].(map[string]interface{}); ok { + if ctxPct, ok := ctxUsage["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: parseEventStream received contextUsageEvent: %.2f%%", ctxPct*100) + } + } else { + // Try direct field (fallback) + if ctxPct, ok := event["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: parseEventStream received contextUsagePercentage (direct): %.2f%%", ctxPct*100) + } + } + + case "error", "exception", "internalServerException", "invalidStateEvent": + // Handle error events from Kiro API stream + errMsg := "" + errType := eventType + + // Try to extract error message from various formats + if msg, ok := event["message"].(string); ok { + errMsg = msg + } else if errObj, ok := event[eventType].(map[string]interface{}); ok { + if msg, 
ok := errObj["message"].(string); ok { + errMsg = msg + } + if t, ok := errObj["type"].(string); ok { + errType = t + } + } else if errObj, ok := event["error"].(map[string]interface{}); ok { + if msg, ok := errObj["message"].(string); ok { + errMsg = msg + } + if t, ok := errObj["type"].(string); ok { + errType = t + } + } + + // Check for specific error reasons + if reason, ok := event["reason"].(string); ok { + errMsg = fmt.Sprintf("%s (reason: %s)", errMsg, reason) + } + + log.Errorf("kiro: parseEventStream received error event: type=%s, message=%s", errType, errMsg) + + // For invalidStateEvent, we may want to continue processing other events + if eventType == "invalidStateEvent" { + log.Warnf("kiro: invalidStateEvent received, continuing stream processing") + continue + } + + // For other errors, return the error + if errMsg != "" { + return "", nil, usageInfo, stopReason, fmt.Errorf("kiro API error (%s): %s", errType, errMsg) + } + + default: + // Check for contextUsagePercentage in any event + if ctxPct, ok := event["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: parseEventStream received context usage: %.2f%%", upstreamContextPercentage) + } + // Log unknown event types for debugging (to discover new event formats) + log.Debugf("kiro: parseEventStream unknown event type: %s, payload: %s", eventType, string(payload)) + } + + // Check for direct token fields in any event (fallback) + if usageInfo.InputTokens == 0 { + if inputTokens, ok := event["inputTokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + log.Debugf("kiro: parseEventStream found direct inputTokens: %d", usageInfo.InputTokens) + } + } + if usageInfo.OutputTokens == 0 { + if outputTokens, ok := event["outputTokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + log.Debugf("kiro: parseEventStream found direct outputTokens: %d", usageInfo.OutputTokens) + } + } + + // Check for usage object in any event (OpenAI 
format) + if usageInfo.InputTokens == 0 || usageInfo.OutputTokens == 0 { + if usageObj, ok := event["usage"].(map[string]interface{}); ok { + if usageInfo.InputTokens == 0 { + if inputTokens, ok := usageObj["input_tokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + } else if inputTokens, ok := usageObj["prompt_tokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + } + } + if usageInfo.OutputTokens == 0 { + if outputTokens, ok := usageObj["output_tokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + } else if outputTokens, ok := usageObj["completion_tokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + } + } + if usageInfo.TotalTokens == 0 { + if totalTokens, ok := usageObj["total_tokens"].(float64); ok { + usageInfo.TotalTokens = int64(totalTokens) + } + } + log.Debugf("kiro: parseEventStream found usage object (fallback): input=%d, output=%d, total=%d", + usageInfo.InputTokens, usageInfo.OutputTokens, usageInfo.TotalTokens) + } + } + + // Also check nested supplementaryWebLinksEvent + if usageEvent, ok := event["supplementaryWebLinksEvent"].(map[string]interface{}); ok { + if inputTokens, ok := usageEvent["inputTokens"].(float64); ok { + usageInfo.InputTokens = int64(inputTokens) + } + if outputTokens, ok := usageEvent["outputTokens"].(float64); ok { + usageInfo.OutputTokens = int64(outputTokens) + } + } + } + + // Parse embedded tool calls from content (e.g., [Called tool_name with args: {...}]) + contentStr := content.String() + cleanedContent, embeddedToolUses := kiroclaude.ParseEmbeddedToolCalls(contentStr, processedIDs) + toolUses = append(toolUses, embeddedToolUses...) 
+ + // Deduplicate all tool uses + toolUses = kiroclaude.DeduplicateToolUses(toolUses) + + // Apply fallback logic for stop_reason if not provided by upstream + // Priority: upstream stopReason > tool_use detection > end_turn default + if stopReason == "" { + if len(toolUses) > 0 { + stopReason = "tool_use" + log.Debugf("kiro: parseEventStream using fallback stop_reason: tool_use (detected %d tool uses)", len(toolUses)) + } else { + stopReason = "end_turn" + log.Debugf("kiro: parseEventStream using fallback stop_reason: end_turn") + } + } + + // Log warning if response was truncated due to max_tokens + if stopReason == "max_tokens" { + log.Warnf("kiro: response truncated due to max_tokens limit") + } + + // Use contextUsagePercentage to calculate more accurate input tokens + // Kiro model has 200k max context, contextUsagePercentage represents the percentage used + // Formula: input_tokens = contextUsagePercentage * 200000 / 100 + if upstreamContextPercentage > 0 { + calculatedInputTokens := int64(upstreamContextPercentage * 200000 / 100) + if calculatedInputTokens > 0 { + localEstimate := usageInfo.InputTokens + usageInfo.InputTokens = calculatedInputTokens + usageInfo.TotalTokens = usageInfo.InputTokens + usageInfo.OutputTokens + log.Infof("kiro: parseEventStream using contextUsagePercentage (%.2f%%) to calculate input tokens: %d (local estimate was: %d)", + upstreamContextPercentage, calculatedInputTokens, localEstimate) + } + } + + return cleanedContent, toolUses, usageInfo, stopReason, nil +} + +// readEventStreamMessage reads and validates a single AWS Event Stream message. +// Returns the parsed message or a structured error for different failure modes. +// This function implements boundary protection and detailed error classification. 
+// +// AWS Event Stream binary format: +// - Prelude (12 bytes): total_length (4) + headers_length (4) + prelude_crc (4) +// - Headers (variable): header entries +// - Payload (variable): JSON data +// - Message CRC (4 bytes): CRC32C of entire message (not validated, just skipped) +func (e *KiroExecutor) readEventStreamMessage(reader *bufio.Reader) (*eventStreamMessage, *EventStreamError) { + // Read prelude (first 12 bytes: total_len + headers_len + prelude_crc) + prelude := make([]byte, 12) + _, err := io.ReadFull(reader, prelude) + if err == io.EOF { + return nil, nil // Normal end of stream + } + if err != nil { + return nil, &EventStreamError{ + Type: ErrStreamFatal, + Message: "failed to read prelude", + Cause: err, + } + } + + totalLength := binary.BigEndian.Uint32(prelude[0:4]) + headersLength := binary.BigEndian.Uint32(prelude[4:8]) + // Note: prelude[8:12] is prelude_crc - we read it but don't validate (no CRC check per requirements) + + // Boundary check: minimum frame size + if totalLength < minEventStreamFrameSize { + return nil, &EventStreamError{ + Type: ErrStreamMalformed, + Message: fmt.Sprintf("invalid message length: %d (minimum is %d)", totalLength, minEventStreamFrameSize), + } + } + + // Boundary check: maximum message size + if totalLength > maxEventStreamMsgSize { + return nil, &EventStreamError{ + Type: ErrStreamMalformed, + Message: fmt.Sprintf("message too large: %d bytes (maximum is %d)", totalLength, maxEventStreamMsgSize), + } + } + + // Boundary check: headers length within message bounds + // Message structure: prelude(12) + headers(headersLength) + payload + message_crc(4) + // So: headersLength must be <= totalLength - 16 (12 for prelude + 4 for message_crc) + if headersLength > totalLength-16 { + return nil, &EventStreamError{ + Type: ErrStreamMalformed, + Message: fmt.Sprintf("headers length %d exceeds message bounds (total: %d)", headersLength, totalLength), + } + } + + // Read the rest of the message (total - 12 bytes already 
read) + remaining := make([]byte, totalLength-12) + _, err = io.ReadFull(reader, remaining) + if err != nil { + return nil, &EventStreamError{ + Type: ErrStreamFatal, + Message: "failed to read message body", + Cause: err, + } + } + + // Extract event type from headers + // Headers start at beginning of 'remaining', length is headersLength + var eventType string + if headersLength > 0 && headersLength <= uint32(len(remaining)) { + eventType = e.extractEventTypeFromBytes(remaining[:headersLength]) + } + + // Calculate payload boundaries + // Payload starts after headers, ends before message_crc (last 4 bytes) + payloadStart := headersLength + payloadEnd := uint32(len(remaining)) - 4 // Skip message_crc at end + + // Validate payload boundaries + if payloadStart >= payloadEnd { + // No payload, return empty message + return &eventStreamMessage{ + EventType: eventType, + Payload: nil, + }, nil + } + + payload := remaining[payloadStart:payloadEnd] + + return &eventStreamMessage{ + EventType: eventType, + Payload: payload, + }, nil +} + +func skipEventStreamHeaderValue(headers []byte, offset int, valueType byte) (int, bool) { + switch valueType { + case 0, 1: // bool true / bool false + return offset, true + case 2: // byte + if offset+1 > len(headers) { + return offset, false + } + return offset + 1, true + case 3: // short + if offset+2 > len(headers) { + return offset, false + } + return offset + 2, true + case 4: // int + if offset+4 > len(headers) { + return offset, false + } + return offset + 4, true + case 5: // long + if offset+8 > len(headers) { + return offset, false + } + return offset + 8, true + case 6: // byte array (2-byte length + data) + if offset+2 > len(headers) { + return offset, false + } + valueLen := int(binary.BigEndian.Uint16(headers[offset : offset+2])) + offset += 2 + if offset+valueLen > len(headers) { + return offset, false + } + return offset + valueLen, true + case 8: // timestamp + if offset+8 > len(headers) { + return offset, false + } + 
return offset + 8, true
+	case 9: // uuid
+		if offset+16 > len(headers) {
+			return offset, false
+		}
+		return offset + 16, true
+	default:
+		return offset, false
+	}
+}
+
+// extractEventTypeFromBytes extracts the event type from raw header bytes (without prelude CRC prefix)
+func (e *KiroExecutor) extractEventTypeFromBytes(headers []byte) string {
+	offset := 0
+	for offset < len(headers) {
+		nameLen := int(headers[offset])
+		offset++
+		if offset+nameLen > len(headers) {
+			break
+		}
+		name := string(headers[offset : offset+nameLen])
+		offset += nameLen
+
+		if offset >= len(headers) {
+			break
+		}
+		valueType := headers[offset]
+		offset++
+
+		if valueType == 7 { // String type
+			if offset+2 > len(headers) {
+				break
+			}
+			valueLen := int(binary.BigEndian.Uint16(headers[offset : offset+2]))
+			offset += 2
+			if offset+valueLen > len(headers) {
+				break
+			}
+			value := string(headers[offset : offset+valueLen])
+			offset += valueLen
+
+			if name == ":event-type" {
+				return value
+			}
+			continue
+		}
+
+		nextOffset, ok := skipEventStreamHeaderValue(headers, offset, valueType)
+		if !ok {
+			break
+		}
+		offset = nextOffset
+	}
+	return ""
+}
+
+// NOTE: Response building functions moved to internal/translator/kiro/claude/kiro_claude_response.go
+// The executor now uses kiroclaude.BuildClaudeResponse() and kiroclaude.ExtractThinkingFromContent() instead
+
+// streamToChannel converts AWS Event Stream to channel-based streaming.
+// Supports tool calling - emits tool_use content blocks when tools are used.
+// Includes embedded [Called ...] tool call parsing and input buffering for toolUseEvent.
+// Duplicate content filtering is intentionally not applied (see the note in the function body).
+// Extracts stop_reason from upstream events when available.
+// thinkingEnabled controls whether thinking tags are parsed - only parse when the request enabled thinking.
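The frame math in `readEventStreamMessage` (prelude at bytes 0-11, headers next, payload up to the trailing 4-byte CRC) and the header walk in `extractEventTypeFromBytes` can be exercised end to end with a hand-built frame. A minimal sketch with zeroed CRC placeholders (a real producer computes CRC32C values):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// buildFrame assembles a minimal event-stream frame carrying one string
// header (":event-type", value type 7) and a JSON payload. Both CRC fields
// are zeroed here purely for illustration.
func buildFrame(eventType string, payload []byte) []byte {
	var headers []byte
	name := ":event-type"
	headers = append(headers, byte(len(name)))
	headers = append(headers, name...)
	headers = append(headers, 7) // value type 7 = string
	headers = binary.BigEndian.AppendUint16(headers, uint16(len(eventType)))
	headers = append(headers, eventType...)

	total := 12 + len(headers) + len(payload) + 4
	frame := make([]byte, 0, total)
	frame = binary.BigEndian.AppendUint32(frame, uint32(total))
	frame = binary.BigEndian.AppendUint32(frame, uint32(len(headers)))
	frame = binary.BigEndian.AppendUint32(frame, 0) // prelude CRC (placeholder)
	frame = append(frame, headers...)
	frame = append(frame, payload...)
	frame = binary.BigEndian.AppendUint32(frame, 0) // message CRC (placeholder)
	return frame
}

// parseFrame mirrors the boundary math used above: headers start at byte 12,
// payload runs from 12+headersLen to total-4.
func parseFrame(frame []byte) (eventType string, payload []byte) {
	headersLen := binary.BigEndian.Uint32(frame[4:8])
	h := frame[12 : 12+headersLen]
	nameLen := int(h[0])
	// skip the name and the value-type byte, then read the 2-byte string length
	off := 1 + nameLen + 1
	valLen := int(binary.BigEndian.Uint16(h[off : off+2]))
	eventType = string(h[off+2 : off+2+valLen])
	payload = frame[12+headersLen : len(frame)-4]
	return
}

func main() {
	f := buildFrame("assistantResponseEvent", []byte(`{"content":"hi"}`))
	et, pl := parseFrame(f)
	fmt.Println(et, string(pl))
}
```

This round-trips only the happy path; the executor's reader adds the min/max frame-size and headers-length bounds checks on top of this layout.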
+func (e *KiroExecutor) streamToChannel(ctx context.Context, body io.Reader, out chan<- cliproxyexecutor.StreamChunk, targetFormat sdktranslator.Format, model string, originalReq, claudeBody []byte, reporter *usageReporter, thinkingEnabled bool) { + reader := bufio.NewReaderSize(body, 20*1024*1024) // 20MB buffer to match other providers + var totalUsage usage.Detail + var hasToolUses bool // Track if any tool uses were emitted + var upstreamStopReason string // Track stop_reason from upstream events + + // Tool use state tracking for input buffering and deduplication + processedIDs := make(map[string]bool) + var currentToolUse *kiroclaude.ToolUseState + + // NOTE: Duplicate content filtering removed - it was causing legitimate repeated + // content (like consecutive newlines) to be incorrectly filtered out. + // The previous implementation compared lastContentEvent == contentDelta which + // is too aggressive for streaming scenarios. + + // Streaming token calculation - accumulate content for real-time token counting + // Based on AIClient-2-API implementation + var accumulatedContent strings.Builder + accumulatedContent.Grow(4096) // Pre-allocate 4KB capacity to reduce reallocations + + // Real-time usage estimation state + // These track when to send periodic usage updates during streaming + var lastUsageUpdateLen int // Last accumulated content length when usage was sent + var lastUsageUpdateTime = time.Now() // Last time usage update was sent + var lastReportedOutputTokens int64 // Last reported output token count + + // Upstream usage tracking - Kiro API returns credit usage and context percentage + var upstreamCreditUsage float64 // Credit usage from upstream (e.g., 1.458) + var upstreamContextPercentage float64 // Context usage percentage from upstream (e.g., 78.56) + var hasUpstreamUsage bool // Whether we received usage from upstream + + // Translator param for maintaining tool call state across streaming events + // IMPORTANT: This must persist across 
all TranslateStream calls + var translatorParam any + + // Thinking mode state tracking - tag-based parsing for tags in content + inThinkBlock := false // Whether we're currently inside a block + isThinkingBlockOpen := false // Track if thinking content block SSE event is open + thinkingBlockIndex := -1 // Index of the thinking content block + var accumulatedThinkingContent strings.Builder // Accumulate thinking content for token counting + hasOfficialReasoningEvent := false // Disable tag parsing after official reasoning events appear + + // Buffer for handling partial tag matches at chunk boundaries + var pendingContent strings.Builder // Buffer content that might be part of a tag + + // Pre-calculate input tokens from request if possible + // Kiro uses Claude format, so try Claude format first, then OpenAI format, then fallback + if enc, err := getTokenizer(model); err == nil { + var inputTokens int64 + var countMethod string + + // Try Claude format first (Kiro uses Claude API format) + if inp, err := countClaudeChatTokens(enc, claudeBody); err == nil && inp > 0 { + inputTokens = inp + countMethod = "claude" + } else if inp, err := countOpenAIChatTokens(enc, originalReq); err == nil && inp > 0 { + // Fallback to OpenAI format (for OpenAI-compatible requests) + inputTokens = inp + countMethod = "openai" + } else { + // Final fallback: estimate from raw request size (roughly 4 chars per token) + inputTokens = int64(len(claudeBody) / 4) + if inputTokens == 0 && len(claudeBody) > 0 { + inputTokens = 1 + } + countMethod = "estimate" + } + + totalUsage.InputTokens = inputTokens + log.Debugf("kiro: streamToChannel pre-calculated input tokens: %d (method: %s, claude body: %d bytes, original req: %d bytes)", + totalUsage.InputTokens, countMethod, len(claudeBody), len(originalReq)) + } + + contentBlockIndex := -1 + messageStartSent := false + isTextBlockOpen := false + var outputLen int + + // Ensure usage is published even on early return + defer func() { + 
reporter.publish(ctx, totalUsage) + }() + + for { + select { + case <-ctx.Done(): + return + default: + } + + msg, eventErr := e.readEventStreamMessage(reader) + if eventErr != nil { + // Log the error + log.Errorf("kiro: streamToChannel error: %v", eventErr) + + // Send error to channel for client notification + out <- cliproxyexecutor.StreamChunk{Err: eventErr} + return + } + if msg == nil { + // Normal end of stream (EOF) + // Flush any incomplete tool use before ending stream + if currentToolUse != nil && !processedIDs[currentToolUse.ToolUseID] { + log.Warnf("kiro: flushing incomplete tool use at EOF: %s (ID: %s)", currentToolUse.Name, currentToolUse.ToolUseID) + fullInput := currentToolUse.InputBuffer.String() + repairedJSON := kiroclaude.RepairJSON(fullInput) + var finalInput map[string]interface{} + if err := json.Unmarshal([]byte(repairedJSON), &finalInput); err != nil { + log.Warnf("kiro: failed to parse incomplete tool input at EOF: %v", err) + finalInput = make(map[string]interface{}) + } + + processedIDs[currentToolUse.ToolUseID] = true + contentBlockIndex++ + + // Send tool_use content block + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(contentBlockIndex, "tool_use", currentToolUse.ToolUseID, currentToolUse.Name) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + + // Send tool input as delta + inputBytes, _ := json.Marshal(finalInput) + inputDelta := kiroclaude.BuildClaudeInputJsonDeltaEvent(string(inputBytes), contentBlockIndex) + sseData = sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, inputDelta, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + + // Close block + blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + 
sseData = sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + + hasToolUses = true + currentToolUse = nil + } + + // DISABLED: Tag-based pending character flushing + // This code block was used for tag-based thinking detection which has been + // replaced by reasoningContentEvent handling. No pending tag chars to flush. + // Original code preserved in git history. + break + } + + eventType := msg.EventType + payload := msg.Payload + if len(payload) == 0 { + continue + } + appendAPIResponseChunk(ctx, e.cfg, payload) + + var event map[string]interface{} + if err := json.Unmarshal(payload, &event); err != nil { + log.Warnf("kiro: failed to unmarshal event payload: %v, raw: %s", err, string(payload)) + continue + } + + // Check for error/exception events in the payload (Kiro API may return errors with HTTP 200) + // These can appear as top-level fields or nested within the event + if errType, hasErrType := event["_type"].(string); hasErrType { + // AWS-style error: {"_type": "com.amazon.aws.codewhisperer#ValidationException", "message": "..."} + errMsg := "" + if msg, ok := event["message"].(string); ok { + errMsg = msg + } + log.Errorf("kiro: received AWS error in stream: type=%s, message=%s", errType, errMsg) + out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("kiro API error: %s - %s", errType, errMsg)} + return + } + if errType, hasErrType := event["type"].(string); hasErrType && (errType == "error" || errType == "exception") { + // Generic error event + errMsg := "" + if msg, ok := event["message"].(string); ok { + errMsg = msg + } else if errObj, ok := event["error"].(map[string]interface{}); ok { + if msg, ok := errObj["message"].(string); ok { + errMsg = msg + } + } + log.Errorf("kiro: received error event in stream: type=%s, message=%s", errType, errMsg) + out <- 
cliproxyexecutor.StreamChunk{Err: fmt.Errorf("kiro API error: %s", errMsg)} + return + } + + // Extract stop_reason from various event formats (streaming) + // Kiro/Amazon Q API may include stop_reason in different locations + if sr := kirocommon.GetString(event, "stop_reason"); sr != "" { + upstreamStopReason = sr + log.Debugf("kiro: streamToChannel found stop_reason (top-level): %s", upstreamStopReason) + } + if sr := kirocommon.GetString(event, "stopReason"); sr != "" { + upstreamStopReason = sr + log.Debugf("kiro: streamToChannel found stopReason (top-level): %s", upstreamStopReason) + } + + // Send message_start on first event + if !messageStartSent { + msgStart := kiroclaude.BuildClaudeMessageStartEvent(model, totalUsage.InputTokens) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, msgStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + messageStartSent = true + } + + switch eventType { + case "followupPromptEvent": + // Filter out followupPrompt events - these are UI suggestions, not content + log.Debugf("kiro: streamToChannel ignoring followupPrompt event") + continue + + case "messageStopEvent", "message_stop": + // Handle message stop events which may contain stop_reason + if sr := kirocommon.GetString(event, "stop_reason"); sr != "" { + upstreamStopReason = sr + log.Debugf("kiro: streamToChannel found stop_reason in messageStopEvent: %s", upstreamStopReason) + } + if sr := kirocommon.GetString(event, "stopReason"); sr != "" { + upstreamStopReason = sr + log.Debugf("kiro: streamToChannel found stopReason in messageStopEvent: %s", upstreamStopReason) + } + + case "meteringEvent": + // Handle metering events from Kiro API (usage billing information) + // Official format: { unit: string, unitPlural: string, usage: number } + if metering, ok := event["meteringEvent"].(map[string]interface{}); ok { + unit := "" + if u, ok := 
metering["unit"].(string); ok { + unit = u + } + usageVal := 0.0 + if u, ok := metering["usage"].(float64); ok { + usageVal = u + } + upstreamCreditUsage = usageVal + hasUpstreamUsage = true + log.Infof("kiro: streamToChannel received meteringEvent: usage=%.4f %s", usageVal, unit) + } else { + // Try direct fields (event is meteringEvent itself) + if unit, ok := event["unit"].(string); ok { + if usage, ok := event["usage"].(float64); ok { + upstreamCreditUsage = usage + hasUpstreamUsage = true + log.Infof("kiro: streamToChannel received meteringEvent (direct): usage=%.4f %s", usage, unit) + } + } + } + + case "contextUsageEvent": + // Handle context usage events from Kiro API + // Format: {"contextUsageEvent": {"contextUsagePercentage": 0.53}} + if ctxUsage, ok := event["contextUsageEvent"].(map[string]interface{}); ok { + if ctxPct, ok := ctxUsage["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: streamToChannel received contextUsageEvent: %.2f%%", ctxPct*100) + } + } else { + // Try direct field (fallback) + if ctxPct, ok := event["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: streamToChannel received contextUsagePercentage (direct): %.2f%%", ctxPct*100) + } + } + + case "error", "exception", "internalServerException": + // Handle error events from Kiro API stream + errMsg := "" + errType := eventType + + // Try to extract error message from various formats + if msg, ok := event["message"].(string); ok { + errMsg = msg + } else if errObj, ok := event[eventType].(map[string]interface{}); ok { + if msg, ok := errObj["message"].(string); ok { + errMsg = msg + } + if t, ok := errObj["type"].(string); ok { + errType = t + } + } else if errObj, ok := event["error"].(map[string]interface{}); ok { + if msg, ok := errObj["message"].(string); ok { + errMsg = msg + } + } + + log.Errorf("kiro: streamToChannel received error event: type=%s, message=%s", errType, errMsg) + + // 
Send error to the stream and exit + if errMsg != "" { + out <- cliproxyexecutor.StreamChunk{ + Err: fmt.Errorf("kiro API error (%s): %s", errType, errMsg), + } + return + } + + case "invalidStateEvent": + // Handle invalid state events - log and continue (non-fatal) + errMsg := "" + if msg, ok := event["message"].(string); ok { + errMsg = msg + } else if stateEvent, ok := event["invalidStateEvent"].(map[string]interface{}); ok { + if msg, ok := stateEvent["message"].(string); ok { + errMsg = msg + } + } + log.Warnf("kiro: streamToChannel received invalidStateEvent: %s, continuing", errMsg) + continue + + default: + // Check for upstream usage events from Kiro API + // Format: {"unit":"credit","unitPlural":"credits","usage":1.458} + if unit, ok := event["unit"].(string); ok && unit == "credit" { + if usage, ok := event["usage"].(float64); ok { + upstreamCreditUsage = usage + hasUpstreamUsage = true + log.Debugf("kiro: received upstream credit usage: %.4f", upstreamCreditUsage) + } + } + // Format: {"contextUsagePercentage":78.56} + if ctxPct, ok := event["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: received upstream context usage: %.2f%%", upstreamContextPercentage) + } + + // Check for token counts in unknown events + if inputTokens, ok := event["inputTokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + hasUpstreamUsage = true + log.Debugf("kiro: streamToChannel found inputTokens in event %s: %d", eventType, totalUsage.InputTokens) + } + if outputTokens, ok := event["outputTokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + hasUpstreamUsage = true + log.Debugf("kiro: streamToChannel found outputTokens in event %s: %d", eventType, totalUsage.OutputTokens) + } + if totalTokens, ok := event["totalTokens"].(float64); ok { + totalUsage.TotalTokens = int64(totalTokens) + log.Debugf("kiro: streamToChannel found totalTokens in event %s: %d", eventType, totalUsage.TotalTokens) + 
} + + // Check for usage object in unknown events (OpenAI/Claude format) + if usageObj, ok := event["usage"].(map[string]interface{}); ok { + if inputTokens, ok := usageObj["input_tokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + hasUpstreamUsage = true + } else if inputTokens, ok := usageObj["prompt_tokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + hasUpstreamUsage = true + } + if outputTokens, ok := usageObj["output_tokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + hasUpstreamUsage = true + } else if outputTokens, ok := usageObj["completion_tokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + hasUpstreamUsage = true + } + if totalTokens, ok := usageObj["total_tokens"].(float64); ok { + totalUsage.TotalTokens = int64(totalTokens) + } + log.Debugf("kiro: streamToChannel found usage object in event %s: input=%d, output=%d, total=%d", + eventType, totalUsage.InputTokens, totalUsage.OutputTokens, totalUsage.TotalTokens) + } + + // Log unknown event types for debugging (to discover new event formats) + if eventType != "" { + log.Debugf("kiro: streamToChannel unknown event type: %s, payload: %s", eventType, string(payload)) + } + + case "assistantResponseEvent": + var contentDelta string + var toolUses []map[string]interface{} + + if assistantResp, ok := event["assistantResponseEvent"].(map[string]interface{}); ok { + if c, ok := assistantResp["content"].(string); ok { + contentDelta = c + } + // Extract stop_reason from assistantResponseEvent + if sr := kirocommon.GetString(assistantResp, "stop_reason"); sr != "" { + upstreamStopReason = sr + log.Debugf("kiro: streamToChannel found stop_reason in assistantResponseEvent: %s", upstreamStopReason) + } + if sr := kirocommon.GetString(assistantResp, "stopReason"); sr != "" { + upstreamStopReason = sr + log.Debugf("kiro: streamToChannel found stopReason in assistantResponseEvent: %s", upstreamStopReason) + } + // Extract tool uses 
from response + if tus, ok := assistantResp["toolUses"].([]interface{}); ok { + for _, tuRaw := range tus { + if tu, ok := tuRaw.(map[string]interface{}); ok { + toolUses = append(toolUses, tu) + } + } + } + } + if contentDelta == "" { + if c, ok := event["content"].(string); ok { + contentDelta = c + } + } + // Direct tool uses + if tus, ok := event["toolUses"].([]interface{}); ok { + for _, tuRaw := range tus { + if tu, ok := tuRaw.(map[string]interface{}); ok { + toolUses = append(toolUses, tu) + } + } + } + + // Handle text content with thinking mode support + if contentDelta != "" { + // NOTE: Duplicate content filtering was removed because it incorrectly + // filtered out legitimate repeated content (like consecutive newlines "\n\n"). + // Streaming naturally can have identical chunks that are valid content. + + outputLen += len(contentDelta) + // Accumulate content for streaming token calculation + accumulatedContent.WriteString(contentDelta) + + // Real-time usage estimation: Check if we should send a usage update + // This helps clients track context usage during long thinking sessions + shouldSendUsageUpdate := false + if accumulatedContent.Len()-lastUsageUpdateLen >= usageUpdateCharThreshold { + shouldSendUsageUpdate = true + } else if time.Since(lastUsageUpdateTime) >= usageUpdateTimeInterval && accumulatedContent.Len() > lastUsageUpdateLen { + shouldSendUsageUpdate = true + } + + if shouldSendUsageUpdate { + // Calculate current output tokens using tiktoken + var currentOutputTokens int64 + if enc, encErr := getTokenizer(model); encErr == nil { + if tokenCount, countErr := enc.Count(accumulatedContent.String()); countErr == nil { + currentOutputTokens = int64(tokenCount) + } + } + // Fallback to character estimation if tiktoken fails + if currentOutputTokens == 0 { + currentOutputTokens = int64(accumulatedContent.Len() / 4) + if currentOutputTokens == 0 { + currentOutputTokens = 1 + } + } + + // Only send update if token count has changed significantly 
(at least 10 tokens) + if currentOutputTokens > lastReportedOutputTokens+10 { + // Send ping event with usage information + // This is a non-blocking update that clients can optionally process + pingEvent := kiroclaude.BuildClaudePingEventWithUsage(totalUsage.InputTokens, currentOutputTokens) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, pingEvent, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + + lastReportedOutputTokens = currentOutputTokens + log.Debugf("kiro: sent real-time usage update - input: %d, output: %d (accumulated: %d chars)", + totalUsage.InputTokens, currentOutputTokens, accumulatedContent.Len()) + } + + lastUsageUpdateLen = accumulatedContent.Len() + lastUsageUpdateTime = time.Now() + } + + if hasOfficialReasoningEvent { + processText := strings.TrimSpace(strings.ReplaceAll(strings.ReplaceAll(contentDelta, kirocommon.ThinkingStartTag, ""), kirocommon.ThinkingEndTag, "")) + if processText != "" { + if !isTextBlockOpen { + contentBlockIndex++ + isTextBlockOpen = true + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(contentBlockIndex, "text", "", "") + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + claudeEvent := kiroclaude.BuildClaudeStreamEvent(processText, contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, claudeEvent, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + continue + } + + // TAG-BASED THINKING PARSING: Parse tags from content + // Combine pending content with new content for processing + pendingContent.WriteString(contentDelta) + processContent := pendingContent.String() + 
pendingContent.Reset() + + // Process content looking for thinking tags + for len(processContent) > 0 { + if inThinkBlock { + // We're inside a thinking block, look for + endIdx := strings.Index(processContent, kirocommon.ThinkingEndTag) + if endIdx >= 0 { + // Found end tag - emit thinking content before the tag + thinkingText := processContent[:endIdx] + if thinkingText != "" { + // Ensure thinking block is open + if !isThinkingBlockOpen { + contentBlockIndex++ + thinkingBlockIndex = contentBlockIndex + isThinkingBlockOpen = true + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(thinkingBlockIndex, "thinking", "", "") + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + // Send thinking delta + thinkingEvent := kiroclaude.BuildClaudeThinkingDeltaEvent(thinkingText, thinkingBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, thinkingEvent, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + accumulatedThinkingContent.WriteString(thinkingText) + } + // Close thinking block + if isThinkingBlockOpen { + blockStop := kiroclaude.BuildClaudeThinkingBlockStopEvent(thinkingBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + isThinkingBlockOpen = false + } + inThinkBlock = false + processContent = processContent[endIdx+len(kirocommon.ThinkingEndTag):] + log.Debugf("kiro: closed thinking block, remaining content: %d chars", len(processContent)) + } else { + // No end tag found - check for partial match at end + partialMatch := false + for i := 1; i < 
len(kirocommon.ThinkingEndTag) && i <= len(processContent); i++ { + if strings.HasSuffix(processContent, kirocommon.ThinkingEndTag[:i]) { + // Possible partial tag at end, buffer it + pendingContent.WriteString(processContent[len(processContent)-i:]) + processContent = processContent[:len(processContent)-i] + partialMatch = true + break + } + } + if !partialMatch || len(processContent) > 0 { + // Emit all as thinking content + if processContent != "" { + if !isThinkingBlockOpen { + contentBlockIndex++ + thinkingBlockIndex = contentBlockIndex + isThinkingBlockOpen = true + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(thinkingBlockIndex, "thinking", "", "") + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + thinkingEvent := kiroclaude.BuildClaudeThinkingDeltaEvent(processContent, thinkingBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, thinkingEvent, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + accumulatedThinkingContent.WriteString(processContent) + } + } + processContent = "" + } + } else { + // Not in thinking block, look for + startIdx := strings.Index(processContent, kirocommon.ThinkingStartTag) + if startIdx >= 0 { + // Found start tag - emit text content before the tag + textBefore := processContent[:startIdx] + if textBefore != "" { + // Close thinking block if open + if isThinkingBlockOpen { + blockStop := kiroclaude.BuildClaudeThinkingBlockStopEvent(thinkingBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + isThinkingBlockOpen = false 
+ } + // Ensure text block is open + if !isTextBlockOpen { + contentBlockIndex++ + isTextBlockOpen = true + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(contentBlockIndex, "text", "", "") + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + // Send text delta + claudeEvent := kiroclaude.BuildClaudeStreamEvent(textBefore, contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, claudeEvent, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + // Close text block before entering thinking + if isTextBlockOpen { + blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + isTextBlockOpen = false + } + inThinkBlock = true + processContent = processContent[startIdx+len(kirocommon.ThinkingStartTag):] + log.Debugf("kiro: entered thinking block") + } else { + // No start tag found - check for partial match at end + partialMatch := false + for i := 1; i < len(kirocommon.ThinkingStartTag) && i <= len(processContent); i++ { + if strings.HasSuffix(processContent, kirocommon.ThinkingStartTag[:i]) { + // Possible partial tag at end, buffer it + pendingContent.WriteString(processContent[len(processContent)-i:]) + processContent = processContent[:len(processContent)-i] + partialMatch = true + break + } + } + if !partialMatch || len(processContent) > 0 { + // Emit all as text content + if processContent != "" { + if !isTextBlockOpen { + contentBlockIndex++ + isTextBlockOpen = true + blockStart := 
kiroclaude.BuildClaudeContentBlockStartEvent(contentBlockIndex, "text", "", "") + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + claudeEvent := kiroclaude.BuildClaudeStreamEvent(processContent, contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, claudeEvent, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + } + processContent = "" + } + } + } + } + + // Handle tool uses in response (with deduplication) + for _, tu := range toolUses { + toolUseID := kirocommon.GetString(tu, "toolUseId") + toolName := kirocommon.GetString(tu, "name") + + // Check for duplicate + if processedIDs[toolUseID] { + log.Debugf("kiro: skipping duplicate tool use in stream: %s", toolUseID) + continue + } + processedIDs[toolUseID] = true + + hasToolUses = true + // Close text block if open before starting tool_use block + if isTextBlockOpen && contentBlockIndex >= 0 { + blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + isTextBlockOpen = false + } + + // Emit tool_use content block + contentBlockIndex++ + + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(contentBlockIndex, "tool_use", toolUseID, toolName) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + + // Send input_json_delta with the tool input + if input, ok := 
tu["input"].(map[string]interface{}); ok { + inputJSON, err := json.Marshal(input) + if err != nil { + log.Debugf("kiro: failed to marshal tool input: %v", err) + // Don't continue - still need to close the block + } else { + inputDelta := kiroclaude.BuildClaudeInputJsonDeltaEvent(string(inputJSON), contentBlockIndex) + sseData = sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, inputDelta, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + } + + // Close tool_use block (always close even if input marshal failed) + blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + sseData = sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + + case "reasoningContentEvent": + // Handle official reasoningContentEvent from Kiro API + // This replaces tag-based thinking detection with the proper event type + // Official format: { text: string, signature?: string, redactedContent?: base64 } + var thinkingText string + var signature string + + if re, ok := event["reasoningContentEvent"].(map[string]interface{}); ok { + if text, ok := re["text"].(string); ok { + thinkingText = text + } + if sig, ok := re["signature"].(string); ok { + signature = sig + if len(sig) > 20 { + log.Debugf("kiro: reasoningContentEvent has signature: %s...", sig[:20]) + } else { + log.Debugf("kiro: reasoningContentEvent has signature: %s", sig) + } + } + } else { + // Try direct fields + if text, ok := event["text"].(string); ok { + thinkingText = text + } + if sig, ok := event["signature"].(string); ok { + signature = sig + } + } + + if thinkingText != "" { + hasOfficialReasoningEvent = true + // Close text block if open before starting thinking block + if isTextBlockOpen && contentBlockIndex >= 0 { + 
blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + isTextBlockOpen = false + } + + // Start thinking block if not already open + if !isThinkingBlockOpen { + contentBlockIndex++ + thinkingBlockIndex = contentBlockIndex + isThinkingBlockOpen = true + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(thinkingBlockIndex, "thinking", "", "") + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + + // Send thinking content + thinkingEvent := kiroclaude.BuildClaudeThinkingDeltaEvent(thinkingText, thinkingBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, thinkingEvent, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + + // Accumulate for token counting + accumulatedThinkingContent.WriteString(thinkingText) + log.Debugf("kiro: received reasoningContentEvent, text length: %d, has signature: %v", len(thinkingText), signature != "") + } + + // Note: We don't close the thinking block here - it will be closed when we see + // the next assistantResponseEvent or at the end of the stream + _ = signature // Signature can be used for verification if needed + + case "toolUseEvent": + // Handle dedicated tool use events with input buffering + completedToolUses, newState := kiroclaude.ProcessToolUseEvent(event, currentToolUse, processedIDs) + currentToolUse = newState + + // Emit completed tool uses + for _, tu := range completedToolUses { + // Skip truncated tools - don't emit fake marker tool_use + if 
tu.IsTruncated { + log.Warnf("kiro: streamToChannel skipping truncated tool: %s (ID: %s)", tu.Name, tu.ToolUseID) + continue + } + + hasToolUses = true + + // Close text block if open + if isTextBlockOpen && contentBlockIndex >= 0 { + blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + isTextBlockOpen = false + } + + contentBlockIndex++ + + blockStart := kiroclaude.BuildClaudeContentBlockStartEvent(contentBlockIndex, "tool_use", tu.ToolUseID, tu.Name) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStart, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + + if tu.Input != nil { + inputJSON, err := json.Marshal(tu.Input) + if err != nil { + log.Debugf("kiro: failed to marshal tool input in toolUseEvent: %v", err) + } else { + inputDelta := kiroclaude.BuildClaudeInputJsonDeltaEvent(string(inputJSON), contentBlockIndex) + sseData = sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, inputDelta, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + } + + blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + sseData = sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + + case "supplementaryWebLinksEvent": + if inputTokens, ok := event["inputTokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + } + if outputTokens, ok := event["outputTokens"].(float64); ok { + 
totalUsage.OutputTokens = int64(outputTokens) + } + + case "messageMetadataEvent", "metadataEvent": + // Handle message metadata events which contain token counts + // Official format: { tokenUsage: { outputTokens, totalTokens, uncachedInputTokens, cacheReadInputTokens, cacheWriteInputTokens, contextUsagePercentage } } + var metadata map[string]interface{} + if m, ok := event["messageMetadataEvent"].(map[string]interface{}); ok { + metadata = m + } else if m, ok := event["metadataEvent"].(map[string]interface{}); ok { + metadata = m + } else { + metadata = event // event itself might be the metadata + } + + // Check for nested tokenUsage object (official format) + if tokenUsage, ok := metadata["tokenUsage"].(map[string]interface{}); ok { + // outputTokens - precise output token count + if outputTokens, ok := tokenUsage["outputTokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + hasUpstreamUsage = true + log.Infof("kiro: streamToChannel found precise outputTokens in tokenUsage: %d", totalUsage.OutputTokens) + } + // totalTokens - precise total token count + if totalTokens, ok := tokenUsage["totalTokens"].(float64); ok { + totalUsage.TotalTokens = int64(totalTokens) + log.Infof("kiro: streamToChannel found precise totalTokens in tokenUsage: %d", totalUsage.TotalTokens) + } + // uncachedInputTokens - input tokens not from cache + if uncachedInputTokens, ok := tokenUsage["uncachedInputTokens"].(float64); ok { + totalUsage.InputTokens = int64(uncachedInputTokens) + hasUpstreamUsage = true + log.Infof("kiro: streamToChannel found uncachedInputTokens in tokenUsage: %d", totalUsage.InputTokens) + } + // cacheReadInputTokens - tokens read from cache + if cacheReadTokens, ok := tokenUsage["cacheReadInputTokens"].(float64); ok { + // Add to input tokens if we have uncached tokens, otherwise use as input + if totalUsage.InputTokens > 0 { + totalUsage.InputTokens += int64(cacheReadTokens) + } else { + totalUsage.InputTokens = int64(cacheReadTokens) + } + 
hasUpstreamUsage = true + log.Debugf("kiro: streamToChannel found cacheReadInputTokens in tokenUsage: %d", int64(cacheReadTokens)) + } + // contextUsagePercentage - can be used as fallback for input token estimation + if ctxPct, ok := tokenUsage["contextUsagePercentage"].(float64); ok { + upstreamContextPercentage = ctxPct + log.Debugf("kiro: streamToChannel found contextUsagePercentage in tokenUsage: %.2f%%", ctxPct) + } + } + + // Fallback: check for direct fields in metadata (legacy format) + if totalUsage.InputTokens == 0 { + if inputTokens, ok := metadata["inputTokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + hasUpstreamUsage = true + log.Debugf("kiro: streamToChannel found inputTokens in messageMetadataEvent: %d", totalUsage.InputTokens) + } + } + if totalUsage.OutputTokens == 0 { + if outputTokens, ok := metadata["outputTokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + hasUpstreamUsage = true + log.Debugf("kiro: streamToChannel found outputTokens in messageMetadataEvent: %d", totalUsage.OutputTokens) + } + } + if totalUsage.TotalTokens == 0 { + if totalTokens, ok := metadata["totalTokens"].(float64); ok { + totalUsage.TotalTokens = int64(totalTokens) + log.Debugf("kiro: streamToChannel found totalTokens in messageMetadataEvent: %d", totalUsage.TotalTokens) + } + } + + case "usageEvent", "usage": + // Handle dedicated usage events + if inputTokens, ok := event["inputTokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + log.Debugf("kiro: streamToChannel found inputTokens in usageEvent: %d", totalUsage.InputTokens) + } + if outputTokens, ok := event["outputTokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + log.Debugf("kiro: streamToChannel found outputTokens in usageEvent: %d", totalUsage.OutputTokens) + } + if totalTokens, ok := event["totalTokens"].(float64); ok { + totalUsage.TotalTokens = int64(totalTokens) + log.Debugf("kiro: streamToChannel found totalTokens in 
usageEvent: %d", totalUsage.TotalTokens) + } + // Also check nested usage object + if usageObj, ok := event["usage"].(map[string]interface{}); ok { + if inputTokens, ok := usageObj["input_tokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + } else if inputTokens, ok := usageObj["prompt_tokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + } + if outputTokens, ok := usageObj["output_tokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + } else if outputTokens, ok := usageObj["completion_tokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + } + if totalTokens, ok := usageObj["total_tokens"].(float64); ok { + totalUsage.TotalTokens = int64(totalTokens) + } + log.Debugf("kiro: streamToChannel found usage object: input=%d, output=%d, total=%d", + totalUsage.InputTokens, totalUsage.OutputTokens, totalUsage.TotalTokens) + } + + case "metricsEvent": + // Handle metrics events which may contain usage data + if metrics, ok := event["metricsEvent"].(map[string]interface{}); ok { + if inputTokens, ok := metrics["inputTokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + } + if outputTokens, ok := metrics["outputTokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + } + log.Debugf("kiro: streamToChannel found metricsEvent: input=%d, output=%d", + totalUsage.InputTokens, totalUsage.OutputTokens) + } + } + + // Check nested usage event + if usageEvent, ok := event["supplementaryWebLinksEvent"].(map[string]interface{}); ok { + if inputTokens, ok := usageEvent["inputTokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + } + if outputTokens, ok := usageEvent["outputTokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + } + } + + // Check for direct token fields in any event (fallback) + if totalUsage.InputTokens == 0 { + if inputTokens, ok := event["inputTokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + 
log.Debugf("kiro: streamToChannel found direct inputTokens: %d", totalUsage.InputTokens) + } + } + if totalUsage.OutputTokens == 0 { + if outputTokens, ok := event["outputTokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + log.Debugf("kiro: streamToChannel found direct outputTokens: %d", totalUsage.OutputTokens) + } + } + + // Check for usage object in any event (OpenAI format) + if totalUsage.InputTokens == 0 || totalUsage.OutputTokens == 0 { + if usageObj, ok := event["usage"].(map[string]interface{}); ok { + if totalUsage.InputTokens == 0 { + if inputTokens, ok := usageObj["input_tokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + } else if inputTokens, ok := usageObj["prompt_tokens"].(float64); ok { + totalUsage.InputTokens = int64(inputTokens) + } + } + if totalUsage.OutputTokens == 0 { + if outputTokens, ok := usageObj["output_tokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + } else if outputTokens, ok := usageObj["completion_tokens"].(float64); ok { + totalUsage.OutputTokens = int64(outputTokens) + } + } + if totalUsage.TotalTokens == 0 { + if totalTokens, ok := usageObj["total_tokens"].(float64); ok { + totalUsage.TotalTokens = int64(totalTokens) + } + } + log.Debugf("kiro: streamToChannel found usage object (fallback): input=%d, output=%d, total=%d", + totalUsage.InputTokens, totalUsage.OutputTokens, totalUsage.TotalTokens) + } + } + } + + // Close content block if open + if isTextBlockOpen && contentBlockIndex >= 0 { + blockStop := kiroclaude.BuildClaudeContentBlockStopEvent(contentBlockIndex) + sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, blockStop, &translatorParam) + for _, chunk := range sseData { + enqueueTranslatedSSE(out, chunk) + } + } + + // Streaming token calculation - calculate output tokens from accumulated content + // Only use local estimation if server didn't provide usage (server-side usage 
takes priority)
+	if totalUsage.OutputTokens == 0 && accumulatedContent.Len() > 0 {
+		// Try to use tiktoken for accurate counting
+		if enc, err := getTokenizer(model); err == nil {
+			if tokenCount, countErr := enc.Count(accumulatedContent.String()); countErr == nil {
+				totalUsage.OutputTokens = int64(tokenCount)
+				log.Debugf("kiro: streamToChannel calculated output tokens using tiktoken: %d", totalUsage.OutputTokens)
+			} else {
+				// Fallback on count error: estimate from character count
+				totalUsage.OutputTokens = int64(accumulatedContent.Len() / 4)
+				if totalUsage.OutputTokens == 0 {
+					totalUsage.OutputTokens = 1
+				}
+				log.Debugf("kiro: streamToChannel tiktoken count failed, estimated from chars: %d", totalUsage.OutputTokens)
+			}
+		} else {
+			// Fallback: estimate from character count (roughly 4 chars per token)
+			totalUsage.OutputTokens = int64(accumulatedContent.Len() / 4)
+			if totalUsage.OutputTokens == 0 {
+				totalUsage.OutputTokens = 1
+			}
+			log.Debugf("kiro: streamToChannel estimated output tokens from chars: %d (content len: %d)", totalUsage.OutputTokens, accumulatedContent.Len())
+		}
+	} else if totalUsage.OutputTokens == 0 && outputLen > 0 {
+		// Legacy fallback using outputLen
+		totalUsage.OutputTokens = int64(outputLen / 4)
+		if totalUsage.OutputTokens == 0 {
+			totalUsage.OutputTokens = 1
+		}
+	}
+
+	// Use contextUsagePercentage to calculate more accurate input tokens
+	// Kiro model has 200k max context, contextUsagePercentage represents the percentage used
+	// Formula: input_tokens = contextUsagePercentage * 200000 / 100
+	// Note: The effective input context is ~170k (200k - 30k reserved for output)
+	if upstreamContextPercentage > 0 {
+		// Calculate input tokens from context percentage
+		// Using 200k as the base since that's what Kiro reports against
+		calculatedInputTokens := int64(upstreamContextPercentage * 200000 / 100)
+
+		// Prefer the upstream-derived value over the local estimate: it reflects the
+		// server's own context accounting, so it unconditionally replaces the estimate
+		if calculatedInputTokens > 0 {
+			localEstimate := totalUsage.InputTokens
+			totalUsage.InputTokens = calculatedInputTokens
+			log.Debugf("kiro: using contextUsagePercentage (%.2f%%) to calculate input tokens: %d (local estimate was: %d)",
+				upstreamContextPercentage, calculatedInputTokens, localEstimate)
+		}
+	}
+
+	totalUsage.TotalTokens = totalUsage.InputTokens + totalUsage.OutputTokens
+
+	// Log upstream usage information if received
+	if hasUpstreamUsage {
+		log.Debugf("kiro: upstream usage - credits: %.4f, context: %.2f%%, final tokens - input: %d, output: %d, total: %d",
+			upstreamCreditUsage, upstreamContextPercentage,
+			totalUsage.InputTokens, totalUsage.OutputTokens, totalUsage.TotalTokens)
+	}
+
+	// Determine stop reason: prefer upstream, then detect tool_use, default to end_turn
+	stopReason := upstreamStopReason
+	if stopReason == "" {
+		if hasToolUses {
+			stopReason = "tool_use"
+			log.Debugf("kiro: streamToChannel using fallback stop_reason: tool_use")
+		} else {
+			stopReason = "end_turn"
+			log.Debugf("kiro: streamToChannel using fallback stop_reason: end_turn")
+		}
+	}
+
+	// Log warning if response was truncated due to max_tokens
+	if stopReason == "max_tokens" {
+		log.Warnf("kiro: response truncated due to max_tokens limit (streamToChannel)")
+	}
+
+	// Send message_delta event
+	msgDelta := kiroclaude.BuildClaudeMessageDeltaEvent(stopReason, totalUsage)
+	sseData := sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, msgDelta, &translatorParam)
+	for _, chunk := range sseData {
+		enqueueTranslatedSSE(out, chunk)
+	}
+
+	// Send message_stop event separately
+	msgStop := kiroclaude.BuildClaudeMessageStopOnlyEvent()
+	sseData = sdktranslator.TranslateStream(ctx, sdktranslator.FromString("kiro"), targetFormat, model, originalReq, claudeBody, msgStop, &translatorParam)
+	for _, chunk := range sseData {
+		enqueueTranslatedSSE(out, chunk)
+	}
+	// 
reporter.publish is called via defer +} + +// NOTE: Claude SSE event builders moved to internal/translator/kiro/claude/kiro_claude_stream.go +// The executor now uses kiroclaude.BuildClaude*Event() functions instead + +// CountTokens counts tokens locally using tiktoken since Kiro API doesn't expose a token counting endpoint. +// This provides approximate token counts for client requests. +func (e *KiroExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + // Use tiktoken for local token counting + enc, err := getTokenizer(req.Model) + if err != nil { + log.Warnf("kiro: CountTokens failed to get tokenizer: %v, falling back to estimate", err) + // Fallback: estimate from payload size (roughly 4 chars per token) + estimatedTokens := len(req.Payload) / 4 + if estimatedTokens == 0 && len(req.Payload) > 0 { + estimatedTokens = 1 + } + return cliproxyexecutor.Response{ + Payload: []byte(fmt.Sprintf(`{"count":%d}`, estimatedTokens)), + }, nil + } + + // Try to count tokens from the request payload + var totalTokens int64 + + // Try OpenAI chat format first + if tokens, countErr := countOpenAIChatTokens(enc, req.Payload); countErr == nil && tokens > 0 { + totalTokens = tokens + log.Debugf("kiro: CountTokens counted %d tokens using OpenAI chat format", totalTokens) + } else { + // Fallback: count raw payload tokens + if tokenCount, countErr := enc.Count(string(req.Payload)); countErr == nil { + totalTokens = int64(tokenCount) + log.Debugf("kiro: CountTokens counted %d tokens from raw payload", totalTokens) + } else { + // Final fallback: estimate from payload size + totalTokens = int64(len(req.Payload) / 4) + if totalTokens == 0 && len(req.Payload) > 0 { + totalTokens = 1 + } + log.Debugf("kiro: CountTokens estimated %d tokens from payload size", totalTokens) + } + } + + return cliproxyexecutor.Response{ + Payload: []byte(fmt.Sprintf(`{"count":%d}`, 
totalTokens)), + }, nil +} + +// Refresh refreshes the Kiro OAuth token. +// Supports both AWS Builder ID (SSO OIDC) and Google OAuth (social login). +// Uses mutex to prevent race conditions when multiple concurrent requests try to refresh. +func (e *KiroExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + // Serialize token refresh operations to prevent race conditions + e.refreshMu.Lock() + defer e.refreshMu.Unlock() + + var authID string + if auth != nil { + authID = auth.ID + } else { + authID = "" + } + log.Debugf("kiro executor: refresh called for auth %s", authID) + if auth == nil { + return nil, fmt.Errorf("kiro executor: auth is nil") + } + + // Double-check: After acquiring lock, verify token still needs refresh + // Another goroutine may have already refreshed while we were waiting + // NOTE: This check has a design limitation - it reads from the auth object passed in, + // not from persistent storage. If another goroutine returns a new Auth object (via Clone), + // this check won't see those updates. The mutex still prevents truly concurrent refreshes, + // but queued goroutines may still attempt redundant refreshes. This is acceptable as + // the refresh operation is idempotent and the extra API calls are infrequent. 
+ if auth.Metadata != nil { + if lastRefresh, ok := auth.Metadata["last_refresh"].(string); ok { + if refreshTime, err := time.Parse(time.RFC3339, lastRefresh); err == nil { + // If token was refreshed within the last 30 seconds, skip refresh + if time.Since(refreshTime) < 30*time.Second { + log.Debugf("kiro executor: token was recently refreshed by another goroutine, skipping") + return auth, nil + } + } + } + // Also check if expires_at is now in the future with sufficient buffer + if expiresAt, ok := auth.Metadata["expires_at"].(string); ok { + if expTime, err := time.Parse(time.RFC3339, expiresAt); err == nil { + // If token expires more than 20 minutes from now, it's still valid + if time.Until(expTime) > 20*time.Minute { + log.Debugf("kiro executor: token is still valid (expires in %v), skipping refresh", time.Until(expTime)) + // CRITICAL FIX: Set NextRefreshAfter to prevent frequent refresh checks + // Without this, shouldRefresh() will return true again in 30 seconds + updated := auth.Clone() + // Set next refresh to 20 minutes before expiry, or at least 30 seconds from now + nextRefresh := expTime.Add(-20 * time.Minute) + minNextRefresh := time.Now().Add(30 * time.Second) + if nextRefresh.Before(minNextRefresh) { + nextRefresh = minNextRefresh + } + updated.NextRefreshAfter = nextRefresh + log.Debugf("kiro executor: setting NextRefreshAfter to %v (in %v)", nextRefresh.Format(time.RFC3339), time.Until(nextRefresh)) + return updated, nil + } + } + } + } + + var refreshToken string + var clientID, clientSecret string + var authMethod string + var region, startURL string + + if auth.Metadata != nil { + if rt, ok := auth.Metadata["refresh_token"].(string); ok { + refreshToken = rt + } + if cid, ok := auth.Metadata["client_id"].(string); ok { + clientID = cid + } + if cs, ok := auth.Metadata["client_secret"].(string); ok { + clientSecret = cs + } + if am, ok := auth.Metadata["auth_method"].(string); ok { + authMethod = am + } + if r, ok := 
auth.Metadata["region"].(string); ok { + region = r + } + if su, ok := auth.Metadata["start_url"].(string); ok { + startURL = su + } + } + + if refreshToken == "" { + return nil, fmt.Errorf("kiro executor: refresh token not found") + } + + var tokenData *kiroauth.KiroTokenData + var err error + + ssoClient := kiroauth.NewSSOOIDCClient(e.cfg) + + // Use SSO OIDC refresh for AWS Builder ID or IDC, otherwise use Kiro's OAuth refresh endpoint + switch { + case clientID != "" && clientSecret != "" && authMethod == "idc" && region != "": + // IDC refresh with region-specific endpoint + log.Debugf("kiro executor: using SSO OIDC refresh for IDC (region=%s)", region) + tokenData, err = ssoClient.RefreshTokenWithRegion(ctx, clientID, clientSecret, refreshToken, region, startURL) + case clientID != "" && clientSecret != "" && authMethod == "builder-id": + // Builder ID refresh with default endpoint + log.Debugf("kiro executor: using SSO OIDC refresh for AWS Builder ID") + tokenData, err = ssoClient.RefreshToken(ctx, clientID, clientSecret, refreshToken) + default: + // Fallback to Kiro's OAuth refresh endpoint (for social auth: Google/GitHub) + log.Debugf("kiro executor: using Kiro OAuth refresh endpoint") + oauth := kiroauth.NewKiroOAuth(e.cfg) + tokenData, err = oauth.RefreshToken(ctx, refreshToken) + } + + if err != nil { + return nil, fmt.Errorf("kiro executor: token refresh failed: %w", err) + } + + updated := auth.Clone() + now := time.Now() + updated.UpdatedAt = now + updated.LastRefreshedAt = now + + if updated.Metadata == nil { + updated.Metadata = make(map[string]any) + } + updated.Metadata["access_token"] = tokenData.AccessToken + updated.Metadata["refresh_token"] = tokenData.RefreshToken + updated.Metadata["expires_at"] = tokenData.ExpiresAt + updated.Metadata["last_refresh"] = now.Format(time.RFC3339) + if tokenData.ProfileArn != "" { + updated.Metadata["profile_arn"] = tokenData.ProfileArn + } + if tokenData.AuthMethod != "" { + updated.Metadata["auth_method"] = 
tokenData.AuthMethod + } + if tokenData.Provider != "" { + updated.Metadata["provider"] = tokenData.Provider + } + // Preserve client credentials for future refreshes (AWS Builder ID) + if tokenData.ClientID != "" { + updated.Metadata["client_id"] = tokenData.ClientID + } + if tokenData.ClientSecret != "" { + updated.Metadata["client_secret"] = tokenData.ClientSecret + } + // Preserve region and start_url for IDC token refresh + if tokenData.Region != "" { + updated.Metadata["region"] = tokenData.Region + } + if tokenData.StartURL != "" { + updated.Metadata["start_url"] = tokenData.StartURL + } + + if updated.Attributes == nil { + updated.Attributes = make(map[string]string) + } + updated.Attributes["access_token"] = tokenData.AccessToken + if tokenData.ProfileArn != "" { + updated.Attributes["profile_arn"] = tokenData.ProfileArn + } + + // NextRefreshAfter is aligned with RefreshLead (20min) + if expiresAt, parseErr := time.Parse(time.RFC3339, tokenData.ExpiresAt); parseErr == nil { + updated.NextRefreshAfter = expiresAt.Add(-20 * time.Minute) + } + + log.Infof("kiro executor: token refreshed successfully, expires at %s", tokenData.ExpiresAt) + return updated, nil +} + +// persistRefreshedAuth persists a refreshed auth record to disk. +// This ensures token refreshes from inline retry are saved to the auth file. 
+func (e *KiroExecutor) persistRefreshedAuth(auth *cliproxyauth.Auth) error { + if auth == nil || auth.Metadata == nil { + return fmt.Errorf("kiro executor: cannot persist nil auth or metadata") + } + + // Determine the file path from auth attributes or filename + var authPath string + if auth.Attributes != nil { + if p := strings.TrimSpace(auth.Attributes["path"]); p != "" { + authPath = p + } + } + if authPath == "" { + fileName := strings.TrimSpace(auth.FileName) + if fileName == "" { + return fmt.Errorf("kiro executor: auth has no file path or filename") + } + if filepath.IsAbs(fileName) { + authPath = fileName + } else if e.cfg != nil && e.cfg.AuthDir != "" { + authPath = filepath.Join(e.cfg.AuthDir, fileName) + } else { + return fmt.Errorf("kiro executor: cannot determine auth file path") + } + } + + // Marshal metadata to JSON + raw, err := json.Marshal(auth.Metadata) + if err != nil { + return fmt.Errorf("kiro executor: marshal metadata failed: %w", err) + } + + // Write to temp file first, then rename (atomic write) + tmp := authPath + ".tmp" + if err := os.WriteFile(tmp, raw, 0o600); err != nil { + return fmt.Errorf("kiro executor: write temp auth file failed: %w", err) + } + if err := os.Rename(tmp, authPath); err != nil { + return fmt.Errorf("kiro executor: rename auth file failed: %w", err) + } + + log.Debugf("kiro executor: persisted refreshed auth to %s", authPath) + return nil +} + +// fetchAndSaveProfileArn fetches profileArn from API if missing, updates auth and persists to file. 
+func (e *KiroExecutor) fetchAndSaveProfileArn(ctx context.Context, auth *cliproxyauth.Auth, accessToken string) string {
+	if auth == nil || auth.Metadata == nil {
+		return ""
+	}
+
+	// Skip for Builder ID - they don't have profiles
+	if authMethod, ok := auth.Metadata["auth_method"].(string); ok && authMethod == "builder-id" {
+		log.Debugf("kiro executor: skipping profileArn fetch for builder-id auth")
+		return ""
+	}
+
+	e.profileArnMu.Lock()
+	defer e.profileArnMu.Unlock()
+
+	// Double-check: another goroutine may have already fetched and saved the profileArn
+	if arn, ok := auth.Metadata["profile_arn"].(string); ok && arn != "" {
+		return arn
+	}
+
+	clientID, _ := auth.Metadata["client_id"].(string)
+	refreshToken, _ := auth.Metadata["refresh_token"].(string)
+
+	ssoClient := kiroauth.NewSSOOIDCClient(e.cfg)
+	profileArn := ssoClient.FetchProfileArn(ctx, accessToken, clientID, refreshToken)
+	if profileArn == "" {
+		log.Debugf("kiro executor: FetchProfileArn returned no profiles")
+		return ""
+	}
+
+	auth.Metadata["profile_arn"] = profileArn
+	if auth.Attributes == nil {
+		auth.Attributes = make(map[string]string)
+	}
+	auth.Attributes["profile_arn"] = profileArn
+
+	if err := e.persistRefreshedAuth(auth); err != nil {
+		log.Warnf("kiro executor: failed to persist profileArn: %v", err)
+	} else {
+		log.Infof("kiro executor: fetched and saved profileArn: %s", profileArn)
+	}
+
+	return profileArn
+}
+
+// reloadAuthFromFile reloads auth data from its backing file (plan B: fallback mechanism).
+// When the in-memory token has expired, try to read the latest token from the file.
+// This closes the window where the background refresher has already updated the file
+// but the in-memory Auth object has not been synced yet.
+func (e *KiroExecutor) reloadAuthFromFile(auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) {
+	if auth == nil {
+		return nil, fmt.Errorf("kiro executor: cannot reload nil auth")
+	}
+
+	// Determine the file path
+	var authPath string
+	if auth.Attributes != nil {
+		if p := strings.TrimSpace(auth.Attributes["path"]); p != "" {
+			authPath = p
+		}
+	}
+	if authPath == "" {
+		fileName := strings.TrimSpace(auth.FileName)
+		if fileName == "" {
+			return nil, fmt.Errorf("kiro executor: auth has no file path or filename for reload")
+		}
+		if filepath.IsAbs(fileName) {
+			authPath = fileName
+		} else if e.cfg != nil && e.cfg.AuthDir != "" {
+			authPath = filepath.Join(e.cfg.AuthDir, fileName)
+		} else {
+			return nil, fmt.Errorf("kiro executor: cannot determine auth file path for reload")
+		}
+	}
+
+	// Read the file
+	raw, err := os.ReadFile(authPath)
+	if err != nil {
+		return nil, fmt.Errorf("kiro executor: failed to read auth file %s: %w", authPath, err)
+	}
+
+	// Parse the JSON
+	var metadata map[string]any
+	if err := json.Unmarshal(raw, &metadata); err != nil {
+		return nil, fmt.Errorf("kiro executor: failed to parse auth file %s: %w", authPath, err)
+	}
+
+	// Check whether the token in the file is newer than the one in memory
+	fileExpiresAt, _ := metadata["expires_at"].(string)
+	fileAccessToken, _ := metadata["access_token"].(string)
+	memExpiresAt, _ := auth.Metadata["expires_at"].(string)
+	memAccessToken, _ := auth.Metadata["access_token"].(string)
+
+	// The file must contain a valid access_token
+	if fileAccessToken == "" {
+		return nil, fmt.Errorf("kiro executor: auth file has no access_token field")
+	}
+
+	// If expires_at is present, check for expiry
+	if fileExpiresAt != "" {
+		fileExpTime, parseErr := time.Parse(time.RFC3339, fileExpiresAt)
+		if parseErr == nil {
+			// If the file token has also expired, do not use it
+			if time.Now().After(fileExpTime) {
+				log.Debugf("kiro executor: file token also expired at %s, not using", fileExpiresAt)
+				return nil, fmt.Errorf("kiro executor: file token also expired")
+			}
+		}
+	}
+
+	// Decide whether the file token is newer than the in-memory one:
+	// Condition 1: the access_token differs (a refresh happened)
+	// Condition 2: expires_at is later (a refresh happened)
+	isNewer := false
+
+	// First check whether the access_token changed
+	if fileAccessToken != memAccessToken {
+		isNewer = true
+		log.Debugf("kiro executor: file access_token differs from memory, using file token")
+	}
+
+	// If the access_token is identical, compare expires_at
+	if !isNewer && fileExpiresAt != "" && memExpiresAt != "" {
+		fileExpTime, fileParseErr := time.Parse(time.RFC3339, fileExpiresAt)
+		memExpTime, memParseErr := time.Parse(time.RFC3339, memExpiresAt)
+		if fileParseErr == nil && memParseErr == nil && fileExpTime.After(memExpTime) {
+			isNewer = true
+			log.Debugf("kiro executor: file expires_at (%s) is newer than memory (%s)", fileExpiresAt, memExpiresAt)
+		}
+	}
+
+	// If the file has no expires_at and the access_token is identical, we cannot tell whether it is newer
+	if !isNewer && fileExpiresAt == "" && fileAccessToken == memAccessToken {
+		return nil, fmt.Errorf("kiro executor: cannot determine if file token is newer (no expires_at, same access_token)")
+	}
+
+	if !isNewer {
+		log.Debugf("kiro executor: file token not newer than memory token")
+		return nil, fmt.Errorf("kiro executor: file token not newer")
+	}
+
+	// Build the updated auth object
+	updated := auth.Clone()
+	updated.Metadata = metadata
+	updated.UpdatedAt = time.Now()
+
+	// Keep Attributes in sync
+	if updated.Attributes == nil {
+		updated.Attributes = make(map[string]string)
+	}
+	if accessToken, ok := metadata["access_token"].(string); ok {
+		updated.Attributes["access_token"] = accessToken
+	}
+	if profileArn, ok := metadata["profile_arn"].(string); ok {
+		updated.Attributes["profile_arn"] = profileArn
+	}
+
+	log.Infof("kiro executor: reloaded auth from file %s, new expires_at: %s", authPath, fileExpiresAt)
+	return updated, nil
+}
+
+// isTokenExpired checks if a JWT access token has expired.
+// Returns true if the token is expired; tokens that cannot be parsed are assumed still valid. 
+func (e *KiroExecutor) isTokenExpired(accessToken string) bool { + if accessToken == "" { + return true + } + + // JWT tokens have 3 parts separated by dots + parts := strings.Split(accessToken, ".") + if len(parts) != 3 { + // Not a JWT token, assume not expired + return false + } + + // Decode the payload (second part) + // JWT uses base64url encoding without padding (RawURLEncoding) + payload := parts[1] + decoded, err := base64.RawURLEncoding.DecodeString(payload) + if err != nil { + // Try with padding added as fallback + switch len(payload) % 4 { + case 2: + payload += "==" + case 3: + payload += "=" + } + decoded, err = base64.URLEncoding.DecodeString(payload) + if err != nil { + log.Debugf("kiro: failed to decode JWT payload: %v", err) + return false + } + } + + var claims struct { + Exp int64 `json:"exp"` + } + if err := json.Unmarshal(decoded, &claims); err != nil { + log.Debugf("kiro: failed to parse JWT claims: %v", err) + return false + } + + if claims.Exp == 0 { + // No expiration claim, assume not expired + return false + } + + expTime := time.Unix(claims.Exp, 0) + now := time.Now() + + // Consider token expired if it expires within 1 minute (buffer for clock skew) + isExpired := now.After(expTime) || expTime.Sub(now) < time.Minute + if isExpired { + log.Debugf("kiro: token expired at %s (now: %s)", expTime.Format(time.RFC3339), now.Format(time.RFC3339)) + } + + return isExpired +} + +// ══════════════════════════════════════════════════════════════════════════════ +// Web Search Handler (MCP API) +// ══════════════════════════════════════════════════════════════════════════════ + +// fetchToolDescription caching: +// Uses a mutex + fetched flag to ensure only one goroutine fetches at a time, +// with automatic retry on failure: +// - On failure, fetched stays false so subsequent calls will retry +// - On success, fetched is set to true — subsequent calls skip immediately (mutex-free fast path) +// The cached description is stored in the translator 
package via kiroclaude.SetWebSearchDescription(),
+// enabling the translator's convertClaudeToolsToKiro to read it when building Kiro requests.
+var (
+	toolDescMu      sync.Mutex
+	toolDescFetched atomic.Bool
+)
+
+// fetchToolDescription calls MCP tools/list to get the web_search tool description
+// and caches it. Safe to call concurrently — only one goroutine fetches at a time.
+// If the fetch fails, subsequent calls will retry. On success, no further fetches occur.
+// The httpClient parameter allows reusing a shared pooled HTTP client.
+func fetchToolDescription(ctx context.Context, mcpEndpoint, authToken string, httpClient *http.Client, auth *cliproxyauth.Auth, authAttrs map[string]string) {
+	// Fast path: already fetched successfully, no lock needed
+	if toolDescFetched.Load() {
+		return
+	}
+
+	toolDescMu.Lock()
+	defer toolDescMu.Unlock()
+
+	// Double-check after acquiring lock
+	if toolDescFetched.Load() {
+		return
+	}
+
+	handler := newWebSearchHandler(ctx, mcpEndpoint, authToken, httpClient, auth, authAttrs)
+	reqBody := []byte(`{"id":"tools_list","jsonrpc":"2.0","method":"tools/list"}`)
+	log.Debugf("kiro/websearch MCP tools/list request: %d bytes", len(reqBody))
+
+	req, err := http.NewRequestWithContext(ctx, "POST", mcpEndpoint, bytes.NewReader(reqBody))
+	if err != nil {
+		log.Warnf("kiro/websearch: failed to create tools/list request: %v", err)
+		return
+	}
+
+	// Reuse same headers as callMcpAPI
+	handler.setMcpHeaders(req)
+
+	resp, err := handler.httpClient.Do(req)
+	if err != nil {
+		log.Warnf("kiro/websearch: tools/list request failed: %v", err)
+		return
+	}
+	defer resp.Body.Close()
+
+	body, err := io.ReadAll(resp.Body)
+	if err != nil {
+		log.Warnf("kiro/websearch: failed to read tools/list response: %v", err)
+		return
+	}
+	if resp.StatusCode != http.StatusOK {
+		log.Warnf("kiro/websearch: tools/list returned status %d", resp.StatusCode)
+		return
+	}
+	log.Debugf("kiro/websearch MCP tools/list response: [%d] %d bytes", resp.StatusCode, len(body))
+
+	// Parse: {"result":{"tools":[{"name":"web_search","description":"..."}]}}
+	var 
result struct { + Result *struct { + Tools []struct { + Name string `json:"name"` + Description string `json:"description"` + } `json:"tools"` + } `json:"result"` + } + if err := json.Unmarshal(body, &result); err != nil || result.Result == nil { + log.Warnf("kiro/websearch: failed to parse tools/list response") + return + } + + for _, tool := range result.Result.Tools { + if tool.Name == "web_search" && tool.Description != "" { + kiroclaude.SetWebSearchDescription(tool.Description) + toolDescFetched.Store(true) // success — no more fetches + log.Infof("kiro/websearch: cached web_search description from tools/list (%d bytes)", len(tool.Description)) + return + } + } + + // web_search tool not found in response + log.Warnf("kiro/websearch: web_search tool not found in tools/list response") +} + +// webSearchHandler handles web search requests via Kiro MCP API +type webSearchHandler struct { + ctx context.Context + mcpEndpoint string + httpClient *http.Client + authToken string + auth *cliproxyauth.Auth // for applyDynamicFingerprint + authAttrs map[string]string // optional, for custom headers from auth.Attributes +} + +// newWebSearchHandler creates a new webSearchHandler. +// If httpClient is nil, a default client with 30s timeout is used. +// Pass a shared pooled client (e.g. from getKiroPooledHTTPClient) for connection reuse. +func newWebSearchHandler(ctx context.Context, mcpEndpoint, authToken string, httpClient *http.Client, auth *cliproxyauth.Auth, authAttrs map[string]string) *webSearchHandler { + if httpClient == nil { + httpClient = &http.Client{ + Timeout: 30 * time.Second, + } + } + return &webSearchHandler{ + ctx: ctx, + mcpEndpoint: mcpEndpoint, + httpClient: httpClient, + authToken: authToken, + auth: auth, + authAttrs: authAttrs, + } +} + +// setMcpHeaders sets standard MCP API headers on the request, +// aligned with the GAR request pattern. +func (h *webSearchHandler) setMcpHeaders(req *http.Request) { + // 1. 
Content-Type & Accept (aligned with GAR)
+	req.Header.Set("Content-Type", "application/json")
+	req.Header.Set("Accept", "*/*")
+
+	// 2. Kiro-specific headers (aligned with GAR)
+	req.Header.Set("x-amzn-kiro-agent-mode", "vibe")
+	req.Header.Set("x-amzn-codewhisperer-optout", "true")
+
+	// 3. User-Agent: Reuse applyDynamicFingerprint for consistency
+	applyDynamicFingerprint(req, h.auth)
+
+	// 4. AWS SDK identifiers
+	req.Header.Set("Amz-Sdk-Request", "attempt=1; max=3")
+	req.Header.Set("Amz-Sdk-Invocation-Id", uuid.New().String())
+
+	// 5. Authentication
+	req.Header.Set("Authorization", "Bearer "+h.authToken)
+
+	// 6. Custom headers from auth attributes
+	util.ApplyCustomHeadersFromAttrs(req, h.authAttrs)
+}
+
+// mcpMaxRetries is the maximum number of retries for MCP API calls.
+const mcpMaxRetries = 2
+
+// callMcpAPI calls the Kiro MCP API with the given request.
+// Includes retry logic with exponential backoff for retryable errors.
+func (h *webSearchHandler) callMcpAPI(request *kiroclaude.McpRequest) (*kiroclaude.McpResponse, error) {
+	requestBody, err := json.Marshal(request)
+	if err != nil {
+		return nil, fmt.Errorf("failed to marshal MCP request: %w", err)
+	}
+	log.Debugf("kiro/websearch MCP request → %s (%d bytes)", h.mcpEndpoint, len(requestBody))
+
+	var lastErr error
+	for attempt := 0; attempt <= mcpMaxRetries; attempt++ {
+		if attempt > 0 {
+			// Exponential backoff: 1s, 2s, ... capped at 10s
+			backoff := time.Duration(1<<(attempt-1)) * time.Second
+			if backoff > 10*time.Second {
+				backoff = 10 * time.Second
+			}
+			log.Warnf("kiro/websearch: MCP retry %d/%d after %v (last error: %v)", attempt, mcpMaxRetries, backoff, lastErr)
+			select {
+			case <-h.ctx.Done():
+				return nil, h.ctx.Err()
+			case <-time.After(backoff):
+			}
+		}
+
+		req, err := http.NewRequestWithContext(h.ctx, "POST", h.mcpEndpoint, bytes.NewReader(requestBody))
+		if err != nil {
+			return nil, fmt.Errorf("failed to create HTTP request: %w", err)
+		}
+
+		h.setMcpHeaders(req)
+
+		resp, err := h.httpClient.Do(req)
+		if err != nil {
+			lastErr = fmt.Errorf("MCP API request 
failed: %w", err) + continue // network error → retry + } + + body, err := io.ReadAll(resp.Body) + resp.Body.Close() + if err != nil { + lastErr = fmt.Errorf("failed to read MCP response: %w", err) + continue // read error → retry + } + log.Debugf("kiro/websearch MCP response ← [%d] (%d bytes)", resp.StatusCode, len(body)) + + // Retryable HTTP status codes (aligned with GAR: 502, 503, 504) + if resp.StatusCode >= 502 && resp.StatusCode <= 504 { + lastErr = fmt.Errorf("MCP API returned retryable status %d: %s", resp.StatusCode, string(body)) + continue + } + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("MCP API returned status %d: %s", resp.StatusCode, string(body)) + } + + var mcpResponse kiroclaude.McpResponse + if err := json.Unmarshal(body, &mcpResponse); err != nil { + return nil, fmt.Errorf("failed to parse MCP response: %w", err) + } + + if mcpResponse.Error != nil { + code := -1 + if mcpResponse.Error.Code != nil { + code = *mcpResponse.Error.Code + } + msg := "Unknown error" + if mcpResponse.Error.Message != nil { + msg = *mcpResponse.Error.Message + } + return nil, fmt.Errorf("MCP error %d: %s", code, msg) + } + + return &mcpResponse, nil + } + + return nil, lastErr +} + +// webSearchAuthAttrs extracts auth attributes for MCP calls. +// Used by handleWebSearch and handleWebSearchStream to pass custom headers. +func webSearchAuthAttrs(auth *cliproxyauth.Auth) map[string]string { + if auth != nil { + return auth.Attributes + } + return nil +} + +const maxWebSearchIterations = 5 + +// handleWebSearchStream handles web_search requests: +// Step 1: tools/list (sync) → fetch/cache tool description +// Step 2+: MCP search → InjectToolResultsClaude → callKiroAndBuffer loop +// Note: We skip the "model decides to search" step because Claude Code already +// decided to use web_search. The Kiro tool description restricts non-coding +// topics, so asking the model again would cause it to refuse valid searches. 
+func (e *KiroExecutor) handleWebSearchStream(
+	ctx context.Context,
+	auth *cliproxyauth.Auth,
+	req cliproxyexecutor.Request,
+	opts cliproxyexecutor.Options,
+	accessToken, profileArn string,
+) (<-chan cliproxyexecutor.StreamChunk, error) {
+	// Extract search query from Claude Code's web_search tool_use
+	query := kiroclaude.ExtractSearchQuery(req.Payload)
+	if query == "" {
+		log.Warnf("kiro/websearch: failed to extract search query, falling back to normal flow")
+		return e.callKiroDirectStream(ctx, auth, req, opts, accessToken, profileArn)
+	}
+
+	// Build MCP endpoint using shared region resolution (supports api_region + ProfileARN fallback)
+	region := resolveKiroAPIRegion(auth)
+	mcpEndpoint := kiroclaude.BuildMcpEndpoint(region)
+
+	// ── Step 1: tools/list (SYNC) — cache tool description ──
+	{
+		authAttrs := webSearchAuthAttrs(auth)
+		fetchToolDescription(ctx, mcpEndpoint, accessToken, newKiroHTTPClientWithPooling(ctx, e.cfg, auth, 30*time.Second), auth, authAttrs)
+	}
+
+	// Create output channel
+	out := make(chan cliproxyexecutor.StreamChunk)
+
+	// Usage reporting: track web search requests like normal streaming requests
+	reporter := newUsageReporter(ctx, e.Identifier(), req.Model, auth)
+
+	go func() {
+		var wsErr error
+		defer reporter.trackFailure(ctx, &wsErr)
+		defer close(out)
+
+		// Estimate input tokens using tokenizer (matching streamToChannel pattern)
+		var totalUsage usage.Detail
+		if enc, tokErr := getTokenizer(req.Model); tokErr == nil {
+			if inp, errCount := countClaudeChatTokens(enc, req.Payload); errCount == nil && inp > 0 {
+				totalUsage.InputTokens = inp
+			} else {
+				totalUsage.InputTokens = int64(len(req.Payload) / 4)
+			}
+		} else {
+			totalUsage.InputTokens = int64(len(req.Payload) / 4)
+		}
+		if totalUsage.InputTokens == 0 && len(req.Payload) > 0 {
+			totalUsage.InputTokens = 1
+		}
+		var accumulatedOutputLen int
+		defer func() {
+			if wsErr != nil {
+				return // let trackFailure handle failure reporting
+			}
+			totalUsage.OutputTokens = int64(accumulatedOutputLen / 4)
+			if accumulatedOutputLen > 0 && totalUsage.OutputTokens == 0 {
+				totalUsage.OutputTokens = 1
+			}
+			reporter.publish(ctx, totalUsage)
+		}()
+
+		// Send message_start event to client (aligned with streamToChannel pattern)
+		// Use payloadRequestedModel to return user's original model alias
+		msgStart := kiroclaude.BuildClaudeMessageStartEvent(
+			payloadRequestedModel(opts, req.Model),
+			totalUsage.InputTokens,
+		)
+		select {
+		case <-ctx.Done():
+			return
+		case out <- cliproxyexecutor.StreamChunk{Payload: append(msgStart, '\n', '\n')}:
+		}
+
+		// ── Step 2+: MCP search → InjectToolResultsClaude → callKiroAndBuffer loop ──
+		contentBlockIndex := 0
+		currentQuery := query
+
+		// Replace web_search tool description with a minimal one that allows re-search.
+		// The original tools/list description from Kiro restricts non-coding topics,
+		// but we've already decided to search. We keep the tool so the model can
+		// request additional searches when results are insufficient.
+		simplifiedPayload, simplifyErr := kiroclaude.ReplaceWebSearchToolDescription(bytes.Clone(req.Payload))
+		if simplifyErr != nil {
+			log.Warnf("kiro/websearch: failed to simplify web_search tool: %v, using original payload", simplifyErr)
+			simplifiedPayload = bytes.Clone(req.Payload)
+		}
+
+		currentClaudePayload := simplifiedPayload
+		totalSearches := 0
+
+		// Generate toolUseId for the first iteration (Claude Code already decided to search)
+		currentToolUseId := fmt.Sprintf("srvtoolu_%s", kiroclaude.GenerateToolUseID())
+
+		for iteration := 0; iteration < maxWebSearchIterations; iteration++ {
+			log.Infof("kiro/websearch: search iteration %d/%d",
+				iteration+1, maxWebSearchIterations)
+
+			// MCP search
+			_, mcpRequest := kiroclaude.CreateMcpRequest(currentQuery)
+
+			authAttrs := webSearchAuthAttrs(auth)
+			handler := newWebSearchHandler(ctx, mcpEndpoint, accessToken, newKiroHTTPClientWithPooling(ctx, e.cfg, auth, 30*time.Second), auth, authAttrs)
+			mcpResponse, mcpErr := handler.callMcpAPI(mcpRequest)
+
+			var searchResults *kiroclaude.WebSearchResults
+			if mcpErr != nil {
+				log.Warnf("kiro/websearch: MCP API call failed: %v, continuing with empty results", mcpErr)
+			} else {
+				searchResults = kiroclaude.ParseSearchResults(mcpResponse)
+			}
+
+			resultCount := 0
+			if searchResults != nil {
+				resultCount = len(searchResults.Results)
+			}
+			totalSearches++
+			log.Infof("kiro/websearch: iteration %d — got %d search results", iteration+1, resultCount)
+
+			// Send search indicator events to client
+			searchEvents := kiroclaude.GenerateSearchIndicatorEvents(currentQuery, currentToolUseId, searchResults, contentBlockIndex)
+			for _, event := range searchEvents {
+				select {
+				case <-ctx.Done():
+					return
+				case out <- cliproxyexecutor.StreamChunk{Payload: event}:
+				}
+			}
+			contentBlockIndex += 2
+
+			// Inject tool_use + tool_result into Claude payload, then call GAR
+			var err error
+			currentClaudePayload, err = kiroclaude.InjectToolResultsClaude(currentClaudePayload, currentToolUseId, currentQuery, searchResults)
+			if err != nil {
+				log.Warnf("kiro/websearch: failed to inject tool results: %v", err)
+				wsErr = fmt.Errorf("failed to inject tool results: %w", err)
+				e.sendFallbackText(ctx, out, contentBlockIndex, currentQuery, searchResults)
+				return
+			}
+
+			// Call GAR with modified Claude payload (full translation pipeline)
+			modifiedReq := req
+			modifiedReq.Payload = currentClaudePayload
+			kiroChunks, kiroErr := e.callKiroAndBuffer(ctx, auth, modifiedReq, opts, accessToken, profileArn)
+			if kiroErr != nil {
+				log.Warnf("kiro/websearch: Kiro API failed at iteration %d: %v", iteration+1, kiroErr)
+				wsErr = fmt.Errorf("Kiro API failed at iteration %d: %w", iteration+1, kiroErr)
+				e.sendFallbackText(ctx, out, contentBlockIndex, currentQuery, searchResults)
+				return
+			}
+
+			// Analyze response
+			analysis := kiroclaude.AnalyzeBufferedStream(kiroChunks)
+			log.Infof("kiro/websearch: iteration %d — stop_reason: %s, has_tool_use: %v",
+				iteration+1, analysis.StopReason, analysis.HasWebSearchToolUse)
+
+			if analysis.HasWebSearchToolUse && analysis.WebSearchQuery != "" && iteration+1 < maxWebSearchIterations {
+				// Model wants another search
+				filteredChunks := kiroclaude.FilterChunksForClient(kiroChunks, analysis.WebSearchToolUseIndex, contentBlockIndex)
+				for _, chunk := range filteredChunks {
+					select {
+					case <-ctx.Done():
+						return
+					case out <- cliproxyexecutor.StreamChunk{Payload: chunk}:
+					}
+				}
+
+				currentQuery = analysis.WebSearchQuery
+				currentToolUseId = analysis.WebSearchToolUseId
+				continue
+			}
+
+			// Model returned final response — stream to client
+			for _, chunk := range kiroChunks {
+				if contentBlockIndex > 0 && len(chunk) > 0 {
+					adjusted, shouldForward := kiroclaude.AdjustSSEChunk(chunk, contentBlockIndex)
+					if !shouldForward {
+						continue
+					}
+					accumulatedOutputLen += len(adjusted)
+					select {
+					case <-ctx.Done():
+						return
+					case out <- cliproxyexecutor.StreamChunk{Payload: adjusted}:
+					}
+				} else {
+					accumulatedOutputLen += len(chunk)
+					select {
+					case <-ctx.Done():
+						return
+					case out <- cliproxyexecutor.StreamChunk{Payload: chunk}:
+					}
+				}
+			}
+			log.Infof("kiro/websearch: completed after %d search iteration(s), total searches: %d", iteration+1, totalSearches)
+			return
+		}
+
+		log.Warnf("kiro/websearch: reached max iterations (%d), stopping search loop", maxWebSearchIterations)
+	}()
+
+	return out, nil
+}
+
+// handleWebSearch handles web_search requests for non-streaming Execute path.
+// Performs MCP search synchronously, injects results into the request payload,
+// then calls the normal non-streaming Kiro API path which returns a proper
+// Claude JSON response (not SSE chunks).
+func (e *KiroExecutor) handleWebSearch(
+	ctx context.Context,
+	auth *cliproxyauth.Auth,
+	req cliproxyexecutor.Request,
+	opts cliproxyexecutor.Options,
+	accessToken, profileArn string,
+) (cliproxyexecutor.Response, error) {
+	// Extract search query from Claude Code's web_search tool_use
+	query := kiroclaude.ExtractSearchQuery(req.Payload)
+	if query == "" {
+		log.Warnf("kiro/websearch: non-stream: failed to extract search query, falling back to normal Execute")
+		// Fall through to normal non-streaming path
+		return e.executeNonStreamFallback(ctx, auth, req, opts, accessToken, profileArn)
+	}
+
+	// Build MCP endpoint using shared region resolution (supports api_region + ProfileARN fallback)
+	region := resolveKiroAPIRegion(auth)
+	mcpEndpoint := kiroclaude.BuildMcpEndpoint(region)
+
+	// Step 1: Fetch/cache tool description (sync)
+	{
+		authAttrs := webSearchAuthAttrs(auth)
+		fetchToolDescription(ctx, mcpEndpoint, accessToken, newKiroHTTPClientWithPooling(ctx, e.cfg, auth, 30*time.Second), auth, authAttrs)
+	}
+
+	// Step 2: Perform MCP search
+	_, mcpRequest := kiroclaude.CreateMcpRequest(query)
+
+	authAttrs := webSearchAuthAttrs(auth)
+	handler := newWebSearchHandler(ctx, mcpEndpoint, accessToken, newKiroHTTPClientWithPooling(ctx, e.cfg, auth, 30*time.Second), auth, authAttrs)
+	mcpResponse, mcpErr := handler.callMcpAPI(mcpRequest)
+
+	var searchResults *kiroclaude.WebSearchResults
+	if mcpErr != nil {
+		log.Warnf("kiro/websearch: non-stream: MCP API call failed: %v, continuing with empty results", mcpErr)
+	} else {
+		searchResults = kiroclaude.ParseSearchResults(mcpResponse)
+	}
+
+	resultCount := 0
+	if searchResults != nil {
+		resultCount = len(searchResults.Results)
+	}
+	log.Infof("kiro/websearch: non-stream: got %d search results", resultCount)
+
+	// Step 3: Replace restrictive web_search tool description (align with streaming path)
+	simplifiedPayload, simplifyErr := kiroclaude.ReplaceWebSearchToolDescription(bytes.Clone(req.Payload))
+	if simplifyErr != nil {
+		log.Warnf("kiro/websearch: non-stream: failed to simplify web_search tool: %v, using original payload", simplifyErr)
+		simplifiedPayload = bytes.Clone(req.Payload)
+	}
+
+	// Step 4: Inject search tool_use + tool_result into Claude payload
+	currentToolUseId := fmt.Sprintf("srvtoolu_%s", kiroclaude.GenerateToolUseID())
+	modifiedPayload, err := kiroclaude.InjectToolResultsClaude(simplifiedPayload, currentToolUseId, query, searchResults)
+	if err != nil {
+		log.Warnf("kiro/websearch: non-stream: failed to inject tool results: %v, falling back", err)
+		return e.executeNonStreamFallback(ctx, auth, req, opts, accessToken, profileArn)
+	}
+
+	// Step 5: Call Kiro API via the normal non-streaming path (executeWithRetry)
+	// This path uses parseEventStream → BuildClaudeResponse → TranslateNonStream
+	// to produce a proper Claude JSON response
+	modifiedReq := req
+	modifiedReq.Payload = modifiedPayload
+
+	resp, err := e.executeNonStreamFallback(ctx, auth, modifiedReq, opts, accessToken, profileArn)
+	if err != nil {
+		return resp, err
+	}
+
+	// Step 6: Inject server_tool_use + web_search_tool_result into response
+	// so Claude Code can display "Did X searches in Ys"
+	indicators := []kiroclaude.SearchIndicator{
+		{
+			ToolUseID: currentToolUseId,
+			Query:     query,
+			Results:   searchResults,
+		},
+	}
+	injectedPayload, injErr := kiroclaude.InjectSearchIndicatorsInResponse(resp.Payload, indicators)
+	if injErr != nil {
+		log.Warnf("kiro/websearch: non-stream: failed to inject search indicators: %v", injErr)
+	} else {
+		resp.Payload = injectedPayload
+	}
+
+	return resp, nil
+}
+
+// callKiroAndBuffer calls the Kiro API and buffers all response chunks.
+// Returns the buffered chunks for analysis before forwarding to client.
+// Usage reporting is NOT done here — the caller (handleWebSearchStream) manages its own reporter.
+func (e *KiroExecutor) callKiroAndBuffer(
+	ctx context.Context,
+	auth *cliproxyauth.Auth,
+	req cliproxyexecutor.Request,
+	opts cliproxyexecutor.Options,
+	accessToken, profileArn string,
+) ([][]byte, error) {
+	from := opts.SourceFormat
+	to := sdktranslator.FromString("kiro")
+	body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true)
+	log.Debugf("kiro/websearch GAR request: %d bytes", len(body))
+
+	kiroModelID := e.mapModelToKiro(req.Model)
+	isAgentic, isChatOnly := determineAgenticMode(req.Model)
+	effectiveProfileArn := getEffectiveProfileArnWithWarning(auth, profileArn)
+
+	tokenKey := getAccountKey(auth)
+
+	kiroStream, err := e.executeStreamWithRetry(
+		ctx, auth, req, opts, accessToken, effectiveProfileArn,
+		nil, body, from, nil, "", kiroModelID, isAgentic, isChatOnly, tokenKey,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	// Buffer all chunks
+	var chunks [][]byte
+	for chunk := range kiroStream {
+		if chunk.Err != nil {
+			return chunks, chunk.Err
+		}
+		if len(chunk.Payload) > 0 {
+			chunks = append(chunks, bytes.Clone(chunk.Payload))
+		}
+	}
+
+	log.Debugf("kiro/websearch GAR response: %d chunks buffered", len(chunks))
+
+	return chunks, nil
+}
+
+// callKiroDirectStream creates a direct streaming channel to Kiro API without search.
+func (e *KiroExecutor) callKiroDirectStream(
+	ctx context.Context,
+	auth *cliproxyauth.Auth,
+	req cliproxyexecutor.Request,
+	opts cliproxyexecutor.Options,
+	accessToken, profileArn string,
+) (<-chan cliproxyexecutor.StreamChunk, error) {
+	from := opts.SourceFormat
+	to := sdktranslator.FromString("kiro")
+	body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true)
+
+	kiroModelID := e.mapModelToKiro(req.Model)
+	isAgentic, isChatOnly := determineAgenticMode(req.Model)
+	effectiveProfileArn := getEffectiveProfileArnWithWarning(auth, profileArn)
+
+	tokenKey := getAccountKey(auth)
+
+	reporter := newUsageReporter(ctx, e.Identifier(), req.Model, auth)
+	var streamErr error
+	defer reporter.trackFailure(ctx, &streamErr)
+
+	stream, streamErr := e.executeStreamWithRetry(
+		ctx, auth, req, opts, accessToken, effectiveProfileArn,
+		nil, body, from, reporter, "", kiroModelID, isAgentic, isChatOnly, tokenKey,
+	)
+	return stream, streamErr
+}
+
+// sendFallbackText sends a simple text response when the Kiro API fails during the search loop.
+// Delegates SSE event construction to kiroclaude.BuildFallbackTextEvents() for alignment
+// with how streamToChannel() uses BuildClaude*Event() functions.
+func (e *KiroExecutor) sendFallbackText(
+	ctx context.Context,
+	out chan<- cliproxyexecutor.StreamChunk,
+	contentBlockIndex int,
+	query string,
+	searchResults *kiroclaude.WebSearchResults,
+) {
+	events := kiroclaude.BuildFallbackTextEvents(contentBlockIndex, query, searchResults)
+	for _, event := range events {
+		select {
+		case <-ctx.Done():
+			return
+		case out <- cliproxyexecutor.StreamChunk{Payload: append(event, '\n', '\n')}:
+		}
+	}
+}
+
+// executeNonStreamFallback runs the standard non-streaming Execute path for a request.
+// Used by handleWebSearch after injecting search results, or as a fallback.
+func (e *KiroExecutor) executeNonStreamFallback(
+	ctx context.Context,
+	auth *cliproxyauth.Auth,
+	req cliproxyexecutor.Request,
+	opts cliproxyexecutor.Options,
+	accessToken, profileArn string,
+) (cliproxyexecutor.Response, error) {
+	from := opts.SourceFormat
+	to := sdktranslator.FromString("kiro")
+	body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true)
+
+	kiroModelID := e.mapModelToKiro(req.Model)
+	isAgentic, isChatOnly := determineAgenticMode(req.Model)
+	effectiveProfileArn := getEffectiveProfileArnWithWarning(auth, profileArn)
+	tokenKey := getAccountKey(auth)
+
+	reporter := newUsageReporter(ctx, e.Identifier(), req.Model, auth)
+	var err error
+	defer reporter.trackFailure(ctx, &err)
+
+	resp, err := e.executeWithRetry(ctx, auth, req, opts, accessToken, effectiveProfileArn, nil, body, from, to, reporter, "", kiroModelID, isAgentic, isChatOnly, tokenKey)
+	return resp, err
+}
diff --git a/internal/runtime/executor/kiro_executor_test.go b/internal/runtime/executor/kiro_executor_test.go
new file mode 100644
index 0000000000..c4a5e6fa69
--- /dev/null
+++ b/internal/runtime/executor/kiro_executor_test.go
@@ -0,0 +1,423 @@
+package executor
+
+import (
+	"fmt"
+	"testing"
+
+	kiroauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kiro"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+)
+
+func TestBuildKiroEndpointConfigs(t *testing.T) {
+	tests := []struct {
+		name           string
+		region         string
+		expectedURL    string
+		expectedOrigin string
+		expectedName   string
+	}{
+		{
+			name:           "Empty region - defaults to us-east-1",
+			region:         "",
+			expectedURL:    "https://q.us-east-1.amazonaws.com/generateAssistantResponse",
+			expectedOrigin: "AI_EDITOR",
+			expectedName:   "AmazonQ",
+		},
+		{
+			name:           "us-east-1",
+			region:         "us-east-1",
+			expectedURL:    "https://q.us-east-1.amazonaws.com/generateAssistantResponse",
+			expectedOrigin: "AI_EDITOR",
+			expectedName:   "AmazonQ",
+		},
+		{
+			name:           "ap-southeast-1",
+			region:         "ap-southeast-1",
+			expectedURL:    "https://q.ap-southeast-1.amazonaws.com/generateAssistantResponse",
+			expectedOrigin: "AI_EDITOR",
+			expectedName:   "AmazonQ",
+		},
+		{
+			name:           "eu-west-1",
+			region:         "eu-west-1",
+			expectedURL:    "https://q.eu-west-1.amazonaws.com/generateAssistantResponse",
+			expectedOrigin: "AI_EDITOR",
+			expectedName:   "AmazonQ",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			configs := buildKiroEndpointConfigs(tt.region)
+
+			if len(configs) != 2 {
+				t.Fatalf("expected 2 endpoint configs, got %d", len(configs))
+			}
+
+			// Check primary endpoint (AmazonQ)
+			primary := configs[0]
+			if primary.URL != tt.expectedURL {
+				t.Errorf("primary URL = %q, want %q", primary.URL, tt.expectedURL)
+			}
+			if primary.Origin != tt.expectedOrigin {
+				t.Errorf("primary Origin = %q, want %q", primary.Origin, tt.expectedOrigin)
+			}
+			if primary.Name != tt.expectedName {
+				t.Errorf("primary Name = %q, want %q", primary.Name, tt.expectedName)
+			}
+			if primary.AmzTarget != "" {
+				t.Errorf("primary AmzTarget should be empty, got %q", primary.AmzTarget)
+			}
+
+			// Check fallback endpoint (CodeWhisperer)
+			fallback := configs[1]
+			if fallback.Name != "CodeWhisperer" {
+				t.Errorf("fallback Name = %q, want %q", fallback.Name, "CodeWhisperer")
+			}
+			// CodeWhisperer fallback uses the same region as Q endpoint
+			expectedRegion := tt.region
+			if expectedRegion == "" {
+				expectedRegion = kiroDefaultRegion
+			}
+			expectedFallbackURL := fmt.Sprintf("https://codewhisperer.%s.amazonaws.com/generateAssistantResponse", expectedRegion)
+			if fallback.URL != expectedFallbackURL {
+				t.Errorf("fallback URL = %q, want %q", fallback.URL, expectedFallbackURL)
+			}
+			if fallback.AmzTarget == "" {
+				t.Error("fallback AmzTarget should NOT be empty")
+			}
+		})
+	}
+}
+
+func TestGetKiroEndpointConfigs_NilAuth(t *testing.T) {
+	configs := getKiroEndpointConfigs(nil)
+
+	if len(configs) != 2 {
+		t.Fatalf("expected 2 endpoint configs, got %d", len(configs))
+	}
+
+	// Should return default us-east-1 configs
+	if configs[0].Name != "AmazonQ" {
+		t.Errorf("first config Name = %q, want %q", configs[0].Name, "AmazonQ")
+	}
+	expectedURL := "https://q.us-east-1.amazonaws.com/generateAssistantResponse"
+	if configs[0].URL != expectedURL {
+		t.Errorf("first config URL = %q, want %q", configs[0].URL, expectedURL)
+	}
+}
+
+func TestGetKiroEndpointConfigs_WithRegionFromProfileArn(t *testing.T) {
+	auth := &cliproxyauth.Auth{
+		Metadata: map[string]any{
+			"profile_arn": "arn:aws:codewhisperer:ap-southeast-1:123456789012:profile/ABC",
+		},
+	}
+
+	configs := getKiroEndpointConfigs(auth)
+
+	if len(configs) != 2 {
+		t.Fatalf("expected 2 endpoint configs, got %d", len(configs))
+	}
+
+	expectedURL := "https://q.ap-southeast-1.amazonaws.com/generateAssistantResponse"
+	if configs[0].URL != expectedURL {
+		t.Errorf("primary URL = %q, want %q", configs[0].URL, expectedURL)
+	}
+}
+
+func TestGetKiroEndpointConfigs_WithApiRegionOverride(t *testing.T) {
+	auth := &cliproxyauth.Auth{
+		Metadata: map[string]any{
+			"api_region":  "eu-central-1",
+			"profile_arn": "arn:aws:codewhisperer:us-east-1:123456789012:profile/ABC",
+		},
+	}
+
+	configs := getKiroEndpointConfigs(auth)
+
+	// api_region should take precedence over profile_arn
+	expectedURL := "https://q.eu-central-1.amazonaws.com/generateAssistantResponse"
+	if configs[0].URL != expectedURL {
+		t.Errorf("primary URL = %q, want %q", configs[0].URL, expectedURL)
+	}
+}
+
+func TestGetKiroEndpointConfigs_PreferredEndpoint(t *testing.T) {
+	tests := []struct {
+		name              string
+		preference        string
+		expectedFirstName string
+	}{
+		{
+			name:              "Prefer codewhisperer",
+			preference:        "codewhisperer",
+			expectedFirstName: "CodeWhisperer",
+		},
+		{
+			name:              "Prefer ide (alias for codewhisperer)",
+			preference:        "ide",
+			expectedFirstName: "CodeWhisperer",
+		},
+		{
+			name:              "Prefer amazonq",
+			preference:        "amazonq",
+			expectedFirstName: "AmazonQ",
+		},
+		{
+			name:              "Prefer q (alias for amazonq)",
+			preference:        "q",
+			expectedFirstName: "AmazonQ",
+		},
+		{
+			name:              "Prefer cli (alias for amazonq)",
+			preference:        "cli",
+			expectedFirstName: "AmazonQ",
+		},
+		{
+			name:              "Unknown preference - no reordering",
+			preference:        "unknown",
+			expectedFirstName: "AmazonQ",
+		},
+		{
+			name:              "Empty preference - no reordering",
+			preference:        "",
+			expectedFirstName: "AmazonQ",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			auth := &cliproxyauth.Auth{
+				Metadata: map[string]any{
+					"preferred_endpoint": tt.preference,
+				},
+			}
+
+			configs := getKiroEndpointConfigs(auth)
+
+			if configs[0].Name != tt.expectedFirstName {
+				t.Errorf("first endpoint Name = %q, want %q", configs[0].Name, tt.expectedFirstName)
+			}
+		})
+	}
+}
+
+func TestGetKiroEndpointConfigs_PreferredEndpointFromAttributes(t *testing.T) {
+	// Test that preferred_endpoint can also come from Attributes
+	auth := &cliproxyauth.Auth{
+		Metadata:   map[string]any{},
+		Attributes: map[string]string{"preferred_endpoint": "codewhisperer"},
+	}
+
+	configs := getKiroEndpointConfigs(auth)
+
+	if configs[0].Name != "CodeWhisperer" {
+		t.Errorf("first endpoint Name = %q, want %q", configs[0].Name, "CodeWhisperer")
+	}
+}
+
+func TestGetKiroEndpointConfigs_MetadataTakesPrecedenceOverAttributes(t *testing.T) {
+	auth := &cliproxyauth.Auth{
+		Metadata:   map[string]any{"preferred_endpoint": "amazonq"},
+		Attributes: map[string]string{"preferred_endpoint": "codewhisperer"},
+	}
+
+	configs := getKiroEndpointConfigs(auth)
+
+	// Metadata should take precedence
+	if configs[0].Name != "AmazonQ" {
+		t.Errorf("first endpoint Name = %q, want %q", configs[0].Name, "AmazonQ")
+	}
+}
+
+func TestGetAuthValue(t *testing.T) {
+	tests := []struct {
+		name     string
+		auth     *cliproxyauth.Auth
+		key      string
+		expected string
+	}{
+		{
+			name: "From metadata",
+			auth: &cliproxyauth.Auth{
+				Metadata: map[string]any{"test_key": "metadata_value"},
+			},
+			key:      "test_key",
+			expected: "metadata_value",
+		},
+		{
+			name: "From attributes (fallback)",
+			auth: &cliproxyauth.Auth{
+				Attributes: map[string]string{"test_key": "attribute_value"},
+			},
+			key:      "test_key",
+			expected: "attribute_value",
+		},
+		{
+			name: "Metadata takes precedence",
+			auth: &cliproxyauth.Auth{
+				Metadata:   map[string]any{"test_key": "metadata_value"},
+				Attributes: map[string]string{"test_key": "attribute_value"},
+			},
+			key:      "test_key",
+			expected: "metadata_value",
+		},
+		{
+			name: "Key not found",
+			auth: &cliproxyauth.Auth{
+				Metadata:   map[string]any{"other_key": "value"},
+				Attributes: map[string]string{"another_key": "value"},
+			},
+			key:      "test_key",
+			expected: "",
+		},
+		{
+			name: "Nil metadata",
+			auth: &cliproxyauth.Auth{
+				Attributes: map[string]string{"test_key": "attribute_value"},
+			},
+			key:      "test_key",
+			expected: "attribute_value",
+		},
+		{
+			name:     "Both nil",
+			auth:     &cliproxyauth.Auth{},
+			key:      "test_key",
+			expected: "",
+		},
+		{
+			name: "Value is trimmed and lowercased",
+			auth: &cliproxyauth.Auth{
+				Metadata: map[string]any{"test_key": " UPPER_VALUE "},
+			},
+			key:      "test_key",
+			expected: "upper_value",
+		},
+		{
+			name: "Empty string value in metadata - falls back to attributes",
+			auth: &cliproxyauth.Auth{
+				Metadata:   map[string]any{"test_key": ""},
+				Attributes: map[string]string{"test_key": "attribute_value"},
+			},
+			key:      "test_key",
+			expected: "attribute_value",
+		},
+		{
+			name: "Non-string value in metadata - falls back to attributes",
+			auth: &cliproxyauth.Auth{
+				Metadata:   map[string]any{"test_key": 123},
+				Attributes: map[string]string{"test_key": "attribute_value"},
+			},
+			key:      "test_key",
+			expected: "attribute_value",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := getAuthValue(tt.auth, tt.key)
+			if result != tt.expected {
+				t.Errorf("getAuthValue() = %q, want %q", result, tt.expected)
+			}
+		})
+	}
+}
+
+func TestGetAccountKey(t *testing.T) {
+	tests := []struct {
+		name    string
+		auth    *cliproxyauth.Auth
+		checkFn func(t *testing.T, result string)
+	}{
+		{
+			name: "From client_id",
+			auth: &cliproxyauth.Auth{
+				Metadata: map[string]any{
+					"client_id":     "test-client-id-123",
+					"refresh_token": "test-refresh-token-456",
+				},
+			},
+			checkFn: func(t *testing.T, result string) {
+				expected := kiroauth.GetAccountKey("test-client-id-123", "test-refresh-token-456")
+				if result != expected {
+					t.Errorf("expected %s, got %s", expected, result)
+				}
+			},
+		},
+		{
+			name: "From refresh_token only",
+			auth: &cliproxyauth.Auth{
+				Metadata: map[string]any{
+					"refresh_token": "test-refresh-token-789",
+				},
+			},
+			checkFn: func(t *testing.T, result string) {
+				expected := kiroauth.GetAccountKey("", "test-refresh-token-789")
+				if result != expected {
+					t.Errorf("expected %s, got %s", expected, result)
+				}
+			},
+		},
+		{
+			name: "Nil auth",
+			auth: nil,
+			checkFn: func(t *testing.T, result string) {
+				if len(result) != 16 {
+					t.Errorf("expected 16 char key, got %d chars", len(result))
+				}
+			},
+		},
+		{
+			name: "Nil metadata",
+			auth: &cliproxyauth.Auth{},
+			checkFn: func(t *testing.T, result string) {
+				if len(result) != 16 {
+					t.Errorf("expected 16 char key, got %d chars", len(result))
+				}
+			},
+		},
+		{
+			name: "Empty metadata",
+			auth: &cliproxyauth.Auth{
+				Metadata: map[string]any{},
+			},
+			checkFn: func(t *testing.T, result string) {
+				if len(result) != 16 {
+					t.Errorf("expected 16 char key, got %d chars", len(result))
+				}
+			},
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := getAccountKey(tt.auth)
+			tt.checkFn(t, result)
+		})
+	}
+}
+
+func TestEndpointAliases(t *testing.T) {
+	// Verify all expected aliases are defined
+	expectedAliases := map[string]string{
+		"codewhisperer": "codewhisperer",
+		"ide":           "codewhisperer",
+		"amazonq":       "amazonq",
+		"q":             "amazonq",
+		"cli":           "amazonq",
+	}
+
+	for alias, target := range expectedAliases {
+		if actual, ok := endpointAliases[alias]; !ok {
+			t.Errorf("missing alias %q", alias)
+		} else if actual != target {
+			t.Errorf("alias %q = %q, want %q",
alias, actual, target) + } + } + + // Verify no unexpected aliases + if len(endpointAliases) != len(expectedAliases) { + t.Errorf("unexpected number of aliases: got %d, want %d", len(endpointAliases), len(expectedAliases)) + } +} diff --git a/internal/runtime/executor/openai_compat_executor.go b/internal/runtime/executor/openai_compat_executor.go index 7f202055a4..82fc9e97d8 100644 --- a/internal/runtime/executor/openai_compat_executor.go +++ b/internal/runtime/executor/openai_compat_executor.go @@ -10,13 +10,13 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor/helps" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" log "github.com/sirupsen/logrus" "github.com/tidwall/sjson" ) @@ -96,19 +96,21 @@ func (e *OpenAICompatExecutor) Execute(ctx context.Context, auth *cliproxyauth.A originalPayload := originalPayloadSource originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, opts.Stream) translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, opts.Stream) + + translated, err = thinking.ApplyThinking(translated, req.Model, 
from.String(), to.String(), e.Identifier()) + if err != nil { + return resp, err + } + requestedModel := helps.PayloadRequestedModel(opts, req.Model) - translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) + requestPath := helps.PayloadRequestPath(opts) + translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel, requestPath) if opts.Alt == "responses/compact" { if updated, errDelete := sjson.DeleteBytes(translated, "stream"); errDelete == nil { translated = updated } } - translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) - if err != nil { - return resp, err - } - url := strings.TrimSuffix(baseURL, "/") + endpoint httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated)) if err != nil { @@ -198,14 +200,16 @@ func (e *OpenAICompatExecutor) ExecuteStream(ctx context.Context, auth *cliproxy originalPayload := originalPayloadSource originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true) translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true) - requestedModel := helps.PayloadRequestedModel(opts, req.Model) - translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel) translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier()) if err != nil { return nil, err } + requestedModel := helps.PayloadRequestedModel(opts, req.Model) + requestPath := helps.PayloadRequestPath(opts) + translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel, requestPath) + // Request usage data in the final streaming chunk so that token statistics // are captured even when the upstream is an 
OpenAI-compatible provider. translated, _ = sjson.SetBytes(translated, "stream_options.include_usage", true) @@ -279,32 +283,57 @@ if detail, ok := helps.ParseOpenAIStreamUsage(line); ok { reporter.Publish(ctx, detail) } - if len(line) == 0 { + trimmedLine := bytes.TrimSpace(line) + if len(trimmedLine) == 0 { continue } - if !bytes.HasPrefix(line, []byte("data:")) { + if !bytes.HasPrefix(trimmedLine, []byte("data:")) { + if bytes.HasPrefix(trimmedLine, []byte(":")) || bytes.HasPrefix(trimmedLine, []byte("event:")) || + bytes.HasPrefix(trimmedLine, []byte("id:")) || bytes.HasPrefix(trimmedLine, []byte("retry:")) { + continue + } + if bytes.HasPrefix(trimmedLine, []byte("{")) || bytes.HasPrefix(trimmedLine, []byte("[")) { + streamErr := statusErr{code: http.StatusBadGateway, msg: string(trimmedLine)} + helps.RecordAPIResponseError(ctx, e.cfg, streamErr) + reporter.PublishFailure(ctx, streamErr) + select { + case out <- cliproxyexecutor.StreamChunk{Err: streamErr}: + case <-ctx.Done(): + } + return + } + continue + } continue } - // OpenAI-compatible streams are SSE: lines typically prefixed with "data: ". - // Pass through translator; it yields one or more chunks for the target schema. - chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(line), &param) + // OpenAI-compatible streams must use SSE data lines.
+ chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(trimmedLine), &param) for i := range chunks { - out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]}: + case <-ctx.Done(): + return + } + } } if errScan := scanner.Err(); errScan != nil { helps.RecordAPIResponseError(ctx, e.cfg, errScan) - reporter.PublishFailure(ctx) - out <- cliproxyexecutor.StreamChunk{Err: errScan} + reporter.PublishFailure(ctx, errScan) + select { + case out <- cliproxyexecutor.StreamChunk{Err: errScan}: + case <-ctx.Done(): + } } else { // In case the upstream closes the stream without a terminal [DONE] marker. // Feed a synthetic done marker through the translator so pending // response.completed events are still emitted exactly once. chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, []byte("data: [DONE]"), &param) for i := range chunks { - out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + select { + case out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]}: + case <-ctx.Done(): + return + } } } // Ensure we record the request if no usage chunk was ever seen @@ -345,7 +374,9 @@ // Refresh is a no-op for API-key based compatibility providers.
func (e *OpenAICompatExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { log.Debugf("openai compat executor: refresh called") - _ = ctx + if refreshed, handled, err := helps.RefreshAuthViaHome(ctx, e.cfg, auth); handled { + return refreshed, err + } return auth, nil } @@ -378,6 +409,9 @@ func (e *OpenAICompatExecutor) resolveCompatConfig(auth *cliproxyauth.Auth) *con } for i := range e.cfg.OpenAICompatibility { compat := &e.cfg.OpenAICompatibility[i] + if compat.Disabled { + continue + } for _, candidate := range candidates { if candidate != "" && strings.EqualFold(strings.TrimSpace(candidate), compat.Name) { return compat diff --git a/internal/runtime/executor/openai_compat_executor_compact_test.go b/internal/runtime/executor/openai_compat_executor_compact_test.go index fe2812623b..3aab5c9b01 100644 --- a/internal/runtime/executor/openai_compat_executor_compact_test.go +++ b/internal/runtime/executor/openai_compat_executor_compact_test.go @@ -5,12 +5,13 @@ import ( "io" "net/http" "net/http/httptest" + "strings" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" ) @@ -56,3 +57,125 @@ func TestOpenAICompatExecutorCompactPassthrough(t *testing.T) { t.Fatalf("payload = %s", string(resp.Payload)) } } + +func TestOpenAICompatExecutorPayloadOverrideWinsOverThinkingSuffix(t *testing.T) { + var gotBody []byte + server := httptest.NewServer(http.HandlerFunc(func(w 
http.ResponseWriter, r *http.Request) { + body, _ := io.ReadAll(r.Body) + gotBody = body + w.Header().Set("Content-Type", "application/json") + _, _ = w.Write([]byte(`{"id":"chatcmpl_1","object":"chat.completion","choices":[{"index":0,"message":{"role":"assistant","content":"ok"},"finish_reason":"stop"}],"usage":{"prompt_tokens":1,"completion_tokens":1,"total_tokens":2}}`)) + })) + defer server.Close() + + executor := NewOpenAICompatExecutor("openai-compatibility", &config.Config{ + Payload: config.PayloadConfig{ + Override: []config.PayloadRule{ + { + Models: []config.PayloadModelRule{ + {Name: "custom-openai", Protocol: "openai"}, + }, + Params: map[string]any{ + "reasoning_effort": "low", + }, + }, + }, + }, + }) + auth := &cliproxyauth.Auth{Attributes: map[string]string{ + "base_url": server.URL + "/v1", + "api_key": "test", + }} + payload := []byte(`{"model":"custom-openai(high)","messages":[{"role":"user","content":"hi"}]}`) + _, err := executor.Execute(context.Background(), auth, cliproxyexecutor.Request{ + Model: "custom-openai(high)", + Payload: payload, + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + Stream: false, + }) + if err != nil { + t.Fatalf("Execute error: %v", err) + } + if got := gjson.GetBytes(gotBody, "reasoning_effort").String(); got != "low" { + t.Fatalf("reasoning_effort = %q, want %q; body=%s", got, "low", string(gotBody)) + } +} + +func TestOpenAICompatExecutorStreamRejectsPlainJSONAfterBlankLines(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("\n\n: openrouter processing\n\nevent: error\n")) + _, _ = w.Write([]byte(`{"error":{"message":"upstream failed","type":"server_error"}}` + "\n")) + })) + defer server.Close() + + executor := NewOpenAICompatExecutor("openai-compatibility", &config.Config{}) + auth := &cliproxyauth.Auth{Attributes: map[string]string{ + 
"base_url": server.URL + "/v1", + "api_key": "test", + }} + result, err := executor.ExecuteStream(context.Background(), auth, cliproxyexecutor.Request{ + Model: "openrouter-model", + Payload: []byte(`{"model":"openrouter-model","messages":[{"role":"user","content":"hi"}],"stream":true}`), + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + Stream: true, + }) + if err != nil { + t.Fatalf("ExecuteStream error: %v", err) + } + + var gotErr error + for chunk := range result.Chunks { + if chunk.Err != nil { + gotErr = chunk.Err + break + } + } + if gotErr == nil { + t.Fatalf("expected plain JSON stream error") + } + if status, ok := gotErr.(interface{ StatusCode() int }); !ok || status.StatusCode() != http.StatusBadGateway { + t.Fatalf("stream error status = %v, want %d", gotErr, http.StatusBadGateway) + } + if !strings.Contains(gotErr.Error(), "upstream failed") { + t.Fatalf("stream error = %v", gotErr) + } +} + +func TestOpenAICompatExecutorStreamSkipsKeepAliveUntilDataLine(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/event-stream") + _, _ = w.Write([]byte("\n\n: openrouter processing\n\nevent: ping\nid: 1\nretry: 1000\n")) + _, _ = w.Write([]byte(`data: {"id":"chatcmpl_1","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"hello"},"finish_reason":null}]}` + "\n")) + })) + defer server.Close() + + executor := NewOpenAICompatExecutor("openai-compatibility", &config.Config{}) + auth := &cliproxyauth.Auth{Attributes: map[string]string{ + "base_url": server.URL + "/v1", + "api_key": "test", + }} + result, err := executor.ExecuteStream(context.Background(), auth, cliproxyexecutor.Request{ + Model: "openrouter-model", + Payload: []byte(`{"model":"openrouter-model","messages":[{"role":"user","content":"hi"}],"stream":true}`), + }, cliproxyexecutor.Options{ + SourceFormat: sdktranslator.FromString("openai"), + Stream: 
true, + }) + if err != nil { + t.Fatalf("ExecuteStream error: %v", err) + } + + var got strings.Builder + for chunk := range result.Chunks { + if chunk.Err != nil { + t.Fatalf("unexpected stream error: %v", chunk.Err) + } + got.Write(chunk.Payload) + } + if gjson.Get(got.String(), "choices.0.delta.content").String() != "hello" { + t.Fatalf("stream payload = %s", got.String()) + } +} diff --git a/internal/runtime/executor/qoder_executor.go b/internal/runtime/executor/qoder_executor.go new file mode 100644 index 0000000000..0d8d455dac --- /dev/null +++ b/internal/runtime/executor/qoder_executor.go @@ -0,0 +1,682 @@ +// Package executor provides runtime execution capabilities for various AI service providers. +// This file implements the Qoder executor that proxies requests to the Qoder upstream +// using COSY authentication and custom body encoding. +package executor + +import ( + "bufio" + "bytes" + "context" + "crypto/aes" + "crypto/cipher" + "crypto/md5" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/base64" + "encoding/json" + "encoding/pem" + "fmt" + "io" + "net/http" + "strings" + "time" + + "github.com/google/uuid" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/qoder" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor/helps" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" + "github.com/tidwall/sjson" +) + +// QoderExecutor handles request execution against the Qoder upstream API. +type QoderExecutor struct { + cfg *config.Config +} + +// NewQoderExecutor creates a new Qoder executor. 
+func NewQoderExecutor(cfg *config.Config) *QoderExecutor { + return &QoderExecutor{cfg: cfg} +} + +// Identifier returns the executor identifier. +func (e *QoderExecutor) Identifier() string { return "qoder" } + +// PrepareRequest injects Qoder COSY credentials into the outgoing HTTP request. +func (e *QoderExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error { + if req == nil { + return nil + } + // COSY auth is built per-request in Execute/ExecuteStream, so this is minimal. + var attrs map[string]string + if auth != nil { + attrs = auth.Attributes + } + util.ApplyCustomHeadersFromAttrs(req, attrs) + return nil +} + +// HttpRequest injects Qoder credentials into the request and executes it. +func (e *QoderExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) { + if req == nil { + return nil, fmt.Errorf("qoder executor: request is nil") + } + if ctx == nil { + ctx = req.Context() + } + httpReq := req.WithContext(ctx) + if err := e.PrepareRequest(httpReq, auth); err != nil { + return nil, err + } + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + return httpClient.Do(httpReq) +} + +// Execute performs a non-streaming chat completion request to Qoder. 
+func (e *QoderExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalPayload := bytes.Clone(originalPayloadSource) + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false) + body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false) + + body, err = thinking.ApplyThinking(body, req.Model, from.String(), "qoder", e.Identifier()) + if err != nil { + return resp, err + } + + requestedModel := helps.PayloadRequestedModel(opts, req.Model) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, "") + + // Build the Qoder-specific request body wrapping the OpenAI messages + qoderBody := e.buildQoderRequestBody(body, baseModel, false) + + url := qoder.ChatBase + qoder.ChatPath + "?" 
+ qoder.ChatQueryExtra + qoderBodyJSON, errMarshal := json.Marshal(qoderBody) + if errMarshal != nil { + return resp, fmt.Errorf("qoder executor: failed to marshal request body: %w", errMarshal) + } + + // Build COSY authenticated request (plain JSON for non-stream) + httpReq, errReq := e.buildCosyRequest(ctx, auth, url, qoderBodyJSON, false) + if errReq != nil { + return resp, errReq + } + util.ApplyCustomHeadersFromAttrs(httpReq, auth.Attributes) + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: qoderBodyJSON, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, errDo := httpClient.Do(httpReq) + if errDo != nil { + helps.RecordAPIResponseError(ctx, e.cfg, errDo) + return resp, errDo + } + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("qoder executor: close response body error: %v", errClose) + } + }() + helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + helps.AppendAPIResponseChunk(ctx, e.cfg, b) + helps.LogWithRequestID(ctx).Debugf("request error, error status: %d, error message: %s", httpResp.StatusCode, helps.SummarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return resp, err + } + + data, err := io.ReadAll(httpResp.Body) + if err != nil { + helps.RecordAPIResponseError(ctx, e.cfg, err) + return resp, err + } + helps.AppendAPIResponseChunk(ctx, e.cfg, data) + + // Parse SSE response to extract the final 
completion + openAIResp := e.parseQoderSSEToCompletion(data, req.Model) + reporter.Publish(ctx, helps.ParseOpenAIUsage(openAIResp)) + + var param any + out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, openAIResp, &param) + resp = cliproxyexecutor.Response{Payload: out, Headers: httpResp.Header.Clone()} + return resp, nil +} + +// ExecuteStream performs a streaming chat completion request to Qoder. +func (e *QoderExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (_ *cliproxyexecutor.StreamResult, err error) { + baseModel := thinking.ParseSuffix(req.Model).ModelName + from := opts.SourceFormat + to := sdktranslator.FromString("openai") + + reporter := helps.NewUsageReporter(ctx, e.Identifier(), baseModel, auth) + defer reporter.TrackFailure(ctx, &err) + + originalPayloadSource := req.Payload + if len(opts.OriginalRequest) > 0 { + originalPayloadSource = opts.OriginalRequest + } + originalPayload := bytes.Clone(originalPayloadSource) + originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true) + body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true) + + body, err = thinking.ApplyThinking(body, req.Model, from.String(), "qoder", e.Identifier()) + if err != nil { + return nil, err + } + + body, err = sjson.SetBytes(body, "stream_options.include_usage", true) + if err != nil { + return nil, fmt.Errorf("qoder executor: failed to set stream_options in payload: %w", err) + } + requestedModel := helps.PayloadRequestedModel(opts, req.Model) + body = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel, "") + + // Build the Qoder-specific request body + qoderBody := e.buildQoderRequestBody(body, baseModel, true) + + url := qoder.ChatBase + qoder.ChatPath + "?"
+ qoder.ChatQueryExtra + qoderBodyJSON, errMarshal := json.Marshal(qoderBody) + if errMarshal != nil { + return nil, fmt.Errorf("qoder executor: failed to marshal request body: %w", errMarshal) + } + + // Build COSY authenticated request (plain JSON for stream) + httpReq, errReq := e.buildCosyRequest(ctx, auth, url, qoderBodyJSON, true) + if errReq != nil { + return nil, errReq + } + util.ApplyCustomHeadersFromAttrs(httpReq, auth.Attributes) + + var authID, authLabel, authType, authValue string + if auth != nil { + authID = auth.ID + authLabel = auth.Label + authType, authValue = auth.AccountInfo() + } + helps.RecordAPIRequest(ctx, e.cfg, helps.UpstreamRequestLog{ + URL: url, + Method: http.MethodPost, + Headers: httpReq.Header.Clone(), + Body: qoderBodyJSON, + Provider: e.Identifier(), + AuthID: authID, + AuthLabel: authLabel, + AuthType: authType, + AuthValue: authValue, + }) + + httpClient := helps.NewProxyAwareHTTPClient(ctx, e.cfg, auth, 0) + httpResp, errDo := httpClient.Do(httpReq) + if errDo != nil { + helps.RecordAPIResponseError(ctx, e.cfg, errDo) + return nil, errDo + } + helps.RecordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone()) + if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 { + b, _ := io.ReadAll(httpResp.Body) + helps.AppendAPIResponseChunk(ctx, e.cfg, b) + helps.LogWithRequestID(ctx).Debugf("request error, error status: %d, error message: %s", httpResp.StatusCode, helps.SummarizeErrorBody(httpResp.Header.Get("Content-Type"), b)) + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("qoder executor: close response body error: %v", errClose) + } + err = statusErr{code: httpResp.StatusCode, msg: string(b)} + return nil, err + } + + out := make(chan cliproxyexecutor.StreamChunk) + go func() { + defer close(out) + defer func() { + if errClose := httpResp.Body.Close(); errClose != nil { + log.Errorf("qoder executor: close response body error: %v", errClose) + } + }() + + scanner := 
bufio.NewScanner(httpResp.Body) + scanner.Buffer(nil, 1_048_576) // 1MB + var param any + for scanner.Scan() { + line := scanner.Bytes() + helps.AppendAPIResponseChunk(ctx, e.cfg, line) + + // Parse Qoder SSE format: data:{...} where body contains inner OpenAI chunk + openAIChunk := e.extractOpenAIChunkFromSSE(line, req.Model) + if openAIChunk == nil { + continue + } + + if detail, ok := helps.ParseOpenAIStreamUsage(openAIChunk); ok { + reporter.Publish(ctx, detail) + } + + // Wrap as SSE line for translator + sseLine := append([]byte("data: "), openAIChunk...) + chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(sseLine), &param) + for i := range chunks { + out <- cliproxyexecutor.StreamChunk{Payload: chunks[i]} + } + } + doneChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param) + for i := range doneChunks { + out <- cliproxyexecutor.StreamChunk{Payload: doneChunks[i]} + } + if errScan := scanner.Err(); errScan != nil { + helps.RecordAPIResponseError(ctx, e.cfg, errScan) + reporter.PublishFailure(ctx, errScan) + out <- cliproxyexecutor.StreamChunk{Err: errScan} + } + }() + return &cliproxyexecutor.StreamResult{Headers: httpResp.Header.Clone(), Chunks: out}, nil +} + +// Refresh is a no-op for Qoder since tokens don't expire in the standard OAuth sense. +func (e *QoderExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) { + log.Debugf("qoder executor: refresh called") + if auth == nil { + return nil, fmt.Errorf("qoder executor: auth is nil") + } + // Qoder tokens (access_token from the PKCE login) are long-lived + return auth, nil +} + +// CountTokens returns an unsupported error since Qoder does not expose a token counting endpoint.
+func (e *QoderExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { + return cliproxyexecutor.Response{}, statusErr{code: http.StatusNotImplemented, msg: "qoder does not support token counting"} +} + +// buildQoderRequestBody wraps OpenAI-format messages into the Qoder request envelope. +func (e *QoderExecutor) buildQoderRequestBody(openaiBody []byte, modelKey string, stream bool) map[string]any { + var messages []any + msgsRaw := gjson.GetBytes(openaiBody, "messages") + if msgsRaw.Exists() && msgsRaw.IsArray() { + _ = json.Unmarshal([]byte(msgsRaw.Raw), &messages) + } + + // Extract last user message for originalContent + lastUser := "" + if messages != nil { + for i := len(messages) - 1; i >= 0; i-- { + if m, ok := messages[i].(map[string]any); ok { + if role, _ := m["role"].(string); role == "user" { + if content, ok := m["content"].(string); ok { + lastUser = content + } + break + } + } + } + } + + body := map[string]any{ + "stream": stream, + "chat_task": "FREE_INPUT", + "is_reply": false, + "is_retry": false, + "code_language": "", + "source": 1, + "version": "3", + "chat_prompt": "", + "session_type": "qodercli", + "agent_id": "agent_common", + "task_id": "common", + "messages": messages, + "tools": []any{}, + "request_id": uuid.NewString(), + "request_set_id": uuid.NewString(), + "chat_record_id": uuid.NewString(), + "session_id": uuid.NewString(), + "parameters": map[string]any{"max_tokens": 32768}, + "chat_context": map[string]any{ + "chatPrompt": "", + "extra": map[string]any{ + "context": []any{}, + "modelConfig": map[string]any{"key": modelKey, "is_reasoning": false}, + "originalContent": map[string]any{"type": "text", "text": lastUser}, + }, + "features": []any{}, + "text": map[string]any{"type": "text", "text": lastUser}, + }, + "model_config": map[string]any{ + "key": modelKey, + "display_name": modelKey, + "model": "", + "format": 
"openai", + "is_vl": true, + "is_reasoning": false, + "api_key": "", + "url": "", + "source": "system", + "max_input_tokens": 180000, + }, + "business": map[string]any{ + "id": uuid.NewString(), + "type": "agent_chat_generation", + "name": "", + "begin_at": time.Now().UnixMilli(), + }, + } + return body +} + +// buildCosyRequest creates an HTTP request with COSY authentication headers. +func (e *QoderExecutor) buildCosyRequest(ctx context.Context, auth *cliproxyauth.Auth, reqURL string, body []byte, stream bool) (*http.Request, error) { + creds := qoderCreds(auth) + if creds.accessToken == "" { + return nil, fmt.Errorf("qoder executor: missing access token") + } + + // For streaming, send plain JSON + bodyForSig := string(body) + bodyBytes := body + + // Parse path for signature — match Python: path = "/" + url.split("://")[1].split("/", 1)[1] + sigPath := "" + if idx := strings.Index(reqURL, "://"); idx >= 0 { + afterScheme := reqURL[idx+3:] // "api3.qoder.sh/algo/api/v2/..." + if slashIdx := strings.Index(afterScheme, "/"); slashIdx >= 0 { + sigPath = afterScheme[slashIdx:] // "/algo/api/v2/..." 
+ } + } + if idx := strings.Index(sigPath, "?"); idx >= 0 { + sigPath = sigPath[:idx] + } + if strings.HasPrefix(sigPath, "/algo") { + sigPath = sigPath[len("/algo"):] + } + + // Build COSY payload + aesKey := uuid.NewString()[:16] + identity, _ := json.Marshal(map[string]any{ + "uid": creds.uid, + "security_oauth_token": creds.accessToken, + "name": creds.name, + "aid": "", + "email": creds.email, + }) + info := aesEncryptB64(string(identity), aesKey) + key := base64.StdEncoding.EncodeToString(rsaEncrypt([]byte(aesKey))) + + timestamp := fmt.Sprintf("%d", time.Now().Unix()) + + payload, _ := json.Marshal(map[string]any{ + "cosyVersion": qoder.IDEVersion, + "ideVersion": "", + "info": info, + "requestId": uuid.NewString(), + "version": "v1", + }) + payloadB64 := base64.StdEncoding.EncodeToString(payload) + + sigInput := fmt.Sprintf("%s\n%s\n%s\n%s\n%s", payloadB64, key, timestamp, bodyForSig, sigPath) + sigMD5 := fmt.Sprintf("%x", md5.Sum([]byte(sigInput))) + + bodyHash := fmt.Sprintf("%x", md5.Sum(bodyBytes)) + + httpReq, errReq := http.NewRequestWithContext(ctx, http.MethodPost, reqURL, bytes.NewReader(bodyBytes)) + if errReq != nil { + return nil, fmt.Errorf("qoder executor: create request: %w", errReq) + } + + httpReq.Header.Set("Content-Type", "application/json") + httpReq.Header.Set("Accept-Encoding", "identity") + httpReq.Header.Set("Cosy-Version", qoder.IDEVersion) + httpReq.Header.Set("Cosy-Machineid", creds.machineID) + httpReq.Header.Set("Cosy-Machinetoken", creds.machineID) + httpReq.Header.Set("Cosy-Machinetype", "d19de69691ac029caa") + httpReq.Header.Set("Cosy-Machineos", "x86_64_windows") + httpReq.Header.Set("Cosy-Clienttype", "0") + httpReq.Header.Set("Cosy-Clientip", "127.0.0.1") + httpReq.Header.Set("Login-Version", "v2") + httpReq.Header.Set("Cosy-User", creds.uid) + httpReq.Header.Set("Cosy-Key", key) + httpReq.Header.Set("Cosy-Date", timestamp) + httpReq.Header.Set("Cosy-Bodyhash", bodyHash) + httpReq.Header.Set("Cosy-Bodylength", 
fmt.Sprintf("%d", len(bodyBytes))) + httpReq.Header.Set("Cosy-Sigpath", sigPath) + httpReq.Header.Set("Cosy-Data-Policy", "AGREE") + httpReq.Header.Set("Cosy-Organization-Id", "") + httpReq.Header.Set("Cosy-Organization-Tags", "") + httpReq.Header.Set("Authorization", fmt.Sprintf("Bearer COSY.%s.%s", payloadB64, sigMD5)) + httpReq.Header.Set("X-Request-Id", uuid.NewString()) + + if stream { + httpReq.Header.Set("Accept", "text/event-stream") + httpReq.Header.Set("Cache-Control", "no-cache") + } else { + httpReq.Header.Set("Accept", "application/json") + } + + return httpReq, nil +} + +// extractOpenAIChunkFromSSE parses a Qoder SSE line and extracts the inner OpenAI chunk. +func (e *QoderExecutor) extractOpenAIChunkFromSSE(line []byte, model string) []byte { + s := string(line) + if !strings.HasPrefix(s, "data:") { + return nil + } + raw := strings.TrimSpace(s[5:]) + if raw == "" || raw == "[DONE]" { + return nil + } + + // Parse the outer SSE envelope + outerBody := gjson.Get(raw, "body") + if !outerBody.Exists() { + return nil + } + innerRaw := outerBody.String() + if innerRaw == "[DONE]" { + return nil + } + + // Parse inner OpenAI chunk + if !gjson.Valid(innerRaw) { + return nil + } + inner := gjson.Parse(innerRaw) + if !inner.Get("choices").Exists() { + return nil + } + + // Override the model name + result, err := sjson.Set(innerRaw, "model", model) + if err != nil { + return []byte(innerRaw) + } + return []byte(result) +} + +// parseQoderSSEToCompletion parses the full SSE response and assembles a non-streaming completion. 
+func (e *QoderExecutor) parseQoderSSEToCompletion(data []byte, model string) []byte { + var fullContent strings.Builder + var finishReason string + + lines := strings.Split(string(data), "\n") + for _, line := range lines { + line = strings.TrimSpace(line) + if !strings.HasPrefix(line, "data:") { + continue + } + raw := strings.TrimSpace(line[5:]) + if raw == "" || raw == "[DONE]" { + continue + } + + outerBody := gjson.Get(raw, "body") + if !outerBody.Exists() { + continue + } + innerRaw := outerBody.String() + if innerRaw == "[DONE]" { + continue + } + inner := gjson.Parse(innerRaw) + if !inner.Get("choices").Exists() { + continue + } + choices := inner.Get("choices").Array() + if len(choices) == 0 { + continue + } + choice := choices[0] + delta := choice.Get("delta") + if delta.Exists() { + content := delta.Get("content").String() + fullContent.WriteString(content) + } + if fr := choice.Get("finish_reason").String(); fr != "" && fr != "null" { + finishReason = fr + } + } + + if finishReason == "" { + finishReason = "stop" + } + + result := map[string]any{ + "id": "chatcmpl-" + uuid.NewString(), + "object": "chat.completion", + "created": time.Now().Unix(), + "model": model, + "choices": []any{ + map[string]any{ + "index": 0, + "message": map[string]any{ + "role": "assistant", + "content": fullContent.String(), + }, + "finish_reason": finishReason, + }, + }, + } + out, _ := json.Marshal(result) + return out +} + +// qoderCredentials holds the extracted credentials for Qoder auth. +type qoderCredentials struct { + accessToken string + uid string + name string + email string + machineID string +} + +// qoderCreds extracts credentials from the auth record. 
+func qoderCreds(a *cliproxyauth.Auth) qoderCredentials { + var creds qoderCredentials + if a == nil { + return creds + } + if a.Metadata != nil { + if v, ok := a.Metadata["access_token"].(string); ok { + creds.accessToken = v + } + if v, ok := a.Metadata["uid"].(string); ok { + creds.uid = v + } + if v, ok := a.Metadata["name"].(string); ok { + creds.name = v + } + if v, ok := a.Metadata["email"].(string); ok { + creds.email = v + } + if v, ok := a.Metadata["machine_id"].(string); ok { + creds.machineID = v + } + } + if a.Attributes != nil { + if creds.accessToken == "" { + if v := a.Attributes["access_token"]; v != "" { + creds.accessToken = v + } + } + if creds.uid == "" { + if v := a.Attributes["uid"]; v != "" { + creds.uid = v + } + } + } + return creds +} + +// aesEncryptB64 encrypts plaintext with AES-CBC and returns base64-encoded ciphertext. +func aesEncryptB64(plaintext, keyStr string) string { + block, err := aes.NewCipher([]byte(keyStr)) + if err != nil { + log.Errorf("qoder executor: AES cipher creation failed: %v", err) + return "" + } + data := pkcs7Pad([]byte(plaintext), block.BlockSize()) + iv := []byte(keyStr)[:16] + encrypted := make([]byte, len(data)) + mode := cipher.NewCBCEncrypter(block, iv) + mode.CryptBlocks(encrypted, data) + return base64.StdEncoding.EncodeToString(encrypted) +} + +// pkcs7Pad pads data to a multiple of blockSize using PKCS#7 padding. +func pkcs7Pad(data []byte, blockSize int) []byte { + padding := blockSize - len(data)%blockSize + padtext := bytes.Repeat([]byte{byte(padding)}, padding) + return append(data, padtext...) +} + +// rsaEncrypt encrypts data with the Qoder server public key. 
+func rsaEncrypt(data []byte) []byte { + block, _ := pem.Decode([]byte(qoder.ServerPublicKeyPEM)) + if block == nil { + log.Error("qoder executor: failed to parse PEM block") + return nil + } + pub, err := x509.ParsePKIXPublicKey(block.Bytes) + if err != nil { + log.Errorf("qoder executor: failed to parse public key: %v", err) + return nil + } + rsaPub, ok := pub.(*rsa.PublicKey) + if !ok { + log.Error("qoder executor: public key is not RSA") + return nil + } + encrypted, err := rsa.EncryptPKCS1v15(rand.Reader, rsaPub, data) + if err != nil { + log.Errorf("qoder executor: RSA encryption failed: %v", err) + return nil + } + return encrypted +} diff --git a/internal/store/gitstore.go b/internal/store/gitstore.go index bd84d99a23..ba9fe59e2b 100644 --- a/internal/store/gitstore.go +++ b/internal/store/gitstore.go @@ -18,7 +18,7 @@ import ( "github.com/go-git/go-git/v6/plumbing/object" "github.com/go-git/go-git/v6/plumbing/transport" "github.com/go-git/go-git/v6/plumbing/transport/http" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // gcInterval defines minimum time between garbage collection runs. 
@@ -287,10 +287,18 @@ func (s *GitTokenStore) Save(_ context.Context, auth *cliproxyauth.Auth) (string
 
 	switch {
 	case auth.Storage != nil:
+		if auth.Metadata == nil {
+			auth.Metadata = make(map[string]any)
+		}
+		auth.Metadata["disabled"] = auth.Disabled
+		if setter, ok := auth.Storage.(interface{ SetMetadata(map[string]any) }); ok {
+			setter.SetMetadata(auth.Metadata)
+		}
 		if err = auth.Storage.SaveTokenToFile(path); err != nil {
 			return "", err
 		}
 	case auth.Metadata != nil:
+		auth.Metadata["disabled"] = auth.Disabled
 		raw, errMarshal := json.Marshal(auth.Metadata)
 		if errMarshal != nil {
 			return "", fmt.Errorf("auth filestore: marshal metadata failed: %w", errMarshal)
diff --git a/internal/store/objectstore.go b/internal/store/objectstore.go
index a33f6ef8f4..5626e6c65b 100644
--- a/internal/store/objectstore.go
+++ b/internal/store/objectstore.go
@@ -17,8 +17,8 @@ import (
 	"github.com/minio/minio-go/v7"
 	"github.com/minio/minio-go/v7/pkg/credentials"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/misc"
-	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/misc"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 	log "github.com/sirupsen/logrus"
 )
@@ -184,10 +184,18 @@ func (s *ObjectTokenStore) Save(ctx context.Context, auth *cliproxyauth.Auth) (s
 
 	switch {
 	case auth.Storage != nil:
+		if auth.Metadata == nil {
+			auth.Metadata = make(map[string]any)
+		}
+		auth.Metadata["disabled"] = auth.Disabled
+		if setter, ok := auth.Storage.(interface{ SetMetadata(map[string]any) }); ok {
+			setter.SetMetadata(auth.Metadata)
+		}
 		if err = auth.Storage.SaveTokenToFile(path); err != nil {
 			return "", err
 		}
 	case auth.Metadata != nil:
+		auth.Metadata["disabled"] = auth.Disabled
 		raw, errMarshal := json.Marshal(auth.Metadata)
 		if errMarshal != nil {
 			return "", fmt.Errorf("object store: marshal metadata: %w", errMarshal)
diff --git a/internal/store/postgresstore.go b/internal/store/postgresstore.go
index 527b25cc12..43b125003d 100644
--- a/internal/store/postgresstore.go
+++ b/internal/store/postgresstore.go
@@ -14,8 +14,8 @@ import (
 	"time"
 
 	_ "github.com/jackc/pgx/v5/stdlib"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/misc"
-	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/misc"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 	log "github.com/sirupsen/logrus"
 )
@@ -214,10 +214,18 @@ func (s *PostgresStore) Save(ctx context.Context, auth *cliproxyauth.Auth) (stri
 
 	switch {
 	case auth.Storage != nil:
+		if auth.Metadata == nil {
+			auth.Metadata = make(map[string]any)
+		}
+		auth.Metadata["disabled"] = auth.Disabled
+		if setter, ok := auth.Storage.(interface{ SetMetadata(map[string]any) }); ok {
+			setter.SetMetadata(auth.Metadata)
+		}
 		if err = auth.Storage.SaveTokenToFile(path); err != nil {
 			return "", err
 		}
 	case auth.Metadata != nil:
+		auth.Metadata["disabled"] = auth.Disabled
 		raw, errMarshal := json.Marshal(auth.Metadata)
 		if errMarshal != nil {
 			return "", fmt.Errorf("postgres store: marshal metadata: %w", errMarshal)
diff --git a/internal/thinking/apply.go b/internal/thinking/apply.go
index 1edeac874c..d422a8d8b2 100644
--- a/internal/thinking/apply.go
+++ b/internal/thinking/apply.go
@@ -4,7 +4,7 @@ package thinking
 import (
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
 	log "github.com/sirupsen/logrus"
 	"github.com/tidwall/gjson"
 )
diff --git a/internal/thinking/apply_user_defined_test.go b/internal/thinking/apply_user_defined_test.go
index aa24ab8e9c..c485d2521a 100644
--- a/internal/thinking/apply_user_defined_test.go
+++ b/internal/thinking/apply_user_defined_test.go
@@ -3,9 +3,9 @@ package thinking_test
 import (
 	"testing"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
-	_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/claude"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/claude"
 	"github.com/tidwall/gjson"
 )
diff --git a/internal/thinking/convert.go b/internal/thinking/convert.go
index b22a0879ed..31945daa7c 100644
--- a/internal/thinking/convert.go
+++ b/internal/thinking/convert.go
@@ -3,7 +3,7 @@ package thinking
 import (
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
 )
 
 // levelToBudgetMap defines the standard Level → Budget mapping.
diff --git a/internal/thinking/provider/antigravity/apply.go b/internal/thinking/provider/antigravity/apply.go
index d202035fc6..0a8f1c4537 100644
--- a/internal/thinking/provider/antigravity/apply.go
+++ b/internal/thinking/provider/antigravity/apply.go
@@ -9,8 +9,8 @@ package antigravity
 import (
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/thinking/provider/claude/apply.go b/internal/thinking/provider/claude/apply.go
index 275be46924..140a8135f7 100644
--- a/internal/thinking/provider/claude/apply.go
+++ b/internal/thinking/provider/claude/apply.go
@@ -9,8 +9,8 @@ package claude
 import (
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/thinking/provider/codearts/apply.go b/internal/thinking/provider/codearts/apply.go
new file mode 100644
index 0000000000..973466736f
--- /dev/null
+++ b/internal/thinking/provider/codearts/apply.go
@@ -0,0 +1,28 @@
+package codearts
+
+import (
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+)
+
+type Applier struct{}
+
+var _ thinking.ProviderApplier = (*Applier)(nil)
+
+func NewApplier() *Applier {
+	return &Applier{}
+}
+
+func init() {
+	thinking.RegisterProvider("codearts", NewApplier())
+}
+
+func (a *Applier) Apply(body []byte, config thinking.ThinkingConfig, modelInfo *registry.ModelInfo) ([]byte, error) {
+	if len(body) == 0 {
+		return body, nil
+	}
+	if modelInfo == nil || modelInfo.Thinking == nil {
+		return body, nil
+	}
+	return body, nil
+}
diff --git a/internal/thinking/provider/codebuddy/apply.go b/internal/thinking/provider/codebuddy/apply.go
new file mode 100644
index 0000000000..d34764d263
--- /dev/null
+++ b/internal/thinking/provider/codebuddy/apply.go
@@ -0,0 +1,97 @@
+package codebuddy
+
+import (
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+	"github.com/tidwall/gjson"
+	"github.com/tidwall/sjson"
+)
+
+type Applier struct{}
+
+var _ thinking.ProviderApplier = (*Applier)(nil)
+
+func NewApplier() *Applier {
+	return &Applier{}
+}
+
+func init() {
+	applier := NewApplier()
+	thinking.RegisterProvider("codebuddy", applier)
+	thinking.RegisterProvider("codebuddy-ai", applier)
+}
+
+func (a *Applier) Apply(body []byte, config thinking.ThinkingConfig, modelInfo *registry.ModelInfo) ([]byte, error) {
+	if thinking.IsUserDefinedModel(modelInfo) {
+		return applyCompatibleCodeBuddy(body, config)
+	}
+	if modelInfo.Thinking == nil {
+		return body, nil
+	}
+
+	if config.Mode != thinking.ModeLevel && config.Mode != thinking.ModeNone {
+		return body, nil
+	}
+
+	if len(body) == 0 || !gjson.ValidBytes(body) {
+		body = []byte(`{}`)
+	}
+
+	if config.Mode == thinking.ModeLevel {
+		result, _ := sjson.SetBytes(body, "reasoning_effort", string(config.Level))
+		return result, nil
+	}
+
+	effort := ""
+	support := modelInfo.Thinking
+	if config.Budget == 0 {
+		if support.ZeroAllowed || thinking.HasLevel(support.Levels, string(thinking.LevelNone)) {
+			effort = string(thinking.LevelNone)
+		}
+	}
+	if effort == "" && config.Level != "" {
+		effort = string(config.Level)
+	}
+	if effort == "" && len(support.Levels) > 0 {
+		effort = support.Levels[0]
+	}
+	if effort == "" {
+		return body, nil
+	}
+
+	result, _ := sjson.SetBytes(body, "reasoning_effort", effort)
+	return result, nil
+}
+
+func applyCompatibleCodeBuddy(body []byte, config thinking.ThinkingConfig) ([]byte, error) {
+	if len(body) == 0 || !gjson.ValidBytes(body) {
+		body = []byte(`{}`)
+	}
+
+	var effort string
+	switch config.Mode {
+	case thinking.ModeLevel:
+		if config.Level == "" {
+			return body, nil
+		}
+		effort = string(config.Level)
+	case thinking.ModeNone:
+		effort = string(thinking.LevelNone)
+		if config.Level != "" {
+			effort = string(config.Level)
+		}
+	case thinking.ModeAuto:
+		effort = string(thinking.LevelAuto)
+	case thinking.ModeBudget:
+		level, ok := thinking.ConvertBudgetToLevel(config.Budget)
+		if !ok {
+			return body, nil
+		}
+		effort = level
+	default:
+		return body, nil
+	}
+
+	result, _ := sjson.SetBytes(body, "reasoning_effort", effort)
+	return result, nil
+}
diff --git a/internal/thinking/provider/codex/apply.go b/internal/thinking/provider/codex/apply.go
index 0f33635950..83f5ae8457 100644
--- a/internal/thinking/provider/codex/apply.go
+++ b/internal/thinking/provider/codex/apply.go
@@ -7,8 +7,8 @@ package codex
 import (
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/thinking/provider/gemini/apply.go b/internal/thinking/provider/gemini/apply.go
index 39bb4231d0..8e6e83f330 100644
--- a/internal/thinking/provider/gemini/apply.go
+++ b/internal/thinking/provider/gemini/apply.go
@@ -12,8 +12,8 @@ package gemini
 import (
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/thinking/provider/geminicli/apply.go b/internal/thinking/provider/geminicli/apply.go
index 5908b6bce5..e9311e8c18 100644
--- a/internal/thinking/provider/geminicli/apply.go
+++ b/internal/thinking/provider/geminicli/apply.go
@@ -5,8 +5,8 @@ package geminicli
 import (
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/thinking/provider/iflow/apply.go b/internal/thinking/provider/iflow/apply.go
new file mode 100644
index 0000000000..83140dcc7f
--- /dev/null
+++ b/internal/thinking/provider/iflow/apply.go
@@ -0,0 +1,173 @@
+// Package iflow implements thinking configuration for iFlow models.
+//
+// iFlow models use boolean toggle semantics:
+//   - Models using chat_template_kwargs.enable_thinking (boolean toggle)
+//   - MiniMax models: reasoning_split (boolean)
+//
+// Level values are converted to boolean: none=false, all others=true
+// See: _bmad-output/planning-artifacts/architecture.md#Epic-9
+package iflow
+
+import (
+	"strings"
+
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+	"github.com/tidwall/gjson"
+	"github.com/tidwall/sjson"
+)
+
+// Applier implements thinking.ProviderApplier for iFlow models.
+//
+// iFlow-specific behavior:
+//   - enable_thinking toggle models: enable_thinking boolean
+//   - GLM models: enable_thinking boolean + clear_thinking=false
+//   - MiniMax models: reasoning_split boolean
+//   - Level to boolean: none=false, others=true
+//   - No quantized support (only on/off)
+type Applier struct{}
+
+var _ thinking.ProviderApplier = (*Applier)(nil)
+
+// NewApplier creates a new iFlow thinking applier.
+func NewApplier() *Applier {
+	return &Applier{}
+}
+
+func init() {
+	thinking.RegisterProvider("iflow", NewApplier())
+}
+
+// Apply applies thinking configuration to iFlow request body.
+//
+// Expected output format (GLM):
+//
+//	{
+//	  "chat_template_kwargs": {
+//	    "enable_thinking": true,
+//	    "clear_thinking": false
+//	  }
+//	}
+//
+// Expected output format (MiniMax):
+//
+//	{
+//	  "reasoning_split": true
+//	}
+func (a *Applier) Apply(body []byte, config thinking.ThinkingConfig, modelInfo *registry.ModelInfo) ([]byte, error) {
+	if thinking.IsUserDefinedModel(modelInfo) {
+		return body, nil
+	}
+	if modelInfo.Thinking == nil {
+		return body, nil
+	}
+
+	if isEnableThinkingModel(modelInfo.ID) {
+		return applyEnableThinking(body, config, isGLMModel(modelInfo.ID)), nil
+	}
+
+	if isMiniMaxModel(modelInfo.ID) {
+		return applyMiniMax(body, config), nil
+	}
+
+	return body, nil
+}
+
+// configToBoolean converts ThinkingConfig to boolean for iFlow models.
+//
+// Conversion rules:
+//   - ModeNone: false
+//   - ModeAuto: true
+//   - ModeBudget + Budget=0: false
+//   - ModeBudget + Budget>0: true
+//   - ModeLevel + Level="none": false
+//   - ModeLevel + any other level: true
+//   - Default (unknown mode): true
+func configToBoolean(config thinking.ThinkingConfig) bool {
+	switch config.Mode {
+	case thinking.ModeNone:
+		return false
+	case thinking.ModeAuto:
+		return true
+	case thinking.ModeBudget:
+		return config.Budget > 0
+	case thinking.ModeLevel:
+		return config.Level != thinking.LevelNone
+	default:
+		return true
+	}
+}
+
+// applyEnableThinking applies thinking configuration for models that use
+// chat_template_kwargs.enable_thinking format.
+//
+// Output format when enabled:
+//
+//	{"chat_template_kwargs": {"enable_thinking": true, "clear_thinking": false}}
+//
+// Output format when disabled:
+//
+//	{"chat_template_kwargs": {"enable_thinking": false}}
+//
+// Note: clear_thinking is only set for GLM models when thinking is enabled.
+func applyEnableThinking(body []byte, config thinking.ThinkingConfig, setClearThinking bool) []byte {
+	enableThinking := configToBoolean(config)
+
+	if len(body) == 0 || !gjson.ValidBytes(body) {
+		body = []byte(`{}`)
+	}
+
+	result, _ := sjson.SetBytes(body, "chat_template_kwargs.enable_thinking", enableThinking)
+
+	// clear_thinking is a GLM-only knob, strip it for other models.
+	result, _ = sjson.DeleteBytes(result, "chat_template_kwargs.clear_thinking")
+
+	// clear_thinking only needed when thinking is enabled
+	if enableThinking && setClearThinking {
+		result, _ = sjson.SetBytes(result, "chat_template_kwargs.clear_thinking", false)
+	}
+
+	return result
+}
+
+// applyMiniMax applies thinking configuration for MiniMax models.
+//
+// Output format:
+//
+//	{"reasoning_split": true/false}
+func applyMiniMax(body []byte, config thinking.ThinkingConfig) []byte {
+	reasoningSplit := configToBoolean(config)
+
+	if len(body) == 0 || !gjson.ValidBytes(body) {
+		body = []byte(`{}`)
+	}
+
+	result, _ := sjson.SetBytes(body, "reasoning_split", reasoningSplit)
+
+	return result
+}
+
+// isEnableThinkingModel determines if the model uses chat_template_kwargs.enable_thinking format.
+func isEnableThinkingModel(modelID string) bool {
+	if isGLMModel(modelID) {
+		return true
+	}
+	id := strings.ToLower(modelID)
+	switch id {
+	case "deepseek-v3.2", "deepseek-v3.1":
+		return true
+	default:
+		return false
+	}
+}
+
+// isGLMModel determines if the model is a GLM series model.
+func isGLMModel(modelID string) bool {
+	return strings.HasPrefix(strings.ToLower(modelID), "glm")
+}
+
+// isMiniMaxModel determines if the model is a MiniMax series model.
+// MiniMax models use reasoning_split format.
+func isMiniMaxModel(modelID string) bool {
+	return strings.HasPrefix(strings.ToLower(modelID), "minimax")
+}
diff --git a/internal/thinking/provider/kimi/apply.go b/internal/thinking/provider/kimi/apply.go
index ff47c46d03..ea3ed572f0 100644
--- a/internal/thinking/provider/kimi/apply.go
+++ b/internal/thinking/provider/kimi/apply.go
@@ -7,8 +7,8 @@ package kimi
 import (
 	"fmt"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/thinking/provider/kimi/apply_test.go b/internal/thinking/provider/kimi/apply_test.go
index 707f11c758..78069424ed 100644
--- a/internal/thinking/provider/kimi/apply_test.go
+++ b/internal/thinking/provider/kimi/apply_test.go
@@ -3,8 +3,8 @@ package kimi
 import (
 	"testing"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 )
diff --git a/internal/thinking/provider/openai/apply.go b/internal/thinking/provider/openai/apply.go
index c77c1ab8e4..1e87b72b37 100644
--- a/internal/thinking/provider/openai/apply.go
+++ b/internal/thinking/provider/openai/apply.go
@@ -6,8 +6,8 @@ package openai
 import (
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/thinking/types.go b/internal/thinking/types.go
index a31d798197..39868a02f4 100644
--- a/internal/thinking/types.go
+++ b/internal/thinking/types.go
@@ -4,7 +4,7 @@
 // thinking configurations across various AI providers (Claude, Gemini, OpenAI, Codex, Antigravity, Kimi).
 package thinking
 
-import "github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
+import "github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
 
 // ThinkingMode represents the type of thinking configuration mode.
 type ThinkingMode int
diff --git a/internal/thinking/validate.go b/internal/thinking/validate.go
index 4a3ca97ce8..2baa93f1da 100644
--- a/internal/thinking/validate.go
+++ b/internal/thinking/validate.go
@@ -5,7 +5,7 @@ import (
 	"fmt"
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
 	log "github.com/sirupsen/logrus"
 )
diff --git a/internal/translator/antigravity/claude/antigravity_claude_request.go b/internal/translator/antigravity/claude/antigravity_claude_request.go
index 8ae69648db..7f36b11ccb 100644
--- a/internal/translator/antigravity/claude/antigravity_claude_request.go
+++ b/internal/translator/antigravity/claude/antigravity_claude_request.go
@@ -8,10 +8,10 @@ package claude
 import (
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/cache"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/cache"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	log "github.com/sirupsen/logrus"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
diff --git a/internal/translator/antigravity/claude/antigravity_claude_request_test.go b/internal/translator/antigravity/claude/antigravity_claude_request_test.go
index 919e29062a..bb3cdf4f34 100644
--- a/internal/translator/antigravity/claude/antigravity_claude_request_test.go
+++ b/internal/translator/antigravity/claude/antigravity_claude_request_test.go
@@ -6,7 +6,7 @@ import (
 	"strings"
 	"testing"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/cache"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/cache"
 	"github.com/tidwall/gjson"
 	"google.golang.org/protobuf/encoding/protowire"
 )
diff --git a/internal/translator/antigravity/claude/antigravity_claude_response.go b/internal/translator/antigravity/claude/antigravity_claude_response.go
index 17a31f217f..427551df6c 100644
--- a/internal/translator/antigravity/claude/antigravity_claude_response.go
+++ b/internal/translator/antigravity/claude/antigravity_claude_response.go
@@ -15,9 +15,9 @@ import (
 	"sync/atomic"
 	"time"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/cache"
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/cache"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	log "github.com/sirupsen/logrus"
 
 	"github.com/tidwall/gjson"
diff --git a/internal/translator/antigravity/claude/antigravity_claude_response_test.go b/internal/translator/antigravity/claude/antigravity_claude_response_test.go
index 05a3df899d..1490ab3cbd 100644
--- a/internal/translator/antigravity/claude/antigravity_claude_response_test.go
+++ b/internal/translator/antigravity/claude/antigravity_claude_response_test.go
@@ -6,7 +6,7 @@ import (
 	"strings"
 	"testing"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/cache"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/cache"
 )
 
 // ============================================================================
diff --git a/internal/translator/antigravity/claude/init.go b/internal/translator/antigravity/claude/init.go
index 21fe0b26ed..4d9bd721ff 100644
--- a/internal/translator/antigravity/claude/init.go
+++ b/internal/translator/antigravity/claude/init.go
@@ -1,9 +1,9 @@
 package claude
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/antigravity/claude/signature_validation.go b/internal/translator/antigravity/claude/signature_validation.go
index 63203abdce..f82fc2e364 100644
--- a/internal/translator/antigravity/claude/signature_validation.go
+++ b/internal/translator/antigravity/claude/signature_validation.go
@@ -53,7 +53,7 @@ import (
 	"strings"
 	"unicode/utf8"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/cache"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/cache"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 	"google.golang.org/protobuf/encoding/protowire"
diff --git a/internal/translator/antigravity/gemini/antigravity_gemini_request.go b/internal/translator/antigravity/gemini/antigravity_gemini_request.go
index 3612c0fb1a..b33b9c40e1 100644
--- a/internal/translator/antigravity/gemini/antigravity_gemini_request.go
+++ b/internal/translator/antigravity/gemini/antigravity_gemini_request.go
@@ -9,8 +9,8 @@ import (
 	"fmt"
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	log "github.com/sirupsen/logrus"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
diff --git a/internal/translator/antigravity/gemini/antigravity_gemini_response.go b/internal/translator/antigravity/gemini/antigravity_gemini_response.go
index 7b43c48db2..b0deb7320a 100644
--- a/internal/translator/antigravity/gemini/antigravity_gemini_response.go
+++ b/internal/translator/antigravity/gemini/antigravity_gemini_response.go
@@ -9,7 +9,7 @@ import (
 	"bytes"
 	"context"
 
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/antigravity/gemini/init.go b/internal/translator/antigravity/gemini/init.go
index 3955824863..dcb331618a 100644
--- a/internal/translator/antigravity/gemini/init.go
+++ b/internal/translator/antigravity/gemini/init.go
@@ -1,9 +1,9 @@
 package gemini
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/antigravity/openai/chat-completions/antigravity_openai_request.go b/internal/translator/antigravity/openai/chat-completions/antigravity_openai_request.go
index b33be50bd0..0d9ee6fe0a 100644
--- a/internal/translator/antigravity/openai/chat-completions/antigravity_openai_request.go
+++ b/internal/translator/antigravity/openai/chat-completions/antigravity_openai_request.go
@@ -6,9 +6,9 @@ import (
 	"fmt"
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/misc"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/misc"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	log "github.com/sirupsen/logrus"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
diff --git a/internal/translator/antigravity/openai/chat-completions/antigravity_openai_response.go b/internal/translator/antigravity/openai/chat-completions/antigravity_openai_response.go
index 9188c75a2c..2be24102ff 100644
--- a/internal/translator/antigravity/openai/chat-completions/antigravity_openai_response.go
+++ b/internal/translator/antigravity/openai/chat-completions/antigravity_openai_response.go
@@ -13,10 +13,10 @@ import (
 	"sync/atomic"
 	"time"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	log "github.com/sirupsen/logrus"
 
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/chat-completions"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/chat-completions"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/antigravity/openai/chat-completions/init.go b/internal/translator/antigravity/openai/chat-completions/init.go
index 5c5c71e461..2217e7919c 100644
--- a/internal/translator/antigravity/openai/chat-completions/init.go
+++ b/internal/translator/antigravity/openai/chat-completions/init.go
@@ -1,9 +1,9 @@
 package chat_completions
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/antigravity/openai/responses/antigravity_openai-responses_request.go b/internal/translator/antigravity/openai/responses/antigravity_openai-responses_request.go
index 90bfa14c05..94a6b852b0 100644
--- a/internal/translator/antigravity/openai/responses/antigravity_openai-responses_request.go
+++ b/internal/translator/antigravity/openai/responses/antigravity_openai-responses_request.go
@@ -1,8 +1,8 @@
 package responses
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/gemini"
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/antigravity/gemini"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/responses"
 )
 
 func ConvertOpenAIResponsesRequestToAntigravity(modelName string, inputRawJSON []byte, stream bool) []byte {
diff --git a/internal/translator/antigravity/openai/responses/antigravity_openai-responses_response.go b/internal/translator/antigravity/openai/responses/antigravity_openai-responses_response.go
index a087e0bd0f..3256950461 100644
--- a/internal/translator/antigravity/openai/responses/antigravity_openai-responses_response.go
+++ b/internal/translator/antigravity/openai/responses/antigravity_openai-responses_response.go
@@ -3,7 +3,7 @@ package responses
 import (
 	"context"
 
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/responses"
 	"github.com/tidwall/gjson"
 )
diff --git a/internal/translator/antigravity/openai/responses/init.go b/internal/translator/antigravity/openai/responses/init.go
index 8d13703239..49041f2905 100644
--- a/internal/translator/antigravity/openai/responses/init.go
+++ b/internal/translator/antigravity/openai/responses/init.go
@@ -1,9 +1,9 @@
 package responses
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/claude/gemini-cli/claude_gemini-cli_request.go b/internal/translator/claude/gemini-cli/claude_gemini-cli_request.go
index 831d784db3..fd68a957f5 100644
--- a/internal/translator/claude/gemini-cli/claude_gemini-cli_request.go
+++ b/internal/translator/claude/gemini-cli/claude_gemini-cli_request.go
@@ -6,7 +6,7 @@ package geminiCLI
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/claude/gemini"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/claude/gemini"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/claude/gemini-cli/claude_gemini-cli_response.go b/internal/translator/claude/gemini-cli/claude_gemini-cli_response.go
index 62e2650fd9..858886c272 100644
--- a/internal/translator/claude/gemini-cli/claude_gemini-cli_response.go
+++ b/internal/translator/claude/gemini-cli/claude_gemini-cli_response.go
@@ -7,8 +7,8 @@ package geminiCLI
 import (
 	"context"
 
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/claude/gemini"
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/claude/gemini"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
 )
 
 // ConvertClaudeResponseToGeminiCLI converts Claude Code streaming response format to Gemini CLI format.
diff --git a/internal/translator/claude/gemini-cli/init.go b/internal/translator/claude/gemini-cli/init.go
index ca364a6ee0..33a1332daf 100644
--- a/internal/translator/claude/gemini-cli/init.go
+++ b/internal/translator/claude/gemini-cli/init.go
@@ -1,9 +1,9 @@
 package geminiCLI
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/claude/gemini/claude_gemini_request.go b/internal/translator/claude/gemini/claude_gemini_request.go
index d2a215e7de..d716d28f35 100644
--- a/internal/translator/claude/gemini/claude_gemini_request.go
+++ b/internal/translator/claude/gemini/claude_gemini_request.go
@@ -14,9 +14,9 @@ import (
 	"strings"
 
 	"github.com/google/uuid"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/claude/gemini/claude_gemini_response.go b/internal/translator/claude/gemini/claude_gemini_response.go
index 846c26056f..3f127e3205 100644
--- a/internal/translator/claude/gemini/claude_gemini_response.go
+++ b/internal/translator/claude/gemini/claude_gemini_response.go
@@ -12,7 +12,7 @@ import (
 	"strings"
 	"time"
 
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/claude/gemini/init.go b/internal/translator/claude/gemini/init.go
index 8924f62c87..0ed533cebf 100644
--- a/internal/translator/claude/gemini/init.go
+++ b/internal/translator/claude/gemini/init.go
@@ -1,9 +1,9 @@
 package gemini
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/claude/openai/chat-completions/claude_openai_request.go b/internal/translator/claude/openai/chat-completions/claude_openai_request.go
index e9d8d35b09..bad56d1273 100644
--- a/internal/translator/claude/openai/chat-completions/claude_openai_request.go
+++ b/internal/translator/claude/openai/chat-completions/claude_openai_request.go
@@ -14,8 +14,8 @@ import (
 	"strings"
 
 	"github.com/google/uuid"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/claude/openai/chat-completions/claude_openai_response.go b/internal/translator/claude/openai/chat-completions/claude_openai_response.go
index 1fd3f2ae16..99c7523874 100644
--- a/internal/translator/claude/openai/chat-completions/claude_openai_response.go
+++ b/internal/translator/claude/openai/chat-completions/claude_openai_response.go
@@ -25,10 +25,19 @@ type ConvertAnthropicResponseToOpenAIParams struct {
 	CreatedAt    int64
 	ResponseID   string
 	FinishReason string
+	Usage        claudeUsageTokens
 	// Tool calls accumulator for streaming
 	ToolCallsAccumulator map[int]*ToolCallAccumulator
 }
 
+type claudeUsageTokens struct {
+	InputTokens              int64
+	OutputTokens             int64
+	CacheCreationInputTokens int64
+	CacheReadInputTokens     int64
+
HasUsage bool +} + // ToolCallAccumulator holds the state for accumulating tool call data type ToolCallAccumulator struct { ID string @@ -36,15 +45,30 @@ type ToolCallAccumulator struct { Arguments strings.Builder } -func calculateClaudeUsageTokens(usage gjson.Result) (promptTokens, completionTokens, totalTokens, cachedTokens int64) { - inputTokens := usage.Get("input_tokens").Int() - completionTokens = usage.Get("output_tokens").Int() - cachedTokens = usage.Get("cache_read_input_tokens").Int() - cacheCreationInputTokens := usage.Get("cache_creation_input_tokens").Int() +func (u *claudeUsageTokens) Merge(usage gjson.Result) { + if !usage.Exists() { + return + } + u.HasUsage = true + if inputTokens := usage.Get("input_tokens"); inputTokens.Exists() { + u.InputTokens = inputTokens.Int() + } + if outputTokens := usage.Get("output_tokens"); outputTokens.Exists() { + u.OutputTokens = outputTokens.Int() + } + if cacheCreationInputTokens := usage.Get("cache_creation_input_tokens"); cacheCreationInputTokens.Exists() { + u.CacheCreationInputTokens = cacheCreationInputTokens.Int() + } + if cacheReadInputTokens := usage.Get("cache_read_input_tokens"); cacheReadInputTokens.Exists() { + u.CacheReadInputTokens = cacheReadInputTokens.Int() + } +} - promptTokens = inputTokens + cacheCreationInputTokens + cachedTokens +func (u claudeUsageTokens) OpenAIUsage() (promptTokens, completionTokens, totalTokens, cachedTokens int64) { + cachedTokens = u.CacheReadInputTokens + promptTokens = u.InputTokens + u.CacheCreationInputTokens + cachedTokens + completionTokens = u.OutputTokens totalTokens = promptTokens + completionTokens - return promptTokens, completionTokens, totalTokens, cachedTokens } @@ -112,6 +136,7 @@ func ConvertClaudeResponseToOpenAI(_ context.Context, modelName string, original if (*param).(*ConvertAnthropicResponseToOpenAIParams).ToolCallsAccumulator == nil { (*param).(*ConvertAnthropicResponseToOpenAIParams).ToolCallsAccumulator = make(map[int]*ToolCallAccumulator) } + 
(*param).(*ConvertAnthropicResponseToOpenAIParams).Usage.Merge(message.Get("usage")) } return [][]byte{template} @@ -215,7 +240,8 @@ func ConvertClaudeResponseToOpenAI(_ context.Context, modelName string, original // Handle usage information for token counts if usage := root.Get("usage"); usage.Exists() { - promptTokens, completionTokens, totalTokens, cachedTokens := calculateClaudeUsageTokens(usage) + (*param).(*ConvertAnthropicResponseToOpenAIParams).Usage.Merge(usage) + promptTokens, completionTokens, totalTokens, cachedTokens := (*param).(*ConvertAnthropicResponseToOpenAIParams).Usage.OpenAIUsage() template, _ = sjson.SetBytes(template, "usage.prompt_tokens", promptTokens) template, _ = sjson.SetBytes(template, "usage.completion_tokens", completionTokens) template, _ = sjson.SetBytes(template, "usage.total_tokens", totalTokens) @@ -296,6 +322,7 @@ func ConvertClaudeResponseToOpenAINonStream(_ context.Context, _ string, origina var stopReason string var contentParts []string var reasoningParts []string + usageTokens := claudeUsageTokens{} toolCallsAccumulator := make(map[int]*ToolCallAccumulator) for _, chunk := range chunks { @@ -309,6 +336,7 @@ func ConvertClaudeResponseToOpenAINonStream(_ context.Context, _ string, origina messageID = message.Get("id").String() model = message.Get("model").String() createdAt = time.Now().Unix() + usageTokens.Merge(message.Get("usage")) } case "content_block_start": @@ -371,15 +399,19 @@ func ConvertClaudeResponseToOpenAINonStream(_ context.Context, _ string, origina } } if usage := root.Get("usage"); usage.Exists() { - promptTokens, completionTokens, totalTokens, cachedTokens := calculateClaudeUsageTokens(usage) - out, _ = sjson.SetBytes(out, "usage.prompt_tokens", promptTokens) - out, _ = sjson.SetBytes(out, "usage.completion_tokens", completionTokens) - out, _ = sjson.SetBytes(out, "usage.total_tokens", totalTokens) - out, _ = sjson.SetBytes(out, "usage.prompt_tokens_details.cached_tokens", cachedTokens) + 
usageTokens.Merge(usage) } } } + if usageTokens.HasUsage { + promptTokens, completionTokens, totalTokens, cachedTokens := usageTokens.OpenAIUsage() + out, _ = sjson.SetBytes(out, "usage.prompt_tokens", promptTokens) + out, _ = sjson.SetBytes(out, "usage.completion_tokens", completionTokens) + out, _ = sjson.SetBytes(out, "usage.total_tokens", totalTokens) + out, _ = sjson.SetBytes(out, "usage.prompt_tokens_details.cached_tokens", cachedTokens) + } + // Set basic response fields including message ID, creation time, and model out, _ = sjson.SetBytes(out, "id", messageID) out, _ = sjson.SetBytes(out, "created", createdAt) diff --git a/internal/translator/claude/openai/chat-completions/claude_openai_response_test.go b/internal/translator/claude/openai/chat-completions/claude_openai_response_test.go index 7bd6eb1f15..5a9a6d3ad5 100644 --- a/internal/translator/claude/openai/chat-completions/claude_openai_response_test.go +++ b/internal/translator/claude/openai/chat-completions/claude_openai_response_test.go @@ -37,6 +37,44 @@ func TestConvertClaudeResponseToOpenAI_StreamUsageIncludesCachedTokens(t *testin } } +func TestConvertClaudeResponseToOpenAI_StreamUsageMergesMessageStartUsage(t *testing.T) { + ctx := context.Background() + var param any + + ConvertClaudeResponseToOpenAI( + ctx, + "claude-opus-4-6", + nil, + nil, + []byte(`data: {"type":"message_start","message":{"id":"msg_123","model":"claude-opus-4-6","usage":{"input_tokens":13,"output_tokens":1,"cache_read_input_tokens":22000,"cache_creation_input_tokens":31}}}`), + &param, + ) + out := ConvertClaudeResponseToOpenAI( + ctx, + "claude-opus-4-6", + nil, + nil, + []byte(`data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},"usage":{"output_tokens":4}}`), + &param, + ) + if len(out) != 1 { + t.Fatalf("expected 1 chunk, got %d", len(out)) + } + + if gotPromptTokens := gjson.GetBytes(out[0], "usage.prompt_tokens").Int(); gotPromptTokens != 22044 { + t.Fatalf("expected prompt_tokens %d, got %d", 22044,
gotPromptTokens) + } + if gotCompletionTokens := gjson.GetBytes(out[0], "usage.completion_tokens").Int(); gotCompletionTokens != 4 { + t.Fatalf("expected completion_tokens %d, got %d", 4, gotCompletionTokens) + } + if gotTotalTokens := gjson.GetBytes(out[0], "usage.total_tokens").Int(); gotTotalTokens != 22048 { + t.Fatalf("expected total_tokens %d, got %d", 22048, gotTotalTokens) + } + if gotCachedTokens := gjson.GetBytes(out[0], "usage.prompt_tokens_details.cached_tokens").Int(); gotCachedTokens != 22000 { + t.Fatalf("expected cached_tokens %d, got %d", 22000, gotCachedTokens) + } +} + func TestConvertClaudeResponseToOpenAINonStream_UsageIncludesCachedTokens(t *testing.T) { rawJSON := []byte("data: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_123\",\"model\":\"claude-opus-4-6\"}}\n" + "data: {\"type\":\"message_delta\",\"delta\":{\"stop_reason\":\"end_turn\"},\"usage\":{\"input_tokens\":13,\"output_tokens\":4,\"cache_read_input_tokens\":22000,\"cache_creation_input_tokens\":31}}\n") @@ -56,3 +94,23 @@ func TestConvertClaudeResponseToOpenAINonStream_UsageIncludesCachedTokens(t *tes t.Fatalf("expected cached_tokens %d, got %d", 22000, gotCachedTokens) } } + +func TestConvertClaudeResponseToOpenAINonStream_UsageMergesMessageStartUsage(t *testing.T) { + rawJSON := []byte("data: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_123\",\"model\":\"claude-opus-4-6\",\"usage\":{\"input_tokens\":13,\"output_tokens\":1,\"cache_read_input_tokens\":22000,\"cache_creation_input_tokens\":31}}}\n" + + "data: {\"type\":\"message_delta\",\"delta\":{\"stop_reason\":\"end_turn\"},\"usage\":{\"output_tokens\":4}}\n") + + out := ConvertClaudeResponseToOpenAINonStream(context.Background(), "", nil, nil, rawJSON, nil) + + if gotPromptTokens := gjson.GetBytes(out, "usage.prompt_tokens").Int(); gotPromptTokens != 22044 { + t.Fatalf("expected prompt_tokens %d, got %d", 22044, gotPromptTokens) + } + if gotCompletionTokens := gjson.GetBytes(out, "usage.completion_tokens").Int(); 
gotCompletionTokens != 4 { + t.Fatalf("expected completion_tokens %d, got %d", 4, gotCompletionTokens) + } + if gotTotalTokens := gjson.GetBytes(out, "usage.total_tokens").Int(); gotTotalTokens != 22048 { + t.Fatalf("expected total_tokens %d, got %d", 22048, gotTotalTokens) + } + if gotCachedTokens := gjson.GetBytes(out, "usage.prompt_tokens_details.cached_tokens").Int(); gotCachedTokens != 22000 { + t.Fatalf("expected cached_tokens %d, got %d", 22000, gotCachedTokens) + } +} diff --git a/internal/translator/claude/openai/chat-completions/init.go b/internal/translator/claude/openai/chat-completions/init.go index a18840bace..7474fb2a38 100644 --- a/internal/translator/claude/openai/chat-completions/init.go +++ b/internal/translator/claude/openai/chat-completions/init.go @@ -1,9 +1,9 @@ package chat_completions import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . 
"github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/claude/openai/responses/claude_openai-responses_request.go b/internal/translator/claude/openai/responses/claude_openai-responses_request.go index 514129ca9b..1398749573 100644 --- a/internal/translator/claude/openai/responses/claude_openai-responses_request.go +++ b/internal/translator/claude/openai/responses/claude_openai-responses_request.go @@ -9,8 +9,8 @@ import ( "strings" "github.com/google/uuid" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -339,25 +339,21 @@ func ConvertOpenAIResponsesRequestToClaude(modelName string, inputRawJSON []byte }) } + includedToolNames := map[string]struct{}{} + toolNameMap := map[string]string{} + // tools mapping: parameters -> input_schema if tools := root.Get("tools"); tools.Exists() && tools.IsArray() { toolsJSON := []byte("[]") tools.ForEach(func(_, tool gjson.Result) bool { - tJSON := []byte(`{"name":"","description":"","input_schema":{}}`) - if n := tool.Get("name"); n.Exists() { - tJSON, _ = sjson.SetBytes(tJSON, "name", n.String()) - } - if d := tool.Get("description"); d.Exists() { - tJSON, _ = sjson.SetBytes(tJSON, "description", d.String()) - } - - if params := tool.Get("parameters"); params.Exists() { - tJSON, _ = sjson.SetRawBytes(tJSON, "input_schema", []byte(params.Raw)) - } else if params = tool.Get("parametersJsonSchema"); params.Exists() { - tJSON, _ = sjson.SetRawBytes(tJSON, "input_schema", []byte(params.Raw)) + convertedTools := convertResponsesToolToClaudeTools(tool, toolNameMap) + for _, tJSON := range 
convertedTools { + toolName := gjson.GetBytes(tJSON, "name").String() + if toolName != "" { + includedToolNames[toolName] = struct{}{} + } + toolsJSON, _ = sjson.SetRawBytes(toolsJSON, "-1", tJSON) } - - toolsJSON, _ = sjson.SetRawBytes(toolsJSON, "-1", tJSON) return true }) if parsedTools := gjson.ParseBytes(toolsJSON); parsedTools.IsArray() && len(parsedTools.Array()) > 0 { @@ -375,14 +371,24 @@ func ConvertOpenAIResponsesRequestToClaude(modelName string, inputRawJSON []byte case "none": // Leave unset; implies no tools case "required": - out, _ = sjson.SetRawBytes(out, "tool_choice", []byte(`{"type":"any"}`)) + if len(includedToolNames) > 0 { + out, _ = sjson.SetRawBytes(out, "tool_choice", []byte(`{"type":"any"}`)) + } } case gjson.JSON: if toolChoice.Get("type").String() == "function" { fn := toolChoice.Get("function.name").String() - toolChoiceJSON := []byte(`{"name":"","type":"tool"}`) - toolChoiceJSON, _ = sjson.SetBytes(toolChoiceJSON, "name", fn) - out, _ = sjson.SetRawBytes(out, "tool_choice", toolChoiceJSON) + if fn == "" { + fn = toolChoice.Get("name").String() + } + if mappedName := toolNameMap[fn]; mappedName != "" { + fn = mappedName + } + if _, ok := includedToolNames[fn]; ok { + toolChoiceJSON := []byte(`{"name":"","type":"tool"}`) + toolChoiceJSON, _ = sjson.SetBytes(toolChoiceJSON, "name", fn) + out, _ = sjson.SetRawBytes(out, "tool_choice", toolChoiceJSON) + } } default: @@ -391,3 +397,167 @@ func ConvertOpenAIResponsesRequestToClaude(modelName string, inputRawJSON []byte return out } + +func convertResponsesToolToClaudeTools(tool gjson.Result, toolNameMap map[string]string) [][]byte { + toolType := strings.TrimSpace(tool.Get("type").String()) + switch toolType { + case "", "function": + if tJSON, ok := convertResponsesFunctionToolToClaude(tool, ""); ok { + return [][]byte{tJSON} + } + case "namespace": + return convertResponsesNamespaceToolToClaude(tool, toolNameMap) + case "web_search": + if tJSON, ok := 
convertResponsesWebSearchToolToClaude(tool); ok { + if name := gjson.GetBytes(tJSON, "name").String(); name != "" { + toolNameMap[name] = name + } + return [][]byte{tJSON} + } + default: + if isUnsupportedOpenAIBuiltinToolType(toolType) { + return nil + } + if tool.Get("name").String() != "" { + return [][]byte{[]byte(tool.Raw)} + } + } + return nil +} + +func convertResponsesNamespaceToolToClaude(tool gjson.Result, toolNameMap map[string]string) [][]byte { + namespaceName := strings.TrimSpace(tool.Get("name").String()) + children := tool.Get("tools") + if !children.Exists() || !children.IsArray() { + return nil + } + + var out [][]byte + children.ForEach(func(_, child gjson.Result) bool { + childName := responsesToolName(child) + qualifiedName := qualifyResponsesNamespaceToolName(namespaceName, childName) + if tJSON, ok := convertResponsesFunctionToolToClaude(child, qualifiedName); ok { + out = append(out, tJSON) + toolNameMap[qualifiedName] = qualifiedName + if childName != "" { + toolNameMap[childName] = qualifiedName + } + } + return true + }) + return out +} + +func convertResponsesFunctionToolToClaude(tool gjson.Result, overrideName string) ([]byte, bool) { + name := strings.TrimSpace(overrideName) + if name == "" { + name = responsesToolName(tool) + } + if name == "" { + return nil, false + } + + tJSON := []byte(`{"name":"","description":"","input_schema":{}}`) + tJSON, _ = sjson.SetBytes(tJSON, "name", name) + if d := responsesToolDescription(tool); d != "" { + tJSON, _ = sjson.SetBytes(tJSON, "description", d) + } + tJSON, _ = sjson.SetRawBytes(tJSON, "input_schema", normalizeClaudeToolInputSchema(responsesToolParameters(tool))) + return tJSON, true +} + +func convertResponsesWebSearchToolToClaude(tool gjson.Result) ([]byte, bool) { + if externalWebAccess := tool.Get("external_web_access"); externalWebAccess.Exists() && !externalWebAccess.Bool() { + return nil, false + } + + name := strings.TrimSpace(tool.Get("name").String()) + if name == "" { + name = 
"web_search" + } + tJSON := []byte(`{"type":"web_search_20250305","name":""}`) + tJSON, _ = sjson.SetBytes(tJSON, "name", name) + if maxUses := tool.Get("max_uses"); maxUses.Exists() { + tJSON, _ = sjson.SetBytes(tJSON, "max_uses", maxUses.Int()) + } + if allowedDomains := tool.Get("filters.allowed_domains"); allowedDomains.Exists() && allowedDomains.IsArray() { + tJSON, _ = sjson.SetRawBytes(tJSON, "allowed_domains", []byte(allowedDomains.Raw)) + } + if userLocation := tool.Get("user_location"); userLocation.Exists() && userLocation.IsObject() { + tJSON, _ = sjson.SetRawBytes(tJSON, "user_location", []byte(userLocation.Raw)) + } + return tJSON, true +} + +func responsesToolName(tool gjson.Result) string { + if name := strings.TrimSpace(tool.Get("name").String()); name != "" { + return name + } + return strings.TrimSpace(tool.Get("function.name").String()) +} + +func responsesToolDescription(tool gjson.Result) string { + if description := tool.Get("description").String(); description != "" { + return description + } + return tool.Get("function.description").String() +} + +func responsesToolParameters(tool gjson.Result) gjson.Result { + for _, path := range []string{ + "parameters", + "parametersJsonSchema", + "input_schema", + "function.parameters", + "function.parametersJsonSchema", + } { + if parameters := tool.Get(path); parameters.Exists() { + return parameters + } + } + return gjson.Result{} +} + +func normalizeClaudeToolInputSchema(parameters gjson.Result) []byte { + raw := strings.TrimSpace(parameters.Raw) + if raw == "" || raw == "null" || !gjson.Valid(raw) { + return []byte(`{"type":"object","properties":{}}`) + } + result := gjson.Parse(raw) + if !result.IsObject() { + return []byte(`{"type":"object","properties":{}}`) + } + schema := []byte(raw) + schemaType := result.Get("type").String() + if schemaType == "" { + schema, _ = sjson.SetBytes(schema, "type", "object") + schemaType = "object" + } + if schemaType == "object" && 
!result.Get("properties").Exists() { + schema, _ = sjson.SetRawBytes(schema, "properties", []byte(`{}`)) + } + return schema +} + +func qualifyResponsesNamespaceToolName(namespaceName, childName string) string { + childName = strings.TrimSpace(childName) + if childName == "" || namespaceName == "" || strings.HasPrefix(childName, "mcp__") { + return childName + } + if strings.HasPrefix(childName, namespaceName) { + return childName + } + if strings.HasSuffix(namespaceName, "__") { + return namespaceName + childName + } + return namespaceName + "__" + childName +} + +func isUnsupportedOpenAIBuiltinToolType(toolType string) bool { + switch toolType { + case "image_generation", "file_search", "code_interpreter", "computer_use_preview": + return true + default: + return false + } +} diff --git a/internal/translator/claude/openai/responses/claude_openai-responses_response.go b/internal/translator/claude/openai/responses/claude_openai-responses_response.go index ef2cc1f845..6c6b96b30d 100644 --- a/internal/translator/claude/openai/responses/claude_openai-responses_response.go +++ b/internal/translator/claude/openai/responses/claude_openai-responses_response.go @@ -8,7 +8,7 @@ import ( "strings" "time" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -26,7 +26,8 @@ type claudeToResponsesState struct { FuncNames map[int]string // index -> function name FuncCallIDs map[int]string // index -> call id // message text aggregation - TextBuf strings.Builder + TextBuf strings.Builder + CurrentTextBuf strings.Builder // reasoning state ReasoningActive bool ReasoningItemID string @@ -80,6 +81,7 @@ func ConvertClaudeResponseToOpenAIResponses(ctx context.Context, modelName strin st.CreatedAt = time.Now().Unix() // Reset per-message aggregation state st.TextBuf.Reset() + st.CurrentTextBuf.Reset() 
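The namespace-qualification rule in `qualifyResponsesNamespaceToolName` above is self-contained string logic, so it can be exercised in isolation. A minimal stdlib-only sketch follows; the function name `qualifyName` and the sample namespace/tool names are illustrative, not part of the patch:

```go
package main

import (
	"fmt"
	"strings"
)

// qualifyName mirrors the qualification rule added in this diff: child tool
// names get a "<namespace>__" prefix unless they are empty, already carry an
// "mcp__" prefix, or already start with the namespace itself.
func qualifyName(namespaceName, childName string) string {
	childName = strings.TrimSpace(childName)
	if childName == "" || namespaceName == "" || strings.HasPrefix(childName, "mcp__") {
		return childName
	}
	if strings.HasPrefix(childName, namespaceName) {
		return childName
	}
	if strings.HasSuffix(namespaceName, "__") {
		return namespaceName + childName
	}
	return namespaceName + "__" + childName
}

func main() {
	fmt.Println(qualifyName("browser", "click"))          // browser__click
	fmt.Println(qualifyName("browser", "browser__click")) // browser__click (already qualified)
	fmt.Println(qualifyName("browser", "mcp__click"))     // mcp__click (mcp names pass through)
	fmt.Println(qualifyName("", "click"))                 // click (no namespace)
}
```

Note that a child name merely sharing the namespace as a prefix (e.g. `browserify` under namespace `browser`) also passes through unchanged; that follows directly from the `strings.HasPrefix` check in the diff.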
st.ReasoningBuf.Reset() st.ReasoningActive = false st.InTextBlock = false @@ -128,6 +130,7 @@ func ConvertClaudeResponseToOpenAIResponses(ctx context.Context, modelName strin if typ == "text" { // open message item + content part st.InTextBlock = true + st.CurrentTextBuf.Reset() st.CurrentMsgID = fmt.Sprintf("msg_%s_0", st.ResponseID) item := []byte(`{"type":"response.output_item.added","sequence_number":0,"output_index":0,"item":{"id":"","type":"message","status":"in_progress","content":[],"role":"assistant"}}`) item, _ = sjson.SetBytes(item, "sequence_number", nextSeq()) @@ -189,6 +192,7 @@ func ConvertClaudeResponseToOpenAIResponses(ctx context.Context, modelName strin out = append(out, emitEvent("response.output_text.delta", msg)) // aggregate text for response.output st.TextBuf.WriteString(t.String()) + st.CurrentTextBuf.WriteString(t.String()) } } else if dt == "input_json_delta" { idx := int(root.Get("index").Int()) @@ -220,17 +224,21 @@ func ConvertClaudeResponseToOpenAIResponses(ctx context.Context, modelName strin case "content_block_stop": idx := int(root.Get("index").Int()) if st.InTextBlock { + fullText := st.CurrentTextBuf.String() done := []byte(`{"type":"response.output_text.done","sequence_number":0,"item_id":"","output_index":0,"content_index":0,"text":"","logprobs":[]}`) done, _ = sjson.SetBytes(done, "sequence_number", nextSeq()) done, _ = sjson.SetBytes(done, "item_id", st.CurrentMsgID) + done, _ = sjson.SetBytes(done, "text", fullText) out = append(out, emitEvent("response.output_text.done", done)) partDone := []byte(`{"type":"response.content_part.done","sequence_number":0,"item_id":"","output_index":0,"content_index":0,"part":{"type":"output_text","annotations":[],"logprobs":[],"text":""}}`) partDone, _ = sjson.SetBytes(partDone, "sequence_number", nextSeq()) partDone, _ = sjson.SetBytes(partDone, "item_id", st.CurrentMsgID) + partDone, _ = sjson.SetBytes(partDone, "part.text", fullText) out = append(out, 
emitEvent("response.content_part.done", partDone)) final := []byte(`{"type":"response.output_item.done","sequence_number":0,"output_index":0,"item":{"id":"","type":"message","status":"completed","content":[{"type":"output_text","text":""}],"role":"assistant"}}`) final, _ = sjson.SetBytes(final, "sequence_number", nextSeq()) final, _ = sjson.SetBytes(final, "item.id", st.CurrentMsgID) + final, _ = sjson.SetBytes(final, "item.content.0.text", fullText) out = append(out, emitEvent("response.output_item.done", final)) st.InTextBlock = false } else if st.InFuncBlock { diff --git a/internal/translator/claude/openai/responses/init.go b/internal/translator/claude/openai/responses/init.go index 595fecc6ef..575c9ec71a 100644 --- a/internal/translator/claude/openai/responses/init.go +++ b/internal/translator/claude/openai/responses/init.go @@ -1,9 +1,9 @@ package responses import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/codearts/openai/codearts_openai.go b/internal/translator/codearts/openai/codearts_openai.go new file mode 100644 index 0000000000..46cb4fed19 --- /dev/null +++ b/internal/translator/codearts/openai/codearts_openai.go @@ -0,0 +1,23 @@ +package openai + +import ( + "context" +) + +// ConvertCodeArtsStreamToOpenAI passes through SSE chunks. +// The executor already converts CodeArts SSE to OpenAI SSE format. 
+func ConvertCodeArtsStreamToOpenAI(ctx context.Context, model string, originalRequest, translatedRequest, chunk []byte, state *any) [][]byte { + if len(chunk) == 0 { + return nil + } + return [][]byte{chunk} +} + +// ConvertCodeArtsNonStreamToOpenAI passes through non-stream responses. +// The executor already builds OpenAI-format responses. +func ConvertCodeArtsNonStreamToOpenAI(ctx context.Context, model string, originalRequest, translatedRequest, response []byte, param *any) []byte { + if len(response) == 0 { + return nil + } + return response +} diff --git a/internal/translator/codearts/openai/codearts_openai_request.go b/internal/translator/codearts/openai/codearts_openai_request.go new file mode 100644 index 0000000000..d820c1abb5 --- /dev/null +++ b/internal/translator/codearts/openai/codearts_openai_request.go @@ -0,0 +1,8 @@ +package openai + +// ConvertOpenAIRequestToCodeArts passes through the OpenAI-format request payload. +// Actual conversion to CodeArts format happens in the executor (buildCodeArtsPayload), +// following the same pattern as Kiro's translator. +func ConvertOpenAIRequestToCodeArts(model string, rawJSON []byte, stream bool) []byte { + return rawJSON +} diff --git a/internal/translator/codearts/openai/init.go b/internal/translator/codearts/openai/init.go new file mode 100644 index 0000000000..2ecb08de4f --- /dev/null +++ b/internal/translator/codearts/openai/init.go @@ -0,0 +1,19 @@ +package openai + +import ( + . 
"github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" +) + +func init() { + translator.Register( + OpenAI, + CodeArts, + ConvertOpenAIRequestToCodeArts, + interfaces.TranslateResponse{ + Stream: ConvertCodeArtsStreamToOpenAI, + NonStream: ConvertCodeArtsNonStreamToOpenAI, + }, + ) +} diff --git a/internal/translator/codex/claude/codex_claude_request.go b/internal/translator/codex/claude/codex_claude_request.go index adff9a038d..029db14e7d 100644 --- a/internal/translator/codex/claude/codex_claude_request.go +++ b/internal/translator/codex/claude/codex_claude_request.go @@ -6,11 +6,12 @@ package claude import ( + "encoding/base64" "fmt" "strconv" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -39,6 +40,7 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) template := []byte(`{"model":"","instructions":"","input":[]}`) rootResult := gjson.ParseBytes(rawJSON) + toolNameMap := buildReverseMapFromClaudeOriginalToShort(rawJSON) template, _ = sjson.SetBytes(template, "model", modelName) // Process system messages and convert them to input content format. 
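The `claudeUsageTokens` change earlier in this diff merges usage across SSE events: `message_start` carries a full usage snapshot, while a later `message_delta` may carry only `output_tokens`, and `Merge` must not wipe the earlier counts. A stdlib-only sketch of that accounting follows; the `usage`/`merge`/`openAIUsage` names are illustrative stand-ins for `claudeUsageTokens` (a map stands in for the gjson `Exists()` checks):

```go
package main

import "fmt"

// usage mirrors the claudeUsageTokens accumulator: each SSE event contributes
// only the fields it actually carries.
type usage struct {
	input, output, cacheCreation, cacheRead int64
	hasUsage                                bool
}

// merge overwrites only the keys present in the event, mimicking the
// per-field Exists() checks in claudeUsageTokens.Merge.
func (u *usage) merge(event map[string]int64) {
	if event == nil {
		return
	}
	u.hasUsage = true
	if v, ok := event["input_tokens"]; ok {
		u.input = v
	}
	if v, ok := event["output_tokens"]; ok {
		u.output = v
	}
	if v, ok := event["cache_creation_input_tokens"]; ok {
		u.cacheCreation = v
	}
	if v, ok := event["cache_read_input_tokens"]; ok {
		u.cacheRead = v
	}
}

// openAIUsage applies the same accounting as claudeUsageTokens.OpenAIUsage:
// prompt tokens include both cache-creation and cache-read tokens.
func (u usage) openAIUsage() (prompt, completion, total, cached int64) {
	cached = u.cacheRead
	prompt = u.input + u.cacheCreation + cached
	completion = u.output
	total = prompt + completion
	return
}

func main() {
	var u usage
	// message_start: full snapshot.
	u.merge(map[string]int64{"input_tokens": 13, "output_tokens": 1, "cache_creation_input_tokens": 31, "cache_read_input_tokens": 22000})
	// message_delta: only output_tokens is present; other counts survive.
	u.merge(map[string]int64{"output_tokens": 4})
	prompt, completion, total, cached := u.openAIUsage()
	fmt.Println(prompt, completion, total, cached) // 22044 4 22048 22000
}
```

These are the same figures the new tests in this diff assert (prompt_tokens 22044, completion_tokens 4, total_tokens 22048, cached_tokens 22000).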
@@ -120,6 +122,22 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) hasContent = true } + appendReasoningContent := func(part gjson.Result) { + if messageRole != "assistant" { + return + } + + signature := part.Get("signature").String() + if !isFernetLikeReasoningSignature(signature) { + return + } + + flushMessage() + reasoningItem := []byte(`{"type":"reasoning","summary":[],"content":null}`) + reasoningItem, _ = sjson.SetBytes(reasoningItem, "encrypted_content", signature) + template, _ = sjson.SetRawBytes(template, "input.-1", reasoningItem) + } + messageContentsResult := messageResult.Get("content") if messageContentsResult.IsArray() { messageContentResults := messageContentsResult.Array() @@ -130,6 +148,8 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) switch contentType { case "text": appendTextContent(messageContentResult.Get("text").String()) + case "thinking": + appendReasoningContent(messageContentResult) case "image": sourceResult := messageContentResult.Get("source") if sourceResult.Exists() { @@ -155,8 +175,7 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) functionCallMessage, _ = sjson.SetBytes(functionCallMessage, "call_id", messageContentResult.Get("id").String()) { name := messageContentResult.Get("name").String() - toolMap := buildReverseMapFromClaudeOriginalToShort(rawJSON) - if short, ok := toolMap[name]; ok { + if short, ok := toolNameMap[name]; ok { name = short } else { name = shortenNameIfNeeded(name) @@ -230,23 +249,14 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) toolsResult := rootResult.Get("tools") if toolsResult.IsArray() { template, _ = sjson.SetRawBytes(template, "tools", []byte(`[]`)) - template, _ = sjson.SetBytes(template, "tool_choice", `auto`) + webSearchToolNames := buildClaudeWebSearchToolNameSet(toolsResult) + template, _ = sjson.SetRawBytes(template, "tool_choice", 
convertClaudeToolChoiceToCodex(rootResult.Get("tool_choice"), toolNameMap, webSearchToolNames)) toolResults := toolsResult.Array() - // Build short name map from declared tools - var names []string - for i := 0; i < len(toolResults); i++ { - n := toolResults[i].Get("name").String() - if n != "" { - names = append(names, n) - } - } - shortMap := buildShortNameMap(names) for i := 0; i < len(toolResults); i++ { toolResult := toolResults[i] // Special handling: map Claude web search tool to Codex web_search - if toolResult.Get("type").String() == "web_search_20250305" { - // Replace the tool content entirely with {"type":"web_search"} - template, _ = sjson.SetRawBytes(template, "tools.-1", []byte(`{"type":"web_search"}`)) + if isClaudeWebSearchToolType(toolResult.Get("type").String()) { + template, _ = sjson.SetRawBytes(template, "tools.-1", convertClaudeWebSearchToolToCodex(toolResult)) continue } tool := []byte(toolResult.Raw) @@ -254,7 +264,7 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) // Apply shortened name if needed if v := toolResult.Get("name"); v.Exists() { name := v.String() - if short, ok := shortMap[name]; ok { + if short, ok := toolNameMap[name]; ok { name = short } else { name = shortenNameIfNeeded(name) @@ -318,6 +328,114 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) return template } +// isFernetLikeReasoningSignature checks only the encrypted_content envelope shape +// observed in OpenAI reasoning signatures. It does not authenticate source or payload type. 
+func isFernetLikeReasoningSignature(signature string) bool {
+	const (
+		fernetVersionLen = 1
+		fernetTimestamp  = 8
+		fernetIV         = 16
+		fernetHMAC       = 32
+		aesBlockSize     = 16
+	)
+
+	signature = strings.TrimSpace(signature)
+	if !strings.HasPrefix(signature, "gAAAA") {
+		return false
+	}
+
+	decoded, err := base64.URLEncoding.DecodeString(signature)
+	if err != nil {
+		decoded, err = base64.RawURLEncoding.DecodeString(signature)
+		if err != nil {
+			return false
+		}
+	}
+
+	minLen := fernetVersionLen + fernetTimestamp + fernetIV + aesBlockSize + fernetHMAC
+	if len(decoded) < minLen || decoded[0] != 0x80 {
+		return false
+	}
+
+	ciphertextLen := len(decoded) - fernetVersionLen - fernetTimestamp - fernetIV - fernetHMAC
+	return ciphertextLen > 0 && ciphertextLen%aesBlockSize == 0
+}
+
+func isClaudeWebSearchToolType(toolType string) bool {
+	return toolType == "web_search_20250305" || toolType == "web_search_20260209"
+}
+
+func buildClaudeWebSearchToolNameSet(tools gjson.Result) map[string]struct{} {
+	names := map[string]struct{}{}
+	if !tools.IsArray() {
+		return names
+	}
+
+	tools.ForEach(func(_, tool gjson.Result) bool {
+		toolType := tool.Get("type").String()
+		if !isClaudeWebSearchToolType(toolType) {
+			return true
+		}
+
+		if name := tool.Get("name").String(); name != "" {
+			names[name] = struct{}{}
+		}
+		return true
+	})
+
+	return names
+}
+
+func convertClaudeToolChoiceToCodex(toolChoice gjson.Result, toolNameMap map[string]string, webSearchToolNames map[string]struct{}) []byte {
+	if !toolChoice.Exists() || toolChoice.Type == gjson.Null {
+		return []byte(`"auto"`)
+	}
+
+	choiceType := toolChoice.Get("type").String()
+	if choiceType == "" && toolChoice.Type == gjson.String {
+		choiceType = toolChoice.String()
+	}
+
+	switch choiceType {
+	case "auto", "":
+		return []byte(`"auto"`)
+	case "any":
+		return []byte(`"required"`)
+	case "none":
+		return []byte(`"none"`)
+	case "tool":
+		name := toolChoice.Get("name").String()
+		if _, ok := webSearchToolNames[name]; ok {
+			return []byte(`{"type":"web_search"}`)
+		}
+		if short, ok := toolNameMap[name]; ok {
+			name = short
+		} else {
+			name = shortenNameIfNeeded(name)
+		}
+		if name == "" {
+			return []byte(`"auto"`)
+		}
+
+		choice := []byte(`{"type":"function","name":""}`)
+		choice, _ = sjson.SetBytes(choice, "name", name)
+		return choice
+	default:
+		return []byte(`"auto"`)
+	}
+}
+
+func convertClaudeWebSearchToolToCodex(tool gjson.Result) []byte {
+	out := []byte(`{"type":"web_search"}`)
+	if allowedDomains := tool.Get("allowed_domains"); allowedDomains.Exists() && allowedDomains.IsArray() {
+		out, _ = sjson.SetRawBytes(out, "filters.allowed_domains", []byte(allowedDomains.Raw))
+	}
+	if userLocation := tool.Get("user_location"); userLocation.Exists() && userLocation.IsObject() {
+		out, _ = sjson.SetRawBytes(out, "user_location", []byte(userLocation.Raw))
+	}
+	return out
+}
+
 // shortenNameIfNeeded applies a simple shortening rule for a single name.
 func shortenNameIfNeeded(name string) string {
 	const limit = 64
diff --git a/internal/translator/codex/claude/codex_claude_request_test.go b/internal/translator/codex/claude/codex_claude_request_test.go
index 3cf0236962..16bb46c9ef 100644
--- a/internal/translator/codex/claude/codex_claude_request_test.go
+++ b/internal/translator/codex/claude/codex_claude_request_test.go
@@ -1,6 +1,8 @@
 package claude
 
 import (
+	"encoding/base64"
+	"strings"
 	"testing"
 
 	"github.com/tidwall/gjson"
 )
@@ -133,3 +135,278 @@ func TestConvertClaudeRequestToCodex_ParallelToolCalls(t *testing.T) {
 		})
 	}
 }
+
+func TestConvertClaudeRequestToCodex_ToolChoiceModeMapping(t *testing.T) {
+	tests := []struct {
+		name                string
+		claudeToolChoice    string
+		wantCodexToolChoice string
+	}{
+		{
+			name:                "Any requires at least one tool",
+			claudeToolChoice:    `{"type":"any"}`,
+			wantCodexToolChoice: "required",
+		},
+		{
+			name:                "None disables tools",
+			claudeToolChoice:    `{"type":"none"}`,
+			wantCodexToolChoice: "none",
+		},
+		{
+			name:                "Auto stays auto",
+			claudeToolChoice:    `{"type":"auto"}`,
+			wantCodexToolChoice: "auto",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			inputJSON := `{
+				"model": "claude-3-opus",
+				"tools": [
+					{"name": "lookup", "description": "Lookup", "input_schema": {"type":"object","properties":{}}}
+				],
+				"tool_choice": ` + tt.claudeToolChoice + `,
+				"messages": [{"role": "user", "content": "hello"}]
+			}`
+
+			result := ConvertClaudeRequestToCodex("test-model", []byte(inputJSON), false)
+			resultJSON := gjson.ParseBytes(result)
+
+			if got := resultJSON.Get("tool_choice").String(); got != tt.wantCodexToolChoice {
+				t.Fatalf("tool_choice = %q, want %q. Output: %s", got, tt.wantCodexToolChoice, string(result))
+			}
+		})
+	}
+}
+
+func TestConvertClaudeRequestToCodex_ToolChoiceSpecificFunctionUsesConvertedName(t *testing.T) {
+	longName := "mcp__server_with_a_very_long_name_that_exceeds_sixty_four_characters__search"
+	inputJSON := `{
+		"model": "claude-3-opus",
+		"tools": [
+			{"name": "` + longName + `", "description": "Search", "input_schema": {"type":"object","properties":{}}}
+		],
+		"tool_choice": {"type":"tool","name":"` + longName + `"},
+		"messages": [{"role": "user", "content": "hello"}]
+	}`
+
+	result := ConvertClaudeRequestToCodex("test-model", []byte(inputJSON), false)
+	resultJSON := gjson.ParseBytes(result)
+
+	if got := resultJSON.Get("tool_choice.type").String(); got != "function" {
+		t.Fatalf("tool_choice.type = %q, want function. Output: %s", got, string(result))
+	}
+	toolName := resultJSON.Get("tools.0.name").String()
+	choiceName := resultJSON.Get("tool_choice.name").String()
+	if choiceName != toolName {
+		t.Fatalf("tool_choice.name = %q, want converted tool name %q. Output: %s", choiceName, toolName, string(result))
+	}
+	if choiceName == longName {
+		t.Fatalf("tool_choice.name should use shortened Codex tool name. Output: %s", string(result))
+	}
+}
+
+func TestConvertClaudeRequestToCodex_WebSearchToolMapping(t *testing.T) {
+	inputJSON := `{
+		"model": "claude-3-opus",
+		"tools": [
+			{
+				"type": "web_search_20260209",
+				"name": "web_search",
+				"allowed_domains": ["example.com"],
+				"blocked_domains": ["blocked.example"],
+				"user_location": {
+					"type": "approximate",
+					"city": "Beijing",
+					"country": "CN",
+					"timezone": "Asia/Shanghai"
+				}
+			}
+		],
+		"tool_choice": {"type":"tool","name":"web_search"},
+		"messages": [{"role": "user", "content": "hello"}]
+	}`
+
+	result := ConvertClaudeRequestToCodex("test-model", []byte(inputJSON), false)
+	resultJSON := gjson.ParseBytes(result)
+
+	if got := resultJSON.Get("tools.0.type").String(); got != "web_search" {
+		t.Fatalf("tools.0.type = %q, want web_search. Output: %s", got, string(result))
+	}
+	if got := resultJSON.Get("tools.0.filters.allowed_domains.0").String(); got != "example.com" {
+		t.Fatalf("tools.0.filters.allowed_domains.0 = %q, want example.com. Output: %s", got, string(result))
+	}
+	if resultJSON.Get("tools.0.blocked_domains").Exists() {
+		t.Fatalf("tools.0.blocked_domains should not be forwarded to Codex. Output: %s", string(result))
+	}
+	if got := resultJSON.Get("tools.0.user_location.city").String(); got != "Beijing" {
+		t.Fatalf("tools.0.user_location.city = %q, want Beijing. Output: %s", got, string(result))
+	}
+	if got := resultJSON.Get("tool_choice.type").String(); got != "web_search" {
+		t.Fatalf("tool_choice.type = %q, want web_search. Output: %s", got, string(result))
+	}
+}
+
+func TestConvertClaudeRequestToCodex_WebSearchToolChoiceUsesDeclaredTypedToolName(t *testing.T) {
+	inputJSON := `{
+		"model": "claude-opus-4-7",
+		"tools": [
+			{"type": "web_search_20250305", "name": "browser_search"},
+			{"name": "web_search", "description": "Local search", "input_schema": {"type":"object","properties":{}}}
+		],
+		"tool_choice": {"type":"tool","name":"web_search"},
+		"messages": [{"role": "user", "content": "hello"}]
+	}`
+
+	result := ConvertClaudeRequestToCodex("test-model", []byte(inputJSON), false)
+	resultJSON := gjson.ParseBytes(result)
+
+	if got := resultJSON.Get("tool_choice.type").String(); got != "function" {
+		t.Fatalf("tool_choice.type = %q, want function. Output: %s", got, string(result))
+	}
+	if got := resultJSON.Get("tool_choice.name").String(); got != "web_search" {
+		t.Fatalf("tool_choice.name = %q, want web_search. Output: %s", got, string(result))
+	}
+}
+
+func TestConvertClaudeRequestToCodex_AssistantThinkingSignatureToReasoningItem(t *testing.T) {
+	signature := validCodexReasoningSignature()
+	inputJSON := `{
+		"model": "claude-3-opus",
+		"messages": [
+			{
+				"role": "assistant",
+				"content": [
+					{
+						"type": "thinking",
+						"thinking": "visible summary must not be replayed",
+						"signature": "` + signature + `"
+					},
+					{
+						"type": "text",
+						"text": "visible answer"
+					}
+				]
+			},
+			{
+				"role": "user",
+				"content": "continue"
+			}
+		]
+	}`
+
+	result := ConvertClaudeRequestToCodex("test-model", []byte(inputJSON), false)
+	resultJSON := gjson.ParseBytes(result)
+	inputs := resultJSON.Get("input").Array()
+	if len(inputs) != 3 {
+		t.Fatalf("got %d input items, want 3. Output: %s", len(inputs), string(result))
+	}
+
+	reasoning := inputs[0]
+	if got := reasoning.Get("type").String(); got != "reasoning" {
+		t.Fatalf("first input type = %q, want reasoning. Output: %s", got, string(result))
+	}
+	if got := reasoning.Get("encrypted_content").String(); got != signature {
+		t.Fatalf("encrypted_content = %q, want %q", got, signature)
+	}
+	if got := reasoning.Get("summary").Raw; got != "[]" {
+		t.Fatalf("summary = %s, want []", got)
+	}
+	if got := reasoning.Get("content").Raw; got != "null" {
+		t.Fatalf("content = %s, want null", got)
+	}
+
+	assistantMessage := inputs[1]
+	if got := assistantMessage.Get("role").String(); got != "assistant" {
+		t.Fatalf("second input role = %q, want assistant. Output: %s", got, string(result))
+	}
+	if got := assistantMessage.Get("content.0.type").String(); got != "output_text" {
+		t.Fatalf("assistant content type = %q, want output_text", got)
+	}
+	if got := assistantMessage.Get("content.0.text").String(); got != "visible answer" {
+		t.Fatalf("assistant text = %q, want visible answer", got)
+	}
+	if strings.Contains(string(result), "visible summary must not be replayed") {
+		t.Fatalf("thinking text should not be replayed into Codex input. Output: %s", string(result))
+	}
+}
+
+func TestConvertClaudeRequestToCodex_IgnoresNonCodexThinkingSignatures(t *testing.T) {
+	tests := []struct {
+		name      string
+		inputJSON string
+	}{
+		{
+			name: "Ignore user thinking even with Codex-shaped signature",
+			inputJSON: `{
+				"model": "claude-3-opus",
+				"messages": [
+					{
+						"role": "user",
+						"content": [
+							{
+								"type": "thinking",
+								"thinking": "user supplied thinking",
+								"signature": "` + validCodexReasoningSignature() + `"
+							},
+							{
+								"type": "text",
+								"text": "hello"
+							}
+						]
+					}
+				]
+			}`,
+		},
+		{
+			name: "Ignore Anthropic native signature",
+			inputJSON: `{
+				"model": "claude-3-opus",
+				"messages": [
+					{
+						"role": "assistant",
+						"content": [
+							{
+								"type": "thinking",
+								"thinking": "anthropic thinking",
+								"signature": "Eo8Canthropic-state"
+							},
+							{
+								"type": "text",
+								"text": "visible answer"
+							}
+						]
+					}
+				]
+			}`,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := ConvertClaudeRequestToCodex("test-model", []byte(tt.inputJSON), false)
+			if got := countRequestInputItemsByType(result, "reasoning"); got != 0 {
+				t.Fatalf("got %d reasoning items, want 0. Output: %s", got, string(result))
+			}
+		})
+	}
+}
+
+func countRequestInputItemsByType(result []byte, itemType string) int {
+	count := 0
+	gjson.GetBytes(result, "input").ForEach(func(_, item gjson.Result) bool {
+		if item.Get("type").String() == itemType {
+			count++
+		}
+		return true
+	})
+	return count
+}
+
+func validCodexReasoningSignature() string {
+	raw := make([]byte, 1+8+16+16+32)
+	raw[0] = 0x80
+	raw[8] = 1
+	return base64.URLEncoding.EncodeToString(raw)
+}
diff --git a/internal/translator/codex/claude/codex_claude_response.go b/internal/translator/codex/claude/codex_claude_response.go
index 388b907ae9..7a40ca4c55 100644
--- a/internal/translator/codex/claude/codex_claude_response.go
+++ b/internal/translator/codex/claude/codex_claude_response.go
@@ -11,8 +11,8 @@ import (
 	"context"
 	"strings"
 
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
@@ -31,6 +31,7 @@ type ConvertCodexResponseToClaudeParams struct {
 	ThinkingBlockOpen   bool
 	ThinkingStopPending bool
 	ThinkingSignature   string
+	ThinkingSummarySeen bool
 }
 
 // ConvertCodexResponseToClaude performs sophisticated streaming response format conversion.
@@ -67,7 +68,7 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
 	params := (*param).(*ConvertCodexResponseToClaudeParams)
 	if params.ThinkingBlockOpen && params.ThinkingStopPending {
 		switch rootResult.Get("type").String() {
-		case "response.content_part.added", "response.completed":
+		case "response.content_part.added", "response.completed", "response.incomplete":
 			output = append(output, finalizeCodexThinkingBlock(params)...)
 		}
 	}
@@ -86,12 +87,8 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
 		if params.ThinkingBlockOpen && params.ThinkingStopPending {
 			output = append(output, finalizeCodexThinkingBlock(params)...)
 		}
-		template = []byte(`{"type":"content_block_start","index":0,"content_block":{"type":"thinking","thinking":""}}`)
-		template, _ = sjson.SetBytes(template, "index", params.BlockIndex)
-		params.ThinkingBlockOpen = true
-		params.ThinkingStopPending = false
-
-		output = translatorcommon.AppendSSEEventBytes(output, "content_block_start", template, 2)
+		params.ThinkingSummarySeen = true
+		output = append(output, startCodexThinkingBlock(params)...)
 	} else if typeStr == "response.reasoning_summary_text.delta" {
 		template = []byte(`{"type":"content_block_delta","index":0,"delta":{"type":"thinking_delta","thinking":""}}`)
 		template, _ = sjson.SetBytes(template, "index", params.BlockIndex)
@@ -100,9 +97,6 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
 		output = translatorcommon.AppendSSEEventBytes(output, "content_block_delta", template, 2)
 	} else if typeStr == "response.reasoning_summary_part.done" {
 		params.ThinkingStopPending = true
-		if params.ThinkingSignature != "" {
-			output = append(output, finalizeCodexThinkingBlock(params)...)
-		}
 	} else if typeStr == "response.content_part.added" {
 		template = []byte(`{"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}`)
 		template, _ = sjson.SetBytes(template, "index", params.BlockIndex)
@@ -123,18 +117,12 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
 		params.BlockIndex++
 
 		output = translatorcommon.AppendSSEEventBytes(output, "content_block_stop", template, 2)
-	} else if typeStr == "response.completed" {
+	} else if typeStr == "response.completed" || typeStr == "response.incomplete" {
 		template = []byte(`{"type":"message_delta","delta":{"stop_reason":"tool_use","stop_sequence":null},"usage":{"input_tokens":0,"output_tokens":0}}`)
-		p := params.HasToolCall
-		stopReason := rootResult.Get("response.stop_reason").String()
-		if p {
-			template, _ = sjson.SetBytes(template, "delta.stop_reason", "tool_use")
-		} else if stopReason == "max_tokens" || stopReason == "stop" {
-			template, _ = sjson.SetBytes(template, "delta.stop_reason", stopReason)
-		} else {
-			template, _ = sjson.SetBytes(template, "delta.stop_reason", "end_turn")
-		}
-		inputTokens, outputTokens, cachedTokens := extractResponsesUsage(rootResult.Get("response.usage"))
+		responseData := rootResult.Get("response")
+		template, _ = sjson.SetBytes(template, "delta.stop_reason", mapCodexStopReasonToClaude(codexStopReason(responseData), params.HasToolCall))
+		template = setClaudeStopSequence(template, "delta.stop_sequence", responseData)
+		inputTokens, outputTokens, cachedTokens := extractResponsesUsage(responseData.Get("usage"))
 		template, _ = sjson.SetBytes(template, "usage.input_tokens", inputTokens)
 		template, _ = sjson.SetBytes(template, "usage.output_tokens", outputTokens)
 		if cachedTokens > 0 {
@@ -169,10 +157,8 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
 			output = translatorcommon.AppendSSEEventBytes(output, "content_block_delta", template, 2)
 		} else if itemType == "reasoning" {
+			params.ThinkingSummarySeen = false
 			params.ThinkingSignature = itemResult.Get("encrypted_content").String()
-			if params.ThinkingStopPending {
-				output = append(output, finalizeCodexThinkingBlock(params)...)
-			}
 		}
 	} else if typeStr == "response.output_item.done" {
 		itemResult := rootResult.Get("item")
@@ -229,8 +215,13 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
 			if signature := itemResult.Get("encrypted_content").String(); signature != "" {
 				params.ThinkingSignature = signature
 			}
-			output = append(output, finalizeCodexThinkingBlock(params)...)
+			if params.ThinkingSummarySeen {
+				output = append(output, finalizeCodexThinkingBlock(params)...)
+			} else {
+				output = append(output, finalizeCodexSignatureOnlyThinkingBlock(params)...)
+			}
 			params.ThinkingSignature = ""
+			params.ThinkingSummarySeen = false
 		}
 	} else if typeStr == "response.function_call_arguments.delta" {
 		params.HasReceivedArgumentsDelta = true
@@ -262,7 +253,8 @@ func ConvertCodexResponseToClaudeNonStream(_ context.Context, _ string, original
 	revNames := buildReverseMapFromClaudeOriginalShortToOriginal(originalRequestRawJSON)
 
 	rootResult := gjson.ParseBytes(rawJSON)
-	if rootResult.Get("type").String() != "response.completed" {
+	typeStr := rootResult.Get("type").String()
+	if typeStr != "response.completed" && typeStr != "response.incomplete" {
 		return []byte{}
 	}
 
@@ -374,18 +366,57 @@ func ConvertCodexResponseToClaudeNonStream(_ context.Context, _ string, original
 		})
 	}
 
+	out, _ = sjson.SetBytes(out, "stop_reason", mapCodexStopReasonToClaude(codexStopReason(responseData), hasToolCall))
+	out = setClaudeStopSequence(out, "stop_sequence", responseData)
+
+	return out
+}
+
+func codexStopReason(responseData gjson.Result) string {
 	if stopReason := responseData.Get("stop_reason"); stopReason.Exists() && stopReason.String() != "" {
-		out, _ = sjson.SetBytes(out, "stop_reason", stopReason.String())
-	} else if hasToolCall {
-		out, _ = sjson.SetBytes(out, "stop_reason", "tool_use")
-	} else {
-		out, _ = sjson.SetBytes(out, "stop_reason", "end_turn")
+		if stopReason.String() == "stop" && codexStopSequence(responseData).String() != "" {
+			return "stop_sequence"
+		}
+		return stopReason.String()
+	}
+	if reason := responseData.Get("incomplete_details.reason"); reason.Exists() && reason.String() != "" {
+		return reason.String()
 	}
+	if codexStopSequence(responseData).String() != "" {
+		return "stop_sequence"
+	}
+	return ""
+}
 
-	if stopSequence := responseData.Get("stop_sequence"); stopSequence.Exists() && stopSequence.String() != "" {
-		out, _ = sjson.SetRawBytes(out, "stop_sequence", []byte(stopSequence.Raw))
+func mapCodexStopReasonToClaude(stopReason string, hasToolCall bool) string {
+	if hasToolCall {
+		return "tool_use"
 	}
+	switch stopReason {
+	case "", "stop", "completed":
+		return "end_turn"
+	case "max_tokens", "max_output_tokens":
+		return "max_tokens"
+	case "tool_use", "tool_calls", "function_call":
+		return "tool_use"
+	case "end_turn", "stop_sequence", "pause_turn", "refusal", "model_context_window_exceeded":
+		return stopReason
+	case "content_filter":
+		return "refusal"
+	default:
+		return "end_turn"
+	}
+}
+
+func codexStopSequence(responseData gjson.Result) gjson.Result {
+	return responseData.Get("stop_sequence")
+}
+
+func setClaudeStopSequence(out []byte, path string, responseData gjson.Result) []byte {
+	if stopSequence := codexStopSequence(responseData); stopSequence.Exists() && stopSequence.String() != "" {
+		out, _ = sjson.SetRawBytes(out, path, []byte(stopSequence.Raw))
+	}
 	return out
 }
 
@@ -437,6 +468,29 @@ func ClaudeTokenCount(_ context.Context, count int64) []byte {
 	return translatorcommon.ClaudeInputTokensJSON(count)
 }
 
+func startCodexThinkingBlock(params *ConvertCodexResponseToClaudeParams) []byte {
+	if params.ThinkingBlockOpen {
+		return nil
+	}
+
+	template := []byte(`{"type":"content_block_start","index":0,"content_block":{"type":"thinking","thinking":""}}`)
+	template, _ = sjson.SetBytes(template, "index", params.BlockIndex)
+	params.ThinkingBlockOpen = true
+	params.ThinkingStopPending = false
+
+	return translatorcommon.AppendSSEEventBytes(nil, "content_block_start", template, 2)
+}
+
+func finalizeCodexSignatureOnlyThinkingBlock(params *ConvertCodexResponseToClaudeParams) []byte {
+	if params.ThinkingSignature == "" {
+		return nil
+	}
+
+	output := startCodexThinkingBlock(params)
+	output = append(output, finalizeCodexThinkingBlock(params)...)
+	return output
+}
+
 func finalizeCodexThinkingBlock(params *ConvertCodexResponseToClaudeParams) []byte {
 	if !params.ThinkingBlockOpen {
 		return nil
diff --git a/internal/translator/codex/claude/codex_claude_response_test.go b/internal/translator/codex/claude/codex_claude_response_test.go
index c36c9edb68..565e8156bb 100644
--- a/internal/translator/codex/claude/codex_claude_response_test.go
+++ b/internal/translator/codex/claude/codex_claude_response_test.go
@@ -243,6 +243,147 @@ func TestConvertCodexResponseToClaude_StreamThinkingUsesEarlyCapturedSignatureWh
 	}
 }
 
+func TestConvertCodexResponseToClaude_StreamThinkingUsesFinalDoneSignature(t *testing.T) {
+	ctx := context.Background()
+	originalRequest := []byte(`{"messages":[]}`)
+	var param any
+
+	chunks := [][]byte{
+		[]byte("data: {\"type\":\"response.output_item.added\",\"item\":{\"type\":\"reasoning\",\"encrypted_content\":\"enc_sig_initial\"}}"),
+		[]byte("data: {\"type\":\"response.reasoning_summary_part.added\"}"),
+		[]byte("data: {\"type\":\"response.reasoning_summary_text.delta\",\"delta\":\"Let me think\"}"),
+		[]byte("data: {\"type\":\"response.reasoning_summary_part.done\"}"),
+		[]byte("data: {\"type\":\"response.output_item.done\",\"item\":{\"type\":\"reasoning\",\"encrypted_content\":\"enc_sig_final\"}}"),
+	}
+
+	var outputs [][]byte
+	for _, chunk := range chunks {
+		outputs = append(outputs, ConvertCodexResponseToClaude(ctx, "", originalRequest, nil, chunk, &param)...)
+	}
+
+	signatureDeltaCount := 0
+	events := []string{}
+	for _, out := range outputs {
+		for _, line := range strings.Split(string(out), "\n") {
+			if !strings.HasPrefix(line, "data: ") {
+				continue
+			}
+			data := gjson.Parse(strings.TrimPrefix(line, "data: "))
+			if data.Get("type").String() == "content_block_start" && data.Get("content_block.type").String() == "thinking" {
+				events = append(events, "thinking_start")
+			}
+			if data.Get("type").String() == "content_block_delta" && data.Get("delta.type").String() == "thinking_delta" {
+				events = append(events, "thinking_delta")
+			}
+			if data.Get("type").String() == "content_block_stop" && data.Get("index").Int() == 0 {
+				events = append(events, "thinking_stop")
+			}
+			if data.Get("type").String() != "content_block_delta" || data.Get("delta.type").String() != "signature_delta" {
+				continue
+			}
+			events = append(events, "signature_delta")
+			signatureDeltaCount++
+			if got := data.Get("delta.signature").String(); got != "enc_sig_final" {
+				t.Fatalf("signature delta = %q, want final done signature", got)
+			}
+		}
+	}
+
+	if signatureDeltaCount != 1 {
+		t.Fatalf("expected one signature_delta, got %d", signatureDeltaCount)
+	}
+	if got, want := strings.Join(events, ","), "thinking_start,thinking_delta,signature_delta,thinking_stop"; got != want {
+		t.Fatalf("thinking event order = %s, want %s", got, want)
+	}
+}
+
+func TestConvertCodexResponseToClaude_StreamSignatureOnlyReasoningEmitsThinkingSignature(t *testing.T) {
+	ctx := context.Background()
+	originalRequest := []byte(`{"messages":[]}`)
+	var param any
+
+	chunks := [][]byte{
+		[]byte("data: {\"type\":\"response.created\",\"response\":{\"id\":\"resp_123\",\"model\":\"gpt-5\"}}"),
+		[]byte("data: {\"type\":\"response.output_item.added\",\"item\":{\"type\":\"reasoning\",\"encrypted_content\":\"enc_sig_initial\"}}"),
+		[]byte("data: {\"type\":\"response.output_item.done\",\"item\":{\"type\":\"reasoning\",\"encrypted_content\":\"enc_sig_only\"}}"),
+		[]byte("data: {\"type\":\"response.content_part.added\"}"),
+		[]byte("data: {\"type\":\"response.output_text.delta\",\"delta\":\"ok\"}"),
+	}
+
+	var outputs [][]byte
+	for _, chunk := range chunks {
+		outputs = append(outputs, ConvertCodexResponseToClaude(ctx, "", originalRequest, nil, chunk, &param)...)
+	}
+
+	thinkingStartFound := false
+	thinkingDeltaFound := false
+	signatureDeltaFound := false
+	thinkingStopFound := false
+	textStartIndex := int64(-1)
+	events := []string{}
+
+	for _, out := range outputs {
+		for _, line := range strings.Split(string(out), "\n") {
+			if !strings.HasPrefix(line, "data: ") {
+				continue
+			}
+			data := gjson.Parse(strings.TrimPrefix(line, "data: "))
+			switch data.Get("type").String() {
+			case "content_block_start":
+				if data.Get("content_block.type").String() == "thinking" {
+					events = append(events, "thinking_start")
+					thinkingStartFound = true
+					if got := data.Get("index").Int(); got != 0 {
+						t.Fatalf("thinking block index = %d, want 0", got)
+					}
+				}
+				if data.Get("content_block.type").String() == "text" {
+					events = append(events, "text_start")
+					textStartIndex = data.Get("index").Int()
+				}
+			case "content_block_delta":
+				switch data.Get("delta.type").String() {
+				case "thinking_delta":
+					thinkingDeltaFound = true
+				case "signature_delta":
+					events = append(events, "signature_delta")
+					signatureDeltaFound = true
+					if got := data.Get("index").Int(); got != 0 {
+						t.Fatalf("signature delta index = %d, want 0", got)
+					}
+					if got := data.Get("delta.signature").String(); got != "enc_sig_only" {
+						t.Fatalf("unexpected signature delta: %q", got)
+					}
+				}
+			case "content_block_stop":
+				if data.Get("index").Int() == 0 {
+					events = append(events, "thinking_stop")
+					thinkingStopFound = true
+				}
+			}
+		}
+	}
+
+	if !thinkingStartFound {
+		t.Fatal("expected signature-only reasoning to start a thinking block")
+	}
+	if thinkingDeltaFound {
+		t.Fatal("did not expect thinking_delta when upstream omitted summary text")
+	}
+	if !signatureDeltaFound {
+		t.Fatal("expected signature_delta from encrypted_content-only reasoning")
+	}
+	if !thinkingStopFound {
+		t.Fatal("expected signature-only thinking block to stop")
+	}
+	if textStartIndex != 1 {
+		t.Fatalf("text block index = %d, want 1 after signature-only thinking block", textStartIndex)
+	}
+	if got, want := strings.Join(events, ","), "thinking_start,signature_delta,thinking_stop,text_start"; got != want {
+		t.Fatalf("signature-only event order = %s, want %s", got, want)
+	}
+}
+
 func TestConvertCodexResponseToClaudeNonStream_ThinkingIncludesSignature(t *testing.T) {
 	ctx := context.Background()
 	originalRequest := []byte(`{"messages":[]}`)
@@ -317,3 +458,207 @@ func TestConvertCodexResponseToClaude_StreamEmptyOutputUsesOutputItemDoneMessage
 		t.Fatalf("expected fallback content from response.output_item.done message; outputs=%q", outputs)
 	}
 }
+
+func TestConvertCodexResponseToClaude_StreamStopReasonMapping(t *testing.T) {
+	tests := []struct {
+		name       string
+		chunks     [][]byte
+		wantReason string
+	}{
+		{
+			name: "Stop maps to end_turn",
+			chunks: [][]byte{
+				[]byte("data: {\"type\":\"response.completed\",\"response\":{\"stop_reason\":\"stop\",\"usage\":{\"input_tokens\":1,\"output_tokens\":1}}}"),
+			},
+			wantReason: "end_turn",
+		},
+		{
+			name: "Incomplete max output maps to max_tokens",
+			chunks: [][]byte{
+				[]byte("data: {\"type\":\"response.incomplete\",\"response\":{\"incomplete_details\":{\"reason\":\"max_output_tokens\"},\"usage\":{\"input_tokens\":1,\"output_tokens\":1}}}"),
+			},
+			wantReason: "max_tokens",
+		},
+		{
+			name: "Tool call wins over stop",
+			chunks: [][]byte{
+				[]byte("data: {\"type\":\"response.output_item.added\",\"item\":{\"type\":\"function_call\",\"call_id\":\"call_1\",\"name\":\"lookup\"}}"),
+				[]byte("data: {\"type\":\"response.completed\",\"response\":{\"stop_reason\":\"stop\",\"usage\":{\"input_tokens\":1,\"output_tokens\":1}}}"),
+			},
+			wantReason: "tool_use",
+		},
+		{
+			name: "Content filter maps to Claude refusal",
+			chunks: [][]byte{
+				[]byte("data: {\"type\":\"response.incomplete\",\"response\":{\"incomplete_details\":{\"reason\":\"content_filter\"},\"usage\":{\"input_tokens\":1,\"output_tokens\":1}}}"),
+			},
+			wantReason: "refusal",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			ctx := context.Background()
+			originalRequest := []byte(`{"tools":[{"name":"lookup","input_schema":{"type":"object","properties":{}}}]}`)
+			var param any
+			var outputs [][]byte
+
+			for _, chunk := range tt.chunks {
+				outputs = append(outputs, ConvertCodexResponseToClaude(ctx, "", originalRequest, nil, chunk, &param)...)
+			}
+
+			got, ok := findClaudeStreamStopReason(outputs)
+			if !ok {
+				t.Fatalf("did not find message_delta stop_reason; outputs=%q", outputs)
+			}
+			if got != tt.wantReason {
+				t.Fatalf("stop_reason = %q, want %q. Outputs=%q", got, tt.wantReason, outputs)
+			}
+		})
+	}
+}
+
+func TestConvertCodexResponseToClaude_StreamStopSequenceMapping(t *testing.T) {
+	ctx := context.Background()
+	originalRequest := []byte(`{"messages":[]}`)
+	var param any
+
+	outputs := ConvertCodexResponseToClaude(ctx, "", originalRequest, nil, []byte("data: {\"type\":\"response.completed\",\"response\":{\"stop_reason\":\"stop\",\"stop_sequence\":\"\\nEND\",\"usage\":{\"input_tokens\":1,\"output_tokens\":1}}}"), &param)
+	messageDelta, ok := findClaudeStreamMessageDelta(outputs)
+	if !ok {
+		t.Fatalf("did not find message_delta; outputs=%q", outputs)
+	}
+	if got := messageDelta.Get("delta.stop_reason").String(); got != "stop_sequence" {
+		t.Fatalf("stop_reason = %q, want stop_sequence. Outputs=%q", got, outputs)
+	}
+	if got := messageDelta.Get("delta.stop_sequence").String(); got != "\nEND" {
+		t.Fatalf("stop_sequence = %q, want newline END. Outputs=%q", got, outputs)
+	}
+}
+
+func TestConvertCodexResponseToClaudeNonStream_StopReasonMapping(t *testing.T) {
+	tests := []struct {
+		name       string
+		response   []byte
+		wantReason string
+	}{
+		{
+			name: "Stop maps to end_turn",
+			response: []byte(`{
+				"type":"response.completed",
+				"response":{
+					"id":"resp_1",
+					"model":"gpt-5",
+					"stop_reason":"stop",
+					"usage":{"input_tokens":1,"output_tokens":1},
+					"output":[]
+				}
+			}`),
+			wantReason: "end_turn",
+		},
+		{
+			name: "Incomplete max output maps to max_tokens",
+			response: []byte(`{
+				"type":"response.incomplete",
+				"response":{
+					"id":"resp_1",
+					"model":"gpt-5",
+					"incomplete_details":{"reason":"max_output_tokens"},
+					"usage":{"input_tokens":1,"output_tokens":1},
+					"output":[]
+				}
+			}`),
+			wantReason: "max_tokens",
+		},
+		{
+			name: "Tool call wins over stop",
+			response: []byte(`{
+				"type":"response.completed",
+				"response":{
+					"id":"resp_1",
+					"model":"gpt-5",
+					"stop_reason":"stop",
+					"usage":{"input_tokens":1,"output_tokens":1},
+					"output":[{"type":"function_call","call_id":"call_1","name":"lookup","arguments":"{}"}]
+				}
+			}`),
+			wantReason: "tool_use",
+		},
+		{
+			name: "Content filter maps to Claude refusal",
+			response: []byte(`{
+				"type":"response.incomplete",
+				"response":{
+					"id":"resp_1",
+					"model":"gpt-5",
+					"incomplete_details":{"reason":"content_filter"},
+					"usage":{"input_tokens":1,"output_tokens":1},
+					"output":[]
+				}
+			}`),
+			wantReason: "refusal",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			ctx := context.Background()
+			originalRequest := []byte(`{"tools":[{"name":"lookup","input_schema":{"type":"object","properties":{}}}]}`)
+			out := ConvertCodexResponseToClaudeNonStream(ctx, "", originalRequest, nil, tt.response, nil)
+			parsed := gjson.ParseBytes(out)
+
+			if got := parsed.Get("stop_reason").String(); got != tt.wantReason {
+				t.Fatalf("stop_reason = %q, want %q. Output: %s", got, tt.wantReason, string(out))
+			}
+		})
+	}
+}
+
+func TestConvertCodexResponseToClaudeNonStream_StopSequenceMapping(t *testing.T) {
+	ctx := context.Background()
+	originalRequest := []byte(`{"messages":[]}`)
+	response := []byte(`{
+		"type":"response.completed",
+		"response":{
+			"id":"resp_1",
+			"model":"gpt-5",
+			"stop_reason":"stop",
+			"stop_sequence":"\nEND",
+			"usage":{"input_tokens":1,"output_tokens":1},
+			"output":[]
+		}
+	}`)
+
+	out := ConvertCodexResponseToClaudeNonStream(ctx, "", originalRequest, nil, response, nil)
+	parsed := gjson.ParseBytes(out)
+
+	if got := parsed.Get("stop_reason").String(); got != "stop_sequence" {
+		t.Fatalf("stop_reason = %q, want stop_sequence. Output: %s", got, string(out))
+	}
+	if got := parsed.Get("stop_sequence").String(); got != "\nEND" {
+		t.Fatalf("stop_sequence = %q, want newline END. Output: %s", got, string(out))
+	}
+}
+
+func findClaudeStreamStopReason(outputs [][]byte) (string, bool) {
+	messageDelta, ok := findClaudeStreamMessageDelta(outputs)
+	if !ok {
+		return "", false
+	}
+	return messageDelta.Get("delta.stop_reason").String(), true
+}
+
+func findClaudeStreamMessageDelta(outputs [][]byte) (gjson.Result, bool) {
+	for _, out := range outputs {
+		for _, line := range strings.Split(string(out), "\n") {
+			if !strings.HasPrefix(line, "data: ") {
+				continue
+			}
+			data := gjson.Parse(strings.TrimPrefix(line, "data: "))
+			if data.Get("type").String() == "message_delta" {
+				return data, true
+			}
+		}
+	}
+	return gjson.Result{}, false
+}
diff --git a/internal/translator/codex/claude/init.go b/internal/translator/codex/claude/init.go
index 7126edc303..af44b9dd49 100644
--- a/internal/translator/codex/claude/init.go
+++ b/internal/translator/codex/claude/init.go
@@ -1,9 +1,9 @@
 package claude
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/codex/gemini-cli/codex_gemini-cli_request.go b/internal/translator/codex/gemini-cli/codex_gemini-cli_request.go
index 8b32453d26..b69bab11ee 100644
--- a/internal/translator/codex/gemini-cli/codex_gemini-cli_request.go
+++ b/internal/translator/codex/gemini-cli/codex_gemini-cli_request.go
@@ -6,7 +6,7 @@
 package geminiCLI
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/gemini"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/codex/gemini"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/codex/gemini-cli/codex_gemini-cli_request_test.go b/internal/translator/codex/gemini-cli/codex_gemini-cli_request_test.go
new file mode 100644
index 0000000000..fc41452b10
--- /dev/null
+++ b/internal/translator/codex/gemini-cli/codex_gemini-cli_request_test.go
@@ -0,0 +1,78 @@
+package geminiCLI
+
+import (
+	"testing"
+
+	"github.com/tidwall/gjson"
+)
+
+func TestConvertGeminiCLIRequestToCodex_PreservesSchemaPropertyNamedType(t *testing.T) {
+	input := []byte(`{
+		"request": {
+			"tools": [
+				{
+					"functionDeclarations": [
+						{
+							"name": "ask_user",
+							"description": "Ask the user one or more questions.",
+							"parametersJsonSchema": {
+								"type": "object",
+								"properties": {
+									"questions": {
+										"type": "array",
+										"items": {
+											"type": "object",
+											"properties": {
+												"header": {
+													"type": "string"
+												},
+												"type": {
+													"default": "choice",
+													"description": "Question type.",
+													"enum": [
+														"choice",
+														"text",
+														"yesno"
+													],
+													"type": "string"
+												}
+											},
+											"required": [
"question", + "header", + "type" + ] + } + } + }, + "required": [ + "questions" + ] + } + } + ] + } + ] + } + }`) + + out := ConvertGeminiCLIRequestToCodex("gpt-5.2", input, true) + tool := gjson.GetBytes(out, "tools.0") + if got := tool.Get("type").String(); got != "function" { + t.Fatalf("expected tool type %q, got %q; output=%s", "function", got, string(out)) + } + + typeProperty := tool.Get("parameters.properties.questions.items.properties.type") + if !typeProperty.IsObject() { + t.Fatalf("expected schema property named type to stay an object; output=%s", string(out)) + } + if got := typeProperty.Get("type").String(); got != "string" { + t.Fatalf("expected schema property type %q, got %q; output=%s", "string", got, string(out)) + } + if got := typeProperty.Get("default").String(); got != "choice" { + t.Fatalf("expected default %q, got %q; output=%s", "choice", got, string(out)) + } + if got := typeProperty.Get("enum.2").String(); got != "yesno" { + t.Fatalf("expected enum value %q, got %q; output=%s", "yesno", got, string(out)) + } +} diff --git a/internal/translator/codex/gemini-cli/codex_gemini-cli_response.go b/internal/translator/codex/gemini-cli/codex_gemini-cli_response.go index 0f0068c842..01dbc0f831 100644 --- a/internal/translator/codex/gemini-cli/codex_gemini-cli_response.go +++ b/internal/translator/codex/gemini-cli/codex_gemini-cli_response.go @@ -7,8 +7,8 @@ package geminiCLI import ( "context" - . "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/gemini" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/codex/gemini" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" ) // ConvertCodexResponseToGeminiCLI converts Codex streaming response format to Gemini CLI format. 
diff --git a/internal/translator/codex/gemini-cli/init.go b/internal/translator/codex/gemini-cli/init.go index 8bcd3de5fd..2958e0a825 100644 --- a/internal/translator/codex/gemini-cli/init.go +++ b/internal/translator/codex/gemini-cli/init.go @@ -1,9 +1,9 @@ package geminiCLI import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/codex/gemini/codex_gemini_request.go b/internal/translator/codex/gemini/codex_gemini_request.go index 23dae7d71e..5789890f20 100644 --- a/internal/translator/codex/gemini/codex_gemini_request.go +++ b/internal/translator/codex/gemini/codex_gemini_request.go @@ -12,8 +12,8 @@ import ( "strconv" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -284,7 +284,11 @@ func ConvertGeminiRequestToCodex(modelName string, inputRawJSON []byte, _ bool) util.Walk(toolsResult, "", "type", &pathsToLower) for _, p := range pathsToLower { fullPath := fmt.Sprintf("tools.%s", p) - out, _ = sjson.SetBytes(out, fullPath, strings.ToLower(gjson.GetBytes(out, fullPath).String())) + typeValue := gjson.GetBytes(out, fullPath) + if typeValue.Type != gjson.String { + continue + } + out, _ = sjson.SetBytes(out, fullPath, strings.ToLower(typeValue.String())) } return out diff --git a/internal/translator/codex/gemini/codex_gemini_response.go b/internal/translator/codex/gemini/codex_gemini_response.go index 
f6ef87710a..ecf9cf4de8 100644 --- a/internal/translator/codex/gemini/codex_gemini_response.go +++ b/internal/translator/codex/gemini/codex_gemini_response.go @@ -7,9 +7,11 @@ package gemini import ( "bytes" "context" + "crypto/sha256" + "strings" "time" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -25,6 +27,7 @@ type ConvertCodexResponseToGeminiParams struct { ResponseID string LastStorageOutput []byte HasOutputTextDelta bool + LastImageHashByID map[string][32]byte } // ConvertCodexResponseToGemini converts Codex streaming response format to Gemini format. @@ -48,6 +51,7 @@ func ConvertCodexResponseToGemini(_ context.Context, modelName string, originalR ResponseID: "", LastStorageOutput: nil, HasOutputTextDelta: false, + LastImageHashByID: make(map[string][32]byte), } } @@ -74,10 +78,63 @@ func ConvertCodexResponseToGemini(_ context.Context, modelName string, originalR template, _ = sjson.SetBytes(template, "responseId", params.ResponseID) } + if typeStr == "response.image_generation_call.partial_image" { + itemID := rootResult.Get("item_id").String() + b64 := rootResult.Get("partial_image_b64").String() + if b64 == "" { + return [][]byte{} + } + if itemID != "" { + if params.LastImageHashByID == nil { + params.LastImageHashByID = make(map[string][32]byte) + } + hash := sha256.Sum256([]byte(b64)) + if last, ok := params.LastImageHashByID[itemID]; ok && last == hash { + return [][]byte{} + } + params.LastImageHashByID[itemID] = hash + } + + outputFormat := rootResult.Get("output_format").String() + mimeType := mimeTypeFromCodexOutputFormat(outputFormat) + + part := []byte(`{"inlineData":{"data":"","mimeType":""}}`) + part, _ = sjson.SetBytes(part, "inlineData.data", b64) + part, _ = sjson.SetBytes(part, "inlineData.mimeType", mimeType) + template, _ = sjson.SetRawBytes(template, 
"candidates.0.content.parts.-1", part) + return [][]byte{template} + } + // Handle function call completion if typeStr == "response.output_item.done" { itemResult := rootResult.Get("item") itemType := itemResult.Get("type").String() + if itemType == "image_generation_call" { + itemID := itemResult.Get("id").String() + b64 := itemResult.Get("result").String() + if b64 == "" { + return [][]byte{} + } + if itemID != "" { + if params.LastImageHashByID == nil { + params.LastImageHashByID = make(map[string][32]byte) + } + hash := sha256.Sum256([]byte(b64)) + if last, ok := params.LastImageHashByID[itemID]; ok && last == hash { + return [][]byte{} + } + params.LastImageHashByID[itemID] = hash + } + + outputFormat := itemResult.Get("output_format").String() + mimeType := mimeTypeFromCodexOutputFormat(outputFormat) + + part := []byte(`{"inlineData":{"data":"","mimeType":""}}`) + part, _ = sjson.SetBytes(part, "inlineData.data", b64) + part, _ = sjson.SetBytes(part, "inlineData.mimeType", mimeType) + template, _ = sjson.SetRawBytes(template, "candidates.0.content.parts.-1", part) + return [][]byte{template} + } if itemType == "function_call" { // Create function call part functionCall := []byte(`{"functionCall":{"name":"","args":{}}}`) @@ -270,6 +327,20 @@ func ConvertCodexResponseToGeminiNonStream(_ context.Context, modelName string, }) } + case "image_generation_call": + flushPendingFunctionCalls() + b64 := value.Get("result").String() + if b64 == "" { + break + } + outputFormat := value.Get("output_format").String() + mimeType := mimeTypeFromCodexOutputFormat(outputFormat) + + part := []byte(`{"inlineData":{"data":"","mimeType":""}}`) + part, _ = sjson.SetBytes(part, "inlineData.data", b64) + part, _ = sjson.SetBytes(part, "inlineData.mimeType", mimeType) + template, _ = sjson.SetRawBytes(template, "candidates.0.content.parts.-1", part) + case "function_call": // Collect function call for potential merging with consecutive ones hasToolCall = true @@ -342,3 +413,24 @@ func 
buildReverseMapFromGeminiOriginal(original []byte) map[string]string { func GeminiTokenCount(ctx context.Context, count int64) []byte { return translatorcommon.GeminiTokenCountJSON(count) } + +func mimeTypeFromCodexOutputFormat(outputFormat string) string { + if outputFormat == "" { + return "image/png" + } + if strings.Contains(outputFormat, "/") { + return outputFormat + } + switch strings.ToLower(outputFormat) { + case "png": + return "image/png" + case "jpg", "jpeg": + return "image/jpeg" + case "webp": + return "image/webp" + case "gif": + return "image/gif" + default: + return "image/png" + } +} diff --git a/internal/translator/codex/gemini/codex_gemini_response_test.go b/internal/translator/codex/gemini/codex_gemini_response_test.go index b8f227beb5..547ee84715 100644 --- a/internal/translator/codex/gemini/codex_gemini_response_test.go +++ b/internal/translator/codex/gemini/codex_gemini_response_test.go @@ -33,3 +33,79 @@ func TestConvertCodexResponseToGemini_StreamEmptyOutputUsesOutputItemDoneMessage t.Fatalf("expected fallback content from response.output_item.done message; outputs=%q", outputs) } } + +func TestConvertCodexResponseToGemini_StreamPartialImageEmitsInlineData(t *testing.T) { + ctx := context.Background() + originalRequest := []byte(`{"tools":[]}`) + var param any + + chunk := []byte(`data: {"type":"response.image_generation_call.partial_image","item_id":"ig_123","output_format":"png","partial_image_b64":"aGVsbG8=","partial_image_index":0}`) + out := ConvertCodexResponseToGemini(ctx, "gemini-2.5-pro", originalRequest, nil, chunk, ¶m) + if len(out) != 1 { + t.Fatalf("expected 1 chunk, got %d", len(out)) + } + + got := gjson.GetBytes(out[0], "candidates.0.content.parts.0.inlineData.data").String() + if got != "aGVsbG8=" { + t.Fatalf("expected inlineData.data %q, got %q; chunk=%s", "aGVsbG8=", got, string(out[0])) + } + + gotMime := gjson.GetBytes(out[0], "candidates.0.content.parts.0.inlineData.mimeType").String() + if gotMime != "image/png" { + 
t.Fatalf("expected inlineData.mimeType %q, got %q; chunk=%s", "image/png", gotMime, string(out[0])) + } + + out = ConvertCodexResponseToGemini(ctx, "gemini-2.5-pro", originalRequest, nil, chunk, ¶m) + if len(out) != 0 { + t.Fatalf("expected duplicate image chunk to be suppressed, got %d", len(out)) + } +} + +func TestConvertCodexResponseToGemini_StreamImageGenerationCallDoneEmitsInlineData(t *testing.T) { + ctx := context.Background() + originalRequest := []byte(`{"tools":[]}`) + var param any + + out := ConvertCodexResponseToGemini(ctx, "gemini-2.5-pro", originalRequest, nil, []byte(`data: {"type":"response.image_generation_call.partial_image","item_id":"ig_123","output_format":"png","partial_image_b64":"aGVsbG8=","partial_image_index":0}`), ¶m) + if len(out) != 1 { + t.Fatalf("expected 1 chunk, got %d", len(out)) + } + + out = ConvertCodexResponseToGemini(ctx, "gemini-2.5-pro", originalRequest, nil, []byte(`data: {"type":"response.output_item.done","item":{"id":"ig_123","type":"image_generation_call","output_format":"png","result":"aGVsbG8="}}`), ¶m) + if len(out) != 0 { + t.Fatalf("expected output_item.done to be suppressed when identical to last partial image, got %d", len(out)) + } + + out = ConvertCodexResponseToGemini(ctx, "gemini-2.5-pro", originalRequest, nil, []byte(`data: {"type":"response.output_item.done","item":{"id":"ig_123","type":"image_generation_call","output_format":"jpeg","result":"Ymll"}}`), ¶m) + if len(out) != 1 { + t.Fatalf("expected 1 chunk, got %d", len(out)) + } + + got := gjson.GetBytes(out[0], "candidates.0.content.parts.0.inlineData.data").String() + if got != "Ymll" { + t.Fatalf("expected inlineData.data %q, got %q; chunk=%s", "Ymll", got, string(out[0])) + } + + gotMime := gjson.GetBytes(out[0], "candidates.0.content.parts.0.inlineData.mimeType").String() + if gotMime != "image/jpeg" { + t.Fatalf("expected inlineData.mimeType %q, got %q; chunk=%s", "image/jpeg", gotMime, string(out[0])) + } +} + +func 
TestConvertCodexResponseToGemini_NonStreamImageGenerationCallAddsInlineDataPart(t *testing.T) { + ctx := context.Background() + originalRequest := []byte(`{"tools":[]}`) + + raw := []byte(`{"type":"response.completed","response":{"id":"resp_123","created_at":1700000000,"usage":{"input_tokens":1,"output_tokens":1},"output":[{"type":"message","content":[{"type":"output_text","text":"ok"}]},{"type":"image_generation_call","output_format":"png","result":"aGVsbG8="}]}}`) + out := ConvertCodexResponseToGeminiNonStream(ctx, "gemini-2.5-pro", originalRequest, nil, raw, nil) + + got := gjson.GetBytes(out, "candidates.0.content.parts.1.inlineData.data").String() + if got != "aGVsbG8=" { + t.Fatalf("expected inlineData.data %q, got %q; chunk=%s", "aGVsbG8=", got, string(out)) + } + + gotMime := gjson.GetBytes(out, "candidates.0.content.parts.1.inlineData.mimeType").String() + if gotMime != "image/png" { + t.Fatalf("expected inlineData.mimeType %q, got %q; chunk=%s", "image/png", gotMime, string(out)) + } +} diff --git a/internal/translator/codex/gemini/init.go b/internal/translator/codex/gemini/init.go index 41d30559a6..b670d8d9b4 100644 --- a/internal/translator/codex/gemini/init.go +++ b/internal/translator/codex/gemini/init.go @@ -1,9 +1,9 @@ package gemini import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . 
"github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/codex/openai/chat-completions/codex_openai_request.go b/internal/translator/codex/openai/chat-completions/codex_openai_request.go index 6cc701e707..569e06e316 100644 --- a/internal/translator/codex/openai/chat-completions/codex_openai_request.go +++ b/internal/translator/codex/openai/chat-completions/codex_openai_request.go @@ -121,13 +121,13 @@ func ConvertOpenAIRequestToCodex(modelName string, inputRawJSON []byte, stream b case "tool": // Handle tool response messages as top-level function_call_output objects toolCallID := m.Get("tool_call_id").String() - content := m.Get("content").String() + content := m.Get("content") // Create function_call_output object funcOutput := []byte(`{}`) funcOutput, _ = sjson.SetBytes(funcOutput, "type", "function_call_output") funcOutput, _ = sjson.SetBytes(funcOutput, "call_id", toolCallID) - funcOutput, _ = sjson.SetBytes(funcOutput, "output", content) + funcOutput = setToolCallOutputContent(funcOutput, content) out, _ = sjson.SetRawBytes(out, "input.-1", funcOutput) default: @@ -359,6 +359,91 @@ func ConvertOpenAIRequestToCodex(modelName string, inputRawJSON []byte, stream b return out } +func setToolCallOutputContent(funcOutput []byte, content gjson.Result) []byte { + switch { + case content.Type == gjson.String: + funcOutput, _ = sjson.SetBytes(funcOutput, "output", content.String()) + case content.IsArray(): + output := []byte(`[]`) + for _, item := range content.Array() { + output = appendToolOutputContentPart(output, item) + } + funcOutput, _ = sjson.SetRawBytes(funcOutput, "output", output) + default: + fallbackOutput := content.Raw + if fallbackOutput == "" { + fallbackOutput = content.String() + } + funcOutput, _ = sjson.SetBytes(funcOutput, "output", fallbackOutput) + } + 
return funcOutput +} + +func appendToolOutputContentPart(output []byte, item gjson.Result) []byte { + switch item.Get("type").String() { + case "text": + part := []byte(`{}`) + part, _ = sjson.SetBytes(part, "type", "input_text") + part, _ = sjson.SetBytes(part, "text", item.Get("text").String()) + output, _ = sjson.SetRawBytes(output, "-1", part) + case "image_url": + imageURL := item.Get("image_url.url").String() + fileID := item.Get("image_url.file_id").String() + if imageURL == "" && fileID == "" { + return appendToolOutputFallbackPart(output, item) + } + part := []byte(`{}`) + part, _ = sjson.SetBytes(part, "type", "input_image") + if imageURL != "" { + part, _ = sjson.SetBytes(part, "image_url", imageURL) + } + if fileID != "" { + part, _ = sjson.SetBytes(part, "file_id", fileID) + } + if detail := item.Get("image_url.detail").String(); detail != "" { + part, _ = sjson.SetBytes(part, "detail", detail) + } + output, _ = sjson.SetRawBytes(output, "-1", part) + case "file": + fileID := item.Get("file.file_id").String() + fileData := item.Get("file.file_data").String() + fileURL := item.Get("file.file_url").String() + if fileID == "" && fileData == "" && fileURL == "" { + return appendToolOutputFallbackPart(output, item) + } + part := []byte(`{}`) + part, _ = sjson.SetBytes(part, "type", "input_file") + if fileID != "" { + part, _ = sjson.SetBytes(part, "file_id", fileID) + } + if fileData != "" { + part, _ = sjson.SetBytes(part, "file_data", fileData) + } + if fileURL != "" { + part, _ = sjson.SetBytes(part, "file_url", fileURL) + } + if filename := item.Get("file.filename").String(); filename != "" { + part, _ = sjson.SetBytes(part, "filename", filename) + } + output, _ = sjson.SetRawBytes(output, "-1", part) + default: + output = appendToolOutputFallbackPart(output, item) + } + return output +} + +func appendToolOutputFallbackPart(output []byte, item gjson.Result) []byte { + text := item.Raw + if text == "" { + text = item.String() + } + part := []byte(`{}`) + 
part, _ = sjson.SetBytes(part, "type", "input_text") + part, _ = sjson.SetBytes(part, "text", text) + output, _ = sjson.SetRawBytes(output, "-1", part) + return output +} + // shortenNameIfNeeded applies the simple shortening rule for a single name. // If the name length exceeds 64, it will try to preserve the "mcp__" prefix and last segment. // Otherwise it truncates to 64 characters. diff --git a/internal/translator/codex/openai/chat-completions/codex_openai_request_test.go b/internal/translator/codex/openai/chat-completions/codex_openai_request_test.go index 84c8dad2cc..e31db6d373 100644 --- a/internal/translator/codex/openai/chat-completions/codex_openai_request_test.go +++ b/internal/translator/codex/openai/chat-completions/codex_openai_request_test.go @@ -176,6 +176,182 @@ func TestToolCallWithContent(t *testing.T) { } } +func TestToolCallOutputWithMultimodalContent(t *testing.T) { + input := []byte(`{ + "model": "gpt-4o", + "messages": [ + {"role": "user", "content": "Show me the generated result."}, + { + "role": "assistant", + "content": null, + "tool_calls": [ + { + "id": "call_output_1", + "type": "function", + "function": {"name": "render_output", "arguments": "{}"} + } + ] + }, + { + "role": "tool", + "tool_call_id": "call_output_1", + "content": [ + {"type":"text","text":"Rendered result attached."}, + {"type":"image_url","image_url":{"url":"https://example.com/generated.png","detail":"high"}}, + {"type":"image_url","image_url":{"file_id":"file-img-123"}}, + {"type":"file","file":{"file_id":"file-doc-123","filename":"doc.pdf"}}, + {"type":"file","file":{"file_data":"SGVsbG8=","filename":"inline.txt"}}, + {"type":"file","file":{"file_url":"https://example.com/report.pdf","filename":"report.pdf"}} + ] + } + ], + "tools": [ + { + "type": "function", + "function": {"name": "render_output", "description": "Render output", "parameters": {"type": "object", "properties": {}}} + } + ] + }`) + + out := ConvertOpenAIRequestToCodex("gpt-4o", input, true) + result 
:= string(out) + + output := gjson.Get(result, "input.2.output") + if !output.IsArray() { + t.Fatalf("expected tool output to be an array, got: %s", output.Raw) + } + + parts := output.Array() + if len(parts) != 6 { + t.Fatalf("expected 6 output parts, got %d: %s", len(parts), output.Raw) + } + if parts[0].Get("type").String() != "input_text" || parts[0].Get("text").String() != "Rendered result attached." { + t.Fatalf("part 0: expected input_text with rendered text, got %s", parts[0].Raw) + } + if parts[1].Get("type").String() != "input_image" { + t.Fatalf("part 1: expected input_image, got %s", parts[1].Raw) + } + if parts[1].Get("image_url").String() != "https://example.com/generated.png" { + t.Errorf("part 1: unexpected image_url %s", parts[1].Get("image_url").String()) + } + if parts[1].Get("detail").String() != "high" { + t.Errorf("part 1: unexpected detail %s", parts[1].Get("detail").String()) + } + if parts[2].Get("type").String() != "input_image" || parts[2].Get("file_id").String() != "file-img-123" { + t.Fatalf("part 2: expected file_id-backed input_image, got %s", parts[2].Raw) + } + if parts[3].Get("type").String() != "input_file" || parts[3].Get("file_id").String() != "file-doc-123" { + t.Fatalf("part 3: expected file_id-backed input_file, got %s", parts[3].Raw) + } + if parts[3].Get("filename").String() != "doc.pdf" { + t.Errorf("part 3: unexpected filename %s", parts[3].Get("filename").String()) + } + if parts[4].Get("type").String() != "input_file" || parts[4].Get("file_data").String() != "SGVsbG8=" { + t.Fatalf("part 4: expected file_data-backed input_file, got %s", parts[4].Raw) + } + if parts[5].Get("type").String() != "input_file" || parts[5].Get("file_url").String() != "https://example.com/report.pdf" { + t.Fatalf("part 5: expected file_url-backed input_file, got %s", parts[5].Raw) + } +} + +func TestToolCallOutputFallsBackForInvalidStructuredParts(t *testing.T) { + input := []byte(`{ + "model": "gpt-4o", + "messages": [ + {"role": "user", 
"content": "Check tool output."}, + { + "role": "assistant", + "content": null, + "tool_calls": [ + {"id": "call_invalid_parts", "type": "function", "function": {"name": "inspect", "arguments": "{}"}} + ] + }, + { + "role": "tool", + "tool_call_id": "call_invalid_parts", + "content": [ + {"type":"image_url","image_url":{"detail":"low"}}, + {"type":"file","file":{"filename":"orphan.txt"}}, + {"type":"unknown_type","foo":"bar","nested":{"a":1}} + ] + } + ], + "tools": [ + {"type": "function", "function": {"name": "inspect", "description": "Inspect", "parameters": {"type": "object", "properties": {}}}} + ] + }`) + + out := ConvertOpenAIRequestToCodex("gpt-4o", input, true) + result := string(out) + + parts := gjson.Get(result, "input.2.output").Array() + if len(parts) != 3 { + t.Fatalf("expected 3 output parts, got %d: %s", len(parts), gjson.Get(result, "input.2.output").Raw) + } + + expectedFallbacks := []string{ + `{"type":"image_url","image_url":{"detail":"low"}}`, + `{"type":"file","file":{"filename":"orphan.txt"}}`, + `{"type":"unknown_type","foo":"bar","nested":{"a":1}}`, + } + for i, expectedFallback := range expectedFallbacks { + if parts[i].Get("type").String() != "input_text" { + t.Fatalf("part %d: expected input_text fallback, got %s", i, parts[i].Raw) + } + if parts[i].Get("text").String() != expectedFallback { + t.Fatalf("part %d: expected fallback %s, got %s", i, expectedFallback, parts[i].Get("text").String()) + } + } +} + +func TestToolCallOutputWithNonStringJSONContent(t *testing.T) { + tests := []struct { + name string + content string + expectedOutput string + }{ + {name: "null", content: `null`, expectedOutput: `null`}, + {name: "object", content: `{"status":"ok","count":2}`, expectedOutput: `{"status":"ok","count":2}`}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + input := []byte(`{ + "model": "gpt-4o", + "messages": [ + {"role": "user", "content": "Check tool output."}, + { + "role": "assistant", + "content": null, + 
"tool_calls": [ + {"id": "call_json", "type": "function", "function": {"name": "inspect", "arguments": "{}"}} + ] + }, + { + "role": "tool", + "tool_call_id": "call_json", + "content": ` + tt.content + ` + } + ], + "tools": [ + {"type": "function", "function": {"name": "inspect", "description": "Inspect", "parameters": {"type": "object", "properties": {}}}} + ] + }`) + + out := ConvertOpenAIRequestToCodex("gpt-4o", input, true) + result := string(out) + + output := gjson.Get(result, "input.2.output") + if !output.Exists() { + t.Fatalf("expected output field to exist: %s", gjson.Get(result, "input.2").Raw) + } + if output.String() != tt.expectedOutput { + t.Fatalf("expected output %s, got %s", tt.expectedOutput, output.String()) + } + }) + } +} + // Parallel tool calls: assistant invokes 3 tools at once, all call_ids // and outputs must be translated and paired correctly. func TestMultipleToolCalls(t *testing.T) { diff --git a/internal/translator/codex/openai/chat-completions/codex_openai_response.go b/internal/translator/codex/openai/chat-completions/codex_openai_response.go index afae35d48d..75b5b848b3 100644 --- a/internal/translator/codex/openai/chat-completions/codex_openai_response.go +++ b/internal/translator/codex/openai/chat-completions/codex_openai_response.go @@ -8,6 +8,8 @@ package chat_completions import ( "bytes" "context" + "crypto/sha256" + "strings" "time" "github.com/tidwall/gjson" @@ -26,6 +28,7 @@ type ConvertCliToOpenAIParams struct { FunctionCallIndex int HasReceivedArgumentsDelta bool HasToolCallAnnounced bool + LastImageHashByItemID map[string][32]byte } // ConvertCodexResponseToOpenAI translates a single chunk of a streaming response from the @@ -51,6 +54,7 @@ func ConvertCodexResponseToOpenAI(_ context.Context, modelName string, originalR FunctionCallIndex: -1, HasReceivedArgumentsDelta: false, HasToolCallAnnounced: false, + LastImageHashByItemID: make(map[string][32]byte), } } @@ -70,6 +74,9 @@ func ConvertCodexResponseToOpenAI(_ 
context.Context, modelName string, originalR (*param).(*ConvertCliToOpenAIParams).ResponseID = rootResult.Get("response.id").String() (*param).(*ConvertCliToOpenAIParams).CreatedAt = rootResult.Get("response.created_at").Int() (*param).(*ConvertCliToOpenAIParams).Model = rootResult.Get("response.model").String() + if (*param).(*ConvertCliToOpenAIParams).LastImageHashByItemID == nil { + (*param).(*ConvertCliToOpenAIParams).LastImageHashByItemID = make(map[string][32]byte) + } return [][]byte{} } @@ -120,6 +127,39 @@ func ConvertCodexResponseToOpenAI(_ context.Context, modelName string, originalR template, _ = sjson.SetBytes(template, "choices.0.delta.role", "assistant") template, _ = sjson.SetBytes(template, "choices.0.delta.content", deltaResult.String()) } + } else if dataType == "response.image_generation_call.partial_image" { + itemID := rootResult.Get("item_id").String() + b64 := rootResult.Get("partial_image_b64").String() + if b64 == "" { + return [][]byte{} + } + if itemID != "" { + p := (*param).(*ConvertCliToOpenAIParams) + if p.LastImageHashByItemID == nil { + p.LastImageHashByItemID = make(map[string][32]byte) + } + hash := sha256.Sum256([]byte(b64)) + if last, ok := p.LastImageHashByItemID[itemID]; ok && last == hash { + return [][]byte{} + } + p.LastImageHashByItemID[itemID] = hash + } + + outputFormat := rootResult.Get("output_format").String() + mimeType := mimeTypeFromCodexOutputFormat(outputFormat) + imageURL := "data:" + mimeType + ";base64," + b64 + + imagesResult := gjson.GetBytes(template, "choices.0.delta.images") + if !imagesResult.Exists() || !imagesResult.IsArray() { + template, _ = sjson.SetRawBytes(template, "choices.0.delta.images", []byte(`[]`)) + } + imageIndex := len(gjson.GetBytes(template, "choices.0.delta.images").Array()) + imagePayload := []byte(`{"type":"image_url","image_url":{"url":""}}`) + imagePayload, _ = sjson.SetBytes(imagePayload, "index", imageIndex) + imagePayload, _ = sjson.SetBytes(imagePayload, "image_url.url", 
imageURL) + + template, _ = sjson.SetBytes(template, "choices.0.delta.role", "assistant") + template, _ = sjson.SetRawBytes(template, "choices.0.delta.images.-1", imagePayload) } else if dataType == "response.completed" { finishReason := "stop" if (*param).(*ConvertCliToOpenAIParams).FunctionCallIndex != -1 { @@ -183,7 +223,46 @@ func ConvertCodexResponseToOpenAI(_ context.Context, modelName string, originalR } else if dataType == "response.output_item.done" { itemResult := rootResult.Get("item") - if !itemResult.Exists() || itemResult.Get("type").String() != "function_call" { + if !itemResult.Exists() { + return [][]byte{} + } + itemType := itemResult.Get("type").String() + if itemType == "image_generation_call" { + itemID := itemResult.Get("id").String() + b64 := itemResult.Get("result").String() + if b64 == "" { + return [][]byte{} + } + if itemID != "" { + p := (*param).(*ConvertCliToOpenAIParams) + if p.LastImageHashByItemID == nil { + p.LastImageHashByItemID = make(map[string][32]byte) + } + hash := sha256.Sum256([]byte(b64)) + if last, ok := p.LastImageHashByItemID[itemID]; ok && last == hash { + return [][]byte{} + } + p.LastImageHashByItemID[itemID] = hash + } + + outputFormat := itemResult.Get("output_format").String() + mimeType := mimeTypeFromCodexOutputFormat(outputFormat) + imageURL := "data:" + mimeType + ";base64," + b64 + + imagesResult := gjson.GetBytes(template, "choices.0.delta.images") + if !imagesResult.Exists() || !imagesResult.IsArray() { + template, _ = sjson.SetRawBytes(template, "choices.0.delta.images", []byte(`[]`)) + } + imageIndex := len(gjson.GetBytes(template, "choices.0.delta.images").Array()) + imagePayload := []byte(`{"type":"image_url","image_url":{"url":""}}`) + imagePayload, _ = sjson.SetBytes(imagePayload, "index", imageIndex) + imagePayload, _ = sjson.SetBytes(imagePayload, "image_url.url", imageURL) + + template, _ = sjson.SetBytes(template, "choices.0.delta.role", "assistant") + template, _ = sjson.SetRawBytes(template, 
"choices.0.delta.images.-1", imagePayload) + return [][]byte{template} + } + if itemType != "function_call" { return [][]byte{} } @@ -285,6 +364,7 @@ func ConvertCodexResponseToOpenAINonStream(_ context.Context, _ string, original // Process the output array for content and function calls var toolCalls [][]byte + var images [][]byte outputResult := responseResult.Get("output") if outputResult.IsArray() { outputArray := outputResult.Array() @@ -339,6 +419,19 @@ func ConvertCodexResponseToOpenAINonStream(_ context.Context, _ string, original } toolCalls = append(toolCalls, functionCallTemplate) + case "image_generation_call": + b64 := outputItem.Get("result").String() + if b64 == "" { + break + } + outputFormat := outputItem.Get("output_format").String() + mimeType := mimeTypeFromCodexOutputFormat(outputFormat) + imageURL := "data:" + mimeType + ";base64," + b64 + + imagePayload := []byte(`{"type":"image_url","image_url":{"url":""}}`) + imagePayload, _ = sjson.SetBytes(imagePayload, "index", len(images)) + imagePayload, _ = sjson.SetBytes(imagePayload, "image_url.url", imageURL) + images = append(images, imagePayload) } } @@ -361,6 +454,15 @@ func ConvertCodexResponseToOpenAINonStream(_ context.Context, _ string, original } template, _ = sjson.SetBytes(template, "choices.0.message.role", "assistant") } + + // Add images if any + if len(images) > 0 { + template, _ = sjson.SetRawBytes(template, "choices.0.message.images", []byte(`[]`)) + for _, image := range images { + template, _ = sjson.SetRawBytes(template, "choices.0.message.images.-1", image) + } + template, _ = sjson.SetBytes(template, "choices.0.message.role", "assistant") + } } // Extract and set the finish reason based on status @@ -409,3 +511,24 @@ func buildReverseMapFromOriginalOpenAI(original []byte) map[string]string { } return rev } + +func mimeTypeFromCodexOutputFormat(outputFormat string) string { + if outputFormat == "" { + return "image/png" + } + if strings.Contains(outputFormat, "/") { + return 
outputFormat + } + switch strings.ToLower(outputFormat) { + case "png": + return "image/png" + case "jpg", "jpeg": + return "image/jpeg" + case "webp": + return "image/webp" + case "gif": + return "image/gif" + default: + return "image/png" + } +} diff --git a/internal/translator/codex/openai/chat-completions/codex_openai_response_test.go b/internal/translator/codex/openai/chat-completions/codex_openai_response_test.go index 534884c229..a6bb486fdf 100644 --- a/internal/translator/codex/openai/chat-completions/codex_openai_response_test.go +++ b/internal/translator/codex/openai/chat-completions/codex_openai_response_test.go @@ -90,3 +90,62 @@ func TestConvertCodexResponseToOpenAI_ToolCallArgumentsDeltaOmitsNullContentFiel t.Fatalf("expected tool call arguments delta to exist, got %s", string(out[0])) } } + +func TestConvertCodexResponseToOpenAI_StreamPartialImageEmitsDeltaImages(t *testing.T) { + ctx := context.Background() + var param any + + chunk := []byte(`data: {"type":"response.image_generation_call.partial_image","item_id":"ig_123","output_format":"png","partial_image_b64":"aGVsbG8=","partial_image_index":0}`) + + out := ConvertCodexResponseToOpenAI(ctx, "gpt-5.4", nil, nil, chunk, &param) + if len(out) != 1 { + t.Fatalf("expected 1 chunk, got %d", len(out)) + } + + gotURL := gjson.GetBytes(out[0], "choices.0.delta.images.0.image_url.url").String() + if gotURL != "data:image/png;base64,aGVsbG8=" { + t.Fatalf("expected image url %q, got %q; chunk=%s", "data:image/png;base64,aGVsbG8=", gotURL, string(out[0])) + } + + out = ConvertCodexResponseToOpenAI(ctx, "gpt-5.4", nil, nil, chunk, &param) + if len(out) != 0 { + t.Fatalf("expected duplicate image chunk to be suppressed, got %d", len(out)) + } +} + +func TestConvertCodexResponseToOpenAI_StreamImageGenerationCallDoneEmitsDeltaImages(t *testing.T) { + ctx := context.Background() + var param any + + out := ConvertCodexResponseToOpenAI(ctx, "gpt-5.4", nil, nil, []byte(`data: {"type":"response.image_generation_call.partial_image","item_id":"ig_123","output_format":"png","partial_image_b64":"aGVsbG8=","partial_image_index":0}`), &param) + if len(out) != 1 { + t.Fatalf("expected 1 chunk, got %d", len(out)) + } + + out = ConvertCodexResponseToOpenAI(ctx, "gpt-5.4", nil, nil, []byte(`data: {"type":"response.output_item.done","item":{"id":"ig_123","type":"image_generation_call","output_format":"png","result":"aGVsbG8="}}`), &param) + if len(out) != 0 { + t.Fatalf("expected output_item.done to be suppressed when identical to last partial image, got %d", len(out)) + } + + out = ConvertCodexResponseToOpenAI(ctx, "gpt-5.4", nil, nil, []byte(`data: {"type":"response.output_item.done","item":{"id":"ig_123","type":"image_generation_call","output_format":"jpeg","result":"Ymll"}}`), &param) + if len(out) != 1 { + t.Fatalf("expected 1 chunk, got %d", len(out)) + } + + gotURL := gjson.GetBytes(out[0], "choices.0.delta.images.0.image_url.url").String() + if gotURL != "data:image/jpeg;base64,Ymll" { + t.Fatalf("expected image url %q, got %q; chunk=%s", "data:image/jpeg;base64,Ymll", gotURL, string(out[0])) + } +} + +func TestConvertCodexResponseToOpenAI_NonStreamImageGenerationCallAddsMessageImages(t *testing.T) { + ctx := context.Background() + + raw := []byte(`{"type":"response.completed","response":{"id":"resp_123","created_at":1700000000,"model":"gpt-5.4","status":"completed","usage":{"input_tokens":1,"output_tokens":1,"total_tokens":2},"output":[{"type":"message","content":[{"type":"output_text","text":"ok"}]},{"type":"image_generation_call","output_format":"png","result":"aGVsbG8="}]}}`) + out := ConvertCodexResponseToOpenAINonStream(ctx, "gpt-5.4", nil, nil, raw, nil) + + gotURL := gjson.GetBytes(out, "choices.0.message.images.0.image_url.url").String() + if gotURL != "data:image/png;base64,aGVsbG8=" { + t.Fatalf("expected image url %q, got %q; chunk=%s", "data:image/png;base64,aGVsbG8=", gotURL, string(out)) + } +} diff --git
a/internal/translator/codex/openai/chat-completions/init.go b/internal/translator/codex/openai/chat-completions/init.go index 8f782fdae1..94db2a7db8 100644 --- a/internal/translator/codex/openai/chat-completions/init.go +++ b/internal/translator/codex/openai/chat-completions/init.go @@ -1,9 +1,9 @@ package chat_completions import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/codex/openai/responses/init.go b/internal/translator/codex/openai/responses/init.go index cab759f297..24e7e3561c 100644 --- a/internal/translator/codex/openai/responses/init.go +++ b/internal/translator/codex/openai/responses/init.go @@ -1,9 +1,9 @@ package responses import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . 
"github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini-cli/claude/gemini-cli_claude_request.go b/internal/translator/gemini-cli/claude/gemini-cli_claude_request.go index 57ebbc2cde..3e77b3f757 100644 --- a/internal/translator/gemini-cli/claude/gemini-cli_claude_request.go +++ b/internal/translator/gemini-cli/claude/gemini-cli_claude_request.go @@ -8,8 +8,8 @@ package claude import ( "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini-cli/claude/gemini-cli_claude_response.go b/internal/translator/gemini-cli/claude/gemini-cli_claude_response.go index 0bf4d6225c..607d6b9fc0 100644 --- a/internal/translator/gemini-cli/claude/gemini-cli_claude_response.go +++ b/internal/translator/gemini-cli/claude/gemini-cli_claude_response.go @@ -14,8 +14,8 @@ import ( "sync/atomic" "time" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini-cli/claude/init.go b/internal/translator/gemini-cli/claude/init.go index 79ed03c68e..fa2fabdf77 100644 --- a/internal/translator/gemini-cli/claude/init.go +++ b/internal/translator/gemini-cli/claude/init.go @@ -1,9 +1,9 @@ package claude import ( - . 
"github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini-cli/gemini/gemini-cli_gemini_request.go b/internal/translator/gemini-cli/gemini/gemini-cli_gemini_request.go index 9bdce33973..83dc626041 100644 --- a/internal/translator/gemini-cli/gemini/gemini-cli_gemini_request.go +++ b/internal/translator/gemini-cli/gemini/gemini-cli_gemini_request.go @@ -9,8 +9,8 @@ import ( "fmt" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" diff --git a/internal/translator/gemini-cli/gemini/gemini-cli_gemini_response.go b/internal/translator/gemini-cli/gemini/gemini-cli_gemini_response.go index 8e23f1d3d6..0e100c1489 100644 --- a/internal/translator/gemini-cli/gemini/gemini-cli_gemini_response.go +++ b/internal/translator/gemini-cli/gemini/gemini-cli_gemini_response.go @@ -9,7 +9,7 @@ import ( "bytes" "context" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini-cli/gemini/init.go b/internal/translator/gemini-cli/gemini/init.go index fbad4ab50b..1c2f38f215 100644 --- a/internal/translator/gemini-cli/gemini/init.go +++ 
b/internal/translator/gemini-cli/gemini/init.go @@ -1,9 +1,9 @@ package gemini import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go b/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go index 95bca2d7b6..1aa3132b49 100644 --- a/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go +++ b/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go @@ -6,9 +6,9 @@ import ( "fmt" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" diff --git a/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_response.go b/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_response.go index 0947371a5a..926040588e 100644 --- a/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_response.go +++ b/internal/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_response.go @@ -13,8 +13,8 @@ import ( "sync/atomic" "time" - . 
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/chat-completions" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/chat-completions" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" diff --git a/internal/translator/gemini-cli/openai/chat-completions/init.go b/internal/translator/gemini-cli/openai/chat-completions/init.go index 3bd76c517d..fcd85f2450 100644 --- a/internal/translator/gemini-cli/openai/chat-completions/init.go +++ b/internal/translator/gemini-cli/openai/chat-completions/init.go @@ -1,9 +1,9 @@ package chat_completions import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_request.go b/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_request.go index 657e45fdb2..bea4b7a1fe 100644 --- a/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_request.go +++ b/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_request.go @@ -1,8 +1,8 @@ package responses import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini-cli/gemini" - . "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini-cli/gemini" + . 
"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/responses" ) func ConvertOpenAIResponsesRequestToGeminiCLI(modelName string, inputRawJSON []byte, stream bool) []byte { diff --git a/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_response.go b/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_response.go index 9bb3ced9ef..29db8c19ef 100644 --- a/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_response.go +++ b/internal/translator/gemini-cli/openai/responses/gemini-cli_openai-responses_response.go @@ -3,7 +3,7 @@ package responses import ( "context" - . "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/responses" "github.com/tidwall/gjson" ) diff --git a/internal/translator/gemini-cli/openai/responses/init.go b/internal/translator/gemini-cli/openai/responses/init.go index b25d670851..e1d437715f 100644 --- a/internal/translator/gemini-cli/openai/responses/init.go +++ b/internal/translator/gemini-cli/openai/responses/init.go @@ -1,9 +1,9 @@ package responses import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . 
"github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini/claude/gemini_claude_request.go b/internal/translator/gemini/claude/gemini_claude_request.go index e230f5fd0d..454668cbc2 100644 --- a/internal/translator/gemini/claude/gemini_claude_request.go +++ b/internal/translator/gemini/claude/gemini_claude_request.go @@ -9,9 +9,9 @@ import ( "fmt" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini/claude/gemini_claude_response.go b/internal/translator/gemini/claude/gemini_claude_response.go index 28722de1db..797636d857 100644 --- a/internal/translator/gemini/claude/gemini_claude_response.go +++ b/internal/translator/gemini/claude/gemini_claude_response.go @@ -13,8 +13,8 @@ import ( "strings" "sync/atomic" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini/claude/init.go b/internal/translator/gemini/claude/init.go index 66fe51e739..d03140957c 100644 --- a/internal/translator/gemini/claude/init.go +++ b/internal/translator/gemini/claude/init.go @@ -1,9 +1,9 @@ package claude import ( - . 
"github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini/gemini-cli/gemini_gemini-cli_request.go b/internal/translator/gemini/gemini-cli/gemini_gemini-cli_request.go index 1b2cdb4636..71e7b4a5fd 100644 --- a/internal/translator/gemini/gemini-cli/gemini_gemini-cli_request.go +++ b/internal/translator/gemini/gemini-cli/gemini_gemini-cli_request.go @@ -8,8 +8,8 @@ package geminiCLI import ( "fmt" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini/gemini-cli/gemini_gemini-cli_response.go b/internal/translator/gemini/gemini-cli/gemini_gemini-cli_response.go index d15ea21acc..36fa0d39b5 100644 --- a/internal/translator/gemini/gemini-cli/gemini_gemini-cli_response.go +++ b/internal/translator/gemini/gemini-cli/gemini_gemini-cli_response.go @@ -8,7 +8,7 @@ import ( "bytes" "context" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini/gemini-cli/init.go b/internal/translator/gemini/gemini-cli/init.go index 2c2224f7d0..ed18b5f0af 100644 --- a/internal/translator/gemini/gemini-cli/init.go +++ b/internal/translator/gemini/gemini-cli/init.go @@ -1,9 +1,9 @@ package geminiCLI import ( 
- . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini/gemini/gemini_gemini_request.go b/internal/translator/gemini/gemini/gemini_gemini_request.go index abc176b2e2..35e22d7160 100644 --- a/internal/translator/gemini/gemini/gemini_gemini_request.go +++ b/internal/translator/gemini/gemini/gemini_gemini_request.go @@ -7,8 +7,8 @@ import ( "fmt" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" diff --git a/internal/translator/gemini/gemini/gemini_gemini_response.go b/internal/translator/gemini/gemini/gemini_gemini_response.go index 242dd98059..74669a7e72 100644 --- a/internal/translator/gemini/gemini/gemini_gemini_response.go +++ b/internal/translator/gemini/gemini/gemini_gemini_response.go @@ -4,7 +4,7 @@ import ( "bytes" "context" - translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" ) // PassthroughGeminiResponseStream forwards Gemini responses unchanged. diff --git a/internal/translator/gemini/gemini/init.go b/internal/translator/gemini/gemini/init.go index 28c9708338..ca9de2c672 100644 --- a/internal/translator/gemini/gemini/init.go +++ b/internal/translator/gemini/gemini/init.go @@ -1,9 +1,9 @@ package gemini import ( - . 
"github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) // Register a no-op response translator and a request normalizer for Gemini→Gemini. diff --git a/internal/translator/gemini/openai/chat-completions/gemini_openai_request.go b/internal/translator/gemini/openai/chat-completions/gemini_openai_request.go index c0c4d329f5..20eaec76f9 100644 --- a/internal/translator/gemini/openai/chat-completions/gemini_openai_request.go +++ b/internal/translator/gemini/openai/chat-completions/gemini_openai_request.go @@ -6,9 +6,9 @@ import ( "fmt" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" diff --git a/internal/translator/gemini/openai/chat-completions/gemini_openai_response.go b/internal/translator/gemini/openai/chat-completions/gemini_openai_response.go index 3dc5b095c3..cc9117f905 100644 --- a/internal/translator/gemini/openai/chat-completions/gemini_openai_response.go +++ b/internal/translator/gemini/openai/chat-completions/gemini_openai_response.go @@ -13,7 +13,7 @@ import ( "sync/atomic" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" diff 
--git a/internal/translator/gemini/openai/chat-completions/init.go b/internal/translator/gemini/openai/chat-completions/init.go index 800e07db3d..2eb673310f 100644 --- a/internal/translator/gemini/openai/chat-completions/init.go +++ b/internal/translator/gemini/openai/chat-completions/init.go @@ -1,9 +1,9 @@ package chat_completions import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/gemini/openai/responses/gemini_openai-responses_request.go b/internal/translator/gemini/openai/responses/gemini_openai-responses_request.go index 8f3a59fa45..e741757641 100644 --- a/internal/translator/gemini/openai/responses/gemini_openai-responses_request.go +++ b/internal/translator/gemini/openai/responses/gemini_openai-responses_request.go @@ -4,8 +4,8 @@ import ( "encoding/json" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini/openai/responses/gemini_openai-responses_response.go b/internal/translator/gemini/openai/responses/gemini_openai-responses_response.go index 15729aae92..36d30df753 100644 --- a/internal/translator/gemini/openai/responses/gemini_openai-responses_response.go +++ b/internal/translator/gemini/openai/responses/gemini_openai-responses_response.go @@ -8,8 +8,8 @@ import ( "sync/atomic" "time" - translatorcommon 
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" + translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/internal/translator/gemini/openai/responses/init.go b/internal/translator/gemini/openai/responses/init.go index b53cac3d81..404dd68ae5 100644 --- a/internal/translator/gemini/openai/responses/init.go +++ b/internal/translator/gemini/openai/responses/init.go @@ -1,9 +1,9 @@ package responses import ( - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" ) func init() { diff --git a/internal/translator/init.go b/internal/translator/init.go index 084ea7ac23..5f88a400ec 100644 --- a/internal/translator/init.go +++ b/internal/translator/init.go @@ -1,36 +1,36 @@ package translator import ( - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/claude/gemini" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/claude/gemini-cli" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/claude/openai/chat-completions" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/claude/openai/responses" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/claude/gemini" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/claude/gemini-cli" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/claude/openai/chat-completions" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/claude/openai/responses" - _ 
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/claude" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/gemini" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/gemini-cli" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/openai/chat-completions" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/openai/responses" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/codex/claude" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/codex/gemini" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/codex/gemini-cli" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/codex/openai/chat-completions" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/codex/openai/responses" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini-cli/claude" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini-cli/gemini" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini-cli/openai/chat-completions" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini-cli/openai/responses" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini-cli/claude" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini-cli/gemini" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini-cli/openai/chat-completions" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini-cli/openai/responses" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/claude" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/gemini" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/gemini-cli" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/chat-completions" - _ 
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/claude" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/gemini" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/gemini-cli" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/chat-completions" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/gemini/openai/responses" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/claude" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/gemini" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/gemini-cli" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/openai/chat-completions" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/openai/responses" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/claude" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/gemini" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/gemini-cli" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/openai/chat-completions" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/openai/responses" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/claude" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/gemini" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/openai/chat-completions" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/openai/responses" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/antigravity/claude" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/antigravity/gemini" + _ 
"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/antigravity/openai/chat-completions" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/antigravity/openai/responses" ) diff --git a/internal/translator/joycode/openai/init.go b/internal/translator/joycode/openai/init.go new file mode 100644 index 0000000000..1e36826a69 --- /dev/null +++ b/internal/translator/joycode/openai/init.go @@ -0,0 +1,19 @@ +package openai + +import ( + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" +) + +func init() { + translator.Register( + OpenAI, + JoyCode, + ConvertOpenAIRequestToJoyCode, + interfaces.TranslateResponse{ + Stream: ConvertJoyCodeStreamToOpenAI, + NonStream: ConvertJoyCodeNonStreamToOpenAI, + }, + ) +} diff --git a/internal/translator/joycode/openai/joycode_openai.go b/internal/translator/joycode/openai/joycode_openai.go new file mode 100644 index 0000000000..49c213086c --- /dev/null +++ b/internal/translator/joycode/openai/joycode_openai.go @@ -0,0 +1,19 @@ +package openai + +import ( + "context" +) + +func ConvertJoyCodeStreamToOpenAI(ctx context.Context, model string, originalRequest, translatedRequest, chunk []byte, state *any) [][]byte { + if len(chunk) == 0 { + return nil + } + return [][]byte{chunk} +} + +func ConvertJoyCodeNonStreamToOpenAI(ctx context.Context, model string, originalRequest, translatedRequest, response []byte, param *any) []byte { + if len(response) == 0 { + return nil + } + return response +} diff --git a/internal/translator/joycode/openai/joycode_openai_request.go b/internal/translator/joycode/openai/joycode_openai_request.go new file mode 100644 index 0000000000..486b3e768b --- /dev/null +++ b/internal/translator/joycode/openai/joycode_openai_request.go @@ -0,0 +1,5 @@ +package openai + +func ConvertOpenAIRequestToJoyCode(model string, rawJSON []byte, stream bool) 
[]byte { + return rawJSON +} diff --git a/internal/translator/kiro/claude/init.go b/internal/translator/kiro/claude/init.go new file mode 100644 index 0000000000..d457e1762b --- /dev/null +++ b/internal/translator/kiro/claude/init.go @@ -0,0 +1,20 @@ +// Package claude provides translation between Kiro and Claude formats. +package claude + +import ( + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" +) + +func init() { + translator.Register( + Claude, + Kiro, + ConvertClaudeRequestToKiro, + interfaces.TranslateResponse{ + Stream: ConvertKiroStreamToClaude, + NonStream: ConvertKiroNonStreamToClaude, + }, + ) +} diff --git a/internal/translator/kiro/claude/kiro_claude.go b/internal/translator/kiro/claude/kiro_claude.go new file mode 100644 index 0000000000..abfe2a3449 --- /dev/null +++ b/internal/translator/kiro/claude/kiro_claude.go @@ -0,0 +1,21 @@ +// Package claude provides translation between Kiro and Claude formats. +// Since Kiro executor generates Claude-compatible SSE format internally (with event: prefix), +// translations are pass-through for streaming, but responses need proper formatting. +package claude + +import ( + "context" +) + +// ConvertKiroStreamToClaude converts Kiro streaming response to Claude format. +// Kiro executor already generates complete SSE format with "event:" prefix, +// so this is a simple pass-through. +func ConvertKiroStreamToClaude(ctx context.Context, model string, originalRequest, request, rawResponse []byte, param *any) [][]byte { + return [][]byte{rawResponse} +} + +// ConvertKiroNonStreamToClaude converts Kiro non-streaming response to Claude format. +// The response is already in Claude format, so this is a pass-through. 
+func ConvertKiroNonStreamToClaude(ctx context.Context, model string, originalRequest, request, rawResponse []byte, param *any) []byte { + return rawResponse +} diff --git a/internal/translator/kiro/claude/kiro_claude_request.go b/internal/translator/kiro/claude/kiro_claude_request.go new file mode 100644 index 0000000000..899a710ecc --- /dev/null +++ b/internal/translator/kiro/claude/kiro_claude_request.go @@ -0,0 +1,961 @@ +// Package claude provides request translation functionality for Claude API to Kiro format. +// It handles parsing and transforming Claude API requests into the Kiro/Amazon Q API format, +// extracting model information, system instructions, message contents, and tool declarations. +package claude + +import ( + "encoding/json" + "fmt" + "net/http" + "strings" + "time" + "unicode/utf8" + + "github.com/google/uuid" + kirocommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/common" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" +) + +// remoteWebSearchDescription is a minimal fallback for when dynamic fetch from MCP tools/list hasn't completed yet. +const remoteWebSearchDescription = "WebSearch looks up information outside the model's training data. Supports multiple queries to gather comprehensive information." + +// Kiro API request structs - field order determines JSON key order + +// KiroPayload is the top-level request structure for Kiro API +type KiroPayload struct { + ConversationState KiroConversationState `json:"conversationState"` + ProfileArn string `json:"profileArn,omitempty"` + InferenceConfig *KiroInferenceConfig `json:"inferenceConfig,omitempty"` +} + +// KiroInferenceConfig contains inference parameters for the Kiro API. 
+type KiroInferenceConfig struct { + MaxTokens int `json:"maxTokens,omitempty"` + Temperature float64 `json:"temperature,omitempty"` + TopP float64 `json:"topP,omitempty"` +} + +// KiroConversationState holds the conversation context +type KiroConversationState struct { + AgentContinuationID string `json:"agentContinuationId,omitempty"` + AgentTaskType string `json:"agentTaskType,omitempty"` + ChatTriggerType string `json:"chatTriggerType"` // Required: "MANUAL" + ConversationID string `json:"conversationId"` + CurrentMessage KiroCurrentMessage `json:"currentMessage"` + History []KiroHistoryMessage `json:"history,omitempty"` +} + +// KiroCurrentMessage wraps the current user message +type KiroCurrentMessage struct { + UserInputMessage KiroUserInputMessage `json:"userInputMessage"` +} + +// KiroHistoryMessage represents a message in the conversation history +type KiroHistoryMessage struct { + UserInputMessage *KiroUserInputMessage `json:"userInputMessage,omitempty"` + AssistantResponseMessage *KiroAssistantResponseMessage `json:"assistantResponseMessage,omitempty"` +} + +// KiroImage represents an image in Kiro API format +type KiroImage struct { + Format string `json:"format"` + Source KiroImageSource `json:"source"` +} + +// KiroImageSource contains the image data +type KiroImageSource struct { + Bytes string `json:"bytes"` // base64 encoded image data +} + +// KiroUserInputMessage represents a user message +type KiroUserInputMessage struct { + Content string `json:"content"` + ModelID string `json:"modelId"` + Origin string `json:"origin"` + Images []KiroImage `json:"images,omitempty"` + UserInputMessageContext *KiroUserInputMessageContext `json:"userInputMessageContext,omitempty"` +} + +// KiroUserInputMessageContext contains tool-related context +type KiroUserInputMessageContext struct { + ToolResults []KiroToolResult `json:"toolResults,omitempty"` + Tools []KiroToolWrapper `json:"tools,omitempty"` +} + +// KiroToolResult represents a tool execution result 
+type KiroToolResult struct { + Content []KiroTextContent `json:"content"` + Status string `json:"status"` + ToolUseID string `json:"toolUseId"` +} + +// KiroTextContent represents text content +type KiroTextContent struct { + Text string `json:"text"` +} + +// KiroToolWrapper wraps a tool specification +type KiroToolWrapper struct { + ToolSpecification KiroToolSpecification `json:"toolSpecification"` +} + +// KiroToolSpecification defines a tool's schema +type KiroToolSpecification struct { + Name string `json:"name"` + Description string `json:"description"` + InputSchema KiroInputSchema `json:"inputSchema"` +} + +// KiroInputSchema wraps the JSON schema for tool input +type KiroInputSchema struct { + JSON interface{} `json:"json"` +} + +// KiroAssistantResponseMessage represents an assistant message +type KiroAssistantResponseMessage struct { + Content string `json:"content"` + ToolUses []KiroToolUse `json:"toolUses,omitempty"` +} + +// KiroToolUse represents a tool invocation by the assistant +type KiroToolUse struct { + ToolUseID string `json:"toolUseId"` + Name string `json:"name"` + Input map[string]interface{} `json:"input"` + IsTruncated bool `json:"-"` // Internal flag, not serialized + TruncationInfo *TruncationInfo `json:"-"` // Truncation details, not serialized +} + +// ConvertClaudeRequestToKiro converts a Claude API request to Kiro format. +// This is the main entry point for request translation. +func ConvertClaudeRequestToKiro(modelName string, inputRawJSON []byte, stream bool) []byte { + // For Kiro, we pass through the Claude format since buildKiroPayload + // expects Claude format and does the conversion internally. + // The actual conversion happens in the executor when building the HTTP request. + return inputRawJSON +} + +// BuildKiroPayload constructs the Kiro API request payload from Claude format. +// Supports tool calling - tools are passed via userInputMessageContext. 
+// origin parameter determines which quota to use: "CLI" for Amazon Q, "AI_EDITOR" for Kiro IDE. +// isAgentic parameter enables chunked write optimization prompt for -agentic model variants. +// isChatOnly parameter disables tool calling for -chat model variants (pure conversation mode). +// headers parameter allows checking Anthropic-Beta header for thinking mode detection. +// metadata parameter is kept for API compatibility but no longer used for thinking configuration. +// Supports thinking mode - when enabled, injects thinking tags into system prompt. +// Returns the payload and a boolean indicating whether thinking mode was injected. +func BuildKiroPayload(claudeBody []byte, modelID, profileArn, origin string, isAgentic, isChatOnly bool, headers http.Header, metadata map[string]any) ([]byte, bool) { + // Extract max_tokens for potential use in inferenceConfig + // Handle -1 as "use maximum" (Kiro max output is ~32000 tokens) + const kiroMaxOutputTokens = 32000 + var maxTokens int64 + if mt := gjson.GetBytes(claudeBody, "max_tokens"); mt.Exists() { + maxTokens = mt.Int() + if maxTokens == -1 { + maxTokens = kiroMaxOutputTokens + log.Debugf("kiro: max_tokens=-1 converted to %d", kiroMaxOutputTokens) + } + } + + // Extract temperature if specified + var temperature float64 + var hasTemperature bool + if temp := gjson.GetBytes(claudeBody, "temperature"); temp.Exists() { + temperature = temp.Float() + hasTemperature = true + } + + // Extract top_p if specified + var topP float64 + var hasTopP bool + if tp := gjson.GetBytes(claudeBody, "top_p"); tp.Exists() { + topP = tp.Float() + hasTopP = true + log.Debugf("kiro: extracted top_p: %.2f", topP) + } + + // Normalize origin value for Kiro API compatibility + origin = normalizeOrigin(origin) + log.Debugf("kiro: normalized origin value: %s", origin) + + messages := gjson.GetBytes(claudeBody, "messages") + + // For chat-only mode, don't include tools + var tools gjson.Result + if !isChatOnly { + tools = 
gjson.GetBytes(claudeBody, "tools") + } + + // Extract system prompt + systemPrompt := extractSystemPrompt(claudeBody) + + // Check for thinking mode using the comprehensive IsThinkingEnabledWithHeaders function + // This supports Claude API format, OpenAI reasoning_effort, AMP/Cursor format, and Anthropic-Beta header + thinkingEnabled := IsThinkingEnabledWithHeaders(claudeBody, headers) + + // Inject timestamp context + timestamp := time.Now().Format("2006-01-02 15:04:05 MST") + timestampContext := fmt.Sprintf("[Context: Current time is %s]", timestamp) + if systemPrompt != "" { + systemPrompt = timestampContext + "\n\n" + systemPrompt + } else { + systemPrompt = timestampContext + } + log.Debugf("kiro: injected timestamp context: %s", timestamp) + + // Inject agentic optimization prompt for -agentic model variants + if isAgentic { + if systemPrompt != "" { + systemPrompt += "\n" + } + systemPrompt += kirocommon.KiroAgenticSystemPrompt + } + + // Handle tool_choice parameter - Kiro doesn't support it natively, so we inject system prompt hints + // Claude tool_choice values: {"type": "auto/any/tool", "name": "..."} + toolChoiceHint := extractClaudeToolChoiceHint(claudeBody) + if toolChoiceHint != "" { + if systemPrompt != "" { + systemPrompt += "\n" + } + systemPrompt += toolChoiceHint + log.Debugf("kiro: injected tool_choice hint into system prompt") + } + + // Convert Claude tools to Kiro format + kiroTools := convertClaudeToolsToKiro(tools) + + // Thinking mode implementation: + // Kiro API supports official thinking/reasoning mode via a <thinking_mode> system prompt tag. + // When set to "enabled", Kiro returns reasoning content as official reasoningContentEvent + // rather than inline <thinking> tags in assistantResponseEvent. + // We cap max_thinking_length to reserve space for tool outputs and prevent truncation.
+ if thinkingEnabled { + thinkingHint := `<thinking_mode>enabled</thinking_mode> +<max_thinking_length>16000</max_thinking_length>` + if systemPrompt != "" { + systemPrompt = thinkingHint + "\n\n" + systemPrompt + } else { + systemPrompt = thinkingHint + } + log.Infof("kiro: injected thinking prompt (official mode), has_tools: %v", len(kiroTools) > 0) + } + + // Process messages and build history + history, currentUserMsg, currentToolResults := processMessages(messages, modelID, origin) + + // Build content with system prompt. + // Keep thinking tags on subsequent turns so multi-turn Claude sessions + // continue to emit reasoning events. + if currentUserMsg != nil { + currentUserMsg.Content = buildFinalContent(currentUserMsg.Content, systemPrompt, currentToolResults) + + // Deduplicate currentToolResults + currentToolResults = deduplicateToolResults(currentToolResults) + + // Build userInputMessageContext with tools and tool results + if len(kiroTools) > 0 || len(currentToolResults) > 0 { + currentUserMsg.UserInputMessageContext = &KiroUserInputMessageContext{ + Tools: kiroTools, + ToolResults: currentToolResults, + } + } + } + + // Build payload + var currentMessage KiroCurrentMessage + if currentUserMsg != nil { + currentMessage = KiroCurrentMessage{UserInputMessage: *currentUserMsg} + } else { + fallbackContent := "" + if systemPrompt != "" { + fallbackContent = "--- SYSTEM PROMPT ---\n" + systemPrompt + "\n--- END SYSTEM PROMPT ---\n" + } + currentMessage = KiroCurrentMessage{UserInputMessage: KiroUserInputMessage{ + Content: fallbackContent, + ModelID: modelID, + Origin: origin, + }} + } + + // Build inferenceConfig if we have any inference parameters + // Note: Kiro API doesn't actually use max_tokens for thinking budget + var inferenceConfig *KiroInferenceConfig + if maxTokens > 0 || hasTemperature || hasTopP { + inferenceConfig = &KiroInferenceConfig{} + if maxTokens > 0 { + inferenceConfig.MaxTokens = int(maxTokens) + } + if hasTemperature { + inferenceConfig.Temperature = temperature + } + if hasTopP { + inferenceConfig.TopP =
topP + } + } + + // Session IDs: extract from messages[].additional_kwargs (LangChain format) or random + conversationID := extractMetadataFromMessages(messages, "conversationId") + continuationID := extractMetadataFromMessages(messages, "continuationId") + if conversationID == "" { + conversationID = uuid.New().String() + } + + payload := KiroPayload{ + ConversationState: KiroConversationState{ + AgentTaskType: "vibe", + ChatTriggerType: "MANUAL", + ConversationID: conversationID, + CurrentMessage: currentMessage, + History: history, + }, + ProfileArn: profileArn, + InferenceConfig: inferenceConfig, + } + + // Only set AgentContinuationID if client provided + if continuationID != "" { + payload.ConversationState.AgentContinuationID = continuationID + } + + result, err := json.Marshal(payload) + if err != nil { + log.Debugf("kiro: failed to marshal payload: %v", err) + return nil, false + } + + return result, thinkingEnabled +} + +// normalizeOrigin normalizes origin value for Kiro API compatibility +func normalizeOrigin(origin string) string { + switch origin { + case "KIRO_CLI": + return "CLI" + case "KIRO_AI_EDITOR": + return "AI_EDITOR" + case "AMAZON_Q": + return "CLI" + case "KIRO_IDE": + return "AI_EDITOR" + default: + return origin + } +} + +// extractMetadataFromMessages extracts metadata from messages[].additional_kwargs (LangChain format). +// Searches from the last message backwards, returns empty string if not found. +func extractMetadataFromMessages(messages gjson.Result, key string) string { + arr := messages.Array() + for i := len(arr) - 1; i >= 0; i-- { + if val := arr[i].Get("additional_kwargs." 
+ key); val.Exists() && val.String() != "" { + return val.String() + } + } + return "" +} + +// extractSystemPrompt extracts system prompt from Claude request +func extractSystemPrompt(claudeBody []byte) string { + systemField := gjson.GetBytes(claudeBody, "system") + if systemField.IsArray() { + var sb strings.Builder + for _, block := range systemField.Array() { + if block.Get("type").String() == "text" { + sb.WriteString(block.Get("text").String()) + } else if block.Type == gjson.String { + sb.WriteString(block.String()) + } + } + return sb.String() + } + return systemField.String() +} + +// checkThinkingMode checks if thinking mode is enabled in the Claude request +func checkThinkingMode(claudeBody []byte) (bool, int64) { + thinkingEnabled := false + var budgetTokens int64 = 24000 + + thinkingField := gjson.GetBytes(claudeBody, "thinking") + if thinkingField.Exists() { + thinkingType := thinkingField.Get("type").String() + if thinkingType == "enabled" { + thinkingEnabled = true + if bt := thinkingField.Get("budget_tokens"); bt.Exists() { + budgetTokens = bt.Int() + if budgetTokens <= 0 { + thinkingEnabled = false + log.Debugf("kiro: thinking mode disabled via budget_tokens <= 0") + } + } + if thinkingEnabled { + log.Debugf("kiro: thinking mode enabled via Claude API parameter, budget_tokens: %d", budgetTokens) + } + } + } + + return thinkingEnabled, budgetTokens +} + +// hasThinkingTagInBody checks if the request body already contains thinking configuration tags. +// This is used to prevent duplicate injection when client (e.g., AMP/Cursor) already includes thinking config. +func hasThinkingTagInBody(body []byte) bool { + bodyStr := string(body) + return strings.Contains(bodyStr, "<thinking_mode>") || strings.Contains(bodyStr, "<max_thinking_length>") +} + +// IsThinkingEnabledFromHeader checks if thinking mode is enabled via Anthropic-Beta header. +// Claude CLI uses "Anthropic-Beta: interleaved-thinking-2025-05-14" to enable thinking.
+func IsThinkingEnabledFromHeader(headers http.Header) bool { + if headers == nil { + return false + } + betaHeader := headers.Get("Anthropic-Beta") + if betaHeader == "" { + return false + } + // Check for interleaved-thinking beta feature + if strings.Contains(betaHeader, "interleaved-thinking") { + log.Debugf("kiro: thinking mode enabled via Anthropic-Beta header: %s", betaHeader) + return true + } + return false +} + +// IsThinkingEnabled is a public wrapper to check if thinking mode is enabled. +// This is used by the executor to determine whether to parse <thinking> tags in responses. +// When thinking is NOT enabled in the request, <thinking> tags in responses should be +// treated as regular text content, not as thinking blocks. +// +// Supports multiple formats: +// - Claude API format: thinking.type = "enabled" +// - OpenAI format: reasoning_effort parameter +// - AMP/Cursor format: <thinking_mode>interleaved</thinking_mode> in system prompt +func IsThinkingEnabled(body []byte) bool { + return IsThinkingEnabledWithHeaders(body, nil) +} + +// IsThinkingEnabledWithHeaders checks if thinking mode is enabled from body or headers.
+// This is the comprehensive check that supports all thinking detection methods: +// - Claude API format: thinking.type = "enabled" +// - OpenAI format: reasoning_effort parameter +// - AMP/Cursor format: <thinking_mode>interleaved</thinking_mode> in system prompt +// - Anthropic-Beta header: interleaved-thinking-2025-05-14 +func IsThinkingEnabledWithHeaders(body []byte, headers http.Header) bool { + // Check Anthropic-Beta header first (Claude Code uses this) + if IsThinkingEnabledFromHeader(headers) { + return true + } + + // Check Claude API format (thinking.type = "enabled") + enabled, _ := checkThinkingMode(body) + if enabled { + log.Debugf("kiro: IsThinkingEnabled returning true (Claude API format)") + return true + } + + // Check OpenAI format: reasoning_effort parameter + // Valid values: "low", "medium", "high", "auto" (not "none") + reasoningEffort := gjson.GetBytes(body, "reasoning_effort") + if reasoningEffort.Exists() { + effort := reasoningEffort.String() + if effort != "" && effort != "none" { + log.Debugf("kiro: thinking mode enabled via OpenAI reasoning_effort: %s", effort) + return true + } + } + + // Check AMP/Cursor format: <thinking_mode>interleaved</thinking_mode> in system prompt + // This is how AMP client passes thinking configuration + bodyStr := string(body) + if strings.Contains(bodyStr, "<thinking_mode>") && strings.Contains(bodyStr, "</thinking_mode>") { + // Extract thinking mode value + startTag := "<thinking_mode>" + endTag := "</thinking_mode>" + startIdx := strings.Index(bodyStr, startTag) + if startIdx >= 0 { + startIdx += len(startTag) + endIdx := strings.Index(bodyStr[startIdx:], endTag) + if endIdx >= 0 { + thinkingMode := bodyStr[startIdx : startIdx+endIdx] + if thinkingMode == "interleaved" || thinkingMode == "enabled" { + log.Debugf("kiro: thinking mode enabled via AMP/Cursor format: %s", thinkingMode) + return true + } + } + } + } + + // Check OpenAI format: max_completion_tokens with reasoning (o1-style) + // Some clients use this to indicate reasoning mode + if gjson.GetBytes(body, "max_completion_tokens").Exists() { + // If
max_completion_tokens is set, check if model name suggests reasoning + model := gjson.GetBytes(body, "model").String() + if strings.Contains(strings.ToLower(model), "thinking") || + strings.Contains(strings.ToLower(model), "reason") { + log.Debugf("kiro: thinking mode enabled via model name hint: %s", model) + return true + } + } + + // Check model name directly for thinking hints. + // This enables thinking variants even when clients don't send explicit thinking fields. + model := strings.TrimSpace(gjson.GetBytes(body, "model").String()) + modelLower := strings.ToLower(model) + if strings.Contains(modelLower, "thinking") || strings.Contains(modelLower, "-reason") { + log.Debugf("kiro: thinking mode enabled via model name hint: %s", model) + return true + } + + log.Debugf("kiro: IsThinkingEnabled returning false (no thinking mode detected)") + return false +} + +// shortenToolNameIfNeeded shortens tool names that exceed 64 characters. +// MCP tools often have long names like "mcp__server-name__tool-name". +// This preserves the "mcp__" prefix and last segment when possible. 
+func shortenToolNameIfNeeded(name string) string { + const limit = 64 + if len(name) <= limit { + return name + } + // For MCP tools, try to preserve prefix and last segment + if strings.HasPrefix(name, "mcp__") { + idx := strings.LastIndex(name, "__") + if idx > 0 { + cand := "mcp__" + name[idx+2:] + if len(cand) > limit { + return cand[:limit] + } + return cand + } + } + return name[:limit] +} + +func ensureKiroInputSchema(parameters interface{}) interface{} { + if parameters != nil { + return parameters + } + return map[string]interface{}{ + "type": "object", + "properties": map[string]interface{}{}, + } +} + +// convertClaudeToolsToKiro converts Claude tools to Kiro format +func convertClaudeToolsToKiro(tools gjson.Result) []KiroToolWrapper { + var kiroTools []KiroToolWrapper + if !tools.IsArray() { + return kiroTools + } + + for _, tool := range tools.Array() { + name := tool.Get("name").String() + description := tool.Get("description").String() + inputSchemaResult := tool.Get("input_schema") + var inputSchema interface{} + if inputSchemaResult.Exists() && inputSchemaResult.Type != gjson.Null { + inputSchema = inputSchemaResult.Value() + } + inputSchema = ensureKiroInputSchema(inputSchema) + + // Shorten tool name if it exceeds 64 characters (common with MCP tools) + originalName := name + name = shortenToolNameIfNeeded(name) + if name != originalName { + log.Debugf("kiro: shortened tool name from '%s' to '%s'", originalName, name) + } + + // CRITICAL FIX: Kiro API requires non-empty description + if strings.TrimSpace(description) == "" { + description = fmt.Sprintf("Tool: %s", name) + log.Debugf("kiro: tool '%s' has empty description, using default: %s", name, description) + } + + // Rename web_search → remote_web_search for Kiro API compatibility + if name == "web_search" { + name = "remote_web_search" + // Prefer dynamically fetched description, fall back to hardcoded constant + if cached := GetWebSearchDescription(); cached != "" { + description = cached 
+ } else { + description = remoteWebSearchDescription + } + log.Debugf("kiro: renamed tool web_search → remote_web_search") + } + + // Truncate long descriptions (individual tool limit) + if len(description) > kirocommon.KiroMaxToolDescLen { + truncLen := kirocommon.KiroMaxToolDescLen - 30 + for truncLen > 0 && !utf8.RuneStart(description[truncLen]) { + truncLen-- + } + description = description[:truncLen] + "... (description truncated)" + } + + kiroTools = append(kiroTools, KiroToolWrapper{ + ToolSpecification: KiroToolSpecification{ + Name: name, + Description: description, + InputSchema: KiroInputSchema{JSON: inputSchema}, + }, + }) + } + + return kiroTools +} + +// processMessages processes Claude messages and builds Kiro history +func processMessages(messages gjson.Result, modelID, origin string) ([]KiroHistoryMessage, *KiroUserInputMessage, []KiroToolResult) { + var history []KiroHistoryMessage + var currentUserMsg *KiroUserInputMessage + var currentToolResults []KiroToolResult + + // Merge adjacent messages with the same role + messagesArray := kirocommon.MergeAdjacentMessages(messages.Array()) + + // FIX: Kiro API requires history to start with a user message. + // Some clients (e.g., OpenClaw) send conversations starting with an assistant message, + // which is valid for the Claude API but causes "Improperly formed request" on Kiro. + // Prepend a placeholder user message so the history alternation is correct. + if len(messagesArray) > 0 && messagesArray[0].Get("role").String() == "assistant" { + placeholder := `{"role":"user","content":"."}` + messagesArray = append([]gjson.Result{gjson.Parse(placeholder)}, messagesArray...) 
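The user-first normalization above (prepending a placeholder when the history opens with an assistant turn) can be sketched in isolation. This is a minimal stdlib-only illustration using a simplified `msg` type rather than the `gjson.Result` values the translator actually operates on; the `"."` placeholder content mirrors the one used in the diff:

```go
package main

import "fmt"

// msg is a simplified stand-in for the gjson-backed Claude messages.
type msg struct{ Role, Content string }

// ensureUserFirst mirrors the fix above: the Kiro API rejects histories
// that begin with an assistant message, so a placeholder user turn is
// prepended to restore the expected user/assistant alternation.
func ensureUserFirst(msgs []msg) []msg {
	if len(msgs) > 0 && msgs[0].Role == "assistant" {
		return append([]msg{{Role: "user", Content: "."}}, msgs...)
	}
	return msgs
}

func main() {
	fixed := ensureUserFirst([]msg{{Role: "assistant", Content: "hi"}})
	fmt.Println(len(fixed), fixed[0].Role) // 2 user
}
```

A history that already starts with a user message passes through unchanged, so the normalization is idempotent.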
+ log.Infof("kiro: messages started with assistant role, prepended placeholder user message for Kiro API compatibility") + } + + for i, msg := range messagesArray { + role := msg.Get("role").String() + isLastMessage := i == len(messagesArray)-1 + + if role == "user" { + userMsg, toolResults := BuildUserMessageStruct(msg, modelID, origin) + // CRITICAL: Kiro API requires content to be non-empty for ALL user messages + // This includes both history messages and the current message. + // When user message contains only tool_result (no text), content will be empty. + // This commonly happens in compaction requests from OpenCode. + if strings.TrimSpace(userMsg.Content) == "" { + if len(toolResults) > 0 { + userMsg.Content = kirocommon.DefaultUserContentWithToolResults + } else { + userMsg.Content = kirocommon.DefaultUserContent + } + log.Debugf("kiro: user content was empty, using default: %s", userMsg.Content) + } + if isLastMessage { + currentUserMsg = &userMsg + currentToolResults = toolResults + } else { + // For history messages, embed tool results in context + if len(toolResults) > 0 { + userMsg.UserInputMessageContext = &KiroUserInputMessageContext{ + ToolResults: toolResults, + } + } + history = append(history, KiroHistoryMessage{ + UserInputMessage: &userMsg, + }) + } + } else if role == "assistant" { + assistantMsg := BuildAssistantMessageStruct(msg) + if isLastMessage { + history = append(history, KiroHistoryMessage{ + AssistantResponseMessage: &assistantMsg, + }) + // Create a "Continue" user message as currentMessage + currentUserMsg = &KiroUserInputMessage{ + Content: "Continue", + ModelID: modelID, + Origin: origin, + } + } else { + history = append(history, KiroHistoryMessage{ + AssistantResponseMessage: &assistantMsg, + }) + } + } + } + + // POST-PROCESSING: Remove orphaned tool_results that have no matching tool_use + // in any assistant message. 
This happens when Claude Code compaction truncates + // the conversation and removes the assistant message containing the tool_use, + // but keeps the user message with the corresponding tool_result. + // Without this fix, Kiro API returns "Improperly formed request". + validToolUseIDs := make(map[string]bool) + for _, h := range history { + if h.AssistantResponseMessage != nil { + for _, tu := range h.AssistantResponseMessage.ToolUses { + validToolUseIDs[tu.ToolUseID] = true + } + } + } + + // Filter orphaned tool results from history user messages + for i, h := range history { + if h.UserInputMessage != nil && h.UserInputMessage.UserInputMessageContext != nil { + ctx := h.UserInputMessage.UserInputMessageContext + if len(ctx.ToolResults) > 0 { + filtered := make([]KiroToolResult, 0, len(ctx.ToolResults)) + for _, tr := range ctx.ToolResults { + if validToolUseIDs[tr.ToolUseID] { + filtered = append(filtered, tr) + } else { + log.Debugf("kiro: dropping orphaned tool_result in history[%d]: toolUseId=%s (no matching tool_use)", i, tr.ToolUseID) + } + } + ctx.ToolResults = filtered + if len(ctx.ToolResults) == 0 && len(ctx.Tools) == 0 { + h.UserInputMessage.UserInputMessageContext = nil + } + } + } + } + + // Filter orphaned tool results from current message + if len(currentToolResults) > 0 { + filtered := make([]KiroToolResult, 0, len(currentToolResults)) + for _, tr := range currentToolResults { + if validToolUseIDs[tr.ToolUseID] { + filtered = append(filtered, tr) + } else { + log.Debugf("kiro: dropping orphaned tool_result in currentMessage: toolUseId=%s (no matching tool_use)", tr.ToolUseID) + } + } + if len(filtered) != len(currentToolResults) { + log.Infof("kiro: dropped %d orphaned tool_result(s) from currentMessage (compaction artifact)", len(currentToolResults)-len(filtered)) + } + currentToolResults = filtered + } + + return history, currentUserMsg, currentToolResults +} + +// buildFinalContent builds the final content with system prompt +func 
buildFinalContent(content, systemPrompt string, toolResults []KiroToolResult) string { + var contentBuilder strings.Builder + + if systemPrompt != "" { + contentBuilder.WriteString("--- SYSTEM PROMPT ---\n") + contentBuilder.WriteString(systemPrompt) + contentBuilder.WriteString("\n--- END SYSTEM PROMPT ---\n\n") + } + + contentBuilder.WriteString(content) + finalContent := contentBuilder.String() + + // CRITICAL: Kiro API requires content to be non-empty + if strings.TrimSpace(finalContent) == "" { + if len(toolResults) > 0 { + finalContent = "Tool results provided." + } else { + finalContent = "Continue" + } + log.Debugf("kiro: content was empty, using default: %s", finalContent) + } + + return finalContent +} + +// deduplicateToolResults removes duplicate tool results +func deduplicateToolResults(toolResults []KiroToolResult) []KiroToolResult { + if len(toolResults) == 0 { + return toolResults + } + + seenIDs := make(map[string]bool) + unique := make([]KiroToolResult, 0, len(toolResults)) + for _, tr := range toolResults { + if !seenIDs[tr.ToolUseID] { + seenIDs[tr.ToolUseID] = true + unique = append(unique, tr) + } else { + log.Debugf("kiro: skipping duplicate toolResult in currentMessage: %s", tr.ToolUseID) + } + } + return unique +} + +// extractClaudeToolChoiceHint extracts tool_choice from Claude request and returns a system prompt hint. +// Claude tool_choice values: +// - {"type": "auto"}: Model decides (default, no hint needed) +// - {"type": "any"}: Must use at least one tool +// - {"type": "tool", "name": "..."}: Must use specific tool +func extractClaudeToolChoiceHint(claudeBody []byte) string { + toolChoice := gjson.GetBytes(claudeBody, "tool_choice") + if !toolChoice.Exists() { + return "" + } + + toolChoiceType := toolChoice.Get("type").String() + switch toolChoiceType { + case "any": + return "[INSTRUCTION: You MUST use at least one of the available tools to respond. 
Do not respond with text only - always make a tool call.]" + case "tool": + toolName := toolChoice.Get("name").String() + if toolName != "" { + return fmt.Sprintf("[INSTRUCTION: You MUST use the tool named '%s' to respond. Do not use any other tool or respond with text only.]", toolName) + } + case "auto": + // Default behavior, no hint needed + return "" + } + + return "" +} + +// BuildUserMessageStruct builds a user message and extracts tool results +func BuildUserMessageStruct(msg gjson.Result, modelID, origin string) (KiroUserInputMessage, []KiroToolResult) { + content := msg.Get("content") + var contentBuilder strings.Builder + var toolResults []KiroToolResult + var images []KiroImage + + // Track seen toolUseIds to deduplicate + seenToolUseIDs := make(map[string]bool) + + if content.IsArray() { + for _, part := range content.Array() { + partType := part.Get("type").String() + switch partType { + case "text": + contentBuilder.WriteString(part.Get("text").String()) + case "image": + mediaType := part.Get("source.media_type").String() + data := part.Get("source.data").String() + + format := "" + if idx := strings.LastIndex(mediaType, "/"); idx != -1 { + format = mediaType[idx+1:] + } + + if format != "" && data != "" { + images = append(images, KiroImage{ + Format: format, + Source: KiroImageSource{ + Bytes: data, + }, + }) + } + case "tool_result": + toolUseID := part.Get("tool_use_id").String() + + // Skip duplicate toolUseIds + if seenToolUseIDs[toolUseID] { + log.Debugf("kiro: skipping duplicate tool_result with toolUseId: %s", toolUseID) + continue + } + seenToolUseIDs[toolUseID] = true + + isError := part.Get("is_error").Bool() + resultContent := part.Get("content") + + var textContents []KiroTextContent + + if resultContent.IsArray() { + for _, item := range resultContent.Array() { + if item.Get("type").String() == "text" { + textContents = append(textContents, KiroTextContent{Text: item.Get("text").String()}) + } else if item.Type == gjson.String { + 
textContents = append(textContents, KiroTextContent{Text: item.String()}) + } + } + } else if resultContent.Type == gjson.String { + textContents = append(textContents, KiroTextContent{Text: resultContent.String()}) + } + + if len(textContents) == 0 { + textContents = append(textContents, KiroTextContent{Text: "Tool use was cancelled by the user"}) + } + + status := "success" + if isError { + status = "error" + } + + toolResults = append(toolResults, KiroToolResult{ + ToolUseID: toolUseID, + Content: textContents, + Status: status, + }) + } + } + } else { + contentBuilder.WriteString(content.String()) + } + + userMsg := KiroUserInputMessage{ + Content: contentBuilder.String(), + ModelID: modelID, + Origin: origin, + } + + if len(images) > 0 { + userMsg.Images = images + } + + return userMsg, toolResults +} + +// BuildAssistantMessageStruct builds an assistant message with tool uses +func BuildAssistantMessageStruct(msg gjson.Result) KiroAssistantResponseMessage { + content := msg.Get("content") + var contentBuilder strings.Builder + var toolUses []KiroToolUse + + if content.IsArray() { + for _, part := range content.Array() { + partType := part.Get("type").String() + switch partType { + case "text": + contentBuilder.WriteString(part.Get("text").String()) + case "tool_use": + toolUseID := part.Get("id").String() + toolName := part.Get("name").String() + toolInput := part.Get("input") + + var inputMap map[string]interface{} + if toolInput.IsObject() { + inputMap = make(map[string]interface{}) + toolInput.ForEach(func(key, value gjson.Result) bool { + inputMap[key.String()] = value.Value() + return true + }) + } + + // Rename web_search → remote_web_search to match convertClaudeToolsToKiro + if toolName == "web_search" { + toolName = "remote_web_search" + } + + toolUses = append(toolUses, KiroToolUse{ + ToolUseID: toolUseID, + Name: toolName, + Input: inputMap, + }) + } + } + } else { + contentBuilder.WriteString(content.String()) + } + + // CRITICAL FIX: Kiro API 
requires non-empty content for assistant messages + // This can happen with compaction requests where assistant messages have only tool_use + // (no text content). Without this fix, Kiro API returns "Improperly formed request" error. + finalContent := contentBuilder.String() + if strings.TrimSpace(finalContent) == "" { + if len(toolUses) > 0 { + finalContent = kirocommon.DefaultAssistantContentWithTools + } else { + finalContent = kirocommon.DefaultAssistantContent + } + log.Debugf("kiro: assistant content was empty, using default: %s", finalContent) + } + + return KiroAssistantResponseMessage{ + Content: finalContent, + ToolUses: toolUses, + } +} diff --git a/internal/translator/kiro/claude/kiro_claude_response.go b/internal/translator/kiro/claude/kiro_claude_response.go new file mode 100644 index 0000000000..06ea84dfbb --- /dev/null +++ b/internal/translator/kiro/claude/kiro_claude_response.go @@ -0,0 +1,209 @@ +// Package claude provides response translation functionality for Kiro API to Claude format. +// This package handles the conversion of Kiro API responses into Claude-compatible format, +// including support for thinking blocks and tool use. +package claude + +import ( + "crypto/sha256" + "encoding/base64" + "encoding/json" + "strings" + + "github.com/google/uuid" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + log "github.com/sirupsen/logrus" + + kirocommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/common" +) + +// generateThinkingSignature generates a signature for thinking content. +// This is required by Claude API for thinking blocks in non-streaming responses. +// The signature is a base64-encoded hash of the thinking content. 
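The signature scheme is deterministic, and a 32-byte SHA-256 digest always base64-encodes to exactly 44 characters, which gives a cheap sanity check. A standalone sketch of the scheme (a local copy for illustration, not the package function):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// generateThinkingSignature is a local copy of the deterministic scheme:
// empty content yields an empty signature; otherwise the SHA-256 digest
// of the content is base64-encoded.
func generateThinkingSignature(thinkingContent string) string {
	if thinkingContent == "" {
		return ""
	}
	hash := sha256.Sum256([]byte(thinkingContent))
	return base64.StdEncoding.EncodeToString(hash[:])
}

func main() {
	sig := generateThinkingSignature("weigh the options before answering")
	fmt.Println(len(sig)) // 32-byte digest encodes to 44 base64 characters
}
```

Because the signature depends only on the content, retried or re-parsed thinking blocks get identical signatures, which keeps non-streaming responses stable across retries.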
+func generateThinkingSignature(thinkingContent string) string { + if thinkingContent == "" { + return "" + } + // Generate a deterministic signature based on content hash + hash := sha256.Sum256([]byte(thinkingContent)) + return base64.StdEncoding.EncodeToString(hash[:]) +} + +// Local references to kirocommon constants for thinking block parsing +var ( + thinkingStartTag = kirocommon.ThinkingStartTag + thinkingEndTag = kirocommon.ThinkingEndTag +) + +// BuildClaudeResponse constructs a Claude-compatible response. +// Supports tool_use blocks when tools are present in the response. +// Supports thinking blocks - parses thinking tags and converts to Claude thinking content blocks. +// stopReason is passed from upstream; fallback logic applied if empty. +func BuildClaudeResponse(content string, toolUses []KiroToolUse, model string, usageInfo usage.Detail, stopReason string) []byte { + var contentBlocks []map[string]interface{} + + // Extract thinking blocks and text from content + if content != "" { + blocks := ExtractThinkingFromContent(content) + contentBlocks = append(contentBlocks, blocks...)
+ + // Log if thinking blocks were extracted + for _, block := range blocks { + if block["type"] == "thinking" { + thinkingContent := block["thinking"].(string) + log.Infof("kiro: buildClaudeResponse extracted thinking block (len: %d)", len(thinkingContent)) + } + } + } + + // Add tool_use blocks - skip truncated tools and log warning + for _, toolUse := range toolUses { + if toolUse.IsTruncated && toolUse.TruncationInfo != nil { + log.Warnf("kiro: buildClaudeResponse skipping truncated tool: %s (ID: %s)", toolUse.Name, toolUse.ToolUseID) + continue + } + contentBlocks = append(contentBlocks, map[string]interface{}{ + "type": "tool_use", + "id": toolUse.ToolUseID, + "name": toolUse.Name, + "input": toolUse.Input, + }) + } + + // Ensure at least one content block (Claude API requires non-empty content) + if len(contentBlocks) == 0 { + contentBlocks = append(contentBlocks, map[string]interface{}{ + "type": "text", + "text": "", + }) + } + + // Use upstream stopReason; apply fallback logic if not provided + // SOFT_LIMIT_REACHED: Keep stop_reason = "tool_use" so Claude continues the loop + if stopReason == "" { + stopReason = "end_turn" + if len(toolUses) > 0 { + stopReason = "tool_use" + } + log.Debugf("kiro: buildClaudeResponse using fallback stop_reason: %s", stopReason) + } + + // Log warning if response was truncated due to max_tokens + if stopReason == "max_tokens" { + log.Warnf("kiro: response truncated due to max_tokens limit (buildClaudeResponse)") + } + + response := map[string]interface{}{ + "id": "msg_" + uuid.New().String()[:24], + "type": "message", + "role": "assistant", + "model": model, + "content": contentBlocks, + "stop_reason": stopReason, + "usage": map[string]interface{}{ + "input_tokens": usageInfo.InputTokens, + "output_tokens": usageInfo.OutputTokens, + }, + } + result, _ := json.Marshal(response) + return result +} + +// ExtractThinkingFromContent parses content to extract thinking blocks and text. 
+// Returns a list of content blocks in the order they appear in the content. +// Handles interleaved thinking and text blocks correctly. +func ExtractThinkingFromContent(content string) []map[string]interface{} { + var blocks []map[string]interface{} + + if content == "" { + return blocks + } + + // Check if content contains thinking tags at all + if !strings.Contains(content, thinkingStartTag) { + // No thinking tags, return as plain text + return []map[string]interface{}{ + { + "type": "text", + "text": content, + }, + } + } + + log.Debugf("kiro: extractThinkingFromContent - found thinking tags in content (len: %d)", len(content)) + + remaining := content + + for len(remaining) > 0 { + // Look for tag + startIdx := strings.Index(remaining, thinkingStartTag) + + if startIdx == -1 { + // No more thinking tags, add remaining as text + if strings.TrimSpace(remaining) != "" { + blocks = append(blocks, map[string]interface{}{ + "type": "text", + "text": remaining, + }) + } + break + } + + // Add text before thinking tag (if any meaningful content) + if startIdx > 0 { + textBefore := remaining[:startIdx] + if strings.TrimSpace(textBefore) != "" { + blocks = append(blocks, map[string]interface{}{ + "type": "text", + "text": textBefore, + }) + } + } + + // Move past the opening tag + remaining = remaining[startIdx+len(thinkingStartTag):] + + // Find closing tag + endIdx := strings.Index(remaining, thinkingEndTag) + + if endIdx == -1 { + // No closing tag found, treat rest as thinking content (incomplete response) + if strings.TrimSpace(remaining) != "" { + // Generate signature for thinking content (required by Claude API) + signature := generateThinkingSignature(remaining) + blocks = append(blocks, map[string]interface{}{ + "type": "thinking", + "thinking": remaining, + "signature": signature, + }) + log.Warnf("kiro: extractThinkingFromContent - missing closing tag") + } + break + } + + // Extract thinking content between tags + thinkContent := remaining[:endIdx] + if 
strings.TrimSpace(thinkContent) != "" { + // Generate signature for thinking content (required by Claude API) + signature := generateThinkingSignature(thinkContent) + blocks = append(blocks, map[string]interface{}{ + "type": "thinking", + "thinking": thinkContent, + "signature": signature, + }) + log.Debugf("kiro: extractThinkingFromContent - extracted thinking block (len: %d)", len(thinkContent)) + } + + // Move past the closing tag + remaining = remaining[endIdx+len(thinkingEndTag):] + } + + // If no blocks were created (all whitespace), return empty text block + if len(blocks) == 0 { + blocks = append(blocks, map[string]interface{}{ + "type": "text", + "text": "", + }) + } + + return blocks +} diff --git a/internal/translator/kiro/claude/kiro_claude_stream.go b/internal/translator/kiro/claude/kiro_claude_stream.go new file mode 100644 index 0000000000..dc7559bf90 --- /dev/null +++ b/internal/translator/kiro/claude/kiro_claude_stream.go @@ -0,0 +1,306 @@ +// Package claude provides streaming SSE event building for Claude format. +// This package handles the construction of Claude-compatible Server-Sent Events (SSE) +// for streaming responses from Kiro API. 
+package claude + +import ( + "encoding/json" + + "github.com/google/uuid" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" +) + +// BuildClaudeMessageStartEvent creates the message_start SSE event +func BuildClaudeMessageStartEvent(model string, inputTokens int64) []byte { + event := map[string]interface{}{ + "type": "message_start", + "message": map[string]interface{}{ + "id": "msg_" + uuid.New().String()[:24], + "type": "message", + "role": "assistant", + "content": []interface{}{}, + "model": model, + "stop_reason": nil, + "stop_sequence": nil, + "usage": map[string]interface{}{"input_tokens": inputTokens, "output_tokens": 0}, + }, + } + result, _ := json.Marshal(event) + return []byte("event: message_start\ndata: " + string(result)) +} + +// BuildClaudeContentBlockStartEvent creates a content_block_start SSE event +func BuildClaudeContentBlockStartEvent(index int, blockType, toolUseID, toolName string) []byte { + var contentBlock map[string]interface{} + switch blockType { + case "tool_use": + contentBlock = map[string]interface{}{ + "type": "tool_use", + "id": toolUseID, + "name": toolName, + "input": map[string]interface{}{}, + } + case "thinking": + contentBlock = map[string]interface{}{ + "type": "thinking", + "thinking": "", + } + default: + contentBlock = map[string]interface{}{ + "type": "text", + "text": "", + } + } + + event := map[string]interface{}{ + "type": "content_block_start", + "index": index, + "content_block": contentBlock, + } + result, _ := json.Marshal(event) + return []byte("event: content_block_start\ndata: " + string(result)) +} + +// BuildClaudeStreamEvent creates a text_delta content_block_delta SSE event +func BuildClaudeStreamEvent(contentDelta string, index int) []byte { + event := map[string]interface{}{ + "type": "content_block_delta", + "index": index, + "delta": map[string]interface{}{ + "type": "text_delta", + "text": contentDelta, + }, + } + result, _ := json.Marshal(event) + return []byte("event: 
content_block_delta\ndata: " + string(result)) +} + +// BuildClaudeInputJsonDeltaEvent creates an input_json_delta event for tool use streaming +func BuildClaudeInputJsonDeltaEvent(partialJSON string, index int) []byte { + event := map[string]interface{}{ + "type": "content_block_delta", + "index": index, + "delta": map[string]interface{}{ + "type": "input_json_delta", + "partial_json": partialJSON, + }, + } + result, _ := json.Marshal(event) + return []byte("event: content_block_delta\ndata: " + string(result)) +} + +// BuildClaudeContentBlockStopEvent creates a content_block_stop SSE event +func BuildClaudeContentBlockStopEvent(index int) []byte { + event := map[string]interface{}{ + "type": "content_block_stop", + "index": index, + } + result, _ := json.Marshal(event) + return []byte("event: content_block_stop\ndata: " + string(result)) +} + +// BuildClaudeThinkingBlockStopEvent creates a content_block_stop SSE event for thinking blocks. +func BuildClaudeThinkingBlockStopEvent(index int) []byte { + event := map[string]interface{}{ + "type": "content_block_stop", + "index": index, + } + result, _ := json.Marshal(event) + return []byte("event: content_block_stop\ndata: " + string(result)) +} + +// BuildClaudeMessageDeltaEvent creates the message_delta event with stop_reason and usage +func BuildClaudeMessageDeltaEvent(stopReason string, usageInfo usage.Detail) []byte { + deltaEvent := map[string]interface{}{ + "type": "message_delta", + "delta": map[string]interface{}{ + "stop_reason": stopReason, + "stop_sequence": nil, + }, + "usage": map[string]interface{}{ + "input_tokens": usageInfo.InputTokens, + "output_tokens": usageInfo.OutputTokens, + }, + } + deltaResult, _ := json.Marshal(deltaEvent) + return []byte("event: message_delta\ndata: " + string(deltaResult)) +} + +// BuildClaudeMessageStopOnlyEvent creates only the message_stop event +func BuildClaudeMessageStopOnlyEvent() []byte { + stopEvent := map[string]interface{}{ + "type": "message_stop", + } + 
stopResult, _ := json.Marshal(stopEvent) + return []byte("event: message_stop\ndata: " + string(stopResult)) +} + +// BuildClaudePingEventWithUsage creates a ping event with embedded usage information. +// This is used for real-time usage estimation during streaming. +func BuildClaudePingEventWithUsage(inputTokens, outputTokens int64) []byte { + event := map[string]interface{}{ + "type": "ping", + "usage": map[string]interface{}{ + "input_tokens": inputTokens, + "output_tokens": outputTokens, + "total_tokens": inputTokens + outputTokens, + "estimated": true, + }, + } + result, _ := json.Marshal(event) + return []byte("event: ping\ndata: " + string(result)) +} + +// BuildClaudeThinkingDeltaEvent creates a thinking_delta event for Claude API compatibility. +// This is used when streaming thinking content wrapped in tags. +func BuildClaudeThinkingDeltaEvent(thinkingDelta string, index int) []byte { + event := map[string]interface{}{ + "type": "content_block_delta", + "index": index, + "delta": map[string]interface{}{ + "type": "thinking_delta", + "thinking": thinkingDelta, + }, + } + result, _ := json.Marshal(event) + return []byte("event: content_block_delta\ndata: " + string(result)) +} + +// PendingTagSuffix detects if the buffer ends with a partial prefix of the given tag. +// Returns the length of the partial match (0 if no match). +// Based on amq2api implementation for handling cross-chunk tag boundaries. +func PendingTagSuffix(buffer, tag string) int { + if buffer == "" || tag == "" { + return 0 + } + maxLen := len(buffer) + if maxLen > len(tag)-1 { + maxLen = len(tag) - 1 + } + for length := maxLen; length > 0; length-- { + if len(buffer) >= length && buffer[len(buffer)-length:] == tag[:length] { + return length + } + } + return 0 +} + +// GenerateSearchIndicatorEvents generates ONLY the search indicator SSE events +// (server_tool_use + web_search_tool_result) without text summary or message termination. 
+// These events trigger Claude Code's search indicator UI. +// The caller is responsible for sending message_start before and message_delta/stop after. +func GenerateSearchIndicatorEvents( + query string, + toolUseID string, + searchResults *WebSearchResults, + startIndex int, +) [][]byte { + events := make([][]byte, 0, 5) + + // 1. content_block_start (server_tool_use) + event1 := map[string]interface{}{ + "type": "content_block_start", + "index": startIndex, + "content_block": map[string]interface{}{ + "id": toolUseID, + "type": "server_tool_use", + "name": "web_search", + "input": map[string]interface{}{}, + }, + } + data1, _ := json.Marshal(event1) + events = append(events, []byte("event: content_block_start\ndata: "+string(data1)+"\n\n")) + + // 2. content_block_delta (input_json_delta) + inputJSON, _ := json.Marshal(map[string]string{"query": query}) + event2 := map[string]interface{}{ + "type": "content_block_delta", + "index": startIndex, + "delta": map[string]interface{}{ + "type": "input_json_delta", + "partial_json": string(inputJSON), + }, + } + data2, _ := json.Marshal(event2) + events = append(events, []byte("event: content_block_delta\ndata: "+string(data2)+"\n\n")) + + // 3. content_block_stop (server_tool_use) + event3 := map[string]interface{}{ + "type": "content_block_stop", + "index": startIndex, + } + data3, _ := json.Marshal(event3) + events = append(events, []byte("event: content_block_stop\ndata: "+string(data3)+"\n\n")) + + // 4. 
content_block_start (web_search_tool_result) + searchContent := make([]map[string]interface{}, 0) + if searchResults != nil { + for _, r := range searchResults.Results { + snippet := "" + if r.Snippet != nil { + snippet = *r.Snippet + } + searchContent = append(searchContent, map[string]interface{}{ + "type": "web_search_result", + "title": r.Title, + "url": r.URL, + "encrypted_content": snippet, + "page_age": nil, + }) + } + } + event4 := map[string]interface{}{ + "type": "content_block_start", + "index": startIndex + 1, + "content_block": map[string]interface{}{ + "type": "web_search_tool_result", + "tool_use_id": toolUseID, + "content": searchContent, + }, + } + data4, _ := json.Marshal(event4) + events = append(events, []byte("event: content_block_start\ndata: "+string(data4)+"\n\n")) + + // 5. content_block_stop (web_search_tool_result) + event5 := map[string]interface{}{ + "type": "content_block_stop", + "index": startIndex + 1, + } + data5, _ := json.Marshal(event5) + events = append(events, []byte("event: content_block_stop\ndata: "+string(data5)+"\n\n")) + + return events +} + +// BuildFallbackTextEvents generates SSE events for a fallback text response +// when the Kiro API fails during the search loop. Uses BuildClaude*Event() +// functions to align with streamToChannel patterns. +// Returns raw SSE byte slices ready to be sent to the client channel. 
+func BuildFallbackTextEvents(contentBlockIndex int, query string, results *WebSearchResults) [][]byte { + summary := FormatSearchContextPrompt(query, results) + outputTokens := len(summary) / 4 + if len(summary) > 0 && outputTokens == 0 { + outputTokens = 1 + } + + var events [][]byte + + // content_block_start (text) + events = append(events, BuildClaudeContentBlockStartEvent(contentBlockIndex, "text", "", "")) + + // content_block_delta (text_delta) + events = append(events, BuildClaudeStreamEvent(summary, contentBlockIndex)) + + // content_block_stop + events = append(events, BuildClaudeContentBlockStopEvent(contentBlockIndex)) + + // message_delta with end_turn + events = append(events, BuildClaudeMessageDeltaEvent("end_turn", usage.Detail{ + OutputTokens: int64(outputTokens), + })) + + // message_stop + events = append(events, BuildClaudeMessageStopOnlyEvent()) + + return events +} diff --git a/internal/translator/kiro/claude/kiro_claude_stream_parser.go b/internal/translator/kiro/claude/kiro_claude_stream_parser.go new file mode 100644 index 0000000000..03422072b8 --- /dev/null +++ b/internal/translator/kiro/claude/kiro_claude_stream_parser.go @@ -0,0 +1,350 @@ +package claude + +import ( + "encoding/json" + "strings" + + log "github.com/sirupsen/logrus" +) + +// sseEvent represents a Server-Sent Event +type sseEvent struct { + Event string + Data interface{} +} + +// ToSSEString converts the event to SSE wire format +func (e *sseEvent) ToSSEString() string { + dataBytes, _ := json.Marshal(e.Data) + return "event: " + e.Event + "\ndata: " + string(dataBytes) + "\n\n" +} + +// AdjustStreamIndices adjusts content block indices in SSE event data by adding an offset. +// It also suppresses duplicate message_start events (returns shouldForward=false). +// This is used to combine search indicator events (indices 0,1) with Kiro model response events. +// +// The data parameter is a single SSE "data:" line payload (JSON). 
+// Returns: adjusted data, shouldForward (false = skip this event). +func AdjustStreamIndices(data []byte, offset int) ([]byte, bool) { + if len(data) == 0 { + return data, true + } + + // Quick check: parse the JSON + var event map[string]interface{} + if err := json.Unmarshal(data, &event); err != nil { + // Not valid JSON, pass through + return data, true + } + + eventType, _ := event["type"].(string) + + // Suppress duplicate message_start events + if eventType == "message_start" { + return data, false + } + + // Adjust index for content_block events + switch eventType { + case "content_block_start", "content_block_delta", "content_block_stop": + if idx, ok := event["index"].(float64); ok { + event["index"] = int(idx) + offset + adjusted, err := json.Marshal(event) + if err != nil { + return data, true + } + return adjusted, true + } + } + + // Pass through all other events unchanged (message_delta, message_stop, ping, etc.) + return data, true +} + +// AdjustSSEChunk processes a raw SSE chunk (potentially containing multiple "event:/data:" pairs) +// and adjusts content block indices. Suppresses duplicate message_start events. +// Returns the adjusted chunk and whether it should be forwarded. 
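The index-shifting contract can be exercised in isolation. The sketch below is a local reimplementation (not the package function) showing the two behaviors: `content_block_*` indices are bumped by the offset, and a duplicate `message_start` is dropped while everything else passes through:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// adjustStreamIndices is a local sketch of the pass: bump the index on
// content_block_* events by offset, drop duplicate message_start events,
// and forward everything else (including non-JSON payloads) untouched.
func adjustStreamIndices(data []byte, offset int) ([]byte, bool) {
	var event map[string]interface{}
	if err := json.Unmarshal(data, &event); err != nil {
		return data, true // not JSON: pass through
	}
	eventType, _ := event["type"].(string)
	if eventType == "message_start" {
		return data, false // outer stream already sent one
	}
	switch eventType {
	case "content_block_start", "content_block_delta", "content_block_stop":
		if idx, ok := event["index"].(float64); ok {
			event["index"] = int(idx) + offset
			if adjusted, err := json.Marshal(event); err == nil {
				return adjusted, true
			}
		}
	}
	return data, true
}

func main() {
	out, forward := adjustStreamIndices([]byte(`{"type":"content_block_delta","index":0}`), 2)
	fmt.Println(forward, string(out)) // true {"index":2,"type":"content_block_delta"}
	_, forward = adjustStreamIndices([]byte(`{"type":"message_start"}`), 2)
	fmt.Println(forward) // false
}
```

Note that round-tripping through a Go map re-marshals keys in sorted order, so the adjusted payload may reorder fields; SSE consumers parse JSON, so this is harmless.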
+func AdjustSSEChunk(chunk []byte, offset int) ([]byte, bool) { + chunkStr := string(chunk) + + // Fast path: if no "data:" prefix, pass through + if !strings.Contains(chunkStr, "data: ") { + return chunk, true + } + + var result strings.Builder + hasContent := false + + lines := strings.Split(chunkStr, "\n") + for i := 0; i < len(lines); i++ { + line := lines[i] + + if strings.HasPrefix(line, "data: ") { + dataPayload := strings.TrimPrefix(line, "data: ") + dataPayload = strings.TrimSpace(dataPayload) + + if dataPayload == "[DONE]" { + result.WriteString(line + "\n") + hasContent = true + continue + } + + adjusted, shouldForward := AdjustStreamIndices([]byte(dataPayload), offset) + if !shouldForward { + // Skip this event and its preceding "event:" line + // Also skip the trailing empty line + continue + } + + result.WriteString("data: " + string(adjusted) + "\n") + hasContent = true + } else if strings.HasPrefix(line, "event: ") { + // Check if the next data line will be suppressed + if i+1 < len(lines) && strings.HasPrefix(lines[i+1], "data: ") { + dataPayload := strings.TrimPrefix(lines[i+1], "data: ") + dataPayload = strings.TrimSpace(dataPayload) + + var event map[string]interface{} + if err := json.Unmarshal([]byte(dataPayload), &event); err == nil { + if eventType, ok := event["type"].(string); ok && eventType == "message_start" { + // Skip both the event: and data: lines + i++ // skip the data: line too + continue + } + } + } + result.WriteString(line + "\n") + hasContent = true + } else { + result.WriteString(line + "\n") + if strings.TrimSpace(line) != "" { + hasContent = true + } + } + } + + if !hasContent { + return nil, false + } + + return []byte(result.String()), true +} + +// BufferedStreamResult contains the analysis of buffered SSE chunks from a Kiro API response. 
+type BufferedStreamResult struct { + // StopReason is the detected stop_reason from the stream (e.g., "end_turn", "tool_use") + StopReason string + // WebSearchQuery is the extracted query if the model requested another web_search + WebSearchQuery string + // WebSearchToolUseId is the tool_use ID from the model's response (needed for toolResults) + WebSearchToolUseId string + // HasWebSearchToolUse indicates whether the model requested web_search + HasWebSearchToolUse bool + // WebSearchToolUseIndex is the content_block index of the web_search tool_use + WebSearchToolUseIndex int +} + +// AnalyzeBufferedStream scans buffered SSE chunks to detect stop_reason and web_search tool_use. +// This is used in the search loop to determine if the model wants another search round. +func AnalyzeBufferedStream(chunks [][]byte) BufferedStreamResult { + result := BufferedStreamResult{WebSearchToolUseIndex: -1} + + // Track tool use state across chunks + var currentToolName string + var currentToolIndex int = -1 + var toolInputBuilder strings.Builder + + for _, chunk := range chunks { + chunkStr := string(chunk) + lines := strings.Split(chunkStr, "\n") + for _, line := range lines { + if !strings.HasPrefix(line, "data: ") { + continue + } + dataPayload := strings.TrimPrefix(line, "data: ") + dataPayload = strings.TrimSpace(dataPayload) + if dataPayload == "[DONE]" || dataPayload == "" { + continue + } + + var event map[string]interface{} + if err := json.Unmarshal([]byte(dataPayload), &event); err != nil { + continue + } + + eventType, _ := event["type"].(string) + + switch eventType { + case "message_delta": + // Extract stop_reason from message_delta + if delta, ok := event["delta"].(map[string]interface{}); ok { + if sr, ok := delta["stop_reason"].(string); ok && sr != "" { + result.StopReason = sr + } + } + + case "content_block_start": + // Detect tool_use content blocks + if cb, ok := event["content_block"].(map[string]interface{}); ok { + if cbType, ok := 
cb["type"].(string); ok && cbType == "tool_use" { + if name, ok := cb["name"].(string); ok { + currentToolName = strings.ToLower(name) + if idx, ok := event["index"].(float64); ok { + currentToolIndex = int(idx) + } + // Capture tool use ID only for web_search toolResults handshake + if id, ok := cb["id"].(string); ok && (currentToolName == "web_search" || currentToolName == "remote_web_search") { + result.WebSearchToolUseId = id + } + toolInputBuilder.Reset() + } + } + } + + case "content_block_delta": + // Accumulate tool input JSON + if currentToolName != "" { + if delta, ok := event["delta"].(map[string]interface{}); ok { + if deltaType, ok := delta["type"].(string); ok && deltaType == "input_json_delta" { + if partial, ok := delta["partial_json"].(string); ok { + toolInputBuilder.WriteString(partial) + } + } + } + } + + case "content_block_stop": + // Finalize tool use detection + if currentToolName == "web_search" || currentToolName == "websearch" || currentToolName == "remote_web_search" { + result.HasWebSearchToolUse = true + result.WebSearchToolUseIndex = currentToolIndex + // Extract query from accumulated input JSON + inputJSON := toolInputBuilder.String() + var input map[string]string + if err := json.Unmarshal([]byte(inputJSON), &input); err == nil { + if q, ok := input["query"]; ok { + result.WebSearchQuery = q + } + } + log.Debugf("kiro/websearch: detected web_search tool_use") + } + currentToolName = "" + currentToolIndex = -1 + toolInputBuilder.Reset() + } + } + } + + return result +} + +// FilterChunksForClient processes buffered SSE chunks and removes web_search tool_use +// content blocks. This prevents the client from seeing "Tool use" prompts for web_search +// when the proxy is handling the search loop internally. +// Also suppresses message_start and message_delta/message_stop events since those +// are managed by the outer handleWebSearchStream. 
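The detection handshake (spot the `tool_use` block, accumulate its `input_json_delta` fragments, decode the query at `content_block_stop`) can be sketched standalone. This is a simplified local version, not the package code: it matches only the literal `web_search` name and skips the alias and `stop_reason` handling:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// extractWebSearchQuery scans SSE data lines for a web_search tool_use
// block, accumulates its input_json_delta fragments, and decodes the
// final query when the block stops.
func extractWebSearchQuery(stream string) (string, bool) {
	inWebSearch := false
	var input strings.Builder
	for _, line := range strings.Split(stream, "\n") {
		if !strings.HasPrefix(line, "data: ") {
			continue
		}
		var event map[string]interface{}
		if err := json.Unmarshal([]byte(strings.TrimPrefix(line, "data: ")), &event); err != nil {
			continue
		}
		switch event["type"] {
		case "content_block_start":
			cb, ok := event["content_block"].(map[string]interface{})
			inWebSearch = ok && cb["type"] == "tool_use" && cb["name"] == "web_search"
			input.Reset()
		case "content_block_delta":
			if delta, ok := event["delta"].(map[string]interface{}); ok && inWebSearch {
				if partial, ok := delta["partial_json"].(string); ok {
					input.WriteString(partial)
				}
			}
		case "content_block_stop":
			if inWebSearch {
				var parsed map[string]string
				if json.Unmarshal([]byte(input.String()), &parsed) == nil {
					return parsed["query"], true
				}
			}
		}
	}
	return "", false
}

func main() {
	stream := `data: {"type":"content_block_start","index":0,"content_block":{"type":"tool_use","id":"t1","name":"web_search"}}
data: {"type":"content_block_delta","index":0,"delta":{"type":"input_json_delta","partial_json":"{\"query\":\"go"}}
data: {"type":"content_block_delta","index":0,"delta":{"type":"input_json_delta","partial_json":" generics\"}"}}
data: {"type":"content_block_stop","index":0}`
	fmt.Println(extractWebSearchQuery(stream)) // go generics true
}
```

The example deliberately splits the input JSON mid-string across two deltas, since the query can straddle chunk boundaries and only the reassembled buffer is parseable.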
+func FilterChunksForClient(chunks [][]byte, wsToolIndex int, indexOffset int) [][]byte { + var filtered [][]byte + + for _, chunk := range chunks { + chunkStr := string(chunk) + lines := strings.Split(chunkStr, "\n") + + var resultBuilder strings.Builder + hasContent := false + + for i := 0; i < len(lines); i++ { + line := lines[i] + + if strings.HasPrefix(line, "data: ") { + dataPayload := strings.TrimPrefix(line, "data: ") + dataPayload = strings.TrimSpace(dataPayload) + + if dataPayload == "[DONE]" { + // Skip [DONE] — the outer loop manages stream termination + continue + } + + var event map[string]interface{} + if err := json.Unmarshal([]byte(dataPayload), &event); err != nil { + resultBuilder.WriteString(line + "\n") + hasContent = true + continue + } + + eventType, _ := event["type"].(string) + + // Skip message_start (outer loop sends its own) + if eventType == "message_start" { + continue + } + + // Skip message_delta and message_stop (outer loop manages these) + if eventType == "message_delta" || eventType == "message_stop" { + continue + } + + // Check if this event belongs to the web_search tool_use block + if wsToolIndex >= 0 { + if idx, ok := event["index"].(float64); ok && int(idx) == wsToolIndex { + // Skip events for the web_search tool_use block + continue + } + } + + // Apply index offset for remaining events + if indexOffset > 0 { + switch eventType { + case "content_block_start", "content_block_delta", "content_block_stop": + if idx, ok := event["index"].(float64); ok { + event["index"] = int(idx) + indexOffset + adjusted, err := json.Marshal(event) + if err == nil { + resultBuilder.WriteString("data: " + string(adjusted) + "\n") + hasContent = true + continue + } + } + } + } + + resultBuilder.WriteString(line + "\n") + hasContent = true + } else if strings.HasPrefix(line, "event: ") { + // Check if the next data line will be suppressed + if i+1 < len(lines) && strings.HasPrefix(lines[i+1], "data: ") { + nextData := 
strings.TrimPrefix(lines[i+1], "data: ") + nextData = strings.TrimSpace(nextData) + + var nextEvent map[string]interface{} + if err := json.Unmarshal([]byte(nextData), &nextEvent); err == nil { + nextType, _ := nextEvent["type"].(string) + if nextType == "message_start" || nextType == "message_delta" || nextType == "message_stop" { + i++ // skip the data line + continue + } + if wsToolIndex >= 0 { + if idx, ok := nextEvent["index"].(float64); ok && int(idx) == wsToolIndex { + i++ // skip the data line + continue + } + } + } + } + resultBuilder.WriteString(line + "\n") + hasContent = true + } else { + resultBuilder.WriteString(line + "\n") + if strings.TrimSpace(line) != "" { + hasContent = true + } + } + } + + if hasContent { + filtered = append(filtered, []byte(resultBuilder.String())) + } + } + + return filtered +} diff --git a/internal/translator/kiro/claude/kiro_claude_tools.go b/internal/translator/kiro/claude/kiro_claude_tools.go new file mode 100644 index 0000000000..554b0f90b8 --- /dev/null +++ b/internal/translator/kiro/claude/kiro_claude_tools.go @@ -0,0 +1,543 @@ +// Package claude provides tool calling support for Kiro to Claude translation. +// This package handles parsing embedded tool calls, JSON repair, and deduplication. +package claude + +import ( + "encoding/json" + "regexp" + "strings" + + "github.com/google/uuid" + kirocommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/common" + log "github.com/sirupsen/logrus" +) + +// ToolUseState tracks the state of an in-progress tool use during streaming. 
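The pre-compiled `trailingCommaPattern` in this file matches a comma followed by optional whitespace and a closing brace or bracket. A standalone sketch of the repair pass it enables (how `RepairJSON` uses it is an assumption; its body is outside this hunk, and `repairTrailingCommas` is a hypothetical helper):

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// Same pattern as the file's pre-compiled trailingCommaPattern: a comma
// followed by optional whitespace and a closing brace or bracket.
var trailingCommaPattern = regexp.MustCompile(`,\s*([}\]])`)

// repairTrailingCommas strips trailing commas so the payload parses as
// JSON. (Hypothetical helper: RepairJSON's actual body is not shown.)
func repairTrailingCommas(raw string) string {
	return trailingCommaPattern.ReplaceAllString(raw, "$1")
}

func main() {
	broken := `{"path": "main.go", "flags": ["-v", ],}`
	fixed := repairTrailingCommas(broken)
	var out map[string]interface{}
	fmt.Println(json.Unmarshal([]byte(fixed), &out) == nil, fixed)
	// true {"path": "main.go", "flags": ["-v"]}
}
```

Note the regex pass does not respect string literals, so a comma-before-bracket inside a quoted string would also be rewritten; that tradeoff is acceptable for repairing model-emitted argument objects, where such strings are rare.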
+type ToolUseState struct { + ToolUseID string + Name string + InputBuffer strings.Builder + IsComplete bool + TruncationInfo *TruncationInfo // Truncation detection result (set when complete) +} + +// Pre-compiled regex patterns for performance +var ( + // embeddedToolCallPattern matches [Called tool_name with args: {...}] format + embeddedToolCallPattern = regexp.MustCompile(`\[Called\s+([A-Za-z0-9_.-]+)\s+with\s+args:\s*`) + // trailingCommaPattern matches trailing commas before closing braces/brackets + trailingCommaPattern = regexp.MustCompile(`,\s*([}\]])`) +) + +// ParseEmbeddedToolCalls extracts [Called tool_name with args: {...}] format from text. +// Kiro sometimes embeds tool calls in text content instead of using toolUseEvent. +// Returns the cleaned text (with tool calls removed) and extracted tool uses. +func ParseEmbeddedToolCalls(text string, processedIDs map[string]bool) (string, []KiroToolUse) { + if !strings.Contains(text, "[Called") { + return text, nil + } + + var toolUses []KiroToolUse + cleanText := text + + // Find all [Called markers + matches := embeddedToolCallPattern.FindAllStringSubmatchIndex(text, -1) + if len(matches) == 0 { + return text, nil + } + + // Process matches in reverse order to maintain correct indices + for i := len(matches) - 1; i >= 0; i-- { + matchStart := matches[i][0] + toolNameStart := matches[i][2] + toolNameEnd := matches[i][3] + + if toolNameStart < 0 || toolNameEnd < 0 { + continue + } + + toolName := text[toolNameStart:toolNameEnd] + + // Find the JSON object start (after "with args:") + jsonStart := matches[i][1] + if jsonStart >= len(text) { + continue + } + + // Skip whitespace to find the opening brace + for jsonStart < len(text) && (text[jsonStart] == ' ' || text[jsonStart] == '\t') { + jsonStart++ + } + + if jsonStart >= len(text) || text[jsonStart] != '{' { + continue + } + + // Find matching closing bracket + jsonEnd := findMatchingBracket(text, jsonStart) + if jsonEnd < 0 { + continue + } + + // 
Extract JSON and find the closing bracket of [Called ...]
+		jsonStr := text[jsonStart : jsonEnd+1]
+
+		// Find the closing ] after the JSON
+		closingBracket := jsonEnd + 1
+		for closingBracket < len(text) && text[closingBracket] != ']' {
+			closingBracket++
+		}
+		if closingBracket >= len(text) {
+			continue
+		}
+
+		// End index of the full tool call (closing ']' inclusive)
+		matchEnd := closingBracket + 1
+
+		// Repair and parse JSON
+		repairedJSON := RepairJSON(jsonStr)
+		var inputMap map[string]interface{}
+		if err := json.Unmarshal([]byte(repairedJSON), &inputMap); err != nil {
+			log.Debugf("kiro: failed to parse embedded tool call JSON: %v, raw: %s", err, jsonStr)
+			continue
+		}
+
+		// Generate unique tool ID
+		toolUseID := "toolu_" + uuid.New().String()[:12]
+
+		// Check for duplicates using name+input as key
+		dedupeKey := toolName + ":" + repairedJSON
+		if processedIDs != nil {
+			if processedIDs[dedupeKey] {
+				log.Debugf("kiro: skipping duplicate embedded tool call: %s", toolName)
+				// Still remove from text even if duplicate
+				if matchStart >= 0 && matchEnd <= len(cleanText) && matchStart <= matchEnd {
+					cleanText = cleanText[:matchStart] + cleanText[matchEnd:]
+				}
+				continue
+			}
+			processedIDs[dedupeKey] = true
+		}
+
+		// Prepend rather than append: matches are processed in reverse order,
+		// so prepending restores the tool calls' original document order.
+		toolUses = append([]KiroToolUse{{
+			ToolUseID: toolUseID,
+			Name:      toolName,
+			Input:     inputMap,
+		}}, toolUses...)
+
+		log.Infof("kiro: extracted embedded tool call: %s (ID: %s)", toolName, toolUseID)
+
+		// Remove from clean text (index-based removal to avoid deleting the wrong occurrence)
+		if matchStart >= 0 && matchEnd <= len(cleanText) && matchStart <= matchEnd {
+			cleanText = cleanText[:matchStart] + cleanText[matchEnd:]
+		}
+	}
+
+	return cleanText, toolUses
+}
+
+// findMatchingBracket finds the index of the closing brace/bracket that matches
+// the opening one at startPos. Handles nested objects and strings correctly. 
+func findMatchingBracket(text string, startPos int) int { + if startPos >= len(text) { + return -1 + } + + openChar := text[startPos] + var closeChar byte + switch openChar { + case '{': + closeChar = '}' + case '[': + closeChar = ']' + default: + return -1 + } + + depth := 1 + inString := false + escapeNext := false + + for i := startPos + 1; i < len(text); i++ { + char := text[i] + + if escapeNext { + escapeNext = false + continue + } + + if char == '\\' && inString { + escapeNext = true + continue + } + + if char == '"' { + inString = !inString + continue + } + + if !inString { + if char == openChar { + depth++ + } else if char == closeChar { + depth-- + if depth == 0 { + return i + } + } + } + } + + return -1 +} + +// RepairJSON attempts to fix common JSON issues that may occur in tool call arguments. +// Conservative repair strategy: +// 1. First try to parse JSON directly - if valid, return as-is +// 2. Only attempt repair if parsing fails +// 3. After repair, validate the result - if still invalid, return original +func RepairJSON(jsonString string) string { + // Handle empty or invalid input + if jsonString == "" { + return "{}" + } + + str := strings.TrimSpace(jsonString) + if str == "" { + return "{}" + } + + // CONSERVATIVE STRATEGY: First try to parse directly + var testParse interface{} + if err := json.Unmarshal([]byte(str), &testParse); err == nil { + log.Debugf("kiro: repairJSON - JSON is already valid, returning unchanged") + return str + } + + log.Debugf("kiro: repairJSON - JSON parse failed, attempting repair") + originalStr := str + + // First, escape unescaped newlines/tabs within JSON string values + str = escapeNewlinesInStrings(str) + // Remove trailing commas before closing braces/brackets + str = trailingCommaPattern.ReplaceAllString(str, "$1") + + // Calculate bracket balance + braceCount := 0 + bracketCount := 0 + inString := false + escape := false + lastValidIndex := -1 + + for i := 0; i < len(str); i++ { + char := str[i] + + if 
escape { + escape = false + continue + } + + if char == '\\' { + escape = true + continue + } + + if char == '"' { + inString = !inString + continue + } + + if inString { + continue + } + + switch char { + case '{': + braceCount++ + case '}': + braceCount-- + case '[': + bracketCount++ + case ']': + bracketCount-- + } + + if braceCount >= 0 && bracketCount >= 0 { + lastValidIndex = i + } + } + + // If brackets are unbalanced, try to repair + if braceCount > 0 || bracketCount > 0 { + if lastValidIndex > 0 && lastValidIndex < len(str)-1 { + truncated := str[:lastValidIndex+1] + // Recount brackets after truncation + braceCount = 0 + bracketCount = 0 + inString = false + escape = false + for i := 0; i < len(truncated); i++ { + char := truncated[i] + if escape { + escape = false + continue + } + if char == '\\' { + escape = true + continue + } + if char == '"' { + inString = !inString + continue + } + if inString { + continue + } + switch char { + case '{': + braceCount++ + case '}': + braceCount-- + case '[': + bracketCount++ + case ']': + bracketCount-- + } + } + str = truncated + } + + // Add missing closing brackets + for braceCount > 0 { + str += "}" + braceCount-- + } + for bracketCount > 0 { + str += "]" + bracketCount-- + } + } + + // Validate repaired JSON + if err := json.Unmarshal([]byte(str), &testParse); err != nil { + log.Warnf("kiro: repairJSON - repair failed to produce valid JSON, returning original") + return originalStr + } + + log.Debugf("kiro: repairJSON - successfully repaired JSON") + return str +} + +// escapeNewlinesInStrings escapes literal newlines, tabs, and other control characters +// that appear inside JSON string values. 
+func escapeNewlinesInStrings(raw string) string { + var result strings.Builder + result.Grow(len(raw) + 100) + + inString := false + escaped := false + + for i := 0; i < len(raw); i++ { + c := raw[i] + + if escaped { + result.WriteByte(c) + escaped = false + continue + } + + if c == '\\' && inString { + result.WriteByte(c) + escaped = true + continue + } + + if c == '"' { + inString = !inString + result.WriteByte(c) + continue + } + + if inString { + switch c { + case '\n': + result.WriteString("\\n") + case '\r': + result.WriteString("\\r") + case '\t': + result.WriteString("\\t") + default: + result.WriteByte(c) + } + } else { + result.WriteByte(c) + } + } + + return result.String() +} + +// ProcessToolUseEvent handles a toolUseEvent from the Kiro stream. +// It accumulates input fragments and emits tool_use blocks when complete. +// Returns events to emit and updated state. +func ProcessToolUseEvent(event map[string]interface{}, currentToolUse *ToolUseState, processedIDs map[string]bool) ([]KiroToolUse, *ToolUseState) { + var toolUses []KiroToolUse + + // Extract from nested toolUseEvent or direct format + tu := event + if nested, ok := event["toolUseEvent"].(map[string]interface{}); ok { + tu = nested + } + + toolUseID := kirocommon.GetString(tu, "toolUseId") + toolName := kirocommon.GetString(tu, "name") + isStop := false + if stop, ok := tu["stop"].(bool); ok { + isStop = stop + } + + // Get input - can be string (fragment) or object (complete) + var inputFragment string + var inputMap map[string]interface{} + + if inputRaw, ok := tu["input"]; ok { + switch v := inputRaw.(type) { + case string: + inputFragment = v + case map[string]interface{}: + inputMap = v + } + } + + // New tool use starting + if toolUseID != "" && toolName != "" { + if currentToolUse != nil && currentToolUse.ToolUseID != toolUseID { + log.Warnf("kiro: interleaved tool use detected - new ID %s arrived while %s in progress, completing previous", + toolUseID, currentToolUse.ToolUseID) + if 
!processedIDs[currentToolUse.ToolUseID] {
+				incomplete := KiroToolUse{
+					ToolUseID: currentToolUse.ToolUseID,
+					Name:      currentToolUse.Name,
+				}
+				if currentToolUse.InputBuffer.Len() > 0 {
+					raw := currentToolUse.InputBuffer.String()
+					repaired := RepairJSON(raw)
+
+					var input map[string]interface{}
+					if err := json.Unmarshal([]byte(repaired), &input); err != nil {
+						log.Warnf("kiro: failed to parse interleaved tool input: %v, raw: %s", err, raw)
+						input = make(map[string]interface{})
+					}
+					incomplete.Input = input
+				}
+				toolUses = append(toolUses, incomplete)
+				// Guard the write: processedIDs may be nil, and writing to a nil map panics.
+				if processedIDs != nil {
+					processedIDs[currentToolUse.ToolUseID] = true
+				}
+			}
+			currentToolUse = nil
+		}
+
+		if currentToolUse == nil {
+			if processedIDs != nil && processedIDs[toolUseID] {
+				log.Debugf("kiro: skipping duplicate toolUseEvent: %s", toolUseID)
+				// Return any tool completed by the interleave path above instead of dropping it.
+				return toolUses, nil
+			}
+
+			currentToolUse = &ToolUseState{
+				ToolUseID: toolUseID,
+				Name:      toolName,
+			}
+			log.Infof("kiro: starting new tool use: %s (ID: %s)", toolName, toolUseID)
+		}
+	}
+
+	// Accumulate input fragments
+	if currentToolUse != nil && inputFragment != "" {
+		currentToolUse.InputBuffer.WriteString(inputFragment)
+		log.Debugf("kiro: accumulated input fragment, total length: %d", currentToolUse.InputBuffer.Len())
+	}
+
+	// If complete input object provided directly
+	if currentToolUse != nil && inputMap != nil {
+		inputBytes, _ := json.Marshal(inputMap)
+		currentToolUse.InputBuffer.Reset()
+		currentToolUse.InputBuffer.Write(inputBytes)
+	}
+
+	// Tool use complete
+	if isStop && currentToolUse != nil {
+		fullInput := currentToolUse.InputBuffer.String()
+
+		// Repair and parse the accumulated JSON
+		repairedJSON := RepairJSON(fullInput)
+		var finalInput map[string]interface{}
+		if err := json.Unmarshal([]byte(repairedJSON), &finalInput); err != nil {
+			log.Warnf("kiro: failed to parse accumulated tool input: %v, raw: %s", err, fullInput)
+			finalInput = make(map[string]interface{})
+		}
+
+		// Detect truncation for all tools
+		truncInfo := 
DetectTruncation(currentToolUse.Name, currentToolUse.ToolUseID, fullInput, finalInput)
+		if truncInfo.IsTruncated {
+			log.Warnf("kiro: TRUNCATION DETECTED for tool %s (ID: %s): type=%s, raw_size=%d bytes",
+				currentToolUse.Name, currentToolUse.ToolUseID, truncInfo.TruncationType, len(fullInput))
+			log.Warnf("kiro: truncation details: %s", truncInfo.ErrorMessage)
+			if len(truncInfo.ParsedFields) > 0 {
+				log.Infof("kiro: partial fields received: %v", truncInfo.ParsedFields)
+			}
+			// Store truncation info in the state for upstream handling
+			currentToolUse.TruncationInfo = &truncInfo
+		} else {
+			log.Infof("kiro: tool use %s input length: %d bytes (no truncation)", currentToolUse.Name, len(fullInput))
+		}
+
+		// Create the tool use; TruncationInfo is zero-valued (nil) unless truncation was detected
+		toolUse := KiroToolUse{
+			ToolUseID:   currentToolUse.ToolUseID,
+			Name:        currentToolUse.Name,
+			Input:       finalInput,
+			IsTruncated: truncInfo.IsTruncated,
+		}
+		if truncInfo.IsTruncated {
+			toolUse.TruncationInfo = &truncInfo
+		}
+		toolUses = append(toolUses, toolUse)
+
+		if processedIDs != nil {
+			processedIDs[currentToolUse.ToolUseID] = true
+		}
+
+		log.Infof("kiro: completed tool use: %s (ID: %s, truncated: %v)", currentToolUse.Name, currentToolUse.ToolUseID, truncInfo.IsTruncated)
+		return toolUses, nil
+	}
+
+	return toolUses, currentToolUse
+}
+
+// DeduplicateToolUses removes duplicate tool uses based on toolUseId and content. 
+func DeduplicateToolUses(toolUses []KiroToolUse) []KiroToolUse { + seenIDs := make(map[string]bool) + seenContent := make(map[string]bool) + var unique []KiroToolUse + + for _, tu := range toolUses { + if seenIDs[tu.ToolUseID] { + log.Debugf("kiro: removing ID-duplicate tool use: %s (name: %s)", tu.ToolUseID, tu.Name) + continue + } + + inputJSON, _ := json.Marshal(tu.Input) + contentKey := tu.Name + ":" + string(inputJSON) + + if seenContent[contentKey] { + log.Debugf("kiro: removing content-duplicate tool use: %s (id: %s)", tu.Name, tu.ToolUseID) + continue + } + + seenIDs[tu.ToolUseID] = true + seenContent[contentKey] = true + unique = append(unique, tu) + } + + return unique +} diff --git a/internal/translator/kiro/claude/kiro_websearch.go b/internal/translator/kiro/claude/kiro_websearch.go new file mode 100644 index 0000000000..b9da38294c --- /dev/null +++ b/internal/translator/kiro/claude/kiro_websearch.go @@ -0,0 +1,495 @@ +// Package claude provides web search functionality for Kiro translator. +// This file implements detection, MCP request/response types, and pure data +// transformation utilities for web search. SSE event generation, stream analysis, +// and HTTP I/O logic reside in the executor package (kiro_executor.go). +package claude + +import ( + "encoding/json" + "fmt" + "strings" + "sync/atomic" + "time" + + "github.com/google/uuid" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" + "github.com/tidwall/sjson" +) + +// cachedToolDescription stores the dynamically-fetched web_search tool description. +// Written by the executor via SetWebSearchDescription, read by the translator +// when building the remote_web_search tool for Kiro API requests. +var cachedToolDescription atomic.Value // stores string + +// GetWebSearchDescription returns the cached web_search tool description, +// or empty string if not yet fetched. Lock-free via atomic.Value. 
+func GetWebSearchDescription() string { + if v := cachedToolDescription.Load(); v != nil { + return v.(string) + } + return "" +} + +// SetWebSearchDescription stores the dynamically-fetched web_search tool description. +// Called by the executor after fetching from MCP tools/list. +func SetWebSearchDescription(desc string) { + cachedToolDescription.Store(desc) +} + +// McpRequest represents a JSON-RPC 2.0 request to Kiro MCP API +type McpRequest struct { + ID string `json:"id"` + JSONRPC string `json:"jsonrpc"` + Method string `json:"method"` + Params McpParams `json:"params"` +} + +// McpParams represents MCP request parameters +type McpParams struct { + Name string `json:"name"` + Arguments McpArguments `json:"arguments"` +} + +// McpArgumentsMeta represents the _meta field in MCP arguments +type McpArgumentsMeta struct { + IsValid bool `json:"_isValid"` + ActivePath []string `json:"_activePath"` + CompletedPaths [][]string `json:"_completedPaths"` +} + +// McpArguments represents MCP request arguments +type McpArguments struct { + Query string `json:"query"` + Meta *McpArgumentsMeta `json:"_meta,omitempty"` +} + +// McpResponse represents a JSON-RPC 2.0 response from Kiro MCP API +type McpResponse struct { + Error *McpError `json:"error,omitempty"` + ID string `json:"id"` + JSONRPC string `json:"jsonrpc"` + Result *McpResult `json:"result,omitempty"` +} + +// McpError represents an MCP error +type McpError struct { + Code *int `json:"code,omitempty"` + Message *string `json:"message,omitempty"` +} + +// McpResult represents MCP result +type McpResult struct { + Content []McpContent `json:"content"` + IsError bool `json:"isError"` +} + +// McpContent represents MCP content item +type McpContent struct { + ContentType string `json:"type"` + Text string `json:"text"` +} + +// WebSearchResults represents parsed search results +type WebSearchResults struct { + Results []WebSearchResult `json:"results"` + TotalResults *int `json:"totalResults,omitempty"` + Query 
*string `json:"query,omitempty"` + Error *string `json:"error,omitempty"` +} + +// WebSearchResult represents a single search result +type WebSearchResult struct { + Title string `json:"title"` + URL string `json:"url"` + Snippet *string `json:"snippet,omitempty"` + PublishedDate *int64 `json:"publishedDate,omitempty"` + ID *string `json:"id,omitempty"` + Domain *string `json:"domain,omitempty"` + MaxVerbatimWordLimit *int `json:"maxVerbatimWordLimit,omitempty"` + PublicDomain *bool `json:"publicDomain,omitempty"` +} + +// isWebSearchTool checks if a tool name or type indicates a web_search tool. +func isWebSearchTool(name, toolType string) bool { + return name == "web_search" || + strings.HasPrefix(toolType, "web_search") || + toolType == "web_search_20250305" +} + +// HasWebSearchTool checks if the request contains ONLY a web_search tool. +// Returns true only if tools array has exactly one tool named "web_search". +// Only intercept pure web_search requests (single-tool array). +func HasWebSearchTool(body []byte) bool { + tools := gjson.GetBytes(body, "tools") + if !tools.IsArray() { + return false + } + + toolsArray := tools.Array() + if len(toolsArray) != 1 { + return false + } + + // Check if the single tool is web_search + tool := toolsArray[0] + + // Check both name and type fields for web_search detection + name := strings.ToLower(tool.Get("name").String()) + toolType := strings.ToLower(tool.Get("type").String()) + + return isWebSearchTool(name, toolType) +} + +// ExtractSearchQuery extracts the search query from the request. +// Reads messages[0].content and removes "Perform a web search for the query: " prefix. 
+func ExtractSearchQuery(body []byte) string { + messages := gjson.GetBytes(body, "messages") + if !messages.IsArray() || len(messages.Array()) == 0 { + return "" + } + + firstMsg := messages.Array()[0] + content := firstMsg.Get("content") + + var text string + if content.IsArray() { + // Array format: [{"type": "text", "text": "..."}] + for _, block := range content.Array() { + if block.Get("type").String() == "text" { + text = block.Get("text").String() + break + } + } + } else { + // String format + text = content.String() + } + + // Remove prefix "Perform a web search for the query: " + const prefix = "Perform a web search for the query: " + if strings.HasPrefix(text, prefix) { + text = text[len(prefix):] + } + + return strings.TrimSpace(text) +} + +// generateRandomID8 generates an 8-character random lowercase alphanumeric string +func generateRandomID8() string { + u := uuid.New() + return strings.ToLower(strings.ReplaceAll(u.String(), "-", "")[:8]) +} + +// CreateMcpRequest creates an MCP request for web search. 
+// Returns (toolUseID, McpRequest) +// ID format: web_search_tooluse_{22 random}_{timestamp_millis}_{8 random} +func CreateMcpRequest(query string) (string, *McpRequest) { + random22 := GenerateToolUseID() + timestamp := time.Now().UnixMilli() + random8 := generateRandomID8() + + requestID := fmt.Sprintf("web_search_tooluse_%s_%d_%s", random22, timestamp, random8) + + // tool_use_id format: srvtoolu_{32 hex chars} + toolUseID := "srvtoolu_" + strings.ReplaceAll(uuid.New().String(), "-", "")[:32] + + request := &McpRequest{ + ID: requestID, + JSONRPC: "2.0", + Method: "tools/call", + Params: McpParams{ + Name: "web_search", + Arguments: McpArguments{ + Query: query, + Meta: &McpArgumentsMeta{ + IsValid: true, + ActivePath: []string{"query"}, + CompletedPaths: [][]string{{"query"}}, + }, + }, + }, + } + + return toolUseID, request +} + +// GenerateToolUseID generates a Kiro-style tool use ID (base62-like UUID) +func GenerateToolUseID() string { + return strings.ReplaceAll(uuid.New().String(), "-", "")[:22] +} + +// ReplaceWebSearchToolDescription replaces the web_search tool description with +// a minimal version that allows re-search without the restrictive "do not search +// non-coding topics" instruction from the original Kiro tools/list response. +// This keeps the tool available so the model can request additional searches. +func ReplaceWebSearchToolDescription(body []byte) ([]byte, error) { + tools := gjson.GetBytes(body, "tools") + if !tools.IsArray() { + return body, nil + } + + var updated []json.RawMessage + for _, tool := range tools.Array() { + name := strings.ToLower(tool.Get("name").String()) + toolType := strings.ToLower(tool.Get("type").String()) + + if isWebSearchTool(name, toolType) { + // Replace with a minimal web_search tool definition + minimalTool := map[string]interface{}{ + "name": "web_search", + "description": "Search the web for information. 
Use this when the previous search results are insufficient or when you need additional information on a different aspect of the query. Provide a refined or different search query.", + "input_schema": map[string]interface{}{ + "type": "object", + "properties": map[string]interface{}{ + "query": map[string]interface{}{ + "type": "string", + "description": "The search query to execute", + }, + }, + "required": []string{"query"}, + "additionalProperties": false, + }, + } + minimalJSON, err := json.Marshal(minimalTool) + if err != nil { + return body, fmt.Errorf("failed to marshal minimal tool: %w", err) + } + updated = append(updated, json.RawMessage(minimalJSON)) + } else { + updated = append(updated, json.RawMessage(tool.Raw)) + } + } + + updatedJSON, err := json.Marshal(updated) + if err != nil { + return body, fmt.Errorf("failed to marshal updated tools: %w", err) + } + result, err := sjson.SetRawBytes(body, "tools", updatedJSON) + if err != nil { + return body, fmt.Errorf("failed to set updated tools: %w", err) + } + + return result, nil +} + +// FormatSearchContextPrompt formats search results as a structured text block +// for injection into the system prompt. +func FormatSearchContextPrompt(query string, results *WebSearchResults) string { + var sb strings.Builder + sb.WriteString(fmt.Sprintf("[Web Search Results for \"%s\"]\n", query)) + + if results != nil && len(results.Results) > 0 { + for i, r := range results.Results { + sb.WriteString(fmt.Sprintf("%d. %s - %s\n", i+1, r.Title, r.URL)) + if r.Snippet != nil && *r.Snippet != "" { + snippet := *r.Snippet + if len(snippet) > 500 { + snippet = snippet[:500] + "..." + } + sb.WriteString(fmt.Sprintf(" %s\n", snippet)) + } + } + } else { + sb.WriteString("No results found.\n") + } + + sb.WriteString("[End Web Search Results]") + return sb.String() +} + +// FormatToolResultText formats search results as JSON text for the toolResults content field. +// This matches the format observed in Kiro IDE HAR captures. 
+func FormatToolResultText(results *WebSearchResults) string { + if results == nil || len(results.Results) == 0 { + return "No search results found." + } + + text := fmt.Sprintf("Found %d search result(s):\n\n", len(results.Results)) + resultJSON, err := json.MarshalIndent(results.Results, "", " ") + if err != nil { + return text + "Error formatting results." + } + return text + string(resultJSON) +} + +// InjectToolResultsClaude modifies a Claude-format JSON payload to append +// tool_use (assistant) and tool_result (user) messages to the messages array. +// BuildKiroPayload correctly translates: +// - assistant tool_use → KiroAssistantResponseMessage.toolUses +// - user tool_result → KiroUserInputMessageContext.toolResults +// +// This produces the exact same GAR request format as the Kiro IDE (HAR captures). +// IMPORTANT: The web_search tool must remain in the "tools" array for this to work. +// Use ReplaceWebSearchToolDescription to keep the tool available with a minimal description. +func InjectToolResultsClaude(claudePayload []byte, toolUseId, query string, results *WebSearchResults) ([]byte, error) { + var payload map[string]interface{} + if err := json.Unmarshal(claudePayload, &payload); err != nil { + return claudePayload, fmt.Errorf("failed to parse claude payload: %w", err) + } + + messages, _ := payload["messages"].([]interface{}) + + // 1. Append assistant message with tool_use (matches HAR: assistantResponseMessage.toolUses) + assistantMsg := map[string]interface{}{ + "role": "assistant", + "content": []interface{}{ + map[string]interface{}{ + "type": "tool_use", + "id": toolUseId, + "name": "web_search", + "input": map[string]interface{}{"query": query}, + }, + }, + } + messages = append(messages, assistantMsg) + + // 2. Append user message with tool_result + search behavior instructions. 
+ // NOTE: We embed search instructions HERE (not in system prompt) because + // BuildKiroPayload clears the system prompt when len(history) > 0, + // which is always true after injecting assistant + user messages. + now := time.Now() + searchGuidance := fmt.Sprintf(` +Current date: %s (%s) + +IMPORTANT: Evaluate the search results above carefully. If the results are: +- Mostly spam, SEO junk, or unrelated websites +- Missing actual information about the query topic +- Outdated or not matching the requested time frame + +Then you MUST use the web_search tool again with a refined query. Try: +- Rephrasing in English for better coverage +- Using more specific keywords +- Adding date context + +Do NOT apologize for bad results without first attempting a re-search. +`, now.Format("January 2, 2006"), now.Format("Monday")) + + userMsg := map[string]interface{}{ + "role": "user", + "content": []interface{}{ + map[string]interface{}{ + "type": "tool_result", + "tool_use_id": toolUseId, + "content": FormatToolResultText(results), + }, + map[string]interface{}{ + "type": "text", + "text": searchGuidance, + }, + }, + } + messages = append(messages, userMsg) + + payload["messages"] = messages + + result, err := json.Marshal(payload) + if err != nil { + return claudePayload, fmt.Errorf("failed to marshal updated payload: %w", err) + } + + log.Infof("kiro/websearch: injected tool_use+tool_result (toolUseId=%s, messages=%d)", + toolUseId, len(messages)) + + return result, nil +} + +// InjectSearchIndicatorsInResponse prepends server_tool_use + web_search_tool_result +// content blocks into a non-streaming Claude JSON response. Claude Code counts +// server_tool_use blocks to display "Did X searches in Ys". 
+// +// Input response: {"content": [{"type":"text","text":"..."}], ...} +// Output response: {"content": [{"type":"server_tool_use",...}, {"type":"web_search_tool_result",...}, {"type":"text","text":"..."}], ...} +func InjectSearchIndicatorsInResponse(responsePayload []byte, searches []SearchIndicator) ([]byte, error) { + if len(searches) == 0 { + return responsePayload, nil + } + + var resp map[string]interface{} + if err := json.Unmarshal(responsePayload, &resp); err != nil { + return responsePayload, fmt.Errorf("failed to parse response: %w", err) + } + + existingContent, _ := resp["content"].([]interface{}) + + // Build new content: search indicators first, then existing content + newContent := make([]interface{}, 0, len(searches)*2+len(existingContent)) + + for _, s := range searches { + // server_tool_use block + newContent = append(newContent, map[string]interface{}{ + "type": "server_tool_use", + "id": s.ToolUseID, + "name": "web_search", + "input": map[string]interface{}{"query": s.Query}, + }) + + // web_search_tool_result block + searchContent := make([]map[string]interface{}, 0) + if s.Results != nil { + for _, r := range s.Results.Results { + snippet := "" + if r.Snippet != nil { + snippet = *r.Snippet + } + searchContent = append(searchContent, map[string]interface{}{ + "type": "web_search_result", + "title": r.Title, + "url": r.URL, + "encrypted_content": snippet, + "page_age": nil, + }) + } + } + newContent = append(newContent, map[string]interface{}{ + "type": "web_search_tool_result", + "tool_use_id": s.ToolUseID, + "content": searchContent, + }) + } + + // Append existing content blocks + newContent = append(newContent, existingContent...) 
+ resp["content"] = newContent + + result, err := json.Marshal(resp) + if err != nil { + return responsePayload, fmt.Errorf("failed to marshal response: %w", err) + } + + log.Infof("kiro/websearch: injected %d search indicator(s) into non-stream response", len(searches)) + return result, nil +} + +// SearchIndicator holds the data for one search operation to inject into a response. +type SearchIndicator struct { + ToolUseID string + Query string + Results *WebSearchResults +} + +// BuildMcpEndpoint constructs the MCP endpoint URL for the given AWS region. +// Centralizes the URL pattern used by both handleWebSearch and handleWebSearchStream. +func BuildMcpEndpoint(region string) string { + return fmt.Sprintf("https://q.%s.amazonaws.com/mcp", region) +} + +// ParseSearchResults extracts WebSearchResults from MCP response +func ParseSearchResults(response *McpResponse) *WebSearchResults { + if response == nil || response.Result == nil || len(response.Result.Content) == 0 { + return nil + } + + content := response.Result.Content[0] + if content.ContentType != "text" { + return nil + } + + var results WebSearchResults + if err := json.Unmarshal([]byte(content.Text), &results); err != nil { + log.Warnf("kiro/websearch: failed to parse search results: %v", err) + return nil + } + + return &results +} diff --git a/internal/translator/kiro/claude/truncation_detector.go b/internal/translator/kiro/claude/truncation_detector.go new file mode 100644 index 0000000000..cc0d34848a --- /dev/null +++ b/internal/translator/kiro/claude/truncation_detector.go @@ -0,0 +1,537 @@ +// Package claude provides truncation detection for Kiro tool call responses. +// When Kiro API reaches its output token limit, tool call JSON may be truncated, +// resulting in incomplete or unparseable input parameters. +package claude + +import ( + "encoding/json" + "strings" + + log "github.com/sirupsen/logrus" +) + +// TruncationInfo contains details about detected truncation in a tool use event. 
+type TruncationInfo struct { + IsTruncated bool // Whether truncation was detected + TruncationType string // Type of truncation detected + ToolName string // Name of the truncated tool + ToolUseID string // ID of the truncated tool use + RawInput string // The raw (possibly truncated) input string + ParsedFields map[string]string // Fields that were successfully parsed before truncation + ErrorMessage string // Human-readable error message +} + +// TruncationType constants for different truncation scenarios +const ( + TruncationTypeNone = "" // No truncation detected + TruncationTypeEmptyInput = "empty_input" // No input data received at all + TruncationTypeInvalidJSON = "invalid_json" // JSON is syntactically invalid (truncated mid-value) + TruncationTypeMissingFields = "missing_fields" // JSON parsed but critical fields are missing + TruncationTypeIncompleteString = "incomplete_string" // String value was cut off mid-content +) + +// KnownWriteTools lists tool names that typically write content and have a "content" field. +// These tools are checked for content field truncation specifically. +var KnownWriteTools = map[string]bool{ + "Write": true, + "write_to_file": true, + "fsWrite": true, + "create_file": true, + "edit_file": true, + "apply_diff": true, + "str_replace_editor": true, + "insert": true, +} + +// KnownCommandTools lists tool names that execute commands. +var KnownCommandTools = map[string]bool{ + "Bash": true, + "execute": true, + "run_command": true, + "shell": true, + "terminal": true, + "execute_python": true, +} + +// RequiredFieldsByTool maps tool names to their required field groups. +// Each outer element is a required group; each inner slice lists alternative field names (OR logic). +// A group is satisfied when ANY one of its alternatives exists in the parsed input. +// All groups must be satisfied for the tool input to be considered valid. +// +// Example: +// {{"cmd", "command"}} means the tool needs EITHER "cmd" OR "command". 
+// {{"file_path"}, {"content"}} means the tool needs BOTH "file_path" AND "content". +var RequiredFieldsByTool = map[string][][]string{ + "Write": {{"file_path"}, {"content"}}, + "write_to_file": {{"path"}, {"content"}}, + "fsWrite": {{"path"}, {"content"}}, + "create_file": {{"path"}, {"content"}}, + "edit_file": {{"path"}}, + "apply_diff": {{"path"}, {"diff"}}, + "str_replace_editor": {{"path"}, {"old_str"}, {"new_str"}}, + "Bash": {{"cmd", "command"}}, + "execute": {{"command"}}, + "run_command": {{"command"}}, +} + +// DetectTruncation checks if the tool use input appears to be truncated. +// It returns detailed information about the truncation status and type. +func DetectTruncation(toolName, toolUseID, rawInput string, parsedInput map[string]interface{}) TruncationInfo { + info := TruncationInfo{ + ToolName: toolName, + ToolUseID: toolUseID, + RawInput: rawInput, + ParsedFields: make(map[string]string), + } + + // Scenario 1: Empty input buffer - only flag as truncation if tool has required fields + // Many tools (e.g. 
TaskList, TaskGet) have no required params, so empty input is valid + if strings.TrimSpace(rawInput) == "" { + if _, hasRequirements := RequiredFieldsByTool[toolName]; hasRequirements { + info.IsTruncated = true + info.TruncationType = TruncationTypeEmptyInput + info.ErrorMessage = "Tool input was completely empty - API response may have been truncated before tool parameters were transmitted" + log.Warnf("kiro: truncation detected [%s] for tool %s (ID: %s): empty input buffer", + info.TruncationType, toolName, toolUseID) + return info + } + log.Debugf("kiro: empty input for tool %s (ID: %s) - no required fields, treating as valid", toolName, toolUseID) + return info + } + + // Scenario 2: JSON parse failure - syntactically invalid JSON + if parsedInput == nil || len(parsedInput) == 0 { + // Check if the raw input looks like truncated JSON + if looksLikeTruncatedJSON(rawInput) { + info.IsTruncated = true + info.TruncationType = TruncationTypeInvalidJSON + info.ParsedFields = extractPartialFields(rawInput) + info.ErrorMessage = buildTruncationErrorMessage(toolName, info.TruncationType, info.ParsedFields, rawInput) + log.Warnf("kiro: truncation detected [%s] for tool %s (ID: %s): JSON parse failed, raw length=%d bytes", + info.TruncationType, toolName, toolUseID, len(rawInput)) + return info + } + } + + // Scenario 3: JSON parsed but critical fields are missing + if parsedInput != nil { + requiredGroups, hasRequirements := RequiredFieldsByTool[toolName] + if hasRequirements { + missingFields := findMissingRequiredFields(parsedInput, requiredGroups) + if len(missingFields) > 0 { + info.IsTruncated = true + info.TruncationType = TruncationTypeMissingFields + info.ParsedFields = extractParsedFieldNames(parsedInput) + info.ErrorMessage = buildMissingFieldsErrorMessage(toolName, missingFields, info.ParsedFields) + log.Warnf("kiro: truncation detected [%s] for tool %s (ID: %s): missing required fields: %v", + info.TruncationType, toolName, toolUseID, missingFields) + return 
info + } + } + + // Scenario 4: Check for incomplete string values (very short content for write tools) + if isWriteTool(toolName) { + if contentTruncation := detectContentTruncation(parsedInput, rawInput); contentTruncation != "" { + info.IsTruncated = true + info.TruncationType = TruncationTypeIncompleteString + info.ParsedFields = extractParsedFieldNames(parsedInput) + info.ErrorMessage = contentTruncation + log.Warnf("kiro: truncation detected [%s] for tool %s (ID: %s): %s", + info.TruncationType, toolName, toolUseID, contentTruncation) + return info + } + } + } + + // No truncation detected + info.IsTruncated = false + info.TruncationType = TruncationTypeNone + return info +} + +// looksLikeTruncatedJSON checks if the raw string appears to be truncated JSON. +func looksLikeTruncatedJSON(raw string) bool { + trimmed := strings.TrimSpace(raw) + if trimmed == "" { + return false + } + + // Must start with { to be considered JSON + if !strings.HasPrefix(trimmed, "{") { + return false + } + + // Count brackets to detect imbalance + openBraces := strings.Count(trimmed, "{") + closeBraces := strings.Count(trimmed, "}") + openBrackets := strings.Count(trimmed, "[") + closeBrackets := strings.Count(trimmed, "]") + + // Bracket imbalance suggests truncation + if openBraces > closeBraces || openBrackets > closeBrackets { + return true + } + + // Check for obvious truncation patterns + // - Ends with a quote but no closing brace + // - Ends with a colon (mid key-value) + // - Ends with a comma (mid object/array) + lastChar := trimmed[len(trimmed)-1] + if lastChar != '}' && lastChar != ']' { + // Check if it's not a complete simple value + if lastChar == '"' || lastChar == ':' || lastChar == ',' { + return true + } + } + + // Check for unclosed strings (odd number of unescaped quotes) + inString := false + escaped := false + for i := 0; i < len(trimmed); i++ { + c := trimmed[i] + if escaped { + escaped = false + continue + } + if c == '\\' { + escaped = true + continue + } 
+ if c == '"' { + inString = !inString + } + } + if inString { + return true // Unclosed string + } + + return false +} + +// extractPartialFields attempts to extract any field names from malformed JSON. +// This helps provide context about what was received before truncation. +func extractPartialFields(raw string) map[string]string { + fields := make(map[string]string) + + // Simple pattern matching for "key": "value" or "key": value patterns + // This works even with truncated JSON + trimmed := strings.TrimSpace(raw) + if !strings.HasPrefix(trimmed, "{") { + return fields + } + + // Remove opening brace + content := strings.TrimPrefix(trimmed, "{") + + // Split by comma (rough parsing) + parts := strings.Split(content, ",") + for _, part := range parts { + part = strings.TrimSpace(part) + if colonIdx := strings.Index(part, ":"); colonIdx > 0 { + key := strings.TrimSpace(part[:colonIdx]) + key = strings.Trim(key, `"`) + value := strings.TrimSpace(part[colonIdx+1:]) + + // Truncate long values for display + if len(value) > 50 { + value = value[:50] + "..." + } + fields[key] = value + } + } + + return fields +} + +// extractParsedFieldNames returns the field names from a successfully parsed map. +func extractParsedFieldNames(parsed map[string]interface{}) map[string]string { + fields := make(map[string]string) + for key, val := range parsed { + switch v := val.(type) { + case string: + if len(v) > 50 { + fields[key] = v[:50] + "..." + } else { + fields[key] = v + } + case nil: + fields[key] = "" + default: + // For complex types, just indicate presence + fields[key] = "" + } + } + return fields +} + +// findMissingRequiredFields checks which required field groups are unsatisfied. +// Each group is a slice of alternative field names; the group is satisfied when ANY alternative exists. +// Returns the list of unsatisfied groups (represented by their alternatives joined with "/"). 
+func findMissingRequiredFields(parsed map[string]interface{}, requiredGroups [][]string) []string { + var missing []string + for _, group := range requiredGroups { + satisfied := false + for _, field := range group { + if _, exists := parsed[field]; exists { + satisfied = true + break + } + } + if !satisfied { + missing = append(missing, strings.Join(group, "/")) + } + } + return missing +} + +// isWriteTool checks if the tool is a known write/file operation tool. +func isWriteTool(toolName string) bool { + return KnownWriteTools[toolName] +} + +// detectContentTruncation checks if the content field appears truncated for write tools. +func detectContentTruncation(parsed map[string]interface{}, rawInput string) string { + // Check for content field + content, hasContent := parsed["content"] + if !hasContent { + return "" + } + + contentStr, isString := content.(string) + if !isString { + return "" + } + + // Heuristic: if raw input is very large but content is suspiciously short, + // it might indicate truncation during JSON repair + if len(rawInput) > 1000 && len(contentStr) < 100 { + return "content field appears suspiciously short compared to raw input size" + } + + // Check for code blocks that appear to be cut off + if strings.Contains(contentStr, "```") { + openFences := strings.Count(contentStr, "```") + if openFences%2 != 0 { + return "content contains unclosed code fence (```) suggesting truncation" + } + } + + return "" +} + +// buildTruncationErrorMessage creates a human-readable error message for truncation. +func buildTruncationErrorMessage(toolName, truncationType string, parsedFields map[string]string, rawInput string) string { + var sb strings.Builder + sb.WriteString("Tool input was truncated by the API. ") + + switch truncationType { + case TruncationTypeEmptyInput: + sb.WriteString("No input data was received.") + case TruncationTypeInvalidJSON: + sb.WriteString("JSON was cut off mid-transmission. 
") + if len(parsedFields) > 0 { + sb.WriteString("Partial fields received: ") + first := true + for k := range parsedFields { + if !first { + sb.WriteString(", ") + } + sb.WriteString(k) + first = false + } + } + case TruncationTypeMissingFields: + sb.WriteString("Required fields are missing from the input.") + case TruncationTypeIncompleteString: + sb.WriteString("Content appears to be shortened or incomplete.") + } + + sb.WriteString(" Received ") + sb.WriteString(formatInt(len(rawInput))) + sb.WriteString(" bytes. Please retry with smaller content chunks.") + + return sb.String() +} + +// buildMissingFieldsErrorMessage creates an error message for missing required fields. +func buildMissingFieldsErrorMessage(toolName string, missingFields []string, parsedFields map[string]string) string { + var sb strings.Builder + sb.WriteString("Tool '") + sb.WriteString(toolName) + sb.WriteString("' is missing required fields: ") + sb.WriteString(strings.Join(missingFields, ", ")) + sb.WriteString(". Fields received: ") + + first := true + for k := range parsedFields { + if !first { + sb.WriteString(", ") + } + sb.WriteString(k) + first = false + } + + sb.WriteString(". This usually indicates the API response was truncated.") + return sb.String() +} + +// IsTruncated is a convenience function to check if a tool use appears truncated. +func IsTruncated(toolName, rawInput string, parsedInput map[string]interface{}) bool { + info := DetectTruncation(toolName, "", rawInput, parsedInput) + return info.IsTruncated +} + +// GetTruncationSummary returns a short summary string for logging. 
+func GetTruncationSummary(info TruncationInfo) string { + if !info.IsTruncated { + return "" + } + + result, _ := json.Marshal(map[string]interface{}{ + "tool": info.ToolName, + "type": info.TruncationType, + "parsed_fields": info.ParsedFields, + "raw_input_size": len(info.RawInput), + }) + return string(result) +} + +// SoftFailureMessage contains the message structure for a truncation soft failure. +// This is returned to Claude as a tool_result to guide retry behavior. +type SoftFailureMessage struct { + Status string // "incomplete" - not an error, just incomplete + Reason string // Why the tool call was incomplete + Guidance []string // Step-by-step retry instructions + Context string // Any context about what was received + MaxLineHint int // Suggested maximum lines per chunk +} + +// BuildSoftFailureMessage creates a structured message for Claude when truncation is detected. +// This follows the "soft failure" pattern: +// - For Claude: Clear explanation of what happened and how to fix +// - For User: Hidden or minimized (appears as normal processing) +// +// Key principle: "Conclusion First" +// 1. First state what happened (incomplete) +// 2. Then explain how to fix (chunked approach) +// 3. Provide specific guidance (line limits) +func BuildSoftFailureMessage(info TruncationInfo) SoftFailureMessage { + msg := SoftFailureMessage{ + Status: "incomplete", + MaxLineHint: 300, // Conservative default + } + + // Build reason based on truncation type + switch info.TruncationType { + case TruncationTypeEmptyInput: + msg.Reason = "Your tool call was too large and the input was completely lost during transmission." + msg.MaxLineHint = 200 + case TruncationTypeInvalidJSON: + msg.Reason = "Your tool call was truncated mid-transmission, resulting in incomplete JSON." + msg.MaxLineHint = 250 + case TruncationTypeMissingFields: + msg.Reason = "Your tool call was partially received but critical fields were cut off." 
+ msg.MaxLineHint = 300 + case TruncationTypeIncompleteString: + msg.Reason = "Your tool call content was truncated - the full content did not arrive." + msg.MaxLineHint = 350 + default: + msg.Reason = "Your tool call was truncated by the API due to output size limits." + } + + // Build context from parsed fields + if len(info.ParsedFields) > 0 { + var parts []string + for k, v := range info.ParsedFields { + if len(v) > 30 { + v = v[:30] + "..." + } + parts = append(parts, k+"="+v) + } + msg.Context = "Received partial data: " + strings.Join(parts, ", ") + } + + // Build retry guidance - CRITICAL: Conclusion first approach + msg.Guidance = []string{ + "CONCLUSION: Split your output into smaller chunks and retry.", + "", + "REQUIRED APPROACH:", + "1. For file writes: Write in chunks of ~" + formatInt(msg.MaxLineHint) + " lines maximum", + "2. For new files: First create with initial chunk, then append remaining sections", + "3. For edits: Make surgical, targeted changes - avoid rewriting entire files", + "", + "EXAMPLE (writing a 600-line file):", + " - Step 1: Write lines 1-300 (create file)", + " - Step 2: Append lines 301-600 (extend file)", + "", + "DO NOT attempt to write the full content again in a single call.", + "The API has a hard output limit that cannot be bypassed.", + } + + return msg +} + +// formatInt converts an integer to string (helper to avoid strconv import) +func formatInt(n int) string { + if n == 0 { + return "0" + } + result := "" + for n > 0 { + result = string(rune('0'+n%10)) + result + n /= 10 + } + return result +} + +// BuildSoftFailureToolResult creates a tool_result content for Claude. +// This is what Claude will see when a tool call is truncated. +// Returns a string that should be used as the tool_result content. 
+func BuildSoftFailureToolResult(info TruncationInfo) string { + msg := BuildSoftFailureMessage(info) + + var sb strings.Builder + sb.WriteString("TOOL_CALL_INCOMPLETE\n") + sb.WriteString("status: ") + sb.WriteString(msg.Status) + sb.WriteString("\n") + sb.WriteString("reason: ") + sb.WriteString(msg.Reason) + sb.WriteString("\n") + + if msg.Context != "" { + sb.WriteString("context: ") + sb.WriteString(msg.Context) + sb.WriteString("\n") + } + + sb.WriteString("\n") + for _, line := range msg.Guidance { + if line != "" { + sb.WriteString(line) + sb.WriteString("\n") + } + } + + return sb.String() +} + +// CreateTruncationToolResult creates a KiroToolUse that represents a soft failure. +// Instead of returning the truncated tool_use, we return a tool with a special +// error result that guides Claude to retry with smaller chunks. +// +// This is the key mechanism for "soft failure": +// - stop_reason remains "tool_use" so Claude continues +// - The tool_result content explains the issue and how to fix it +// - Claude will read this and adjust its approach +func CreateTruncationToolResult(info TruncationInfo) KiroToolUse { + // We create a pseudo tool_use that represents the failed attempt + // The executor will convert this to a tool_result with the guidance message + return KiroToolUse{ + ToolUseID: info.ToolUseID, + Name: info.ToolName, + Input: nil, // No input since it was truncated + IsTruncated: true, + TruncationInfo: &info, + } +} diff --git a/internal/translator/kiro/common/constants.go b/internal/translator/kiro/common/constants.go new file mode 100644 index 0000000000..a7c21e6eae --- /dev/null +++ b/internal/translator/kiro/common/constants.go @@ -0,0 +1,95 @@ +// Package common provides shared constants and utilities for Kiro translator. +package common + +const ( + // KiroMaxToolDescLen is the maximum description length for Kiro API tools. + // Kiro API limit is 10240 bytes, leave room for "..." 
+ KiroMaxToolDescLen = 10237 + + // ThinkingStartTag is the start tag for thinking blocks in responses. + ThinkingStartTag = "" + + // ThinkingEndTag is the end tag for thinking blocks in responses. + ThinkingEndTag = "" + + // CodeFenceMarker is the markdown code fence marker. + CodeFenceMarker = "```" + + // AltCodeFenceMarker is the alternative markdown code fence marker. + AltCodeFenceMarker = "~~~" + + // InlineCodeMarker is the markdown inline code marker (backtick). + InlineCodeMarker = "`" + + // DefaultAssistantContentWithTools is the fallback content for assistant messages + // that have tool_use but no text content. Kiro API requires non-empty content. + // IMPORTANT: Use a minimal neutral string that the model won't mimic in responses. + // Previously "I'll help you with that." which caused the model to parrot it back. + DefaultAssistantContentWithTools = "." + + // DefaultAssistantContent is the fallback content for assistant messages + // that have no content at all. Kiro API requires non-empty content. + // IMPORTANT: Use a minimal neutral string that the model won't mimic in responses. + // Previously "I understand." which could leak into model behavior. + DefaultAssistantContent = "." + + // DefaultUserContentWithToolResults is the fallback content for user messages + // that have only tool_result (no text). Kiro API requires non-empty content. + DefaultUserContentWithToolResults = "Tool results provided." + + // DefaultUserContent is the fallback content for user messages + // that have no content at all. Kiro API requires non-empty content. + DefaultUserContent = "Continue" + + // KiroAgenticSystemPrompt is injected only for -agentic models to prevent timeouts on large writes. + // AWS Kiro API has a 2-3 minute timeout for large file write operations. + KiroAgenticSystemPrompt = ` +# CRITICAL: CHUNKED WRITE PROTOCOL (MANDATORY) + +You MUST follow these rules for ALL file operations. Violation causes server timeouts and task failure. 
+ +## ABSOLUTE LIMITS +- **MAXIMUM 350 LINES** per single write/edit operation - NO EXCEPTIONS +- **RECOMMENDED 300 LINES** or less for optimal performance +- **NEVER** write entire files in one operation if >300 lines + +## MANDATORY CHUNKED WRITE STRATEGY + +### For NEW FILES (>300 lines total): +1. FIRST: Write initial chunk (first 250-300 lines) using write_to_file/fsWrite +2. THEN: Append remaining content in 250-300 line chunks using file append operations +3. REPEAT: Continue appending until complete + +### For EDITING EXISTING FILES: +1. Use surgical edits (apply_diff/targeted edits) - change ONLY what's needed +2. NEVER rewrite entire files - use incremental modifications +3. Split large refactors into multiple small, focused edits + +### For LARGE CODE GENERATION: +1. Generate in logical sections (imports, types, functions separately) +2. Write each section as a separate operation +3. Use append operations for subsequent sections + +## EXAMPLES OF CORRECT BEHAVIOR + +✅ CORRECT: Writing a 600-line file +- Operation 1: Write lines 1-300 (initial file creation) +- Operation 2: Append lines 301-600 + +✅ CORRECT: Editing multiple functions +- Operation 1: Edit function A +- Operation 2: Edit function B +- Operation 3: Edit function C + +❌ WRONG: Writing 500 lines in single operation → TIMEOUT +❌ WRONG: Rewriting entire file to change 5 lines → TIMEOUT +❌ WRONG: Generating massive code blocks without chunking → TIMEOUT + +## WHY THIS MATTERS +- Server has 2-3 minute timeout for operations +- Large writes exceed timeout and FAIL completely +- Chunked writes are FASTER and more RELIABLE +- Failed writes waste time and require retry + +REMEMBER: When in doubt, write LESS per operation. 
Multiple small operations > one large operation.` +) diff --git a/internal/translator/kiro/common/message_merge.go b/internal/translator/kiro/common/message_merge.go new file mode 100644 index 0000000000..2765fc6e98 --- /dev/null +++ b/internal/translator/kiro/common/message_merge.go @@ -0,0 +1,160 @@ +// Package common provides shared utilities for Kiro translators. +package common + +import ( + "encoding/json" + + "github.com/tidwall/gjson" +) + +// MergeAdjacentMessages merges adjacent messages with the same role. +// This reduces API call complexity and improves compatibility. +// Based on AIClient-2-API implementation. +// NOTE: Tool messages are NOT merged because each has a unique tool_call_id that must be preserved. +func MergeAdjacentMessages(messages []gjson.Result) []gjson.Result { + if len(messages) <= 1 { + return messages + } + + var merged []gjson.Result + for _, msg := range messages { + if len(merged) == 0 { + merged = append(merged, msg) + continue + } + + lastMsg := merged[len(merged)-1] + currentRole := msg.Get("role").String() + lastRole := lastMsg.Get("role").String() + + // Don't merge tool messages - each has a unique tool_call_id + if currentRole == "tool" || lastRole == "tool" { + merged = append(merged, msg) + continue + } + + if currentRole == lastRole { + // Merge content from current message into last message + mergedContent := mergeMessageContent(lastMsg, msg) + var mergedToolCalls []interface{} + if currentRole == "assistant" { + // Preserve assistant tool_calls when adjacent assistant messages are merged. + mergedToolCalls = mergeToolCalls(lastMsg.Get("tool_calls"), msg.Get("tool_calls")) + } + + // Create a new merged message JSON. + mergedMsg := createMergedMessage(lastRole, mergedContent, mergedToolCalls) + merged[len(merged)-1] = gjson.Parse(mergedMsg) + } else { + merged = append(merged, msg) + } + } + + return merged +} + +// mergeMessageContent merges the content of two messages with the same role. 
+// Handles both string content and array content (with text, tool_use, tool_result blocks). +func mergeMessageContent(msg1, msg2 gjson.Result) string { + content1 := msg1.Get("content") + content2 := msg2.Get("content") + + // Extract content blocks from both messages + var blocks1, blocks2 []map[string]interface{} + + if content1.IsArray() { + for _, block := range content1.Array() { + blocks1 = append(blocks1, blockToMap(block)) + } + } else if content1.Type == gjson.String { + blocks1 = append(blocks1, map[string]interface{}{ + "type": "text", + "text": content1.String(), + }) + } + + if content2.IsArray() { + for _, block := range content2.Array() { + blocks2 = append(blocks2, blockToMap(block)) + } + } else if content2.Type == gjson.String { + blocks2 = append(blocks2, map[string]interface{}{ + "type": "text", + "text": content2.String(), + }) + } + + // Merge text blocks if both end/start with text + if len(blocks1) > 0 && len(blocks2) > 0 { + if blocks1[len(blocks1)-1]["type"] == "text" && blocks2[0]["type"] == "text" { + // Merge the last text block of msg1 with the first text block of msg2 + text1 := blocks1[len(blocks1)-1]["text"].(string) + text2 := blocks2[0]["text"].(string) + blocks1[len(blocks1)-1]["text"] = text1 + "\n" + text2 + blocks2 = blocks2[1:] // Remove the merged block from blocks2 + } + } + + // Combine all blocks + allBlocks := append(blocks1, blocks2...) 
+ + // Convert to JSON + result, _ := json.Marshal(allBlocks) + return string(result) +} + +// blockToMap converts a gjson.Result block to a map[string]interface{} +func blockToMap(block gjson.Result) map[string]interface{} { + result := make(map[string]interface{}) + block.ForEach(func(key, value gjson.Result) bool { + if value.IsObject() { + result[key.String()] = blockToMap(value) + } else if value.IsArray() { + var arr []interface{} + for _, item := range value.Array() { + if item.IsObject() { + arr = append(arr, blockToMap(item)) + } else { + arr = append(arr, item.Value()) + } + } + result[key.String()] = arr + } else { + result[key.String()] = value.Value() + } + return true + }) + return result +} + +// createMergedMessage creates a JSON string for a merged message. +// toolCalls is optional and only emitted for assistant role. +func createMergedMessage(role string, content string, toolCalls []interface{}) string { + msg := map[string]interface{}{ + "role": role, + "content": json.RawMessage(content), + } + if role == "assistant" && len(toolCalls) > 0 { + msg["tool_calls"] = toolCalls + } + result, _ := json.Marshal(msg) + return string(result) +} + +// mergeToolCalls combines tool_calls from two assistant messages while preserving order. 
+func mergeToolCalls(tc1, tc2 gjson.Result) []interface{} { + var merged []interface{} + + if tc1.IsArray() { + for _, tc := range tc1.Array() { + merged = append(merged, tc.Value()) + } + } + if tc2.IsArray() { + for _, tc := range tc2.Array() { + merged = append(merged, tc.Value()) + } + } + + return merged +} diff --git a/internal/translator/kiro/common/message_merge_test.go b/internal/translator/kiro/common/message_merge_test.go new file mode 100644 index 0000000000..a9cb7a28ec --- /dev/null +++ b/internal/translator/kiro/common/message_merge_test.go @@ -0,0 +1,106 @@ +package common + +import ( + "strings" + "testing" + + "github.com/tidwall/gjson" +) + +func parseMessages(t *testing.T, raw string) []gjson.Result { + t.Helper() + parsed := gjson.Parse(raw) + if !parsed.IsArray() { + t.Fatalf("expected JSON array, got: %s", raw) + } + return parsed.Array() +} + +func TestMergeAdjacentMessages_AssistantMergePreservesToolCalls(t *testing.T) { + messages := parseMessages(t, `[ + {"role":"assistant","content":"part1"}, + { + "role":"assistant", + "content":"part2", + "tool_calls":[ + { + "id":"call_1", + "type":"function", + "function":{"name":"Read","arguments":"{}"} + } + ] + }, + {"role":"tool","tool_call_id":"call_1","content":"ok"} + ]`) + + merged := MergeAdjacentMessages(messages) + if len(merged) != 2 { + t.Fatalf("expected 2 messages after merge, got %d", len(merged)) + } + + assistant := merged[0] + if assistant.Get("role").String() != "assistant" { + t.Fatalf("expected first message role assistant, got %q", assistant.Get("role").String()) + } + + toolCalls := assistant.Get("tool_calls") + if !toolCalls.IsArray() || len(toolCalls.Array()) != 1 { + t.Fatalf("expected assistant.tool_calls length 1, got: %s", toolCalls.Raw) + } + if toolCalls.Array()[0].Get("id").String() != "call_1" { + t.Fatalf("expected tool call id call_1, got %q", toolCalls.Array()[0].Get("id").String()) + } + + contentRaw := assistant.Get("content").Raw + if 
!strings.Contains(contentRaw, "part1") || !strings.Contains(contentRaw, "part2") { + t.Fatalf("expected merged content to contain both parts, got: %s", contentRaw) + } + + if merged[1].Get("role").String() != "tool" { + t.Fatalf("expected second message role tool, got %q", merged[1].Get("role").String()) + } +} + +func TestMergeAdjacentMessages_AssistantMergeCombinesMultipleToolCalls(t *testing.T) { + messages := parseMessages(t, `[ + { + "role":"assistant", + "content":"first", + "tool_calls":[ + {"id":"call_1","type":"function","function":{"name":"Read","arguments":"{}"}} + ] + }, + { + "role":"assistant", + "content":"second", + "tool_calls":[ + {"id":"call_2","type":"function","function":{"name":"Write","arguments":"{}"}} + ] + } + ]`) + + merged := MergeAdjacentMessages(messages) + if len(merged) != 1 { + t.Fatalf("expected 1 message after merge, got %d", len(merged)) + } + + toolCalls := merged[0].Get("tool_calls").Array() + if len(toolCalls) != 2 { + t.Fatalf("expected 2 merged tool calls, got %d", len(toolCalls)) + } + if toolCalls[0].Get("id").String() != "call_1" || toolCalls[1].Get("id").String() != "call_2" { + t.Fatalf("unexpected merged tool call ids: %q, %q", toolCalls[0].Get("id").String(), toolCalls[1].Get("id").String()) + } +} + +func TestMergeAdjacentMessages_ToolMessagesRemainUnmerged(t *testing.T) { + messages := parseMessages(t, `[ + {"role":"tool","tool_call_id":"call_1","content":"r1"}, + {"role":"tool","tool_call_id":"call_2","content":"r2"} + ]`) + + merged := MergeAdjacentMessages(messages) + if len(merged) != 2 { + t.Fatalf("expected tool messages to remain separate, got %d", len(merged)) + } +} diff --git a/internal/translator/kiro/common/utils.go b/internal/translator/kiro/common/utils.go new file mode 100644 index 0000000000..4c7c734085 --- /dev/null +++ b/internal/translator/kiro/common/utils.go @@ -0,0 +1,16 @@ +// Package common provides shared constants and utilities for Kiro translator. 
+package common + +// GetString safely extracts a string from a map. +// Returns empty string if the key doesn't exist or the value is not a string. +func GetString(m map[string]interface{}, key string) string { + if v, ok := m[key].(string); ok { + return v + } + return "" +} + +// GetStringValue is an alias for GetString for backward compatibility. +func GetStringValue(m map[string]interface{}, key string) string { + return GetString(m, key) +} diff --git a/internal/translator/kiro/openai/init.go b/internal/translator/kiro/openai/init.go new file mode 100644 index 0000000000..d26ae3031a --- /dev/null +++ b/internal/translator/kiro/openai/init.go @@ -0,0 +1,20 @@ +// Package openai provides translation between OpenAI Chat Completions and Kiro formats. +package openai + +import ( + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator" +) + +func init() { + translator.Register( + OpenAI, // source format + Kiro, // target format + ConvertOpenAIRequestToKiro, + interfaces.TranslateResponse{ + Stream: ConvertKiroStreamToOpenAI, + NonStream: ConvertKiroNonStreamToOpenAI, + }, + ) +} diff --git a/internal/translator/kiro/openai/kiro_openai.go b/internal/translator/kiro/openai/kiro_openai.go new file mode 100644 index 0000000000..60d44966fa --- /dev/null +++ b/internal/translator/kiro/openai/kiro_openai.go @@ -0,0 +1,371 @@ +// Package openai provides translation between OpenAI Chat Completions and Kiro formats. +// This package enables direct OpenAI → Kiro translation, bypassing the Claude intermediate layer. +// +// The Kiro executor generates Claude-compatible SSE format internally, so the streaming response +// translation converts from Claude SSE format to OpenAI SSE format. 
+package openai + +import ( + "bytes" + "context" + "encoding/json" + "strings" + + kirocommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/common" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" +) + +// ConvertKiroStreamToOpenAI converts Kiro streaming response to OpenAI format. +// The Kiro executor emits Claude-compatible SSE events, so this function translates +// from Claude SSE format to OpenAI SSE format. +// +// Claude SSE format: +// - event: message_start\ndata: {...} +// - event: content_block_start\ndata: {...} +// - event: content_block_delta\ndata: {...} +// - event: content_block_stop\ndata: {...} +// - event: message_delta\ndata: {...} +// - event: message_stop\ndata: {...} +// +// OpenAI SSE format: +// - data: {"id":"...","object":"chat.completion.chunk",...} +// - data: [DONE] +func ConvertKiroStreamToOpenAI(ctx context.Context, model string, originalRequest, request, rawResponse []byte, param *any) [][]byte { + // Initialize state if needed + if *param == nil { + *param = NewOpenAIStreamState(model) + } + state := (*param).(*OpenAIStreamState) + + // Parse the Claude SSE event + responseStr := string(rawResponse) + + // Handle raw event format (event: xxx\ndata: {...}) + var eventType string + var eventData string + + if strings.HasPrefix(responseStr, "event:") { + // Parse event type and data + lines := strings.SplitN(responseStr, "\n", 2) + if len(lines) >= 1 { + eventType = strings.TrimSpace(strings.TrimPrefix(lines[0], "event:")) + } + if len(lines) >= 2 && strings.HasPrefix(lines[1], "data:") { + eventData = strings.TrimSpace(strings.TrimPrefix(lines[1], "data:")) + } + } else if strings.HasPrefix(responseStr, "data:") { + // Just data line + eventData = strings.TrimSpace(strings.TrimPrefix(responseStr, "data:")) + } else { + // Try to parse as raw JSON + eventData = strings.TrimSpace(responseStr) + } + + if eventData == "" { + return 
[][]byte{} + } + + // Parse the event data as JSON + eventJSON := gjson.Parse(eventData) + if !eventJSON.Exists() { + return [][]byte{} + } + + // Determine event type from JSON if not already set + if eventType == "" { + eventType = eventJSON.Get("type").String() + } + + var results [][]byte + + switch eventType { + case "message_start": + // Send first chunk with role + firstChunk := BuildOpenAISSEFirstChunk(state) + results = append(results, []byte(firstChunk)) + + case "content_block_start": + // Check block type + blockType := eventJSON.Get("content_block.type").String() + switch blockType { + case "text": + // Text block starting - nothing to emit yet + case "thinking": + // Thinking block starting - nothing to emit yet for OpenAI + case "tool_use": + // Tool use block starting + toolUseID := eventJSON.Get("content_block.id").String() + toolName := eventJSON.Get("content_block.name").String() + chunk := BuildOpenAISSEToolCallStart(state, toolUseID, toolName) + results = append(results, []byte(chunk)) + state.ToolCallIndex++ + } + + case "content_block_delta": + deltaType := eventJSON.Get("delta.type").String() + switch deltaType { + case "text_delta": + textDelta := eventJSON.Get("delta.text").String() + if textDelta != "" { + chunk := BuildOpenAISSETextDelta(state, textDelta) + results = append(results, []byte(chunk)) + } + case "thinking_delta": + // Convert thinking to reasoning_content for o1-style compatibility + thinkingDelta := eventJSON.Get("delta.thinking").String() + if thinkingDelta != "" { + chunk := BuildOpenAISSEReasoningDelta(state, thinkingDelta) + results = append(results, []byte(chunk)) + } + case "input_json_delta": + // Tool call arguments delta + partialJSON := eventJSON.Get("delta.partial_json").String() + if partialJSON != "" { + // Get the tool index from content block index + blockIndex := int(eventJSON.Get("index").Int()) + chunk := BuildOpenAISSEToolCallArgumentsDelta(state, partialJSON, blockIndex-1) // Adjust for 0-based tool 
index + results = append(results, []byte(chunk)) + } + } + + case "content_block_stop": + // Content block ended - nothing to emit for OpenAI + + case "message_delta": + // Message delta with stop_reason + stopReason := eventJSON.Get("delta.stop_reason").String() + finishReason := mapKiroStopReasonToOpenAI(stopReason) + if finishReason != "" { + chunk := BuildOpenAISSEFinish(state, finishReason) + results = append(results, []byte(chunk)) + } + + // Extract usage if present + if eventJSON.Get("usage").Exists() { + inputTokens := eventJSON.Get("usage.input_tokens").Int() + outputTokens := eventJSON.Get("usage.output_tokens").Int() + usageInfo := usage.Detail{ + InputTokens: inputTokens, + OutputTokens: outputTokens, + TotalTokens: inputTokens + outputTokens, + } + chunk := BuildOpenAISSEUsage(state, usageInfo) + results = append(results, []byte(chunk)) + } + + case "message_stop": + // Final event - do NOT emit [DONE] here + // The handler layer (openai_handlers.go) will send [DONE] when the stream closes + // Emitting [DONE] here would cause duplicate [DONE] markers + + case "ping": + // Ping event with usage - optionally emit usage chunk + if eventJSON.Get("usage").Exists() { + inputTokens := eventJSON.Get("usage.input_tokens").Int() + outputTokens := eventJSON.Get("usage.output_tokens").Int() + usageInfo := usage.Detail{ + InputTokens: inputTokens, + OutputTokens: outputTokens, + TotalTokens: inputTokens + outputTokens, + } + chunk := BuildOpenAISSEUsage(state, usageInfo) + results = append(results, []byte(chunk)) + } + } + + return results +} + +// ConvertKiroNonStreamToOpenAI converts Kiro non-streaming response to OpenAI format. +// The Kiro executor returns Claude-compatible JSON responses, so this function translates +// from Claude format to OpenAI format. 
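+//
+// Sketch of the translation with heavily abbreviated, illustrative payloads:
+//
+//	in:  {"content":[{"type":"text","text":"Hi"}],"stop_reason":"end_turn",
+//	      "usage":{"input_tokens":10,"output_tokens":2}}
+//	out: an OpenAI chat.completion object carrying message content "Hi", the
+//	      mapped finish_reason, and usage totalling 12 tokens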
+func ConvertKiroNonStreamToOpenAI(ctx context.Context, model string, originalRequest, request, rawResponse []byte, param *any) []byte { + // Parse the Claude-format response + response := gjson.ParseBytes(rawResponse) + + // Extract content + var content string + var reasoningContent string + var toolUses []KiroToolUse + var stopReason string + + // Get stop_reason + stopReason = response.Get("stop_reason").String() + + // Process content blocks + contentBlocks := response.Get("content") + if contentBlocks.IsArray() { + for _, block := range contentBlocks.Array() { + blockType := block.Get("type").String() + switch blockType { + case "text": + content += block.Get("text").String() + case "thinking": + // Convert thinking blocks to reasoning_content for OpenAI format + reasoningContent += block.Get("thinking").String() + case "tool_use": + toolUseID := block.Get("id").String() + toolName := block.Get("name").String() + toolInput := block.Get("input") + + var inputMap map[string]interface{} + if toolInput.IsObject() { + inputMap = make(map[string]interface{}) + toolInput.ForEach(func(key, value gjson.Result) bool { + inputMap[key.String()] = value.Value() + return true + }) + } + + toolUses = append(toolUses, KiroToolUse{ + ToolUseID: toolUseID, + Name: toolName, + Input: inputMap, + }) + } + } + } + + // Extract usage + usageInfo := usage.Detail{ + InputTokens: response.Get("usage.input_tokens").Int(), + OutputTokens: response.Get("usage.output_tokens").Int(), + } + usageInfo.TotalTokens = usageInfo.InputTokens + usageInfo.OutputTokens + + // Build OpenAI response with reasoning_content support + openaiResponse := BuildOpenAIResponseWithReasoning(content, reasoningContent, toolUses, model, usageInfo, stopReason) + return openaiResponse +} + +// ParseClaudeEvent parses a Claude SSE event and returns the event type and data +func ParseClaudeEvent(rawEvent []byte) (eventType string, eventData []byte) { + lines := bytes.Split(rawEvent, []byte("\n")) + for _, line := 
range lines { + line = bytes.TrimSpace(line) + if bytes.HasPrefix(line, []byte("event:")) { + eventType = string(bytes.TrimSpace(bytes.TrimPrefix(line, []byte("event:")))) + } else if bytes.HasPrefix(line, []byte("data:")) { + eventData = bytes.TrimSpace(bytes.TrimPrefix(line, []byte("data:"))) + } + } + return eventType, eventData +} + +// ExtractThinkingFromContent parses content to extract thinking blocks. +// Returns cleaned content (without thinking tags) and whether thinking was found. +func ExtractThinkingFromContent(content string) (string, string, bool) { + if !strings.Contains(content, kirocommon.ThinkingStartTag) { + return content, "", false + } + + var cleanedContent strings.Builder + var thinkingContent strings.Builder + hasThinking := false + remaining := content + + for len(remaining) > 0 { + startIdx := strings.Index(remaining, kirocommon.ThinkingStartTag) + if startIdx == -1 { + cleanedContent.WriteString(remaining) + break + } + + // Add content before thinking tag + cleanedContent.WriteString(remaining[:startIdx]) + + // Move past opening tag + remaining = remaining[startIdx+len(kirocommon.ThinkingStartTag):] + + // Find closing tag + endIdx := strings.Index(remaining, kirocommon.ThinkingEndTag) + if endIdx == -1 { + // No closing tag - treat rest as thinking + thinkingContent.WriteString(remaining) + hasThinking = true + break + } + + // Extract thinking content + thinkingContent.WriteString(remaining[:endIdx]) + hasThinking = true + remaining = remaining[endIdx+len(kirocommon.ThinkingEndTag):] + } + + return strings.TrimSpace(cleanedContent.String()), strings.TrimSpace(thinkingContent.String()), hasThinking +} + +// ConvertOpenAIToolsToKiroFormat is a helper that converts OpenAI tools format to Kiro format +func ConvertOpenAIToolsToKiroFormat(tools []map[string]interface{}) []KiroToolWrapper { + var kiroTools []KiroToolWrapper + + for _, tool := range tools { + toolType, _ := tool["type"].(string) + if toolType != "function" { + continue + } + 
+ fn, ok := tool["function"].(map[string]interface{}) + if !ok { + continue + } + + name := kirocommon.GetString(fn, "name") + description := kirocommon.GetString(fn, "description") + parameters := ensureKiroInputSchema(fn["parameters"]) + + if name == "" { + continue + } + + if description == "" { + description = "Tool: " + name + } + + kiroTools = append(kiroTools, KiroToolWrapper{ + ToolSpecification: KiroToolSpecification{ + Name: name, + Description: description, + InputSchema: KiroInputSchema{JSON: parameters}, + }, + }) + } + + return kiroTools +} + +// OpenAIStreamParams holds parameters for OpenAI streaming conversion +type OpenAIStreamParams struct { + State *OpenAIStreamState + ThinkingState *ThinkingTagState + ToolCallsEmitted map[string]bool +} + +// NewOpenAIStreamParams creates new streaming parameters +func NewOpenAIStreamParams(model string) *OpenAIStreamParams { + return &OpenAIStreamParams{ + State: NewOpenAIStreamState(model), + ThinkingState: NewThinkingTagState(), + ToolCallsEmitted: make(map[string]bool), + } +} + +// ConvertClaudeToolUseToOpenAI converts a Claude tool_use block to OpenAI tool_calls format +func ConvertClaudeToolUseToOpenAI(toolUseID, toolName string, input map[string]interface{}) map[string]interface{} { + inputJSON, _ := json.Marshal(input) + return map[string]interface{}{ + "id": toolUseID, + "type": "function", + "function": map[string]interface{}{ + "name": toolName, + "arguments": string(inputJSON), + }, + } +} + +// LogStreamEvent logs a streaming event for debugging +func LogStreamEvent(eventType, data string) { + log.Debugf("kiro-openai: stream event type=%s, data_len=%d", eventType, len(data)) +} diff --git a/internal/translator/kiro/openai/kiro_openai_request.go b/internal/translator/kiro/openai/kiro_openai_request.go new file mode 100644 index 0000000000..f790582482 --- /dev/null +++ b/internal/translator/kiro/openai/kiro_openai_request.go @@ -0,0 +1,1009 @@ +// Package openai provides request translation from 
OpenAI Chat Completions to Kiro format.
+// It handles parsing and transforming OpenAI API requests into the Kiro/Amazon Q API format,
+// extracting model information, system instructions, message contents, and tool declarations.
+package openai
+
+import (
+ "encoding/json"
+ "fmt"
+ "net/http"
+ "strings"
+ "time"
+ "unicode/utf8"
+
+ "github.com/google/uuid"
+ kiroclaude "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/claude"
+ kirocommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/kiro/common"
+ log "github.com/sirupsen/logrus"
+ "github.com/tidwall/gjson"
+)
+
+// Kiro API request structs, mirroring the kiroclaude package layout.
+
+// KiroPayload is the top-level request structure for the Kiro API.
+type KiroPayload struct {
+ ConversationState KiroConversationState `json:"conversationState"`
+ ProfileArn string `json:"profileArn,omitempty"`
+ InferenceConfig *KiroInferenceConfig `json:"inferenceConfig,omitempty"`
+}
+
+// KiroInferenceConfig contains inference parameters for the Kiro API.
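+// A fully populated config marshals to JSON such as (values illustrative):
+//
+//	{"maxTokens":32000,"temperature":0.7,"topP":0.9}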
+type KiroInferenceConfig struct { + MaxTokens int `json:"maxTokens,omitempty"` + Temperature float64 `json:"temperature,omitempty"` + TopP float64 `json:"topP,omitempty"` +} + +// KiroConversationState holds the conversation context +type KiroConversationState struct { + AgentContinuationID string `json:"agentContinuationId,omitempty"` + AgentTaskType string `json:"agentTaskType,omitempty"` + ChatTriggerType string `json:"chatTriggerType"` // Required: "MANUAL" + ConversationID string `json:"conversationId"` + CurrentMessage KiroCurrentMessage `json:"currentMessage"` + History []KiroHistoryMessage `json:"history,omitempty"` +} + +// KiroCurrentMessage wraps the current user message +type KiroCurrentMessage struct { + UserInputMessage KiroUserInputMessage `json:"userInputMessage"` +} + +// KiroHistoryMessage represents a message in the conversation history +type KiroHistoryMessage struct { + UserInputMessage *KiroUserInputMessage `json:"userInputMessage,omitempty"` + AssistantResponseMessage *KiroAssistantResponseMessage `json:"assistantResponseMessage,omitempty"` +} + +// KiroImage represents an image in Kiro API format +type KiroImage struct { + Format string `json:"format"` + Source KiroImageSource `json:"source"` +} + +// KiroImageSource contains the image data +type KiroImageSource struct { + Bytes string `json:"bytes"` // base64 encoded image data +} + +// KiroUserInputMessage represents a user message +type KiroUserInputMessage struct { + Content string `json:"content"` + ModelID string `json:"modelId"` + Origin string `json:"origin"` + Images []KiroImage `json:"images,omitempty"` + UserInputMessageContext *KiroUserInputMessageContext `json:"userInputMessageContext,omitempty"` +} + +// KiroUserInputMessageContext contains tool-related context +type KiroUserInputMessageContext struct { + ToolResults []KiroToolResult `json:"toolResults,omitempty"` + Tools []KiroToolWrapper `json:"tools,omitempty"` +} + +// KiroToolResult represents a tool execution result 
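+// and serializes to JSON such as (ID and values illustrative):
+//
+//	{"content":[{"text":"42"}],"status":"success","toolUseId":"call_123"}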
+type KiroToolResult struct { + Content []KiroTextContent `json:"content"` + Status string `json:"status"` + ToolUseID string `json:"toolUseId"` +} + +// KiroTextContent represents text content +type KiroTextContent struct { + Text string `json:"text"` +} + +// KiroToolWrapper wraps a tool specification +type KiroToolWrapper struct { + ToolSpecification KiroToolSpecification `json:"toolSpecification"` +} + +// KiroToolSpecification defines a tool's schema +type KiroToolSpecification struct { + Name string `json:"name"` + Description string `json:"description"` + InputSchema KiroInputSchema `json:"inputSchema"` +} + +// KiroInputSchema wraps the JSON schema for tool input +type KiroInputSchema struct { + JSON interface{} `json:"json"` +} + +// KiroAssistantResponseMessage represents an assistant message +type KiroAssistantResponseMessage struct { + Content string `json:"content"` + ToolUses []KiroToolUse `json:"toolUses,omitempty"` +} + +// KiroToolUse represents a tool invocation by the assistant +type KiroToolUse struct { + ToolUseID string `json:"toolUseId"` + Name string `json:"name"` + Input map[string]interface{} `json:"input"` +} + +// ConvertOpenAIRequestToKiro converts an OpenAI Chat Completions request to Kiro format. +// This is the main entry point for request translation. +// Note: The actual payload building happens in the executor, this just passes through +// the OpenAI format which will be converted by BuildKiroPayloadFromOpenAI. +func ConvertOpenAIRequestToKiro(modelName string, inputRawJSON []byte, stream bool) []byte { + // Pass through the OpenAI format - actual conversion happens in BuildKiroPayloadFromOpenAI + return inputRawJSON +} + +// BuildKiroPayloadFromOpenAI constructs the Kiro API request payload from OpenAI format. +// Supports tool calling - tools are passed via userInputMessageContext. +// origin parameter determines which quota to use: "CLI" for Amazon Q, "AI_EDITOR" for Kiro IDE. 
+// isAgentic parameter enables chunked write optimization prompt for -agentic model variants. +// isChatOnly parameter disables tool calling for -chat model variants (pure conversation mode). +// headers parameter allows checking Anthropic-Beta header for thinking mode detection. +// metadata parameter is kept for API compatibility but no longer used for thinking configuration. +// Returns the payload and a boolean indicating whether thinking mode was injected. +func BuildKiroPayloadFromOpenAI(openaiBody []byte, modelID, profileArn, origin string, isAgentic, isChatOnly bool, headers http.Header, metadata map[string]any) ([]byte, bool) { + // Extract max_tokens for potential use in inferenceConfig + // Handle -1 as "use maximum" (Kiro max output is ~32000 tokens) + const kiroMaxOutputTokens = 32000 + var maxTokens int64 + if mt := gjson.GetBytes(openaiBody, "max_tokens"); mt.Exists() { + maxTokens = mt.Int() + if maxTokens == -1 { + maxTokens = kiroMaxOutputTokens + log.Debugf("kiro-openai: max_tokens=-1 converted to %d", kiroMaxOutputTokens) + } + } + + // Extract temperature if specified + var temperature float64 + var hasTemperature bool + if temp := gjson.GetBytes(openaiBody, "temperature"); temp.Exists() { + temperature = temp.Float() + hasTemperature = true + } + + // Extract top_p if specified + var topP float64 + var hasTopP bool + if tp := gjson.GetBytes(openaiBody, "top_p"); tp.Exists() { + topP = tp.Float() + hasTopP = true + log.Debugf("kiro-openai: extracted top_p: %.2f", topP) + } + + // Normalize origin value for Kiro API compatibility + origin = normalizeOrigin(origin) + log.Debugf("kiro-openai: normalized origin value: %s", origin) + + messages := gjson.GetBytes(openaiBody, "messages") + + // For chat-only mode, don't include tools + var tools gjson.Result + if !isChatOnly { + tools = gjson.GetBytes(openaiBody, "tools") + } + + // Extract system prompt from messages + systemPrompt := extractSystemPromptFromOpenAI(messages) + + // Inject timestamp 
context + timestamp := time.Now().Format("2006-01-02 15:04:05 MST") + timestampContext := fmt.Sprintf("[Context: Current time is %s]", timestamp) + if systemPrompt != "" { + systemPrompt = timestampContext + "\n\n" + systemPrompt + } else { + systemPrompt = timestampContext + } + log.Debugf("kiro-openai: injected timestamp context: %s", timestamp) + + // Inject agentic optimization prompt for -agentic model variants + if isAgentic { + if systemPrompt != "" { + systemPrompt += "\n" + } + systemPrompt += kirocommon.KiroAgenticSystemPrompt + } + + // Handle tool_choice parameter - Kiro doesn't support it natively, so we inject system prompt hints + // OpenAI tool_choice values: "none", "auto", "required", or {"type":"function","function":{"name":"..."}} + toolChoiceHint := extractToolChoiceHint(openaiBody) + if toolChoiceHint != "" { + if systemPrompt != "" { + systemPrompt += "\n" + } + systemPrompt += toolChoiceHint + log.Debugf("kiro-openai: injected tool_choice hint into system prompt") + } + + // Handle response_format parameter - Kiro doesn't support it natively, so we inject system prompt hints + // OpenAI response_format: {"type": "json_object"} or {"type": "json_schema", "json_schema": {...}} + responseFormatHint := extractResponseFormatHint(openaiBody) + if responseFormatHint != "" { + if systemPrompt != "" { + systemPrompt += "\n" + } + systemPrompt += responseFormatHint + log.Debugf("kiro-openai: injected response_format hint into system prompt") + } + + // Check for thinking mode + // Supports OpenAI reasoning_effort parameter, model name hints, and Anthropic-Beta header + thinkingEnabled := checkThinkingModeFromOpenAIWithHeaders(openaiBody, headers) + + // Convert OpenAI tools to Kiro format + kiroTools := convertOpenAIToolsToKiro(tools) + + // Thinking mode implementation: + // Kiro API supports official thinking/reasoning mode via tag. 
+ // When set to "enabled", Kiro returns reasoning content as official reasoningContentEvent + // rather than inline tags in assistantResponseEvent. + // Use a conservative thinking budget to reduce latency/cost spikes in long sessions. + if thinkingEnabled { + thinkingHint := `enabled +16000` + if systemPrompt != "" { + systemPrompt = thinkingHint + "\n\n" + systemPrompt + } else { + systemPrompt = thinkingHint + } + log.Infof("kiro-openai: injected thinking prompt (official mode), has_tools: %v", len(kiroTools) > 0) + } + + // Process messages and build history + history, currentUserMsg, currentToolResults := processOpenAIMessages(messages, modelID, origin) + + // Build content with system prompt + if currentUserMsg != nil { + currentUserMsg.Content = buildFinalContent(currentUserMsg.Content, systemPrompt, currentToolResults) + + // Deduplicate currentToolResults + currentToolResults = deduplicateToolResults(currentToolResults) + + // Build userInputMessageContext with tools and tool results + if len(kiroTools) > 0 || len(currentToolResults) > 0 { + currentUserMsg.UserInputMessageContext = &KiroUserInputMessageContext{ + Tools: kiroTools, + ToolResults: currentToolResults, + } + } + } + + // Build payload + var currentMessage KiroCurrentMessage + if currentUserMsg != nil { + currentMessage = KiroCurrentMessage{UserInputMessage: *currentUserMsg} + } else { + fallbackContent := "" + if systemPrompt != "" { + fallbackContent = "--- SYSTEM PROMPT ---\n" + systemPrompt + "\n--- END SYSTEM PROMPT ---\n" + } + currentMessage = KiroCurrentMessage{UserInputMessage: KiroUserInputMessage{ + Content: fallbackContent, + ModelID: modelID, + Origin: origin, + }} + } + + // Build inferenceConfig if we have any inference parameters + // Note: Kiro API doesn't actually use max_tokens for thinking budget + var inferenceConfig *KiroInferenceConfig + if maxTokens > 0 || hasTemperature || hasTopP { + inferenceConfig = &KiroInferenceConfig{} + if maxTokens > 0 { + 
inferenceConfig.MaxTokens = int(maxTokens) + } + if hasTemperature { + inferenceConfig.Temperature = temperature + } + if hasTopP { + inferenceConfig.TopP = topP + } + } + + // Session IDs: extract from messages[].additional_kwargs (LangChain format) or random + conversationID := extractMetadataFromMessages(messages, "conversationId") + continuationID := extractMetadataFromMessages(messages, "continuationId") + if conversationID == "" { + conversationID = uuid.New().String() + } + + payload := KiroPayload{ + ConversationState: KiroConversationState{ + AgentTaskType: "vibe", + ChatTriggerType: "MANUAL", + ConversationID: conversationID, + CurrentMessage: currentMessage, + History: history, + }, + ProfileArn: profileArn, + InferenceConfig: inferenceConfig, + } + + // Only set AgentContinuationID if client provided + if continuationID != "" { + payload.ConversationState.AgentContinuationID = continuationID + } + + result, err := json.Marshal(payload) + if err != nil { + log.Debugf("kiro-openai: failed to marshal payload: %v", err) + return nil, false + } + + return result, thinkingEnabled +} + +// normalizeOrigin normalizes origin value for Kiro API compatibility +func normalizeOrigin(origin string) string { + switch origin { + case "KIRO_CLI": + return "CLI" + case "KIRO_AI_EDITOR": + return "AI_EDITOR" + case "AMAZON_Q": + return "CLI" + case "KIRO_IDE": + return "AI_EDITOR" + default: + return origin + } +} + +// extractMetadataFromMessages extracts metadata from messages[].additional_kwargs (LangChain format). +// Searches from the last message backwards, returns empty string if not found. +func extractMetadataFromMessages(messages gjson.Result, key string) string { + arr := messages.Array() + for i := len(arr) - 1; i >= 0; i-- { + if val := arr[i].Get("additional_kwargs." 
+ key); val.Exists() && val.String() != "" { + return val.String() + } + } + return "" +} + +// extractSystemPromptFromOpenAI extracts system prompt from OpenAI messages +func extractSystemPromptFromOpenAI(messages gjson.Result) string { + if !messages.IsArray() { + return "" + } + + var systemParts []string + for _, msg := range messages.Array() { + if msg.Get("role").String() == "system" { + content := msg.Get("content") + if content.Type == gjson.String { + systemParts = append(systemParts, content.String()) + } else if content.IsArray() { + // Handle array content format + for _, part := range content.Array() { + if part.Get("type").String() == "text" { + systemParts = append(systemParts, part.Get("text").String()) + } + } + } + } + } + + return strings.Join(systemParts, "\n") +} + +// shortenToolNameIfNeeded shortens tool names that exceed 64 characters. +// MCP tools often have long names like "mcp__server-name__tool-name". +// This preserves the "mcp__" prefix and last segment when possible. 
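+//
+// Illustrative example: a name over the 64-character limit such as
+// "mcp__<very-long-server-name>__search_files" collapses to
+// "mcp__search_files"; non-MCP names are simply truncated to 64 characters.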
+func shortenToolNameIfNeeded(name string) string { + const limit = 64 + if len(name) <= limit { + return name + } + // For MCP tools, try to preserve prefix and last segment + if strings.HasPrefix(name, "mcp__") { + idx := strings.LastIndex(name, "__") + if idx > 0 { + cand := "mcp__" + name[idx+2:] + if len(cand) > limit { + return cand[:limit] + } + return cand + } + } + return name[:limit] +} + +func ensureKiroInputSchema(parameters interface{}) interface{} { + if parameters != nil { + return parameters + } + return map[string]interface{}{ + "type": "object", + "properties": map[string]interface{}{}, + } +} + +// convertOpenAIToolsToKiro converts OpenAI tools to Kiro format +func convertOpenAIToolsToKiro(tools gjson.Result) []KiroToolWrapper { + var kiroTools []KiroToolWrapper + if !tools.IsArray() { + return kiroTools + } + + for _, tool := range tools.Array() { + // OpenAI tools have type "function" with function definition inside + if tool.Get("type").String() != "function" { + continue + } + + fn := tool.Get("function") + if !fn.Exists() { + continue + } + + name := fn.Get("name").String() + description := fn.Get("description").String() + parametersResult := fn.Get("parameters") + var parameters interface{} + if parametersResult.Exists() && parametersResult.Type != gjson.Null { + parameters = parametersResult.Value() + } + parameters = ensureKiroInputSchema(parameters) + + // Shorten tool name if it exceeds 64 characters (common with MCP tools) + originalName := name + name = shortenToolNameIfNeeded(name) + if name != originalName { + log.Debugf("kiro-openai: shortened tool name from '%s' to '%s'", originalName, name) + } + + // CRITICAL FIX: Kiro API requires non-empty description + if strings.TrimSpace(description) == "" { + description = fmt.Sprintf("Tool: %s", name) + log.Debugf("kiro-openai: tool '%s' has empty description, using default: %s", name, description) + } + + // Truncate long descriptions + if len(description) > kirocommon.KiroMaxToolDescLen 
{ + truncLen := kirocommon.KiroMaxToolDescLen - 30 + for truncLen > 0 && !utf8.RuneStart(description[truncLen]) { + truncLen-- + } + description = description[:truncLen] + "... (description truncated)" + } + + kiroTools = append(kiroTools, KiroToolWrapper{ + ToolSpecification: KiroToolSpecification{ + Name: name, + Description: description, + InputSchema: KiroInputSchema{JSON: parameters}, + }, + }) + } + + return kiroTools +} + +// processOpenAIMessages processes OpenAI messages and builds Kiro history +func processOpenAIMessages(messages gjson.Result, modelID, origin string) ([]KiroHistoryMessage, *KiroUserInputMessage, []KiroToolResult) { + var history []KiroHistoryMessage + var currentUserMsg *KiroUserInputMessage + var currentToolResults []KiroToolResult + + if !messages.IsArray() { + return history, currentUserMsg, currentToolResults + } + + // Merge adjacent messages with the same role + messagesArray := kirocommon.MergeAdjacentMessages(messages.Array()) + + // Track pending tool results that should be attached to the next user message + // This is critical for LiteLLM-translated requests where tool results appear + // as separate "tool" role messages between assistant and user messages + var pendingToolResults []KiroToolResult + + for i, msg := range messagesArray { + role := msg.Get("role").String() + isLastMessage := i == len(messagesArray)-1 + + switch role { + case "system": + // System messages are handled separately via extractSystemPromptFromOpenAI + continue + + case "user": + userMsg, toolResults := buildUserMessageFromOpenAI(msg, modelID, origin) + // Merge any pending tool results from preceding "tool" role messages + toolResults = append(pendingToolResults, toolResults...) 
+ pendingToolResults = nil // Reset pending tool results + + if isLastMessage { + currentUserMsg = &userMsg + currentToolResults = toolResults + } else { + // CRITICAL: Kiro API requires content to be non-empty for history messages + if strings.TrimSpace(userMsg.Content) == "" { + if len(toolResults) > 0 { + userMsg.Content = "Tool results provided." + } else { + userMsg.Content = "Continue" + } + } + // For history messages, embed tool results in context + if len(toolResults) > 0 { + userMsg.UserInputMessageContext = &KiroUserInputMessageContext{ + ToolResults: toolResults, + } + } + history = append(history, KiroHistoryMessage{ + UserInputMessage: &userMsg, + }) + } + + case "assistant": + assistantMsg := buildAssistantMessageFromOpenAI(msg) + + // If there are pending tool results, we need to insert a synthetic user message + // before this assistant message to maintain proper conversation structure + if len(pendingToolResults) > 0 { + syntheticUserMsg := KiroUserInputMessage{ + Content: "Tool results provided.", + ModelID: modelID, + Origin: origin, + UserInputMessageContext: &KiroUserInputMessageContext{ + ToolResults: pendingToolResults, + }, + } + history = append(history, KiroHistoryMessage{ + UserInputMessage: &syntheticUserMsg, + }) + pendingToolResults = nil + } + + if isLastMessage { + history = append(history, KiroHistoryMessage{ + AssistantResponseMessage: &assistantMsg, + }) + // Create a "Continue" user message as currentMessage + currentUserMsg = &KiroUserInputMessage{ + Content: "Continue", + ModelID: modelID, + Origin: origin, + } + } else { + history = append(history, KiroHistoryMessage{ + AssistantResponseMessage: &assistantMsg, + }) + } + + case "tool": + // Tool messages in OpenAI format provide results for tool_calls + // These are typically followed by user or assistant messages + // Collect them as pending and attach to the next user message + toolCallID := msg.Get("tool_call_id").String() + content := msg.Get("content").String() + + if 
toolCallID != "" { + toolResult := KiroToolResult{ + ToolUseID: toolCallID, + Content: []KiroTextContent{{Text: content}}, + Status: "success", + } + // Collect pending tool results to attach to the next user message + pendingToolResults = append(pendingToolResults, toolResult) + } + } + } + + // Handle case where tool results are at the end with no following user message + if len(pendingToolResults) > 0 { + currentToolResults = append(currentToolResults, pendingToolResults...) + // If there's no current user message, create a synthetic one for the tool results + if currentUserMsg == nil { + currentUserMsg = &KiroUserInputMessage{ + Content: "Tool results provided.", + ModelID: modelID, + Origin: origin, + } + } + } + + // Truncate history if too long to prevent Kiro API errors + history = truncateHistoryIfNeeded(history) + history, currentToolResults = filterOrphanedToolResults(history, currentToolResults) + + return history, currentUserMsg, currentToolResults +} + +const kiroMaxHistoryMessages = 50 + +func truncateHistoryIfNeeded(history []KiroHistoryMessage) []KiroHistoryMessage { + if len(history) <= kiroMaxHistoryMessages { + return history + } + + log.Debugf("kiro-openai: truncating history from %d to %d messages", len(history), kiroMaxHistoryMessages) + return history[len(history)-kiroMaxHistoryMessages:] +} + +func filterOrphanedToolResults(history []KiroHistoryMessage, currentToolResults []KiroToolResult) ([]KiroHistoryMessage, []KiroToolResult) { + // Remove tool results with no matching tool_use in retained history. + // This happens after truncation when the assistant turn that produced tool_use + // is dropped but a later user/tool_result survives. 
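+ //
+ // Illustrative scenario: truncation drops the assistant turn that emitted
+ // tool_use id "call_A" but keeps a later message carrying the tool_result
+ // for "call_A"; that orphaned result is filtered out here instead of
+ // triggering a Kiro API validation error.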
+ validToolUseIDs := make(map[string]bool) + for _, h := range history { + if h.AssistantResponseMessage == nil { + continue + } + for _, tu := range h.AssistantResponseMessage.ToolUses { + validToolUseIDs[tu.ToolUseID] = true + } + } + + for i, h := range history { + if h.UserInputMessage == nil || h.UserInputMessage.UserInputMessageContext == nil { + continue + } + ctx := h.UserInputMessage.UserInputMessageContext + if len(ctx.ToolResults) == 0 { + continue + } + + filtered := make([]KiroToolResult, 0, len(ctx.ToolResults)) + for _, tr := range ctx.ToolResults { + if validToolUseIDs[tr.ToolUseID] { + filtered = append(filtered, tr) + continue + } + log.Debugf("kiro-openai: dropping orphaned tool_result in history[%d]: toolUseId=%s (no matching tool_use)", i, tr.ToolUseID) + } + ctx.ToolResults = filtered + if len(ctx.ToolResults) == 0 && len(ctx.Tools) == 0 { + h.UserInputMessage.UserInputMessageContext = nil + } + } + + if len(currentToolResults) > 0 { + filtered := make([]KiroToolResult, 0, len(currentToolResults)) + for _, tr := range currentToolResults { + if validToolUseIDs[tr.ToolUseID] { + filtered = append(filtered, tr) + continue + } + log.Debugf("kiro-openai: dropping orphaned tool_result in currentMessage: toolUseId=%s (no matching tool_use)", tr.ToolUseID) + } + if len(filtered) != len(currentToolResults) { + log.Infof("kiro-openai: dropped %d orphaned tool_result(s) from currentMessage", len(currentToolResults)-len(filtered)) + } + currentToolResults = filtered + } + + return history, currentToolResults +} + +// buildUserMessageFromOpenAI builds a user message from OpenAI format and extracts tool results +func buildUserMessageFromOpenAI(msg gjson.Result, modelID, origin string) (KiroUserInputMessage, []KiroToolResult) { + content := msg.Get("content") + var contentBuilder strings.Builder + var toolResults []KiroToolResult + var images []KiroImage + + if content.IsArray() { + for _, part := range content.Array() { + partType := 
part.Get("type").String() + switch partType { + case "text": + contentBuilder.WriteString(part.Get("text").String()) + case "image_url": + imageURL := part.Get("image_url.url").String() + if strings.HasPrefix(imageURL, "data:") { + // Parse data URL: data:image/png;base64,xxxxx + if idx := strings.Index(imageURL, ";base64,"); idx != -1 { + mediaType := imageURL[5:idx] // Skip "data:" + data := imageURL[idx+8:] // Skip ";base64," + + format := "" + if lastSlash := strings.LastIndex(mediaType, "/"); lastSlash != -1 { + format = mediaType[lastSlash+1:] + } + + if format != "" && data != "" { + images = append(images, KiroImage{ + Format: format, + Source: KiroImageSource{ + Bytes: data, + }, + }) + } + } + } + } + } + } else if content.Type == gjson.String { + contentBuilder.WriteString(content.String()) + } + + userMsg := KiroUserInputMessage{ + Content: contentBuilder.String(), + ModelID: modelID, + Origin: origin, + } + + if len(images) > 0 { + userMsg.Images = images + } + + return userMsg, toolResults +} + +// buildAssistantMessageFromOpenAI builds an assistant message from OpenAI format +func buildAssistantMessageFromOpenAI(msg gjson.Result) KiroAssistantResponseMessage { + content := msg.Get("content") + var contentBuilder strings.Builder + var toolUses []KiroToolUse + + // Handle content + if content.Type == gjson.String { + contentBuilder.WriteString(content.String()) + } else if content.IsArray() { + for _, part := range content.Array() { + partType := part.Get("type").String() + switch partType { + case "text": + contentBuilder.WriteString(part.Get("text").String()) + case "tool_use": + // Handle tool_use in content array (Anthropic/OpenCode format) + // This is different from OpenAI's tool_calls format + toolUseID := part.Get("id").String() + toolName := part.Get("name").String() + inputData := part.Get("input") + + inputMap := make(map[string]interface{}) + if inputData.Exists() && inputData.IsObject() { + inputData.ForEach(func(key, value gjson.Result) 
bool { + inputMap[key.String()] = value.Value() + return true + }) + } + + toolUses = append(toolUses, KiroToolUse{ + ToolUseID: toolUseID, + Name: toolName, + Input: inputMap, + }) + log.Debugf("kiro-openai: extracted tool_use from content array: %s", toolName) + } + } + } + + // Handle tool_calls (OpenAI format) + toolCalls := msg.Get("tool_calls") + if toolCalls.IsArray() { + for _, tc := range toolCalls.Array() { + if tc.Get("type").String() != "function" { + continue + } + + toolUseID := tc.Get("id").String() + toolName := tc.Get("function.name").String() + toolArgs := tc.Get("function.arguments").String() + + var inputMap map[string]interface{} + if err := json.Unmarshal([]byte(toolArgs), &inputMap); err != nil { + log.Debugf("kiro-openai: failed to parse tool arguments: %v", err) + inputMap = make(map[string]interface{}) + } + + toolUses = append(toolUses, KiroToolUse{ + ToolUseID: toolUseID, + Name: toolName, + Input: inputMap, + }) + } + } + + // CRITICAL FIX: Kiro API requires non-empty content for assistant messages + // This can happen with compaction requests or error recovery scenarios + finalContent := contentBuilder.String() + if strings.TrimSpace(finalContent) == "" { + if len(toolUses) > 0 { + finalContent = kirocommon.DefaultAssistantContentWithTools + } else { + finalContent = kirocommon.DefaultAssistantContent + } + log.Debugf("kiro-openai: assistant content was empty, using default: %s", finalContent) + } + + return KiroAssistantResponseMessage{ + Content: finalContent, + ToolUses: toolUses, + } +} + +// buildFinalContent builds the final content with system prompt +func buildFinalContent(content, systemPrompt string, toolResults []KiroToolResult) string { + var contentBuilder strings.Builder + + if systemPrompt != "" { + contentBuilder.WriteString("--- SYSTEM PROMPT ---\n") + contentBuilder.WriteString(systemPrompt) + contentBuilder.WriteString("\n--- END SYSTEM PROMPT ---\n\n") + } + + contentBuilder.WriteString(content) + finalContent := 
contentBuilder.String()
+
+	// CRITICAL: Kiro API requires content to be non-empty
+	if strings.TrimSpace(finalContent) == "" {
+		if len(toolResults) > 0 {
+			finalContent = "Tool results provided."
+		} else {
+			finalContent = "Continue"
+		}
+		log.Debugf("kiro-openai: content was empty, using default: %s", finalContent)
+	}
+
+	return finalContent
+}
+
+// checkThinkingModeFromOpenAI checks if thinking mode is enabled in the OpenAI request.
+// Returns thinkingEnabled.
+// Supports:
+// - reasoning_effort parameter (low/medium/high/auto)
+// - Model name containing "thinking" or "reason"
+// - <thinking_mode> tag in system prompt (AMP/Cursor format)
+func checkThinkingModeFromOpenAI(openaiBody []byte) bool {
+	return checkThinkingModeFromOpenAIWithHeaders(openaiBody, nil)
+}
+
+// checkThinkingModeFromOpenAIWithHeaders checks if thinking mode is enabled in the OpenAI request.
+// Returns thinkingEnabled.
+// Supports:
+// - Anthropic-Beta header with interleaved-thinking (Claude CLI)
+// - reasoning_effort parameter (low/medium/high/auto)
+// - Model name containing "thinking" or "reason"
+// - <thinking_mode> tag in system prompt (AMP/Cursor format)
+func checkThinkingModeFromOpenAIWithHeaders(openaiBody []byte, headers http.Header) bool {
+	// Check Anthropic-Beta header first (Claude CLI uses this)
+	if kiroclaude.IsThinkingEnabledFromHeader(headers) {
+		log.Debugf("kiro-openai: thinking mode enabled via Anthropic-Beta header")
+		return true
+	}
+
+	// Check OpenAI format: reasoning_effort parameter
+	// Valid values: "low", "medium", "high", "auto" (not "none")
+	reasoningEffort := gjson.GetBytes(openaiBody, "reasoning_effort")
+	if reasoningEffort.Exists() {
+		effort := reasoningEffort.String()
+		if effort != "" && effort != "none" {
+			log.Debugf("kiro-openai: thinking mode enabled via reasoning_effort: %s", effort)
+			return true
+		}
+	}
+
+	// Check AMP/Cursor format: <thinking_mode>...</thinking_mode> tag in the system prompt
+	bodyStr := string(openaiBody)
+	if strings.Contains(bodyStr, "<thinking_mode>") && strings.Contains(bodyStr, "</thinking_mode>") {
+		startTag := "<thinking_mode>"
+		endTag := "</thinking_mode>"
+		startIdx := strings.Index(bodyStr, startTag)
+		if startIdx >= 0 {
+			startIdx += len(startTag)
+			endIdx := strings.Index(bodyStr[startIdx:], endTag)
+			if endIdx >= 0 {
+				thinkingMode := bodyStr[startIdx : startIdx+endIdx]
+				if thinkingMode == "interleaved" || thinkingMode == "enabled" {
+					log.Debugf("kiro-openai: thinking mode enabled via AMP/Cursor format: %s", thinkingMode)
+					return true
+				}
+			}
+		}
+	}
+
+	// Check model name for thinking hints
+	model := gjson.GetBytes(openaiBody, "model").String()
+	modelLower := strings.ToLower(model)
+	if strings.Contains(modelLower, "thinking") || strings.Contains(modelLower, "-reason") {
+		log.Debugf("kiro-openai: thinking mode enabled via model name hint: %s", model)
+		return true
+	}
+
+	log.Debugf("kiro-openai: no thinking mode detected in OpenAI request")
+	return false
+}
+
+// hasThinkingTagInBody checks if the request body already contains thinking configuration tags.
+// This is used to prevent duplicate injection when the client (e.g., AMP/Cursor) already includes thinking config.
+func hasThinkingTagInBody(body []byte) bool {
+	bodyStr := string(body)
+	return strings.Contains(bodyStr, "<thinking_mode>") || strings.Contains(bodyStr, "</thinking_mode>")
+}
+
+// extractToolChoiceHint extracts tool_choice from OpenAI request and returns a system prompt hint.
+// OpenAI tool_choice values: +// - "none": Don't use any tools +// - "auto": Model decides (default, no hint needed) +// - "required": Must use at least one tool +// - {"type":"function","function":{"name":"..."}} : Must use specific tool +func extractToolChoiceHint(openaiBody []byte) string { + toolChoice := gjson.GetBytes(openaiBody, "tool_choice") + if !toolChoice.Exists() { + return "" + } + + // Handle string values + if toolChoice.Type == gjson.String { + switch toolChoice.String() { + case "none": + // Note: When tool_choice is "none", we should ideally not pass tools at all + // But since we can't modify tool passing here, we add a strong hint + return "[INSTRUCTION: Do NOT use any tools. Respond with text only.]" + case "required": + return "[INSTRUCTION: You MUST use at least one of the available tools to respond. Do not respond with text only - always make a tool call.]" + case "auto": + // Default behavior, no hint needed + return "" + } + } + + // Handle object value: {"type":"function","function":{"name":"..."}} + if toolChoice.IsObject() { + if toolChoice.Get("type").String() == "function" { + toolName := toolChoice.Get("function.name").String() + if toolName != "" { + return fmt.Sprintf("[INSTRUCTION: You MUST use the tool named '%s' to respond. Do not use any other tool or respond with text only.]", toolName) + } + } + } + + return "" +} + +// extractResponseFormatHint extracts response_format from OpenAI request and returns a system prompt hint. 
+// OpenAI response_format values: +// - {"type": "text"}: Default, no hint needed +// - {"type": "json_object"}: Must respond with valid JSON +// - {"type": "json_schema", "json_schema": {...}}: Must respond with JSON matching schema +func extractResponseFormatHint(openaiBody []byte) string { + responseFormat := gjson.GetBytes(openaiBody, "response_format") + if !responseFormat.Exists() { + return "" + } + + formatType := responseFormat.Get("type").String() + switch formatType { + case "json_object": + return "[INSTRUCTION: You MUST respond with valid JSON only. Do not include any text before or after the JSON. Do not wrap the JSON in markdown code blocks. Output raw JSON directly.]" + case "json_schema": + // Extract schema if provided + schema := responseFormat.Get("json_schema.schema") + if schema.Exists() { + schemaStr := schema.Raw + // Truncate if too long + if len(schemaStr) > 500 { + schemaStr = schemaStr[:500] + "..." + } + return fmt.Sprintf("[INSTRUCTION: You MUST respond with valid JSON that matches this schema: %s. Do not include any text before or after the JSON. Do not wrap the JSON in markdown code blocks. Output raw JSON directly.]", schemaStr) + } + return "[INSTRUCTION: You MUST respond with valid JSON only. Do not include any text before or after the JSON. Do not wrap the JSON in markdown code blocks. 
Output raw JSON directly.]" + case "text": + // Default behavior, no hint needed + return "" + } + + return "" +} + +// deduplicateToolResults removes duplicate tool results +func deduplicateToolResults(toolResults []KiroToolResult) []KiroToolResult { + if len(toolResults) == 0 { + return toolResults + } + + seenIDs := make(map[string]bool) + unique := make([]KiroToolResult, 0, len(toolResults)) + for _, tr := range toolResults { + if !seenIDs[tr.ToolUseID] { + seenIDs[tr.ToolUseID] = true + unique = append(unique, tr) + } else { + log.Debugf("kiro-openai: skipping duplicate toolResult: %s", tr.ToolUseID) + } + } + return unique +} diff --git a/internal/translator/kiro/openai/kiro_openai_request_test.go b/internal/translator/kiro/openai/kiro_openai_request_test.go new file mode 100644 index 0000000000..22953bbc27 --- /dev/null +++ b/internal/translator/kiro/openai/kiro_openai_request_test.go @@ -0,0 +1,440 @@ +package openai + +import ( + "encoding/json" + "testing" +) + +// TestToolResultsAttachedToCurrentMessage verifies that tool results from "tool" role messages +// are properly attached to the current user message (the last message in the conversation). +// This is critical for LiteLLM-translated requests where tool results appear as separate messages. 
+func TestToolResultsAttachedToCurrentMessage(t *testing.T) { + // OpenAI format request simulating LiteLLM's translation from Anthropic format + // Sequence: user -> assistant (with tool_calls) -> tool (result) -> user + // The last user message should have the tool results attached + input := []byte(`{ + "model": "kiro-claude-opus-4-5-agentic", + "messages": [ + {"role": "user", "content": "Hello, can you read a file for me?"}, + { + "role": "assistant", + "content": "I'll read that file for you.", + "tool_calls": [ + { + "id": "call_abc123", + "type": "function", + "function": { + "name": "Read", + "arguments": "{\"file_path\": \"/tmp/test.txt\"}" + } + } + ] + }, + { + "role": "tool", + "tool_call_id": "call_abc123", + "content": "File contents: Hello World!" + }, + {"role": "user", "content": "What did the file say?"} + ] + }`) + + result, _ := BuildKiroPayloadFromOpenAI(input, "kiro-model", "", "CLI", false, false, nil, nil) + + var payload KiroPayload + if err := json.Unmarshal(result, &payload); err != nil { + t.Fatalf("Failed to unmarshal result: %v", err) + } + + // The last user message becomes currentMessage + // History should have: user (first), assistant (with tool_calls) + t.Logf("History count: %d", len(payload.ConversationState.History)) + if len(payload.ConversationState.History) != 2 { + t.Errorf("Expected 2 history entries (user + assistant), got %d", len(payload.ConversationState.History)) + } + + // Tool results should be attached to currentMessage (the last user message) + ctx := payload.ConversationState.CurrentMessage.UserInputMessage.UserInputMessageContext + if ctx == nil { + t.Fatal("Expected currentMessage to have UserInputMessageContext with tool results") + } + + if len(ctx.ToolResults) != 1 { + t.Fatalf("Expected 1 tool result in currentMessage, got %d", len(ctx.ToolResults)) + } + + tr := ctx.ToolResults[0] + if tr.ToolUseID != "call_abc123" { + t.Errorf("Expected toolUseId 'call_abc123', got '%s'", tr.ToolUseID) + } + if 
len(tr.Content) == 0 || tr.Content[0].Text != "File contents: Hello World!" { + t.Errorf("Tool result content mismatch, got: %+v", tr.Content) + } +} + +// TestToolResultsInHistoryUserMessage verifies that when there are multiple user messages +// after tool results, the tool results are attached to the correct user message in history. +func TestToolResultsInHistoryUserMessage(t *testing.T) { + // Sequence: user -> assistant (with tool_calls) -> tool (result) -> user -> assistant -> user + // The first user after tool should have tool results in history + input := []byte(`{ + "model": "kiro-claude-opus-4-5-agentic", + "messages": [ + {"role": "user", "content": "Hello"}, + { + "role": "assistant", + "content": "I'll read the file.", + "tool_calls": [ + { + "id": "call_1", + "type": "function", + "function": { + "name": "Read", + "arguments": "{}" + } + } + ] + }, + { + "role": "tool", + "tool_call_id": "call_1", + "content": "File result" + }, + {"role": "user", "content": "Thanks for the file"}, + {"role": "assistant", "content": "You're welcome"}, + {"role": "user", "content": "Bye"} + ] + }`) + + result, _ := BuildKiroPayloadFromOpenAI(input, "kiro-model", "", "CLI", false, false, nil, nil) + + var payload KiroPayload + if err := json.Unmarshal(result, &payload); err != nil { + t.Fatalf("Failed to unmarshal result: %v", err) + } + + // History should have: user, assistant, user (with tool results), assistant + // CurrentMessage should be: last user "Bye" + t.Logf("History count: %d", len(payload.ConversationState.History)) + + // Find the user message in history with tool results + foundToolResults := false + for i, h := range payload.ConversationState.History { + if h.UserInputMessage != nil { + t.Logf("History[%d]: user message content=%q", i, h.UserInputMessage.Content) + if h.UserInputMessage.UserInputMessageContext != nil { + if len(h.UserInputMessage.UserInputMessageContext.ToolResults) > 0 { + foundToolResults = true + t.Logf(" Found %d tool results", 
len(h.UserInputMessage.UserInputMessageContext.ToolResults)) + tr := h.UserInputMessage.UserInputMessageContext.ToolResults[0] + if tr.ToolUseID != "call_1" { + t.Errorf("Expected toolUseId 'call_1', got '%s'", tr.ToolUseID) + } + } + } + } + if h.AssistantResponseMessage != nil { + t.Logf("History[%d]: assistant message content=%q", i, h.AssistantResponseMessage.Content) + } + } + + if !foundToolResults { + t.Error("Tool results were not attached to any user message in history") + } +} + +// TestToolResultsWithMultipleToolCalls verifies handling of multiple tool calls +func TestToolResultsWithMultipleToolCalls(t *testing.T) { + input := []byte(`{ + "model": "kiro-claude-opus-4-5-agentic", + "messages": [ + {"role": "user", "content": "Read two files for me"}, + { + "role": "assistant", + "content": "I'll read both files.", + "tool_calls": [ + { + "id": "call_1", + "type": "function", + "function": { + "name": "Read", + "arguments": "{\"file_path\": \"/tmp/file1.txt\"}" + } + }, + { + "id": "call_2", + "type": "function", + "function": { + "name": "Read", + "arguments": "{\"file_path\": \"/tmp/file2.txt\"}" + } + } + ] + }, + { + "role": "tool", + "tool_call_id": "call_1", + "content": "Content of file 1" + }, + { + "role": "tool", + "tool_call_id": "call_2", + "content": "Content of file 2" + }, + {"role": "user", "content": "What do they say?"} + ] + }`) + + result, _ := BuildKiroPayloadFromOpenAI(input, "kiro-model", "", "CLI", false, false, nil, nil) + + var payload KiroPayload + if err := json.Unmarshal(result, &payload); err != nil { + t.Fatalf("Failed to unmarshal result: %v", err) + } + + t.Logf("History count: %d", len(payload.ConversationState.History)) + t.Logf("CurrentMessage content: %q", payload.ConversationState.CurrentMessage.UserInputMessage.Content) + + // Check if there are any tool results anywhere + var totalToolResults int + for i, h := range payload.ConversationState.History { + if h.UserInputMessage != nil && 
h.UserInputMessage.UserInputMessageContext != nil { + count := len(h.UserInputMessage.UserInputMessageContext.ToolResults) + t.Logf("History[%d] user message has %d tool results", i, count) + totalToolResults += count + } + } + + ctx := payload.ConversationState.CurrentMessage.UserInputMessage.UserInputMessageContext + if ctx != nil { + t.Logf("CurrentMessage has %d tool results", len(ctx.ToolResults)) + totalToolResults += len(ctx.ToolResults) + } else { + t.Logf("CurrentMessage has no UserInputMessageContext") + } + + if totalToolResults != 2 { + t.Errorf("Expected 2 tool results total, got %d", totalToolResults) + } +} + +// TestToolResultsAtEndOfConversation verifies tool results are handled when +// the conversation ends with tool results (no following user message) +func TestToolResultsAtEndOfConversation(t *testing.T) { + input := []byte(`{ + "model": "kiro-claude-opus-4-5-agentic", + "messages": [ + {"role": "user", "content": "Read a file"}, + { + "role": "assistant", + "content": "Reading the file.", + "tool_calls": [ + { + "id": "call_end", + "type": "function", + "function": { + "name": "Read", + "arguments": "{\"file_path\": \"/tmp/test.txt\"}" + } + } + ] + }, + { + "role": "tool", + "tool_call_id": "call_end", + "content": "File contents here" + } + ] + }`) + + result, _ := BuildKiroPayloadFromOpenAI(input, "kiro-model", "", "CLI", false, false, nil, nil) + + var payload KiroPayload + if err := json.Unmarshal(result, &payload); err != nil { + t.Fatalf("Failed to unmarshal result: %v", err) + } + + // When the last message is a tool result, a synthetic user message is created + // and tool results should be attached to it + ctx := payload.ConversationState.CurrentMessage.UserInputMessage.UserInputMessageContext + if ctx == nil || len(ctx.ToolResults) == 0 { + t.Error("Expected tool results to be attached to current message when conversation ends with tool result") + } else { + if ctx.ToolResults[0].ToolUseID != "call_end" { + t.Errorf("Expected 
toolUseId 'call_end', got '%s'", ctx.ToolResults[0].ToolUseID) + } + } +} + +// TestToolResultsFollowedByAssistant verifies handling when tool results are followed +// by an assistant message (no intermediate user message). +// This is the pattern from LiteLLM translation of Anthropic format where: +// user message has ONLY tool_result blocks -> LiteLLM creates tool messages +// then the next message is assistant +func TestToolResultsFollowedByAssistant(t *testing.T) { + // Sequence: user -> assistant (with tool_calls) -> tool -> tool -> assistant -> user + // This simulates LiteLLM's translation of: + // user: "Read files" + // assistant: [tool_use, tool_use] + // user: [tool_result, tool_result] <- becomes multiple "tool" role messages + // assistant: "I've read them" + // user: "What did they say?" + input := []byte(`{ + "model": "kiro-claude-opus-4-5-agentic", + "messages": [ + {"role": "user", "content": "Read two files for me"}, + { + "role": "assistant", + "content": "I'll read both files.", + "tool_calls": [ + { + "id": "call_1", + "type": "function", + "function": { + "name": "Read", + "arguments": "{\"file_path\": \"/tmp/a.txt\"}" + } + }, + { + "id": "call_2", + "type": "function", + "function": { + "name": "Read", + "arguments": "{\"file_path\": \"/tmp/b.txt\"}" + } + } + ] + }, + { + "role": "tool", + "tool_call_id": "call_1", + "content": "Contents of file A" + }, + { + "role": "tool", + "tool_call_id": "call_2", + "content": "Contents of file B" + }, + { + "role": "assistant", + "content": "I've read both files." 
+ }, + {"role": "user", "content": "What did they say?"} + ] + }`) + + result, _ := BuildKiroPayloadFromOpenAI(input, "kiro-model", "", "CLI", false, false, nil, nil) + + var payload KiroPayload + if err := json.Unmarshal(result, &payload); err != nil { + t.Fatalf("Failed to unmarshal result: %v", err) + } + + t.Logf("History count: %d", len(payload.ConversationState.History)) + + // Tool results should be attached to a synthetic user message or the history should be valid + var totalToolResults int + for i, h := range payload.ConversationState.History { + if h.UserInputMessage != nil { + t.Logf("History[%d]: user message content=%q", i, h.UserInputMessage.Content) + if h.UserInputMessage.UserInputMessageContext != nil { + count := len(h.UserInputMessage.UserInputMessageContext.ToolResults) + t.Logf(" Has %d tool results", count) + totalToolResults += count + } + } + if h.AssistantResponseMessage != nil { + t.Logf("History[%d]: assistant message content=%q", i, h.AssistantResponseMessage.Content) + } + } + + ctx := payload.ConversationState.CurrentMessage.UserInputMessage.UserInputMessageContext + if ctx != nil { + t.Logf("CurrentMessage has %d tool results", len(ctx.ToolResults)) + totalToolResults += len(ctx.ToolResults) + } + + if totalToolResults != 2 { + t.Errorf("Expected 2 tool results total, got %d", totalToolResults) + } +} + +// TestAssistantEndsConversation verifies handling when assistant is the last message +func TestAssistantEndsConversation(t *testing.T) { + input := []byte(`{ + "model": "kiro-claude-opus-4-5-agentic", + "messages": [ + {"role": "user", "content": "Hello"}, + { + "role": "assistant", + "content": "Hi there!" 
+ } + ] + }`) + + result, _ := BuildKiroPayloadFromOpenAI(input, "kiro-model", "", "CLI", false, false, nil, nil) + + var payload KiroPayload + if err := json.Unmarshal(result, &payload); err != nil { + t.Fatalf("Failed to unmarshal result: %v", err) + } + + // When assistant is last, a "Continue" user message should be created + if payload.ConversationState.CurrentMessage.UserInputMessage.Content == "" { + t.Error("Expected a 'Continue' message to be created when assistant is last") + } +} + +func TestFilterOrphanedToolResults_RemovesHistoryAndCurrentOrphans(t *testing.T) { + history := []KiroHistoryMessage{ + { + AssistantResponseMessage: &KiroAssistantResponseMessage{ + Content: "assistant", + ToolUses: []KiroToolUse{ + {ToolUseID: "keep-1", Name: "Read", Input: map[string]interface{}{}}, + }, + }, + }, + { + UserInputMessage: &KiroUserInputMessage{ + Content: "user-with-mixed-results", + UserInputMessageContext: &KiroUserInputMessageContext{ + ToolResults: []KiroToolResult{ + {ToolUseID: "keep-1", Status: "success", Content: []KiroTextContent{{Text: "ok"}}}, + {ToolUseID: "orphan-1", Status: "success", Content: []KiroTextContent{{Text: "bad"}}}, + }, + }, + }, + }, + { + UserInputMessage: &KiroUserInputMessage{ + Content: "user-only-orphans", + UserInputMessageContext: &KiroUserInputMessageContext{ + ToolResults: []KiroToolResult{ + {ToolUseID: "orphan-2", Status: "success", Content: []KiroTextContent{{Text: "bad"}}}, + }, + }, + }, + }, + } + + currentToolResults := []KiroToolResult{ + {ToolUseID: "keep-1", Status: "success", Content: []KiroTextContent{{Text: "ok"}}}, + {ToolUseID: "orphan-3", Status: "success", Content: []KiroTextContent{{Text: "bad"}}}, + } + + filteredHistory, filteredCurrent := filterOrphanedToolResults(history, currentToolResults) + + ctx1 := filteredHistory[1].UserInputMessage.UserInputMessageContext + if ctx1 == nil || len(ctx1.ToolResults) != 1 || ctx1.ToolResults[0].ToolUseID != "keep-1" { + t.Fatalf("expected mixed history message to 
keep only keep-1, got: %+v", ctx1) + } + + if filteredHistory[2].UserInputMessage.UserInputMessageContext != nil { + t.Fatalf("expected orphan-only history context to be removed") + } + + if len(filteredCurrent) != 1 || filteredCurrent[0].ToolUseID != "keep-1" { + t.Fatalf("expected current tool results to keep only keep-1, got: %+v", filteredCurrent) + } +} diff --git a/internal/translator/kiro/openai/kiro_openai_response.go b/internal/translator/kiro/openai/kiro_openai_response.go new file mode 100644 index 0000000000..fe147a92cc --- /dev/null +++ b/internal/translator/kiro/openai/kiro_openai_response.go @@ -0,0 +1,277 @@ +// Package openai provides response translation from Kiro to OpenAI format. +// This package handles the conversion of Kiro API responses into OpenAI Chat Completions-compatible +// JSON format, transforming streaming events and non-streaming responses. +package openai + +import ( + "encoding/json" + "fmt" + "sync/atomic" + "time" + + "github.com/google/uuid" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + log "github.com/sirupsen/logrus" +) + +// functionCallIDCounter provides a process-wide unique counter for function call identifiers. +var functionCallIDCounter uint64 + +// BuildOpenAIResponse constructs an OpenAI Chat Completions-compatible response. +// Supports tool_calls when tools are present in the response. +// stopReason is passed from upstream; fallback logic applied if empty. +func BuildOpenAIResponse(content string, toolUses []KiroToolUse, model string, usageInfo usage.Detail, stopReason string) []byte { + return BuildOpenAIResponseWithReasoning(content, "", toolUses, model, usageInfo, stopReason) +} + +// BuildOpenAIResponseWithReasoning constructs an OpenAI Chat Completions-compatible response with reasoning_content support. +// Supports tool_calls when tools are present in the response. +// reasoningContent is included as reasoning_content field in the message when present. 
+// stopReason is passed from upstream; fallback logic applied if empty. +func BuildOpenAIResponseWithReasoning(content, reasoningContent string, toolUses []KiroToolUse, model string, usageInfo usage.Detail, stopReason string) []byte { + // Build the message object + message := map[string]interface{}{ + "role": "assistant", + "content": content, + } + + // Add reasoning_content if present (for thinking/reasoning models) + if reasoningContent != "" { + message["reasoning_content"] = reasoningContent + } + + // Add tool_calls if present + if len(toolUses) > 0 { + var toolCalls []map[string]interface{} + for i, tu := range toolUses { + inputJSON, _ := json.Marshal(tu.Input) + toolCalls = append(toolCalls, map[string]interface{}{ + "id": tu.ToolUseID, + "type": "function", + "index": i, + "function": map[string]interface{}{ + "name": tu.Name, + "arguments": string(inputJSON), + }, + }) + } + message["tool_calls"] = toolCalls + // When tool_calls are present, content should be null according to OpenAI spec + if content == "" { + message["content"] = nil + } + } + + // Use upstream stopReason; apply fallback logic if not provided + finishReason := mapKiroStopReasonToOpenAI(stopReason) + if finishReason == "" { + finishReason = "stop" + if len(toolUses) > 0 { + finishReason = "tool_calls" + } + log.Debugf("kiro-openai: buildOpenAIResponse using fallback finish_reason: %s", finishReason) + } + + response := map[string]interface{}{ + "id": "chatcmpl-" + uuid.New().String()[:24], + "object": "chat.completion", + "created": time.Now().Unix(), + "model": model, + "choices": []map[string]interface{}{ + { + "index": 0, + "message": message, + "finish_reason": finishReason, + }, + }, + "usage": map[string]interface{}{ + "prompt_tokens": usageInfo.InputTokens, + "completion_tokens": usageInfo.OutputTokens, + "total_tokens": usageInfo.InputTokens + usageInfo.OutputTokens, + }, + } + + result, _ := json.Marshal(response) + return result +} + +// mapKiroStopReasonToOpenAI converts 
Kiro/Claude stop_reason to OpenAI finish_reason
+func mapKiroStopReasonToOpenAI(stopReason string) string {
+	switch stopReason {
+	case "end_turn":
+		return "stop"
+	case "stop_sequence":
+		return "stop"
+	case "tool_use":
+		return "tool_calls"
+	case "max_tokens":
+		return "length"
+	case "content_filtered":
+		return "content_filter"
+	default:
+		return stopReason
+	}
+}
+
+// BuildOpenAIStreamChunk constructs an OpenAI Chat Completions streaming chunk.
+// This is the delta format used in streaming responses.
+func BuildOpenAIStreamChunk(model string, deltaContent string, deltaToolCalls []map[string]interface{}, finishReason string, index int) []byte {
+	delta := map[string]interface{}{}
+
+	// First chunk should include role
+	if index == 0 && deltaContent == "" && len(deltaToolCalls) == 0 {
+		delta["role"] = "assistant"
+		delta["content"] = ""
+	} else if deltaContent != "" {
+		delta["content"] = deltaContent
+	}
+
+	// Add tool_calls delta if present
+	if len(deltaToolCalls) > 0 {
+		delta["tool_calls"] = deltaToolCalls
+	}
+
+	choice := map[string]interface{}{
+		"index": 0,
+		"delta": delta,
+	}
+
+	if finishReason != "" {
+		choice["finish_reason"] = finishReason
+	} else {
+		choice["finish_reason"] = nil
+	}
+
+	chunk := map[string]interface{}{
+		"id":      "chatcmpl-" + uuid.New().String()[:12],
+		"object":  "chat.completion.chunk",
+		"created": time.Now().Unix(),
+		"model":   model,
+		"choices": []map[string]interface{}{choice},
+	}
+
+	result, _ := json.Marshal(chunk)
+	return result
+}
+
+// BuildOpenAIStreamChunkWithToolCallStart creates a stream chunk for tool call start
+func BuildOpenAIStreamChunkWithToolCallStart(model string, toolUseID, toolName string, toolIndex int) []byte {
+	toolCall := map[string]interface{}{
+		"index": toolIndex,
+		"id":    toolUseID,
+		"type":  "function",
+		"function": map[string]interface{}{
+			"name":      toolName,
+			"arguments": "",
+		},
+	}
+
+	delta := map[string]interface{}{
+		"tool_calls": []map[string]interface{}{toolCall},
+	}
+
+	choice := map[string]interface{}{
+		"index":         0,
+		"delta":         delta,
+		"finish_reason": nil,
+	}
+
+	chunk := map[string]interface{}{
+		"id":      "chatcmpl-" + uuid.New().String()[:12],
+		"object":  "chat.completion.chunk",
+		"created": time.Now().Unix(),
+		"model":   model,
+		"choices": []map[string]interface{}{choice},
+	}
+
+	result, _ := json.Marshal(chunk)
+	return result
+}
+
+// BuildOpenAIStreamChunkWithToolCallDelta creates a stream chunk for tool call arguments delta
+func BuildOpenAIStreamChunkWithToolCallDelta(model string, argumentsDelta string, toolIndex int) []byte {
+	toolCall := map[string]interface{}{
+		"index": toolIndex,
+		"function": map[string]interface{}{
+			"arguments": argumentsDelta,
+		},
+	}
+
+	delta := map[string]interface{}{
+		"tool_calls": []map[string]interface{}{toolCall},
+	}
+
+	choice := map[string]interface{}{
+		"index":         0,
+		"delta":         delta,
+		"finish_reason": nil,
+	}
+
+	chunk := map[string]interface{}{
+		"id":      "chatcmpl-" + uuid.New().String()[:12],
+		"object":  "chat.completion.chunk",
+		"created": time.Now().Unix(),
+		"model":   model,
+		"choices": []map[string]interface{}{choice},
+	}
+
+	result, _ := json.Marshal(chunk)
+	return result
+}
+
+// BuildOpenAIStreamDoneChunk creates the final [DONE] stream event
+func BuildOpenAIStreamDoneChunk() []byte {
+	return []byte("data: [DONE]")
+}
+
+// BuildOpenAIStreamFinishChunk creates the final chunk with finish_reason
+func BuildOpenAIStreamFinishChunk(model string, finishReason string) []byte {
+	choice := map[string]interface{}{
+		"index":         0,
+		"delta":         map[string]interface{}{},
+		"finish_reason": finishReason,
+	}
+
+	chunk := map[string]interface{}{
+		"id":      "chatcmpl-" + uuid.New().String()[:12],
+		"object":  "chat.completion.chunk",
+		"created": time.Now().Unix(),
+		"model":   model,
+		"choices": []map[string]interface{}{choice},
+	}
+
+	result, _ := json.Marshal(chunk)
+	return result
+}
+
+// BuildOpenAIStreamUsageChunk creates a chunk with usage information (optional, for stream_options.include_usage)
+func BuildOpenAIStreamUsageChunk(model string, usageInfo usage.Detail) []byte {
+	chunk := map[string]interface{}{
+		"id":      "chatcmpl-" + uuid.New().String()[:12],
+		"object":  "chat.completion.chunk",
+		"created": time.Now().Unix(),
+		"model":   model,
+		"choices": []map[string]interface{}{},
+		"usage": map[string]interface{}{
+			"prompt_tokens":     usageInfo.InputTokens,
+			"completion_tokens": usageInfo.OutputTokens,
+			"total_tokens":      usageInfo.InputTokens + usageInfo.OutputTokens,
+		},
+	}
+
+	result, _ := json.Marshal(chunk)
+	return result
+}
+
+// GenerateToolCallID generates a unique tool call ID in OpenAI format
+func GenerateToolCallID(toolName string) string {
+	return fmt.Sprintf("call_%s_%d_%d", toolName[:min(8, len(toolName))], time.Now().UnixNano(), atomic.AddUint64(&functionCallIDCounter, 1))
+}
+
+// min returns the minimum of two integers
+func min(a, b int) int {
+	if a < b {
+		return a
+	}
+	return b
+}
diff --git a/internal/translator/kiro/openai/kiro_openai_stream.go b/internal/translator/kiro/openai/kiro_openai_stream.go
new file mode 100644
index 0000000000..490bc78d51
--- /dev/null
+++ b/internal/translator/kiro/openai/kiro_openai_stream.go
@@ -0,0 +1,212 @@
+// Package openai provides streaming SSE event building for OpenAI format.
+// This package handles the construction of OpenAI-compatible Server-Sent Events (SSE)
+// for streaming responses from Kiro API.
+package openai
+
+import (
+	"encoding/json"
+	"time"
+
+	"github.com/google/uuid"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage"
+)
+
+// OpenAIStreamState tracks the state of streaming response conversion
+type OpenAIStreamState struct {
+	ChunkIndex        int
+	ToolCallIndex     int
+	HasSentFirstChunk bool
+	Model             string
+	ResponseID        string
+	Created           int64
+}
+
+// NewOpenAIStreamState creates a new stream state for tracking
+func NewOpenAIStreamState(model string) *OpenAIStreamState {
+	return &OpenAIStreamState{
+		ChunkIndex:        0,
+		ToolCallIndex:     0,
+		HasSentFirstChunk: false,
+		Model:             model,
+		ResponseID:        "chatcmpl-" + uuid.New().String()[:24],
+		Created:           time.Now().Unix(),
+	}
+}
+
+// FormatSSEEvent formats a JSON payload for SSE streaming.
+// Note: This returns raw JSON data without "data:" prefix.
+// The SSE "data:" prefix is added by the Handler layer (e.g., openai_handlers.go)
+// to maintain architectural consistency and avoid double-prefix issues.
+func FormatSSEEvent(data []byte) string {
+	return string(data)
+}
+
+// BuildOpenAISSETextDelta creates an SSE event for text content delta
+func BuildOpenAISSETextDelta(state *OpenAIStreamState, textDelta string) string {
+	delta := map[string]interface{}{
+		"content": textDelta,
+	}
+
+	// Include role in first chunk
+	if !state.HasSentFirstChunk {
+		delta["role"] = "assistant"
+		state.HasSentFirstChunk = true
+	}
+
+	chunk := buildBaseChunk(state, delta, nil)
+	result, _ := json.Marshal(chunk)
+	state.ChunkIndex++
+	return FormatSSEEvent(result)
+}
+
+// BuildOpenAISSEToolCallStart creates an SSE event for tool call start
+func BuildOpenAISSEToolCallStart(state *OpenAIStreamState, toolUseID, toolName string) string {
+	toolCall := map[string]interface{}{
+		"index": state.ToolCallIndex,
+		"id":    toolUseID,
+		"type":  "function",
+		"function": map[string]interface{}{
+			"name":      toolName,
+			"arguments": "",
+		},
+	}
+
+	delta := map[string]interface{}{
+		"tool_calls": []map[string]interface{}{toolCall},
+	}
+
+	// Include role in first chunk if not sent yet
+	if !state.HasSentFirstChunk {
+		delta["role"] = "assistant"
+		state.HasSentFirstChunk = true
+	}
+
+	chunk := buildBaseChunk(state, delta, nil)
+	result, _ := json.Marshal(chunk)
+	state.ChunkIndex++
+	return FormatSSEEvent(result)
+}
+
+// BuildOpenAISSEToolCallArgumentsDelta creates an SSE event for tool call arguments delta
+func BuildOpenAISSEToolCallArgumentsDelta(state *OpenAIStreamState, argumentsDelta string, toolIndex int) string {
+	toolCall := map[string]interface{}{
+		"index": toolIndex,
+		"function": map[string]interface{}{
+			"arguments": argumentsDelta,
+		},
+	}
+
+	delta := map[string]interface{}{
+		"tool_calls": []map[string]interface{}{toolCall},
+	}
+
+	chunk := buildBaseChunk(state, delta, nil)
+	result, _ := json.Marshal(chunk)
+	state.ChunkIndex++
+	return FormatSSEEvent(result)
+}
+
+// BuildOpenAISSEFinish creates an SSE event with finish_reason
+func BuildOpenAISSEFinish(state *OpenAIStreamState, finishReason string) string {
+	chunk := buildBaseChunk(state, map[string]interface{}{}, &finishReason)
+	result, _ := json.Marshal(chunk)
+	state.ChunkIndex++
+	return FormatSSEEvent(result)
+}
+
+// BuildOpenAISSEUsage creates an SSE event with usage information
+func BuildOpenAISSEUsage(state *OpenAIStreamState, usageInfo usage.Detail) string {
+	chunk := map[string]interface{}{
+		"id":      state.ResponseID,
+		"object":  "chat.completion.chunk",
+		"created": state.Created,
+		"model":   state.Model,
+		"choices": []map[string]interface{}{},
+		"usage": map[string]interface{}{
+			"prompt_tokens":     usageInfo.InputTokens,
+			"completion_tokens": usageInfo.OutputTokens,
+			"total_tokens":      usageInfo.InputTokens + usageInfo.OutputTokens,
+		},
+	}
+	result, _ := json.Marshal(chunk)
+	return FormatSSEEvent(result)
+}
+
+// BuildOpenAISSEDone creates the final [DONE] SSE event.
+// Note: This returns raw "[DONE]" without "data:" prefix.
+// The SSE "data:" prefix is added by the Handler layer (e.g., openai_handlers.go)
+// to maintain architectural consistency and avoid double-prefix issues.
+func BuildOpenAISSEDone() string {
+	return "[DONE]"
+}
+
+// buildBaseChunk creates a base chunk structure for streaming
+func buildBaseChunk(state *OpenAIStreamState, delta map[string]interface{}, finishReason *string) map[string]interface{} {
+	choice := map[string]interface{}{
+		"index": 0,
+		"delta": delta,
+	}
+
+	if finishReason != nil {
+		choice["finish_reason"] = *finishReason
+	} else {
+		choice["finish_reason"] = nil
+	}
+
+	return map[string]interface{}{
+		"id":      state.ResponseID,
+		"object":  "chat.completion.chunk",
+		"created": state.Created,
+		"model":   state.Model,
+		"choices": []map[string]interface{}{choice},
+	}
+}
+
+// BuildOpenAISSEReasoningDelta creates an SSE event for reasoning content delta
+// This is used for o1/o3 style models that expose reasoning tokens
+func BuildOpenAISSEReasoningDelta(state *OpenAIStreamState, reasoningDelta string) string {
+	delta := map[string]interface{}{
+		"reasoning_content": reasoningDelta,
+	}
+
+	// Include role in first chunk
+	if !state.HasSentFirstChunk {
+		delta["role"] = "assistant"
+		state.HasSentFirstChunk = true
+	}
+
+	chunk := buildBaseChunk(state, delta, nil)
+	result, _ := json.Marshal(chunk)
+	state.ChunkIndex++
+	return FormatSSEEvent(result)
+}
+
+// BuildOpenAISSEFirstChunk creates the first chunk with role only
+func BuildOpenAISSEFirstChunk(state *OpenAIStreamState) string {
+	delta := map[string]interface{}{
+		"role":    "assistant",
+		"content": "",
+	}
+
+	state.HasSentFirstChunk = true
+	chunk := buildBaseChunk(state, delta, nil)
+	result, _ := json.Marshal(chunk)
+	state.ChunkIndex++
+	return FormatSSEEvent(result)
+}
+
+// ThinkingTagState tracks state for thinking tag detection in streaming
+type ThinkingTagState struct {
+	InThinkingBlock   bool
+	PendingStartChars int
+	PendingEndChars   int
+}
+
+// NewThinkingTagState creates a new thinking tag state
+func NewThinkingTagState() *ThinkingTagState {
+	return &ThinkingTagState{
+		InThinkingBlock:   false,
+		PendingStartChars: 0,
+		PendingEndChars:   0,
+	}
+}
diff --git a/internal/translator/openai/claude/init.go b/internal/translator/openai/claude/init.go
index 0e0f82eae9..baeeca84bc 100644
--- a/internal/translator/openai/claude/init.go
+++ b/internal/translator/openai/claude/init.go
@@ -1,9 +1,9 @@
 package claude
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/openai/claude/openai_claude_request.go b/internal/translator/openai/claude/openai_claude_request.go
index f12dd0c694..99fc2763ff 100644
--- a/internal/translator/openai/claude/openai_claude_request.go
+++ b/internal/translator/openai/claude/openai_claude_request.go
@@ -8,7 +8,7 @@ package claude
 import (
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/openai/claude/openai_claude_response.go b/internal/translator/openai/claude/openai_claude_response.go
index 46c75898c4..1925539c19 100644
--- a/internal/translator/openai/claude/openai_claude_response.go
+++ b/internal/translator/openai/claude/openai_claude_response.go
@@ -10,8 +10,8 @@ import (
 	"context"
 	"strings"
 
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
@@ -236,7 +236,7 @@ func convertOpenAIStreamingChunkToAnthropic(rawJSON []byte, param *ConvertOpenAI
 
 	// Handle function name
 	if function := toolCall.Get("function"); function.Exists() {
-		if name := function.Get("name"); name.Exists() {
+		if name := function.Get("name"); name.Exists() && name.String() != "" {
 			accumulator.Name = util.MapToolName(param.ToolNameMap, name.String())
 			stopThinkingContentBlock(param, &results)
diff --git a/internal/translator/openai/claude/openai_claude_response_test.go b/internal/translator/openai/claude/openai_claude_response_test.go
new file mode 100644
index 0000000000..8c36fc3d8c
--- /dev/null
+++ b/internal/translator/openai/claude/openai_claude_response_test.go
@@ -0,0 +1,41 @@
+package claude
+
+import (
+	"bytes"
+	"context"
+	"testing"
+)
+
+func TestConvertOpenAIResponseToClaude_StreamIgnoresNullToolNameDelta(t *testing.T) {
+	originalRequest := []byte(`{"stream":true}`)
+	var param any
+
+	firstChunks := ConvertOpenAIResponseToClaude(
+		context.Background(),
+		"test-model",
+		originalRequest,
+		nil,
+		[]byte(`data: {"id":"chatcmpl_1","model":"test-model","created":1,"choices":[{"index":0,"delta":{"role":"assistant","tool_calls":[{"index":0,"id":"call_1","type":"function","function":{"name":"read_file","arguments":""}}]},"finish_reason":null}]}`),
+		&param,
+	)
+	firstOutput := bytes.Join(firstChunks, nil)
+	if !bytes.Contains(firstOutput, []byte(`"name":"read_file"`)) {
+		t.Fatalf("expected first chunk to start read_file tool block, got %s", string(firstOutput))
+	}
+
+	secondChunks := ConvertOpenAIResponseToClaude(
+		context.Background(),
+		"test-model",
+		originalRequest,
+		nil,
+		[]byte(`data: {"id":"chatcmpl_1","model":"test-model","created":1,"choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"name":null,"arguments":"{\"path\":\"/tmp/a\"}"}}]},"finish_reason":null}]}`),
+		&param,
+	)
+	secondOutput := bytes.Join(secondChunks, nil)
+	if bytes.Contains(secondOutput, []byte(`content_block_start`)) {
+		t.Fatalf("did not expect null tool name delta to start a new content block, got %s", string(secondOutput))
+	}
+	if bytes.Contains(secondOutput, []byte(`"name":""`)) {
+		t.Fatalf("did not expect null tool name delta to emit an empty tool name, got %s", string(secondOutput))
+	}
+}
diff --git a/internal/translator/openai/gemini-cli/init.go b/internal/translator/openai/gemini-cli/init.go
index 12aec5ec90..7b52d06dc0 100644
--- a/internal/translator/openai/gemini-cli/init.go
+++ b/internal/translator/openai/gemini-cli/init.go
@@ -1,9 +1,9 @@
 package geminiCLI
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/openai/gemini-cli/openai_gemini_request.go b/internal/translator/openai/gemini-cli/openai_gemini_request.go
index 847c278f36..c651826669 100644
--- a/internal/translator/openai/gemini-cli/openai_gemini_request.go
+++ b/internal/translator/openai/gemini-cli/openai_gemini_request.go
@@ -6,7 +6,7 @@ package geminiCLI
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/gemini"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/gemini"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/openai/gemini-cli/openai_gemini_response.go b/internal/translator/openai/gemini-cli/openai_gemini_response.go
index a7369dbfe9..e54e08fc27 100644
--- a/internal/translator/openai/gemini-cli/openai_gemini_response.go
+++ b/internal/translator/openai/gemini-cli/openai_gemini_response.go
@@ -8,8 +8,8 @@ package geminiCLI
 import (
 	"context"
 
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/gemini"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/gemini"
 )
 
 // ConvertOpenAIResponseToGeminiCLI converts OpenAI Chat Completions streaming response format to Gemini API format.
diff --git a/internal/translator/openai/gemini/init.go b/internal/translator/openai/gemini/init.go
index 4f056ace9f..24ae281eff 100644
--- a/internal/translator/openai/gemini/init.go
+++ b/internal/translator/openai/gemini/init.go
@@ -1,9 +1,9 @@
 package gemini
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/openai/gemini/openai_gemini_request.go b/internal/translator/openai/gemini/openai_gemini_request.go
index b4edbb1df6..7369de88df 100644
--- a/internal/translator/openai/gemini/openai_gemini_request.go
+++ b/internal/translator/openai/gemini/openai_gemini_request.go
@@ -11,7 +11,7 @@ import (
 	"math/big"
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/openai/gemini/openai_gemini_response.go b/internal/translator/openai/gemini/openai_gemini_response.go
index 092a778eac..439ae8fbd7 100644
--- a/internal/translator/openai/gemini/openai_gemini_response.go
+++ b/internal/translator/openai/gemini/openai_gemini_response.go
@@ -12,7 +12,7 @@ import (
 	"strconv"
 	"strings"
 
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/openai/openai/chat-completions/init.go b/internal/translator/openai/openai/chat-completions/init.go
index 90fa3dcd90..bfe82cea72 100644
--- a/internal/translator/openai/openai/chat-completions/init.go
+++ b/internal/translator/openai/openai/chat-completions/init.go
@@ -1,9 +1,9 @@
 package chat_completions
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/openai/openai/responses/init.go b/internal/translator/openai/openai/responses/init.go
index e6f60e0e13..c47081bae3 100644
--- a/internal/translator/openai/openai/responses/init.go
+++ b/internal/translator/openai/openai/responses/init.go
@@ -1,9 +1,9 @@
 package responses
 
 import (
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/translator"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/translator/translator"
 )
 
 func init() {
diff --git a/internal/translator/openai/openai/responses/openai_openai-responses_request.go b/internal/translator/openai/openai/responses/openai_openai-responses_request.go
index 2366c9c37b..15acf7cdb4 100644
--- a/internal/translator/openai/openai/responses/openai_openai-responses_request.go
+++ b/internal/translator/openai/openai/responses/openai_openai-responses_request.go
@@ -57,11 +57,72 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
 
 	// Convert input array to messages
 	if input := root.Get("input"); input.Exists() && input.IsArray() {
-		input.ForEach(func(_, item gjson.Result) bool {
+		inputItems := input.Array()
+		outputCallIDs := make(map[string]struct{})
+		for _, item := range inputItems {
+			if item.Get("type").String() != "function_call_output" {
+				continue
+			}
+			callID := strings.TrimSpace(item.Get("call_id").String())
+			if callID == "" {
+				continue
+			}
+			outputCallIDs[callID] = struct{}{}
+		}
+
+		pendingToolCalls := make([]interface{}, 0)
+		pendingToolCallIDs := make([]string, 0)
+		awaitingToolOutputs := make(map[string]struct{})
+		deferredMessages := make([][]byte, 0)
+
+		flushPendingToolCalls := func() {
+			if len(pendingToolCalls) == 0 {
+				return
+			}
+			assistantMessage := []byte(`{"role":"assistant","tool_calls":[]}`)
+			assistantMessage, _ = sjson.SetBytes(assistantMessage, "tool_calls", pendingToolCalls)
+			out, _ = sjson.SetRawBytes(out, "messages.-1", assistantMessage)
+			for _, id := range pendingToolCallIDs {
+				if strings.TrimSpace(id) == "" {
+					continue
+				}
+				awaitingToolOutputs[id] = struct{}{}
+			}
+			pendingToolCalls = pendingToolCalls[:0]
+			pendingToolCallIDs = pendingToolCallIDs[:0]
+		}
+		flushDeferredMessages := func() {
+			for _, message := range deferredMessages {
+				out, _ = sjson.SetRawBytes(out, "messages.-1", message)
+			}
+			deferredMessages = deferredMessages[:0]
+		}
+		hasAwaitingToolOutput := func() bool {
+			for id := range awaitingToolOutputs {
+				if _, ok := outputCallIDs[id]; ok {
+					return true
+				}
+			}
+			return false
+		}
+		appendRegularMessage := func(message []byte) {
+			// Keep tool-call adjacency strict for providers that require
+			// assistant(tool_calls) -> tool(tool_call_id) with no message in between.
+			if hasAwaitingToolOutput() {
+				deferredMessages = append(deferredMessages, message)
+				return
+			}
+			out, _ = sjson.SetRawBytes(out, "messages.-1", message)
+		}
+
+		for _, item := range inputItems {
 			itemType := item.Get("type").String()
 			if itemType == "" && item.Get("role").String() != "" {
 				itemType = "message"
 			}
+			if itemType != "function_call" {
+				flushPendingToolCalls()
+			}
 
 			switch itemType {
 			case "message", "":
@@ -109,12 +170,10 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
 					message, _ = sjson.SetBytes(message, "content", content.String())
 				}
 
-				out, _ = sjson.SetRawBytes(out, "messages.-1", message)
+				appendRegularMessage(message)
 
 			case "function_call":
-				// Handle function call conversion to assistant message with tool_calls
-				assistantMessage := []byte(`{"role":"assistant","tool_calls":[]}`)
-
+				// Buffer consecutive function calls and emit them as one assistant message.
 				toolCall := []byte(`{"id":"","type":"function","function":{"name":"","arguments":""}}`)
 
 				if callId := item.Get("call_id"); callId.Exists() {
@@ -128,16 +187,19 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
 				if arguments := item.Get("arguments"); arguments.Exists() {
 					toolCall, _ = sjson.SetBytes(toolCall, "function.arguments", arguments.String())
 				}
-
-				assistantMessage, _ = sjson.SetRawBytes(assistantMessage, "tool_calls.0", toolCall)
-				out, _ = sjson.SetRawBytes(out, "messages.-1", assistantMessage)
+				pendingToolCalls = append(pendingToolCalls, gjson.ParseBytes(toolCall).Value())
+				if callID := strings.TrimSpace(item.Get("call_id").String()); callID != "" {
+					pendingToolCallIDs = append(pendingToolCallIDs, callID)
+				}
 
 			case "function_call_output":
 				// Handle function call output conversion to tool message
 				toolMessage := []byte(`{"role":"tool","tool_call_id":"","content":""}`)
 
+				callID := ""
 				if callId := item.Get("call_id"); callId.Exists() {
-					toolMessage, _ = sjson.SetBytes(toolMessage, "tool_call_id", callId.String())
+					callID = strings.TrimSpace(callId.String())
+					toolMessage, _ = sjson.SetBytes(toolMessage, "tool_call_id", callID)
 				}
 
 				if output := item.Get("output"); output.Exists() {
@@ -145,10 +207,17 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
 				}
 
 				out, _ = sjson.SetRawBytes(out, "messages.-1", toolMessage)
+				if callID != "" {
+					delete(awaitingToolOutputs, callID)
+				}
+				if len(awaitingToolOutputs) == 0 && len(deferredMessages) > 0 {
+					flushDeferredMessages()
+				}
 			}
-
-			return true
-		})
+		}
+		flushPendingToolCalls()
+		flushDeferredMessages()
 	} else if input.Type == gjson.String {
 		msg := []byte(`{}`)
 		msg, _ = sjson.SetBytes(msg, "role", "user")
diff --git a/internal/translator/openai/openai/responses/openai_openai-responses_request_test.go b/internal/translator/openai/openai/responses/openai_openai-responses_request_test.go
new file mode 100644
index 0000000000..9dd0e288b2
--- /dev/null
+++ b/internal/translator/openai/openai/responses/openai_openai-responses_request_test.go
@@ -0,0 +1,124 @@
+package responses
+
+import (
+	"bytes"
+	"encoding/json"
+	"testing"
+
+	"github.com/tidwall/gjson"
+)
+
+func prettyJSONForTest(raw []byte) string {
+	if !gjson.ValidBytes(raw) {
+		return string(raw)
+	}
+	var out bytes.Buffer
+	if err := json.Indent(&out, raw, "", "  "); err != nil {
+		return string(raw)
+	}
+	return out.String()
+}
+
+func TestConvertOpenAIResponsesRequestToOpenAIChatCompletions_MergeConsecutiveFunctionCalls(t *testing.T) {
+	raw := []byte(`{
+		"input": [
+			{"type":"function_call","call_id":"exec_command:0","name":"exec_command","arguments":"{\"cmd\":\"ls\"}"},
+			{"type":"function_call","call_id":"exec_command:1","name":"exec_command","arguments":"{\"cmd\":\"pwd\"}"},
+			{"type":"function_call_output","call_id":"exec_command:0","output":"ok0"},
+			{"type":"function_call_output","call_id":"exec_command:1","output":"ok1"}
+		]
+	}`)
+	t.Logf("input json:\n%s", prettyJSONForTest(raw))
+
+	out := ConvertOpenAIResponsesRequestToOpenAIChatCompletions("kimi-k2.6", raw, true)
+	t.Logf("output json:\n%s", prettyJSONForTest(out))
+
+	msgs := gjson.GetBytes(out, "messages")
+	if !msgs.Exists() || !msgs.IsArray() {
+		t.Fatalf("messages should be an array")
+	}
+	if got := len(msgs.Array()); got != 3 {
+		t.Fatalf("messages count = %d, want %d", got, 3)
+	}
+
+	if got := gjson.GetBytes(out, "messages.0.role").String(); got != "assistant" {
+		t.Fatalf("messages.0.role = %q, want %q", got, "assistant")
+	}
+	if got := len(gjson.GetBytes(out, "messages.0.tool_calls").Array()); got != 2 {
+		t.Fatalf("messages.0.tool_calls length = %d, want %d", got, 2)
+	}
+	if got := gjson.GetBytes(out, "messages.0.tool_calls.0.id").String(); got != "exec_command:0" {
+		t.Fatalf("messages.0.tool_calls.0.id = %q, want %q", got, "exec_command:0")
+	}
+	if got := gjson.GetBytes(out, "messages.0.tool_calls.1.id").String(); got != "exec_command:1" {
+		t.Fatalf("messages.0.tool_calls.1.id = %q, want %q", got, "exec_command:1")
+	}
+
+	if got := gjson.GetBytes(out, "messages.1.tool_call_id").String(); got != "exec_command:0" {
+		t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "exec_command:0")
+	}
+	if got := gjson.GetBytes(out, "messages.2.tool_call_id").String(); got != "exec_command:1" {
+		t.Fatalf("messages.2.tool_call_id = %q, want %q", got, "exec_command:1")
+	}
+}
+
+func TestConvertOpenAIResponsesRequestToOpenAIChatCompletions_SplitFunctionCallsWhenInterrupted(t *testing.T) {
+	raw := []byte(`{
+		"input": [
+			{"type":"function_call","call_id":"call_a","name":"tool_a","arguments":"{}"},
+			{"type":"message","role":"user","content":"next"},
+			{"type":"function_call","call_id":"call_b","name":"tool_b","arguments":"{}"}
+		]
+	}`)
+	t.Logf("input json:\n%s", prettyJSONForTest(raw))
+
+	out := ConvertOpenAIResponsesRequestToOpenAIChatCompletions("kimi-k2.6", raw, false)
+	t.Logf("output json:\n%s", prettyJSONForTest(out))
+
+	if got := len(gjson.GetBytes(out, "messages").Array()); got != 3 {
+		t.Fatalf("messages count = %d, want %d", got, 3)
+	}
+	if got := gjson.GetBytes(out, "messages.0.tool_calls.0.id").String(); got != "call_a" {
+		t.Fatalf("messages.0.tool_calls.0.id = %q, want %q", got, "call_a")
+	}
+	if got := gjson.GetBytes(out, "messages.2.tool_calls.0.id").String(); got != "call_b" {
+		t.Fatalf("messages.2.tool_calls.0.id = %q, want %q", got, "call_b")
+	}
+}
+
+func TestConvertOpenAIResponsesRequestToOpenAIChatCompletions_DefersMessageUntilToolOutput(t *testing.T) {
+	raw := []byte(`{
+		"input": [
+			{"type":"function_call","call_id":"call_x","name":"exec_command","arguments":"{\"cmd\":\"echo hi\"}"},
+			{"type":"message","role":"user","content":"Approved command prefix saved"},
+			{"type":"function_call_output","call_id":"call_x","output":"ok"},
+			{"type":"message","role":"user","content":"next"}
+		]
+	}`)
+	t.Logf("input json:\n%s", prettyJSONForTest(raw))
+
+	out := ConvertOpenAIResponsesRequestToOpenAIChatCompletions("kimi-k2.6", raw, true)
+	t.Logf("output json:\n%s", prettyJSONForTest(out))
+
+	if got := len(gjson.GetBytes(out, "messages").Array()); got != 4 {
+		t.Fatalf("messages count = %d, want %d", got, 4)
+	}
+	if got := gjson.GetBytes(out, "messages.0.role").String(); got != "assistant" {
+		t.Fatalf("messages.0.role = %q, want %q", got, "assistant")
+	}
+	if got := gjson.GetBytes(out, "messages.1.role").String(); got != "tool" {
+		t.Fatalf("messages.1.role = %q, want %q", got, "tool")
+	}
+	if got := gjson.GetBytes(out, "messages.1.tool_call_id").String(); got != "call_x" {
+		t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "call_x")
+	}
+	if got := gjson.GetBytes(out, "messages.2.role").String(); got != "user" {
+		t.Fatalf("messages.2.role = %q, want %q", got, "user")
+	}
+	if got := gjson.GetBytes(out, "messages.2.content").String(); got != "Approved command prefix saved" {
+		t.Fatalf("messages.2.content = %q, want %q", got, "Approved command prefix saved")
+	}
+	if got := gjson.GetBytes(out, "messages.3.content").String(); got != "next" {
+		t.Fatalf("messages.3.content = %q, want %q", got, "next")
+	}
+}
diff --git a/internal/translator/openai/openai/responses/openai_openai-responses_response.go b/internal/translator/openai/openai/responses/openai_openai-responses_response.go
index 8a44aede44..8895b68445 100644
--- a/internal/translator/openai/openai/responses/openai_openai-responses_response.go
+++ b/internal/translator/openai/openai/responses/openai_openai-responses_response.go
@@ -9,7 +9,7 @@ import (
 	"sync/atomic"
 	"time"
 
-	translatorcommon "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/common"
+	translatorcommon "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/common"
 	"github.com/tidwall/gjson"
 	"github.com/tidwall/sjson"
 )
diff --git a/internal/translator/translator/translator.go b/internal/translator/translator/translator.go
index ab3f68a99d..88766a83bb 100644
--- a/internal/translator/translator/translator.go
+++ b/internal/translator/translator/translator.go
@@ -7,8 +7,8 @@ package translator
 import (
 	"context"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator"
 )
 
 // registry holds the default translator registry instance.
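Reviewer note: the new `kiro/openai` stream builders are stateful per response — `OpenAIStreamState` pins one response ID and `created` timestamp across every chunk, and the `"role":"assistant"` delta is emitted exactly once, on the first chunk. A minimal, self-contained sketch of how a handler might drive that contract is below; `demoState` and `textDelta` are simplified stand-ins written for this note, not functions from this PR, and the SSE framing shown matches the "data:" convention documented on `FormatSSEEvent`/`BuildOpenAISSEDone` (the builders return raw payloads, the handler adds the prefix).

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// demoState is a simplified stand-in for OpenAIStreamState: one response
// ID and created timestamp shared by every chunk, plus a flag so the
// "role":"assistant" delta is emitted exactly once.
type demoState struct {
	ResponseID string
	Model      string
	Created    int64
	SentRole   bool
}

// textDelta mirrors the shape produced by BuildOpenAISSETextDelta:
// a raw chat.completion.chunk JSON payload without the SSE "data:" prefix.
func (s *demoState) textDelta(text string) string {
	delta := map[string]interface{}{"content": text}
	if !s.SentRole {
		delta["role"] = "assistant"
		s.SentRole = true
	}
	chunk := map[string]interface{}{
		"id":      s.ResponseID,
		"object":  "chat.completion.chunk",
		"created": s.Created,
		"model":   s.Model,
		"choices": []map[string]interface{}{
			{"index": 0, "delta": delta, "finish_reason": nil},
		},
	}
	b, _ := json.Marshal(chunk)
	return string(b)
}

func main() {
	st := &demoState{
		ResponseID: "chatcmpl-demo",
		Model:      "kiro-claude",
		Created:    time.Now().Unix(),
	}
	// The handler layer adds the SSE framing; keeping the prefix out of
	// the builders is what avoids the double "data:" problem.
	for _, piece := range []string{"Hel", "lo"} {
		fmt.Printf("data: %s\n\n", st.textDelta(piece))
	}
	fmt.Println("data: [DONE]")
}
```

Splitting framing from payload construction this way also keeps the builders reusable for both the chunk-style helpers in `kiro_openai.go` and the stateful ones in `kiro_openai_stream.go`.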
diff --git a/internal/tui/app.go b/internal/tui/app.go
index b9ee9e1a3a..c0a7c3a8ab 100644
--- a/internal/tui/app.go
+++ b/internal/tui/app.go
@@ -18,7 +18,6 @@ const (
 	tabAuthFiles
 	tabAPIKeys
 	tabOAuth
-	tabUsage
 	tabLogs
 )
 
@@ -40,7 +39,6 @@ type App struct {
 	auth  authTabModel
 	keys  keysTabModel
 	oauth oauthTabModel
-	usage usageTabModel
 	logs  logsTabModel
 
 	client *Client
@@ -50,7 +48,7 @@ type App struct {
 	ready bool
 
 	// Track which tabs have been initialized (fetched data)
-	initialized [7]bool
+	initialized [6]bool
 }
 
 type authConnectMsg struct {
@@ -81,10 +79,9 @@ func NewApp(port int, secretKey string, hook *LogHook) App {
 		auth:   newAuthTabModel(client),
 		keys:   newKeysTabModel(client),
 		oauth:  newOAuthTabModel(client),
-		usage:  newUsageTabModel(client),
 		logs:   newLogsTabModel(client, hook),
 		client: client,
-		initialized: [7]bool{
+		initialized: [6]bool{
 			tabDashboard: true,
 			tabLogs:      true,
 		},
@@ -92,7 +89,7 @@ func NewApp(port int, secretKey string, hook *LogHook) App {
 	app.refreshTabs()
 	if authRequired {
-		app.initialized = [7]bool{}
+		app.initialized = [6]bool{}
 	}
 	app.setAuthInputPrompt()
 	return app
@@ -128,7 +125,6 @@ func (a App) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		a.auth.SetSize(contentW, contentH)
 		a.keys.SetSize(contentW, contentH)
 		a.oauth.SetSize(contentW, contentH)
-		a.usage.SetSize(contentW, contentH)
 		a.logs.SetSize(contentW, contentH)
 		return a, nil
 
@@ -142,7 +138,7 @@ func (a App) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		a.authenticated = true
 		a.logsEnabled = a.standalone || isLogsEnabledFromConfig(msg.cfg)
 		a.refreshTabs()
-		a.initialized = [7]bool{}
+		a.initialized = [6]bool{}
 		a.initialized[tabDashboard] = true
 		cmds := []tea.Cmd{a.dashboard.Init()}
 		if a.logsEnabled {
@@ -258,8 +254,6 @@ func (a App) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		a.keys, cmd = a.keys.Update(msg)
 	case tabOAuth:
 		a.oauth, cmd = a.oauth.Update(msg)
-	case tabUsage:
-		a.usage, cmd = a.usage.Update(msg)
 	case tabLogs:
 		a.logs, cmd = a.logs.Update(msg)
 	}
@@ -322,8 +316,6 @@ func (a *App) initTabIfNeeded(_ int) tea.Cmd {
 		return a.keys.Init()
 	case tabOAuth:
 		return a.oauth.Init()
-	case tabUsage:
-		return a.usage.Init()
 	case tabLogs:
 		if !a.logsEnabled {
 			return nil
@@ -360,8 +352,6 @@ func (a App) View() string {
 		sb.WriteString(a.keys.View())
 	case tabOAuth:
 		sb.WriteString(a.oauth.View())
-	case tabUsage:
-		sb.WriteString(a.usage.View())
 	case tabLogs:
 		if a.logsEnabled {
 			sb.WriteString(a.logs.View())
@@ -529,10 +519,6 @@ func (a App) broadcastToAllTabs(msg tea.Msg) (tea.Model, tea.Cmd) {
 	if cmd != nil {
 		cmds = append(cmds, cmd)
 	}
-	a.usage, cmd = a.usage.Update(msg)
-	if cmd != nil {
-		cmds = append(cmds, cmd)
-	}
 	a.logs, cmd = a.logs.Update(msg)
 	if cmd != nil {
 		cmds = append(cmds, cmd)
diff --git a/internal/tui/client.go b/internal/tui/client.go
index 6f75d6befc..5651e0fe39 100644
--- a/internal/tui/client.go
+++ b/internal/tui/client.go
@@ -140,11 +140,6 @@ func (c *Client) PutConfigYAML(yamlContent string) error {
 	return err
 }
 
-// GetUsage fetches usage statistics.
-func (c *Client) GetUsage() (map[string]any, error) {
-	return c.getJSON("/v0/management/usage")
-}
-
 // GetAuthFiles lists auth credential files.
 // API returns {"files": [...]}.
 func (c *Client) GetAuthFiles() ([]map[string]any, error) {
@@ -398,3 +393,8 @@ func (c *Client) DeleteField(path string) error {
 	_, _, err := c.doRequest("DELETE", "/v0/management/"+path, nil)
 	return err
 }
+
+// GetUsage retrieves usage statistics from the management API.
+func (c *Client) GetUsage() (map[string]any, error) {
+	return c.getJSON("/v0/management/usage")
+}
diff --git a/internal/tui/dashboard.go b/internal/tui/dashboard.go
index 8561fe9c5b..99b5409c2e 100644
--- a/internal/tui/dashboard.go
+++ b/internal/tui/dashboard.go
@@ -22,14 +22,12 @@ type dashboardModel struct {
 
 	// Cached data for re-rendering on locale change
 	lastConfig    map[string]any
-	lastUsage     map[string]any
 	lastAuthFiles []map[string]any
 	lastAPIKeys   []string
 }
 
 type dashboardDataMsg struct {
 	config    map[string]any
-	usage     map[string]any
 	authFiles []map[string]any
 	apiKeys   []string
 	err       error
@@ -47,25 +45,24 @@ func (m dashboardModel) Init() tea.Cmd {
 
 func (m dashboardModel) fetchData() tea.Msg {
 	cfg, cfgErr := m.client.GetConfig()
-	usage, usageErr := m.client.GetUsage()
 	authFiles, authErr := m.client.GetAuthFiles()
 	apiKeys, keysErr := m.client.GetAPIKeys()
 
 	var err error
-	for _, e := range []error{cfgErr, usageErr, authErr, keysErr} {
+	for _, e := range []error{cfgErr, authErr, keysErr} {
 		if e != nil {
 			err = e
 			break
 		}
 	}
 
-	return dashboardDataMsg{config: cfg, usage: usage, authFiles: authFiles, apiKeys: apiKeys, err: err}
+	return dashboardDataMsg{config: cfg, authFiles: authFiles, apiKeys: apiKeys, err: err}
 }
 
 func (m dashboardModel) Update(msg tea.Msg) (dashboardModel, tea.Cmd) {
 	switch msg := msg.(type) {
 	case localeChangedMsg:
 		// Re-render immediately with cached data using new locale
-		m.content = m.renderDashboard(m.lastConfig, m.lastUsage, m.lastAuthFiles, m.lastAPIKeys)
+		m.content = m.renderDashboard(m.lastConfig, m.lastAuthFiles, m.lastAPIKeys)
 		m.viewport.SetContent(m.content)
 		// Also fetch fresh data in background
 		return m, m.fetchData
@@ -78,11 +75,10 @@ func (m dashboardModel) Update(msg tea.Msg) (dashboardModel, tea.Cmd) {
 		m.err = nil
 		// Cache data for locale switching
 		m.lastConfig = msg.config
-		m.lastUsage = msg.usage
 		m.lastAuthFiles = msg.authFiles
 		m.lastAPIKeys = msg.apiKeys
 
-		m.content = m.renderDashboard(msg.config, msg.usage, msg.authFiles, msg.apiKeys)
+		m.content = m.renderDashboard(msg.config, msg.authFiles, msg.apiKeys)
 	}
 	m.viewport.SetContent(m.content)
 	return m, nil
@@ -121,7 +117,7 @@ func (m dashboardModel) View() string {
 	return m.viewport.View()
 }
 
-func (m dashboardModel) renderDashboard(cfg, usage map[string]any, authFiles []map[string]any, apiKeys []string) string {
+func (m dashboardModel) renderDashboard(cfg map[string]any, authFiles []map[string]any, apiKeys []string) string {
 	var sb strings.Builder
 
 	sb.WriteString(titleStyle.Render(T("dashboard_title")))
@@ -138,7 +134,7 @@ func (m dashboardModel) renderDashboard(cfg map[string]any, authFiles []m
 	// ━━━ Stats Cards ━━━
 	cardWidth := 25
 	if m.width > 0 {
-		cardWidth = (m.width - 6) / 4
+		cardWidth = (m.width - 2) / 2
 		if cardWidth < 18 {
 			cardWidth = 18
 		}
@@ -173,34 +169,7 @@ func (m dashboardModel) renderDashboard(cfg map[string]any, authFiles []m
 	// Card 3: Total Requests
-	totalReqs := int64(0)
-	successReqs := int64(0)
-	failedReqs := int64(0)
-	totalTokens := int64(0)
-	if usage != nil {
-		if usageMap, ok := usage["usage"].(map[string]any); ok {
-			totalReqs = int64(getFloat(usageMap, "total_requests"))
-			successReqs = int64(getFloat(usageMap, "success_count"))
-			failedReqs = int64(getFloat(usageMap, "failure_count"))
-			totalTokens = int64(getFloat(usageMap, "total_tokens"))
-		}
-	}
-	card3 := cardStyle.Render(fmt.Sprintf(
-		"%s\n%s",
-		lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("214")).Render(fmt.Sprintf("📈 %d", totalReqs)),
-		lipgloss.NewStyle().Foreground(colorMuted).Render(fmt.Sprintf("%s (✓%d ✗%d)", T("total_requests"), successReqs, failedReqs)),
-	))
-
-	// Card 4: Total Tokens
-	tokenStr := formatLargeNumber(totalTokens)
-	card4 := cardStyle.Render(fmt.Sprintf(
-		"%s\n%s",
-
lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("170")).Render(fmt.Sprintf("🔤 %s", tokenStr)), - lipgloss.NewStyle().Foreground(colorMuted).Render(T("total_tokens")), - )) - - sb.WriteString(lipgloss.JoinHorizontal(lipgloss.Top, card1, " ", card2, " ", card3, " ", card4)) + sb.WriteString(lipgloss.JoinHorizontal(lipgloss.Top, card1, " ", card2)) sb.WriteString("\n\n") // ━━━ Current Config ━━━ @@ -258,38 +227,6 @@ func (m dashboardModel) renderDashboard(cfg, usage map[string]any, authFiles []m sb.WriteString("\n") - // ━━━ Per-Model Usage ━━━ - if usage != nil { - if usageMap, ok := usage["usage"].(map[string]any); ok { - if apis, ok := usageMap["apis"].(map[string]any); ok && len(apis) > 0 { - sb.WriteString(lipgloss.NewStyle().Bold(true).Foreground(colorHighlight).Render(T("model_stats"))) - sb.WriteString("\n") - sb.WriteString(strings.Repeat("─", minInt(m.width, 60))) - sb.WriteString("\n") - - header := fmt.Sprintf(" %-40s %10s %12s", T("model"), T("requests"), T("tokens")) - sb.WriteString(tableHeaderStyle.Render(header)) - sb.WriteString("\n") - - for _, apiSnap := range apis { - if apiMap, ok := apiSnap.(map[string]any); ok { - if models, ok := apiMap["models"].(map[string]any); ok { - for model, v := range models { - if stats, ok := v.(map[string]any); ok { - reqs := int64(getFloat(stats, "total_requests")) - toks := int64(getFloat(stats, "total_tokens")) - row := fmt.Sprintf(" %-40s %10d %12s", truncate(model, 40), reqs, formatLargeNumber(toks)) - sb.WriteString(tableCellStyle.Render(row)) - sb.WriteString("\n") - } - } - } - } - } - } - } - } - return sb.String() } diff --git a/internal/tui/i18n.go b/internal/tui/i18n.go index f6a33ca481..a4c0ac1658 100644 --- a/internal/tui/i18n.go +++ b/internal/tui/i18n.go @@ -50,8 +50,8 @@ var locales = map[string]map[string]string{ // ────────────────────────────────────────── // Tab names // ────────────────────────────────────────── -var zhTabNames = []string{"仪表盘", "配置", "认证文件", "API 密钥", "OAuth", 
"使用统计", "日志"} -var enTabNames = []string{"Dashboard", "Config", "Auth Files", "API Keys", "OAuth", "Usage", "Logs"} +var zhTabNames = []string{"仪表盘", "配置", "认证文件", "API 密钥", "OAuth", "日志"} +var enTabNames = []string{"Dashboard", "Config", "Auth Files", "API Keys", "OAuth", "Logs"} // TabNames returns tab names in the current locale. func TabNames() []string { diff --git a/internal/usage/logger_plugin.go b/internal/usage/logger_plugin.go index 803d005ee2..05deb558db 100644 --- a/internal/usage/logger_plugin.go +++ b/internal/usage/logger_plugin.go @@ -12,7 +12,7 @@ import ( "time" "github.com/gin-gonic/gin" - coreusage "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/usage" + coreusage "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" ) var statisticsEnabled atomic.Bool diff --git a/internal/usage/logger_plugin_test.go b/internal/usage/logger_plugin_test.go index 842b3f0cad..378e150b18 100644 --- a/internal/usage/logger_plugin_test.go +++ b/internal/usage/logger_plugin_test.go @@ -5,7 +5,7 @@ import ( "testing" "time" - coreusage "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/usage" + coreusage "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" ) func TestRequestStatisticsRecordIncludesLatency(t *testing.T) { diff --git a/internal/util/gemini_schema.go b/internal/util/gemini_schema.go index 4cc946d5f3..5383ca0360 100644 --- a/internal/util/gemini_schema.go +++ b/internal/util/gemini_schema.go @@ -801,3 +801,44 @@ func mergeDescriptionRaw(schemaRaw, parentDesc string) string { return string(updated) } } + +// --- Helpers --- + +// CleanupOrphanedRequiredInTools removes "required" entries from +// tools[].function.parameters that reference properties not defined in the +// corresponding "properties" object. Moonshot/Kimi strictly validates that +// every item in "required" must have a matching entry in "properties". 
+func CleanupOrphanedRequiredInTools(body []byte) []byte { + if len(body) == 0 || !gjson.ValidBytes(body) { + return body + } + tools := gjson.GetBytes(body, "tools") + if !tools.Exists() || !tools.IsArray() || len(tools.Array()) == 0 { + return body + } + + out := string(body) + changed := false + + tools.ForEach(func(idx, tool gjson.Result) bool { + params := tool.Get("function.parameters") + if !params.Exists() { + return true + } + cleaned := cleanupRequiredFields(params.Raw) + if cleaned != params.Raw { + path := fmt.Sprintf("tools.%d.function.parameters", idx.Int()) + updated, err := sjson.SetRaw(out, path, cleaned) + if err == nil { + out = updated + changed = true + } + } + return true + }) + + if !changed { + return body + } + return []byte(out) +} diff --git a/internal/util/header_helpers.go b/internal/util/header_helpers.go index c53c291f10..0b8d72bcb4 100644 --- a/internal/util/header_helpers.go +++ b/internal/util/header_helpers.go @@ -47,6 +47,14 @@ func applyCustomHeaders(r *http.Request, headers map[string]string) { if k == "" || v == "" { continue } + // net/http reads Host from req.Host (not req.Header) when writing + // a real request, so we must mirror it there. Some callers pass + // synthetic requests (e.g. &http.Request{Header: ...}) and only + // consume r.Header afterwards, so keep the value in the header + // map too. 
+ if http.CanonicalHeaderKey(k) == "Host" { + r.Host = v + } r.Header.Set(k, v) } } diff --git a/internal/util/provider.go b/internal/util/provider.go index ce0ed1a397..6313f58e32 100644 --- a/internal/util/provider.go +++ b/internal/util/provider.go @@ -7,8 +7,8 @@ import ( "net/url" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" log "github.com/sirupsen/logrus" ) @@ -98,6 +98,9 @@ func IsOpenAICompatibilityAlias(modelName string, cfg *config.Config) bool { } for _, compat := range cfg.OpenAICompatibility { + if compat.Disabled { + continue + } for _, model := range compat.Models { if model.Alias == modelName { return true @@ -123,6 +126,9 @@ func GetOpenAICompatibilityConfig(alias string, cfg *config.Config) (*config.Ope } for _, compat := range cfg.OpenAICompatibility { + if compat.Disabled { + continue + } for _, model := range compat.Models { if model.Alias == alias { return &compat, &model diff --git a/internal/util/proxy.go b/internal/util/proxy.go index 9b57ca1733..781dd54dc0 100644 --- a/internal/util/proxy.go +++ b/internal/util/proxy.go @@ -6,8 +6,8 @@ package util import ( "net/http" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil" log "github.com/sirupsen/logrus" ) diff --git a/internal/util/util.go b/internal/util/util.go index 9bf630f299..2c50cf67b5 100644 --- a/internal/util/util.go +++ b/internal/util/util.go @@ -11,7 +11,7 @@ import ( "regexp" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" log "github.com/sirupsen/logrus" ) @@ -73,9 +73,10 @@ func SetLogLevel(cfg *config.Config) { // 
ResolveAuthDir normalizes the auth directory path for consistent reuse throughout the app. // It expands a leading tilde (~) to the user's home directory and returns a cleaned path. +// If authDir is empty, it defaults to ~/.cli-proxy-api. func ResolveAuthDir(authDir string) (string, error) { if authDir == "" { - return "", nil + authDir = config.DefaultAuthDir } if strings.HasPrefix(authDir, "~") { home, err := os.UserHomeDir() diff --git a/internal/watcher/clients.go b/internal/watcher/clients.go index 7746f4ad3b..0a46660e8b 100644 --- a/internal/watcher/clients.go +++ b/internal/watcher/clients.go @@ -13,11 +13,11 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - "github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/diff" - "github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/synthesizer" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/diff" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/synthesizer" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" ) @@ -357,6 +357,9 @@ func BuildAPIKeyClients(cfg *config.Config) (int, int, int, int, int) { } if len(cfg.OpenAICompatibility) > 0 { for _, compatConfig := range cfg.OpenAICompatibility { + if compatConfig.Disabled { + continue + } openAICompatCount += len(compatConfig.APIKeyEntries) } } diff --git a/internal/watcher/config_reload.go b/internal/watcher/config_reload.go index 1bbf4ef239..0471f8b3f2 100644 --- a/internal/watcher/config_reload.go +++ b/internal/watcher/config_reload.go @@ -9,9 +9,9 @@ import ( "reflect" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - 
"github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/diff" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/diff" "gopkg.in/yaml.v3" log "github.com/sirupsen/logrus" diff --git a/internal/watcher/diff/auth_diff.go b/internal/watcher/diff/auth_diff.go index 4b6e600852..39fe5e886d 100644 --- a/internal/watcher/diff/auth_diff.go +++ b/internal/watcher/diff/auth_diff.go @@ -5,7 +5,7 @@ import ( "fmt" "strings" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // BuildAuthChangeDetails computes a redacted, human-readable list of auth field changes. diff --git a/internal/watcher/diff/config_diff.go b/internal/watcher/diff/config_diff.go index 11f9093e80..c206049e43 100644 --- a/internal/watcher/diff/config_diff.go +++ b/internal/watcher/diff/config_diff.go @@ -6,7 +6,7 @@ import ( "reflect" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) // BuildConfigChangeDetails computes a redacted, human-readable list of config changes. 
@@ -39,9 +39,15 @@ func BuildConfigChangeDetails(oldCfg, newCfg *config.Config) []string { if oldCfg.UsageStatisticsEnabled != newCfg.UsageStatisticsEnabled { changes = append(changes, fmt.Sprintf("usage-statistics-enabled: %t -> %t", oldCfg.UsageStatisticsEnabled, newCfg.UsageStatisticsEnabled)) } + if oldCfg.RedisUsageQueueRetentionSeconds != newCfg.RedisUsageQueueRetentionSeconds { + changes = append(changes, fmt.Sprintf("redis-usage-queue-retention-seconds: %d -> %d", oldCfg.RedisUsageQueueRetentionSeconds, newCfg.RedisUsageQueueRetentionSeconds)) + } if oldCfg.DisableCooling != newCfg.DisableCooling { changes = append(changes, fmt.Sprintf("disable-cooling: %t -> %t", oldCfg.DisableCooling, newCfg.DisableCooling)) } + if oldCfg.DisableImageGeneration != newCfg.DisableImageGeneration { + changes = append(changes, fmt.Sprintf("disable-image-generation: %v -> %v", oldCfg.DisableImageGeneration, newCfg.DisableImageGeneration)) + } if oldCfg.RequestLog != newCfg.RequestLog { changes = append(changes, fmt.Sprintf("request-log: %t -> %t", oldCfg.RequestLog, newCfg.RequestLog)) } diff --git a/internal/watcher/diff/config_diff_test.go b/internal/watcher/diff/config_diff_test.go index 2d45aa5743..192791ea74 100644 --- a/internal/watcher/diff/config_diff_test.go +++ b/internal/watcher/diff/config_diff_test.go @@ -3,8 +3,8 @@ package diff import ( "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func TestBuildConfigChangeDetails(t *testing.T) { @@ -279,6 +279,7 @@ func TestBuildConfigChangeDetails_FlagsAndKeys(t *testing.T) { APIKeys: []string{" key-1 ", "key-2"}, ForceModelPrefix: true, NonStreamKeepAliveInterval: 5, + DisableImageGeneration: config.DisableImageGenerationAll, }, } @@ -287,6 +288,7 @@ func TestBuildConfigChangeDetails_FlagsAndKeys(t *testing.T) { 
expectContains(t, details, "logging-to-file: false -> true") expectContains(t, details, "usage-statistics-enabled: false -> true") expectContains(t, details, "disable-cooling: false -> true") + expectContains(t, details, "disable-image-generation: false -> true") expectContains(t, details, "request-log: false -> true") expectContains(t, details, "request-retry: 1 -> 2") expectContains(t, details, "max-retry-credentials: 1 -> 3") @@ -403,9 +405,10 @@ func TestBuildConfigChangeDetails_AllBranches(t *testing.T) { SecretKey: "", }, SDKConfig: sdkconfig.SDKConfig{ - RequestLog: true, - ProxyURL: "http://new-proxy", - APIKeys: []string{"keyB"}, + RequestLog: true, + ProxyURL: "http://new-proxy", + APIKeys: []string{"keyB"}, + DisableImageGeneration: config.DisableImageGenerationAll, }, OAuthExcludedModels: map[string][]string{"p1": {"b", "c"}, "p2": {"d"}}, OpenAICompatibility: []config.OpenAICompatibility{ @@ -431,6 +434,7 @@ func TestBuildConfigChangeDetails_AllBranches(t *testing.T) { expectContains(t, changes, "logging-to-file: false -> true") expectContains(t, changes, "usage-statistics-enabled: false -> true") expectContains(t, changes, "disable-cooling: false -> true") + expectContains(t, changes, "disable-image-generation: false -> true") expectContains(t, changes, "request-retry: 1 -> 2") expectContains(t, changes, "max-retry-credentials: 1 -> 3") expectContains(t, changes, "max-retry-interval: 1 -> 3") diff --git a/internal/watcher/diff/model_hash.go b/internal/watcher/diff/model_hash.go index 5779faccd7..fed3386a7a 100644 --- a/internal/watcher/diff/model_hash.go +++ b/internal/watcher/diff/model_hash.go @@ -7,7 +7,7 @@ import ( "sort" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) // ComputeOpenAICompatModelsHash returns a stable hash for OpenAI-compat models. 
diff --git a/internal/watcher/diff/model_hash_test.go b/internal/watcher/diff/model_hash_test.go index db06ebd12c..b687d4da2e 100644 --- a/internal/watcher/diff/model_hash_test.go +++ b/internal/watcher/diff/model_hash_test.go @@ -3,7 +3,7 @@ package diff import ( "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func TestComputeOpenAICompatModelsHash_Deterministic(t *testing.T) { diff --git a/internal/watcher/diff/models_summary.go b/internal/watcher/diff/models_summary.go index 9c2aa91ac4..4c9b035a16 100644 --- a/internal/watcher/diff/models_summary.go +++ b/internal/watcher/diff/models_summary.go @@ -6,7 +6,7 @@ import ( "sort" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) type GeminiModelsSummary struct { diff --git a/internal/watcher/diff/oauth_excluded.go b/internal/watcher/diff/oauth_excluded.go index 2039cf4898..d632062840 100644 --- a/internal/watcher/diff/oauth_excluded.go +++ b/internal/watcher/diff/oauth_excluded.go @@ -7,7 +7,7 @@ import ( "sort" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) type ExcludedModelsSummary struct { diff --git a/internal/watcher/diff/oauth_excluded_test.go b/internal/watcher/diff/oauth_excluded_test.go index f5ad391358..8643f59447 100644 --- a/internal/watcher/diff/oauth_excluded_test.go +++ b/internal/watcher/diff/oauth_excluded_test.go @@ -3,7 +3,7 @@ package diff import ( "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func TestSummarizeExcludedModels_NormalizesAndDedupes(t *testing.T) { diff --git a/internal/watcher/diff/oauth_model_alias.go b/internal/watcher/diff/oauth_model_alias.go index c5a17d2940..8c14089b9f 100644 --- a/internal/watcher/diff/oauth_model_alias.go +++ 
b/internal/watcher/diff/oauth_model_alias.go @@ -7,7 +7,7 @@ import ( "sort" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) type OAuthModelAliasSummary struct { diff --git a/internal/watcher/diff/openai_compat.go b/internal/watcher/diff/openai_compat.go index 6b01aed296..31d0bcd99d 100644 --- a/internal/watcher/diff/openai_compat.go +++ b/internal/watcher/diff/openai_compat.go @@ -7,7 +7,7 @@ import ( "sort" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) // DiffOpenAICompatibility produces human-readable change descriptions. @@ -66,6 +66,9 @@ func describeOpenAICompatibilityUpdate(oldEntry, newEntry config.OpenAICompatibi oldModelCount := countOpenAIModels(oldEntry.Models) newModelCount := countOpenAIModels(newEntry.Models) details := make([]string, 0, 3) + if oldEntry.Disabled != newEntry.Disabled { + details = append(details, fmt.Sprintf("disabled %t -> %t", oldEntry.Disabled, newEntry.Disabled)) + } if oldKeyCount != newKeyCount { details = append(details, fmt.Sprintf("api-keys %d -> %d", oldKeyCount, newKeyCount)) } diff --git a/internal/watcher/diff/openai_compat_test.go b/internal/watcher/diff/openai_compat_test.go index db33db1487..5683671ae4 100644 --- a/internal/watcher/diff/openai_compat_test.go +++ b/internal/watcher/diff/openai_compat_test.go @@ -4,7 +4,7 @@ import ( "strings" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func TestDiffOpenAICompatibility(t *testing.T) { diff --git a/internal/watcher/dispatcher.go b/internal/watcher/dispatcher.go index 3d7d7527b3..d0182e2c25 100644 --- a/internal/watcher/dispatcher.go +++ b/internal/watcher/dispatcher.go @@ -9,9 +9,9 @@ import ( "sync" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - 
"github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/synthesizer" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/synthesizer" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) var snapshotCoreAuthsFunc = snapshotCoreAuths diff --git a/internal/watcher/synthesizer/config.go b/internal/watcher/synthesizer/config.go index 52ae9a4808..1eea3dc112 100644 --- a/internal/watcher/synthesizer/config.go +++ b/internal/watcher/synthesizer/config.go @@ -5,8 +5,8 @@ import ( "strconv" "strings" - "github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/diff" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/diff" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // ConfigSynthesizer generates Auth entries from configuration API keys. 
@@ -60,6 +60,10 @@ func (s *ConfigSynthesizer) synthesizeGeminiKeys(ctx *SynthesisContext) []*corea "source": fmt.Sprintf("config:gemini[%s]", token), "api_key": key, } + metadata := map[string]any{} + if entry.DisableCooling { + metadata["disable_cooling"] = true + } if entry.Priority != 0 { attrs["priority"] = strconv.Itoa(entry.Priority) } @@ -78,10 +82,14 @@ func (s *ConfigSynthesizer) synthesizeGeminiKeys(ctx *SynthesisContext) []*corea Status: coreauth.StatusActive, ProxyURL: proxyURL, Attributes: attrs, + Metadata: metadata, CreatedAt: now, UpdatedAt: now, } ApplyAuthExcludedModelsMeta(a, cfg, entry.ExcludedModels, "apikey") + if len(a.Metadata) == 0 { + a.Metadata = nil + } out = append(out, a) } return out @@ -107,6 +115,10 @@ func (s *ConfigSynthesizer) synthesizeClaudeKeys(ctx *SynthesisContext) []*corea "source": fmt.Sprintf("config:claude[%s]", token), "api_key": key, } + metadata := map[string]any{} + if ck.DisableCooling { + metadata["disable_cooling"] = true + } if ck.Priority != 0 { attrs["priority"] = strconv.Itoa(ck.Priority) } @@ -126,10 +138,14 @@ func (s *ConfigSynthesizer) synthesizeClaudeKeys(ctx *SynthesisContext) []*corea Status: coreauth.StatusActive, ProxyURL: proxyURL, Attributes: attrs, + Metadata: metadata, CreatedAt: now, UpdatedAt: now, } ApplyAuthExcludedModelsMeta(a, cfg, ck.ExcludedModels, "apikey") + if len(a.Metadata) == 0 { + a.Metadata = nil + } out = append(out, a) } return out @@ -154,6 +170,10 @@ func (s *ConfigSynthesizer) synthesizeCodexKeys(ctx *SynthesisContext) []*coreau "source": fmt.Sprintf("config:codex[%s]", token), "api_key": key, } + metadata := map[string]any{} + if ck.DisableCooling { + metadata["disable_cooling"] = true + } if ck.Priority != 0 { attrs["priority"] = strconv.Itoa(ck.Priority) } @@ -176,10 +196,14 @@ func (s *ConfigSynthesizer) synthesizeCodexKeys(ctx *SynthesisContext) []*coreau Status: coreauth.StatusActive, ProxyURL: proxyURL, Attributes: attrs, + Metadata: metadata, CreatedAt: now, 
UpdatedAt: now, } ApplyAuthExcludedModelsMeta(a, cfg, ck.ExcludedModels, "apikey") + if len(a.Metadata) == 0 { + a.Metadata = nil + } out = append(out, a) } return out @@ -194,12 +218,16 @@ func (s *ConfigSynthesizer) synthesizeOpenAICompat(ctx *SynthesisContext) []*cor out := make([]*coreauth.Auth, 0) for i := range cfg.OpenAICompatibility { compat := &cfg.OpenAICompatibility[i] + if compat.Disabled { + continue + } prefix := strings.TrimSpace(compat.Prefix) providerName := strings.ToLower(strings.TrimSpace(compat.Name)) if providerName == "" { providerName = "openai-compatibility" } base := strings.TrimSpace(compat.BaseURL) + disableCooling := compat.DisableCooling // Handle new APIKeyEntries format (preferred) createdEntries := 0 @@ -215,6 +243,10 @@ func (s *ConfigSynthesizer) synthesizeOpenAICompat(ctx *SynthesisContext) []*cor "compat_name": compat.Name, "provider_key": providerName, } + metadata := map[string]any{} + if disableCooling { + metadata["disable_cooling"] = true + } if compat.Priority != 0 { attrs["priority"] = strconv.Itoa(compat.Priority) } @@ -233,9 +265,13 @@ func (s *ConfigSynthesizer) synthesizeOpenAICompat(ctx *SynthesisContext) []*cor Status: coreauth.StatusActive, ProxyURL: proxyURL, Attributes: attrs, + Metadata: metadata, CreatedAt: now, UpdatedAt: now, } + if len(a.Metadata) == 0 { + a.Metadata = nil + } out = append(out, a) createdEntries++ } @@ -249,6 +285,10 @@ func (s *ConfigSynthesizer) synthesizeOpenAICompat(ctx *SynthesisContext) []*cor "compat_name": compat.Name, "provider_key": providerName, } + metadata := map[string]any{} + if disableCooling { + metadata["disable_cooling"] = true + } if compat.Priority != 0 { attrs["priority"] = strconv.Itoa(compat.Priority) } @@ -263,9 +303,13 @@ func (s *ConfigSynthesizer) synthesizeOpenAICompat(ctx *SynthesisContext) []*cor Prefix: prefix, Status: coreauth.StatusActive, Attributes: attrs, + Metadata: metadata, CreatedAt: now, UpdatedAt: now, } + if len(a.Metadata) == 0 { + a.Metadata = 
nil + } out = append(out, a) } } diff --git a/internal/watcher/synthesizer/config_test.go b/internal/watcher/synthesizer/config_test.go index 437f18d11e..c8526a654a 100644 --- a/internal/watcher/synthesizer/config_test.go +++ b/internal/watcher/synthesizer/config_test.go @@ -4,8 +4,8 @@ import ( "testing" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) func TestNewConfigSynthesizer(t *testing.T) { @@ -68,11 +68,26 @@ func TestConfigSynthesizer_GeminiKeys(t *testing.T) { if auths[0].Attributes["api_key"] != "test-key-123" { t.Errorf("expected api_key test-key-123, got %s", auths[0].Attributes["api_key"]) } + if auths[0].Metadata != nil { + t.Errorf("expected metadata to be nil when disable_cooling not set, got %v", auths[0].Metadata) + } if auths[0].Status != coreauth.StatusActive { t.Errorf("expected status active, got %s", auths[0].Status) } }, }, + { + name: "gemini key disable cooling", + geminiKeys: []config.GeminiKey{ + {APIKey: "test-key-123", Prefix: "team-a", DisableCooling: true}, + }, + wantLen: 1, + validate: func(t *testing.T, auths []*coreauth.Auth) { + if v, ok := auths[0].Metadata["disable_cooling"].(bool); !ok || !v { + t.Errorf("expected disable_cooling=true, got %v", auths[0].Metadata["disable_cooling"]) + } + }, + }, { name: "gemini key with base url and proxy", geminiKeys: []config.GeminiKey{ @@ -160,9 +175,10 @@ func TestConfigSynthesizer_ClaudeKeys(t *testing.T) { Config: &config.Config{ ClaudeKey: []config.ClaudeKey{ { - APIKey: "sk-ant-api-xxx", - Prefix: "main", - BaseURL: "https://api.anthropic.com", + APIKey: "sk-ant-api-xxx", + Prefix: "main", + BaseURL: "https://api.anthropic.com", + DisableCooling: true, Models: []config.ClaudeModel{ {Name: "claude-3-opus"}, {Name: "claude-3-sonnet"}, @@ -197,6 +213,9 @@ func 
TestConfigSynthesizer_ClaudeKeys(t *testing.T) { if _, ok := auths[0].Attributes["models_hash"]; !ok { t.Error("expected models_hash in attributes") } + if v, ok := auths[0].Metadata["disable_cooling"].(bool); !ok || !v { + t.Errorf("expected disable_cooling=true, got %v", auths[0].Metadata["disable_cooling"]) + } } func TestConfigSynthesizer_ClaudeKeys_SkipsEmptyAndHeaders(t *testing.T) { @@ -231,11 +250,12 @@ func TestConfigSynthesizer_CodexKeys(t *testing.T) { Config: &config.Config{ CodexKey: []config.CodexKey{ { - APIKey: "codex-key-123", - Prefix: "dev", - BaseURL: "https://api.openai.com", - ProxyURL: "http://proxy.local", - Websockets: true, + APIKey: "codex-key-123", + Prefix: "dev", + BaseURL: "https://api.openai.com", + ProxyURL: "http://proxy.local", + Websockets: true, + DisableCooling: true, }, }, }, @@ -263,6 +283,9 @@ func TestConfigSynthesizer_CodexKeys(t *testing.T) { if auths[0].Attributes["websockets"] != "true" { t.Errorf("expected websockets=true, got %s", auths[0].Attributes["websockets"]) } + if v, ok := auths[0].Metadata["disable_cooling"].(bool); !ok || !v { + t.Errorf("expected disable_cooling=true, got %v", auths[0].Metadata["disable_cooling"]) + } } func TestConfigSynthesizer_CodexKeys_SkipsEmptyAndHeaders(t *testing.T) { @@ -301,8 +324,9 @@ func TestConfigSynthesizer_OpenAICompat(t *testing.T) { name: "with APIKeyEntries", compat: []config.OpenAICompatibility{ { - Name: "CustomProvider", - BaseURL: "https://custom.api.com", + Name: "CustomProvider", + BaseURL: "https://custom.api.com", + DisableCooling: true, APIKeyEntries: []config.OpenAICompatibilityAPIKey{ {APIKey: "key-1"}, {APIKey: "key-2"}, @@ -365,6 +389,13 @@ func TestConfigSynthesizer_OpenAICompat(t *testing.T) { if len(auths) != tt.wantLen { t.Fatalf("expected %d auths, got %d", tt.wantLen, len(auths)) } + if tt.name == "with APIKeyEntries" { + for i := range auths { + if v, ok := auths[i].Metadata["disable_cooling"].(bool); !ok || !v { + t.Fatalf("expected 
auth[%d].disable_cooling=true, got %v", i, auths[i].Metadata["disable_cooling"]) + } + } + } }) } } diff --git a/internal/watcher/synthesizer/context.go b/internal/watcher/synthesizer/context.go index d973289a3a..f92b41ddaf 100644 --- a/internal/watcher/synthesizer/context.go +++ b/internal/watcher/synthesizer/context.go @@ -3,7 +3,7 @@ package synthesizer import ( "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) // SynthesisContext provides the context needed for auth synthesis. diff --git a/internal/watcher/synthesizer/file.go b/internal/watcher/synthesizer/file.go index 49a635e7e8..47990bc154 100644 --- a/internal/watcher/synthesizer/file.go +++ b/internal/watcher/synthesizer/file.go @@ -10,9 +10,9 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/geminicli" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/geminicli" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // FileSynthesizer generates Auth entries from OAuth JSON files. 
diff --git a/internal/watcher/synthesizer/file_test.go b/internal/watcher/synthesizer/file_test.go
index f3e4497923..63b394aaf5 100644
--- a/internal/watcher/synthesizer/file_test.go
+++ b/internal/watcher/synthesizer/file_test.go
@@ -8,8 +8,8 @@ import (
 	"testing"
 	"time"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 )
 
 func TestNewFileSynthesizer(t *testing.T) {
diff --git a/internal/watcher/synthesizer/helpers.go b/internal/watcher/synthesizer/helpers.go
index 102dc77e22..19b4c896f1 100644
--- a/internal/watcher/synthesizer/helpers.go
+++ b/internal/watcher/synthesizer/helpers.go
@@ -7,9 +7,9 @@ import (
 	"sort"
 	"strings"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/diff"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/diff"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 )
 
 // StableIDGenerator generates stable, deterministic IDs for auth entries.
diff --git a/internal/watcher/synthesizer/helpers_test.go b/internal/watcher/synthesizer/helpers_test.go
index 46b9c8a053..69ba85d60d 100644
--- a/internal/watcher/synthesizer/helpers_test.go
+++ b/internal/watcher/synthesizer/helpers_test.go
@@ -5,9 +5,9 @@ import (
 	"strings"
 	"testing"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/diff"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/diff"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 )
 
 func TestNewStableIDGenerator(t *testing.T) {
diff --git a/internal/watcher/synthesizer/interface.go b/internal/watcher/synthesizer/interface.go
index 1a9aedc965..e0962c11c9 100644
--- a/internal/watcher/synthesizer/interface.go
+++ b/internal/watcher/synthesizer/interface.go
@@ -5,7 +5,7 @@ package synthesizer
 import (
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 )
 
 // AuthSynthesizer defines the interface for generating Auth entries from various sources.
diff --git a/internal/watcher/watcher.go b/internal/watcher/watcher.go
index cf890a4c46..c18cd84d08 100644
--- a/internal/watcher/watcher.go
+++ b/internal/watcher/watcher.go
@@ -10,11 +10,11 @@ import (
 	"time"
 
 	"github.com/fsnotify/fsnotify"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
 	"gopkg.in/yaml.v3"
 
-	sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 	log "github.com/sirupsen/logrus"
 )
diff --git a/internal/watcher/watcher_test.go b/internal/watcher/watcher_test.go
index 00a7a14360..bb3b557777 100644
--- a/internal/watcher/watcher_test.go
+++ b/internal/watcher/watcher_test.go
@@ -14,11 +14,11 @@ import (
 	"time"
 
 	"github.com/fsnotify/fsnotify"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/diff"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/synthesizer"
-	sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/diff"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/synthesizer"
+	sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 	"gopkg.in/yaml.v3"
 )
diff --git a/sdk/api/handlers/claude/code_handlers.go b/sdk/api/handlers/claude/code_handlers.go
index 074ffc0d07..464f385eb5 100644
--- a/sdk/api/handlers/claude/code_handlers.go
+++ b/sdk/api/handlers/claude/code_handlers.go
@@ -16,10 +16,10 @@ import (
 	"net/http"
 
 	"github.com/gin-gonic/gin"
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers"
 	log "github.com/sirupsen/logrus"
 	"github.com/tidwall/gjson"
 )
diff --git a/sdk/api/handlers/claude/gitlab_duo_handler_test.go b/sdk/api/handlers/claude/gitlab_duo_handler_test.go
new file mode 100644
index 0000000000..2a06bf2d8c
--- /dev/null
+++ b/sdk/api/handlers/claude/gitlab_duo_handler_test.go
@@ -0,0 +1,151 @@
+package claude
+
+import (
+	"context"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+
+	"github.com/gin-gonic/gin"
+	internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	runtimeexecutor "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config"
+)
+
+func TestClaudeMessagesWithGitLabDuoAnthropicGateway(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	var gotPath, gotAuthHeader, gotRealmHeader string
+	upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		gotPath = r.URL.Path
+		gotAuthHeader = r.Header.Get("Authorization")
+		gotRealmHeader = r.Header.Get("X-Gitlab-Realm")
+		w.Header().Set("Content-Type", "application/json")
+		_, _ = w.Write([]byte(`{"id":"msg_1","type":"message","role":"assistant","model":"claude-sonnet-4-5","content":[{"type":"tool_use","id":"toolu_1","name":"Bash","input":{"cmd":"ls"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":11,"output_tokens":4}}`))
+	}))
+	defer upstream.Close()
+
+	manager, _ := registerGitLabDuoAnthropicAuth(t, upstream.URL)
+	base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager)
+	h := NewClaudeCodeAPIHandler(base)
+	router := gin.New()
+	router.POST("/v1/messages", h.ClaudeMessages)
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/messages", strings.NewReader(`{
+		"model":"claude-sonnet-4-5",
+		"max_tokens":128,
+		"messages":[{"role":"user","content":"list files"}],
+		"tools":[{"name":"Bash","description":"run bash","input_schema":{"type":"object","properties":{"cmd":{"type":"string"}},"required":["cmd"]}}]
+	}`))
+	req.Header.Set("Content-Type", "application/json")
+	req.Header.Set("Anthropic-Version", "2023-06-01")
+	resp := httptest.NewRecorder()
+	router.ServeHTTP(resp, req)
+
+	if resp.Code != http.StatusOK {
+		t.Fatalf("status = %d, want %d body=%s", resp.Code, http.StatusOK, resp.Body.String())
+	}
+	if gotPath != "/v1/proxy/anthropic/v1/messages" {
+		t.Fatalf("path = %q, want %q", gotPath, "/v1/proxy/anthropic/v1/messages")
+	}
+	if gotAuthHeader != "Bearer gateway-token" {
+		t.Fatalf("authorization = %q, want Bearer gateway-token", gotAuthHeader)
+	}
+	if gotRealmHeader != "saas" {
+		t.Fatalf("x-gitlab-realm = %q, want saas", gotRealmHeader)
+	}
+	if !strings.Contains(resp.Body.String(), `"tool_use"`) {
+		t.Fatalf("expected tool_use response, got %s", resp.Body.String())
+	}
+	if !strings.Contains(resp.Body.String(), `"Bash"`) {
+		t.Fatalf("expected Bash tool in response, got %s", resp.Body.String())
+	}
+}
+
+func TestClaudeMessagesStreamWithGitLabDuoAnthropicGateway(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	var gotPath string
+	upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		gotPath = r.URL.Path
+		w.Header().Set("Content-Type", "text/event-stream")
+		_, _ = w.Write([]byte("event: message_start\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_1\",\"type\":\"message\",\"role\":\"assistant\",\"model\":\"claude-sonnet-4-5\",\"content\":[],\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"input_tokens\":0,\"output_tokens\":0}}}\n\n"))
+		_, _ = w.Write([]byte("event: content_block_start\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"content_block_start\",\"index\":0,\"content_block\":{\"type\":\"text\",\"text\":\"\"}}\n\n"))
+		_, _ = w.Write([]byte("event: content_block_delta\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"content_block_delta\",\"index\":0,\"delta\":{\"type\":\"text_delta\",\"text\":\"hello from duo\"}}\n\n"))
+		_, _ = w.Write([]byte("event: message_delta\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"message_delta\",\"delta\":{\"stop_reason\":\"end_turn\",\"stop_sequence\":null},\"usage\":{\"input_tokens\":10,\"output_tokens\":3}}\n\n"))
+		_, _ = w.Write([]byte("event: message_stop\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"message_stop\"}\n\n"))
+	}))
+	defer upstream.Close()
+
+	manager, _ := registerGitLabDuoAnthropicAuth(t, upstream.URL)
+	base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager)
+	h := NewClaudeCodeAPIHandler(base)
+	router := gin.New()
+	router.POST("/v1/messages", h.ClaudeMessages)
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/messages", strings.NewReader(`{
+		"model":"claude-sonnet-4-5",
+		"stream":true,
+		"max_tokens":64,
+		"messages":[{"role":"user","content":"hello"}]
+	}`))
+	req.Header.Set("Content-Type", "application/json")
+	req.Header.Set("Anthropic-Version", "2023-06-01")
+	resp := httptest.NewRecorder()
+	router.ServeHTTP(resp, req)
+
+	if resp.Code != http.StatusOK {
+		t.Fatalf("status = %d, want %d body=%s", resp.Code, http.StatusOK, resp.Body.String())
+	}
+	if gotPath != "/v1/proxy/anthropic/v1/messages" {
+		t.Fatalf("path = %q, want %q", gotPath, "/v1/proxy/anthropic/v1/messages")
+	}
+	if got := resp.Header().Get("Content-Type"); got != "text/event-stream" {
+		t.Fatalf("content-type = %q, want text/event-stream", got)
+	}
+	if !strings.Contains(resp.Body.String(), "event: content_block_delta") {
+		t.Fatalf("expected streamed claude event, got %s", resp.Body.String())
+	}
+	if !strings.Contains(resp.Body.String(), "hello from duo") {
+		t.Fatalf("expected streamed text, got %s", resp.Body.String())
+	}
+}
+
+func registerGitLabDuoAnthropicAuth(t *testing.T, upstreamURL string) (*coreauth.Manager, string) {
+	t.Helper()
+
+	manager := coreauth.NewManager(nil, nil, nil)
+	manager.RegisterExecutor(runtimeexecutor.NewGitLabExecutor(&internalconfig.Config{}))
+
+	auth := &coreauth.Auth{
+		ID:       "gitlab-duo-claude-handler-test",
+		Provider: "gitlab",
+		Status:   coreauth.StatusActive,
+		Metadata: map[string]any{
+			"duo_gateway_base_url": upstreamURL,
+			"duo_gateway_token":    "gateway-token",
+			"duo_gateway_headers":  map[string]string{"X-Gitlab-Realm": "saas"},
+			"model_provider":       "anthropic",
+			"model_name":           "claude-sonnet-4-5",
+		},
+	}
+	registered, err := manager.Register(context.Background(), auth)
+	if err != nil {
+		t.Fatalf("register auth: %v", err)
+	}
+
+	registry.GetGlobalRegistry().RegisterClient(registered.ID, registered.Provider, runtimeexecutor.GitLabModelsFromAuth(registered))
+	t.Cleanup(func() {
+		registry.GetGlobalRegistry().UnregisterClient(registered.ID)
+	})
+	return manager, registered.ID
+}
diff --git a/sdk/api/handlers/gemini/gemini-cli_handlers.go b/sdk/api/handlers/gemini/gemini-cli_handlers.go
index 4c5ddf80f9..de79f05b7c 100644
--- a/sdk/api/handlers/gemini/gemini-cli_handlers.go
+++ b/sdk/api/handlers/gemini/gemini-cli_handlers.go
@@ -15,10 +15,10 @@ import (
 	"time"
 
 	"github.com/gin-gonic/gin"
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers"
 	log "github.com/sirupsen/logrus"
 	"github.com/tidwall/gjson"
 )
diff --git a/sdk/api/handlers/gemini/gemini_handlers.go b/sdk/api/handlers/gemini/gemini_handlers.go
index e51ad19bc5..60aed26a55 100644
--- a/sdk/api/handlers/gemini/gemini_handlers.go
+++ b/sdk/api/handlers/gemini/gemini_handlers.go
@@ -13,10 +13,10 @@ import (
 	"time"
 
 	"github.com/gin-gonic/gin"
-	. "github.com/router-for-me/CLIProxyAPI/v6/internal/constant"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers"
+	. "github.com/router-for-me/CLIProxyAPI/v7/internal/constant"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers"
 )
 
 // GeminiAPIHandler contains the handlers for Gemini API endpoints.
diff --git a/sdk/api/handlers/handlers.go b/sdk/api/handlers/handlers.go
index 49e73d4637..6e0adb6417 100644
--- a/sdk/api/handlers/handlers.go
+++ b/sdk/api/handlers/handlers.go
@@ -14,14 +14,14 @@ import (
 	"time"
 
 	"github.com/gin-gonic/gin"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/logging"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	coreexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
-	"github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
-	sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/logging"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	coreexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/config"
+	sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator"
 	"golang.org/x/net/context"
 )
@@ -55,6 +55,7 @@ const (
 type pinnedAuthContextKey struct{}
 type selectedAuthCallbackContextKey struct{}
 type executionSessionContextKey struct{}
+type disallowFreeAuthContextKey struct{}
 
 // WithPinnedAuthID returns a child context that requests execution on a specific auth ID.
 func WithPinnedAuthID(ctx context.Context, authID string) context.Context {
@@ -91,6 +92,14 @@ func WithExecutionSessionID(ctx context.Context, sessionID string) context.Conte
 	return context.WithValue(ctx, executionSessionContextKey{}, sessionID)
 }
 
+// WithDisallowFreeAuth returns a child context that requests skipping known free-tier credentials.
+func WithDisallowFreeAuth(ctx context.Context) context.Context {
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	return context.WithValue(ctx, disallowFreeAuthContextKey{}, true)
+}
+
 // BuildErrorResponseBody builds an OpenAI-compatible JSON error response body.
 // If errText is already valid JSON, it is returned as-is to preserve upstream error payloads.
 func BuildErrorResponseBody(status int, errText string) []byte {
@@ -189,9 +198,14 @@ func requestExecutionMetadata(ctx context.Context) map[string]any {
 	// Idempotency-Key is an optional client-supplied header used to correlate retries.
 	// Only include it if the client explicitly provides it.
 	key := ""
+	requestPath := ""
 	if ctx != nil {
 		if ginCtx, ok := ctx.Value("gin").(*gin.Context); ok && ginCtx != nil && ginCtx.Request != nil {
 			key = strings.TrimSpace(ginCtx.GetHeader("Idempotency-Key"))
+			requestPath = strings.TrimSpace(ginCtx.FullPath())
+			if requestPath == "" && ginCtx.Request.URL != nil {
+				requestPath = strings.TrimSpace(ginCtx.Request.URL.Path)
+			}
 		}
 	}
@@ -199,6 +213,9 @@ func requestExecutionMetadata(ctx context.Context) map[string]any {
 	if key != "" {
 		meta[idempotencyKeyMetadataKey] = key
 	}
+	if requestPath != "" {
+		meta[coreexecutor.RequestPathMetadataKey] = requestPath
+	}
 	if pinnedAuthID := pinnedAuthIDFromContext(ctx); pinnedAuthID != "" {
 		meta[coreexecutor.PinnedAuthMetadataKey] = pinnedAuthID
 	}
@@ -208,9 +225,25 @@ func requestExecutionMetadata(ctx context.Context) map[string]any {
 	if executionSessionID := executionSessionIDFromContext(ctx); executionSessionID != "" {
 		meta[coreexecutor.ExecutionSessionMetadataKey] = executionSessionID
 	}
+	if disallowFreeAuthFromContext(ctx) {
+		meta[coreexecutor.DisallowFreeAuthMetadataKey] = true
+	}
 	return meta
 }
 
+// headersFromContext extracts the original HTTP request headers from the gin context
+// embedded in the provided context. This allows session affinity selectors to read
+// client headers like X-Amp-Thread-Id.
+func headersFromContext(ctx context.Context) http.Header {
+	if ctx == nil {
+		return nil
+	}
+	if ginCtx, ok := ctx.Value("gin").(*gin.Context); ok && ginCtx != nil && ginCtx.Request != nil {
+		return ginCtx.Request.Header.Clone()
+	}
+	return nil
+}
+
 func pinnedAuthIDFromContext(ctx context.Context) string {
 	if ctx == nil {
 		return ""
@@ -252,6 +285,14 @@ func executionSessionIDFromContext(ctx context.Context) string {
 	}
 }
 
+func disallowFreeAuthFromContext(ctx context.Context) bool {
+	if ctx == nil {
+		return false
+	}
+	raw, ok := ctx.Value(disallowFreeAuthContextKey{}).(bool)
+	return ok && raw
+}
+
 // BaseAPIHandler contains the handlers for API endpoints.
 // It holds a pool of clients to interact with the backend service and manages
 // load balancing, client selection, and configuration.
@@ -334,11 +375,32 @@ func (h *BaseAPIHandler) GetContextWithCancel(handler interfaces.APIHandler, c *
 	if requestCtx != nil && logging.GetRequestID(parentCtx) == "" {
 		if requestID := logging.GetRequestID(requestCtx); requestID != "" {
 			parentCtx = logging.WithRequestID(parentCtx, requestID)
-		} else if requestID := logging.GetGinRequestID(c); requestID != "" {
+		} else if requestID = logging.GetGinRequestID(c); requestID != "" {
 			parentCtx = logging.WithRequestID(parentCtx, requestID)
 		}
 	}
 	newCtx, cancel := context.WithCancel(parentCtx)
+
+	endpoint := ""
+	if c != nil && c.Request != nil {
+		path := strings.TrimSpace(c.FullPath())
+		if path == "" && c.Request.URL != nil {
+			path = strings.TrimSpace(c.Request.URL.Path)
+		}
+		if path != "" {
+			method := strings.TrimSpace(c.Request.Method)
+			if method != "" {
+				endpoint = method + " " + path
+			} else {
+				endpoint = path
+			}
+		}
+	}
+	if endpoint != "" {
+		newCtx = logging.WithEndpoint(newCtx, endpoint)
+	}
+	newCtx = logging.WithResponseStatusHolder(newCtx)
+	cancelCtx := newCtx
 
 	if requestCtx != nil && requestCtx != parentCtx {
 		go func() {
@@ -352,6 +414,9 @@ func (h *BaseAPIHandler) GetContextWithCancel(handler interfaces.APIHandler, c *
 	newCtx = context.WithValue(newCtx, "gin", c)
 	newCtx = context.WithValue(newCtx, "handler", handler)
 	return newCtx, func(params ...interface{}) {
+		if c != nil {
+			logging.SetResponseStatus(cancelCtx, c.Writer.Status())
+		}
 		if h.Cfg.RequestLog && len(params) == 1 {
 			if existing, exists := c.Get("API_RESPONSE"); exists {
 				if existingBytes, ok := existing.([]byte); ok && len(bytes.TrimSpace(existingBytes)) > 0 {
@@ -474,7 +539,7 @@ func (h *BaseAPIHandler) ExecuteWithAuthManager(ctx context.Context, handlerType
 		return nil, nil, errMsg
 	}
 	reqMeta := requestExecutionMetadata(ctx)
-	reqMeta[coreexecutor.RequestedModelMetadataKey] = normalizedModel
+	reqMeta[coreexecutor.RequestedModelMetadataKey] = modelName
 	payload := rawJSON
 	if len(payload) == 0 {
 		payload = nil
@@ -488,6 +553,7 @@ func (h *BaseAPIHandler) ExecuteWithAuthManager(ctx context.Context, handlerType
 		Alt:             alt,
 		OriginalRequest: rawJSON,
 		SourceFormat:    sdktranslator.FromString(handlerType),
+		Headers:         headersFromContext(ctx),
 	}
 	opts.Metadata = reqMeta
 	resp, err := h.AuthManager.Execute(ctx, providers, req, opts)
@@ -521,7 +587,7 @@ func (h *BaseAPIHandler) ExecuteCountWithAuthManager(ctx context.Context, handle
 		return nil, nil, errMsg
 	}
 	reqMeta := requestExecutionMetadata(ctx)
-	reqMeta[coreexecutor.RequestedModelMetadataKey] = normalizedModel
+	reqMeta[coreexecutor.RequestedModelMetadataKey] = modelName
 	payload := rawJSON
 	if len(payload) == 0 {
 		payload = nil
@@ -535,6 +601,7 @@ func (h *BaseAPIHandler) ExecuteCountWithAuthManager(ctx context.Context, handle
 		Alt:             alt,
 		OriginalRequest: rawJSON,
 		SourceFormat:    sdktranslator.FromString(handlerType),
+		Headers:         headersFromContext(ctx),
 	}
 	opts.Metadata = reqMeta
 	resp, err := h.AuthManager.ExecuteCount(ctx, providers, req, opts)
@@ -572,7 +639,7 @@ func (h *BaseAPIHandler) ExecuteStreamWithAuthManager(ctx context.Context, handl
 		return nil, nil, errChan
 	}
 	reqMeta := requestExecutionMetadata(ctx)
-	reqMeta[coreexecutor.RequestedModelMetadataKey] = normalizedModel
+	reqMeta[coreexecutor.RequestedModelMetadataKey] = modelName
 	payload := rawJSON
 	if len(payload) == 0 {
 		payload = nil
@@ -586,6 +653,7 @@ func (h *BaseAPIHandler) ExecuteStreamWithAuthManager(ctx context.Context, handl
 		Alt:             alt,
 		OriginalRequest: rawJSON,
 		SourceFormat:    sdktranslator.FromString(handlerType),
+		Headers:         headersFromContext(ctx),
 	}
 	opts.Metadata = reqMeta
 	streamResult, err := h.AuthManager.ExecuteStream(ctx, providers, req, opts)
@@ -782,19 +850,38 @@ func (h *BaseAPIHandler) getRequestDetails(modelName string) (providers []string
 	resolvedModelName := modelName
 	initialSuffix := thinking.ParseSuffix(modelName)
 	if initialSuffix.ModelName == "auto" {
-		resolvedBase := util.ResolveAutoModel(initialSuffix.ModelName)
-		if initialSuffix.HasSuffix {
-			resolvedModelName = fmt.Sprintf("%s(%s)", resolvedBase, initialSuffix.RawSuffix)
+		if h != nil && h.AuthManager != nil && h.AuthManager.HomeEnabled() {
+			resolvedModelName = modelName
 		} else {
-			resolvedModelName = resolvedBase
+			resolvedBase := util.ResolveAutoModel(initialSuffix.ModelName)
+			if initialSuffix.HasSuffix {
+				resolvedModelName = fmt.Sprintf("%s(%s)", resolvedBase, initialSuffix.RawSuffix)
+			} else {
+				resolvedModelName = resolvedBase
+			}
 		}
 	} else {
-		resolvedModelName = util.ResolveAutoModel(modelName)
+		if h != nil && h.AuthManager != nil && h.AuthManager.HomeEnabled() {
+			resolvedModelName = modelName
+		} else {
+			resolvedModelName = util.ResolveAutoModel(modelName)
+		}
 	}
 	parsed := thinking.ParseSuffix(resolvedModelName)
 	baseModel := strings.TrimSpace(parsed.ModelName)
+
+	if strings.EqualFold(baseModel, "gpt-image-2") {
+		return nil, "", &interfaces.ErrorMessage{
+			StatusCode: http.StatusServiceUnavailable,
+			Error:      fmt.Errorf("model %s is only supported on /v1/images/generations and /v1/images/edits", baseModel),
+		}
+	}
+
+	if h != nil && h.AuthManager != nil && h.AuthManager.HomeEnabled() {
+		return []string{"home"}, resolvedModelName, nil
+	}
+
 	providers = util.GetProviderName(baseModel)
 
 	// Fallback: if baseModel has no provider but differs from resolvedModelName,
 	// try using the full model name. This handles edge cases where custom models
diff --git a/sdk/api/handlers/handlers_error_response_test.go b/sdk/api/handlers/handlers_error_response_test.go
index 917971c245..0c206e386f 100644
--- a/sdk/api/handlers/handlers_error_response_test.go
+++ b/sdk/api/handlers/handlers_error_response_test.go
@@ -9,9 +9,9 @@ import (
 	"testing"
 
 	"github.com/gin-gonic/gin"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config"
 )
 
 func TestWriteErrorResponse_AddonHeadersDisabledByDefault(t *testing.T) {
diff --git a/sdk/api/handlers/handlers_metadata_test.go b/sdk/api/handlers/handlers_metadata_test.go
index 99af872dc0..c5e94f963e 100644
--- a/sdk/api/handlers/handlers_metadata_test.go
+++ b/sdk/api/handlers/handlers_metadata_test.go
@@ -3,7 +3,7 @@ package handlers
 import (
 	"testing"
 
-	coreexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
+	coreexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor"
 	"golang.org/x/net/context"
 )
diff --git a/sdk/api/handlers/handlers_request_details_test.go b/sdk/api/handlers/handlers_request_details_test.go
index b0f6b13262..3110cbc561 100644
--- a/sdk/api/handlers/handlers_request_details_test.go
+++ b/sdk/api/handlers/handlers_request_details_test.go
@@ -1,13 +1,15 @@
 package handlers
 
 import (
+	"net/http"
 	"reflect"
+	"strings"
 	"testing"
 	"time"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config"
 )
 
 func TestGetRequestDetails_PreservesSuffix(t *testing.T) {
@@ -116,3 +118,22 @@ func TestGetRequestDetails_PreservesSuffix(t *testing.T) {
 		})
 	}
 }
+
+func TestGetRequestDetails_ImageModelReturns503(t *testing.T) {
+	handler := NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, coreauth.NewManager(nil, nil, nil))
+
+	_, _, errMsg := handler.getRequestDetails("gpt-image-2")
+	if errMsg == nil {
+		t.Fatalf("expected error for gpt-image-2, got nil")
+	}
+	if errMsg.StatusCode != http.StatusServiceUnavailable {
+		t.Fatalf("unexpected status code: got %d want %d", errMsg.StatusCode, http.StatusServiceUnavailable)
+	}
+	if errMsg.Error == nil {
+		t.Fatalf("expected error message, got nil")
+	}
+	msg := errMsg.Error.Error()
+	if !strings.Contains(msg, "/v1/images/generations") || !strings.Contains(msg, "/v1/images/edits") {
+		t.Fatalf("unexpected error message: %q", msg)
+	}
+}
diff --git a/sdk/api/handlers/handlers_stream_bootstrap_test.go b/sdk/api/handlers/handlers_stream_bootstrap_test.go
index f357962f0a..551baac374 100644
--- a/sdk/api/handlers/handlers_stream_bootstrap_test.go
+++ b/sdk/api/handlers/handlers_stream_bootstrap_test.go
@@ -8,11 +8,11 @@ import (
 	"sync"
 	"testing"
 
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
-	coreexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
-	sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	coreexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor"
+	sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config"
 )
 
 type failOnceStreamExecutor struct {
diff --git a/sdk/api/handlers/openai/endpoint_compat.go b/sdk/api/handlers/openai/endpoint_compat.go
new file mode 100644
index 0000000000..68eeee9006
--- /dev/null
+++ b/sdk/api/handlers/openai/endpoint_compat.go
@@ -0,0 +1,46 @@
+package openai
+
+import (
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+)
+
+const (
+	openAIChatEndpoint      = "/chat/completions"
+	openAIResponsesEndpoint = "/responses"
+)
+
+func resolveEndpointOverride(modelName, requestedEndpoint string) (string, bool) {
+	if modelName == "" {
+		return "", false
+	}
+	info := registry.GetGlobalRegistry().GetModelInfo(modelName, "")
+	if info == nil {
+		baseModel := thinking.ParseSuffix(modelName).ModelName
+		if baseModel != "" && baseModel != modelName {
+			info = registry.GetGlobalRegistry().GetModelInfo(baseModel, "")
+		}
+	}
+	if info == nil || len(info.SupportedEndpoints) == 0 {
+		return "", false
+	}
+	if endpointListContains(info.SupportedEndpoints, requestedEndpoint) {
+		return "", false
+	}
+	if requestedEndpoint == openAIChatEndpoint && endpointListContains(info.SupportedEndpoints, openAIResponsesEndpoint) {
+		return openAIResponsesEndpoint, true
+	}
+	if requestedEndpoint == openAIResponsesEndpoint && endpointListContains(info.SupportedEndpoints, openAIChatEndpoint) {
+		return openAIChatEndpoint, true
+	}
+	return "", false
+}
+
+func endpointListContains(items []string, value string) bool {
+	for _, item := range items {
+		if item == value {
+			return true
+		}
+	}
+	return false
+}
diff --git a/sdk/api/handlers/openai/endpoint_compat_test.go b/sdk/api/handlers/openai/endpoint_compat_test.go
new file mode 100644
index 0000000000..a0d12b7ab7
--- /dev/null
+++ b/sdk/api/handlers/openai/endpoint_compat_test.go
@@ -0,0 +1,29 @@
+package openai
+
+import (
+	"testing"
+
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+)
+
+func TestResolveEndpointOverride_StripsThinkingSuffix(t *testing.T) {
+	const clientID = "test-endpoint-compat-suffix"
+	reg := registry.GetGlobalRegistry()
+	reg.RegisterClient(clientID, "github-copilot", []*registry.ModelInfo{
+		{
+			ID:                 "test-gemini-chat-only",
+			SupportedEndpoints: []string{openAIChatEndpoint},
+		},
+	})
+	t.Cleanup(func() {
+		reg.UnregisterClient(clientID)
+	})
+
+	override, ok := resolveEndpointOverride("test-gemini-chat-only(high)", openAIResponsesEndpoint)
+	if !ok {
+		t.Fatalf("expected endpoint override to be resolved")
+	}
+	if override != openAIChatEndpoint {
+		t.Fatalf("override endpoint = %q, want %q", override, openAIChatEndpoint)
+	}
+}
diff --git a/sdk/api/handlers/openai/gitlab_duo_handler_test.go b/sdk/api/handlers/openai/gitlab_duo_handler_test.go
new file mode 100644
index 0000000000..efa7ed87da
--- /dev/null
+++ b/sdk/api/handlers/openai/gitlab_duo_handler_test.go
@@ -0,0 +1,143 @@
+package openai
+
+import (
+	"context"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+
+	"github.com/gin-gonic/gin"
+	internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	runtimeexecutor "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor"
+	_ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator"
+	"github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
+	sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config"
+)
+
+func TestOpenAIChatCompletionsWithGitLabDuoOpenAIGateway(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	var gotPath, gotAuthHeader, gotRealmHeader string
+	upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		gotPath = r.URL.Path
+		gotAuthHeader = r.Header.Get("Authorization")
+		gotRealmHeader = r.Header.Get("X-Gitlab-Realm")
+		w.Header().Set("Content-Type", "text/event-stream")
+		_, _ = w.Write([]byte("data: {\"type\":\"response.created\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"gpt-5-codex\"}}\n\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"response.output_text.delta\",\"delta\":\"hello from duo openai\"}\n\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"gpt-5-codex\",\"status\":\"completed\",\"output\":[{\"type\":\"message\",\"id\":\"msg_1\",\"role\":\"assistant\",\"content\":[{\"type\":\"output_text\",\"text\":\"hello from duo openai\"}]}],\"usage\":{\"input_tokens\":11,\"output_tokens\":4,\"total_tokens\":15}}}\n\n"))
+	}))
+	defer upstream.Close()
+
+	manager := registerGitLabDuoOpenAIAuth(t, upstream.URL)
+	base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager)
+	h := NewOpenAIAPIHandler(base)
+	router := gin.New()
+	router.POST("/v1/chat/completions", h.ChatCompletions)
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", strings.NewReader(`{
+		"model":"gpt-5-codex",
+		"messages":[{"role":"user","content":"hello"}]
+	}`))
+	req.Header.Set("Content-Type", "application/json")
+	resp := httptest.NewRecorder()
+	router.ServeHTTP(resp, req)
+
+	if resp.Code != http.StatusOK {
+		t.Fatalf("status = %d, want %d body=%s", resp.Code, http.StatusOK, resp.Body.String())
+	}
+	if gotPath != "/v1/proxy/openai/v1/responses" {
+		t.Fatalf("path = %q, want %q", gotPath, "/v1/proxy/openai/v1/responses")
+	}
+	if gotAuthHeader != "Bearer gateway-token" {
+		t.Fatalf("authorization = %q, want Bearer gateway-token", gotAuthHeader)
+	}
+	if gotRealmHeader != "saas" {
+		t.Fatalf("x-gitlab-realm = %q, want saas", gotRealmHeader)
+	}
+	if !strings.Contains(resp.Body.String(), `"content":"hello from duo openai"`) {
+		t.Fatalf("expected translated chat completion, got %s", resp.Body.String())
+	}
+}
+
+func TestOpenAIResponsesStreamWithGitLabDuoOpenAIGateway(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	var gotPath, gotAuthHeader string
+	upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		gotPath = r.URL.Path
+		gotAuthHeader = r.Header.Get("Authorization")
+		w.Header().Set("Content-Type", "text/event-stream")
+		_, _ = w.Write([]byte("data: {\"type\":\"response.created\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"gpt-5-codex\"}}\n\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"response.output_text.delta\",\"delta\":\"streamed duo output\"}\n\n"))
+		_, _ = w.Write([]byte("data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\",\"created_at\":1710000000,\"model\":\"gpt-5-codex\",\"status\":\"completed\",\"output\":[{\"type\":\"message\",\"id\":\"msg_1\",\"role\":\"assistant\",\"content\":[{\"type\":\"output_text\",\"text\":\"streamed duo output\"}]}],\"usage\":{\"input_tokens\":10,\"output_tokens\":3,\"total_tokens\":13}}}\n\n"))
+	}))
+	defer upstream.Close()
+
+	manager := registerGitLabDuoOpenAIAuth(t, upstream.URL)
+	base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager)
+	h := NewOpenAIResponsesAPIHandler(base)
+	router := gin.New()
+	router.POST("/v1/responses", h.Responses)
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/responses", strings.NewReader(`{
+		"model":"gpt-5-codex",
+		"stream":true,
+		"input":"hello"
+	}`))
+	req.Header.Set("Content-Type", "application/json")
+	resp := httptest.NewRecorder()
+	router.ServeHTTP(resp, req)
+
+	if resp.Code != http.StatusOK {
+		t.Fatalf("status = %d, want %d body=%s", resp.Code, http.StatusOK, resp.Body.String())
+	}
+	if gotPath != "/v1/proxy/openai/v1/responses" {
+		t.Fatalf("path = %q, want %q", gotPath, "/v1/proxy/openai/v1/responses")
+	}
+	if gotAuthHeader != "Bearer gateway-token" {
+		t.Fatalf("authorization = %q, want Bearer gateway-token", gotAuthHeader)
+	}
+	if got := resp.Header().Get("Content-Type");
got != "text/event-stream" { + t.Fatalf("content-type = %q, want text/event-stream", got) + } + if !strings.Contains(resp.Body.String(), `"type":"response.output_text.delta"`) { + t.Fatalf("expected streamed responses delta, got %s", resp.Body.String()) + } + if !strings.Contains(resp.Body.String(), `"type":"response.completed"`) { + t.Fatalf("expected streamed responses completion, got %s", resp.Body.String()) + } +} + +func registerGitLabDuoOpenAIAuth(t *testing.T, upstreamURL string) *coreauth.Manager { + t.Helper() + + manager := coreauth.NewManager(nil, nil, nil) + manager.RegisterExecutor(runtimeexecutor.NewGitLabExecutor(&internalconfig.Config{})) + + auth := &coreauth.Auth{ + ID: "gitlab-duo-openai-handler-test", + Provider: "gitlab", + Status: coreauth.StatusActive, + Metadata: map[string]any{ + "duo_gateway_base_url": upstreamURL, + "duo_gateway_token": "gateway-token", + "duo_gateway_headers": map[string]string{"X-Gitlab-Realm": "saas"}, + "model_provider": "openai", + "model_name": "gpt-5-codex", + }, + } + registered, err := manager.Register(context.Background(), auth) + if err != nil { + t.Fatalf("register auth: %v", err) + } + + registry.GetGlobalRegistry().RegisterClient(registered.ID, registered.Provider, runtimeexecutor.GitLabModelsFromAuth(registered)) + t.Cleanup(func() { + registry.GetGlobalRegistry().UnregisterClient(registered.ID) + }) + return manager +} diff --git a/sdk/api/handlers/openai/openai_handlers.go b/sdk/api/handlers/openai/openai_handlers.go index 4b4a9833bd..29dc0ea0b1 100644 --- a/sdk/api/handlers/openai/openai_handlers.go +++ b/sdk/api/handlers/openai/openai_handlers.go @@ -14,11 +14,11 @@ import ( "sync" "github.com/gin-gonic/gin" - . 
"github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - responsesconverter "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/openai/responses" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + responsesconverter "github.com/router-for-me/CLIProxyAPI/v7/internal/translator/openai/openai/responses" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) diff --git a/sdk/api/handlers/openai/openai_images_handlers.go b/sdk/api/handlers/openai/openai_images_handlers.go new file mode 100644 index 0000000000..6e6e8ef6ff --- /dev/null +++ b/sdk/api/handlers/openai/openai_images_handlers.go @@ -0,0 +1,963 @@ +package openai + +import ( + "bytes" + "context" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "mime/multipart" + "net/http" + "strconv" + "strings" + "time" + + "github.com/gin-gonic/gin" + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + log "github.com/sirupsen/logrus" + "github.com/tidwall/gjson" + "github.com/tidwall/sjson" +) + +const ( + defaultImagesMainModel = "gpt-5.4-mini" + defaultImagesToolModel = "gpt-image-2" + imagesGenerationsPath = "/v1/images/generations" + imagesEditsPath = "/v1/images/edits" +) + +type imageCallResult struct { + Result string + RevisedPrompt string + OutputFormat string + Size string + Background string + Quality string +} + +type sseFrameAccumulator struct { + pending []byte +} + +func (a *sseFrameAccumulator) AddChunk(chunk []byte) [][]byte { + if len(chunk) == 0 { + 
return nil + } + + if responsesSSENeedsLineBreak(a.pending, chunk) { + a.pending = append(a.pending, '\n') + } + a.pending = append(a.pending, chunk...) + + var frames [][]byte + for { + frameLen := responsesSSEFrameLen(a.pending) + if frameLen == 0 { + break + } + frames = append(frames, a.pending[:frameLen]) + copy(a.pending, a.pending[frameLen:]) + a.pending = a.pending[:len(a.pending)-frameLen] + } + + if len(bytes.TrimSpace(a.pending)) == 0 { + a.pending = a.pending[:0] + return frames + } + if len(a.pending) == 0 || !responsesSSECanEmitWithoutDelimiter(a.pending) { + return frames + } + frames = append(frames, a.pending) + a.pending = a.pending[:0] + return frames +} + +func (a *sseFrameAccumulator) Flush() [][]byte { + if len(a.pending) == 0 { + return nil + } + + var frames [][]byte + for { + frameLen := responsesSSEFrameLen(a.pending) + if frameLen == 0 { + break + } + frames = append(frames, a.pending[:frameLen]) + copy(a.pending, a.pending[frameLen:]) + a.pending = a.pending[:len(a.pending)-frameLen] + } + + if len(bytes.TrimSpace(a.pending)) == 0 { + a.pending = nil + return frames + } + if responsesSSECanEmitWithoutDelimiter(a.pending) { + frames = append(frames, a.pending) + } + a.pending = nil + return frames +} + +func isSupportedImagesModel(model string) bool { + baseModel := strings.TrimSpace(model) + if idx := strings.LastIndex(baseModel, "/"); idx >= 0 && idx < len(baseModel)-1 { + baseModel = strings.TrimSpace(baseModel[idx+1:]) + } + return baseModel == defaultImagesToolModel +} + +func rejectUnsupportedImagesModel(c *gin.Context, model string) bool { + if isSupportedImagesModel(model) { + return false + } + + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: fmt.Sprintf("Model %s is not supported on %s or %s. 
Use %s.", model, imagesGenerationsPath, imagesEditsPath, defaultImagesToolModel), + Type: "invalid_request_error", + }, + }) + return true +} + +func mimeTypeFromOutputFormat(outputFormat string) string { + if outputFormat == "" { + return "image/png" + } + if strings.Contains(outputFormat, "/") { + return outputFormat + } + switch strings.ToLower(strings.TrimSpace(outputFormat)) { + case "png": + return "image/png" + case "jpg", "jpeg": + return "image/jpeg" + case "webp": + return "image/webp" + default: + return "image/png" + } +} + +func multipartFileToDataURL(fileHeader *multipart.FileHeader) (string, error) { + if fileHeader == nil { + return "", fmt.Errorf("upload file is nil") + } + f, err := fileHeader.Open() + if err != nil { + return "", fmt.Errorf("open upload file failed: %w", err) + } + defer func() { + if errClose := f.Close(); errClose != nil { + log.Errorf("openai images: close upload file error: %v", errClose) + } + }() + + data, err := io.ReadAll(f) + if err != nil { + return "", fmt.Errorf("read upload file failed: %w", err) + } + + mediaType := strings.TrimSpace(fileHeader.Header.Get("Content-Type")) + if mediaType == "" { + mediaType = http.DetectContentType(data) + } + + b64 := base64.StdEncoding.EncodeToString(data) + return "data:" + mediaType + ";base64," + b64, nil +} + +func parseIntField(raw string, fallback int64) int64 { + raw = strings.TrimSpace(raw) + if raw == "" { + return fallback + } + v, err := strconv.ParseInt(raw, 10, 64) + if err != nil { + return fallback + } + return v +} + +func parseBoolField(raw string, fallback bool) bool { + raw = strings.TrimSpace(strings.ToLower(raw)) + if raw == "" { + return fallback + } + switch raw { + case "1", "true", "yes", "on": + return true + case "0", "false", "no", "off": + return false + default: + return fallback + } +} + +func (h *OpenAIAPIHandler) ImagesGenerations(c *gin.Context) { + if h != nil && h.BaseAPIHandler != nil && h.BaseAPIHandler.Cfg != nil && 
h.BaseAPIHandler.Cfg.DisableImageGeneration == internalconfig.DisableImageGenerationAll { + c.AbortWithStatus(http.StatusNotFound) + return + } + + rawJSON, err := c.GetRawData() + if err != nil { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: fmt.Sprintf("Invalid request: %v", err), + Type: "invalid_request_error", + }, + }) + return + } + if !json.Valid(rawJSON) { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: body must be valid JSON", + Type: "invalid_request_error", + }, + }) + return + } + + imageModel := strings.TrimSpace(gjson.GetBytes(rawJSON, "model").String()) + if imageModel == "" { + imageModel = defaultImagesToolModel + } + if rejectUnsupportedImagesModel(c, imageModel) { + return + } + + prompt := strings.TrimSpace(gjson.GetBytes(rawJSON, "prompt").String()) + if prompt == "" { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: prompt is required", + Type: "invalid_request_error", + }, + }) + return + } + + responseFormat := strings.TrimSpace(gjson.GetBytes(rawJSON, "response_format").String()) + if responseFormat == "" { + responseFormat = "b64_json" + } + stream := gjson.GetBytes(rawJSON, "stream").Bool() + + tool := []byte(`{"type":"image_generation","action":"generate"}`) + tool, _ = sjson.SetBytes(tool, "model", imageModel) + + if v := strings.TrimSpace(gjson.GetBytes(rawJSON, "size").String()); v != "" { + tool, _ = sjson.SetBytes(tool, "size", v) + } + if v := strings.TrimSpace(gjson.GetBytes(rawJSON, "quality").String()); v != "" { + tool, _ = sjson.SetBytes(tool, "quality", v) + } + if v := strings.TrimSpace(gjson.GetBytes(rawJSON, "background").String()); v != "" { + tool, _ = sjson.SetBytes(tool, "background", v) + } + if v := strings.TrimSpace(gjson.GetBytes(rawJSON, "output_format").String()); v != "" { + tool, _ = sjson.SetBytes(tool, "output_format", v) + 
} + if v := gjson.GetBytes(rawJSON, "output_compression"); v.Exists() { + if v.Type == gjson.Number { + tool, _ = sjson.SetBytes(tool, "output_compression", v.Int()) + } + } + if v := gjson.GetBytes(rawJSON, "partial_images"); v.Exists() { + if v.Type == gjson.Number { + tool, _ = sjson.SetBytes(tool, "partial_images", v.Int()) + } + } + if v := strings.TrimSpace(gjson.GetBytes(rawJSON, "moderation").String()); v != "" { + tool, _ = sjson.SetBytes(tool, "moderation", v) + } + + responsesReq := buildImagesResponsesRequest(prompt, nil, tool) + if stream { + h.streamImagesFromResponses(c, responsesReq, responseFormat, "image_generation") + return + } + h.collectImagesFromResponses(c, responsesReq, responseFormat) +} + +func (h *OpenAIAPIHandler) ImagesEdits(c *gin.Context) { + if h != nil && h.BaseAPIHandler != nil && h.BaseAPIHandler.Cfg != nil && h.BaseAPIHandler.Cfg.DisableImageGeneration == internalconfig.DisableImageGenerationAll { + c.AbortWithStatus(http.StatusNotFound) + return + } + + contentType := strings.ToLower(strings.TrimSpace(c.GetHeader("Content-Type"))) + if strings.HasPrefix(contentType, "application/json") { + h.imagesEditsFromJSON(c) + return + } + if strings.HasPrefix(contentType, "multipart/form-data") || contentType == "" { + h.imagesEditsFromMultipart(c) + return + } + + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: fmt.Sprintf("Invalid request: unsupported Content-Type %q", contentType), + Type: "invalid_request_error", + }, + }) +} + +func (h *OpenAIAPIHandler) imagesEditsFromMultipart(c *gin.Context) { + form, err := c.MultipartForm() + if err != nil { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: fmt.Sprintf("Invalid request: %v", err), + Type: "invalid_request_error", + }, + }) + return + } + + imageModel := strings.TrimSpace(c.PostForm("model")) + if imageModel == "" { + imageModel = defaultImagesToolModel + } + if 
rejectUnsupportedImagesModel(c, imageModel) { + return + } + + prompt := strings.TrimSpace(c.PostForm("prompt")) + if prompt == "" { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: prompt is required", + Type: "invalid_request_error", + }, + }) + return + } + + var imageFiles []*multipart.FileHeader + if files := form.File["image[]"]; len(files) > 0 { + imageFiles = files + } else if files := form.File["image"]; len(files) > 0 { + imageFiles = files + } + if len(imageFiles) == 0 { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: image is required", + Type: "invalid_request_error", + }, + }) + return + } + + images := make([]string, 0, len(imageFiles)) + for _, fh := range imageFiles { + dataURL, err := multipartFileToDataURL(fh) + if err != nil { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: fmt.Sprintf("Invalid request: %v", err), + Type: "invalid_request_error", + }, + }) + return + } + images = append(images, dataURL) + } + + var maskDataURL *string + if maskFiles := form.File["mask"]; len(maskFiles) > 0 && maskFiles[0] != nil { + dataURL, err := multipartFileToDataURL(maskFiles[0]) + if err != nil { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: fmt.Sprintf("Invalid request: %v", err), + Type: "invalid_request_error", + }, + }) + return + } + maskDataURL = &dataURL + } + + responseFormat := strings.TrimSpace(c.PostForm("response_format")) + if responseFormat == "" { + responseFormat = "b64_json" + } + stream := parseBoolField(c.PostForm("stream"), false) + + tool := []byte(`{"type":"image_generation","action":"edit"}`) + tool, _ = sjson.SetBytes(tool, "model", imageModel) + + if v := strings.TrimSpace(c.PostForm("size")); v != "" { + tool, _ = sjson.SetBytes(tool, "size", v) + } + if v := 
strings.TrimSpace(c.PostForm("quality")); v != "" { + tool, _ = sjson.SetBytes(tool, "quality", v) + } + if v := strings.TrimSpace(c.PostForm("background")); v != "" { + tool, _ = sjson.SetBytes(tool, "background", v) + } + if v := strings.TrimSpace(c.PostForm("output_format")); v != "" { + tool, _ = sjson.SetBytes(tool, "output_format", v) + } + if v := strings.TrimSpace(c.PostForm("input_fidelity")); v != "" { + tool, _ = sjson.SetBytes(tool, "input_fidelity", v) + } + if v := strings.TrimSpace(c.PostForm("moderation")); v != "" { + tool, _ = sjson.SetBytes(tool, "moderation", v) + } + + if v := strings.TrimSpace(c.PostForm("output_compression")); v != "" { + tool, _ = sjson.SetBytes(tool, "output_compression", parseIntField(v, 0)) + } + if v := strings.TrimSpace(c.PostForm("partial_images")); v != "" { + tool, _ = sjson.SetBytes(tool, "partial_images", parseIntField(v, 0)) + } + + if maskDataURL != nil && strings.TrimSpace(*maskDataURL) != "" { + tool, _ = sjson.SetBytes(tool, "input_image_mask.image_url", strings.TrimSpace(*maskDataURL)) + } + + responsesReq := buildImagesResponsesRequest(prompt, images, tool) + if stream { + h.streamImagesFromResponses(c, responsesReq, responseFormat, "image_edit") + return + } + h.collectImagesFromResponses(c, responsesReq, responseFormat) +} + +func (h *OpenAIAPIHandler) imagesEditsFromJSON(c *gin.Context) { + rawJSON, err := c.GetRawData() + if err != nil { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: fmt.Sprintf("Invalid request: %v", err), + Type: "invalid_request_error", + }, + }) + return + } + if !json.Valid(rawJSON) { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: body must be valid JSON", + Type: "invalid_request_error", + }, + }) + return + } + + imageModel := strings.TrimSpace(gjson.GetBytes(rawJSON, "model").String()) + if imageModel == "" { + imageModel = defaultImagesToolModel + } + if 
rejectUnsupportedImagesModel(c, imageModel) { + return + } + + prompt := strings.TrimSpace(gjson.GetBytes(rawJSON, "prompt").String()) + if prompt == "" { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: prompt is required", + Type: "invalid_request_error", + }, + }) + return + } + + var images []string + imagesResult := gjson.GetBytes(rawJSON, "images") + if imagesResult.IsArray() { + for _, img := range imagesResult.Array() { + url := strings.TrimSpace(img.Get("image_url").String()) + if url == "" { + continue + } + images = append(images, url) + } + } + if len(images) == 0 { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: images[].image_url is required (file_id is not supported)", + Type: "invalid_request_error", + }, + }) + return + } + + var maskDataURL *string + if mask := gjson.GetBytes(rawJSON, "mask.image_url"); mask.Exists() { + url := strings.TrimSpace(mask.String()) + if url != "" { + maskDataURL = &url + } + } else if mask := gjson.GetBytes(rawJSON, "mask.file_id"); mask.Exists() { + c.JSON(http.StatusBadRequest, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Invalid request: mask.file_id is not supported (use mask.image_url instead)", + Type: "invalid_request_error", + }, + }) + return + } + + responseFormat := strings.TrimSpace(gjson.GetBytes(rawJSON, "response_format").String()) + if responseFormat == "" { + responseFormat = "b64_json" + } + stream := gjson.GetBytes(rawJSON, "stream").Bool() + + tool := []byte(`{"type":"image_generation","action":"edit"}`) + tool, _ = sjson.SetBytes(tool, "model", imageModel) + + for _, field := range []string{"size", "quality", "background", "output_format", "input_fidelity", "moderation"} { + if v := strings.TrimSpace(gjson.GetBytes(rawJSON, field).String()); v != "" { + tool, _ = sjson.SetBytes(tool, field, v) + } + } + + for _, field := range 
[]string{"output_compression", "partial_images"} { + if v := gjson.GetBytes(rawJSON, field); v.Exists() && v.Type == gjson.Number { + tool, _ = sjson.SetBytes(tool, field, v.Int()) + } + } + + if maskDataURL != nil && strings.TrimSpace(*maskDataURL) != "" { + tool, _ = sjson.SetBytes(tool, "input_image_mask.image_url", strings.TrimSpace(*maskDataURL)) + } + + responsesReq := buildImagesResponsesRequest(prompt, images, tool) + if stream { + h.streamImagesFromResponses(c, responsesReq, responseFormat, "image_edit") + return + } + h.collectImagesFromResponses(c, responsesReq, responseFormat) +} + +func buildImagesResponsesRequest(prompt string, images []string, toolJSON []byte) []byte { + req := []byte(`{"instructions":"","stream":true,"reasoning":{"effort":"medium","summary":"auto"},"parallel_tool_calls":true,"include":["reasoning.encrypted_content"],"model":"","store":false,"tool_choice":{"type":"image_generation"}}`) + mainModel := defaultImagesMainModel + if len(toolJSON) > 0 && json.Valid(toolJSON) { + toolModel := strings.TrimSpace(gjson.GetBytes(toolJSON, "model").String()) + if idx := strings.LastIndex(toolModel, "/"); idx > 0 && idx < len(toolModel)-1 { + prefix := strings.TrimSpace(toolModel[:idx]) + if prefix != "" { + mainModel = prefix + "/" + defaultImagesMainModel + } + } + } + req, _ = sjson.SetBytes(req, "model", mainModel) + + input := []byte(`[{"type":"message","role":"user","content":[{"type":"input_text","text":""}]}]`) + input, _ = sjson.SetBytes(input, "0.content.0.text", prompt) + contentIndex := 1 + for _, img := range images { + if strings.TrimSpace(img) == "" { + continue + } + part := []byte(`{"type":"input_image","image_url":""}`) + part, _ = sjson.SetBytes(part, "image_url", img) + path := fmt.Sprintf("0.content.%d", contentIndex) + input, _ = sjson.SetRawBytes(input, path, part) + contentIndex++ + } + req, _ = sjson.SetRawBytes(req, "input", input) + + req, _ = sjson.SetRawBytes(req, "tools", []byte(`[]`)) + if len(toolJSON) > 0 && 
json.Valid(toolJSON) { + req, _ = sjson.SetRawBytes(req, "tools.-1", toolJSON) + } + return req +} + +func (h *OpenAIAPIHandler) collectImagesFromResponses(c *gin.Context, responsesReq []byte, responseFormat string) { + c.Header("Content-Type", "application/json") + + cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background()) + cliCtx = handlers.WithDisallowFreeAuth(cliCtx) + stopKeepAlive := h.StartNonStreamingKeepAlive(c, cliCtx) + + mainModel := strings.TrimSpace(gjson.GetBytes(responsesReq, "model").String()) + if mainModel == "" { + mainModel = defaultImagesMainModel + } + dataChan, upstreamHeaders, errChan := h.ExecuteStreamWithAuthManager(cliCtx, "openai-response", mainModel, responsesReq, "") + + out, errMsg := collectImagesFromResponsesStream(cliCtx, dataChan, errChan, responseFormat) + stopKeepAlive() + if errMsg != nil { + h.WriteErrorResponse(c, errMsg) + if errMsg.Error != nil { + cliCancel(errMsg.Error) + } else { + cliCancel(nil) + } + return + } + handlers.WriteUpstreamHeaders(c.Writer.Header(), upstreamHeaders) + _, _ = c.Writer.Write(out) + cliCancel() +} + +func collectImagesFromResponsesStream(ctx context.Context, data <-chan []byte, errs <-chan *interfaces.ErrorMessage, responseFormat string) ([]byte, *interfaces.ErrorMessage) { + acc := &sseFrameAccumulator{} + + processFrame := func(frame []byte) ([]byte, bool, *interfaces.ErrorMessage) { + for _, line := range bytes.Split(frame, []byte("\n")) { + trimmed := bytes.TrimSpace(bytes.TrimRight(line, "\r")) + if len(trimmed) == 0 { + continue + } + if !bytes.HasPrefix(trimmed, []byte("data:")) { + continue + } + payload := bytes.TrimSpace(trimmed[len("data:"):]) + if len(payload) == 0 || bytes.Equal(payload, []byte("[DONE]")) { + continue + } + if !json.Valid(payload) { + return nil, false, &interfaces.ErrorMessage{StatusCode: http.StatusBadGateway, Error: fmt.Errorf("invalid SSE data JSON")} + } + + if gjson.GetBytes(payload, "type").String() != "response.completed" { + continue + } 
+ + results, createdAt, usageRaw, firstMeta, err := extractImagesFromResponsesCompleted(payload) + if err != nil { + return nil, false, &interfaces.ErrorMessage{StatusCode: http.StatusBadGateway, Error: err} + } + if len(results) == 0 { + return nil, false, &interfaces.ErrorMessage{StatusCode: http.StatusBadGateway, Error: fmt.Errorf("upstream did not return image output")} + } + out, err := buildImagesAPIResponse(results, createdAt, usageRaw, firstMeta, responseFormat) + if err != nil { + return nil, false, &interfaces.ErrorMessage{StatusCode: http.StatusInternalServerError, Error: err} + } + return out, true, nil + } + return nil, false, nil + } + + for { + select { + case <-ctx.Done(): + return nil, &interfaces.ErrorMessage{StatusCode: http.StatusRequestTimeout, Error: ctx.Err()} + case errMsg, ok := <-errs: + if ok && errMsg != nil { + return nil, errMsg + } + errs = nil + case chunk, ok := <-data: + if !ok { + for _, frame := range acc.Flush() { + if out, done, errMsg := processFrame(frame); errMsg != nil { + return nil, errMsg + } else if done { + return out, nil + } + } + return nil, &interfaces.ErrorMessage{StatusCode: http.StatusBadGateway, Error: fmt.Errorf("stream disconnected before completion")} + } + for _, frame := range acc.AddChunk(chunk) { + if out, done, errMsg := processFrame(frame); errMsg != nil { + return nil, errMsg + } else if done { + return out, nil + } + } + } + } +} + +func extractImagesFromResponsesCompleted(payload []byte) (results []imageCallResult, createdAt int64, usageRaw []byte, firstMeta imageCallResult, err error) { + if gjson.GetBytes(payload, "type").String() != "response.completed" { + return nil, 0, nil, imageCallResult{}, fmt.Errorf("unexpected event type") + } + + createdAt = gjson.GetBytes(payload, "response.created_at").Int() + if createdAt <= 0 { + createdAt = time.Now().Unix() + } + + output := gjson.GetBytes(payload, "response.output") + if output.IsArray() { + for _, item := range output.Array() { + if 
item.Get("type").String() != "image_generation_call" { + continue + } + res := strings.TrimSpace(item.Get("result").String()) + if res == "" { + continue + } + entry := imageCallResult{ + Result: res, + RevisedPrompt: strings.TrimSpace(item.Get("revised_prompt").String()), + OutputFormat: strings.TrimSpace(item.Get("output_format").String()), + Size: strings.TrimSpace(item.Get("size").String()), + Background: strings.TrimSpace(item.Get("background").String()), + Quality: strings.TrimSpace(item.Get("quality").String()), + } + if len(results) == 0 { + firstMeta = entry + } + results = append(results, entry) + } + } + + if usage := gjson.GetBytes(payload, "response.tool_usage.image_gen"); usage.Exists() && usage.IsObject() { + usageRaw = []byte(usage.Raw) + } + + return results, createdAt, usageRaw, firstMeta, nil +} + +func buildImagesAPIResponse(results []imageCallResult, createdAt int64, usageRaw []byte, firstMeta imageCallResult, responseFormat string) ([]byte, error) { + out := []byte(`{"created":0,"data":[]}`) + out, _ = sjson.SetBytes(out, "created", createdAt) + + responseFormat = strings.ToLower(strings.TrimSpace(responseFormat)) + if responseFormat == "" { + responseFormat = "b64_json" + } + + for _, img := range results { + item := []byte(`{}`) + if responseFormat == "url" { + mt := mimeTypeFromOutputFormat(img.OutputFormat) + item, _ = sjson.SetBytes(item, "url", "data:"+mt+";base64,"+img.Result) + } else { + item, _ = sjson.SetBytes(item, "b64_json", img.Result) + } + if img.RevisedPrompt != "" { + item, _ = sjson.SetBytes(item, "revised_prompt", img.RevisedPrompt) + } + out, _ = sjson.SetRawBytes(out, "data.-1", item) + } + + if firstMeta.Background != "" { + out, _ = sjson.SetBytes(out, "background", firstMeta.Background) + } + if firstMeta.OutputFormat != "" { + out, _ = sjson.SetBytes(out, "output_format", firstMeta.OutputFormat) + } + if firstMeta.Quality != "" { + out, _ = sjson.SetBytes(out, "quality", firstMeta.Quality) + } + if firstMeta.Size != 
"" { + out, _ = sjson.SetBytes(out, "size", firstMeta.Size) + } + + if len(usageRaw) > 0 && json.Valid(usageRaw) { + out, _ = sjson.SetRawBytes(out, "usage", usageRaw) + } + + return out, nil +} + +func (h *OpenAIAPIHandler) streamImagesFromResponses(c *gin.Context, responsesReq []byte, responseFormat string, streamPrefix string) { + flusher, ok := c.Writer.(http.Flusher) + if !ok { + c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{ + Error: handlers.ErrorDetail{ + Message: "Streaming not supported", + Type: "server_error", + }, + }) + return + } + + cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background()) + cliCtx = handlers.WithDisallowFreeAuth(cliCtx) + mainModel := strings.TrimSpace(gjson.GetBytes(responsesReq, "model").String()) + if mainModel == "" { + mainModel = defaultImagesMainModel + } + dataChan, upstreamHeaders, errChan := h.ExecuteStreamWithAuthManager(cliCtx, "openai-response", mainModel, responsesReq, "") + + setSSEHeaders := func() { + c.Header("Content-Type", "text/event-stream") + c.Header("Cache-Control", "no-cache") + c.Header("Connection", "keep-alive") + c.Header("Access-Control-Allow-Origin", "*") + } + + writeEvent := func(eventName string, dataJSON []byte) { + if strings.TrimSpace(eventName) != "" { + _, _ = fmt.Fprintf(c.Writer, "event: %s\n", eventName) + } + _, _ = fmt.Fprintf(c.Writer, "data: %s\n\n", string(dataJSON)) + flusher.Flush() + } + + // Peek for first chunk/error so we can still return a JSON error body. 
+ for { + select { + case <-c.Request.Context().Done(): + cliCancel(c.Request.Context().Err()) + return + case errMsg, ok := <-errChan: + if !ok { + errChan = nil + continue + } + h.WriteErrorResponse(c, errMsg) + if errMsg != nil { + cliCancel(errMsg.Error) + } else { + cliCancel(nil) + } + return + case chunk, ok := <-dataChan: + if !ok { + setSSEHeaders() + handlers.WriteUpstreamHeaders(c.Writer.Header(), upstreamHeaders) + _, _ = c.Writer.Write([]byte("\n")) + flusher.Flush() + cliCancel(nil) + return + } + + setSSEHeaders() + handlers.WriteUpstreamHeaders(c.Writer.Header(), upstreamHeaders) + + h.forwardImagesStream(cliCtx, c, flusher, func(err error) { cliCancel(err) }, dataChan, errChan, chunk, responseFormat, streamPrefix, writeEvent) + return + } + } +} + +func (h *OpenAIAPIHandler) forwardImagesStream(ctx context.Context, c *gin.Context, flusher http.Flusher, cancel func(error), data <-chan []byte, errs <-chan *interfaces.ErrorMessage, firstChunk []byte, responseFormat string, streamPrefix string, writeEvent func(string, []byte)) { + acc := &sseFrameAccumulator{} + + responseFormat = strings.ToLower(strings.TrimSpace(responseFormat)) + if responseFormat == "" { + responseFormat = "b64_json" + } + + emitError := func(errMsg *interfaces.ErrorMessage) { + if errMsg == nil { + return + } + status := http.StatusInternalServerError + if errMsg.StatusCode > 0 { + status = errMsg.StatusCode + } + errText := http.StatusText(status) + if errMsg.Error != nil && strings.TrimSpace(errMsg.Error.Error()) != "" { + errText = errMsg.Error.Error() + } + body := handlers.BuildErrorResponseBody(status, errText) + writeEvent("error", body) + } + + processFrame := func(frame []byte) (done bool) { + for _, line := range bytes.Split(frame, []byte("\n")) { + trimmed := bytes.TrimSpace(bytes.TrimRight(line, "\r")) + if len(trimmed) == 0 || !bytes.HasPrefix(trimmed, []byte("data:")) { + continue + } + payload := bytes.TrimSpace(trimmed[len("data:"):]) + if len(payload) == 0 || 
bytes.Equal(payload, []byte("[DONE]")) || !json.Valid(payload) { + continue + } + + switch gjson.GetBytes(payload, "type").String() { + case "response.image_generation_call.partial_image": + b64 := strings.TrimSpace(gjson.GetBytes(payload, "partial_image_b64").String()) + if b64 == "" { + continue + } + outputFormat := strings.TrimSpace(gjson.GetBytes(payload, "output_format").String()) + index := gjson.GetBytes(payload, "partial_image_index").Int() + eventName := streamPrefix + ".partial_image" + data := []byte(`{"type":"","partial_image_index":0}`) + data, _ = sjson.SetBytes(data, "type", eventName) + data, _ = sjson.SetBytes(data, "partial_image_index", index) + if responseFormat == "url" { + mt := mimeTypeFromOutputFormat(outputFormat) + data, _ = sjson.SetBytes(data, "url", "data:"+mt+";base64,"+b64) + } else { + data, _ = sjson.SetBytes(data, "b64_json", b64) + } + writeEvent(eventName, data) + case "response.completed": + results, _, usageRaw, _, err := extractImagesFromResponsesCompleted(payload) + if err != nil { + emitError(&interfaces.ErrorMessage{StatusCode: http.StatusBadGateway, Error: err}) + return true + } + if len(results) == 0 { + emitError(&interfaces.ErrorMessage{StatusCode: http.StatusBadGateway, Error: fmt.Errorf("upstream did not return image output")}) + return true + } + eventName := streamPrefix + ".completed" + for _, img := range results { + data := []byte(`{"type":""}`) + data, _ = sjson.SetBytes(data, "type", eventName) + if responseFormat == "url" { + mt := mimeTypeFromOutputFormat(img.OutputFormat) + data, _ = sjson.SetBytes(data, "url", "data:"+mt+";base64,"+img.Result) + } else { + data, _ = sjson.SetBytes(data, "b64_json", img.Result) + } + if len(usageRaw) > 0 && json.Valid(usageRaw) { + data, _ = sjson.SetRawBytes(data, "usage", usageRaw) + } + writeEvent(eventName, data) + } + return true + } + } + return false + } + + for _, frame := range acc.AddChunk(firstChunk) { + if processFrame(frame) { + cancel(nil) + return + } + } + 
+ for { + select { + case <-c.Request.Context().Done(): + cancel(c.Request.Context().Err()) + return + case errMsg, ok := <-errs: + if ok && errMsg != nil { + emitError(errMsg) + cancel(errMsg.Error) + return + } + errs = nil + case chunk, ok := <-data: + if !ok { + for _, frame := range acc.Flush() { + if processFrame(frame) { + cancel(nil) + return + } + } + cancel(nil) + return + } + for _, frame := range acc.AddChunk(chunk) { + if processFrame(frame) { + cancel(nil) + return + } + } + } + } +} diff --git a/sdk/api/handlers/openai/openai_images_handlers_test.go b/sdk/api/handlers/openai/openai_images_handlers_test.go new file mode 100644 index 0000000000..7796599619 --- /dev/null +++ b/sdk/api/handlers/openai/openai_images_handlers_test.go @@ -0,0 +1,146 @@ +package openai + +import ( + "bytes" + "io" + "mime/multipart" + "net/http" + "net/http/httptest" + "strings" + "testing" + + "github.com/gin-gonic/gin" + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" + "github.com/tidwall/gjson" +) + +func performImagesEndpointRequest(t *testing.T, endpointPath string, contentType string, body io.Reader, handler gin.HandlerFunc) *httptest.ResponseRecorder { + t.Helper() + + gin.SetMode(gin.TestMode) + router := gin.New() + router.POST(endpointPath, handler) + + req := httptest.NewRequest(http.MethodPost, endpointPath, body) + if contentType != "" { + req.Header.Set("Content-Type", contentType) + } + resp := httptest.NewRecorder() + router.ServeHTTP(resp, req) + return resp +} + +func assertUnsupportedImagesModelResponse(t *testing.T, resp *httptest.ResponseRecorder, model string) { + t.Helper() + + if resp.Code != http.StatusBadRequest { + t.Fatalf("status = %d, want %d: %s", resp.Code, http.StatusBadRequest, resp.Body.String()) + } + + message := gjson.GetBytes(resp.Body.Bytes(), "error.message").String() + expectedMessage 
:= "Model " + model + " is not supported on " + imagesGenerationsPath + " or " + imagesEditsPath + ". Use " + defaultImagesToolModel + "." + if message != expectedMessage { + t.Fatalf("error message = %q, want %q", message, expectedMessage) + } + if errorType := gjson.GetBytes(resp.Body.Bytes(), "error.type").String(); errorType != "invalid_request_error" { + t.Fatalf("error type = %q, want invalid_request_error", errorType) + } +} + +func TestImagesModelValidationAllowsGPTImage2WithOptionalPrefix(t *testing.T) { + for _, model := range []string{"gpt-image-2", "codex/gpt-image-2"} { + if !isSupportedImagesModel(model) { + t.Fatalf("expected %s to be supported", model) + } + } + if isSupportedImagesModel("gpt-5.4-mini") { + t.Fatal("expected gpt-5.4-mini to be rejected") + } +} + +func TestImagesGenerationsRejectsUnsupportedModel(t *testing.T) { + handler := &OpenAIAPIHandler{} + body := strings.NewReader(`{"model":"gpt-5.4-mini","prompt":"draw a square"}`) + + resp := performImagesEndpointRequest(t, imagesGenerationsPath, "application/json", body, handler.ImagesGenerations) + + assertUnsupportedImagesModelResponse(t, resp, "gpt-5.4-mini") +} + +func TestImagesEditsJSONRejectsUnsupportedModel(t *testing.T) { + handler := &OpenAIAPIHandler{} + body := strings.NewReader(`{"model":"gpt-5.4-mini","prompt":"edit this","images":[{"image_url":"data:image/png;base64,AA=="}]}`) + + resp := performImagesEndpointRequest(t, imagesEditsPath, "application/json", body, handler.ImagesEdits) + + assertUnsupportedImagesModelResponse(t, resp, "gpt-5.4-mini") +} + +func TestImagesEditsMultipartRejectsUnsupportedModel(t *testing.T) { + handler := &OpenAIAPIHandler{} + var body bytes.Buffer + writer := multipart.NewWriter(&body) + if err := writer.WriteField("model", "gpt-5.4-mini"); err != nil { + t.Fatalf("write model field: %v", err) + } + if err := writer.WriteField("prompt", "edit this"); err != nil { + t.Fatalf("write prompt field: %v", err) + } + if errClose := writer.Close(); 
errClose != nil { + t.Fatalf("close multipart writer: %v", errClose) + } + + resp := performImagesEndpointRequest(t, imagesEditsPath, writer.FormDataContentType(), &body, handler.ImagesEdits) + + assertUnsupportedImagesModelResponse(t, resp, "gpt-5.4-mini") +} + +func TestImagesGenerations_DisableImageGeneration_Returns404(t *testing.T) { + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{DisableImageGeneration: internalconfig.DisableImageGenerationAll}, nil) + handler := NewOpenAIAPIHandler(base) + body := strings.NewReader(`{"prompt":"draw a square"}`) + + resp := performImagesEndpointRequest(t, imagesGenerationsPath, "application/json", body, handler.ImagesGenerations) + + if resp.Code != http.StatusNotFound { + t.Fatalf("status = %d, want %d: %s", resp.Code, http.StatusNotFound, resp.Body.String()) + } +} + +func TestImagesEdits_DisableImageGeneration_Returns404(t *testing.T) { + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{DisableImageGeneration: internalconfig.DisableImageGenerationAll}, nil) + handler := NewOpenAIAPIHandler(base) + body := strings.NewReader(`{"prompt":"edit this","images":[{"image_url":"data:image/png;base64,AA=="}]}`) + + resp := performImagesEndpointRequest(t, imagesEditsPath, "application/json", body, handler.ImagesEdits) + + if resp.Code != http.StatusNotFound { + t.Fatalf("status = %d, want %d: %s", resp.Code, http.StatusNotFound, resp.Body.String()) + } +} + +func TestImagesGenerations_DisableImageGenerationChat_DoesNotReturn404(t *testing.T) { + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{DisableImageGeneration: internalconfig.DisableImageGenerationChat}, nil) + handler := NewOpenAIAPIHandler(base) + body := strings.NewReader(`{"model":"gpt-5.4-mini","prompt":"draw a square"}`) + + resp := performImagesEndpointRequest(t, imagesGenerationsPath, "application/json", body, handler.ImagesGenerations) + + if resp.Code != http.StatusBadRequest { + t.Fatalf("status = %d, want %d: %s", resp.Code, 
http.StatusBadRequest, resp.Body.String()) + } +} + +func TestImagesEdits_DisableImageGenerationChat_DoesNotReturn404(t *testing.T) { + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{DisableImageGeneration: internalconfig.DisableImageGenerationChat}, nil) + handler := NewOpenAIAPIHandler(base) + body := strings.NewReader(`{"model":"gpt-5.4-mini","prompt":"edit this","images":[{"image_url":"data:image/png;base64,AA=="}]}`) + + resp := performImagesEndpointRequest(t, imagesEditsPath, "application/json", body, handler.ImagesEdits) + + if resp.Code != http.StatusBadRequest { + t.Fatalf("status = %d, want %d: %s", resp.Code, http.StatusBadRequest, resp.Body.String()) + } +} diff --git a/sdk/api/handlers/openai/openai_responses_compact_test.go b/sdk/api/handlers/openai/openai_responses_compact_test.go index dcfcc99a7c..48b7e3bbde 100644 --- a/sdk/api/handlers/openai/openai_responses_compact_test.go +++ b/sdk/api/handlers/openai/openai_responses_compact_test.go @@ -9,11 +9,11 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - coreexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + coreexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) type compactCaptureExecutor struct { diff --git a/sdk/api/handlers/openai/openai_responses_handlers.go b/sdk/api/handlers/openai/openai_responses_handlers.go index 8969ce2f6d..5b2c006a30 100644 --- a/sdk/api/handlers/openai/openai_responses_handlers.go +++ 
b/sdk/api/handlers/openai/openai_responses_handlers.go @@ -13,12 +13,13 @@ import ( "fmt" "io" "net/http" + "sort" "github.com/gin-gonic/gin" - . "github.com/router-for-me/CLIProxyAPI/v6/internal/constant" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" + . "github.com/router-for-me/CLIProxyAPI/v7/internal/constant" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -45,7 +46,10 @@ func writeResponsesSSEChunk(w io.Writer, chunk []byte) { } type responsesSSEFramer struct { - pending []byte + pending []byte + outputItems map[int][]byte + outputOrder []int + unindexedOutputItems [][]byte } func (f *responsesSSEFramer) WriteChunk(w io.Writer, chunk []byte) { @@ -61,7 +65,7 @@ func (f *responsesSSEFramer) WriteChunk(w io.Writer, chunk []byte) { if frameLen == 0 { break } - writeResponsesSSEChunk(w, f.pending[:frameLen]) + f.writeFrame(w, f.pending[:frameLen]) copy(f.pending, f.pending[frameLen:]) f.pending = f.pending[:len(f.pending)-frameLen] } @@ -72,7 +76,7 @@ func (f *responsesSSEFramer) WriteChunk(w io.Writer, chunk []byte) { if len(f.pending) == 0 || !responsesSSECanEmitWithoutDelimiter(f.pending) { return } - writeResponsesSSEChunk(w, f.pending) + f.writeFrame(w, f.pending) f.pending = f.pending[:0] } @@ -88,10 +92,133 @@ func (f *responsesSSEFramer) Flush(w io.Writer) { f.pending = f.pending[:0] return } - writeResponsesSSEChunk(w, f.pending) + f.writeFrame(w, f.pending) f.pending = f.pending[:0] } +func (f *responsesSSEFramer) writeFrame(w io.Writer, frame []byte) { + writeResponsesSSEChunk(w, f.repairFrame(frame)) +} + +func (f *responsesSSEFramer) repairFrame(frame []byte) []byte { + payload, ok := responsesSSEDataPayload(frame) + 
if !ok || len(payload) == 0 || bytes.Equal(payload, []byte("[DONE]")) || !json.Valid(payload) { + return frame + } + + switch gjson.GetBytes(payload, "type").String() { + case "response.output_item.done": + f.recordOutputItem(payload) + case "response.completed": + repaired := f.repairCompletedPayload(payload) + if !bytes.Equal(repaired, payload) { + return responsesSSEFrameWithData(frame, repaired) + } + } + return frame +} + +func responsesSSEDataPayload(frame []byte) ([]byte, bool) { + var payload []byte + found := false + for _, line := range bytes.Split(frame, []byte("\n")) { + line = bytes.TrimRight(line, "\r") + trimmed := bytes.TrimSpace(line) + if !bytes.HasPrefix(trimmed, []byte("data:")) { + continue + } + data := bytes.TrimSpace(trimmed[len("data:"):]) + if found { + payload = append(payload, '\n') + } + payload = append(payload, data...) + found = true + } + return payload, found +} + +func responsesSSEFrameWithData(frame, payload []byte) []byte { + var out bytes.Buffer + for _, line := range bytes.Split(frame, []byte("\n")) { + line = bytes.TrimRight(line, "\r") + trimmed := bytes.TrimSpace(line) + if len(trimmed) == 0 || bytes.HasPrefix(trimmed, []byte("data:")) { + continue + } + out.Write(line) + out.WriteByte('\n') + } + for _, line := range bytes.Split(payload, []byte("\n")) { + out.WriteString("data: ") + out.Write(line) + out.WriteByte('\n') + } + out.WriteByte('\n') + return out.Bytes() +} + +func (f *responsesSSEFramer) recordOutputItem(payload []byte) { + item := gjson.GetBytes(payload, "item") + if !item.Exists() || !item.IsObject() || item.Get("type").String() == "" { + return + } + + if outputIndex := gjson.GetBytes(payload, "output_index"); outputIndex.Exists() { + index := int(outputIndex.Int()) + if f.outputItems == nil { + f.outputItems = make(map[int][]byte) + } + if _, exists := f.outputItems[index]; !exists { + f.outputOrder = append(f.outputOrder, index) + } + f.outputItems[index] = append([]byte(nil), item.Raw...) 
+ return + } + + f.unindexedOutputItems = append(f.unindexedOutputItems, append([]byte(nil), item.Raw...)) +} + +func (f *responsesSSEFramer) repairCompletedPayload(payload []byte) []byte { + if len(f.outputOrder) == 0 && len(f.unindexedOutputItems) == 0 { + return payload + } + output := gjson.GetBytes(payload, "response.output") + if output.Exists() && (!output.IsArray() || len(output.Array()) > 0) { + return payload + } + + var outputJSON bytes.Buffer + outputJSON.WriteByte('[') + indexes := append([]int(nil), f.outputOrder...) + sort.Ints(indexes) + written := 0 + for _, index := range indexes { + item, ok := f.outputItems[index] + if !ok { + continue + } + if written > 0 { + outputJSON.WriteByte(',') + } + outputJSON.Write(item) + written++ + } + for _, item := range f.unindexedOutputItems { + if written > 0 { + outputJSON.WriteByte(',') + } + outputJSON.Write(item) + written++ + } + outputJSON.WriteByte(']') + + repaired, err := sjson.SetRawBytes(payload, "response.output", outputJSON.Bytes()) + if err != nil { + return payload + } + return repaired +} + func responsesSSEFrameLen(chunk []byte) int { if len(chunk) == 0 { return 0 diff --git a/sdk/api/handlers/openai/openai_responses_handlers_stream_error_test.go b/sdk/api/handlers/openai/openai_responses_handlers_stream_error_test.go index 771e46b88b..54d1467589 100644 --- a/sdk/api/handlers/openai/openai_responses_handlers_stream_error_test.go +++ b/sdk/api/handlers/openai/openai_responses_handlers_stream_error_test.go @@ -8,9 +8,9 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func 
TestForwardResponsesStreamTerminalErrorUsesResponsesErrorChunk(t *testing.T) { diff --git a/sdk/api/handlers/openai/openai_responses_handlers_stream_test.go b/sdk/api/handlers/openai/openai_responses_handlers_stream_test.go index ef16fe80ac..0742b9b3d3 100644 --- a/sdk/api/handlers/openai/openai_responses_handlers_stream_test.go +++ b/sdk/api/handlers/openai/openai_responses_handlers_stream_test.go @@ -7,9 +7,10 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" + "github.com/tidwall/gjson" ) func newResponsesStreamTestHandler(t *testing.T) (*OpenAIResponsesAPIHandler, *httptest.ResponseRecorder, *gin.Context, http.Flusher) { @@ -53,12 +54,108 @@ func TestForwardResponsesStreamSeparatesDataOnlySSEChunks(t *testing.T) { t.Errorf("unexpected first event.\nGot: %q\nWant: %q", parts[0], expectedPart1) } - expectedPart2 := "data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp-1\",\"output\":[]}}" + expectedPart2 := "data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp-1\",\"output\":[{\"type\":\"function_call\",\"arguments\":\"{}\"}]}}" if parts[1] != expectedPart2 { t.Errorf("unexpected second event.\nGot: %q\nWant: %q", parts[1], expectedPart2) } } +func TestForwardResponsesStreamRepairsEmptyCompletedOutputFromDoneItems(t *testing.T) { + h, recorder, c, flusher := newResponsesStreamTestHandler(t) + + data := make(chan []byte, 3) + errs := make(chan *interfaces.ErrorMessage) + data <- []byte(`data: {"type":"response.output_item.done","output_index":0,"item":{"type":"reasoning","id":"rs-1","summary":[]}}`) + data <- []byte(`data: 
{"type":"response.output_item.done","output_index":1,"item":{"type":"function_call","id":"fc-1","call_id":"call-1","name":"shell","arguments":"{\"cmd\":\"pwd\"}","status":"completed"}}`) + data <- []byte(`data: {"type":"response.completed","response":{"id":"resp-1","output":[]}}`) + close(data) + close(errs) + + h.forwardResponsesStream(c, flusher, func(error) {}, data, errs, nil) + + parts := strings.Split(strings.TrimSpace(recorder.Body.String()), "\n\n") + if len(parts) != 3 { + t.Fatalf("expected 3 SSE events, got %d. Body: %q", len(parts), recorder.Body.String()) + } + + payload := strings.TrimPrefix(parts[2], "data: ") + output := gjson.Get(payload, "response.output") + if !output.IsArray() || len(output.Array()) != 2 { + t.Fatalf("expected repaired completed output with 2 items, got %s", output.Raw) + } + if got := gjson.Get(payload, "response.output.1.name").String(); got != "shell" { + t.Fatalf("expected function_call name to be preserved, got %q in %s", got, payload) + } + if got := gjson.Get(payload, "response.output.1.arguments").String(); got != `{"cmd":"pwd"}` { + t.Fatalf("expected function_call arguments to be preserved, got %q in %s", got, payload) + } +} + +func TestForwardResponsesStreamRepairsMixedIndexedAndUnindexedDoneItems(t *testing.T) { + h, recorder, c, flusher := newResponsesStreamTestHandler(t) + + data := make(chan []byte, 3) + errs := make(chan *interfaces.ErrorMessage) + data <- []byte(`data: {"type":"response.output_item.done","output_index":1,"item":{"type":"function_call","id":"fc-1","call_id":"call-1","name":"shell","arguments":"{}","status":"completed"}}`) + data <- []byte(`data: {"type":"response.output_item.done","item":{"type":"message","id":"msg-1","role":"assistant","content":[{"type":"output_text","text":"done"}]}}`) + data <- []byte(`data: {"type":"response.completed","response":{"id":"resp-1","output":[]}}`) + close(data) + close(errs) + + h.forwardResponsesStream(c, flusher, func(error) {}, data, errs, nil) + + parts := 
strings.Split(strings.TrimSpace(recorder.Body.String()), "\n\n") + if len(parts) != 3 { + t.Fatalf("expected 3 SSE events, got %d. Body: %q", len(parts), recorder.Body.String()) + } + + payload := strings.TrimPrefix(parts[2], "data: ") + output := gjson.Get(payload, "response.output") + if !output.IsArray() || len(output.Array()) != 2 { + t.Fatalf("expected repaired completed output with 2 items, got %s", output.Raw) + } + if got := gjson.Get(payload, "response.output.0.name").String(); got != "shell" { + t.Fatalf("expected indexed function_call to be preserved first, got %q in %s", got, payload) + } + if got := gjson.Get(payload, "response.output.1.id").String(); got != "msg-1" { + t.Fatalf("expected unindexed message to be appended, got %q in %s", got, payload) + } +} + +func TestForwardResponsesStreamRepairsMultilineCompletedOutputAsSSEDataLines(t *testing.T) { + h, recorder, c, flusher := newResponsesStreamTestHandler(t) + + data := make(chan []byte, 2) + errs := make(chan *interfaces.ErrorMessage) + data <- []byte(`data: {"type":"response.output_item.done","item":{"type":"function_call","arguments":"{}"}}`) + data <- []byte("data: {\"type\":\"response.completed\",\ndata: \"response\":{\"id\":\"resp-1\",\"output\":[]}}\n\n") + close(data) + close(errs) + + h.forwardResponsesStream(c, flusher, func(error) {}, data, errs, nil) + + parts := strings.Split(strings.TrimSpace(recorder.Body.String()), "\n\n") + if len(parts) != 2 { + t.Fatalf("expected 2 SSE events, got %d. 
Body: %q", len(parts), recorder.Body.String()) + } + + completedFrame := []byte(parts[1]) + for _, line := range strings.Split(parts[1], "\n") { + if line != "" && !strings.HasPrefix(line, "data: ") { + t.Fatalf("expected every completed payload line to be an SSE data line, got %q in %q", line, parts[1]) + } + } + + payload, ok := responsesSSEDataPayload(completedFrame) + if !ok { + t.Fatalf("expected completed frame to contain data payload: %q", parts[1]) + } + output := gjson.GetBytes(payload, "response.output") + if !output.IsArray() || len(output.Array()) != 1 { + t.Fatalf("expected repaired completed output with 1 item, got %s from %q", output.Raw, payload) + } +} + func TestForwardResponsesStreamReassemblesSplitSSEEventChunks(t *testing.T) { h, recorder, c, flusher := newResponsesStreamTestHandler(t) diff --git a/sdk/api/handlers/openai/openai_responses_websocket.go b/sdk/api/handlers/openai/openai_responses_websocket.go index 2f6b14a779..574338fd75 100644 --- a/sdk/api/handlers/openai/openai_responses_websocket.go +++ b/sdk/api/handlers/openai/openai_responses_websocket.go @@ -13,13 +13,13 @@ import ( "github.com/gin-gonic/gin" "github.com/google/uuid" "github.com/gorilla/websocket" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + coreauth 
"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "github.com/tidwall/sjson" @@ -56,6 +56,31 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { retainResponsesWebsocketToolCaches(downstreamSessionKey) clientIP := websocketClientAddress(c) log.Infof("responses websocket: client connected id=%s remote=%s", passthroughSessionID, clientIP) + + wsDone := make(chan struct{}) + defer close(wsDone) + + if h != nil && h.AuthManager != nil { + if exec, ok := h.AuthManager.Executor("codex"); ok && exec != nil { + type upstreamDisconnectSubscriber interface { + UpstreamDisconnectChan(sessionID string) <-chan error + } + if subscriber, ok := exec.(upstreamDisconnectSubscriber); ok && subscriber != nil { + disconnectCh := subscriber.UpstreamDisconnectChan(passthroughSessionID) + if disconnectCh != nil { + go func() { + select { + case <-wsDone: + return + case <-disconnectCh: + _ = conn.Close() + } + }() + } + } + } + } + var wsTerminateErr error var wsTimelineLog strings.Builder defer func() { @@ -79,6 +104,16 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { var lastRequest []byte lastResponseOutput := []byte("[]") pinnedAuthID := "" + sessionAuthByID := func(authID string) (*coreauth.Auth, bool) { + if h == nil || h.AuthManager == nil { + return nil, false + } + if auth, ok := h.AuthManager.GetExecutionSessionAuthByID(passthroughSessionID, authID); ok { + return auth, true + } + return h.AuthManager.GetByID(authID) + } + forceTranscriptReplayNextRequest := false for { msgType, payload, errReadMessage := conn.ReadMessage() @@ -104,8 +139,8 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { appendWebsocketTimelineEvent(&wsTimelineLog, "request", payload, time.Now()) allowIncrementalInputWithPreviousResponseID := false - if pinnedAuthID != "" && h != nil 
&& h.AuthManager != nil { - if pinnedAuth, ok := h.AuthManager.GetByID(pinnedAuthID); ok && pinnedAuth != nil { + if pinnedAuthID != "" { + if pinnedAuth, ok := sessionAuthByID(pinnedAuthID); ok && pinnedAuth != nil { allowIncrementalInputWithPreviousResponseID = websocketUpstreamSupportsIncrementalInput(pinnedAuth.Attributes, pinnedAuth.Metadata) } } else { @@ -115,6 +150,22 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { } allowIncrementalInputWithPreviousResponseID = h.websocketUpstreamSupportsIncrementalInputForModel(requestModelName) } + if forceTranscriptReplayNextRequest { + allowIncrementalInputWithPreviousResponseID = false + } + + allowCompactionReplayBypass := false + if pinnedAuthID != "" { + if pinnedAuth, ok := sessionAuthByID(pinnedAuthID); ok && pinnedAuth != nil { + allowCompactionReplayBypass = responsesWebsocketAuthSupportsCompactionReplay(pinnedAuth) + } + } else { + requestModelName := strings.TrimSpace(gjson.GetBytes(payload, "model").String()) + if requestModelName == "" { + requestModelName = strings.TrimSpace(gjson.GetBytes(lastRequest, "model").String()) + } + allowCompactionReplayBypass = h.websocketUpstreamSupportsCompactionReplayForModel(requestModelName) + } var requestJSON []byte var updatedLastRequest []byte @@ -124,6 +175,7 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { lastRequest, lastResponseOutput, allowIncrementalInputWithPreviousResponseID, + allowCompactionReplayBypass, ) if errMsg != nil { h.LoggingAPIResponseError(context.WithValue(context.Background(), "gin", c), errMsg) @@ -165,7 +217,13 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { requestJSON = repairResponsesWebsocketToolCalls(downstreamSessionKey, requestJSON) updatedLastRequest = bytes.Clone(requestJSON) + previousLastRequest := bytes.Clone(lastRequest) + previousLastResponseOutput := bytes.Clone(lastResponseOutput) + forcedTranscriptReplay := forceTranscriptReplayNextRequest 
lastRequest = updatedLastRequest + if forcedTranscriptReplay { + forceTranscriptReplayNextRequest = false + } modelName := gjson.GetBytes(requestJSON, "model").String() cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background()) @@ -179,7 +237,7 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { if authID == "" || h == nil || h.AuthManager == nil { return } - selectedAuth, ok := h.AuthManager.GetByID(authID) + selectedAuth, ok := sessionAuthByID(authID) if !ok || selectedAuth == nil { return } @@ -190,12 +248,19 @@ func (h *OpenAIResponsesAPIHandler) ResponsesWebsocket(c *gin.Context) { } dataChan, _, errChan := h.ExecuteStreamWithAuthManager(cliCtx, h.HandlerType(), modelName, requestJSON, "") - completedOutput, errForward := h.forwardResponsesWebsocket(c, conn, cliCancel, dataChan, errChan, &wsTimelineLog, passthroughSessionID) + completedOutput, forwardErrMsg, errForward := h.forwardResponsesWebsocket(c, conn, cliCancel, dataChan, errChan, &wsTimelineLog, passthroughSessionID) if errForward != nil { wsTerminateErr = errForward log.Warnf("responses websocket: forward failed id=%s error=%v", passthroughSessionID, errForward) return } + if shouldReleaseResponsesWebsocketPinnedAuth(forwardErrMsg) { + pinnedAuthID = "" + forceTranscriptReplayNextRequest = true + lastRequest = previousLastRequest + lastResponseOutput = previousLastResponseOutput + continue + } lastResponseOutput = completedOutput } } @@ -222,10 +287,10 @@ func websocketUpgradeHeaders(req *http.Request) http.Header { } func normalizeResponsesWebsocketRequest(rawJSON []byte, lastRequest []byte, lastResponseOutput []byte) ([]byte, []byte, *interfaces.ErrorMessage) { - return normalizeResponsesWebsocketRequestWithMode(rawJSON, lastRequest, lastResponseOutput, true) + return normalizeResponsesWebsocketRequestWithMode(rawJSON, lastRequest, lastResponseOutput, true, true) } -func normalizeResponsesWebsocketRequestWithMode(rawJSON []byte, lastRequest []byte, 
lastResponseOutput []byte, allowIncrementalInputWithPreviousResponseID bool) ([]byte, []byte, *interfaces.ErrorMessage) { +func normalizeResponsesWebsocketRequestWithMode(rawJSON []byte, lastRequest []byte, lastResponseOutput []byte, allowIncrementalInputWithPreviousResponseID bool, allowCompactionReplayBypass bool) ([]byte, []byte, *interfaces.ErrorMessage) { requestType := strings.TrimSpace(gjson.GetBytes(rawJSON, "type").String()) switch requestType { case wsRequestTypeCreate: @@ -233,10 +298,10 @@ func normalizeResponsesWebsocketRequestWithMode(rawJSON []byte, lastRequest []by if len(lastRequest) == 0 { return normalizeResponseCreateRequest(rawJSON) } - return normalizeResponseSubsequentRequest(rawJSON, lastRequest, lastResponseOutput, allowIncrementalInputWithPreviousResponseID) + return normalizeResponseSubsequentRequest(rawJSON, lastRequest, lastResponseOutput, allowIncrementalInputWithPreviousResponseID, allowCompactionReplayBypass) case wsRequestTypeAppend: // log.Infof("responses websocket: response.append request") - return normalizeResponseSubsequentRequest(rawJSON, lastRequest, lastResponseOutput, allowIncrementalInputWithPreviousResponseID) + return normalizeResponseSubsequentRequest(rawJSON, lastRequest, lastResponseOutput, allowIncrementalInputWithPreviousResponseID, allowCompactionReplayBypass) default: return nil, lastRequest, &interfaces.ErrorMessage{ StatusCode: http.StatusBadRequest, @@ -265,7 +330,7 @@ func normalizeResponseCreateRequest(rawJSON []byte) ([]byte, []byte, *interfaces return normalized, bytes.Clone(normalized), nil } -func normalizeResponseSubsequentRequest(rawJSON []byte, lastRequest []byte, lastResponseOutput []byte, allowIncrementalInputWithPreviousResponseID bool) ([]byte, []byte, *interfaces.ErrorMessage) { +func normalizeResponseSubsequentRequest(rawJSON []byte, lastRequest []byte, lastResponseOutput []byte, allowIncrementalInputWithPreviousResponseID bool, allowCompactionReplayBypass bool) ([]byte, []byte, 
*interfaces.ErrorMessage) { if len(lastRequest) == 0 { return nil, lastRequest, &interfaces.ErrorMessage{ StatusCode: http.StatusBadRequest, @@ -315,20 +380,37 @@ func normalizeResponseSubsequentRequest(rawJSON []byte, lastRequest []byte, last } } - existingInput := gjson.GetBytes(lastRequest, "input") - mergedInput, errMerge := mergeJSONArrayRaw(existingInput.Raw, normalizeJSONArrayRaw(lastResponseOutput)) - if errMerge != nil { - return nil, lastRequest, &interfaces.ErrorMessage{ - StatusCode: http.StatusBadRequest, - Error: fmt.Errorf("invalid previous response output: %w", errMerge), + // When the client sends a compact replay for a downstream that can consume it + // directly, the input already carries the canonical history. In that case, + // skip merging with stale lastRequest/lastResponseOutput to avoid breaking + // function_call / function_call_output pairings. + // See: https://github.com/router-for-me/CLIProxyAPI/issues/2207 + var mergedInput string + if allowCompactionReplayBypass && inputContainsFullTranscript(nextInput) { + log.Infof("responses websocket: full transcript detected, skipping stale merge (input items=%d)", len(nextInput.Array())) + mergedInput = nextInput.Raw + } else { + appendInputRaw := nextInput.Raw + if inputContainsFullTranscript(nextInput) { + appendInputRaw = inputWithoutCompactionItems(nextInput) } - } - mergedInput, errMerge = mergeJSONArrayRaw(mergedInput, nextInput.Raw) - if errMerge != nil { - return nil, lastRequest, &interfaces.ErrorMessage{ - StatusCode: http.StatusBadRequest, - Error: fmt.Errorf("invalid request input: %w", errMerge), + existingInput := gjson.GetBytes(lastRequest, "input") + var errMerge error + mergedInput, errMerge = mergeJSONArrayRaw(existingInput.Raw, normalizeJSONArrayRaw(lastResponseOutput)) + if errMerge != nil { + return nil, lastRequest, &interfaces.ErrorMessage{ + StatusCode: http.StatusBadRequest, + Error: fmt.Errorf("invalid previous response output: %w", errMerge), + } + } + + mergedInput, 
errMerge = mergeJSONArrayRaw(mergedInput, appendInputRaw) + if errMerge != nil { + return nil, lastRequest, &interfaces.ErrorMessage{ + StatusCode: http.StatusBadRequest, + Error: fmt.Errorf("invalid request input: %w", errMerge), + } } } dedupedInput, errDedupeFunctionCalls := dedupeFunctionCallsByCallID(mergedInput) @@ -480,72 +562,104 @@ func websocketUpstreamSupportsIncrementalInput(attributes map[string]string, met } func (h *OpenAIResponsesAPIHandler) websocketUpstreamSupportsIncrementalInputForModel(modelName string) bool { - if h == nil || h.AuthManager == nil { + auths, _ := h.responsesWebsocketAvailableAuthsForModel(modelName) + for _, auth := range auths { + if websocketUpstreamSupportsIncrementalInput(auth.Attributes, auth.Metadata) { + return true + } + } + return false +} + +func (h *OpenAIResponsesAPIHandler) websocketUpstreamSupportsCompactionReplayForModel(modelName string) bool { + auths, _ := h.responsesWebsocketAvailableAuthsForModel(modelName) + if len(auths) == 0 { return false } + for _, auth := range auths { + if !responsesWebsocketAuthSupportsCompactionReplay(auth) { + return false + } + } + return true +} + +func (h *OpenAIResponsesAPIHandler) responsesWebsocketAvailableAuthsForModel(modelName string) ([]*coreauth.Auth, string) { + if h == nil || h.AuthManager == nil { + return nil, "" + } + resolvedModelName := responsesWebsocketResolvedModelName(modelName) + providerSet, modelKey := responsesWebsocketProviderSetForModel(resolvedModelName) + if len(providerSet) == 0 { + return nil, modelKey + } - resolvedModelName := modelName + registryRef := registry.GetGlobalRegistry() + now := time.Now() + auths := h.AuthManager.List() + available := make([]*coreauth.Auth, 0, len(auths)) + for _, auth := range auths { + if !responsesWebsocketAuthMatchesModel(auth, providerSet, modelKey, registryRef, now) { + continue + } + available = append(available, auth) + } + return available, modelKey +} + +func responsesWebsocketResolvedModelName(modelName 
string) string { initialSuffix := thinking.ParseSuffix(modelName) if initialSuffix.ModelName == "auto" { resolvedBase := util.ResolveAutoModel(initialSuffix.ModelName) if initialSuffix.HasSuffix { - resolvedModelName = fmt.Sprintf("%s(%s)", resolvedBase, initialSuffix.RawSuffix) - } else { - resolvedModelName = resolvedBase + return fmt.Sprintf("%s(%s)", resolvedBase, initialSuffix.RawSuffix) } - } else { - resolvedModelName = util.ResolveAutoModel(modelName) + return resolvedBase } + return util.ResolveAutoModel(modelName) +} +func responsesWebsocketProviderSetForModel(resolvedModelName string) (map[string]struct{}, string) { parsed := thinking.ParseSuffix(resolvedModelName) baseModel := strings.TrimSpace(parsed.ModelName) providers := util.GetProviderName(baseModel) if len(providers) == 0 && baseModel != resolvedModelName { providers = util.GetProviderName(resolvedModelName) } - if len(providers) == 0 { - return false - } - providerSet := make(map[string]struct{}, len(providers)) - for i := 0; i < len(providers); i++ { - providerKey := strings.TrimSpace(strings.ToLower(providers[i])) + for _, provider := range providers { + providerKey := strings.TrimSpace(strings.ToLower(provider)) if providerKey == "" { continue } providerSet[providerKey] = struct{}{} } - if len(providerSet) == 0 { - return false - } - modelKey := baseModel if modelKey == "" { modelKey = strings.TrimSpace(resolvedModelName) } - registryRef := registry.GetGlobalRegistry() - now := time.Now() - auths := h.AuthManager.List() - for i := 0; i < len(auths); i++ { - auth := auths[i] - if auth == nil { - continue - } - providerKey := strings.TrimSpace(strings.ToLower(auth.Provider)) - if _, ok := providerSet[providerKey]; !ok { - continue - } - if modelKey != "" && registryRef != nil && !registryRef.ClientSupportsModel(auth.ID, modelKey) { - continue - } - if !responsesWebsocketAuthAvailableForModel(auth, modelKey, now) { - continue - } - if websocketUpstreamSupportsIncrementalInput(auth.Attributes, 
auth.Metadata) { - return true - } + return providerSet, modelKey +} + +func responsesWebsocketAuthMatchesModel(auth *coreauth.Auth, providerSet map[string]struct{}, modelKey string, registryRef *registry.ModelRegistry, now time.Time) bool { + if auth == nil { + return false } - return false + providerKey := strings.TrimSpace(strings.ToLower(auth.Provider)) + if _, ok := providerSet[providerKey]; !ok { + return false + } + if modelKey != "" && registryRef != nil && !registryRef.ClientSupportsModel(auth.ID, modelKey) { + return false + } + return responsesWebsocketAuthAvailableForModel(auth, modelKey, now) +} + +func responsesWebsocketAuthSupportsCompactionReplay(auth *coreauth.Auth) bool { + if auth == nil { + return false + } + return strings.EqualFold(strings.TrimSpace(auth.Provider), "codex") } func responsesWebsocketAuthAvailableForModel(auth *coreauth.Auth, modelName string, now time.Time) bool { @@ -691,6 +805,42 @@ func mergeJSONArrayRaw(existingRaw, appendRaw string) (string, error) { return string(out), nil } +// inputContainsFullTranscript returns true when the input array carries compact +// replay markers that indicate the client already sent the full conversation +// transcript. Merging that input with stale lastRequest/lastResponseOutput +// would duplicate or break function_call/function_call_output pairings, so the +// caller should use the input as-is. +// +// Assistant messages alone are not enough to classify the payload as a replay: +// incremental websocket requests may legitimately append assistant items. 
+func inputContainsFullTranscript(input gjson.Result) bool { + if !input.IsArray() { + return false + } + for _, item := range input.Array() { + t := item.Get("type").String() + if t == "compaction" || t == "compaction_summary" { + return true + } + } + return false +} + +func inputWithoutCompactionItems(input gjson.Result) string { + if !input.IsArray() { + return normalizeJSONArrayRaw([]byte(input.Raw)) + } + filtered := make([]string, 0, len(input.Array())) + for _, item := range input.Array() { + t := item.Get("type").String() + if t == "compaction" || t == "compaction_summary" { + continue + } + filtered = append(filtered, item.Raw) + } + return "[" + strings.Join(filtered, ",") + "]" +} + func normalizeJSONArrayRaw(raw []byte) string { trimmed := strings.TrimSpace(string(raw)) if trimmed == "" { @@ -711,7 +861,7 @@ func (h *OpenAIResponsesAPIHandler) forwardResponsesWebsocket( errs <-chan *interfaces.ErrorMessage, wsTimelineLog *strings.Builder, sessionID string, -) ([]byte, error) { +) ([]byte, *interfaces.ErrorMessage, error) { completed := false completedOutput := []byte("[]") downstreamSessionKey := "" @@ -723,7 +873,7 @@ func (h *OpenAIResponsesAPIHandler) forwardResponsesWebsocket( select { case <-c.Request.Context().Done(): cancel(c.Request.Context().Err()) - return completedOutput, c.Request.Context().Err() + return completedOutput, nil, c.Request.Context().Err() case errMsg, ok := <-errs: if !ok { errs = nil @@ -748,7 +898,7 @@ func (h *OpenAIResponsesAPIHandler) forwardResponsesWebsocket( // errWrite, // ) cancel(errMsg.Error) - return completedOutput, errWrite + return completedOutput, errMsg, errWrite } } if errMsg != nil { @@ -756,7 +906,7 @@ func (h *OpenAIResponsesAPIHandler) forwardResponsesWebsocket( } else { cancel(nil) } - return completedOutput, nil + return completedOutput, errMsg, nil case chunk, ok := <-data: if !ok { if !completed { @@ -782,13 +932,13 @@ func (h *OpenAIResponsesAPIHandler) forwardResponsesWebsocket( errWrite, ) 
cancel(errMsg.Error) - return completedOutput, errWrite + return completedOutput, errMsg, errWrite } cancel(errMsg.Error) - return completedOutput, nil + return completedOutput, errMsg, nil } cancel(nil) - return completedOutput, nil + return completedOutput, nil, nil } payloads := websocketJSONPayloadsFromChunk(chunk) @@ -815,13 +965,31 @@ func (h *OpenAIResponsesAPIHandler) forwardResponsesWebsocket( errWrite, ) cancel(errWrite) - return completedOutput, errWrite + return completedOutput, nil, errWrite } } } } } +func shouldReleaseResponsesWebsocketPinnedAuth(errMsg *interfaces.ErrorMessage) bool { + if errMsg == nil { + return false + } + status := errMsg.StatusCode + if status <= 0 && errMsg.Error != nil { + if se, ok := errMsg.Error.(interface{ StatusCode() int }); ok && se != nil { + status = se.StatusCode() + } + } + switch status { + case http.StatusUnauthorized, http.StatusPaymentRequired, http.StatusForbidden, http.StatusTooManyRequests: + return true + default: + return false + } +} + func responseCompletedOutputFromPayload(payload []byte) []byte { output := gjson.GetBytes(payload, "response.output") if output.Exists() && output.IsArray() { diff --git a/sdk/api/handlers/openai/openai_responses_websocket_test.go b/sdk/api/handlers/openai/openai_responses_websocket_test.go index ecfc90b31b..7ff58fa3c8 100644 --- a/sdk/api/handlers/openai/openai_responses_websocket_test.go +++ b/sdk/api/handlers/openai/openai_responses_websocket_test.go @@ -14,12 +14,12 @@ import ( "github.com/gin-gonic/gin" "github.com/gorilla/websocket" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - coreexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + coreexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdkconfig "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" "github.com/tidwall/gjson" ) @@ -69,6 +69,95 @@ type websocketAuthCaptureExecutor struct { authIDs []string } +type websocketPinnedFailoverExecutor struct { + mu sync.Mutex + authIDs []string + calls map[string]int + payloads map[string][][]byte +} + +type websocketPinnedFailoverStatusError struct { + status int + msg string +} + +func (e websocketPinnedFailoverStatusError) Error() string { return e.msg } + +func (e websocketPinnedFailoverStatusError) StatusCode() int { return e.status } + +type websocketUpstreamDisconnectExecutor struct { + mu sync.Mutex + subscribed chan string + sessions map[string]chan error +} + +func (e *websocketUpstreamDisconnectExecutor) Identifier() string { return "codex" } + +func (e *websocketUpstreamDisconnectExecutor) UpstreamDisconnectChan(sessionID string) <-chan error { + sessionID = strings.TrimSpace(sessionID) + if sessionID == "" { + return nil + } + e.mu.Lock() + if e.sessions == nil { + e.sessions = make(map[string]chan error) + } + ch, ok := e.sessions[sessionID] + if !ok { + ch = make(chan error, 1) + e.sessions[sessionID] = ch + } + subscribed := e.subscribed + e.mu.Unlock() + + if subscribed != nil { + select { + case subscribed <- sessionID: + default: + } + } + return ch +} + +func (e *websocketUpstreamDisconnectExecutor) TriggerDisconnect(sessionID string, err error) { + sessionID = strings.TrimSpace(sessionID) + if sessionID == "" { + return + } + e.mu.Lock() + ch := e.sessions[sessionID] + delete(e.sessions, sessionID) + e.mu.Unlock() + if ch == nil { + return + } + select { + case ch <- err: + default: + } + close(ch) +} + 
+func (e *websocketUpstreamDisconnectExecutor) Execute(context.Context, *coreauth.Auth, coreexecutor.Request, coreexecutor.Options) (coreexecutor.Response, error) { + return coreexecutor.Response{}, errors.New("not implemented") +} + +func (e *websocketUpstreamDisconnectExecutor) ExecuteStream(context.Context, *coreauth.Auth, coreexecutor.Request, coreexecutor.Options) (*coreexecutor.StreamResult, error) { + return nil, errors.New("not implemented") +} + +func (e *websocketUpstreamDisconnectExecutor) Refresh(_ context.Context, auth *coreauth.Auth) (*coreauth.Auth, error) { + return auth, nil +} + +func (e *websocketUpstreamDisconnectExecutor) CountTokens(context.Context, *coreauth.Auth, coreexecutor.Request, coreexecutor.Options) (coreexecutor.Response, error) { + return coreexecutor.Response{}, errors.New("not implemented") +} + +func (e *websocketUpstreamDisconnectExecutor) HttpRequest(context.Context, *coreauth.Auth, *http.Request) (*http.Response, error) { + return nil, errors.New("not implemented") +} + func (e *websocketAuthCaptureExecutor) Identifier() string { return "test-provider" } func (e *websocketAuthCaptureExecutor) Execute(context.Context, *coreauth.Auth, coreexecutor.Request, coreexecutor.Options) (coreexecutor.Response, error) { @@ -106,6 +195,76 @@ func (e *websocketAuthCaptureExecutor) AuthIDs() []string { return append([]string(nil), e.authIDs...) 
} +func (e *websocketPinnedFailoverExecutor) Identifier() string { return "test-provider" } + +func (e *websocketPinnedFailoverExecutor) Execute(context.Context, *coreauth.Auth, coreexecutor.Request, coreexecutor.Options) (coreexecutor.Response, error) { + return coreexecutor.Response{}, errors.New("not implemented") +} + +func (e *websocketPinnedFailoverExecutor) ExecuteStream(_ context.Context, auth *coreauth.Auth, req coreexecutor.Request, _ coreexecutor.Options) (*coreexecutor.StreamResult, error) { + authID := "" + if auth != nil { + authID = auth.ID + } + + e.mu.Lock() + if e.calls == nil { + e.calls = make(map[string]int) + } + if e.payloads == nil { + e.payloads = make(map[string][][]byte) + } + e.authIDs = append(e.authIDs, authID) + e.calls[authID]++ + call := e.calls[authID] + e.payloads[authID] = append(e.payloads[authID], bytes.Clone(req.Payload)) + e.mu.Unlock() + + if authID == "auth-a" && call == 2 { + chunks := make(chan coreexecutor.StreamChunk, 1) + chunks <- coreexecutor.StreamChunk{Err: websocketPinnedFailoverStatusError{ + status: http.StatusTooManyRequests, + msg: `{"error":{"message":"quota exhausted","type":"rate_limit_error","code":"rate_limit_exceeded"}}`, + }} + close(chunks) + return &coreexecutor.StreamResult{Chunks: chunks}, nil + } + + chunks := make(chan coreexecutor.StreamChunk, 1) + chunks <- coreexecutor.StreamChunk{Payload: []byte(fmt.Sprintf(`{"type":"response.completed","response":{"id":"resp-%s-%d","output":[{"type":"message","id":"out-%s-%d"}]}}`, authID, call, authID, call))} + close(chunks) + return &coreexecutor.StreamResult{Chunks: chunks}, nil +} + +func (e *websocketPinnedFailoverExecutor) Refresh(_ context.Context, auth *coreauth.Auth) (*coreauth.Auth, error) { + return auth, nil +} + +func (e *websocketPinnedFailoverExecutor) CountTokens(context.Context, *coreauth.Auth, coreexecutor.Request, coreexecutor.Options) (coreexecutor.Response, error) { + return coreexecutor.Response{}, errors.New("not implemented") +} + 
+func (e *websocketPinnedFailoverExecutor) HttpRequest(context.Context, *coreauth.Auth, *http.Request) (*http.Response, error) { + return nil, errors.New("not implemented") +} + +func (e *websocketPinnedFailoverExecutor) AuthIDs() []string { + e.mu.Lock() + defer e.mu.Unlock() + return append([]string(nil), e.authIDs...) +} + +func (e *websocketPinnedFailoverExecutor) Payloads(authID string) [][]byte { + e.mu.Lock() + defer e.mu.Unlock() + src := e.payloads[authID] + out := make([][]byte, len(src)) + for i := range src { + out[i] = bytes.Clone(src[i]) + } + return out +} + func (e *websocketCaptureExecutor) Identifier() string { return "test-provider" } func (e *websocketCaptureExecutor) Execute(context.Context, *coreauth.Auth, coreexecutor.Request, coreexecutor.Options) (coreexecutor.Response, error) { @@ -242,7 +401,7 @@ func TestNormalizeResponsesWebsocketRequestWithPreviousResponseIDIncremental(t * ]`) raw := []byte(`{"type":"response.create","previous_response_id":"resp-1","input":[{"type":"function_call_output","call_id":"call-1","id":"tool-out-1"}]}`) - normalized, next, errMsg := normalizeResponsesWebsocketRequestWithMode(raw, lastRequest, lastResponseOutput, true) + normalized, next, errMsg := normalizeResponsesWebsocketRequestWithMode(raw, lastRequest, lastResponseOutput, true, false) if errMsg != nil { t.Fatalf("unexpected error: %v", errMsg.Error) } @@ -278,7 +437,7 @@ func TestNormalizeResponsesWebsocketRequestWithPreviousResponseIDMergedWhenIncre ]`) raw := []byte(`{"type":"response.create","previous_response_id":"resp-1","input":[{"type":"function_call_output","call_id":"call-1","id":"tool-out-1"}]}`) - normalized, next, errMsg := normalizeResponsesWebsocketRequestWithMode(raw, lastRequest, lastResponseOutput, false) + normalized, next, errMsg := normalizeResponsesWebsocketRequestWithMode(raw, lastRequest, lastResponseOutput, false, false) if errMsg != nil { t.Fatalf("unexpected error: %v", errMsg.Error) } @@ -503,6 +662,34 @@ func 
TestRepairResponsesWebsocketToolCallsInsertsCachedCallForOrphanOutput(t *te } } +func TestRepairResponsesWebsocketToolCallsInsertsCachedCallForPreviousResponseOutput(t *testing.T) { + outputCache := newWebsocketToolOutputCache(time.Minute, 10) + callCache := newWebsocketToolOutputCache(time.Minute, 10) + sessionKey := "session-1" + + callCache.record(sessionKey, "call-1", []byte(`{"type":"function_call","id":"fc-1","call_id":"call-1","name":"tool"}`)) + + raw := []byte(`{"previous_response_id":"resp-latest","input":[{"type":"function_call_output","call_id":"call-1","id":"tool-out-1","output":"ok"},{"type":"message","id":"msg-1"}]}`) + repaired := repairResponsesWebsocketToolCallsWithCaches(outputCache, callCache, sessionKey, raw) + + if got := gjson.GetBytes(repaired, "previous_response_id").String(); got != "resp-latest" { + t.Fatalf("previous_response_id = %q, want resp-latest", got) + } + input := gjson.GetBytes(repaired, "input").Array() + if len(input) != 3 { + t.Fatalf("repaired input len = %d, want 3: %s", len(input), repaired) + } + if input[0].Get("type").String() != "function_call" || input[0].Get("call_id").String() != "call-1" { + t.Fatalf("missing inserted call: %s", input[0].Raw) + } + if input[1].Get("type").String() != "function_call_output" || input[1].Get("call_id").String() != "call-1" { + t.Fatalf("unexpected output item: %s", input[1].Raw) + } + if input[2].Get("type").String() != "message" || input[2].Get("id").String() != "msg-1" { + t.Fatalf("unexpected trailing item: %s", input[2].Raw) + } +} + func TestRepairResponsesWebsocketToolCallsDropsOrphanOutputWhenCallMissing(t *testing.T) { outputCache := newWebsocketToolOutputCache(time.Minute, 10) callCache := newWebsocketToolOutputCache(time.Minute, 10) @@ -681,7 +868,7 @@ func TestForwardResponsesWebsocketPreservesCompletedEvent(t *testing.T) { close(errCh) var timelineLog strings.Builder - completedOutput, err := (*OpenAIResponsesAPIHandler)(nil).forwardResponsesWebsocket( + completedOutput, 
errMsg, err := (*OpenAIResponsesAPIHandler)(nil).forwardResponsesWebsocket( ctx, conn, func(...interface{}) {}, @@ -694,6 +881,10 @@ func TestForwardResponsesWebsocketPreservesCompletedEvent(t *testing.T) { serverErrCh <- err return } + if errMsg != nil { + serverErrCh <- fmt.Errorf("unexpected websocket error message: %v", errMsg.Error) + return + } if gjson.GetBytes(completedOutput, "0.id").String() != "out-1" { serverErrCh <- errors.New("completed output not captured") return @@ -760,7 +951,7 @@ func TestForwardResponsesWebsocketLogsAttemptedResponseOnWriteFailure(t *testing return } - _, err = (*OpenAIResponsesAPIHandler)(nil).forwardResponsesWebsocket( + _, _, err = (*OpenAIResponsesAPIHandler)(nil).forwardResponsesWebsocket( ctx, conn, func(...interface{}) {}, @@ -844,6 +1035,43 @@ func TestResponsesWebsocketTimelineRecordsDisconnectEvent(t *testing.T) { } } +func TestResponsesWebsocketClosesOnCodexUpstreamDisconnect(t *testing.T) { + gin.SetMode(gin.TestMode) + + executor := &websocketUpstreamDisconnectExecutor{subscribed: make(chan string, 1)} + manager := coreauth.NewManager(nil, nil, nil) + manager.RegisterExecutor(executor) + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager) + h := NewOpenAIResponsesAPIHandler(base) + + router := gin.New() + router.GET("/v1/responses/ws", h.ResponsesWebsocket) + server := httptest.NewServer(router) + defer server.Close() + + wsURL := "ws" + strings.TrimPrefix(server.URL, "http") + "/v1/responses/ws" + conn, _, err := websocket.DefaultDialer.Dial(wsURL, nil) + if err != nil { + t.Fatalf("dial websocket: %v", err) + } + defer func() { _ = conn.Close() }() + + var sessionID string + select { + case sessionID = <-executor.subscribed: + case <-time.After(5 * time.Second): + t.Fatal("timed out waiting for upstream disconnect subscription") + } + + executor.TriggerDisconnect(sessionID, errors.New("upstream disconnected")) + + _ = conn.SetReadDeadline(time.Now().Add(2 * time.Second)) + _, _, err = 
conn.ReadMessage() + if err == nil { + t.Fatalf("expected downstream websocket to close after upstream disconnect") + } +} + func TestWebsocketUpstreamSupportsIncrementalInputForModel(t *testing.T) { manager := coreauth.NewManager(nil, nil, nil) auth := &coreauth.Auth{ @@ -867,6 +1095,53 @@ func TestWebsocketUpstreamSupportsIncrementalInputForModel(t *testing.T) { } } +func TestWebsocketUpstreamSupportsCompactionReplayForModel(t *testing.T) { + manager := coreauth.NewManager(nil, nil, nil) + auth := &coreauth.Auth{ + ID: "auth-codex", + Provider: "codex", + Status: coreauth.StatusActive, + } + if _, err := manager.Register(context.Background(), auth); err != nil { + t.Fatalf("Register auth: %v", err) + } + registry.GetGlobalRegistry().RegisterClient(auth.ID, auth.Provider, []*registry.ModelInfo{{ID: "test-model"}}) + t.Cleanup(func() { + registry.GetGlobalRegistry().UnregisterClient(auth.ID) + }) + + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager) + h := NewOpenAIResponsesAPIHandler(base) + if !h.websocketUpstreamSupportsCompactionReplayForModel("test-model") { + t.Fatalf("expected codex upstream to support compaction replay") + } +} + +func TestWebsocketUpstreamSupportsCompactionReplayForModelFalseWhenMixedBackends(t *testing.T) { + manager := coreauth.NewManager(nil, nil, nil) + auths := []*coreauth.Auth{ + {ID: "auth-codex", Provider: "codex", Status: coreauth.StatusActive}, + {ID: "auth-claude", Provider: "claude", Status: coreauth.StatusActive}, + } + for _, auth := range auths { + if _, err := manager.Register(context.Background(), auth); err != nil { + t.Fatalf("Register auth %s: %v", auth.ID, err) + } + registry.GetGlobalRegistry().RegisterClient(auth.ID, auth.Provider, []*registry.ModelInfo{{ID: "test-model"}}) + } + t.Cleanup(func() { + for _, auth := range auths { + registry.GetGlobalRegistry().UnregisterClient(auth.ID) + } + }) + + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager) + h := 
NewOpenAIResponsesAPIHandler(base) + if h.websocketUpstreamSupportsCompactionReplayForModel("test-model") { + t.Fatalf("expected mixed backend model to disable compaction replay bypass") + } +} + func TestResponsesWebsocketPrewarmHandledLocallyForSSEUpstream(t *testing.T) { gin.SetMode(gin.TestMode) @@ -1066,6 +1341,99 @@ func TestResponsesWebsocketPinsOnlyWebsocketCapableAuth(t *testing.T) { } } +func TestResponsesWebsocketReleasesPinnedAuthAfterQuotaError(t *testing.T) { + gin.SetMode(gin.TestMode) + + selector := &orderedWebsocketSelector{order: []string{"auth-a", "auth-b"}} + executor := &websocketPinnedFailoverExecutor{} + manager := coreauth.NewManager(nil, selector, nil) + manager.RegisterExecutor(executor) + + authA := &coreauth.Auth{ + ID: "auth-a", + Provider: executor.Identifier(), + Status: coreauth.StatusActive, + Attributes: map[string]string{"websockets": "true"}, + } + if _, err := manager.Register(context.Background(), authA); err != nil { + t.Fatalf("Register auth A: %v", err) + } + authB := &coreauth.Auth{ + ID: "auth-b", + Provider: executor.Identifier(), + Status: coreauth.StatusActive, + Attributes: map[string]string{"websockets": "true"}, + } + if _, err := manager.Register(context.Background(), authB); err != nil { + t.Fatalf("Register auth B: %v", err) + } + + registry.GetGlobalRegistry().RegisterClient(authA.ID, authA.Provider, []*registry.ModelInfo{{ID: "quota-model"}}) + registry.GetGlobalRegistry().RegisterClient(authB.ID, authB.Provider, []*registry.ModelInfo{{ID: "quota-model"}}) + t.Cleanup(func() { + registry.GetGlobalRegistry().UnregisterClient(authA.ID) + registry.GetGlobalRegistry().UnregisterClient(authB.ID) + }) + + base := handlers.NewBaseAPIHandlers(&sdkconfig.SDKConfig{}, manager) + h := NewOpenAIResponsesAPIHandler(base) + router := gin.New() + router.GET("/v1/responses/ws", h.ResponsesWebsocket) + + server := httptest.NewServer(router) + defer server.Close() + + wsURL := "ws" + strings.TrimPrefix(server.URL, "http") + 
"/v1/responses/ws" + conn, _, err := websocket.DefaultDialer.Dial(wsURL, nil) + if err != nil { + t.Fatalf("dial websocket: %v", err) + } + defer func() { + if errClose := conn.Close(); errClose != nil { + t.Fatalf("close websocket: %v", errClose) + } + }() + + requests := []string{ + `{"type":"response.create","model":"quota-model","input":[{"type":"message","id":"msg-1"}]}`, + `{"type":"response.create","previous_response_id":"resp-auth-a-1","input":[{"type":"message","id":"msg-2"}]}`, + `{"type":"response.create","previous_response_id":"resp-auth-a-1","input":[{"type":"message","id":"msg-3"}]}`, + } + wantTypes := []string{wsEventTypeCompleted, wsEventTypeError, wsEventTypeCompleted} + for i := range requests { + if errWrite := conn.WriteMessage(websocket.TextMessage, []byte(requests[i])); errWrite != nil { + t.Fatalf("write websocket message %d: %v", i+1, errWrite) + } + _, payload, errReadMessage := conn.ReadMessage() + if errReadMessage != nil { + t.Fatalf("read websocket message %d: %v", i+1, errReadMessage) + } + if got := gjson.GetBytes(payload, "type").String(); got != wantTypes[i] { + t.Fatalf("message %d payload type = %s, want %s: %s", i+1, got, wantTypes[i], payload) + } + if i == 1 && int(gjson.GetBytes(payload, "status").Int()) != http.StatusTooManyRequests { + t.Fatalf("quota payload status = %d, want %d: %s", gjson.GetBytes(payload, "status").Int(), http.StatusTooManyRequests, payload) + } + } + + if got := executor.AuthIDs(); len(got) != 3 || got[0] != "auth-a" || got[1] != "auth-a" || got[2] != "auth-b" { + t.Fatalf("selected auth IDs = %v, want [auth-a auth-a auth-b]", got) + } + + authBPayloads := executor.Payloads("auth-b") + if len(authBPayloads) != 1 { + t.Fatalf("auth-b payload count = %d, want 1", len(authBPayloads)) + } + authBPayload := authBPayloads[0] + if gjson.GetBytes(authBPayload, "previous_response_id").Exists() { + t.Fatalf("previous_response_id leaked after auth failover: %s", authBPayload) + } + authBInput := 
gjson.GetBytes(authBPayload, "input").Raw + if !strings.Contains(authBInput, `"id":"msg-1"`) || !strings.Contains(authBInput, `"id":"msg-3"`) { + t.Fatalf("auth-b replay input missing expected transcript items: %s", authBInput) + } +} + func TestNormalizeResponsesWebsocketRequestTreatsTranscriptReplacementAsReset(t *testing.T) { lastRequest := []byte(`{"model":"test-model","stream":true,"input":[{"type":"message","id":"msg-1"},{"type":"function_call","id":"fc-1","call_id":"call-1"},{"type":"function_call_output","id":"tool-out-1","call_id":"call-1"},{"type":"message","id":"assistant-1","role":"assistant"}]}`) lastResponseOutput := []byte(`[ @@ -1400,3 +1768,171 @@ func TestResponsesWebsocketCompactionResetsTurnStateOnTranscriptReplacement(t *t t.Fatalf("post-compact function call id = %s, want call-1", items[0].Get("call_id").String()) } } + +func TestInputContainsFullTranscriptFalseForAssistantMessageOnly(t *testing.T) { + input := gjson.Parse(`[ + {"type":"message","role":"user","content":"hello"}, + {"type":"message","role":"assistant","content":"hi there"} + ]`) + if inputContainsFullTranscript(input) { + t.Fatal("assistant message alone must not be treated as full transcript") + } +} + +func TestInputContainsFullTranscriptDetectsCompactionItem(t *testing.T) { + for _, typ := range []string{"compaction", "compaction_summary"} { + input := gjson.Parse(`[{"type":"message","role":"user","content":"hello"},{"type":"` + typ + `","encrypted_content":"summary"}]`) + if !inputContainsFullTranscript(input) { + t.Fatalf("expected full transcript for type=%s", typ) + } + } +} + +func TestInputContainsFullTranscriptFalseForIncremental(t *testing.T) { + // Normal incremental turns: user messages or function_call_output only. 
+ for _, raw := range []string{ + `[{"type":"function_call_output","call_id":"call-1","output":"result"}]`, + `[{"type":"message","role":"user","content":"next question"}]`, + `[]`, + } { + if inputContainsFullTranscript(gjson.Parse(raw)) { + t.Fatalf("incremental input must not be detected as full transcript: %s", raw) + } + } +} + +func TestNormalizeSubsequentRequestCompactSkipsMerge(t *testing.T) { + lastRequest := []byte(`{"model":"gpt-5.4","stream":true,"input":[ + {"type":"message","role":"user","id":"msg-1","content":"original long prompt"}, + {"type":"message","role":"assistant","id":"msg-2","content":"original long response"}, + {"type":"function_call","id":"fc-1","call_id":"call-old","name":"bash","arguments":"{}"}, + {"type":"function_call_output","id":"fco-1","call_id":"call-old","output":"old result"} + ]}`) + lastResponseOutput := []byte(`[ + {"type":"message","role":"assistant","id":"msg-3","content":"another assistant reply"}, + {"type":"function_call","id":"fc-2","call_id":"call-stale","name":"read","arguments":"{}"} + ]`) + + // Remote compact response: user messages + compaction item, NO assistant message. + // This is the primary compact scenario from Codex CLI. 
+ raw := []byte(`{"type":"response.create","input":[ + {"type":"message","role":"user","id":"msg-1c","content":"compacted user msg"}, + {"type":"compaction","encrypted_content":"conversation summary"} + ]}`) + + normalized, _, errMsg := normalizeResponsesWebsocketRequest(raw, lastRequest, lastResponseOutput) + if errMsg != nil { + t.Fatalf("unexpected error: %v", errMsg.Error) + } + + input := gjson.GetBytes(normalized, "input").Array() + if len(input) != 2 { + t.Fatalf("input len = %d, want 2 (compacted only); stale state was not skipped", len(input)) + } + if input[0].Get("id").String() != "msg-1c" { + t.Fatalf("input[0].id = %q, want %q", input[0].Get("id").String(), "msg-1c") + } + if input[1].Get("type").String() != "compaction" { + t.Fatalf("input[1].type = %q, want %q", input[1].Get("type").String(), "compaction") + } +} + +func TestNormalizeSubsequentRequestCompactMergesWhenCompactionReplayUnsupported(t *testing.T) { + lastRequest := []byte(`{"model":"gpt-5.4","stream":true,"input":[ + {"type":"message","role":"user","id":"msg-1","content":"original long prompt"}, + {"type":"message","role":"assistant","id":"msg-2","content":"original long response"}, + {"type":"function_call","id":"fc-1","call_id":"call-old","name":"bash","arguments":"{}"}, + {"type":"function_call_output","id":"fco-1","call_id":"call-old","output":"old result"} + ]}`) + lastResponseOutput := []byte(`[ + {"type":"message","role":"assistant","id":"msg-3","content":"another assistant reply"}, + {"type":"function_call","id":"fc-2","call_id":"call-stale","name":"read","arguments":"{}"} + ]`) + raw := []byte(`{"type":"response.create","input":[ + {"type":"message","role":"user","id":"msg-1c","content":"compacted user msg"}, + {"type":"compaction","encrypted_content":"conversation summary"} + ]}`) + + normalized, _, errMsg := normalizeResponsesWebsocketRequestWithMode(raw, lastRequest, lastResponseOutput, false, false) + if errMsg != nil { + t.Fatalf("unexpected error: %v", errMsg.Error) + } + + 
input := gjson.GetBytes(normalized, "input").Array() + if len(input) != 7 { + t.Fatalf("input len = %d, want 7 (merged fallback without compaction items)", len(input)) + } + wantIDs := []string{"msg-1", "msg-2", "fc-1", "fco-1", "msg-3", "fc-2", "msg-1c"} + for i, want := range wantIDs { + got := input[i].Get("id").String() + if got != want { + t.Fatalf("input[%d].id = %q, want %q", i, got, want) + } + } + for _, item := range input { + if item.Get("type").String() == "compaction" || item.Get("type").String() == "compaction_summary" { + t.Fatalf("compaction items must be stripped for unsupported downstream fallback: %s", item.Raw) + } + } +} + +func TestNormalizeSubsequentRequestIncrementalInputStillMerges(t *testing.T) { + // Normal incremental flow: user sends function_call_output (no assistant message). + lastRequest := []byte(`{"model":"gpt-5.4","stream":true,"input":[ + {"type":"message","role":"user","id":"msg-1","content":"hello"} + ]}`) + lastResponseOutput := []byte(`[ + {"type":"message","role":"assistant","id":"msg-2","content":"let me check"}, + {"type":"function_call","id":"fc-1","call_id":"call-1","name":"bash","arguments":"{}"} + ]`) + raw := []byte(`{"type":"response.create","input":[ + {"type":"function_call_output","call_id":"call-1","id":"fco-1","output":"done"} + ]}`) + + normalized, _, errMsg := normalizeResponsesWebsocketRequest(raw, lastRequest, lastResponseOutput) + if errMsg != nil { + t.Fatalf("unexpected error: %v", errMsg.Error) + } + + input := gjson.GetBytes(normalized, "input").Array() + + // Should be merged: msg-1 + msg-2 + fc-1 + fco-1 = 4 items + if len(input) != 4 { + t.Fatalf("input len = %d, want 4 (merged)", len(input)) + } + wantIDs := []string{"msg-1", "msg-2", "fc-1", "fco-1"} + for i, want := range wantIDs { + got := input[i].Get("id").String() + if got != want { + t.Fatalf("input[%d].id = %q, want %q", i, got, want) + } + } +} + +func TestNormalizeSubsequentRequestAssistantInputTriggersTranscriptReplacement(t *testing.T) 
{
+	// With shouldReplaceWebsocketTranscript in place, assistant messages in the
+	// input trigger transcript replacement (no merge with prior state).
+	lastRequest := []byte(`{"model":"gpt-5.4","stream":true,"input":[
+		{"type":"message","role":"user","id":"msg-1","content":"hello"}
+	]}`)
+	lastResponseOutput := []byte(`[
+		{"type":"message","role":"assistant","id":"msg-2","content":"prior assistant"},
+		{"type":"function_call","id":"fc-1","call_id":"call-1","name":"bash","arguments":"{}"}
+	]`)
+	raw := []byte(`{"type":"response.append","input":[
+		{"type":"message","role":"assistant","id":"msg-3","content":"patched assistant turn"}
+	]}`)
+
+	normalized, _, errMsg := normalizeResponsesWebsocketRequest(raw, lastRequest, lastResponseOutput)
+	if errMsg != nil {
+		t.Fatalf("unexpected error: %v", errMsg.Error)
+	}
+
+	input := gjson.GetBytes(normalized, "input").Array()
+	if len(input) != 1 {
+		t.Fatalf("input len = %d, want 1 (transcript replacement, not merge)", len(input))
+	}
+	if input[0].Get("id").String() != "msg-3" {
+		t.Fatalf("input[0].id = %q, want %q", input[0].Get("id").String(), "msg-3")
+	}
+}
diff --git a/sdk/api/handlers/openai/openai_responses_websocket_toolcall_repair.go b/sdk/api/handlers/openai/openai_responses_websocket_toolcall_repair.go
index 1a5772ec70..c521bec049 100644
--- a/sdk/api/handlers/openai/openai_responses_websocket_toolcall_repair.go
+++ b/sdk/api/handlers/openai/openai_responses_websocket_toolcall_repair.go
@@ -300,11 +300,6 @@ func repairResponsesToolCallsArray(outputCache, callCache *websocketToolOutputCa
 			continue
 		}
 
-		if allowOrphanOutputs {
-			filtered = append(filtered, item)
-			continue
-		}
-
 		if _, ok := callPresent[callID]; ok {
 			filtered = append(filtered, item)
 			continue
@@ -322,6 +317,11 @@ func repairResponsesToolCallsArray(outputCache, callCache *websocketToolOutputCa
 			}
 		}
 
+		if allowOrphanOutputs {
+			filtered = append(filtered, item)
+			continue
+		}
+
 		// Drop orphaned function_call_output items; upstream rejects transcripts 
with missing calls. continue } diff --git a/sdk/api/handlers/stream_forwarder.go b/sdk/api/handlers/stream_forwarder.go index 401baca8fa..63ddc31e43 100644 --- a/sdk/api/handlers/stream_forwarder.go +++ b/sdk/api/handlers/stream_forwarder.go @@ -5,7 +5,7 @@ import ( "time" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" ) type StreamForwardOptions struct { diff --git a/sdk/api/management.go b/sdk/api/management.go index a5a1cfc490..689cda3dca 100644 --- a/sdk/api/management.go +++ b/sdk/api/management.go @@ -1,16 +1,21 @@ // Package api exposes helpers for embedding CLIProxyAPI. // -// It wraps internal management handler types so external projects can integrate -// management endpoints without importing internal packages. +// It wraps internal management handler types and helpers so external projects +// can integrate management endpoints without importing internal packages. package api import ( + "context" + "github.com/gin-gonic/gin" - internalmanagement "github.com/router-for-me/CLIProxyAPI/v6/internal/api/handlers/management" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + internalmanagement "github.com/router-for-me/CLIProxyAPI/v7/internal/api/handlers/management" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) +// Handler re-exports the management handler used by the internal HTTP API. +type Handler = internalmanagement.Handler + // ManagementTokenRequester exposes a limited subset of management endpoints for requesting tokens. 
type ManagementTokenRequester interface { RequestAnthropicToken(*gin.Context) @@ -23,13 +28,23 @@ type ManagementTokenRequester interface { } type managementTokenRequester struct { - handler *internalmanagement.Handler + handler *Handler +} + +// NewHandler creates a management handler for SDK consumers. +func NewHandler(cfg *config.Config, configFilePath string, manager *coreauth.Manager) *Handler { + return internalmanagement.NewHandler(cfg, configFilePath, manager) +} + +// NewHandlerWithoutConfigFilePath creates a management handler that skips config file persistence. +func NewHandlerWithoutConfigFilePath(cfg *config.Config, manager *coreauth.Manager) *Handler { + return internalmanagement.NewHandlerWithoutConfigFilePath(cfg, manager) } // NewManagementTokenRequester creates a limited management handler exposing only token request endpoints. func NewManagementTokenRequester(cfg *config.Config, manager *coreauth.Manager) ManagementTokenRequester { return &managementTokenRequester{ - handler: internalmanagement.NewHandlerWithoutConfigFilePath(cfg, manager), + handler: NewHandlerWithoutConfigFilePath(cfg, manager), } } @@ -60,3 +75,63 @@ func (m *managementTokenRequester) GetAuthStatus(c *gin.Context) { func (m *managementTokenRequester) PostOAuthCallback(c *gin.Context) { m.handler.PostOAuthCallback(c) } + +// WriteConfig persists management configuration to disk. +func WriteConfig(path string, data []byte) error { + return internalmanagement.WriteConfig(path, data) +} + +// RegisterOAuthSession records a pending OAuth callback state. +func RegisterOAuthSession(state, provider string) { + internalmanagement.RegisterOAuthSession(state, provider) +} + +// SetOAuthSessionError stores an OAuth session error message. +func SetOAuthSessionError(state, message string) { + internalmanagement.SetOAuthSessionError(state, message) +} + +// CompleteOAuthSession marks a single OAuth session as completed. 
+func CompleteOAuthSession(state string) { + internalmanagement.CompleteOAuthSession(state) +} + +// CompleteOAuthSessionsByProvider removes all pending OAuth sessions for a provider. +func CompleteOAuthSessionsByProvider(provider string) int { + return internalmanagement.CompleteOAuthSessionsByProvider(provider) +} + +// GetOAuthSession returns the current OAuth session state. +func GetOAuthSession(state string) (provider string, status string, ok bool) { + return internalmanagement.GetOAuthSession(state) +} + +// IsOAuthSessionPending reports whether a provider/state pair is still pending. +func IsOAuthSessionPending(state, provider string) bool { + return internalmanagement.IsOAuthSessionPending(state, provider) +} + +// ValidateOAuthState validates an OAuth state token. +func ValidateOAuthState(state string) error { + return internalmanagement.ValidateOAuthState(state) +} + +// NormalizeOAuthProvider normalizes a provider name to its canonical form. +func NormalizeOAuthProvider(provider string) (string, error) { + return internalmanagement.NormalizeOAuthProvider(provider) +} + +// WriteOAuthCallbackFile writes an OAuth callback payload to disk. +func WriteOAuthCallbackFile(authDir, provider, state, code, errorMessage string) (string, error) { + return internalmanagement.WriteOAuthCallbackFile(authDir, provider, state, code, errorMessage) +} + +// WriteOAuthCallbackFileForPendingSession writes an OAuth callback payload for a pending session. +func WriteOAuthCallbackFileForPendingSession(authDir, provider, state, code, errorMessage string) (string, error) { + return internalmanagement.WriteOAuthCallbackFileForPendingSession(authDir, provider, state, code, errorMessage) +} + +// PopulateAuthContext copies auth metadata from a Gin context into a request context. 
+func PopulateAuthContext(ctx context.Context, c *gin.Context) context.Context { + return internalmanagement.PopulateAuthContext(ctx, c) +} diff --git a/sdk/api/options.go b/sdk/api/options.go index 8497884bf0..e2bbff78e9 100644 --- a/sdk/api/options.go +++ b/sdk/api/options.go @@ -8,10 +8,10 @@ import ( "time" "github.com/gin-gonic/gin" - internalapi "github.com/router-for-me/CLIProxyAPI/v6/internal/api" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/api/handlers" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/logging" + internalapi "github.com/router-for-me/CLIProxyAPI/v7/internal/api" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/api/handlers" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/logging" ) // ServerOption customises HTTP server construction. diff --git a/sdk/auth/antigravity.go b/sdk/auth/antigravity.go index d52bf1d259..0a947b20f0 100644 --- a/sdk/auth/antigravity.go +++ b/sdk/auth/antigravity.go @@ -8,12 +8,12 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/antigravity" - "github.com/router-for-me/CLIProxyAPI/v6/internal/browser" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/antigravity" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" ) diff --git a/sdk/auth/claude.go b/sdk/auth/claude.go index d82a718b2d..726fa922ae 100644 --- a/sdk/auth/claude.go +++ 
b/sdk/auth/claude.go @@ -7,13 +7,13 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/claude" - "github.com/router-for-me/CLIProxyAPI/v6/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/claude" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" // legacy client removed - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" ) diff --git a/sdk/auth/codearts.go b/sdk/auth/codearts.go new file mode 100644 index 0000000000..f04ce1f887 --- /dev/null +++ b/sdk/auth/codearts.go @@ -0,0 +1,175 @@ +package auth + +import ( + "context" + "crypto/rand" + "fmt" + "net" + "net/http" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codearts" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +var codeartsRefreshLead = 4 * time.Hour + +type CodeArtsAuthenticator struct{} + +func NewCodeArtsAuthenticator() Authenticator { return &CodeArtsAuthenticator{} } + +func (CodeArtsAuthenticator) Provider() string { return "codearts" } + +func (CodeArtsAuthenticator) RefreshLead() *time.Duration { + return &codeartsRefreshLead +} + +type codeartsCallbackResult struct { + Identifier string + Redirect string + Error string +} + +func (a CodeArtsAuthenticator) Login(ctx context.Context, cfg 
*config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cliproxy auth: configuration is required") + } + if ctx == nil { + ctx = context.Background() + } + if opts == nil { + opts = &LoginOptions{} + } + + listener, err := net.Listen("tcp", "127.0.0.1:0") + if err != nil { + return nil, fmt.Errorf("codearts: failed to find free port: %w", err) + } + port := listener.Addr().(*net.TCPAddr).Port + + cbChan := make(chan codeartsCallbackResult, 1) + mux := http.NewServeMux() + mux.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) { + identifier := r.URL.Query().Get("identifier") + redirect := r.URL.Query().Get("redirect") + cbChan <- codeartsCallbackResult{ + Identifier: identifier, + Redirect: redirect, + } + if redirect != "" { + http.Redirect(w, r, redirect, http.StatusTemporaryRedirect) + return + } + w.Header().Set("Content-Type", "text/html; charset=utf-8") + _, _ = w.Write([]byte(`CodeArts Login` + + `` + + `
<div style="text-align:center;margin-top:20%;font-family:sans-serif">
<h2>✓ Login Successful</h2>
<p>You can close this window and return to the terminal.</p>
</div>
`)) + }) + + srv := &http.Server{Handler: mux} + go func() { + if errServe := srv.Serve(listener); errServe != nil && !strings.Contains(errServe.Error(), "Server closed") { + log.Warnf("codearts callback server error: %v", errServe) + } + }() + defer func() { + shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + _ = srv.Shutdown(shutdownCtx) + }() + + ticketID := generateCodeArtsTicketID() + codeartsAuth := codearts.NewCodeArtsAuth(nil) + authURL := codeartsAuth.AuthorizationURL(ticketID, port) + + if !opts.NoBrowser { + fmt.Println("Opening browser for CodeArts authentication") + if !browser.IsAvailable() { + log.Warn("No browser available; please open the URL manually") + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } else if errOpen := browser.OpenURL(authURL); errOpen != nil { + log.Warnf("Failed to open browser automatically: %v", errOpen) + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + } else { + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + + fmt.Println("Waiting for CodeArts authentication callback...") + + var cbRes codeartsCallbackResult + timeoutTimer := time.NewTimer(5 * time.Minute) + defer timeoutTimer.Stop() + + select { + case cbRes = <-cbChan: + case <-timeoutTimer.C: + return nil, fmt.Errorf("codearts: authentication timed out") + } + + if cbRes.Error != "" { + return nil, fmt.Errorf("codearts: authentication failed: %s", cbRes.Error) + } + if cbRes.Identifier == "" { + return nil, fmt.Errorf("codearts: missing identifier in callback") + } + + fmt.Println("Callback received, polling for login result...") + + pollCtx, pollCancel := context.WithTimeout(ctx, 2*time.Minute) + defer pollCancel() + + authResult, err := codeartsAuth.PollForLoginResult(pollCtx, ticketID, 
cbRes.Identifier) + if err != nil { + return nil, fmt.Errorf("codearts: %w", err) + } + + tokenData, err := codeartsAuth.ProcessLoginResult(ctx, authResult) + if err != nil { + return nil, fmt.Errorf("codearts: %w", err) + } + + label := tokenData.UserName + if label == "" { + label = "codearts" + } + + fmt.Println("CodeArts authentication successful") + + return &coreauth.Auth{ + ID: fmt.Sprintf("codearts-%s.json", tokenData.UserName), + Provider: "codearts", + FileName: fmt.Sprintf("codearts-%s.json", tokenData.UserName), + Label: label, + Metadata: map[string]any{ + "type": "codearts", + "ak": tokenData.AK, + "sk": tokenData.SK, + "security_token": tokenData.SecurityToken, + "x_auth_token": tokenData.XAuthToken, + "expires_at": tokenData.ExpiresAt.Format(time.RFC3339), + "user_id": tokenData.UserID, + "user_name": tokenData.UserName, + "domain_id": tokenData.DomainID, + "email": tokenData.Email, + }, + }, nil +} + +func generateCodeArtsTicketID() string { + b := make([]byte, 32) + rand.Read(b) + return fmt.Sprintf("%x", b) +} diff --git a/sdk/auth/codebuddy.go b/sdk/auth/codebuddy.go new file mode 100644 index 0000000000..c65ab4e4c5 --- /dev/null +++ b/sdk/auth/codebuddy.go @@ -0,0 +1,95 @@ +package auth + +import ( + "context" + "fmt" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codebuddy" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +// CodeBuddyAuthenticator implements the browser OAuth polling flow for CodeBuddy. +type CodeBuddyAuthenticator struct{} + +// NewCodeBuddyAuthenticator constructs a new CodeBuddy authenticator. +func NewCodeBuddyAuthenticator() Authenticator { + return &CodeBuddyAuthenticator{} +} + +// Provider returns the provider key for codebuddy. 
+func (CodeBuddyAuthenticator) Provider() string { + return "codebuddy" +} + +// codeBuddyRefreshLead is the duration before token expiry when a refresh should be attempted. +var codeBuddyRefreshLead = 24 * time.Hour + +// RefreshLead returns how soon before expiry a refresh should be attempted. +// CodeBuddy tokens have a long validity period, so we refresh 24 hours before expiry. +func (CodeBuddyAuthenticator) RefreshLead() *time.Duration { + return &codeBuddyRefreshLead +} + +// Login initiates the browser OAuth flow for CodeBuddy. +func (a CodeBuddyAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("codebuddy: configuration is required") + } + if opts == nil { + opts = &LoginOptions{} + } + if ctx == nil { + ctx = context.Background() + } + + authSvc := codebuddy.NewCodeBuddyAuth(cfg) + + authState, err := authSvc.FetchAuthState(ctx) + if err != nil { + return nil, fmt.Errorf("codebuddy: failed to fetch auth state: %w", err) + } + + fmt.Printf("\nPlease open the following URL in your browser to login:\n\n %s\n\n", authState.AuthURL) + fmt.Println("Waiting for authorization...") + + if !opts.NoBrowser { + if browser.IsAvailable() { + if errOpen := browser.OpenURL(authState.AuthURL); errOpen != nil { + log.Debugf("codebuddy: failed to open browser: %v", errOpen) + } + } + } + + storage, err := authSvc.PollForToken(ctx, authState.State) + if err != nil { + return nil, fmt.Errorf("codebuddy: %s: %w", codebuddy.GetUserFriendlyMessage(err), err) + } + + fmt.Printf("\nSuccessfully logged in! 
(User ID: %s)\n", storage.UserID) + + authID := fmt.Sprintf("codebuddy-%s.json", storage.UserID) + + label := storage.UserID + if label == "" { + label = "codebuddy-user" + } + + return &coreauth.Auth{ + ID: authID, + Provider: a.Provider(), + FileName: authID, + Label: label, + Storage: storage, + Metadata: map[string]any{ + "access_token": storage.AccessToken, + "refresh_token": storage.RefreshToken, + "user_id": storage.UserID, + "domain": storage.Domain, + "expires_in": storage.ExpiresIn, + }, + }, nil +} diff --git a/sdk/auth/codebuddy_ai.go b/sdk/auth/codebuddy_ai.go new file mode 100644 index 0000000000..79f97ad7d4 --- /dev/null +++ b/sdk/auth/codebuddy_ai.go @@ -0,0 +1,88 @@ +package auth + +import ( + "context" + "fmt" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codebuddy_ai" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +type CodeBuddyAIAuthenticator struct{} + +func NewCodeBuddyAIAuthenticator() Authenticator { + return &CodeBuddyAIAuthenticator{} +} + +func (CodeBuddyAIAuthenticator) Provider() string { + return "codebuddy-ai" +} + +var codeBuddyAIRefreshLead = 24 * time.Hour + +func (CodeBuddyAIAuthenticator) RefreshLead() *time.Duration { + return &codeBuddyAIRefreshLead +} + +func (a CodeBuddyAIAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("codebuddy-ai: configuration is required") + } + if opts == nil { + opts = &LoginOptions{} + } + if ctx == nil { + ctx = context.Background() + } + + authSvc := codebuddy_ai.NewCodeBuddyAIAuth(cfg) + + authState, err := authSvc.FetchAuthState(ctx) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: failed to fetch auth state: %w", err) + } + + fmt.Printf("\nPlease open the following URL in your 
browser to login:\n\n %s\n\n", authState.AuthURL) + fmt.Println("Waiting for authorization...") + + if !opts.NoBrowser { + if browser.IsAvailable() { + if errOpen := browser.OpenURL(authState.AuthURL); errOpen != nil { + log.Debugf("codebuddy-ai: failed to open browser: %v", errOpen) + } + } + } + + storage, err := authSvc.PollForToken(ctx, authState.State) + if err != nil { + return nil, fmt.Errorf("codebuddy-ai: %s: %w", codebuddy_ai.GetUserFriendlyMessage(err), err) + } + + fmt.Printf("\nSuccessfully logged in! (User ID: %s)\n", storage.UserID) + + authID := fmt.Sprintf("codebuddy-ai-%s.json", storage.UserID) + + label := storage.UserID + if label == "" { + label = "codebuddy-ai-user" + } + + return &coreauth.Auth{ + ID: authID, + Provider: a.Provider(), + FileName: authID, + Label: label, + Storage: storage, + Metadata: map[string]any{ + "access_token": storage.AccessToken, + "refresh_token": storage.RefreshToken, + "user_id": storage.UserID, + "domain": storage.Domain, + "expires_in": storage.ExpiresIn, + }, + }, nil +} diff --git a/sdk/auth/codex.go b/sdk/auth/codex.go index 269e3d8b21..be58c9c5a6 100644 --- a/sdk/auth/codex.go +++ b/sdk/auth/codex.go @@ -7,13 +7,13 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" // legacy client removed - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/misc" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log 
"github.com/sirupsen/logrus" ) diff --git a/sdk/auth/codex_device.go b/sdk/auth/codex_device.go index 10f59fb97b..d7ea4e1fe9 100644 --- a/sdk/auth/codex_device.go +++ b/sdk/auth/codex_device.go @@ -13,11 +13,11 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex" - "github.com/router-for-me/CLIProxyAPI/v6/internal/browser" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/util" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/codex" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" ) diff --git a/sdk/auth/cursor.go b/sdk/auth/cursor.go new file mode 100644 index 0000000000..792eb60637 --- /dev/null +++ b/sdk/auth/cursor.go @@ -0,0 +1,98 @@ +package auth + +import ( + "context" + "fmt" + "time" + + cursorauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/cursor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +// CursorAuthenticator implements OAuth PKCE login for Cursor. +type CursorAuthenticator struct{} + +// NewCursorAuthenticator constructs a new Cursor authenticator. +func NewCursorAuthenticator() Authenticator { + return &CursorAuthenticator{} +} + +// Provider returns the provider key for cursor. +func (CursorAuthenticator) Provider() string { + return "cursor" +} + +// RefreshLead returns the time before expiry when a refresh should be attempted. 
+func (CursorAuthenticator) RefreshLead() *time.Duration { + d := 10 * time.Minute + return &d +} + +// Login initiates the Cursor PKCE authentication flow. +func (a CursorAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cursor auth: configuration is required") + } + if opts == nil { + opts = &LoginOptions{} + } + + // Generate PKCE auth parameters + authParams, err := cursorauth.GenerateAuthParams() + if err != nil { + return nil, fmt.Errorf("cursor: failed to generate auth params: %w", err) + } + + // Display the login URL + log.Info("Starting Cursor authentication...") + log.Infof("Please visit this URL to log in: %s", authParams.LoginURL) + + // Try to open the browser automatically + if !opts.NoBrowser { + if browser.IsAvailable() { + if errOpen := browser.OpenURL(authParams.LoginURL); errOpen != nil { + log.Warnf("Failed to open browser automatically: %v", errOpen) + } + } + } + + log.Info("Waiting for Cursor authorization...") + + // Poll for the auth result + tokens, err := cursorauth.PollForAuth(ctx, authParams.UUID, authParams.Verifier) + if err != nil { + return nil, fmt.Errorf("cursor: authentication failed: %w", err) + } + + expiresAt := cursorauth.GetTokenExpiry(tokens.AccessToken) + + // Auto-identify account from JWT sub claim + sub := cursorauth.ParseJWTSub(tokens.AccessToken) + subHash := cursorauth.SubToShortHash(sub) + + log.Info("Cursor authentication successful!") + + metadata := map[string]any{ + "type": "cursor", + "access_token": tokens.AccessToken, + "refresh_token": tokens.RefreshToken, + "expires_at": expiresAt.Format(time.RFC3339), + "timestamp": time.Now().UnixMilli(), + } + if sub != "" { + metadata["sub"] = sub + } + + fileName := cursorauth.CredentialFileName("", subHash) + + return &coreauth.Auth{ + ID: fileName, + Provider: a.Provider(), + FileName: fileName, + Label: cursorauth.DisplayLabel("", subHash), + Metadata: metadata, + 
}, nil +} diff --git a/sdk/auth/errors.go b/sdk/auth/errors.go index 78fe9a17bd..f950e925ff 100644 --- a/sdk/auth/errors.go +++ b/sdk/auth/errors.go @@ -3,7 +3,7 @@ package auth import ( "fmt" - "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces" + "github.com/router-for-me/CLIProxyAPI/v7/internal/interfaces" ) // ProjectSelectionError indicates that the user must choose a specific project ID. diff --git a/sdk/auth/filestore.go b/sdk/auth/filestore.go index f8f49f44ba..5675caac29 100644 --- a/sdk/auth/filestore.go +++ b/sdk/auth/filestore.go @@ -15,7 +15,7 @@ import ( "sync" "time" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // FileTokenStore persists token records and auth metadata using the filesystem as backing storage. @@ -72,6 +72,10 @@ func (s *FileTokenStore) Save(ctx context.Context, auth *cliproxyauth.Auth) (str switch { case auth.Storage != nil: + if auth.Metadata == nil { + auth.Metadata = make(map[string]any) + } + auth.Metadata["disabled"] = auth.Disabled if setter, ok := auth.Storage.(metadataSetter); ok { setter.SetMetadata(auth.Metadata) } diff --git a/sdk/auth/filestore_disabled_test.go b/sdk/auth/filestore_disabled_test.go new file mode 100644 index 0000000000..665f9ebf1f --- /dev/null +++ b/sdk/auth/filestore_disabled_test.go @@ -0,0 +1,64 @@ +package auth + +import ( + "context" + "encoding/json" + "os" + "path/filepath" + "testing" + + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +type testTokenStorage struct { + meta map[string]any +} + +func (s *testTokenStorage) SetMetadata(meta map[string]any) { s.meta = meta } + +func (s *testTokenStorage) SaveTokenToFile(authFilePath string) error { + raw, err := json.Marshal(s.meta) + if err != nil { + return err + } + return os.WriteFile(authFilePath, raw, 0o600) +} + +func TestFileTokenStore_Save_DisabledPersistsFlagForTokenStorage(t *testing.T) { + ctx 
:= context.Background() + baseDir := t.TempDir() + path := filepath.Join(baseDir, "disabled.json") + + if err := os.WriteFile(path, []byte(`{"type":"test","disabled":true}`), 0o600); err != nil { + t.Fatalf("seed auth file: %v", err) + } + + store := NewFileTokenStore() + store.SetBaseDir(baseDir) + storage := &testTokenStorage{} + + auth := &cliproxyauth.Auth{ + ID: "disabled.json", + Provider: "test", + FileName: "disabled.json", + Disabled: true, + Storage: storage, + Metadata: map[string]any{"type": "test"}, + } + + if _, err := store.Save(ctx, auth); err != nil { + t.Fatalf("Save() error: %v", err) + } + + raw, err := os.ReadFile(path) + if err != nil { + t.Fatalf("read auth file: %v", err) + } + var meta map[string]any + if err := json.Unmarshal(raw, &meta); err != nil { + t.Fatalf("unmarshal auth file: %v", err) + } + if disabled, _ := meta["disabled"].(bool); !disabled { + t.Fatalf("disabled=%v, want true (raw=%s)", meta["disabled"], string(raw)) + } +} diff --git a/sdk/auth/gemini.go b/sdk/auth/gemini.go index 2b8f9c2b88..ba7c7728ad 100644 --- a/sdk/auth/gemini.go +++ b/sdk/auth/gemini.go @@ -5,10 +5,10 @@ import ( "fmt" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/gemini" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/gemini" // legacy client removed - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // GeminiAuthenticator implements the login flow for Google Gemini CLI accounts. 
diff --git a/sdk/auth/github_copilot.go b/sdk/auth/github_copilot.go new file mode 100644 index 0000000000..5a0eb7fdd2 --- /dev/null +++ b/sdk/auth/github_copilot.go @@ -0,0 +1,136 @@ +package auth + +import ( + "context" + "fmt" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/copilot" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +// GitHubCopilotAuthenticator implements the OAuth device flow login for GitHub Copilot. +type GitHubCopilotAuthenticator struct{} + +// NewGitHubCopilotAuthenticator constructs a new GitHub Copilot authenticator. +func NewGitHubCopilotAuthenticator() Authenticator { + return &GitHubCopilotAuthenticator{} +} + +// Provider returns the provider key for github-copilot. +func (GitHubCopilotAuthenticator) Provider() string { + return "github-copilot" +} + +// RefreshLead returns nil since GitHub OAuth tokens don't expire in the traditional sense. +// The token remains valid until the user revokes it or the Copilot subscription expires. +func (GitHubCopilotAuthenticator) RefreshLead() *time.Duration { + return nil +} + +// Login initiates the GitHub device flow authentication for Copilot access. 
+func (a GitHubCopilotAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cliproxy auth: configuration is required") + } + if opts == nil { + opts = &LoginOptions{} + } + + authSvc := copilot.NewCopilotAuth(cfg) + + // Start the device flow + fmt.Println("Starting GitHub Copilot authentication...") + deviceCode, err := authSvc.StartDeviceFlow(ctx) + if err != nil { + return nil, fmt.Errorf("github-copilot: failed to start device flow: %w", err) + } + + // Display the user code and verification URL + fmt.Printf("\nTo authenticate, please visit: %s\n", deviceCode.VerificationURI) + fmt.Printf("And enter the code: %s\n\n", deviceCode.UserCode) + + // Try to open the browser automatically + if !opts.NoBrowser { + if browser.IsAvailable() { + if errOpen := browser.OpenURL(deviceCode.VerificationURI); errOpen != nil { + log.Warnf("Failed to open browser automatically: %v", errOpen) + } + } + } + + fmt.Println("Waiting for GitHub authorization...") + fmt.Printf("(This will timeout in %d seconds if not authorized)\n", deviceCode.ExpiresIn) + + // Wait for user authorization + authBundle, err := authSvc.WaitForAuthorization(ctx, deviceCode) + if err != nil { + errMsg := copilot.GetUserFriendlyMessage(err) + return nil, fmt.Errorf("github-copilot: %s", errMsg) + } + + // Verify the token can get a Copilot API token + fmt.Println("Verifying Copilot access...") + apiToken, err := authSvc.GetCopilotAPIToken(ctx, authBundle.TokenData.AccessToken) + if err != nil { + return nil, fmt.Errorf("github-copilot: failed to verify Copilot access - you may not have an active Copilot subscription: %w", err) + } + + // Create the token storage + tokenStorage := authSvc.CreateTokenStorage(authBundle) + + // Build metadata with token information for the executor + metadata := map[string]any{ + "type": "github-copilot", + "username": authBundle.Username, + "email": authBundle.Email, + "name": 
authBundle.Name, + "access_token": authBundle.TokenData.AccessToken, + "token_type": authBundle.TokenData.TokenType, + "scope": authBundle.TokenData.Scope, + "timestamp": time.Now().UnixMilli(), + } + + if apiToken.ExpiresAt > 0 { + metadata["api_token_expires_at"] = apiToken.ExpiresAt + } + + fileName := fmt.Sprintf("github-copilot-%s.json", authBundle.Username) + + label := authBundle.Email + if label == "" { + label = authBundle.Username + } + + fmt.Printf("\nGitHub Copilot authentication successful for user: %s\n", authBundle.Username) + + return &coreauth.Auth{ + ID: fileName, + Provider: a.Provider(), + FileName: fileName, + Label: label, + Storage: tokenStorage, + Metadata: metadata, + }, nil +} + +// RefreshGitHubCopilotToken validates and returns the current token status. +// GitHub OAuth tokens don't need traditional refresh - we just validate they still work. +func RefreshGitHubCopilotToken(ctx context.Context, cfg *config.Config, storage *copilot.CopilotTokenStorage) error { + if storage == nil || storage.AccessToken == "" { + return fmt.Errorf("no token available") + } + + authSvc := copilot.NewCopilotAuth(cfg) + + // Validate the token can still get a Copilot API token + _, err := authSvc.GetCopilotAPIToken(ctx, storage.AccessToken) + if err != nil { + return fmt.Errorf("token validation failed: %w", err) + } + + return nil +} diff --git a/sdk/auth/gitlab.go b/sdk/auth/gitlab.go new file mode 100644 index 0000000000..d53ae69ffb --- /dev/null +++ b/sdk/auth/gitlab.go @@ -0,0 +1,482 @@ +package auth + +import ( + "context" + "fmt" + "os" + "strings" + "time" + + gitlabauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/gitlab" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log 
"github.com/sirupsen/logrus" +) + +const ( + gitLabLoginModeMetadataKey = "login_mode" + gitLabLoginModeOAuth = "oauth" + gitLabLoginModePAT = "pat" + gitLabBaseURLMetadataKey = "base_url" + gitLabOAuthClientIDMetadataKey = "oauth_client_id" + gitLabOAuthClientSecretMetadataKey = "oauth_client_secret" + gitLabPersonalAccessTokenMetadataKey = "personal_access_token" +) + +var gitLabRefreshLead = 5 * time.Minute + +type GitLabAuthenticator struct { + CallbackPort int +} + +func NewGitLabAuthenticator() *GitLabAuthenticator { + return &GitLabAuthenticator{CallbackPort: gitlabauth.DefaultCallbackPort} +} + +func (a *GitLabAuthenticator) Provider() string { + return "gitlab" +} + +func (a *GitLabAuthenticator) RefreshLead() *time.Duration { + return &gitLabRefreshLead +} + +func (a *GitLabAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cliproxy auth: configuration is required") + } + if ctx == nil { + ctx = context.Background() + } + if opts == nil { + opts = &LoginOptions{} + } + + switch strings.ToLower(strings.TrimSpace(opts.Metadata[gitLabLoginModeMetadataKey])) { + case "", gitLabLoginModeOAuth: + return a.loginOAuth(ctx, cfg, opts) + case gitLabLoginModePAT: + return a.loginPAT(ctx, cfg, opts) + default: + return nil, fmt.Errorf("gitlab auth: unsupported login mode %q", opts.Metadata[gitLabLoginModeMetadataKey]) + } +} + +func (a *GitLabAuthenticator) loginOAuth(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + client := gitlabauth.NewAuthClient(cfg) + baseURL := a.resolveString(opts, gitLabBaseURLMetadataKey, gitlabauth.DefaultBaseURL) + clientID, err := a.requireInput(opts, gitLabOAuthClientIDMetadataKey, "Enter GitLab OAuth application client ID: ") + if err != nil { + return nil, err + } + clientSecret, err := a.optionalInput(opts, gitLabOAuthClientSecretMetadataKey, "Enter GitLab OAuth application client secret (press 
Enter for public PKCE app): ") + if err != nil { + return nil, err + } + + callbackPort := a.CallbackPort + if opts.CallbackPort > 0 { + callbackPort = opts.CallbackPort + } + redirectURI := gitlabauth.RedirectURL(callbackPort) + + pkceCodes, err := gitlabauth.GeneratePKCECodes() + if err != nil { + return nil, err + } + state, err := misc.GenerateRandomState() + if err != nil { + return nil, fmt.Errorf("gitlab state generation failed: %w", err) + } + + oauthServer := gitlabauth.NewOAuthServer(callbackPort) + if err := oauthServer.Start(); err != nil { + return nil, err + } + defer func() { + stopCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + if stopErr := oauthServer.Stop(stopCtx); stopErr != nil { + log.Warnf("gitlab oauth server stop error: %v", stopErr) + } + }() + + authURL, err := client.GenerateAuthURL(baseURL, clientID, redirectURI, state, pkceCodes) + if err != nil { + return nil, err + } + + if !opts.NoBrowser { + fmt.Println("Opening browser for GitLab Duo authentication") + if !browser.IsAvailable() { + log.Warn("No browser available; please open the URL manually") + util.PrintSSHTunnelInstructions(callbackPort) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } else if err = browser.OpenURL(authURL); err != nil { + log.Warnf("Failed to open browser automatically: %v", err) + util.PrintSSHTunnelInstructions(callbackPort) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + } else { + util.PrintSSHTunnelInstructions(callbackPort) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + + fmt.Println("Waiting for GitLab OAuth callback...") + + callbackCh := make(chan *gitlabauth.OAuthResult, 1) + callbackErrCh := make(chan error, 1) + go func() { + result, waitErr := oauthServer.WaitForCallback(5 * time.Minute) + if waitErr != nil { + callbackErrCh <- waitErr + return + } + callbackCh <- result + }() + + var 
result *gitlabauth.OAuthResult + var manualPromptTimer *time.Timer + var manualPromptC <-chan time.Time + if opts.Prompt != nil { + manualPromptTimer = time.NewTimer(15 * time.Second) + manualPromptC = manualPromptTimer.C + defer manualPromptTimer.Stop() + } + +waitForCallback: + for { + select { + case result = <-callbackCh: + break waitForCallback + case err = <-callbackErrCh: + return nil, err + case <-manualPromptC: + manualPromptC = nil + if manualPromptTimer != nil { + manualPromptTimer.Stop() + } + input, promptErr := opts.Prompt("Paste the GitLab callback URL (or press Enter to keep waiting): ") + if promptErr != nil { + return nil, promptErr + } + parsed, parseErr := misc.ParseOAuthCallback(input) + if parseErr != nil { + return nil, parseErr + } + if parsed == nil { + continue + } + result = &gitlabauth.OAuthResult{ + Code: parsed.Code, + State: parsed.State, + Error: parsed.Error, + } + break waitForCallback + } + } + + if result.Error != "" { + return nil, fmt.Errorf("gitlab oauth returned error: %s", result.Error) + } + if result.State != state { + return nil, fmt.Errorf("gitlab auth: state mismatch") + } + + tokenResp, err := client.ExchangeCodeForTokens(ctx, baseURL, clientID, clientSecret, redirectURI, result.Code, pkceCodes.CodeVerifier) + if err != nil { + return nil, err + } + accessToken := strings.TrimSpace(tokenResp.AccessToken) + if accessToken == "" { + return nil, fmt.Errorf("gitlab auth: missing access token") + } + + user, err := client.GetCurrentUser(ctx, baseURL, accessToken) + if err != nil { + return nil, err + } + direct, err := client.FetchDirectAccess(ctx, baseURL, accessToken) + if err != nil { + return nil, err + } + + identifier := gitLabAccountIdentifier(user) + fileName := fmt.Sprintf("gitlab-%s.json", sanitizeGitLabFileName(identifier)) + metadata := buildGitLabAuthMetadata(baseURL, gitLabLoginModeOAuth, tokenResp, direct) + metadata["auth_kind"] = "oauth" + metadata[gitLabOAuthClientIDMetadataKey] = clientID + 
metadata["username"] = strings.TrimSpace(user.Username) + if email := strings.TrimSpace(primaryGitLabEmail(user)); email != "" { + metadata["email"] = email + } + metadata["name"] = strings.TrimSpace(user.Name) + + fmt.Println("GitLab Duo authentication successful") + + return &coreauth.Auth{ + ID: fileName, + Provider: a.Provider(), + FileName: fileName, + Label: identifier, + Metadata: metadata, + }, nil +} + +func (a *GitLabAuthenticator) loginPAT(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + client := gitlabauth.NewAuthClient(cfg) + baseURL := a.resolveString(opts, gitLabBaseURLMetadataKey, gitlabauth.DefaultBaseURL) + token, err := a.requireInput(opts, gitLabPersonalAccessTokenMetadataKey, "Enter GitLab personal access token: ") + if err != nil { + return nil, err + } + + user, err := client.GetCurrentUser(ctx, baseURL, token) + if err != nil { + return nil, err + } + _, err = client.GetPersonalAccessTokenSelf(ctx, baseURL, token) + if err != nil { + return nil, err + } + direct, err := client.FetchDirectAccess(ctx, baseURL, token) + if err != nil { + return nil, err + } + + identifier := gitLabAccountIdentifier(user) + fileName := fmt.Sprintf("gitlab-%s-pat.json", sanitizeGitLabFileName(identifier)) + metadata := buildGitLabAuthMetadata(baseURL, gitLabLoginModePAT, nil, direct) + metadata["auth_kind"] = "personal_access_token" + metadata[gitLabPersonalAccessTokenMetadataKey] = strings.TrimSpace(token) + metadata["token_preview"] = maskGitLabToken(token) + metadata["username"] = strings.TrimSpace(user.Username) + if email := strings.TrimSpace(primaryGitLabEmail(user)); email != "" { + metadata["email"] = email + } + metadata["name"] = strings.TrimSpace(user.Name) + + fmt.Println("GitLab Duo PAT authentication successful") + + return &coreauth.Auth{ + ID: fileName, + Provider: a.Provider(), + FileName: fileName, + Label: identifier + " (PAT)", + Metadata: metadata, + }, nil +} + +func buildGitLabAuthMetadata(baseURL, 
mode string, tokenResp *gitlabauth.TokenResponse, direct *gitlabauth.DirectAccessResponse) map[string]any { + metadata := map[string]any{ + "type": "gitlab", + "auth_method": strings.TrimSpace(mode), + gitLabBaseURLMetadataKey: gitlabauth.NormalizeBaseURL(baseURL), + "last_refresh": time.Now().UTC().Format(time.RFC3339), + "refresh_interval_seconds": 240, + } + if tokenResp != nil { + metadata["access_token"] = strings.TrimSpace(tokenResp.AccessToken) + if refreshToken := strings.TrimSpace(tokenResp.RefreshToken); refreshToken != "" { + metadata["refresh_token"] = refreshToken + } + if tokenType := strings.TrimSpace(tokenResp.TokenType); tokenType != "" { + metadata["token_type"] = tokenType + } + if scope := strings.TrimSpace(tokenResp.Scope); scope != "" { + metadata["scope"] = scope + } + if expiry := gitlabauth.TokenExpiry(time.Now(), tokenResp); !expiry.IsZero() { + metadata["oauth_expires_at"] = expiry.Format(time.RFC3339) + } + } + mergeGitLabDirectAccessMetadata(metadata, direct) + return metadata +} + +func mergeGitLabDirectAccessMetadata(metadata map[string]any, direct *gitlabauth.DirectAccessResponse) { + if metadata == nil || direct == nil { + return + } + if base := strings.TrimSpace(direct.BaseURL); base != "" { + metadata["duo_gateway_base_url"] = base + } + if token := strings.TrimSpace(direct.Token); token != "" { + metadata["duo_gateway_token"] = token + } + if direct.ExpiresAt > 0 { + expiry := time.Unix(direct.ExpiresAt, 0).UTC() + metadata["duo_gateway_expires_at"] = expiry.Format(time.RFC3339) + now := time.Now().UTC() + if ttl := expiry.Sub(now); ttl > 0 { + interval := int(ttl.Seconds()) / 2 + switch { + case interval < 60: + interval = 60 + case interval > 240: + interval = 240 + } + metadata["refresh_interval_seconds"] = interval + } + } + if len(direct.Headers) > 0 { + headers := make(map[string]string, len(direct.Headers)) + for key, value := range direct.Headers { + key = strings.TrimSpace(key) + value = strings.TrimSpace(value) + if 
key == "" || value == "" { + continue + } + headers[key] = value + } + if len(headers) > 0 { + metadata["duo_gateway_headers"] = headers + } + } + if direct.ModelDetails != nil { + modelDetails := map[string]any{} + if provider := strings.TrimSpace(direct.ModelDetails.ModelProvider); provider != "" { + modelDetails["model_provider"] = provider + metadata["model_provider"] = provider + } + if model := strings.TrimSpace(direct.ModelDetails.ModelName); model != "" { + modelDetails["model_name"] = model + metadata["model_name"] = model + } + if len(modelDetails) > 0 { + metadata["model_details"] = modelDetails + } + } +} + +func (a *GitLabAuthenticator) resolveString(opts *LoginOptions, key, fallback string) string { + if opts != nil && opts.Metadata != nil { + if value := strings.TrimSpace(opts.Metadata[key]); value != "" { + return value + } + } + for _, envKey := range gitLabEnvKeys(key) { + if raw, ok := os.LookupEnv(envKey); ok { + if trimmed := strings.TrimSpace(raw); trimmed != "" { + return trimmed + } + } + } + if strings.TrimSpace(fallback) != "" { + return fallback + } + return "" +} + +func (a *GitLabAuthenticator) requireInput(opts *LoginOptions, key, prompt string) (string, error) { + if value := a.resolveString(opts, key, ""); value != "" { + return value, nil + } + if opts != nil && opts.Prompt != nil { + value, err := opts.Prompt(prompt) + if err != nil { + return "", err + } + if trimmed := strings.TrimSpace(value); trimmed != "" { + return trimmed, nil + } + } + return "", fmt.Errorf("gitlab auth: missing required %s", key) +} + +func (a *GitLabAuthenticator) optionalInput(opts *LoginOptions, key, prompt string) (string, error) { + if value := a.resolveString(opts, key, ""); value != "" { + return value, nil + } + if opts != nil && opts.Prompt != nil { + value, err := opts.Prompt(prompt) + if err != nil { + return "", err + } + return strings.TrimSpace(value), nil + } + return "", nil +} + +func primaryGitLabEmail(user *gitlabauth.User) string { + if 
user == nil { + return "" + } + if value := strings.TrimSpace(user.Email); value != "" { + return value + } + return strings.TrimSpace(user.PublicEmail) +} + +func gitLabAccountIdentifier(user *gitlabauth.User) string { + if user == nil { + return "user" + } + for _, value := range []string{user.Username, primaryGitLabEmail(user), user.Name} { + if trimmed := strings.TrimSpace(value); trimmed != "" { + return trimmed + } + } + return "user" +} + +func sanitizeGitLabFileName(value string) string { + value = strings.TrimSpace(strings.ToLower(value)) + if value == "" { + return "user" + } + var builder strings.Builder + lastDash := false + for _, r := range value { + switch { + case r >= 'a' && r <= 'z': + builder.WriteRune(r) + lastDash = false + case r >= '0' && r <= '9': + builder.WriteRune(r) + lastDash = false + case r == '-' || r == '_' || r == '.': + builder.WriteRune(r) + lastDash = false + default: + if !lastDash { + builder.WriteRune('-') + lastDash = true + } + } + } + result := strings.Trim(builder.String(), "-") + if result == "" { + return "user" + } + return result +} + +func maskGitLabToken(token string) string { + trimmed := strings.TrimSpace(token) + if trimmed == "" { + return "" + } + if len(trimmed) <= 8 { + return trimmed + } + return trimmed[:4] + "..." 
+ trimmed[len(trimmed)-4:] +} + +func gitLabEnvKeys(key string) []string { + switch strings.TrimSpace(key) { + case gitLabBaseURLMetadataKey: + return []string{"GITLAB_BASE_URL"} + case gitLabOAuthClientIDMetadataKey: + return []string{"GITLAB_OAUTH_CLIENT_ID"} + case gitLabOAuthClientSecretMetadataKey: + return []string{"GITLAB_OAUTH_CLIENT_SECRET"} + case gitLabPersonalAccessTokenMetadataKey: + return []string{"GITLAB_PERSONAL_ACCESS_TOKEN"} + default: + return nil + } +} diff --git a/sdk/auth/gitlab_test.go b/sdk/auth/gitlab_test.go new file mode 100644 index 0000000000..c28f693258 --- /dev/null +++ b/sdk/auth/gitlab_test.go @@ -0,0 +1,66 @@ +package auth + +import ( + "context" + "encoding/json" + "net/http" + "net/http/httptest" + "testing" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" +) + +func TestGitLabAuthenticatorLoginPAT(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + switch r.URL.Path { + case "/api/v4/user": + _ = json.NewEncoder(w).Encode(map[string]any{ + "id": 42, + "username": "duo-user", + "email": "duo@example.com", + "name": "Duo User", + }) + case "/api/v4/personal_access_tokens/self": + _ = json.NewEncoder(w).Encode(map[string]any{ + "id": 5, + "name": "CLIProxyAPI", + "scopes": []string{"api"}, + }) + case "/api/v4/code_suggestions/direct_access": + _ = json.NewEncoder(w).Encode(map[string]any{ + "base_url": "https://cloud.gitlab.example.com", + "token": "gateway-token", + "expires_at": 1710003600, + "headers": map[string]string{"X-Gitlab-Realm": "saas"}, + "model_details": map[string]any{ + "model_provider": "anthropic", + "model_name": "claude-sonnet-4-5", + }, + }) + default: + t.Fatalf("unexpected path %q", r.URL.Path) + } + })) + defer srv.Close() + + authenticator := NewGitLabAuthenticator() + record, err := authenticator.Login(context.Background(), &config.Config{}, &LoginOptions{ + Metadata: map[string]string{ + "login_mode": "pat", + "base_url": srv.URL, + 
"personal_access_token": "glpat-test-token", + }, + }) + if err != nil { + t.Fatalf("Login() error = %v", err) + } + if record.Provider != "gitlab" { + t.Fatalf("expected gitlab provider, got %q", record.Provider) + } + if got := record.Metadata["model_name"]; got != "claude-sonnet-4-5" { + t.Fatalf("expected discovered model, got %#v", got) + } + if got := record.Metadata["auth_kind"]; got != "personal_access_token" { + t.Fatalf("expected personal_access_token auth kind, got %#v", got) + } +} diff --git a/sdk/auth/iflow.go b/sdk/auth/iflow.go new file mode 100644 index 0000000000..2246e68cf4 --- /dev/null +++ b/sdk/auth/iflow.go @@ -0,0 +1,196 @@ +package auth + +import ( + "context" + "fmt" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/iflow" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +// IFlowAuthenticator implements the OAuth login flow for iFlow accounts. +type IFlowAuthenticator struct{} + +// NewIFlowAuthenticator constructs a new authenticator instance. +func NewIFlowAuthenticator() *IFlowAuthenticator { return &IFlowAuthenticator{} } + +// Provider returns the provider key for the authenticator. +func (a *IFlowAuthenticator) Provider() string { return "iflow" } + +// RefreshLead indicates how soon before expiry a refresh should be attempted. +func (a *IFlowAuthenticator) RefreshLead() *time.Duration { + return new(24 * time.Hour) +} + +// Login performs the OAuth code flow using a local callback server. 
+func (a *IFlowAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cliproxy auth: configuration is required") + } + if ctx == nil { + ctx = context.Background() + } + if opts == nil { + opts = &LoginOptions{} + } + + callbackPort := iflow.CallbackPort + if opts.CallbackPort > 0 { + callbackPort = opts.CallbackPort + } + + authSvc := iflow.NewIFlowAuth(cfg) + + oauthServer := iflow.NewOAuthServer(callbackPort) + if err := oauthServer.Start(); err != nil { + if strings.Contains(err.Error(), "already in use") { + return nil, fmt.Errorf("iflow authentication server port in use: %w", err) + } + return nil, fmt.Errorf("iflow authentication server failed: %w", err) + } + defer func() { + stopCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + if stopErr := oauthServer.Stop(stopCtx); stopErr != nil { + log.Warnf("iflow oauth server stop error: %v", stopErr) + } + }() + + state, err := misc.GenerateRandomState() + if err != nil { + return nil, fmt.Errorf("iflow auth: failed to generate state: %w", err) + } + + authURL, redirectURI := authSvc.AuthorizationURL(state, callbackPort) + + if !opts.NoBrowser { + fmt.Println("Opening browser for iFlow authentication") + if !browser.IsAvailable() { + log.Warn("No browser available; please open the URL manually") + util.PrintSSHTunnelInstructions(callbackPort) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } else if err = browser.OpenURL(authURL); err != nil { + log.Warnf("Failed to open browser automatically: %v", err) + util.PrintSSHTunnelInstructions(callbackPort) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + } else { + util.PrintSSHTunnelInstructions(callbackPort) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + + fmt.Println("Waiting for iFlow authentication callback...") + 
+ callbackCh := make(chan *iflow.OAuthResult, 1) + callbackErrCh := make(chan error, 1) + + go func() { + result, errWait := oauthServer.WaitForCallback(5 * time.Minute) + if errWait != nil { + callbackErrCh <- errWait + return + } + callbackCh <- result + }() + + var result *iflow.OAuthResult + var manualPromptTimer *time.Timer + var manualPromptC <-chan time.Time + if opts.Prompt != nil { + manualPromptTimer = time.NewTimer(15 * time.Second) + manualPromptC = manualPromptTimer.C + defer manualPromptTimer.Stop() + } + + var manualInputCh <-chan string + var manualInputErrCh <-chan error + +waitForCallback: + for { + select { + case result = <-callbackCh: + break waitForCallback + case err = <-callbackErrCh: + return nil, fmt.Errorf("iflow auth: callback wait failed: %w", err) + case <-manualPromptC: + manualPromptC = nil + if manualPromptTimer != nil { + manualPromptTimer.Stop() + } + select { + case result = <-callbackCh: + break waitForCallback + case err = <-callbackErrCh: + return nil, fmt.Errorf("iflow auth: callback wait failed: %w", err) + default: + } + manualInputCh, manualInputErrCh = misc.AsyncPrompt(opts.Prompt, "Paste the iFlow callback URL (or press Enter to keep waiting): ") + continue + case input := <-manualInputCh: + manualInputCh = nil + manualInputErrCh = nil + parsed, errParse := misc.ParseOAuthCallback(input) + if errParse != nil { + return nil, errParse + } + if parsed == nil { + continue + } + result = &iflow.OAuthResult{ + Code: parsed.Code, + State: parsed.State, + Error: parsed.Error, + } + break waitForCallback + case errManual := <-manualInputErrCh: + return nil, errManual + } + } + if result.Error != "" { + return nil, fmt.Errorf("iflow auth: provider returned error %s", result.Error) + } + if result.State != state { + return nil, fmt.Errorf("iflow auth: state mismatch") + } + + tokenData, err := authSvc.ExchangeCodeForTokens(ctx, result.Code, redirectURI) + if err != nil { + return nil, fmt.Errorf("iflow authentication failed: %w", 
err) + } + + tokenStorage := authSvc.CreateTokenStorage(tokenData) + + email := strings.TrimSpace(tokenStorage.Email) + if email == "" { + return nil, fmt.Errorf("iflow authentication failed: missing account identifier") + } + + fileName := fmt.Sprintf("iflow-%s-%d.json", email, time.Now().Unix()) + metadata := map[string]any{ + "email": email, + "api_key": tokenStorage.APIKey, + "access_token": tokenStorage.AccessToken, + "refresh_token": tokenStorage.RefreshToken, + "expired": tokenStorage.Expire, + } + + fmt.Println("iFlow authentication successful") + + return &coreauth.Auth{ + ID: fileName, + Provider: a.Provider(), + FileName: fileName, + Storage: tokenStorage, + Metadata: metadata, + Attributes: map[string]string{ + "api_key": tokenStorage.APIKey, + }, + }, nil +} diff --git a/sdk/auth/interfaces.go b/sdk/auth/interfaces.go index 64cf8ed035..e5582a0cc5 100644 --- a/sdk/auth/interfaces.go +++ b/sdk/auth/interfaces.go @@ -5,8 +5,8 @@ import ( "errors" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) var ErrRefreshNotSupported = errors.New("cliproxy auth: refresh not supported") diff --git a/sdk/auth/joycode.go b/sdk/auth/joycode.go new file mode 100644 index 0000000000..65814d723d --- /dev/null +++ b/sdk/auth/joycode.go @@ -0,0 +1,177 @@ +package auth + +import ( + "context" + "crypto/rand" + "fmt" + "net" + "net/http" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/joycode" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +type JoyCodeAuthenticator struct{} + +func 
NewJoyCodeAuthenticator() Authenticator { return &JoyCodeAuthenticator{} } + +func (JoyCodeAuthenticator) Provider() string { return "joycode" } + +func (JoyCodeAuthenticator) RefreshLead() *time.Duration { return nil } + +type joycodeCallbackResult struct { + PTKey string + Error string +} + +func (a JoyCodeAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cliproxy auth: configuration is required") + } + if ctx == nil { + ctx = context.Background() + } + if opts == nil { + opts = &LoginOptions{} + } + + listener, err := net.Listen("tcp", "127.0.0.1:0") + if err != nil { + return nil, fmt.Errorf("joycode: failed to find free port: %w", err) + } + port := listener.Addr().(*net.TCPAddr).Port + + authKey := generateJoyCodeAuthKey() + cbChan := make(chan joycodeCallbackResult, 1) + + mux := http.NewServeMux() + callbackHandler := func(w http.ResponseWriter, r *http.Request) { + receivedAuthKey := r.URL.Query().Get("authKey") + if receivedAuthKey != "" && receivedAuthKey != authKey { + cbChan <- joycodeCallbackResult{Error: "authKey mismatch"} + w.WriteHeader(http.StatusForbidden) + return + } + + ptKey := r.URL.Query().Get("pt_key") + if ptKey == "" { + ptKey = r.URL.Query().Get("ptKey") + } + + if ptKey != "" { + cbChan <- joycodeCallbackResult{PTKey: ptKey} + } else { + cbChan <- joycodeCallbackResult{Error: "missing pt_key"} + } + + w.Header().Set("Content-Type", "text/html; charset=utf-8") + _, _ = w.Write([]byte(`<!DOCTYPE html><html><head><meta charset="utf-8"><title>JoyCode Login</title></head>` +
+			`<body style="font-family: sans-serif; text-align: center; margin-top: 80px;">` +
+			`<div style="font-size: 24px;">✓ Authorization Successful</div>` +
+			`<p>Credential captured, syncing. Please return to the command line.</p>` +
+			`</body></html>
`)) + } + mux.HandleFunc("/", callbackHandler) + mux.HandleFunc("/joycode/callback", callbackHandler) + + srv := &http.Server{Handler: mux} + go func() { + if errServe := srv.Serve(listener); errServe != nil && !strings.Contains(errServe.Error(), "Server closed") { + log.Warnf("joycode callback server error: %v", errServe) + } + }() + defer func() { + shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + _ = srv.Shutdown(shutdownCtx) + }() + + authURL := fmt.Sprintf("https://joycode.jd.com/login/?ideAppName=JoyCode&fromIde=ide&redirect=0&authPort=%d&authKey=%s", port, authKey) + + if !opts.NoBrowser { + fmt.Println("Opening browser for JoyCode authentication") + if !browser.IsAvailable() { + log.Warn("No browser available; please open the URL manually") + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } else if errOpen := browser.OpenURL(authURL); errOpen != nil { + log.Warnf("Failed to open browser automatically: %v", errOpen) + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + } else { + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + + fmt.Println("Waiting for JoyCode authentication callback...") + + var cbRes joycodeCallbackResult + timeoutTimer := time.NewTimer(5 * time.Minute) + defer timeoutTimer.Stop() + + select { + case cbRes = <-cbChan: + case <-timeoutTimer.C: + return nil, fmt.Errorf("joycode: authentication timed out") + } + + if cbRes.Error != "" { + return nil, fmt.Errorf("joycode: authentication failed: %s", cbRes.Error) + } + if cbRes.PTKey == "" { + return nil, fmt.Errorf("joycode: missing pt_key in callback") + } + + fmt.Println("Callback received, verifying token...") + + verifyCtx, verifyCancel := context.WithTimeout(ctx, 30*time.Second) + defer verifyCancel() + + jcAuth 
:= joycode.NewJoyCodeAuth(nil) + tokenData, err := jcAuth.VerifyToken(verifyCtx, cbRes.PTKey) + if err != nil { + fmt.Printf("Token verification failed: %v\n", err) + fmt.Println("Saving raw token for manual use") + tokenData = &joycode.JoyCodeTokenData{ + PTKey: cbRes.PTKey, + LoginType: "IDE", + } + } + + label := tokenData.UserID + if label == "" { + label = "joycode" + } + + fmt.Println("JoyCode authentication successful") + + return &coreauth.Auth{ + ID: fmt.Sprintf("joycode-%s.json", tokenData.UserID), + Provider: "joycode", + FileName: fmt.Sprintf("joycode-%s.json", tokenData.UserID), + Label: label, + Metadata: map[string]any{ + "type": "joycode", + "ptKey": tokenData.PTKey, + "userId": tokenData.UserID, + "tenant": tokenData.Tenant, + "orgFullName": tokenData.OrgFullName, + "loginType": tokenData.LoginType, + }, + }, nil +} + +func generateJoyCodeAuthKey() string { + b := make([]byte, 16) + rand.Read(b) + return fmt.Sprintf("%x", b) +} diff --git a/sdk/auth/kilo.go b/sdk/auth/kilo.go new file mode 100644 index 0000000000..08da8678b4 --- /dev/null +++ b/sdk/auth/kilo.go @@ -0,0 +1,121 @@ +package auth + +import ( + "context" + "fmt" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kilo" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +// KiloAuthenticator implements the login flow for Kilo AI accounts. +type KiloAuthenticator struct{} + +// NewKiloAuthenticator constructs a Kilo authenticator. +func NewKiloAuthenticator() *KiloAuthenticator { + return &KiloAuthenticator{} +} + +func (a *KiloAuthenticator) Provider() string { + return "kilo" +} + +func (a *KiloAuthenticator) RefreshLead() *time.Duration { + return nil +} + +// Login manages the device flow authentication for Kilo AI. 
+func (a *KiloAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cliproxy auth: configuration is required") + } + if ctx == nil { + ctx = context.Background() + } + if opts == nil { + opts = &LoginOptions{} + } + + kilocodeAuth := kilo.NewKiloAuth() + + fmt.Println("Initiating Kilo device authentication...") + resp, err := kilocodeAuth.InitiateDeviceFlow(ctx) + if err != nil { + return nil, fmt.Errorf("failed to initiate device flow: %w", err) + } + + fmt.Printf("Please visit: %s\n", resp.VerificationURL) + fmt.Printf("And enter code: %s\n", resp.Code) + + fmt.Println("Waiting for authorization...") + status, err := kilocodeAuth.PollForToken(ctx, resp.Code) + if err != nil { + return nil, fmt.Errorf("authentication failed: %w", err) + } + + fmt.Printf("Authentication successful for %s\n", status.UserEmail) + + profile, err := kilocodeAuth.GetProfile(ctx, status.Token) + if err != nil { + return nil, fmt.Errorf("failed to fetch profile: %w", err) + } + + var orgID string + if len(profile.Orgs) > 1 { + fmt.Println("Multiple organizations found. 
Please select one:") + for i, org := range profile.Orgs { + fmt.Printf("[%d] %s (%s)\n", i+1, org.Name, org.ID) + } + + if opts.Prompt != nil { + input, err := opts.Prompt("Enter the number of the organization: ") + if err != nil { + return nil, err + } + var choice int + _, err = fmt.Sscan(input, &choice) + if err == nil && choice > 0 && choice <= len(profile.Orgs) { + orgID = profile.Orgs[choice-1].ID + } else { + orgID = profile.Orgs[0].ID + fmt.Printf("Invalid choice, defaulting to %s\n", profile.Orgs[0].Name) + } + } else { + orgID = profile.Orgs[0].ID + fmt.Printf("Non-interactive mode, defaulting to organization: %s\n", profile.Orgs[0].Name) + } + } else if len(profile.Orgs) == 1 { + orgID = profile.Orgs[0].ID + } + + defaults, err := kilocodeAuth.GetDefaults(ctx, status.Token, orgID) + if err != nil { + fmt.Printf("Warning: failed to fetch defaults: %v\n", err) + defaults = &kilo.Defaults{} + } + + ts := &kilo.KiloTokenStorage{ + Token: status.Token, + OrganizationID: orgID, + Model: defaults.Model, + Email: status.UserEmail, + Type: "kilo", + } + + fileName := kilo.CredentialFileName(status.UserEmail) + metadata := map[string]any{ + "email": status.UserEmail, + "organization_id": orgID, + "model": defaults.Model, + } + + return &coreauth.Auth{ + ID: fileName, + Provider: a.Provider(), + FileName: fileName, + Storage: ts, + Metadata: metadata, + }, nil +} diff --git a/sdk/auth/kimi.go b/sdk/auth/kimi.go index 12ae101e7d..4dbff1e87e 100644 --- a/sdk/auth/kimi.go +++ b/sdk/auth/kimi.go @@ -6,10 +6,10 @@ import ( "strings" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/kimi" - "github.com/router-for-me/CLIProxyAPI/v6/internal/browser" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kimi" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" 
+ coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" log "github.com/sirupsen/logrus" ) diff --git a/sdk/auth/kiro.go b/sdk/auth/kiro.go new file mode 100644 index 0000000000..5e2f85083c --- /dev/null +++ b/sdk/auth/kiro.go @@ -0,0 +1,458 @@ +package auth + +import ( + "context" + "encoding/json" + "fmt" + "os" + "path/filepath" + "strings" + "time" + + kiroauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kiro" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" +) + +// extractKiroIdentifier extracts a meaningful identifier for file naming. +// Returns account name if provided, otherwise profile ARN ID, then client ID. +// All extracted values are sanitized to prevent path injection attacks. +func extractKiroIdentifier(accountName, profileArn, clientID string) string { + // Priority 1: Use account name if provided + if accountName != "" { + return kiroauth.SanitizeEmailForFilename(accountName) + } + + // Priority 2: Use profile ARN ID part (sanitized to prevent path injection) + if profileArn != "" { + parts := strings.Split(profileArn, "/") + if len(parts) >= 2 { + // Sanitize the ARN component to prevent path traversal + return kiroauth.SanitizeEmailForFilename(parts[len(parts)-1]) + } + } + + // Priority 3: Use client ID (for IDC auth without email/profileArn) + if clientID != "" { + return kiroauth.SanitizeEmailForFilename(clientID) + } + + // Fallback: timestamp + return fmt.Sprintf("%d", time.Now().UnixNano()%100000) +} + +// KiroAuthenticator implements OAuth authentication for Kiro with Google login. +type KiroAuthenticator struct{} + +// NewKiroAuthenticator constructs a Kiro authenticator. +func NewKiroAuthenticator() *KiroAuthenticator { + return &KiroAuthenticator{} +} + +// Provider returns the provider key for the authenticator. 
+func (a *KiroAuthenticator) Provider() string { + return "kiro" +} + +// RefreshLead indicates how soon before expiry a refresh should be attempted. +// Set to 20 minutes for proactive refresh before token expiry. +func (a *KiroAuthenticator) RefreshLead() *time.Duration { + d := 20 * time.Minute + return &d +} + +// createAuthRecord creates an auth record from token data. +func (a *KiroAuthenticator) createAuthRecord(tokenData *kiroauth.KiroTokenData, source string) (*coreauth.Auth, error) { + // Parse expires_at + expiresAt, err := time.Parse(time.RFC3339, tokenData.ExpiresAt) + if err != nil { + expiresAt = time.Now().Add(1 * time.Hour) + } + + // Determine label and identifier based on auth method + // Generate sequence number for uniqueness + seq := time.Now().UnixNano() % 100000 + + var label, idPart string + if tokenData.AuthMethod == "idc" { + label = "kiro-idc" + // Priority: email > startUrl identifier > sequence only + // Email is unique, so no sequence needed when email is available + if tokenData.Email != "" { + idPart = kiroauth.SanitizeEmailForFilename(tokenData.Email) + } else if tokenData.StartURL != "" { + identifier := kiroauth.ExtractIDCIdentifier(tokenData.StartURL) + if identifier != "" { + idPart = fmt.Sprintf("%s-%05d", identifier, seq) + } else { + idPart = fmt.Sprintf("%05d", seq) + } + } else { + idPart = fmt.Sprintf("%05d", seq) + } + } else { + label = fmt.Sprintf("kiro-%s", source) + idPart = extractKiroIdentifier(tokenData.Email, tokenData.ProfileArn, tokenData.ClientID) + } + + now := time.Now() + fileName := fmt.Sprintf("%s-%s.json", label, idPart) + + metadata := map[string]any{ + "type": "kiro", + "access_token": tokenData.AccessToken, + "refresh_token": tokenData.RefreshToken, + "profile_arn": tokenData.ProfileArn, + "expires_at": tokenData.ExpiresAt, + "auth_method": tokenData.AuthMethod, + "provider": tokenData.Provider, + "client_id": tokenData.ClientID, + "client_secret": tokenData.ClientSecret, + "email": tokenData.Email, + 
} + + // Add IDC-specific fields if present + if tokenData.StartURL != "" { + metadata["start_url"] = tokenData.StartURL + } + if tokenData.Region != "" { + metadata["region"] = tokenData.Region + } + + attributes := map[string]string{ + "profile_arn": tokenData.ProfileArn, + "source": source, + "email": tokenData.Email, + } + + // Add IDC-specific attributes if present + if tokenData.AuthMethod == "idc" { + attributes["source"] = "aws-idc" + if tokenData.StartURL != "" { + attributes["start_url"] = tokenData.StartURL + } + if tokenData.Region != "" { + attributes["region"] = tokenData.Region + } + } + + record := &coreauth.Auth{ + ID: fileName, + Provider: "kiro", + FileName: fileName, + Label: label, + Status: coreauth.StatusActive, + CreatedAt: now, + UpdatedAt: now, + Metadata: metadata, + Attributes: attributes, + // NextRefreshAfter: 20 minutes before expiry + NextRefreshAfter: expiresAt.Add(-20 * time.Minute), + } + + if tokenData.Email != "" { + fmt.Printf("\n✓ Kiro authentication completed successfully! (Account: %s)\n", tokenData.Email) + } else { + fmt.Println("\n✓ Kiro authentication completed successfully!") + } + + return record, nil +} + +// Login performs OAuth login for Kiro with AWS (Builder ID or IDC). +// This shows a method selection prompt and handles both flows. 
+func (a *KiroAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) {
+	if cfg == nil {
+		return nil, fmt.Errorf("kiro auth: configuration is required")
+	}
+
+	// Extract IDC options from metadata if present
+	var idcOpts *kiroauth.IDCLoginOptions
+	if opts != nil && opts.Metadata != nil {
+		if startURL := opts.Metadata["start-url"]; startURL != "" {
+			idcOpts = &kiroauth.IDCLoginOptions{
+				StartURL:      startURL,
+				Region:        opts.Metadata["region"],
+				UseDeviceCode: opts.Metadata["flow"] == "device",
+			}
+		}
+	}
+
+	// Use the unified method selection flow (Builder ID or IDC)
+	ssoClient := kiroauth.NewSSOOIDCClient(cfg)
+	tokenData, err := ssoClient.LoginWithMethodSelection(ctx, idcOpts)
+	if err != nil {
+		return nil, fmt.Errorf("login failed: %w", err)
+	}
+
+	return a.createAuthRecord(tokenData, "aws")
+}
+
+// LoginWithAuthCode performs OAuth login for Kiro with AWS Builder ID using the authorization code flow.
+// This provides a better UX than the device code flow, as it uses an automatic browser callback.
+func (a *KiroAuthenticator) LoginWithAuthCode(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("kiro auth: configuration is required") + } + + oauth := kiroauth.NewKiroOAuth(cfg) + + // Use AWS Builder ID authorization code flow + tokenData, err := oauth.LoginWithBuilderIDAuthCode(ctx) + if err != nil { + return nil, fmt.Errorf("login failed: %w", err) + } + + // Parse expires_at + expiresAt, err := time.Parse(time.RFC3339, tokenData.ExpiresAt) + if err != nil { + expiresAt = time.Now().Add(1 * time.Hour) + } + + // Extract identifier for file naming + idPart := extractKiroIdentifier(tokenData.Email, tokenData.ProfileArn, tokenData.ClientID) + + now := time.Now() + fileName := fmt.Sprintf("kiro-aws-%s.json", idPart) + + record := &coreauth.Auth{ + ID: fileName, + Provider: "kiro", + FileName: fileName, + Label: "kiro-aws", + Status: coreauth.StatusActive, + CreatedAt: now, + UpdatedAt: now, + Metadata: map[string]any{ + "type": "kiro", + "access_token": tokenData.AccessToken, + "refresh_token": tokenData.RefreshToken, + "profile_arn": tokenData.ProfileArn, + "expires_at": tokenData.ExpiresAt, + "auth_method": tokenData.AuthMethod, + "provider": tokenData.Provider, + "client_id": tokenData.ClientID, + "client_secret": tokenData.ClientSecret, + "email": tokenData.Email, + }, + Attributes: map[string]string{ + "profile_arn": tokenData.ProfileArn, + "source": "aws-builder-id-authcode", + "email": tokenData.Email, + }, + // NextRefreshAfter: 20 minutes before expiry + NextRefreshAfter: expiresAt.Add(-20 * time.Minute), + } + + if tokenData.Email != "" { + fmt.Printf("\n✓ Kiro authentication completed successfully! (Account: %s)\n", tokenData.Email) + } else { + fmt.Println("\n✓ Kiro authentication completed successfully!") + } + + return record, nil +} + +// LoginWithGoogle performs OAuth login for Kiro with Google. 
+// NOTE: Google login is not available for third-party applications due to AWS Cognito restrictions. +// Please use AWS Builder ID or import your token from Kiro IDE. +func (a *KiroAuthenticator) LoginWithGoogle(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + return nil, fmt.Errorf("Google login is not available for third-party applications due to AWS Cognito restrictions.\n\nAlternatives:\n 1. Use AWS Builder ID: cliproxy kiro --builder-id\n 2. Import token from Kiro IDE: cliproxy kiro --import\n\nTo get a token from Kiro IDE:\n 1. Open Kiro IDE and login with Google\n 2. Find: ~/.kiro/kiro-auth-token.json\n 3. Run: cliproxy kiro --import") +} + +// LoginWithGitHub performs OAuth login for Kiro with GitHub. +// NOTE: GitHub login is not available for third-party applications due to AWS Cognito restrictions. +// Please use AWS Builder ID or import your token from Kiro IDE. +func (a *KiroAuthenticator) LoginWithGitHub(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + return nil, fmt.Errorf("GitHub login is not available for third-party applications due to AWS Cognito restrictions.\n\nAlternatives:\n 1. Use AWS Builder ID: cliproxy kiro --builder-id\n 2. Import token from Kiro IDE: cliproxy kiro --import\n\nTo get a token from Kiro IDE:\n 1. Open Kiro IDE and login with GitHub\n 2. Find: ~/.kiro/kiro-auth-token.json\n 3. Run: cliproxy kiro --import") +} + +// ImportFromKiroIDE imports token from Kiro IDE's token file. 
+func (a *KiroAuthenticator) ImportFromKiroIDE(ctx context.Context, cfg *config.Config) (*coreauth.Auth, error) { + tokenData, err := kiroauth.LoadKiroIDEToken() + if err != nil { + return nil, fmt.Errorf("failed to load Kiro IDE token: %w", err) + } + + // Parse expires_at + expiresAt, err := time.Parse(time.RFC3339, tokenData.ExpiresAt) + if err != nil { + expiresAt = time.Now().Add(1 * time.Hour) + } + + // Extract email from JWT if not already set (for imported tokens) + if tokenData.Email == "" { + tokenData.Email = kiroauth.ExtractEmailFromJWT(tokenData.AccessToken) + } + + // Extract identifier for file naming + idPart := extractKiroIdentifier(tokenData.Email, tokenData.ProfileArn, tokenData.ClientID) + // Sanitize provider to prevent path traversal (defense-in-depth) + provider := kiroauth.SanitizeEmailForFilename(strings.ToLower(strings.TrimSpace(tokenData.Provider))) + if provider == "" { + provider = "imported" // Fallback for legacy tokens without provider + } + + now := time.Now() + fileName := fmt.Sprintf("kiro-%s-%s.json", provider, idPart) + + record := &coreauth.Auth{ + ID: fileName, + Provider: "kiro", + FileName: fileName, + Label: fmt.Sprintf("kiro-%s", provider), + Status: coreauth.StatusActive, + CreatedAt: now, + UpdatedAt: now, + Metadata: map[string]any{ + "type": "kiro", + "access_token": tokenData.AccessToken, + "refresh_token": tokenData.RefreshToken, + "profile_arn": tokenData.ProfileArn, + "expires_at": tokenData.ExpiresAt, + "auth_method": tokenData.AuthMethod, + "provider": tokenData.Provider, + "client_id": tokenData.ClientID, + "client_secret": tokenData.ClientSecret, + "client_id_hash": tokenData.ClientIDHash, + "email": tokenData.Email, + "region": tokenData.Region, + "start_url": tokenData.StartURL, + }, + Attributes: map[string]string{ + "profile_arn": tokenData.ProfileArn, + "source": "kiro-ide-import", + "email": tokenData.Email, + "region": tokenData.Region, + }, + // NextRefreshAfter: 20 minutes before expiry + 
NextRefreshAfter: expiresAt.Add(-20 * time.Minute), + } + + // Display the email if extracted + if tokenData.Email != "" { + fmt.Printf("\n✓ Imported Kiro token from IDE (Provider: %s, Account: %s)\n", tokenData.Provider, tokenData.Email) + } else { + fmt.Printf("\n✓ Imported Kiro token from IDE (Provider: %s)\n", tokenData.Provider) + } + + return record, nil +} + +// Refresh refreshes an expired Kiro token using AWS SSO OIDC. +func (a *KiroAuthenticator) Refresh(ctx context.Context, cfg *config.Config, auth *coreauth.Auth) (*coreauth.Auth, error) { + if auth == nil || auth.Metadata == nil { + return nil, fmt.Errorf("invalid auth record") + } + + refreshToken, ok := auth.Metadata["refresh_token"].(string) + if !ok || refreshToken == "" { + return nil, fmt.Errorf("refresh token not found") + } + + clientID, _ := auth.Metadata["client_id"].(string) + clientSecret, _ := auth.Metadata["client_secret"].(string) + clientIDHash, _ := auth.Metadata["client_id_hash"].(string) + authMethod, _ := auth.Metadata["auth_method"].(string) + startURL, _ := auth.Metadata["start_url"].(string) + region, _ := auth.Metadata["region"].(string) + + // For Enterprise Kiro IDE (IDC auth), try to load clientId/clientSecret from device registration + // if they are missing from metadata. This handles the case where token was imported without + // clientId/clientSecret but has clientIdHash. 
+ if (clientID == "" || clientSecret == "") && clientIDHash != "" { + if loadedClientID, loadedClientSecret, err := loadDeviceRegistrationCredentials(clientIDHash); err == nil { + clientID = loadedClientID + clientSecret = loadedClientSecret + } + } + + var tokenData *kiroauth.KiroTokenData + var err error + + ssoClient := kiroauth.NewSSOOIDCClient(cfg) + + // Use SSO OIDC refresh for AWS Builder ID or IDC, otherwise use Kiro's OAuth refresh endpoint + switch { + case clientID != "" && clientSecret != "" && authMethod == "idc" && region != "": + // IDC refresh with region-specific endpoint + tokenData, err = ssoClient.RefreshTokenWithRegion(ctx, clientID, clientSecret, refreshToken, region, startURL) + case clientID != "" && clientSecret != "" && (authMethod == "builder-id" || authMethod == "idc"): + // Builder ID or IDC refresh with default endpoint (us-east-1) + tokenData, err = ssoClient.RefreshToken(ctx, clientID, clientSecret, refreshToken) + default: + // Fallback to Kiro's refresh endpoint (for social auth: Google/GitHub) + oauth := kiroauth.NewKiroOAuth(cfg) + tokenData, err = oauth.RefreshToken(ctx, refreshToken) + } + + if err != nil { + return nil, fmt.Errorf("token refresh failed: %w", err) + } + + // Parse expires_at + expiresAt, err := time.Parse(time.RFC3339, tokenData.ExpiresAt) + if err != nil { + expiresAt = time.Now().Add(1 * time.Hour) + } + + // Clone auth to avoid mutating the input parameter + updated := auth.Clone() + now := time.Now() + updated.UpdatedAt = now + updated.LastRefreshedAt = now + updated.Metadata["access_token"] = tokenData.AccessToken + updated.Metadata["refresh_token"] = tokenData.RefreshToken + updated.Metadata["expires_at"] = tokenData.ExpiresAt + updated.Metadata["last_refresh"] = now.Format(time.RFC3339) // For double-check optimization + // Store clientId/clientSecret if they were loaded from device registration + if clientID != "" && updated.Metadata["client_id"] == nil { + updated.Metadata["client_id"] = clientID + } 
+ if clientSecret != "" && updated.Metadata["client_secret"] == nil { + updated.Metadata["client_secret"] = clientSecret + } + // NextRefreshAfter: 20 minutes before expiry + updated.NextRefreshAfter = expiresAt.Add(-20 * time.Minute) + + return updated, nil +} + +// loadDeviceRegistrationCredentials loads clientId and clientSecret from device registration file. +// This is used when refreshing tokens that were imported without clientId/clientSecret. +func loadDeviceRegistrationCredentials(clientIDHash string) (clientID, clientSecret string, err error) { + if clientIDHash == "" { + return "", "", fmt.Errorf("clientIdHash is empty") + } + + // Sanitize clientIdHash to prevent path traversal + if strings.Contains(clientIDHash, "/") || strings.Contains(clientIDHash, "\\") || strings.Contains(clientIDHash, "..") { + return "", "", fmt.Errorf("invalid clientIdHash: contains path separator") + } + + homeDir, err := os.UserHomeDir() + if err != nil { + return "", "", fmt.Errorf("failed to get home directory: %w", err) + } + + deviceRegPath := filepath.Join(homeDir, ".aws", "sso", "cache", clientIDHash+".json") + data, err := os.ReadFile(deviceRegPath) + if err != nil { + return "", "", fmt.Errorf("failed to read device registration file: %w", err) + } + + var deviceReg struct { + ClientID string `json:"clientId"` + ClientSecret string `json:"clientSecret"` + } + + if err := json.Unmarshal(data, &deviceReg); err != nil { + return "", "", fmt.Errorf("failed to parse device registration: %w", err) + } + + if deviceReg.ClientID == "" || deviceReg.ClientSecret == "" { + return "", "", fmt.Errorf("device registration missing clientId or clientSecret") + } + + return deviceReg.ClientID, deviceReg.ClientSecret, nil +} diff --git a/sdk/auth/manager.go b/sdk/auth/manager.go index c6469a7d19..79e6e71931 100644 --- a/sdk/auth/manager.go +++ b/sdk/auth/manager.go @@ -3,9 +3,10 @@ package auth import ( "context" "fmt" + "sort" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" 
- coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) // Manager aggregates authenticators and coordinates persistence via a token store. @@ -74,3 +75,155 @@ func (m *Manager) Login(ctx context.Context, provider string, cfg *config.Config } return record, savedPath, nil } + +type ProviderInfo struct { + Key string `json:"key"` + DisplayName string `json:"display_name"` + FlowType string `json:"flow_type"` + AuthURLEndpoint string `json:"auth_url_endpoint"` + Aliases []string `json:"aliases,omitempty"` + Configured bool `json:"configured"` +} + +var providerMetadata = map[string]ProviderInfo{ + "claude": { + Key: "claude", + DisplayName: "Claude (Anthropic)", + FlowType: "authorization_code_pkce", + AuthURLEndpoint: "/anthropic-auth-url", + Aliases: []string{"anthropic"}, + }, + "codex": { + Key: "codex", + DisplayName: "Codex (OpenAI)", + FlowType: "authorization_code_pkce", + AuthURLEndpoint: "/codex-auth-url", + Aliases: []string{"openai"}, + }, + "gemini": { + Key: "gemini", + DisplayName: "Gemini CLI", + FlowType: "google_oauth2", + AuthURLEndpoint: "/gemini-cli-auth-url", + Aliases: []string{"google"}, + }, + "antigravity": { + Key: "antigravity", + DisplayName: "Antigravity", + FlowType: "google_oauth2", + AuthURLEndpoint: "/antigravity-auth-url", + Aliases: []string{"anti-gravity"}, + }, + "kimi": { + Key: "kimi", + DisplayName: "Kimi", + FlowType: "device_code", + AuthURLEndpoint: "/kimi-auth-url", + }, + "kiro": { + Key: "kiro", + DisplayName: "Kiro", + FlowType: "aws_builder_id", + AuthURLEndpoint: "/kiro-auth-url", + }, + "github-copilot": { + Key: "github-copilot", + DisplayName: "GitHub Copilot", + FlowType: "device_code", + AuthURLEndpoint: "/github-auth-url", + Aliases: []string{"github"}, + }, + "gitlab": { + Key: "gitlab", + DisplayName: "GitLab", + FlowType: "authorization_code_pkce", + 
AuthURLEndpoint: "/gitlab-auth-url", + }, + "codebuddy": { + Key: "codebuddy", + DisplayName: "CodeBuddy", + FlowType: "token", + AuthURLEndpoint: "", + }, + "codebuddy-ai": { + Key: "codebuddy-ai", + DisplayName: "CodeBuddy AI", + FlowType: "token", + AuthURLEndpoint: "", + }, + "cursor": { + Key: "cursor", + DisplayName: "Cursor", + FlowType: "pkce_polling", + AuthURLEndpoint: "/cursor-auth-url", + }, + "qoder": { + Key: "qoder", + DisplayName: "Qoder", + FlowType: "pkce_custom_uri", + AuthURLEndpoint: "/qoder-auth-url", + }, + "codearts": { + Key: "codearts", + DisplayName: "CodeArts", + FlowType: "web_oauth", + AuthURLEndpoint: "", + }, + "joycode": { + Key: "joycode", + DisplayName: "JoyCode", + FlowType: "web_oauth", + AuthURLEndpoint: "", + }, + "kilo": { + Key: "kilo", + DisplayName: "Kilo", + FlowType: "device_code", + AuthURLEndpoint: "/kilo-auth-url", + }, +} + +func (m *Manager) ListProviders() []ProviderInfo { + configuredKeys := make(map[string]bool) + if m.authenticators != nil { + for key := range m.authenticators { + configuredKeys[key] = true + } + } + + seen := make(map[string]bool) + result := make([]ProviderInfo, 0, len(providerMetadata)+len(configuredKeys)) + + for key, info := range providerMetadata { + info.Configured = configuredKeys[key] + result = append(result, info) + seen[key] = true + } + + for key := range configuredKeys { + if !seen[key] { + result = append(result, ProviderInfo{ + Key: key, + DisplayName: key, + FlowType: "unknown", + Configured: true, + }) + } + } + + sort.Slice(result, func(i, j int) bool { return result[i].Key < result[j].Key }) + return result +} + +// SaveAuth persists an auth record using the configured store. 
+func (m *Manager) SaveAuth(record *coreauth.Auth, cfg *config.Config) (string, error) { + if m.store == nil { + return "", fmt.Errorf("no store configured") + } + if cfg != nil { + if dirSetter, ok := m.store.(interface{ SetBaseDir(string) }); ok { + dirSetter.SetBaseDir(cfg.AuthDir) + } + } + return m.store.Save(context.Background(), record) +} diff --git a/sdk/auth/qoder.go b/sdk/auth/qoder.go new file mode 100644 index 0000000000..c3cb698f68 --- /dev/null +++ b/sdk/auth/qoder.go @@ -0,0 +1,370 @@ +package auth + +import ( + "context" + "crypto/sha256" + "fmt" + "net" + "net/http" + "net/url" + "strings" + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/qoder" + "github.com/router-for-me/CLIProxyAPI/v7/internal/browser" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/misc" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + log "github.com/sirupsen/logrus" +) + +// qoderRefreshLead is the duration before token expiry when refresh should occur. +var qoderRefreshLead = 5 * time.Minute + +// QoderAuthenticator implements the PKCE + URI-scheme login for the Qoder provider. +type QoderAuthenticator struct{} + +// NewQoderAuthenticator constructs a new authenticator instance. +func NewQoderAuthenticator() Authenticator { return &QoderAuthenticator{} } + +// Provider returns the provider key for qoder. +func (QoderAuthenticator) Provider() string { return "qoder" } + +// RefreshLead instructs the manager to refresh five minutes before expiry. +func (QoderAuthenticator) RefreshLead() *time.Duration { + return &qoderRefreshLead +} + +// qoderCallbackResult holds the parsed callback data. 
+type qoderCallbackResult struct { + TokenString string + AuthField string + Error string +} + +// Login launches a local HTTP server to catch the qoder:// URI callback, +// opens the browser for Qoder login, and waits for the token. +func (a QoderAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) { + if cfg == nil { + return nil, fmt.Errorf("cliproxy auth: configuration is required") + } + if ctx == nil { + ctx = context.Background() + } + if opts == nil { + opts = &LoginOptions{} + } + + callbackPort := qoder.CallbackPort + if opts.CallbackPort > 0 { + callbackPort = opts.CallbackPort + } + + // Generate PKCE + machine ID + nonce, challenge, verifier, err := qoder.GeneratePKCE() + if err != nil { + return nil, fmt.Errorf("qoder: %w", err) + } + + machineID := qoder.GenerateMachineID("cliproxy", "00:00:00:00:00:00", "server", "x86_64") + _ = verifier // stored in metadata for potential future token exchange + _ = nonce + + // Start local callback server + srv, port, cbChan, errServer := startQoderCallbackServer(callbackPort) + if errServer != nil { + return nil, fmt.Errorf("qoder: failed to start callback server: %w", errServer) + } + defer func() { + shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + _ = srv.Shutdown(shutdownCtx) + }() + + _ = port // port is used for the callback URL + + // Register qoder:// URI protocol handler (Windows: registry + VBS, other: no-op) + cleanupURI := qoder.RegisterURIHandler(port) + defer cleanupURI() + + authURL := qoder.BuildAuthURL(nonce, challenge, machineID) + + if !opts.NoBrowser { + fmt.Println("Opening browser for Qoder authentication") + if !browser.IsAvailable() { + log.Warn("No browser available; please open the URL manually") + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } else if errOpen := browser.OpenURL(authURL); errOpen != nil { + 
log.Warnf("Failed to open browser automatically: %v", errOpen) + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + } else { + util.PrintSSHTunnelInstructions(port) + fmt.Printf("Visit the following URL to continue authentication:\n%s\n", authURL) + } + + fmt.Println("Waiting for Qoder authentication callback...") + + var cbRes qoderCallbackResult + timeoutTimer := time.NewTimer(5 * time.Minute) + defer timeoutTimer.Stop() + + var manualPromptTimer *time.Timer + var manualPromptC <-chan time.Time + if opts.Prompt != nil { + manualPromptTimer = time.NewTimer(15 * time.Second) + manualPromptC = manualPromptTimer.C + defer manualPromptTimer.Stop() + } + + var manualInputCh <-chan string + var manualInputErrCh <-chan error + +waitForCallback: + for { + select { + case res := <-cbChan: + cbRes = res + break waitForCallback + case <-manualPromptC: + manualPromptC = nil + if manualPromptTimer != nil { + manualPromptTimer.Stop() + } + select { + case res := <-cbChan: + cbRes = res + break waitForCallback + default: + } + manualInputCh, manualInputErrCh = misc.AsyncPrompt(opts.Prompt, "Paste the Qoder callback URL (or press Enter to keep waiting): ") + continue + case input := <-manualInputCh: + manualInputCh = nil + manualInputErrCh = nil + input = strings.TrimSpace(input) + if input == "" { + continue + } + // Try to parse as qoder:// URL + if strings.Contains(input, "tokenString=") || strings.Contains(input, "token=") { + // Extract query string portion + qs := input + if idx := strings.Index(input, "?"); idx >= 0 { + qs = input[idx+1:] + } + parsed, errParse := url.ParseQuery(qs) + if errParse == nil { + token := "" + for _, k := range []string{"tokenString", "token"} { + if v := parsed.Get(k); v != "" { + token = v + break + } + } + if token != "" { + cbRes = qoderCallbackResult{ + TokenString: token, + AuthField: parsed.Get("auth"), + } + break waitForCallback + } + } + } + continue + case 
errManual := <-manualInputErrCh: + return nil, errManual + case <-timeoutTimer.C: + return nil, fmt.Errorf("qoder: authentication timed out") + } + } + + if cbRes.Error != "" { + return nil, fmt.Errorf("qoder: authentication failed: %s", cbRes.Error) + } + if cbRes.TokenString == "" { + return nil, fmt.Errorf("qoder: missing token in callback") + } + + fmt.Printf("Token received: %s...\n", cbRes.TokenString[:min(40, len(cbRes.TokenString))]) + + // Decode auth field to get UID + uid := "" + name := "" + email := "" + if cbRes.AuthField != "" { + authInfo, errDecode := qoder.DecodeAuthField(cbRes.AuthField) + if errDecode != nil { + log.Warnf("qoder: failed to decode auth field: %v", errDecode) + } else { + if v, ok := authInfo["uid"].(string); ok { + uid = v + } + if v, ok := authInfo["name"].(string); ok { + name = v + } + } + } + + // If UID not found via auth field, try the user status endpoint + if uid == "" { + authSvc := qoder.NewQoderAuth(nil) + user, errUser := authSvc.FetchUserStatus(cbRes.TokenString) + if errUser != nil { + log.Warnf("qoder: user status probe failed: %v", errUser) + } else { + uid = user.ID + name = user.Name + email = user.Email + } + } + + if uid == "" { + // Fallback: derive a stable UID from the token hash so we can still save credentials + tokenHash := fmt.Sprintf("%x", sha256.Sum256([]byte(cbRes.TokenString))) + uid = tokenHash[:16] + log.Warnf("qoder: using derived UID from token hash: %s", uid) + } + + now := time.Now() + metadata := map[string]any{ + "type": "qoder", + "access_token": cbRes.TokenString, + "auth": cbRes.AuthField, + "nonce": nonce, + "verifier": verifier, + "machine_id": machineID, + "uid": uid, + "timestamp": now.UnixMilli(), + } + if name != "" { + metadata["name"] = name + } + if email != "" { + metadata["email"] = email + } + + fileName := qoder.CredentialFileName(uid) + label := name + if label == "" { + label = uid + } + if label == "" { + label = "qoder" + } + + fmt.Println("Qoder authentication 
successful")
+	return &coreauth.Auth{
+		ID:       fileName,
+		Provider: "qoder",
+		FileName: fileName,
+		Label:    label,
+		Metadata: metadata,
+	}, nil
+}
+
+func startQoderCallbackServer(port int) (*http.Server, int, <-chan qoderCallbackResult, error) {
+	if port <= 0 {
+		port = qoder.CallbackPort
+	}
+	addr := fmt.Sprintf(":%d", port)
+	listener, err := net.Listen("tcp", addr)
+	if err != nil {
+		return nil, 0, nil, err
+	}
+	port = listener.Addr().(*net.TCPAddr).Port
+	resultCh := make(chan qoderCallbackResult, 1)
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/forward", func(w http.ResponseWriter, r *http.Request) {
+		rawURL := r.URL.Query().Get("url")
+		// Match Python: raw_url = unquote(raw_url) — VBS double-encodes the URL
+		rawURL, _ = url.QueryUnescape(rawURL)
+		prefix := "qoder://aicoding.aicoding-agent/login-success?"
+		if strings.HasPrefix(rawURL, prefix) {
+			qs := rawURL[len(prefix):]
+			// Now parse_qs equivalent — url.ParseQuery auto-decodes %xx values
+			parsed, errParse := url.ParseQuery(qs)
+			if errParse == nil {
+				token := ""
+				for _, k := range []string{"tokenString", "token"} {
+					if v := parsed.Get(k); v != "" {
+						token = v
+						break
+					}
+				}
+				if token != "" {
+					resultCh <- qoderCallbackResult{
+						TokenString: token,
+						AuthField:   parsed.Get("auth"),
+					}
+				}
+			}
+		}
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		_, _ = w.Write([]byte(`<!DOCTYPE html><html><head><title>Qoder Login</title></head>` +
+			`<body style="font-family: sans-serif; text-align: center; padding-top: 15%">` +
+			`<h1>✓ Login Successful</h1>` +
+			`<p>You can close this window and return to the terminal.</p>` +
+			`</body></html>
`)) + }) + + srv := &http.Server{Handler: mux} + go func() { + if errServe := srv.Serve(listener); errServe != nil && !strings.Contains(errServe.Error(), "Server closed") { + log.Warnf("qoder callback server error: %v", errServe) + } + }() + + return srv, port, resultCh, nil +} + +// extractQoderCallbackParams extracts tokenString and auth from the Qoder callback +// query string. The query string comes from a qoder:// URI where parameter values +// contain URL-encoded special characters (like %26 for &). Standard url.ParseQuery +// would incorrectly interpret encoded %26 as a parameter separator after URL decoding, +// so we extract the raw parameter values by finding known key= prefixes and splitting +// on key boundaries, then URL-decode each value individually. +func extractQoderCallbackParams(qs string) (token, authField string) { + // Known parameter keys in the callback URL + params := map[string]string{} + keys := []string{"tokenString", "token", "auth"} + + for _, key := range keys { + prefix := key + "=" + idx := strings.Index(qs, prefix) + if idx < 0 { + continue + } + valueStart := idx + len(prefix) + // Find the end of this parameter: look for the next "&key=" boundary + rest := qs[valueStart:] + endIdx := len(rest) + for _, nextKey := range keys { + boundary := "&" + nextKey + "=" + if bi := strings.Index(rest, boundary); bi >= 0 && bi < endIdx { + endIdx = bi + } + } + rawValue := rest[:endIdx] + // URL-decode the value + decoded, err := url.QueryUnescape(rawValue) + if err != nil { + decoded = rawValue + } + params[key] = decoded + } + + // Try tokenString first, fallback to token + token = params["tokenString"] + if token == "" { + token = params["token"] + } + authField = params["auth"] + return token, authField +} diff --git a/sdk/auth/refresh_registry.go b/sdk/auth/refresh_registry.go index ae60f56a64..fe25231507 100644 --- a/sdk/auth/refresh_registry.go +++ b/sdk/auth/refresh_registry.go @@ -3,7 +3,7 @@ package auth import ( "time" - 
-	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 )
 
 func init() {
diff --git a/sdk/auth/store_registry.go b/sdk/auth/store_registry.go
index 760449f8cf..1971947bc8 100644
--- a/sdk/auth/store_registry.go
+++ b/sdk/auth/store_registry.go
@@ -3,7 +3,7 @@ package auth
 import (
 	"sync"
 
-	coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
+	coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth"
 )
 
 var (
diff --git a/sdk/cliproxy/auth/antigravity_credits.go b/sdk/cliproxy/auth/antigravity_credits.go
new file mode 100644
index 0000000000..77b03bfd3e
--- /dev/null
+++ b/sdk/cliproxy/auth/antigravity_credits.go
@@ -0,0 +1,90 @@
+package auth
+
+import (
+	"context"
+	"strings"
+	"sync"
+	"time"
+)
+
+type antigravityUseCreditsContextKey struct{}
+
+// WithAntigravityCredits returns a child context that signals the executor to
+// inject enabledCreditTypes into the request payload.
+func WithAntigravityCredits(ctx context.Context) context.Context {
+	return context.WithValue(ctx, antigravityUseCreditsContextKey{}, true)
+}
+
+// AntigravityCreditsRequested reports whether the context carries the credits flag.
+func AntigravityCreditsRequested(ctx context.Context) bool {
+	if ctx == nil {
+		return false
+	}
+	v, _ := ctx.Value(antigravityUseCreditsContextKey{}).(bool)
+	return v
+}
+
+// AntigravityCreditsHint stores the latest known AI credits state for one auth.
+type AntigravityCreditsHint struct {
+	Known           bool
+	Available       bool
+	CreditAmount    float64
+	MinCreditAmount float64
+	PaidTierID      string
+	UpdatedAt       time.Time
+}
+
+var antigravityCreditsHintByAuth sync.Map
+
+// SetAntigravityCreditsHint updates the latest known AI credits state for an auth.
+func SetAntigravityCreditsHint(authID string, hint AntigravityCreditsHint) {
+	authID = strings.TrimSpace(authID)
+	if authID == "" {
+		return
+	}
+	if hint.UpdatedAt.IsZero() {
+		hint.UpdatedAt = time.Now()
+	}
+	antigravityCreditsHintByAuth.Store(authID, hint)
+}
+
+// GetAntigravityCreditsHint returns the latest known AI credits state for an auth.
+func GetAntigravityCreditsHint(authID string) (AntigravityCreditsHint, bool) {
+	authID = strings.TrimSpace(authID)
+	if authID == "" {
+		return AntigravityCreditsHint{}, false
+	}
+	value, ok := antigravityCreditsHintByAuth.Load(authID)
+	if !ok {
+		return AntigravityCreditsHint{}, false
+	}
+	hint, ok := value.(AntigravityCreditsHint)
+	if !ok {
+		antigravityCreditsHintByAuth.Delete(authID)
+		return AntigravityCreditsHint{}, false
+	}
+	return hint, true
+}
+
+// HasKnownAntigravityCreditsHint reports whether credits state has been discovered for an auth.
+func HasKnownAntigravityCreditsHint(authID string) bool {
+	hint, ok := GetAntigravityCreditsHint(authID)
+	return ok && hint.Known
+}
+
+func antigravityCreditsAvailableForModel(auth *Auth, model string) bool {
+	if auth == nil {
+		return false
+	}
+	if !strings.EqualFold(strings.TrimSpace(auth.Provider), "antigravity") {
+		return false
+	}
+	if !strings.Contains(strings.ToLower(strings.TrimSpace(model)), "claude") {
+		return false
+	}
+	hint, ok := GetAntigravityCreditsHint(auth.ID)
+	if !ok || !hint.Known {
+		return false
+	}
+	return hint.Available
+}
diff --git a/sdk/cliproxy/auth/antigravity_credits_test.go b/sdk/cliproxy/auth/antigravity_credits_test.go
new file mode 100644
index 0000000000..34a475dc6a
--- /dev/null
+++ b/sdk/cliproxy/auth/antigravity_credits_test.go
@@ -0,0 +1,154 @@
+package auth
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"testing"
+	"time"
+
+	internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor"
+)
+
+type antigravityCreditsFallbackExecutor struct {
+	streamCreditsRequested []bool
+}
+
+func (e *antigravityCreditsFallbackExecutor) Identifier() string { return "antigravity" }
+
+func (e *antigravityCreditsFallbackExecutor) Execute(context.Context, *Auth, cliproxyexecutor.Request, cliproxyexecutor.Options) (cliproxyexecutor.Response, error) {
+	return cliproxyexecutor.Response{}, &Error{HTTPStatus: http.StatusNotImplemented, Message: "Execute not implemented"}
+}
+
+func (e *antigravityCreditsFallbackExecutor) ExecuteStream(ctx context.Context, _ *Auth, req cliproxyexecutor.Request, _ cliproxyexecutor.Options) (*cliproxyexecutor.StreamResult, error) {
+	creditsRequested := AntigravityCreditsRequested(ctx)
+	e.streamCreditsRequested = append(e.streamCreditsRequested, creditsRequested)
+	ch := make(chan cliproxyexecutor.StreamChunk, 1)
+	if !creditsRequested {
+		ch <- cliproxyexecutor.StreamChunk{Err: &Error{HTTPStatus: http.StatusTooManyRequests, Message: "quota exhausted"}}
+		close(ch)
+		return &cliproxyexecutor.StreamResult{Headers: http.Header{"X-Initial": {req.Model}}, Chunks: ch}, nil
+	}
+	ch <- cliproxyexecutor.StreamChunk{Payload: []byte("credits fallback")}
+	close(ch)
+	return &cliproxyexecutor.StreamResult{Headers: http.Header{"X-Credits": {req.Model}}, Chunks: ch}, nil
+}
+
+func (e *antigravityCreditsFallbackExecutor) Refresh(_ context.Context, auth *Auth) (*Auth, error) {
+	return auth, nil
+}
+
+func (e *antigravityCreditsFallbackExecutor) CountTokens(context.Context, *Auth, cliproxyexecutor.Request, cliproxyexecutor.Options) (cliproxyexecutor.Response, error) {
+	return cliproxyexecutor.Response{}, &Error{HTTPStatus: http.StatusNotImplemented, Message: "CountTokens not implemented"}
+}
+
+func (e *antigravityCreditsFallbackExecutor) HttpRequest(context.Context, *Auth, *http.Request) (*http.Response, error) {
+	return nil, &Error{HTTPStatus: http.StatusNotImplemented, Message: "HttpRequest not implemented"}
+}
+
+func TestManagerExecuteStream_AntigravityCreditsFallbackAfterBootstrap429(t *testing.T) {
+	const model = "claude-opus-4-6-thinking"
+	executor := &antigravityCreditsFallbackExecutor{}
+	manager := NewManager(nil, nil, nil)
+	manager.SetConfig(&internalconfig.Config{
+		QuotaExceeded: internalconfig.QuotaExceeded{AntigravityCredits: true},
+	})
+	manager.RegisterExecutor(executor)
+	registry.GetGlobalRegistry().RegisterClient("ag-credits", "antigravity", []*registry.ModelInfo{{ID: model}})
+	t.Cleanup(func() { registry.GetGlobalRegistry().UnregisterClient("ag-credits") })
+	if _, errRegister := manager.Register(context.Background(), &Auth{ID: "ag-credits", Provider: "antigravity"}); errRegister != nil {
+		t.Fatalf("register auth: %v", errRegister)
+	}
+
+	streamResult, errExecute := manager.ExecuteStream(context.Background(), []string{"antigravity"}, cliproxyexecutor.Request{Model: model}, cliproxyexecutor.Options{})
+	if errExecute != nil {
+		t.Fatalf("execute stream: %v", errExecute)
+	}
+
+	var payload []byte
+	for chunk := range streamResult.Chunks {
+		if chunk.Err != nil {
+			t.Fatalf("unexpected stream error: %v", chunk.Err)
+		}
+		payload = append(payload, chunk.Payload...)
+	}
+	if string(payload) != "credits fallback" {
+		t.Fatalf("payload = %q, want %q", string(payload), "credits fallback")
+	}
+	if got := streamResult.Headers.Get("X-Credits"); got != model {
+		t.Fatalf("X-Credits header = %q, want routed model", got)
+	}
+	if len(executor.streamCreditsRequested) != 2 {
+		t.Fatalf("stream calls = %d, want 2", len(executor.streamCreditsRequested))
+	}
+	if executor.streamCreditsRequested[0] || !executor.streamCreditsRequested[1] {
+		t.Fatalf("credits flags = %v, want [false true]", executor.streamCreditsRequested)
+	}
+}
+
+func TestStatusCodeFromError_UnwrapsStreamBootstrap429(t *testing.T) {
+	bootstrapErr := newStreamBootstrapError(&Error{HTTPStatus: http.StatusTooManyRequests, Message: "quota exhausted"}, nil)
+	wrappedErr := fmt.Errorf("conductor stream failed: %w", bootstrapErr)
+
+	if status := statusCodeFromError(wrappedErr); status != http.StatusTooManyRequests {
+		t.Fatalf("statusCodeFromError() = %d, want %d", status, http.StatusTooManyRequests)
+	}
+}
+
+func TestIsAuthBlockedForModel_ClaudeWithCreditsStillBlockedDuringCooldown(t *testing.T) {
+	auth := &Auth{
+		ID:       "ag-1",
+		Provider: "antigravity",
+		ModelStates: map[string]*ModelState{
+			"claude-sonnet-4-6": {
+				Unavailable:    true,
+				NextRetryAfter: time.Now().Add(10 * time.Minute),
+				Quota: QuotaState{
+					Exceeded:      true,
+					NextRecoverAt: time.Now().Add(10 * time.Minute),
+				},
+			},
+		},
+	}
+
+	SetAntigravityCreditsHint(auth.ID, AntigravityCreditsHint{
+		Known:     true,
+		Available: true,
+		UpdatedAt: time.Now(),
+	})
+
+	blocked, reason, _ := isAuthBlockedForModel(auth, "claude-sonnet-4-6", time.Now())
+	if !blocked || reason != blockReasonCooldown {
+		t.Fatalf("expected auth to be blocked during cooldown even with credits, got blocked=%v reason=%v", blocked, reason)
+	}
+}
+
+func TestIsAuthBlockedForModel_KeepsGeminiBlockedWithoutCreditsBypass(t *testing.T) {
+	auth := &Auth{
+		ID:       "ag-2",
+		Provider: "antigravity",
+		ModelStates: map[string]*ModelState{
+			"gemini-3-flash": {
+				Unavailable:    true,
+				NextRetryAfter: time.Now().Add(10 * time.Minute),
+				Quota: QuotaState{
+					Exceeded:      true,
+					NextRecoverAt: time.Now().Add(10 * time.Minute),
+				},
+			},
+		},
+	}
+
+	SetAntigravityCreditsHint(auth.ID, AntigravityCreditsHint{
+		Known:     true,
+		Available: true,
+		UpdatedAt: time.Now(),
+	})
+
+	blocked, reason, _ := isAuthBlockedForModel(auth, "gemini-3-flash", time.Now())
+	if !blocked || reason != blockReasonCooldown {
+		t.Fatalf("expected gemini auth to remain blocked, got blocked=%v reason=%v", blocked, reason)
+	}
+}
diff --git a/sdk/cliproxy/auth/api_key_model_alias_test.go b/sdk/cliproxy/auth/api_key_model_alias_test.go
index 70915d9e37..25da4df4ed 100644
--- a/sdk/cliproxy/auth/api_key_model_alias_test.go
+++ b/sdk/cliproxy/auth/api_key_model_alias_test.go
@@ -4,7 +4,7 @@ import (
 	"context"
 	"testing"
 
-	internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config"
+	internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config"
 )
 
 func TestLookupAPIKeyUpstreamModel(t *testing.T) {
diff --git a/sdk/cliproxy/auth/auto_refresh_loop.go b/sdk/cliproxy/auth/auto_refresh_loop.go
index 9767ee5803..2b544631fe 100644
--- a/sdk/cliproxy/auth/auto_refresh_loop.go
+++ b/sdk/cliproxy/auth/auto_refresh_loop.go
@@ -336,7 +336,7 @@ func (l *authAutoRefreshLoop) remove(authID string) {
 }
 
 func nextRefreshCheckAt(now time.Time, auth *Auth, interval time.Duration) (time.Time, bool) {
-	if auth == nil || auth.Disabled {
+	if auth == nil {
 		return time.Time{}, false
 	}
 
diff --git a/sdk/cliproxy/auth/auto_refresh_loop_test.go b/sdk/cliproxy/auth/auto_refresh_loop_test.go
index 420aae237a..e4edb2df55 100644
--- a/sdk/cliproxy/auth/auto_refresh_loop_test.go
+++ b/sdk/cliproxy/auth/auto_refresh_loop_test.go
@@ -34,9 +34,31 @@ func setRefreshLeadFactory(t *testing.T, provider string, factory func() *time.D
 
 func TestNextRefreshCheckAt_DisabledUnschedule(t *testing.T) {
 	now := time.Date(2026, 4, 12, 0, 0, 0, 0, time.UTC)
-	auth := &Auth{ID: "a1", Provider: "test", Disabled: true}
-	if _, ok := nextRefreshCheckAt(now, auth, 15*time.Minute); ok {
-		t.Fatalf("nextRefreshCheckAt() ok = true, want false")
+	expiry := now.Add(time.Hour)
+	lead := 10 * time.Minute
+	setRefreshLeadFactory(t, "disabled-schedule", func() *time.Duration {
+		d := lead
+		return &d
+	})
+
+	auth := &Auth{
+		ID:       "a1",
+		Provider: "disabled-schedule",
+		Disabled: true,
+		Status:   StatusDisabled,
+		Metadata: map[string]any{
+			"email":      "x@example.com",
+			"expires_at": expiry.Format(time.RFC3339),
+		},
+	}
+
+	got, ok := nextRefreshCheckAt(now, auth, 15*time.Minute)
+	if !ok {
+		t.Fatalf("nextRefreshCheckAt() ok = false, want true")
+	}
+	want := expiry.Add(-lead)
+	if !got.Equal(want) {
+		t.Fatalf("nextRefreshCheckAt() = %s, want %s", got, want)
 	}
 }
diff --git a/sdk/cliproxy/auth/conductor.go b/sdk/cliproxy/auth/conductor.go
index f58722039c..5d6a303568 100644
--- a/sdk/cliproxy/auth/conductor.go
+++ b/sdk/cliproxy/auth/conductor.go
@@ -16,12 +16,14 @@ import (
 	"time"
 
 	"github.com/google/uuid"
-	internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/logging"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
-	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
-	cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
+	internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/home"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/logging"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/registry"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking"
+	"github.com/router-for-me/CLIProxyAPI/v7/internal/util"
+	cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor"
+	coreusage "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage"
 	log "github.com/sirupsen/logrus"
 )
 
@@ -49,6 +51,7 @@ type ExecutionSessionCloser interface {
 }
 
 const (
+	homeAuthCountMetadataKey = "__cliproxy_home_auth_count"
 	// CloseAllExecutionSessionsID asks an executor to release all active execution sessions.
 	// Executors that do not support this marker may ignore it.
 	CloseAllExecutionSessionsID = "__all_execution_sessions__"
@@ -64,8 +67,13 @@ const (
 	refreshMaxConcurrency = 16
 	refreshPendingBackoff = time.Minute
 	refreshFailureBackoff = 5 * time.Minute
-	quotaBackoffBase      = time.Second
-	quotaBackoffMax       = 30 * time.Minute
+	// refreshIneffectiveBackoff throttles refresh attempts when an executor returns
+	// success but the auth still evaluates as needing refresh (e.g. token expiry
+	// wasn't updated). Without this guard, the auto-refresh loop can tight-loop and
+	// burn CPU at idle.
+	refreshIneffectiveBackoff = 30 * time.Second
+	quotaBackoffBase          = time.Second
+	quotaBackoffMax           = 30 * time.Minute
 )
 
 var quotaCooldownDisabled atomic.Bool
@@ -143,6 +151,9 @@ type Manager struct {
 	mu        sync.RWMutex
 	auths     map[string]*Auth
 	scheduler *authScheduler
+	// homeRuntimeAuths caches auths returned by Home so websocket sessions can
+	// reuse an established upstream credential without dispatching every turn.
+	homeRuntimeAuths map[string]map[string]*Auth
 
 	// providerOffsets tracks per-model provider rotation state for multi-provider routing.
 	providerOffsets map[string]int
@@ -187,6 +198,7 @@ func NewManager(store Store, selector Selector, hook Hook) *Manager {
 		selector:         selector,
 		hook:             hook,
 		auths:            make(map[string]*Auth),
+		homeRuntimeAuths: make(map[string]map[string]*Auth),
 		providerOffsets:  make(map[string]int),
 		modelPoolOffsets: make(map[string]int),
 	}
@@ -368,9 +380,21 @@ func (m *Manager) SetConfig(cfg *internalconfig.Config) {
 		cfg = &internalconfig.Config{}
 	}
 	m.runtimeConfig.Store(cfg)
+	if !cfg.Home.Enabled {
+		m.clearHomeRuntimeAuths()
+	}
 	m.rebuildAPIKeyModelAliasFromRuntimeConfig()
 }
 
+// HomeEnabled reports whether the home control plane integration is enabled in the runtime config.
+func (m *Manager) HomeEnabled() bool {
+	if m == nil {
+		return false
+	}
+	cfg, _ := m.runtimeConfig.Load().(*internalconfig.Config)
+	return cfg != nil && cfg.Home.Enabled
+}
+
 func (m *Manager) lookupAPIKeyUpstreamModel(authID, requestedModel string) string {
 	if m == nil {
 		return ""
@@ -516,6 +540,11 @@ func preserveRequestedModelSuffix(requestedModel, resolved string) string {
 }
 
 func (m *Manager) executionModelCandidates(auth *Auth, routeModel string) []string {
+	if auth != nil && auth.Attributes != nil {
+		if homeModel := strings.TrimSpace(auth.Attributes[homeUpstreamModelAttributeKey]); homeModel != "" {
+			return []string{homeModel}
+		}
+	}
 	requestedModel := rewriteModelForAuth(routeModel, auth)
 	requestedModel = m.applyOAuthModelAlias(auth, requestedModel)
 	if pool := m.resolveOpenAICompatUpstreamModelPool(auth, requestedModel); len(pool) > 0 {
@@ -549,6 +578,14 @@ func (m *Manager) selectionModelKeyForAuth(auth *Auth, routeModel string) string
 }
 
 func (m *Manager) stateModelForExecution(auth *Auth, routeModel, upstreamModel string, pooled bool) string {
+	if auth != nil && auth.Attributes != nil {
+		if homeModel := strings.TrimSpace(auth.Attributes[homeUpstreamModelAttributeKey]); homeModel != "" {
+			if resolved := strings.TrimSpace(upstreamModel); resolved != "" {
+				return resolved
+			}
+			return homeModel
+		}
+	}
 	stateModel := executionResultModel(routeModel, upstreamModel, pooled)
 	selectionModel := m.selectionModelForAuth(auth, routeModel)
 	if canonicalModelKey(selectionModel) == canonicalModelKey(upstreamModel) && strings.TrimSpace(selectionModel) != "" {
@@ -822,6 +859,7 @@ func (m *Manager) executeStreamWithModelPool(ctx context.Context, executor Provi
 	if executor == nil {
 		return nil, &Error{Code: "executor_not_found", Message: "executor not registered"}
 	}
+	ctx = contextWithRequestedModelAlias(ctx, opts, routeModel)
 	var lastErr error
 	for idx, execModel := range execModels {
 		resultModel := m.stateModelForExecution(auth, routeModel, execModel, pooled)
@@ -1121,6 +1159,9 @@ func (m *Manager) Update(ctx context.Context, auth *Auth) (*Auth, error) {
 		auth.Index = existing.Index
 		auth.indexAssigned = existing.indexAssigned
 	}
+	auth.Success = existing.Success
+	auth.Failed = existing.Failed
+	auth.recentRequests = existing.recentRequests
 	if !existing.Disabled && existing.Status != StatusDisabled && !auth.Disabled && auth.Status != StatusDisabled {
 		if len(auth.ModelStates) == 0 && len(existing.ModelStates) > 0 {
 			auth.ModelStates = existing.ModelStates
@@ -1197,12 +1238,16 @@
 		}
 	}
 	if lastErr != nil {
+		if shouldAttemptAntigravityCreditsFallback(m, lastErr, normalized) {
+			if resp, ok := m.tryAntigravityCreditsExecute(ctx, req, opts); ok {
+				return resp, nil
+			}
+		}
 		return cliproxyexecutor.Response{}, lastErr
 	}
 	return cliproxyexecutor.Response{}, &Error{Code: "auth_not_found", Message: "no auth available"}
 }
 
-// ExecuteCount performs a non-streaming execution using the configured selector and executor.
 // It supports multiple providers for the same model and round-robins the starting provider per model.
 func (m *Manager) ExecuteCount(ctx context.Context, providers []string, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) {
 	normalized := m.normalizeProviders(providers)
@@ -1259,6 +1304,15 @@ func (m *Manager) ExecuteStream(ctx context.Context, providers []string, req cli
 		}
 	}
 	if lastErr != nil {
+		if shouldAttemptAntigravityCreditsFallback(m, lastErr, normalized) {
+			if result, ok := m.tryAntigravityCreditsExecuteStream(ctx, req, opts); ok {
+				return result, nil
+			}
+		}
+		var bootstrapErr *streamBootstrapError
+		if errors.As(lastErr, &bootstrapErr) && bootstrapErr != nil {
+			return streamErrorResult(bootstrapErr.Headers(), bootstrapErr.cause), nil
+		}
 		return nil, lastErr
 	}
 	return nil, &Error{Code: "auth_not_found", Message: "no auth available"}
@@ -1270,19 +1324,25 @@
 	}
 	routeModel := req.Model
 	opts = ensureRequestedModelMetadata(opts, routeModel)
+	homeMode := m.HomeEnabled()
+	homeAuthCount := 1
 	tried := make(map[string]struct{})
 	attempted := make(map[string]struct{})
 	var lastErr error
 	for {
-		if maxRetryCredentials > 0 && len(attempted) >= maxRetryCredentials {
+		if !homeMode && maxRetryCredentials > 0 && len(attempted) >= maxRetryCredentials {
 			if lastErr != nil {
 				return cliproxyexecutor.Response{}, lastErr
 			}
 			return cliproxyexecutor.Response{}, &Error{Code: "auth_not_found", Message: "no auth available"}
 		}
-		auth, executor, provider, errPick := m.pickNextMixed(ctx, providers, routeModel, opts, tried)
+		pickOpts := opts
+		if homeMode {
+			pickOpts = withHomeAuthCount(opts, homeAuthCount)
+		}
+		auth, executor, provider, errPick := m.pickNextMixed(ctx, providers, routeModel, pickOpts, tried)
 		if errPick != nil {
-			if lastErr != nil {
+			if shouldReturnLastErrorOnPickFailure(homeMode, lastErr, errPick) {
 				return cliproxyexecutor.Response{}, lastErr
 			}
 			return cliproxyexecutor.Response{}, errPick
@@ -1298,6 +1358,7 @@ func (m *Manager) executeMixedOnce(ctx context.Context, providers []string, req
 			execCtx = context.WithValue(execCtx, roundTripperContextKey{}, rt)
 			execCtx = context.WithValue(execCtx, "cliproxy.roundtripper", rt)
 		}
+		execCtx = contextWithRequestedModelAlias(execCtx, opts, routeModel)
 
 		models, pooled := m.preparedExecutionModels(auth, routeModel)
 		if len(models) == 0 {
@@ -1337,6 +1398,9 @@
 				return cliproxyexecutor.Response{}, authErr
 			}
 			lastErr = authErr
+			if homeMode {
+				homeAuthCount++
+			}
 			continue
 		}
 	}
@@ -1348,19 +1412,25 @@
 	}
 	routeModel := req.Model
 	opts = ensureRequestedModelMetadata(opts, routeModel)
+	homeMode := m.HomeEnabled()
+	homeAuthCount := 1
 	tried := make(map[string]struct{})
 	attempted := make(map[string]struct{})
 	var lastErr error
 	for {
-		if maxRetryCredentials > 0 && len(attempted) >= maxRetryCredentials {
+		if !homeMode && maxRetryCredentials > 0 && len(attempted) >= maxRetryCredentials {
 			if lastErr != nil {
 				return cliproxyexecutor.Response{}, lastErr
 			}
 			return cliproxyexecutor.Response{}, &Error{Code: "auth_not_found", Message: "no auth available"}
 		}
-		auth, executor, provider, errPick := m.pickNextMixed(ctx, providers, routeModel, opts, tried)
+		pickOpts := opts
+		if homeMode {
+			pickOpts = withHomeAuthCount(opts, homeAuthCount)
+		}
+		auth, executor, provider, errPick := m.pickNextMixed(ctx, providers, routeModel, pickOpts, tried)
 		if errPick != nil {
-			if lastErr != nil {
+			if shouldReturnLastErrorOnPickFailure(homeMode, lastErr, errPick) {
 				return cliproxyexecutor.Response{}, lastErr
 			}
 			return cliproxyexecutor.Response{}, errPick
@@ -1376,6 +1446,7 @@ func (m *Manager) executeCountMixedOnce(ctx context.Context, providers []string,
 			execCtx = context.WithValue(execCtx, roundTripperContextKey{}, rt)
 			execCtx = context.WithValue(execCtx, "cliproxy.roundtripper", rt)
 		}
+		execCtx = contextWithRequestedModelAlias(execCtx, opts, routeModel)
 
 		models, pooled := m.preparedExecutionModels(auth, routeModel)
 		if len(models) == 0 {
@@ -1415,6 +1486,9 @@ func (m *Manager) executeCountMixedOnce(ctx context.Context, providers []string,
 				return cliproxyexecutor.Response{}, authErr
 			}
 			lastErr = authErr
+			if homeMode {
+				homeAuthCount++
+			}
 			continue
 		}
 	}
@@ -1426,27 +1500,25 @@ func (m *Manager) executeStreamMixedOnce(ctx context.Context, providers []string
 	}
 	routeModel := req.Model
 	opts = ensureRequestedModelMetadata(opts, routeModel)
+	homeMode := m.HomeEnabled()
+	homeAuthCount := 1
 	tried := make(map[string]struct{})
 	attempted := make(map[string]struct{})
 	var lastErr error
 	for {
-		if maxRetryCredentials > 0 && len(attempted) >= maxRetryCredentials {
+		if !homeMode && maxRetryCredentials > 0 && len(attempted) >= maxRetryCredentials {
 			if lastErr != nil {
-				var bootstrapErr *streamBootstrapError
-				if errors.As(lastErr, &bootstrapErr) && bootstrapErr != nil {
-					return streamErrorResult(bootstrapErr.Headers(), bootstrapErr.cause), nil
-				}
 				return nil, lastErr
 			}
 			return nil, &Error{Code: "auth_not_found", Message: "no auth available"}
 		}
-		auth, executor, provider, errPick := m.pickNextMixed(ctx, providers, routeModel, opts, tried)
+		pickOpts := opts
+		if homeMode {
+			pickOpts = withHomeAuthCount(opts, homeAuthCount)
+		}
+		auth, executor, provider, errPick := m.pickNextMixed(ctx, providers, routeModel, pickOpts, tried)
 		if errPick != nil {
-			if lastErr != nil {
-				var bootstrapErr *streamBootstrapError
-				if errors.As(lastErr, &bootstrapErr) && bootstrapErr != nil {
-					return streamErrorResult(bootstrapErr.Headers(), bootstrapErr.cause), nil
-				}
+			if shouldReturnLastErrorOnPickFailure(homeMode, lastErr, errPick) {
 				return nil, lastErr
 			}
 			return nil, errPick
@@ -1476,6 +1548,9 @@ func (m *Manager) executeStreamMixedOnce(ctx context.Context, providers []string
 				return nil, errStream
 			}
 			lastErr = errStream
+			if homeMode {
+				homeAuthCount++
+			}
 			continue
 		}
 		return streamResult, nil
@@ -1503,6 +1578,40 @@ func ensureRequestedModelMetadata(opts cliproxyexecutor.Options, requestedModel
 	return opts
 }
 
+func withHomeAuthCount(opts cliproxyexecutor.Options, count int) cliproxyexecutor.Options {
+	if count <= 0 {
+		count = 1
+	}
+	meta := make(map[string]any, len(opts.Metadata)+1)
+	for k, v := range opts.Metadata {
+		meta[k] = v
+	}
+	meta[homeAuthCountMetadataKey] = count
+	opts.Metadata = meta
+	return opts
+}
+
+func homeAuthCountFromMetadata(meta map[string]any) int {
+	if len(meta) == 0 {
+		return 1
+	}
+	switch value := meta[homeAuthCountMetadataKey].(type) {
+	case int:
+		if value > 0 {
+			return value
+		}
+	case int64:
+		if value > 0 {
+			return int(value)
+		}
+	case float64:
+		if value > 0 {
+			return int(value)
+		}
+	}
+	return 1
+}
+
 func hasRequestedModelMetadata(meta map[string]any) bool {
 	if len(meta) == 0 {
 		return false
@@ -1521,6 +1630,36 @@
 	}
 }
 
+func contextWithRequestedModelAlias(ctx context.Context, opts cliproxyexecutor.Options, fallback string) context.Context {
+	alias := requestedModelAliasFromOptions(opts, fallback)
+	return coreusage.WithRequestedModelAlias(ctx, alias)
+}
+
+func requestedModelAliasFromOptions(opts cliproxyexecutor.Options, fallback string) string {
+	fallback = strings.TrimSpace(fallback)
+	if len(opts.Metadata) == 0 {
+		return fallback
+	}
+	raw, ok := opts.Metadata[cliproxyexecutor.RequestedModelMetadataKey]
+	if !ok || raw == nil {
+		return fallback
+	}
+	switch value := raw.(type) {
+	case string:
+		if strings.TrimSpace(value) == "" {
+			return fallback
+		}
+		return strings.TrimSpace(value)
+	case []byte:
+		if len(value) == 0 {
+			return fallback
+		}
+		return strings.TrimSpace(string(value))
+	default:
+		return fallback
+	}
+}
+
 func pinnedAuthIDFromMetadata(meta map[string]any) string {
 	if len(meta) == 0 {
 		return ""
@@ -1539,6 +1678,38 @@
 	}
 }
 
+func disallowFreeAuthFromMetadata(meta map[string]any) bool {
+	if len(meta) == 0 {
+		return false
+	}
+	raw, ok := meta[cliproxyexecutor.DisallowFreeAuthMetadataKey]
+	if !ok || raw == nil {
+		return false
+	}
+	switch val := raw.(type) {
+	case bool:
+		return val
+	case string:
+		parsed, err := strconv.ParseBool(strings.TrimSpace(val))
+		return err == nil && parsed
+	case []byte:
+		parsed, err := strconv.ParseBool(strings.TrimSpace(string(val)))
+		return err == nil && parsed
+	default:
+		return false
+	}
+}
+
+func isFreeCodexAuth(auth *Auth) bool {
+	if auth == nil || auth.Attributes == nil {
+		return false
+	}
+	if !strings.EqualFold(strings.TrimSpace(auth.Provider), "codex") {
+		return false
+	}
+	return strings.EqualFold(strings.TrimSpace(auth.Attributes["plan_type"]), "free")
+}
+
 func publishSelectedAuthMetadata(meta map[string]any, authID string) {
 	if len(meta) == 0 {
 		return
@@ -1757,6 +1928,9 @@ func resolveOpenAICompatConfig(cfg *internalconfig.Config, providerKey, compatNa
 	}
 	for i := range cfg.OpenAICompatibility {
 		compat := &cfg.OpenAICompatibility[i]
+		if compat.Disabled {
+			continue
+		}
 		for _, candidate := range candidates {
 			if candidate != "" && strings.EqualFold(strings.TrimSpace(candidate), compat.Name) {
 				return compat
@@ -1976,6 +2150,12 @@ func (m *Manager) MarkResult(ctx context.Context, result Result) {
 	m.mu.Lock()
 	if auth, ok := m.auths[result.AuthID]; ok && auth != nil {
 		now := time.Now()
+		auth.recordRecentRequest(now, result.Success)
+		if result.Success {
+			auth.Success++
+		} else {
+			auth.Failed++
+		}
 
 		if result.Success {
 			if result.Model != "" {
@@ -2285,6 +2465,13 @@
 	}
 }
 
+func errorString(err error) string {
+	if err == nil {
+		return ""
+	}
+	return err.Error()
+}
+
 func statusCodeFromError(err error) int {
 	if err == nil {
 		return 0
@@ -2314,7 +2501,8 @@
 	if retryAfter == nil {
 		return nil
 	}
-	return new(*retryAfter)
+	value := *retryAfter
+	return &value
 }
 
 func statusCodeFromResult(err *Error) int {
@@ -2404,11 +2592,18 @@
 	status := statusCodeFromError(err)
 	switch status {
 	case http.StatusBadRequest:
-		return strings.Contains(err.Error(), "invalid_request_error")
+		msg := err.Error()
+		return strings.Contains(msg, "invalid_request_error") ||
+			strings.Contains(msg, "INVALID_ARGUMENT") ||
+			strings.Contains(msg, "FAILED_PRECONDITION")
	case http.StatusNotFound:
 		return isRequestScopedNotFoundMessage(err.Error())
 	case http.StatusUnprocessableEntity:
 		return true
+	case http.StatusInternalServerError:
+		msg := err.Error()
+		return strings.Contains(msg, "\"status\":\"UNKNOWN\"") ||
+			strings.Contains(msg, "\"status\": \"UNKNOWN\"")
 	default:
 		return false
 	}
@@ -2530,6 +2725,23 @@
 	return auth.Clone(), true
 }
 
+// GetExecutionSessionAuthByID retrieves a Home runtime auth scoped to an execution session.
+func (m *Manager) GetExecutionSessionAuthByID(sessionID string, authID string) (*Auth, bool) {
+	sessionID = strings.TrimSpace(sessionID)
+	authID = strings.TrimSpace(authID)
+	if m == nil || sessionID == "" || authID == "" {
+		return nil, false
+	}
+	m.mu.RLock()
+	defer m.mu.RUnlock()
+	sessionAuths := m.homeRuntimeAuths[sessionID]
+	auth := sessionAuths[authID]
+	if auth == nil {
+		return nil, false
+	}
+	return auth.Clone(), true
+}
+
 // Executor returns the registered provider executor for a provider key.
 func (m *Manager) Executor(provider string) (ProviderExecutor, bool) {
 	if m == nil {
@@ -2563,12 +2775,17 @@
 		return
 	}
 
-	m.mu.RLock()
+	m.mu.Lock()
+	if sessionID == CloseAllExecutionSessionsID {
+		m.clearHomeRuntimeAuthsLocked()
+	} else {
+		m.clearHomeRuntimeAuthsForSessionLocked(sessionID)
+	}
 	executors := make([]ProviderExecutor, 0, len(m.executors))
 	for _, exec := range m.executors {
 		executors = append(executors, exec)
 	}
-	m.mu.RUnlock()
+	m.mu.Unlock()
 
 	for i := range executors {
 		if closer, ok := executors[i].(ExecutionSessionCloser); ok && closer != nil {
@@ -2607,7 +2824,13 @@ func (m *Manager) routeAwareSelectionRequired(auth *Auth, routeModel string) boo
 }
 
 func (m *Manager) pickNextLegacy(ctx context.Context, provider, model string, opts cliproxyexecutor.Options, tried map[string]struct{}) (*Auth, ProviderExecutor, error) {
+	if m.HomeEnabled() {
+		auth, exec, _, err := m.pickNextViaHome(ctx, model, opts, tried)
+		return auth, exec, err
+	}
+
 	pinnedAuthID := pinnedAuthIDFromMetadata(opts.Metadata)
+	disallowFreeAuth := disallowFreeAuthFromMetadata(opts.Metadata)
 
 	m.mu.RLock()
 	executor, okExecutor := m.executors[provider]
@@ -2632,6 +2855,9 @@
 		if pinnedAuthID != "" && candidate.ID != pinnedAuthID {
 			continue
 		}
+		if disallowFreeAuth && isFreeCodexAuth(candidate) {
+			continue
+		}
 		if _, used := tried[candidate.ID]; used {
 			continue
 		}
@@ -2672,6 +2898,11 @@
 }
 
 func (m *Manager) pickNext(ctx context.Context, provider, model string, opts cliproxyexecutor.Options, tried map[string]struct{}) (*Auth, ProviderExecutor, error) {
+	if m.HomeEnabled() {
+		auth, exec, _, err := m.pickNextViaHome(ctx, model, opts, tried)
+		return auth, exec, err
+	}
+
 	if !m.useSchedulerFastPath() {
 		return m.pickNextLegacy(ctx, provider, model, opts, tried)
 	}
@@ -2695,31 +2926,46 @@ func (m *Manager) pickNext(ctx context.Context, provider, model string, opts cli
 	if !okExecutor {
 		return nil, nil, &Error{Code: "executor_not_found", Message: "executor not registered"}
 	}
-	selected, errPick := m.scheduler.pickSingle(ctx, provider, model, opts, tried)
-	if errPick != nil && model != "" && shouldRetrySchedulerPick(errPick) {
-		m.syncScheduler()
-		selected, errPick = m.scheduler.pickSingle(ctx, provider, model, opts, tried)
-	}
-	if errPick != nil {
-		return nil, nil, errPick
-	}
-	if selected == nil {
-		return nil, nil, &Error{Code: "auth_not_found", Message: "selector returned no auth"}
-	}
-	authCopy := selected.Clone()
-	if !selected.indexAssigned {
-		m.mu.Lock()
-		if current := m.auths[authCopy.ID]; current != nil && !current.indexAssigned {
-			current.EnsureIndex()
-			authCopy = current.Clone()
+	disallowFreeAuth := disallowFreeAuthFromMetadata(opts.Metadata)
+	for {
+		selected, errPick := m.scheduler.pickSingle(ctx, provider, model, opts, tried)
+		if errPick != nil && model != "" && shouldRetrySchedulerPick(errPick) {
+			m.syncScheduler()
+			selected, errPick = m.scheduler.pickSingle(ctx, provider, model, opts, tried)
 		}
+		if errPick != nil {
+			return nil, nil, errPick
+		}
+		if selected == nil {
+			return nil, nil, &Error{Code: "auth_not_found", Message: "selector returned no auth"}
+		}
+		if disallowFreeAuth && isFreeCodexAuth(selected) {
+			if tried == nil {
+				tried = make(map[string]struct{})
+			}
+			tried[selected.ID] = struct{}{}
+			continue
+		}
+		authCopy := selected.Clone()
+		if !selected.indexAssigned {
+			m.mu.Lock()
+			if current := m.auths[authCopy.ID]; current != nil && !current.indexAssigned {
+				current.EnsureIndex()
+				authCopy = current.Clone()
+			}
+			m.mu.Unlock()
+		}
+		return authCopy, executor, nil
 	}
-	return authCopy, executor, nil
 }
 
 func (m *Manager) pickNextMixedLegacy(ctx context.Context, providers []string, model string, opts cliproxyexecutor.Options, tried map[string]struct{}) (*Auth, ProviderExecutor, string, error) {
+	if m.HomeEnabled() {
+		return m.pickNextViaHome(ctx, model, opts, tried)
+	}
+
 	pinnedAuthID := pinnedAuthIDFromMetadata(opts.Metadata)
+	disallowFreeAuth := disallowFreeAuthFromMetadata(opts.Metadata)
 
 	providerSet := make(map[string]struct{}, len(providers))
 	for _, provider := range providers {
@@ -2751,6 +2997,9 @@ func (m *Manager) pickNextMixedLegacy(ctx context.Context, providers []string, m
 		if pinnedAuthID != "" && candidate.ID != pinnedAuthID {
 			continue
 		}
+		if disallowFreeAuth && isFreeCodexAuth(candidate) {
+			continue
+		}
 		providerKey := strings.TrimSpace(strings.ToLower(candidate.Provider))
 		if providerKey == "" {
 			continue
 		}
@@ -2807,6 +3056,10 @@
 }
 
 func (m *Manager) pickNextMixed(ctx context.Context, providers []string, model string, opts cliproxyexecutor.Options, tried map[string]struct{}) (*Auth, ProviderExecutor, string, error) {
+	if m.HomeEnabled() {
+		return m.pickNextViaHome(ctx, model, opts, tried)
+	}
+
 	if !m.useSchedulerFastPath() {
 		return m.pickNextMixedLegacy(ctx, providers, model, opts, tried)
 	}
@@ -2854,33 +3107,492 @@
 		m.mu.RUnlock()
 	}
 
-	selected, providerKey, errPick := m.scheduler.pickMixed(ctx, eligibleProviders, model, opts, tried)
-	if errPick != nil && model != "" && shouldRetrySchedulerPick(errPick) {
-		m.syncScheduler()
-		selected, providerKey, errPick = m.scheduler.pickMixed(ctx, eligibleProviders, model, opts, tried)
+	disallowFreeAuth := disallowFreeAuthFromMetadata(opts.Metadata)
+	for {
+		selected, providerKey, errPick := m.scheduler.pickMixed(ctx, eligibleProviders, model, opts, tried)
+		if errPick != nil && model != "" && shouldRetrySchedulerPick(errPick) {
+			m.syncScheduler()
+			selected, providerKey, errPick = m.scheduler.pickMixed(ctx, eligibleProviders, model, opts, tried)
+		}
+		if errPick != nil {
+			return nil, nil, "", errPick
+		}
+		if selected == nil {
+			return nil, nil, "", &Error{Code: "auth_not_found", Message: "selector returned no auth"}
+		}
+		if disallowFreeAuth && isFreeCodexAuth(selected) {
+			if tried == nil {
+				tried = make(map[string]struct{})
+			}
+			tried[selected.ID] = struct{}{}
+			continue
+		}
+		executor, okExecutor := m.Executor(providerKey)
+		if !okExecutor {
+			return nil, nil, "", &Error{Code: "executor_not_found", Message: "executor not registered"}
+		}
+		authCopy := selected.Clone()
+		if !selected.indexAssigned {
+			m.mu.Lock()
+			if current := m.auths[authCopy.ID]; current != nil && !current.indexAssigned {
+				current.EnsureIndex()
+				authCopy = current.Clone()
+			}
+			m.mu.Unlock()
+		}
+		return authCopy, executor, providerKey, nil
 	}
-	if errPick != nil {
-		return nil, nil, "", errPick
+}
+
+type homeErrorEnvelope struct {
+	Error *homeErrorDetail `json:"error"`
+}
+
+type homeErrorDetail struct {
+	Type    string `json:"type"`
+	Message string `json:"message"`
+	Code    string `json:"code,omitempty"`
+}
+
+const (
+	homeUpstreamModelAttributeKey     = "home_upstream_model"
+	homeRequestRetryExceededErrorCode = "request_retry_exceeded"
+)
+
+func isHomeRequestRetryExceededError(err error) bool {
+	var authErr *Error
+	if !errors.As(err, &authErr) || authErr == nil {
+		return false
 	}
-	if selected == nil {
-		return nil, nil, "", &Error{Code: "auth_not_found", Message: "selector returned no auth"}
+	return strings.EqualFold(strings.TrimSpace(authErr.Code), homeRequestRetryExceededErrorCode)
+}
+
+func shouldReturnLastErrorOnPickFailure(homeMode bool, lastErr error, errPick error) bool {
+	if lastErr == nil {
+		return false
 	}
-	executor, okExecutor := m.Executor(providerKey)
-	if !okExecutor {
-		return nil, nil, "", &Error{Code: "executor_not_found", Message: "executor not registered"}
+	if !homeMode {
+		return true
 	}
-	authCopy := selected.Clone()
-	if !selected.indexAssigned {
-		m.mu.Lock()
-		if current := m.auths[authCopy.ID]; current != nil && !current.indexAssigned {
-			current.EnsureIndex()
-			authCopy = current.Clone()
+	return isHomeRequestRetryExceededError(errPick)
+}
+
+type homeAuthDispatchResponse struct {
+	Model      string `json:"model"`
+	Provider   string `json:"provider"`
+	AuthIndex  string `json:"auth_index"`
+	UserAPIKey string `json:"user_api_key"`
+	Auth       Auth   `json:"auth"`
+}
+
+func setHomeUserAPIKeyOnGinContext(ctx context.Context, apiKey string) {
+	apiKey = strings.TrimSpace(apiKey)
+	if apiKey == "" || ctx == nil {
+		return
+	}
+	ginCtx, ok := ctx.Value("gin").(interface{ Set(string, any) })
+	if !ok || ginCtx == nil {
+		return
+	}
+	ginCtx.Set("userApiKey", apiKey)
+}
+
+func homeExecutionSessionIDFromMetadata(meta map[string]any) string {
+	if len(meta) == 0 {
+		return ""
+	}
+	raw, ok := meta[cliproxyexecutor.ExecutionSessionMetadataKey]
+	if !ok || raw == nil {
+		return ""
+	}
+	switch value := raw.(type) {
+	case string:
+		return strings.TrimSpace(value)
+	case []byte:
+		return strings.TrimSpace(string(value))
+	default:
+		return ""
+	}
+}
+
+func (m *Manager) clearHomeRuntimeAuths() {
+	if m == nil {
+		return
+	}
+	m.mu.Lock()
+	m.clearHomeRuntimeAuthsLocked()
+	m.mu.Unlock()
+}
+
+func (m *Manager) clearHomeRuntimeAuthsLocked() {
+	if m == nil {
+		return
+	}
+	m.homeRuntimeAuths = make(map[string]map[string]*Auth)
+}
+
+func (m *Manager) clearHomeRuntimeAuthsForSessionLocked(sessionID string) {
+	sessionID = strings.TrimSpace(sessionID)
+	if m == nil || sessionID == "" {
+		return
+	}
+	delete(m.homeRuntimeAuths, sessionID)
+}
+
+func (m *Manager) rememberHomeRuntimeAuth(sessionID string, auth *Auth) {
+	sessionID = strings.TrimSpace(sessionID)
+	authID := ""
+	if auth != nil {
+		authID = strings.TrimSpace(auth.ID)
+	}
+	if m == nil || auth == nil || sessionID == "" || authID == "" || !authWebsocketsEnabled(auth) {
+		return
+	}
+	m.mu.Lock()
+	if m.homeRuntimeAuths == nil {
+		m.homeRuntimeAuths = make(map[string]map[string]*Auth)
+	}
+	sessionAuths := m.homeRuntimeAuths[sessionID]
+	if sessionAuths == nil {
+		sessionAuths = make(map[string]*Auth)
+		m.homeRuntimeAuths[sessionID] = sessionAuths
+	}
+	sessionAuths[authID] = auth.Clone()
+	m.mu.Unlock()
+}
+
+func (m *Manager) homeRuntimeAuthByID(sessionID string, authID string) (*Auth, ProviderExecutor, string, bool) {
+	sessionID = strings.TrimSpace(sessionID)
+	authID = strings.TrimSpace(authID)
+	if m == nil || sessionID == "" || authID == "" {
+		return nil, nil, "", false
+	}
+	m.mu.RLock()
+	sessionAuths := m.homeRuntimeAuths[sessionID]
+	auth := sessionAuths[authID]
+	m.mu.RUnlock()
+	if auth == nil || !authWebsocketsEnabled(auth) {
+		return nil, nil, "", false
+	}
+	providerKey := strings.ToLower(strings.TrimSpace(auth.Provider))
+	if providerKey == "" {
+		return nil, nil, "", false
+	}
+	executor, ok := m.Executor(providerKey)
+	if !ok && auth.Attributes != nil && strings.TrimSpace(auth.Attributes["base_url"]) != "" {
+		executor, ok = m.Executor("openai-compatibility")
+		if ok {
+			providerKey = "openai-compatibility"
 		}
-		m.mu.Unlock()
+	}
+	if !ok {
+		return nil, nil, "", false
+	}
+	return auth.Clone(), executor, providerKey, true
+}
+
+func (m *Manager) pickNextViaHome(ctx context.Context, model string, opts cliproxyexecutor.Options, tried map[string]struct{}) (*Auth, ProviderExecutor, string, error) {
+	if m == nil {
+		return nil, nil, "", &Error{Code: "auth_not_found", Message: "no auth available"}
+	}
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	executionSessionID := homeExecutionSessionIDFromMetadata(opts.Metadata)
+	count := homeAuthCountFromMetadata(opts.Metadata)
+	if cliproxyexecutor.DownstreamWebsocket(ctx) && executionSessionID != "" && count <= 1 {
+		if pinnedAuthID := pinnedAuthIDFromMetadata(opts.Metadata); pinnedAuthID != "" {
+			_, alreadyTried := tried[pinnedAuthID]
+			if !alreadyTried {
+				if auth, executor, providerKey, ok := m.homeRuntimeAuthByID(executionSessionID, pinnedAuthID); ok {
+					return auth, executor, providerKey, nil
+				}
+			}
+		}
+	}
+
+	client := home.Current()
+	if client == nil ||
!client.HeartbeatOK() { + return nil, nil, "", &Error{Code: "home_unavailable", Message: "home control center unavailable", HTTPStatus: http.StatusServiceUnavailable} + } + + requestedModel := requestedModelFromMetadata(opts.Metadata, model) + sessionID := ExtractSessionID(opts.Headers, opts.OriginalRequest, opts.Metadata) + + raw, err := client.RPopAuth(ctx, requestedModel, sessionID, opts.Headers, count) + if err != nil { + return nil, nil, "", &Error{Code: "auth_not_found", Message: err.Error(), HTTPStatus: http.StatusServiceUnavailable} + } + + var env homeErrorEnvelope + if errUnmarshal := json.Unmarshal(raw, &env); errUnmarshal == nil && env.Error != nil { + code := strings.TrimSpace(env.Error.Type) + if code == "" { + code = strings.TrimSpace(env.Error.Code) + } + msg := strings.TrimSpace(env.Error.Message) + if msg == "" { + msg = "home returned error" + } + status := http.StatusBadGateway + switch strings.ToLower(code) { + case "model_not_found": + status = http.StatusNotFound + case "authentication_error", "unauthorized": + status = http.StatusUnauthorized + } + return nil, nil, "", &Error{Code: code, Message: msg, HTTPStatus: status} + } + + var dispatch homeAuthDispatchResponse + if errUnmarshal := json.Unmarshal(raw, &dispatch); errUnmarshal != nil { + return nil, nil, "", &Error{Code: "invalid_auth", Message: "home returned invalid auth payload", HTTPStatus: http.StatusBadGateway} + } + setHomeUserAPIKeyOnGinContext(ctx, dispatch.UserAPIKey) + auth := dispatch.Auth + if strings.TrimSpace(auth.ID) == "" { + // Backward compatibility: older home instances returned the auth directly. 
+ if errUnmarshal := json.Unmarshal(raw, &auth); errUnmarshal != nil { + return nil, nil, "", &Error{Code: "invalid_auth", Message: "home returned invalid auth payload", HTTPStatus: http.StatusBadGateway} + } + } + if upstreamModel := strings.TrimSpace(dispatch.Model); upstreamModel != "" { + if auth.Attributes == nil { + auth.Attributes = make(map[string]string, 1) + } + auth.Attributes[homeUpstreamModelAttributeKey] = upstreamModel + } + if strings.TrimSpace(auth.ID) == "" { + return nil, nil, "", &Error{Code: "invalid_auth", Message: "home returned auth without id", HTTPStatus: http.StatusBadGateway} + } + providerKey := strings.ToLower(strings.TrimSpace(auth.Provider)) + if providerKey == "" { + return nil, nil, "", &Error{Code: "invalid_auth", Message: "home returned auth without provider", HTTPStatus: http.StatusBadGateway} + } + + homeAuthIndex := strings.TrimSpace(dispatch.AuthIndex) + if homeAuthIndex != "" { + auth.Index = homeAuthIndex + auth.indexAssigned = true + } else { + auth.EnsureIndex() + } + + executor, ok := m.Executor(providerKey) + if !ok && auth.Attributes != nil && strings.TrimSpace(auth.Attributes["base_url"]) != "" { + executor, ok = m.Executor("openai-compatibility") + if ok { + providerKey = "openai-compatibility" + } + } + if !ok { + return nil, nil, "", &Error{Code: "executor_not_found", Message: "executor not registered", HTTPStatus: http.StatusBadGateway} + } + + authCopy := auth.Clone() + if cliproxyexecutor.DownstreamWebsocket(ctx) && executionSessionID != "" && authWebsocketsEnabled(authCopy) { + m.rememberHomeRuntimeAuth(executionSessionID, authCopy) } return authCopy, executor, providerKey, nil } +func requestedModelFromMetadata(metadata map[string]any, fallback string) string { + if metadata != nil { + if v, ok := metadata[cliproxyexecutor.RequestedModelMetadataKey]; ok { + switch typed := v.(type) { + case string: + if trimmed := strings.TrimSpace(typed); trimmed != "" { + return trimmed + } + case []byte: + if trimmed := 
strings.TrimSpace(string(typed)); trimmed != "" { + return trimmed + } + } + } + } + fallback = strings.TrimSpace(fallback) + if fallback == "" { + return "unknown" + } + return fallback +} + +func (m *Manager) findAllAntigravityCreditsCandidateAuths(routeModel string, opts cliproxyexecutor.Options) []creditsCandidateEntry { + if m == nil { + return nil + } + pinnedAuthID := pinnedAuthIDFromMetadata(opts.Metadata) + m.mu.RLock() + defer m.mu.RUnlock() + var known []creditsCandidateEntry + var unknown []creditsCandidateEntry + for _, auth := range m.auths { + if auth == nil || auth.Disabled || auth.Status == StatusDisabled { + continue + } + if pinnedAuthID != "" && auth.ID != pinnedAuthID { + continue + } + if !strings.EqualFold(strings.TrimSpace(auth.Provider), "antigravity") { + continue + } + if !strings.Contains(strings.ToLower(strings.TrimSpace(routeModel)), "claude") { + continue + } + providerKey := strings.TrimSpace(strings.ToLower(auth.Provider)) + executor, ok := m.executors[providerKey] + if !ok { + continue + } + + hint, okHint := GetAntigravityCreditsHint(auth.ID) + if okHint && hint.Known { + if !hint.Available { + continue + } + known = append(known, creditsCandidateEntry{ + auth: auth.Clone(), + executor: executor, + provider: providerKey, + }) + continue + } + unknown = append(unknown, creditsCandidateEntry{ + auth: auth.Clone(), + executor: executor, + provider: providerKey, + }) + } + sort.Slice(known, func(i, j int) bool { + return known[i].auth.ID < known[j].auth.ID + }) + sort.Slice(unknown, func(i, j int) bool { + return unknown[i].auth.ID < unknown[j].auth.ID + }) + return append(known, unknown...) 
+} + +type creditsCandidateEntry struct { + auth *Auth + executor ProviderExecutor + provider string +} + +func shouldAttemptAntigravityCreditsFallback(m *Manager, lastErr error, providers []string) bool { + status := statusCodeFromError(lastErr) + log.WithFields(log.Fields{ + "lastErr": errorString(lastErr), + "status": status, + "providers": providers, + }).Debug("shouldAttemptAntigravityCreditsFallback") + if m == nil || lastErr == nil { + return false + } + if len(providers) > 0 { + hasAntigravity := false + for _, p := range providers { + if strings.EqualFold(strings.TrimSpace(p), "antigravity") { + hasAntigravity = true + break + } + } + if !hasAntigravity { + return false + } + } + cfg, _ := m.runtimeConfig.Load().(*internalconfig.Config) + if cfg == nil || !cfg.QuotaExceeded.AntigravityCredits { + return false + } + switch status { + case http.StatusTooManyRequests, http.StatusServiceUnavailable: + return true + case 0: + var authErr *Error + if errors.As(lastErr, &authErr) && authErr != nil { + return authErr.Code == "auth_not_found" || authErr.Code == "auth_unavailable" || authErr.Code == "model_cooldown" + } + var cooldownErr *modelCooldownError + if errors.As(lastErr, &cooldownErr) { + return true + } + return false + default: + return false + } +} + +func (m *Manager) tryAntigravityCreditsExecute(ctx context.Context, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, bool) { + routeModel := req.Model + candidates := m.findAllAntigravityCreditsCandidateAuths(routeModel, opts) + for _, c := range candidates { + if ctx.Err() != nil { + return cliproxyexecutor.Response{}, false + } + creditsCtx := WithAntigravityCredits(ctx) + if rt := m.roundTripperFor(c.auth); rt != nil { + creditsCtx = context.WithValue(creditsCtx, roundTripperContextKey{}, rt) + creditsCtx = context.WithValue(creditsCtx, "cliproxy.roundtripper", rt) + } + creditsOpts := ensureRequestedModelMetadata(opts, routeModel) + creditsCtx = 
contextWithRequestedModelAlias(creditsCtx, creditsOpts, routeModel) + publishSelectedAuthMetadata(creditsOpts.Metadata, c.auth.ID) + models := m.executionModelCandidates(c.auth, routeModel) + if len(models) == 0 { + continue + } + for _, upstreamModel := range models { + resultModel := m.stateModelForExecution(c.auth, routeModel, upstreamModel, len(models) > 1) + execReq := req + execReq.Model = upstreamModel + resp, errExec := c.executor.Execute(creditsCtx, c.auth, execReq, creditsOpts) + result := Result{AuthID: c.auth.ID, Provider: c.provider, Model: resultModel, Success: errExec == nil} + if errExec != nil { + result.Error = &Error{Message: errExec.Error()} + if se, ok := errors.AsType[cliproxyexecutor.StatusError](errExec); ok && se != nil { + result.Error.HTTPStatus = se.StatusCode() + } + if ra := retryAfterFromError(errExec); ra != nil { + result.RetryAfter = ra + } + m.MarkResult(creditsCtx, result) + continue + } + m.MarkResult(creditsCtx, result) + return resp, true + } + } + return cliproxyexecutor.Response{}, false +} + +func (m *Manager) tryAntigravityCreditsExecuteStream(ctx context.Context, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (*cliproxyexecutor.StreamResult, bool) { + routeModel := req.Model + candidates := m.findAllAntigravityCreditsCandidateAuths(routeModel, opts) + for _, c := range candidates { + if ctx.Err() != nil { + return nil, false + } + creditsCtx := WithAntigravityCredits(ctx) + if rt := m.roundTripperFor(c.auth); rt != nil { + creditsCtx = context.WithValue(creditsCtx, roundTripperContextKey{}, rt) + creditsCtx = context.WithValue(creditsCtx, "cliproxy.roundtripper", rt) + } + creditsOpts := ensureRequestedModelMetadata(opts, routeModel) + publishSelectedAuthMetadata(creditsOpts.Metadata, c.auth.ID) + models := m.executionModelCandidates(c.auth, routeModel) + if len(models) == 0 { + continue + } + result, errStream := m.executeStreamWithModelPool(creditsCtx, c.executor, c.auth, c.provider, req, creditsOpts, 
routeModel, models, len(models) > 1) + if errStream != nil { + continue + } + return result, true + } + return nil, false +} + func (m *Manager) persist(ctx context.Context, auth *Auth) error { if m.store == nil || auth == nil { return nil @@ -2965,7 +3677,7 @@ func (m *Manager) queueRefreshReschedule(authID string) { } func (m *Manager) shouldRefresh(a *Auth, now time.Time) bool { - if a == nil || a.Disabled { + if a == nil { return false } if !a.NextRefreshAfter.IsZero() && now.Before(a.NextRefreshAfter) { @@ -3172,7 +3884,7 @@ func lookupMetadataTime(meta map[string]any, keys ...string) (time.Time, bool) { func (m *Manager) markRefreshPending(id string, now time.Time) bool { m.mu.Lock() auth, ok := m.auths[id] - if !ok || auth == nil || auth.Disabled { + if !ok || auth == nil { m.mu.Unlock() return false } @@ -3195,14 +3907,15 @@ func (m *Manager) refreshAuth(ctx context.Context, id string) { m.mu.RLock() auth := m.auths[id] var exec ProviderExecutor + var cloned *Auth if auth != nil { exec = m.executors[auth.Provider] + cloned = auth.Clone() } m.mu.RUnlock() if auth == nil || exec == nil { return } - cloned := auth.Clone() updated, err := exec.Refresh(ctx, cloned) if err != nil && errors.Is(err, context.Canceled) { log.Debugf("refresh canceled for %s, %s", auth.Provider, auth.ID) @@ -3240,6 +3953,9 @@ func (m *Manager) refreshAuth(ctx context.Context, id string) { updated.NextRefreshAfter = time.Time{} updated.LastError = nil updated.UpdatedAt = now + if m.shouldRefresh(updated, now) { + updated.NextRefreshAfter = now.Add(refreshIneffectiveBackoff) + } _, _ = m.Update(ctx, updated) } diff --git a/sdk/cliproxy/auth/conductor_credits_candidates_test.go b/sdk/cliproxy/auth/conductor_credits_candidates_test.go new file mode 100644 index 0000000000..f9487b0b9b --- /dev/null +++ b/sdk/cliproxy/auth/conductor_credits_candidates_test.go @@ -0,0 +1,61 @@ +package auth + +import ( + "testing" + "time" + + cliproxyexecutor 
"github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" +) + +func TestFindAllAntigravityCreditsCandidateAuths_PrefersKnownCreditsThenUnknown(t *testing.T) { + m := &Manager{ + auths: map[string]*Auth{ + "zz-credits": {ID: "zz-credits", Provider: "antigravity"}, + "aa-unknown": {ID: "aa-unknown", Provider: "antigravity"}, + "mm-no": {ID: "mm-no", Provider: "antigravity"}, + }, + executors: map[string]ProviderExecutor{ + "antigravity": schedulerTestExecutor{}, + }, + } + + SetAntigravityCreditsHint("zz-credits", AntigravityCreditsHint{ + Known: true, + Available: true, + UpdatedAt: time.Now(), + }) + SetAntigravityCreditsHint("mm-no", AntigravityCreditsHint{ + Known: true, + Available: false, + UpdatedAt: time.Now(), + }) + + opts := cliproxyexecutor.Options{} + + candidates := m.findAllAntigravityCreditsCandidateAuths("claude-sonnet-4-6", opts) + if len(candidates) != 2 { + t.Fatalf("candidates len = %d, want 2", len(candidates)) + } + if candidates[0].auth.ID != "zz-credits" { + t.Fatalf("candidates[0].auth.ID = %q, want %q", candidates[0].auth.ID, "zz-credits") + } + if candidates[1].auth.ID != "aa-unknown" { + t.Fatalf("candidates[1].auth.ID = %q, want %q", candidates[1].auth.ID, "aa-unknown") + } + + nonClaude := m.findAllAntigravityCreditsCandidateAuths("gemini-3-flash", opts) + if len(nonClaude) != 0 { + t.Fatalf("nonClaude len = %d, want 0", len(nonClaude)) + } + + pinnedOpts := cliproxyexecutor.Options{ + Metadata: map[string]any{cliproxyexecutor.PinnedAuthMetadataKey: "aa-unknown"}, + } + pinned := m.findAllAntigravityCreditsCandidateAuths("claude-sonnet-4-6", pinnedOpts) + if len(pinned) != 1 { + t.Fatalf("pinned len = %d, want 1", len(pinned)) + } + if pinned[0].auth.ID != "aa-unknown" { + t.Fatalf("pinned[0].auth.ID = %q, want %q", pinned[0].auth.ID, "aa-unknown") + } +} diff --git a/sdk/cliproxy/auth/conductor_executor_replace_test.go b/sdk/cliproxy/auth/conductor_executor_replace_test.go index 2ee91a87c1..99ecf466a6 100644 --- 
a/sdk/cliproxy/auth/conductor_executor_replace_test.go +++ b/sdk/cliproxy/auth/conductor_executor_replace_test.go @@ -6,7 +6,7 @@ import ( "sync" "testing" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) type replaceAwareExecutor struct { diff --git a/sdk/cliproxy/auth/conductor_oauth_alias_suspension_test.go b/sdk/cliproxy/auth/conductor_oauth_alias_suspension_test.go index 8bc779e53d..ba8371dc61 100644 --- a/sdk/cliproxy/auth/conductor_oauth_alias_suspension_test.go +++ b/sdk/cliproxy/auth/conductor_oauth_alias_suspension_test.go @@ -7,23 +7,26 @@ import ( "testing" "time" - internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + coreusage "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" ) type aliasRoutingExecutor struct { id string - mu sync.Mutex - executeModels []string + mu sync.Mutex + executeModels []string + executeAliases []string } func (e *aliasRoutingExecutor) Identifier() string { return e.id } -func (e *aliasRoutingExecutor) Execute(_ context.Context, _ *Auth, req cliproxyexecutor.Request, _ cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { +func (e *aliasRoutingExecutor) Execute(ctx context.Context, _ *Auth, req cliproxyexecutor.Request, _ cliproxyexecutor.Options) (cliproxyexecutor.Response, error) { e.mu.Lock() e.executeModels = append(e.executeModels, req.Model) + e.executeAliases = append(e.executeAliases, coreusage.RequestedModelAliasFromContext(ctx)) e.mu.Unlock() return cliproxyexecutor.Response{Payload: 
[]byte(req.Model)}, nil } @@ -52,6 +55,14 @@ func (e *aliasRoutingExecutor) ExecuteModels() []string { return out } +func (e *aliasRoutingExecutor) ExecuteAliases() []string { + e.mu.Lock() + defer e.mu.Unlock() + out := make([]string, len(e.executeAliases)) + copy(out, e.executeAliases) + return out +} + func TestManagerExecute_OAuthAliasBypassesBlockedRouteModel(t *testing.T) { const ( provider = "antigravity" @@ -108,4 +119,12 @@ func TestManagerExecute_OAuthAliasBypassesBlockedRouteModel(t *testing.T) { if gotModels[0] != targetModel { t.Fatalf("execute model = %q, want %q", gotModels[0], targetModel) } + + gotAliases := executor.ExecuteAliases() + if len(gotAliases) != 1 { + t.Fatalf("execute aliases len = %d, want 1", len(gotAliases)) + } + if gotAliases[0] != routeModel { + t.Fatalf("execute alias = %q, want %q", gotAliases[0], routeModel) + } } diff --git a/sdk/cliproxy/auth/conductor_overrides_test.go b/sdk/cliproxy/auth/conductor_overrides_test.go index f74621bec7..017602e362 100644 --- a/sdk/cliproxy/auth/conductor_overrides_test.go +++ b/sdk/cliproxy/auth/conductor_overrides_test.go @@ -8,9 +8,9 @@ import ( "time" "github.com/google/uuid" - internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) const requestScopedNotFoundMessage = "Item with id 'rs_0b5f3eb6f51f175c0169ca74e4a85881998539920821603a74' not found. Items are not persisted when `store` is set to false. Try again with `store` set to true, or remove this item from your input." 
diff --git a/sdk/cliproxy/auth/conductor_recent_requests_test.go b/sdk/cliproxy/auth/conductor_recent_requests_test.go new file mode 100644 index 0000000000..d2003b7ccb --- /dev/null +++ b/sdk/cliproxy/auth/conductor_recent_requests_test.go @@ -0,0 +1,95 @@ +package auth + +import ( + "context" + "testing" + "time" +) + +func TestManagerMarkResultRecordsRecentRequests(t *testing.T) { + mgr := NewManager(nil, nil, nil) + auth := &Auth{ + ID: "auth-1", + Provider: "antigravity", + Attributes: map[string]string{ + "runtime_only": "true", + }, + Metadata: map[string]any{ + "type": "antigravity", + }, + } + + if _, err := mgr.Register(WithSkipPersist(context.Background()), auth); err != nil { + t.Fatalf("Register returned error: %v", err) + } + + mgr.MarkResult(context.Background(), Result{AuthID: "auth-1", Provider: "antigravity", Model: "gpt-5", Success: true}) + mgr.MarkResult(context.Background(), Result{AuthID: "auth-1", Provider: "antigravity", Model: "gpt-5", Success: false}) + + gotAuth, ok := mgr.GetByID("auth-1") + if !ok || gotAuth == nil { + t.Fatalf("GetByID returned ok=%v auth=%v", ok, gotAuth) + } + + if gotAuth.Success != 1 || gotAuth.Failed != 1 { + t.Fatalf("auth totals = success=%d failed=%d, want 1/1", gotAuth.Success, gotAuth.Failed) + } + + snapshot := gotAuth.RecentRequestsSnapshot(time.Now()) + var successTotal int64 + var failedTotal int64 + for _, bucket := range snapshot { + successTotal += bucket.Success + failedTotal += bucket.Failed + } + if successTotal != 1 || failedTotal != 1 { + t.Fatalf("totals = success=%d failed=%d, want 1/1", successTotal, failedTotal) + } +} + +func TestManagerUpdatePreservesRecentRequestsAndTotals(t *testing.T) { + mgr := NewManager(nil, nil, nil) + auth := &Auth{ + ID: "auth-1", + Provider: "antigravity", + Metadata: map[string]any{ + "type": "antigravity", + }, + } + if _, err := mgr.Register(WithSkipPersist(context.Background()), auth); err != nil { + t.Fatalf("Register returned error: %v", err) + } + + 
mgr.MarkResult(context.Background(), Result{AuthID: "auth-1", Provider: "antigravity", Model: "gpt-5", Success: true}) + + updated := &Auth{ + ID: "auth-1", + Provider: "antigravity", + Metadata: map[string]any{ + "type": "antigravity", + "note": "updated", + }, + } + if _, err := mgr.Update(WithSkipPersist(context.Background()), updated); err != nil { + t.Fatalf("Update returned error: %v", err) + } + + gotAuth, ok := mgr.GetByID("auth-1") + if !ok || gotAuth == nil { + t.Fatalf("GetByID returned ok=%v auth=%v", ok, gotAuth) + } + if gotAuth.Success != 1 || gotAuth.Failed != 0 { + t.Fatalf("auth totals = success=%d failed=%d, want 1/0", gotAuth.Success, gotAuth.Failed) + } + + snapshot := gotAuth.RecentRequestsSnapshot(time.Now()) + var successTotal int64 + var failedTotal int64 + for _, bucket := range snapshot { + successTotal += bucket.Success + failedTotal += bucket.Failed + } + if successTotal != 1 || failedTotal != 0 { + t.Fatalf("bucket totals = success=%d failed=%d, want 1/0", successTotal, failedTotal) + } +} diff --git a/sdk/cliproxy/auth/conductor_scheduler_refresh_test.go b/sdk/cliproxy/auth/conductor_scheduler_refresh_test.go index 5c6eff7805..508cdfd137 100644 --- a/sdk/cliproxy/auth/conductor_scheduler_refresh_test.go +++ b/sdk/cliproxy/auth/conductor_scheduler_refresh_test.go @@ -6,8 +6,8 @@ import ( "net/http" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) type schedulerProviderTestExecutor struct { diff --git a/sdk/cliproxy/auth/home_websocket_reuse_test.go b/sdk/cliproxy/auth/home_websocket_reuse_test.go new file mode 100644 index 0000000000..28d4800429 --- /dev/null +++ b/sdk/cliproxy/auth/home_websocket_reuse_test.go @@ -0,0 +1,270 @@ +package auth + +import ( + "context" + "errors" + 
"net/http" + "testing" + + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" +) + +func TestPickNextViaHomeReusesPinnedWebsocketAuthWithoutHomeDispatch(t *testing.T) { + manager := NewManager(nil, nil, nil) + manager.SetConfig(&internalconfig.Config{Home: internalconfig.HomeConfig{Enabled: true}}) + manager.RegisterExecutor(schedulerTestExecutor{}) + + auth := &Auth{ + ID: "home-auth-1", + Provider: "test", + Status: StatusActive, + Attributes: map[string]string{ + "websockets": "true", + homeUpstreamModelAttributeKey: "upstream-model", + }, + Metadata: map[string]any{"email": "home@example.com"}, + } + auth.EnsureIndex() + manager.rememberHomeRuntimeAuth("session-1", auth) + cachedAuth, ok := manager.GetExecutionSessionAuthByID("session-1", "home-auth-1") + if !ok || cachedAuth == nil || !authWebsocketsEnabled(cachedAuth) { + t.Fatalf("GetExecutionSessionAuthByID() did not expose remembered websocket home auth: auth=%#v ok=%v", cachedAuth, ok) + } + + ctx := cliproxyexecutor.WithDownstreamWebsocket(context.Background()) + opts := cliproxyexecutor.Options{ + Metadata: map[string]any{ + cliproxyexecutor.ExecutionSessionMetadataKey: "session-1", + cliproxyexecutor.PinnedAuthMetadataKey: "home-auth-1", + }, + Headers: http.Header{"Authorization": {"Bearer client-key"}}, + } + + got, executor, provider, errPick := manager.pickNextViaHome(ctx, "gpt-5.4", opts, nil) + if errPick != nil { + t.Fatalf("pickNextViaHome() error = %v", errPick) + } + if got == nil || got.ID != "home-auth-1" { + t.Fatalf("pickNextViaHome() auth = %#v, want home-auth-1", got) + } + if executor == nil { + t.Fatal("pickNextViaHome() executor is nil") + } + if provider != "test" { + t.Fatalf("pickNextViaHome() provider = %q, want test", provider) + } +} + +func TestPickNextViaHomeKeepsSameAuthIDPayloadSessionScoped(t *testing.T) { + manager := NewManager(nil, nil, nil) + 
manager.SetConfig(&internalconfig.Config{Home: internalconfig.HomeConfig{Enabled: true}}) + manager.RegisterExecutor(schedulerTestExecutor{}) + + manager.rememberHomeRuntimeAuth("session-1", &Auth{ + ID: "home-auth-1", + Provider: "test", + Status: StatusActive, + Attributes: map[string]string{ + "websockets": "true", + homeUpstreamModelAttributeKey: "upstream-model-a", + }, + }) + manager.rememberHomeRuntimeAuth("session-2", &Auth{ + ID: "home-auth-1", + Provider: "test", + Status: StatusActive, + Attributes: map[string]string{ + "websockets": "true", + homeUpstreamModelAttributeKey: "upstream-model-b", + }, + }) + + ctx := cliproxyexecutor.WithDownstreamWebsocket(context.Background()) + optsSession1 := cliproxyexecutor.Options{ + Metadata: map[string]any{ + cliproxyexecutor.ExecutionSessionMetadataKey: "session-1", + cliproxyexecutor.PinnedAuthMetadataKey: "home-auth-1", + }, + } + optsSession2 := cliproxyexecutor.Options{ + Metadata: map[string]any{ + cliproxyexecutor.ExecutionSessionMetadataKey: "session-2", + cliproxyexecutor.PinnedAuthMetadataKey: "home-auth-1", + }, + } + + gotSession1, _, _, errSession1 := manager.pickNextViaHome(ctx, "gpt-5.4", optsSession1, nil) + if errSession1 != nil { + t.Fatalf("pickNextViaHome(session-1) error = %v", errSession1) + } + if got := gotSession1.Attributes[homeUpstreamModelAttributeKey]; got != "upstream-model-a" { + t.Fatalf("pickNextViaHome(session-1) upstream model = %q, want upstream-model-a", got) + } + + gotSession2, _, _, errSession2 := manager.pickNextViaHome(ctx, "gpt-5.4", optsSession2, nil) + if errSession2 != nil { + t.Fatalf("pickNextViaHome(session-2) error = %v", errSession2) + } + if got := gotSession2.Attributes[homeUpstreamModelAttributeKey]; got != "upstream-model-b" { + t.Fatalf("pickNextViaHome(session-2) upstream model = %q, want upstream-model-b", got) + } +} + +func TestPickNextViaHomeDoesNotReuseTriedPinnedWebsocketAuth(t *testing.T) { + manager := NewManager(nil, nil, nil) + 
manager.SetConfig(&internalconfig.Config{Home: internalconfig.HomeConfig{Enabled: true}}) + manager.RegisterExecutor(schedulerTestExecutor{}) + + auth := &Auth{ + ID: "home-auth-1", + Provider: "test", + Status: StatusActive, + Attributes: map[string]string{ + "websockets": "true", + }, + } + manager.rememberHomeRuntimeAuth("session-1", auth) + + ctx := cliproxyexecutor.WithDownstreamWebsocket(context.Background()) + opts := cliproxyexecutor.Options{ + Metadata: map[string]any{ + cliproxyexecutor.ExecutionSessionMetadataKey: "session-1", + cliproxyexecutor.PinnedAuthMetadataKey: "home-auth-1", + }, + } + tried := map[string]struct{}{"home-auth-1": {}} + + got, executor, provider, errPick := manager.pickNextViaHome(ctx, "gpt-5.4", opts, tried) + if errPick == nil { + t.Fatal("pickNextViaHome() error is nil, want home unavailable error") + } + var authErr *Error + if !errors.As(errPick, &authErr) || authErr.Code != "home_unavailable" { + t.Fatalf("pickNextViaHome() error = %v, want home_unavailable", errPick) + } + if got != nil || executor != nil || provider != "" { + t.Fatalf("pickNextViaHome() reused tried auth: auth=%#v executor=%#v provider=%q", got, executor, provider) + } +} + +func TestPickNextViaHomeDoesNotReusePinnedWebsocketAuthAfterFirstHomeAttempt(t *testing.T) { + manager := NewManager(nil, nil, nil) + manager.SetConfig(&internalconfig.Config{Home: internalconfig.HomeConfig{Enabled: true}}) + manager.RegisterExecutor(schedulerTestExecutor{}) + + auth := &Auth{ + ID: "home-auth-1", + Provider: "test", + Status: StatusActive, + Attributes: map[string]string{ + "websockets": "true", + }, + } + manager.rememberHomeRuntimeAuth("session-1", auth) + + ctx := cliproxyexecutor.WithDownstreamWebsocket(context.Background()) + opts := withHomeAuthCount(cliproxyexecutor.Options{ + Metadata: map[string]any{ + cliproxyexecutor.ExecutionSessionMetadataKey: "session-1", + cliproxyexecutor.PinnedAuthMetadataKey: "home-auth-1", + }, + }, 2) + + got, executor, provider, 
errPick := manager.pickNextViaHome(ctx, "gpt-5.4", opts, nil) + if errPick == nil { + t.Fatal("pickNextViaHome() error is nil, want home unavailable error") + } + var authErr *Error + if !errors.As(errPick, &authErr) || authErr.Code != "home_unavailable" { + t.Fatalf("pickNextViaHome() error = %v, want home_unavailable", errPick) + } + if got != nil || executor != nil || provider != "" { + t.Fatalf("pickNextViaHome() reused auth after first home attempt: auth=%#v executor=%#v provider=%q", got, executor, provider) + } +} + +func TestPickNextViaHomeDoesNotReusePinnedNonWebsocketAuth(t *testing.T) { + manager := NewManager(nil, nil, nil) + manager.SetConfig(&internalconfig.Config{Home: internalconfig.HomeConfig{Enabled: true}}) + manager.RegisterExecutor(schedulerTestExecutor{}) + + manager.mu.Lock() + manager.homeRuntimeAuths["session-1"] = map[string]*Auth{ + "home-auth-1": &Auth{ + ID: "home-auth-1", + Provider: "test", + Status: StatusActive, + }, + } + manager.mu.Unlock() + + ctx := cliproxyexecutor.WithDownstreamWebsocket(context.Background()) + opts := cliproxyexecutor.Options{ + Metadata: map[string]any{ + cliproxyexecutor.ExecutionSessionMetadataKey: "session-1", + cliproxyexecutor.PinnedAuthMetadataKey: "home-auth-1", + }, + Headers: http.Header{"Authorization": {"Bearer client-key"}}, + } + + got, executor, provider, errPick := manager.pickNextViaHome(ctx, "gpt-5.4", opts, nil) + if errPick == nil { + t.Fatal("pickNextViaHome() error is nil, want home unavailable error") + } + var authErr *Error + if !errors.As(errPick, &authErr) || authErr.Code != "home_unavailable" { + t.Fatalf("pickNextViaHome() error = %v, want home_unavailable", errPick) + } + if got != nil || executor != nil || provider != "" { + t.Fatalf("pickNextViaHome() reused non-websocket auth: auth=%#v executor=%#v provider=%q", got, executor, provider) + } +} + +func TestHomeRuntimeAuthsClearWhenHomeDisabled(t *testing.T) { + manager := NewManager(nil, nil, nil) + 
manager.SetConfig(&internalconfig.Config{Home: internalconfig.HomeConfig{Enabled: true}}) + manager.rememberHomeRuntimeAuth("session-1", &Auth{ + ID: "home-auth-1", + Provider: "test", + Attributes: map[string]string{ + "websockets": "true", + }, + }) + + if _, ok := manager.GetExecutionSessionAuthByID("session-1", "home-auth-1"); !ok { + t.Fatal("expected remembered home auth before disabling home") + } + + manager.SetConfig(&internalconfig.Config{}) + if _, ok := manager.GetExecutionSessionAuthByID("session-1", "home-auth-1"); ok { + t.Fatal("remembered home auth was not cleared when home was disabled") + } +} + +func TestCloseExecutionSessionClearsHomeRuntimeAuthForSession(t *testing.T) { + manager := NewManager(nil, nil, nil) + auth := &Auth{ + ID: "home-auth-1", + Provider: "test", + Attributes: map[string]string{ + "websockets": "true", + }, + } + + manager.rememberHomeRuntimeAuth("session-1", auth) + manager.rememberHomeRuntimeAuth("session-2", auth) + + manager.CloseExecutionSession("session-1") + if _, ok := manager.GetExecutionSessionAuthByID("session-1", "home-auth-1"); ok { + t.Fatal("home auth for closed session was not cleared") + } + if _, ok := manager.GetExecutionSessionAuthByID("session-2", "home-auth-1"); !ok { + t.Fatal("home auth for another session was cleared") + } + + manager.CloseExecutionSession("session-2") + if _, ok := manager.GetExecutionSessionAuthByID("session-2", "home-auth-1"); ok { + t.Fatal("home auth was not cleared when its last session closed") + } +} diff --git a/sdk/cliproxy/auth/oauth_model_alias.go b/sdk/cliproxy/auth/oauth_model_alias.go index 46c82a9c53..57aed774af 100644 --- a/sdk/cliproxy/auth/oauth_model_alias.go +++ b/sdk/cliproxy/auth/oauth_model_alias.go @@ -3,8 +3,8 @@ package auth import ( "strings" - internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" ) type modelAliasEntry interface { @@ -289,7 +289,7 @@ func OAuthModelAliasChannel(provider, authKind string) string { return "" } return "codex" - case "gemini-cli", "aistudio", "antigravity", "kimi": + case "gemini-cli", "aistudio", "antigravity", "kimi", "kiro", "github-copilot", "gitlab", "cursor", "codebuddy", "codebuddy-ai", "codearts", "joycode", "qoder", "kilo", "bt", "iflow": return provider default: return "" diff --git a/sdk/cliproxy/auth/oauth_model_alias_test.go b/sdk/cliproxy/auth/oauth_model_alias_test.go index 73ddbe675d..521e158e55 100644 --- a/sdk/cliproxy/auth/oauth_model_alias_test.go +++ b/sdk/cliproxy/auth/oauth_model_alias_test.go @@ -3,7 +3,7 @@ package auth import ( "testing" - internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func TestResolveOAuthUpstreamModel_SuffixPreservation(t *testing.T) { diff --git a/sdk/cliproxy/auth/openai_compat_pool_test.go b/sdk/cliproxy/auth/openai_compat_pool_test.go index ff2c4dd040..f052c486f4 100644 --- a/sdk/cliproxy/auth/openai_compat_pool_test.go +++ b/sdk/cliproxy/auth/openai_compat_pool_test.go @@ -7,9 +7,9 @@ import ( "sync" "testing" - internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) type openAICompatPoolExecutor struct { diff --git a/sdk/cliproxy/auth/scheduler.go b/sdk/cliproxy/auth/scheduler.go index b5a3928286..9947f59c63 100644 --- a/sdk/cliproxy/auth/scheduler.go +++ b/sdk/cliproxy/auth/scheduler.go @@ -7,8 +7,8 @@ import ( "sync" "time" - 
"github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) // schedulerStrategy identifies which built-in routing semantics the scheduler should apply. diff --git a/sdk/cliproxy/auth/scheduler_benchmark_test.go b/sdk/cliproxy/auth/scheduler_benchmark_test.go index 050a7cbd1e..4d160276f2 100644 --- a/sdk/cliproxy/auth/scheduler_benchmark_test.go +++ b/sdk/cliproxy/auth/scheduler_benchmark_test.go @@ -6,8 +6,8 @@ import ( "net/http" "testing" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) type schedulerBenchmarkExecutor struct { diff --git a/sdk/cliproxy/auth/scheduler_test.go b/sdk/cliproxy/auth/scheduler_test.go index d744ec32d0..864fa938e9 100644 --- a/sdk/cliproxy/auth/scheduler_test.go +++ b/sdk/cliproxy/auth/scheduler_test.go @@ -6,8 +6,8 @@ import ( "testing" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) type schedulerTestExecutor struct{} @@ -333,6 +333,39 @@ func TestManager_PickNextMixed_UsesWeightedProviderRotationBeforeCredentialRotat } } +func TestManager_PickNextMixed_DisallowFreeAuthSkipsCodexFreePlan(t *testing.T) { + t.Parallel() + + model := "gpt-5.4-mini" + registerSchedulerModels(t, "codex", model, "codex-a-free", "codex-b-plus") + + manager := NewManager(nil, &RoundRobinSelector{}, nil) + manager.executors["codex"] = 
schedulerTestExecutor{} + if _, errRegister := manager.Register(context.Background(), &Auth{ID: "codex-a-free", Provider: "codex", Attributes: map[string]string{"plan_type": "free"}}); errRegister != nil { + t.Fatalf("Register(codex-a-free) error = %v", errRegister) + } + if _, errRegister := manager.Register(context.Background(), &Auth{ID: "codex-b-plus", Provider: "codex", Attributes: map[string]string{"plan_type": "plus"}}); errRegister != nil { + t.Fatalf("Register(codex-b-plus) error = %v", errRegister) + } + + opts := cliproxyexecutor.Options{ + Metadata: map[string]any{cliproxyexecutor.DisallowFreeAuthMetadataKey: true}, + } + got, _, provider, errPick := manager.pickNextMixed(context.Background(), []string{"codex"}, model, opts, map[string]struct{}{}) + if errPick != nil { + t.Fatalf("pickNextMixed() error = %v", errPick) + } + if got == nil { + t.Fatalf("pickNextMixed() auth = nil") + } + if provider != "codex" { + t.Fatalf("pickNextMixed() provider = %q, want %q", provider, "codex") + } + if got.ID != "codex-b-plus" { + t.Fatalf("pickNextMixed() auth.ID = %q, want %q", got.ID, "codex-b-plus") + } +} + func TestManagerCustomSelector_FallsBackToLegacyPath(t *testing.T) { t.Parallel() diff --git a/sdk/cliproxy/auth/selector.go b/sdk/cliproxy/auth/selector.go index 51275a3115..5e23c46f55 100644 --- a/sdk/cliproxy/auth/selector.go +++ b/sdk/cliproxy/auth/selector.go @@ -18,9 +18,9 @@ import ( log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" - "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" + "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) // RoundRobinSelector provides a simple provider scoped round-robin selection strategy. 
@@ -469,11 +469,14 @@ func NewSessionAffinitySelectorWithConfig(cfg SessionAffinityConfig) *SessionAff // Pick selects an auth with session affinity when possible. // Priority for session ID extraction: -// 1. metadata.user_id (Claude Code format) - highest priority +// 1. metadata.user_id (Claude Code format with _session_{uuid}) - highest priority // 2. X-Session-ID header -// 3. metadata.user_id (non-Claude Code format) -// 4. conversation_id field -// 5. Hash-based fallback from messages +// 3. Session_id header (Codex) +// 4. X-Amp-Thread-Id header (Amp CLI thread ID) +// 5. X-Client-Request-Id header (PI) +// 6. metadata.user_id (non-Claude Code format) +// 7. conversation_id field in request body +// 8. Stable hash from first few messages content (fallback) // // Note: The cache key includes provider, session ID, and model to handle cases where // a session uses multiple models (e.g., gemini-2.5-pro and gemini-3-flash-preview) @@ -570,9 +573,12 @@ func (s *SessionAffinitySelector) InvalidateAuth(authID string) { // Priority order: // 1. metadata.user_id (Claude Code format with _session_{uuid}) - highest priority for Claude Code clients // 2. X-Session-ID header -// 3. metadata.user_id (non-Claude Code format) -// 4. conversation_id field in request body -// 5. Stable hash from first few messages content (fallback) +// 3. Session_id header (Codex) +// 4. X-Amp-Thread-Id header (Amp CLI thread ID) +// 5. X-Client-Request-Id header (PI) +// 6. metadata.user_id (non-Claude Code format) +// 7. conversation_id field in request body +// 8. Stable hash from first few messages content (fallback) func ExtractSessionID(headers http.Header, payload []byte, metadata map[string]any) string { primary, _ := extractSessionIDs(headers, payload, metadata) return primary @@ -608,22 +614,43 @@ func extractSessionIDs(headers http.Header, payload []byte, metadata map[string] } } + // 3. 
Session_id header (Codex) + if headers != nil { + if sid := headers.Get("Session_id"); sid != "" { + return "codex:" + sid, "" + } + } + + // 4. X-Amp-Thread-Id header (Amp CLI thread ID) + if headers != nil { + if tid := headers.Get("X-Amp-Thread-Id"); tid != "" { + return "amp:" + tid, "" + } + } + + // 5. X-Client-Request-Id header (PI) + if headers != nil { + if rid := headers.Get("X-Client-Request-Id"); rid != "" { + return "clientreq:" + rid, "" + } + } + if len(payload) == 0 { return "", "" } - // 3. metadata.user_id (non-Claude Code format) + // 6. metadata.user_id (non-Claude Code format) userID := gjson.GetBytes(payload, "metadata.user_id").String() if userID != "" { return "user:" + userID, "" } - // 4. conversation_id field + // 7. conversation_id field if convID := gjson.GetBytes(payload, "conversation_id").String(); convID != "" { return "conv:" + convID, "" } - // 5. Hash-based fallback from message content + // 8. Hash-based fallback from message content return extractMessageHashIDs(payload) } diff --git a/sdk/cliproxy/auth/selector_test.go b/sdk/cliproxy/auth/selector_test.go index 560d3b9e97..99231bdf78 100644 --- a/sdk/cliproxy/auth/selector_test.go +++ b/sdk/cliproxy/auth/selector_test.go @@ -11,7 +11,7 @@ import ( "testing" "time" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" ) func TestFillFirstSelectorPick_Deterministic(t *testing.T) { @@ -776,6 +776,100 @@ func TestExtractSessionID_Headers(t *testing.T) { } } +func TestExtractSessionID_CodexSessionIDHeader(t *testing.T) { + t.Parallel() + + headers := make(http.Header) + headers.Set("Session_id", "codex-session-123") + + got := ExtractSessionID(headers, nil, nil) + want := "codex:codex-session-123" + if got != want { + t.Errorf("ExtractSessionID() with Session_id = %q, want %q", got, want) + } +} + +func TestExtractSessionID_ClientRequestIDHeader(t *testing.T) { + 
t.Parallel() + + headers := make(http.Header) + headers.Set("X-Client-Request-Id", "pi-session-123") + + got := ExtractSessionID(headers, nil, nil) + want := "clientreq:pi-session-123" + if got != want { + t.Errorf("ExtractSessionID() with X-Client-Request-Id = %q, want %q", got, want) + } +} + +func TestExtractSessionID_CodexSessionIDPriorityOverClientRequestID(t *testing.T) { + t.Parallel() + + headers := make(http.Header) + headers.Set("X-Client-Request-Id", "pi-session-123") + headers.Set("Session_id", "codex-session-456") + + got := ExtractSessionID(headers, nil, nil) + want := "codex:codex-session-456" + if got != want { + t.Errorf("ExtractSessionID() = %q, want %q (Session_id should take priority over X-Client-Request-Id)", got, want) + } +} + +func TestExtractSessionID_AmpThreadId(t *testing.T) { + t.Parallel() + + headers := make(http.Header) + headers.Set("X-Amp-Thread-Id", "T-7873e6bd-6354-4a9a-be2c-c7702c6e1b64") + + got := ExtractSessionID(headers, nil, nil) + want := "amp:T-7873e6bd-6354-4a9a-be2c-c7702c6e1b64" + if got != want { + t.Errorf("ExtractSessionID() with X-Amp-Thread-Id = %q, want %q", got, want) + } +} + +func TestExtractSessionID_AmpThreadIdPriorityOverClientRequestID(t *testing.T) { + t.Parallel() + + headers := make(http.Header) + headers.Set("X-Amp-Thread-Id", "T-priority-test") + headers.Set("X-Client-Request-Id", "pi-session-123") + + got := ExtractSessionID(headers, nil, nil) + want := "amp:T-priority-test" + if got != want { + t.Errorf("ExtractSessionID() = %q, want %q (X-Amp-Thread-Id should take priority over X-Client-Request-Id)", got, want) + } +} + +// TestExtractSessionID_AmpThreadIdLowerPriority verifies X-Amp-Thread-Id is lower +// priority than Claude Code metadata.user_id but higher than conversation_id. 
+func TestExtractSessionID_AmpThreadIdPriority(t *testing.T) { + t.Parallel() + + // X-Amp-Thread-Id should be used when no Claude Code user_id is present + headers := make(http.Header) + headers.Set("X-Amp-Thread-Id", "T-priority-test") + + payload := []byte(`{"conversation_id":"conv-12345"}`) + got := ExtractSessionID(headers, payload, nil) + want := "amp:T-priority-test" + if got != want { + t.Errorf("ExtractSessionID() = %q, want %q (Amp thread ID should take priority over conversation_id)", got, want) + } + + // Claude Code user_id should take priority over X-Amp-Thread-Id + headers2 := make(http.Header) + headers2.Set("X-Amp-Thread-Id", "T-priority-test") + payload2 := []byte(`{"metadata":{"user_id":"user_xxx_account__session_ac980658-63bd-4fb3-97ba-8da64cb1e344"}}`) + got2 := ExtractSessionID(headers2, payload2, nil) + want2 := "claude:ac980658-63bd-4fb3-97ba-8da64cb1e344" + if got2 != want2 { + t.Errorf("ExtractSessionID() = %q, want %q (Claude Code should take priority over Amp thread ID)", got2, want2) + } +} + // TestExtractSessionID_IdempotencyKey verifies that idempotency_key is intentionally // ignored for session affinity (it's auto-generated per-request, causing cache misses). func TestExtractSessionID_IdempotencyKey(t *testing.T) { diff --git a/sdk/cliproxy/auth/types.go b/sdk/cliproxy/auth/types.go index f30f4dc011..882c25eabd 100644 --- a/sdk/cliproxy/auth/types.go +++ b/sdk/cliproxy/auth/types.go @@ -7,12 +7,13 @@ import ( "encoding/json" "net/http" "net/url" + "path/filepath" "strconv" "strings" "sync" "time" - baseauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth" + baseauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth" ) // PostAuthHook defines a function that is called after an Auth record is created @@ -92,7 +93,32 @@ type Auth struct { // Runtime carries non-serialisable data used during execution (in-memory only). 
Runtime any `json:"-"` - indexAssigned bool `json:"-"` + Success int64 `json:"-"` + Failed int64 `json:"-"` + + recentRequests recentRequestRing `json:"-"` + indexAssigned bool `json:"-"` +} + +const ( + recentRequestBucketSeconds int64 = 10 * 60 + recentRequestBucketCount = 20 +) + +type recentRequestBucket struct { + bucketID int64 + success int64 + failed int64 +} + +type recentRequestRing struct { + buckets [recentRequestBucketCount]recentRequestBucket +} + +type RecentRequestBucket struct { + Time string `json:"time"` + Success int64 `json:"success"` + Failed int64 `json:"failed"` } // QuotaState contains limiter tracking data for a credential. @@ -125,6 +151,70 @@ type ModelState struct { UpdatedAt time.Time `json:"updated_at"` } +func recentRequestBucketID(now time.Time) int64 { + if now.IsZero() { + return 0 + } + return now.Unix() / recentRequestBucketSeconds +} + +func recentRequestBucketIndex(bucketID int64) int { + mod := bucketID % int64(recentRequestBucketCount) + if mod < 0 { + mod += int64(recentRequestBucketCount) + } + return int(mod) +} + +func formatRecentRequestBucketLabel(bucketID int64) string { + start := time.Unix(bucketID*recentRequestBucketSeconds, 0).In(time.Local) + end := start.Add(time.Duration(recentRequestBucketSeconds) * time.Second) + return start.Format("15:04") + "-" + end.Format("15:04") +} + +func (a *Auth) recordRecentRequest(now time.Time, success bool) { + if a == nil { + return + } + bucketID := recentRequestBucketID(now) + idx := recentRequestBucketIndex(bucketID) + bucket := &a.recentRequests.buckets[idx] + if bucket.bucketID != bucketID { + bucket.bucketID = bucketID + bucket.success = 0 + bucket.failed = 0 + } + if success { + bucket.success++ + return + } + bucket.failed++ +} + +func (a *Auth) RecentRequestsSnapshot(now time.Time) []RecentRequestBucket { + out := make([]RecentRequestBucket, 0, recentRequestBucketCount) + if a == nil { + return out + } + + currentBucketID := recentRequestBucketID(now) + for i := 
recentRequestBucketCount - 1; i >= 0; i-- { + bucketID := currentBucketID - int64(i) + idx := recentRequestBucketIndex(bucketID) + bucket := a.recentRequests.buckets[idx] + entry := RecentRequestBucket{ + Time: formatRecentRequestBucketLabel(bucketID), + } + if bucket.bucketID == bucketID { + entry.Success = bucket.success + entry.Failed = bucket.failed + } + out = append(out, entry) + } + + return out +} + // Clone shallow copies the Auth structure, duplicating maps to avoid accidental mutation. func (a *Auth) Clone() *Auth { if a == nil { @@ -167,45 +257,65 @@ func (a *Auth) indexSeed() string { return "" } - if fileName := strings.TrimSpace(a.FileName); fileName != "" { - return "file:" + fileName - } - - providerKey := strings.ToLower(strings.TrimSpace(a.Provider)) + provider := strings.ToLower(strings.TrimSpace(a.Provider)) compatName := "" baseURL := "" apiKey := "" - source := "" + filePath := "" if a.Attributes != nil { - if value := strings.TrimSpace(a.Attributes["provider_key"]); value != "" { - providerKey = strings.ToLower(value) - } - compatName = strings.ToLower(strings.TrimSpace(a.Attributes["compat_name"])) + compatName = strings.TrimSpace(a.Attributes["compat_name"]) baseURL = strings.TrimSpace(a.Attributes["base_url"]) apiKey = strings.TrimSpace(a.Attributes["api_key"]) - source = strings.TrimSpace(a.Attributes["source"]) + filePath = strings.TrimSpace(a.Attributes["path"]) + if filePath == "" { + filePath = strings.TrimSpace(a.Attributes["source"]) + } + } + + if filePath == "" { + filePath = strings.TrimSpace(a.FileName) + } + if filePath == "" { + filePath = strings.TrimSpace(a.ID) } - proxyURL := strings.TrimSpace(a.ProxyURL) - hasCredentialIdentity := compatName != "" || baseURL != "" || proxyURL != "" || apiKey != "" || source != "" - if providerKey != "" && hasCredentialIdentity { - parts := []string{"provider=" + providerKey} - if compatName != "" { - parts = append(parts, "compat="+compatName) + if filePath != "" && 
strings.HasSuffix(strings.ToLower(filePath), ".json") { + abs, errAbs := filepath.Abs(filePath) + if errAbs == nil && strings.TrimSpace(abs) != "" { + filePath = abs } - if baseURL != "" { - parts = append(parts, "base="+baseURL) + filePath = filepath.Clean(filePath) + + authType := "" + if a.Metadata != nil { + if rawType, ok := a.Metadata["type"].(string); ok { + authType = strings.TrimSpace(rawType) + } } - if proxyURL != "" { - parts = append(parts, "proxy="+proxyURL) + if authType == "" { + authType = strings.TrimSpace(provider) } - if apiKey != "" { - parts = append(parts, "api_key="+apiKey) + authType = strings.ToLower(strings.TrimSpace(authType)) + if authType != "" { + return authType + ":" + filePath } - if source != "" { - parts = append(parts, "source="+source) + } + + apiPrefix := "" + if apiKey != "" { + switch { + case compatName != "" || strings.EqualFold(provider, "openai-compatibility"): + apiPrefix = "openai-compatibility" + case strings.EqualFold(provider, "gemini"): + apiPrefix = "gemini-api-key" + case strings.EqualFold(provider, "codex"): + apiPrefix = "codex-api-key" + case strings.EqualFold(provider, "claude"): + apiPrefix = "claude-api-key" } - return "config:" + strings.Join(parts, "\x00") + } + if apiPrefix != "" { + return apiPrefix + ":" + strings.TrimSpace(baseURL) + "+" + strings.TrimSpace(apiKey) } if id := strings.TrimSpace(a.ID); id != "" { @@ -266,19 +376,28 @@ func (a *Auth) ProxyInfo() string { return "via proxy" } -// DisableCoolingOverride returns the auth-file scoped disable_cooling override when present. +// DisableCoolingOverride returns the auth scoped disable_cooling override when present. // The value is read from metadata key "disable_cooling" (or legacy "disable-cooling"). +// +// NOTE: This override is intentionally "true-only". When the metadata value is false, it is treated +// as "not set" so the global disable-cooling flag can still take effect. 
func (a *Auth) DisableCoolingOverride() (bool, bool) { if a == nil || a.Metadata == nil { return false, false } if val, ok := a.Metadata["disable_cooling"]; ok { if parsed, okParse := parseBoolAny(val); okParse { + if !parsed { + return false, false + } return parsed, true } } if val, ok := a.Metadata["disable-cooling"]; ok { if parsed, okParse := parseBoolAny(val); okParse { + if !parsed { + return false, false + } return parsed, true } } diff --git a/sdk/cliproxy/auth/types_test.go b/sdk/cliproxy/auth/types_test.go index e7029385a3..f579bfda2e 100644 --- a/sdk/cliproxy/auth/types_test.go +++ b/sdk/cliproxy/auth/types_test.go @@ -1,6 +1,12 @@ package auth -import "testing" +import ( + "os" + "path/filepath" + "strings" + "testing" + "time" +) func TestToolPrefixDisabled(t *testing.T) { var a *Auth @@ -92,7 +98,108 @@ func TestEnsureIndexUsesCredentialIdentity(t *testing.T) { if geminiIndex == altBaseIndex { t.Fatalf("same provider/key with different base_url produced duplicate auth_index %q", geminiIndex) } - if geminiIndex == duplicateIndex { - t.Fatalf("duplicate config entries should be separated by source-derived seed, got %q", geminiIndex) + if geminiIndex != duplicateIndex { + t.Fatalf("same provider/key with different source should share auth_index, got %q vs %q", geminiIndex, duplicateIndex) + } +} + +func TestEnsureIndexUsesOAuthTypeAndAbsolutePath(t *testing.T) { + t.Parallel() + + wd, errWd := os.Getwd() + if errWd != nil { + t.Fatalf("os.Getwd returned error: %v", errWd) + } + + relPath := "test-oauth.json" + absPath := filepath.Join(wd, relPath) + expectedSeed := "gemini:" + filepath.Clean(absPath) + expectedIndex := stableAuthIndex(expectedSeed) + + a := &Auth{ + Provider: "gemini-cli", + Attributes: map[string]string{ + "path": relPath, + }, + Metadata: map[string]any{ + "type": "gemini", + }, + } + + got := a.EnsureIndex() + if got == "" { + t.Fatal("auth index should not be empty") + } + if got != expectedIndex { + t.Fatalf("auth index = %q, want 
%q", got, expectedIndex) + } +} + +func TestRecentRequestsSnapshotEmptyReturnsTwentyBuckets(t *testing.T) { + now := time.Unix(1_700_000_000, 0).In(time.Local) + a := &Auth{} + + got := a.RecentRequestsSnapshot(now) + if len(got) != recentRequestBucketCount { + t.Fatalf("len = %d, want %d", len(got), recentRequestBucketCount) + } + + currentBucketID := now.Unix() / recentRequestBucketSeconds + baseBucketID := currentBucketID - int64(recentRequestBucketCount-1) + for i, bucket := range got { + if bucket.Success != 0 || bucket.Failed != 0 { + t.Fatalf("bucket[%d] counts = %d/%d, want 0/0", i, bucket.Success, bucket.Failed) + } + if strings.TrimSpace(bucket.Time) == "" { + t.Fatalf("bucket[%d] time label is empty", i) + } + expectedBucketID := baseBucketID + int64(i) + start := time.Unix(expectedBucketID*recentRequestBucketSeconds, 0).In(time.Local) + end := start.Add(10 * time.Minute) + expected := start.Format("15:04") + "-" + end.Format("15:04") + if bucket.Time != expected { + t.Fatalf("bucket[%d] time = %q, want %q", i, bucket.Time, expected) + } + } +} + +func TestRecentRequestsSnapshotIncludesCounts(t *testing.T) { + now := time.Unix(1_700_000_000, 0).In(time.Local) + a := &Auth{} + + a.recordRecentRequest(now, true) + a.recordRecentRequest(now, false) + + got := a.RecentRequestsSnapshot(now) + if len(got) != recentRequestBucketCount { + t.Fatalf("len = %d, want %d", len(got), recentRequestBucketCount) + } + + newest := got[len(got)-1] + if newest.Success != 1 || newest.Failed != 1 { + t.Fatalf("newest bucket = success=%d failed=%d, want 1/1", newest.Success, newest.Failed) + } +} + +func TestRecentRequestsSnapshotBucketAdvanceMovesCounts(t *testing.T) { + now := time.Unix(1_700_000_000, 0).In(time.Local) + next := now.Add(10 * time.Minute) + a := &Auth{} + + a.recordRecentRequest(now, true) + a.recordRecentRequest(next, false) + + got := a.RecentRequestsSnapshot(next) + if len(got) != recentRequestBucketCount { + t.Fatalf("len = %d, want %d", len(got), 
recentRequestBucketCount) + } + + secondNewest := got[len(got)-2] + newest := got[len(got)-1] + if secondNewest.Success != 1 || secondNewest.Failed != 0 { + t.Fatalf("second newest bucket = success=%d failed=%d, want 1/0", secondNewest.Success, secondNewest.Failed) + } + if newest.Success != 0 || newest.Failed != 1 { + t.Fatalf("newest bucket = success=%d failed=%d, want 0/1", newest.Success, newest.Failed) } } diff --git a/sdk/cliproxy/builder.go b/sdk/cliproxy/builder.go index b8cf991c14..c7e187ee6b 100644 --- a/sdk/cliproxy/builder.go +++ b/sdk/cliproxy/builder.go @@ -8,12 +8,12 @@ import ( "strings" "time" - configaccess "github.com/router-for-me/CLIProxyAPI/v6/internal/access/config_access" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + configaccess "github.com/router-for-me/CLIProxyAPI/v7/internal/access/config_access" + "github.com/router-for-me/CLIProxyAPI/v7/internal/api" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) // Builder constructs a Service instance with customizable providers. 
@@ -214,7 +214,7 @@ func (b *Builder) Build() (*Service, error) { if b.cfg != nil { strategy = strings.ToLower(strings.TrimSpace(b.cfg.Routing.Strategy)) // Support both legacy ClaudeCodeSessionAffinity and new universal SessionAffinity - sessionAffinity = b.cfg.Routing.ClaudeCodeSessionAffinity || b.cfg.Routing.SessionAffinity + sessionAffinity = b.cfg.Routing.SessionAffinity if ttlStr := strings.TrimSpace(b.cfg.Routing.SessionAffinityTTL); ttlStr != "" { if parsed, err := time.ParseDuration(ttlStr); err == nil && parsed > 0 { sessionAffinityTTL = parsed diff --git a/sdk/cliproxy/executor/types.go b/sdk/cliproxy/executor/types.go index 4ea8103947..fd1da2e537 100644 --- a/sdk/cliproxy/executor/types.go +++ b/sdk/cliproxy/executor/types.go @@ -4,12 +4,19 @@ import ( "net/http" "net/url" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" ) // RequestedModelMetadataKey stores the client-requested model name in Options.Metadata. const RequestedModelMetadataKey = "requested_model" +// RequestPathMetadataKey stores the inbound HTTP request path (e.g. "/v1/images/generations") in Options.Metadata. +// It is optional and may be absent for non-HTTP executions. +const RequestPathMetadataKey = "request_path" + +// DisallowFreeAuthMetadataKey instructs auth selection to skip known free-tier credentials. +const DisallowFreeAuthMetadataKey = "disallow_free_auth" + const ( // PinnedAuthMetadataKey locks execution to a specific auth ID. PinnedAuthMetadataKey = "pinned_auth_id" diff --git a/sdk/cliproxy/model_registry.go b/sdk/cliproxy/model_registry.go index 01cea5b715..9cb928c98a 100644 --- a/sdk/cliproxy/model_registry.go +++ b/sdk/cliproxy/model_registry.go @@ -1,6 +1,6 @@ package cliproxy -import "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" +import "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" // ModelInfo re-exports the registry model info structure. 
type ModelInfo = registry.ModelInfo diff --git a/sdk/cliproxy/pipeline/context.go b/sdk/cliproxy/pipeline/context.go index fc6754eb97..4cffb0b4d9 100644 --- a/sdk/cliproxy/pipeline/context.go +++ b/sdk/cliproxy/pipeline/context.go @@ -4,9 +4,9 @@ import ( "context" "net/http" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" ) // Context encapsulates execution state shared across middleware, translators, and executors. diff --git a/sdk/cliproxy/pprof_server.go b/sdk/cliproxy/pprof_server.go index 3fafef4cd4..ec30b4bef3 100644 --- a/sdk/cliproxy/pprof_server.go +++ b/sdk/cliproxy/pprof_server.go @@ -9,7 +9,7 @@ import ( "sync" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" log "github.com/sirupsen/logrus" ) diff --git a/sdk/cliproxy/providers.go b/sdk/cliproxy/providers.go index 7ce89f76fe..542b2d9d6a 100644 --- a/sdk/cliproxy/providers.go +++ b/sdk/cliproxy/providers.go @@ -3,8 +3,8 @@ package cliproxy import ( "context" - "github.com/router-for-me/CLIProxyAPI/v6/internal/watcher" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) // NewFileTokenClientProvider returns the default token-backed client loader. 
diff --git a/sdk/cliproxy/rtprovider.go b/sdk/cliproxy/rtprovider.go index 5c4f579a85..d07b4cb4f9 100644 --- a/sdk/cliproxy/rtprovider.go +++ b/sdk/cliproxy/rtprovider.go @@ -5,8 +5,8 @@ import ( "strings" "sync" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/proxyutil" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/proxyutil" log "github.com/sirupsen/logrus" ) diff --git a/sdk/cliproxy/rtprovider_test.go b/sdk/cliproxy/rtprovider_test.go index f907081e29..6ea08432c1 100644 --- a/sdk/cliproxy/rtprovider_test.go +++ b/sdk/cliproxy/rtprovider_test.go @@ -4,7 +4,7 @@ import ( "net/http" "testing" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" ) func TestRoundTripperForDirectBypassesProxy(t *testing.T) { diff --git a/sdk/cliproxy/service.go b/sdk/cliproxy/service.go index 5e873d370b..9cdb989b5b 100644 --- a/sdk/cliproxy/service.go +++ b/sdk/cliproxy/service.go @@ -12,17 +12,21 @@ import ( "sync" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/usage" - "github.com/router-for-me/CLIProxyAPI/v6/internal/watcher" - "github.com/router-for-me/CLIProxyAPI/v6/internal/wsrelay" - sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access" - sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/usage" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/api" + "github.com/router-for-me/CLIProxyAPI/v7/internal/home" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + kiroauth "github.com/router-for-me/CLIProxyAPI/v7/internal/auth/kiro" + "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor" + "github.com/router-for-me/CLIProxyAPI/v7/internal/util" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher/diff" + "github.com/router-for-me/CLIProxyAPI/v7/internal/wsrelay" + sdkaccess "github.com/router-for-me/CLIProxyAPI/v7/sdk/access" + sdkAuth "github.com/router-for-me/CLIProxyAPI/v7/sdk/auth" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/usage" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" log "github.com/sirupsen/logrus" ) @@ -36,6 +40,9 @@ type Service struct { // cfgMu protects concurrent access to the configuration. cfgMu sync.RWMutex + // configUpdateMu serializes config updates across watcher + home. + configUpdateMu sync.Mutex + // configPath is the path to the configuration file. configPath string @@ -89,6 +96,9 @@ type Service struct { // wsGateway manages websocket Gemini providers. wsGateway *wsrelay.Manager + + homeClient *home.Client + homeCancel context.CancelFunc } // RegisterUsagePlugin registers a usage plugin on the global usage manager. 
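The new `configUpdateMu` field serializes whole config-update operations (watcher reloads and home-pushed overlays) while the existing `cfgMu` RWMutex keeps plain reads of `s.cfg` cheap and concurrent. A simplified sketch of that two-mutex pattern, using stand-in types rather than the real `Service`:

```go
package main

import (
	"fmt"
	"sync"
)

// service mirrors the locking shape: updateMu makes each full
// read-compare-apply update atomic with respect to other updates,
// while cfgMu only guards access to the cfg field itself.
type service struct {
	updateMu sync.Mutex   // one config update at a time (watcher or home)
	cfgMu    sync.RWMutex // cheap concurrent reads of cfg
	cfg      string
}

func (s *service) applyUpdate(next string) {
	s.updateMu.Lock()
	defer s.updateMu.Unlock()

	s.cfgMu.RLock()
	prev := s.cfg
	s.cfgMu.RUnlock()

	if prev == next { // compare outside the write lock
		return
	}
	s.cfgMu.Lock()
	s.cfg = next
	s.cfgMu.Unlock()
}

func (s *service) current() string {
	s.cfgMu.RLock()
	defer s.cfgMu.RUnlock()
	return s.cfg
}

func main() {
	s := &service{cfg: "a"}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); s.applyUpdate("b") }()
	}
	wg.Wait()
	fmt.Println(s.current()) // prints "b"
}
```

Without `updateMu`, two concurrent updates could each read a stale `prev`, make inconsistent selector decisions, and interleave their writes.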
@@ -424,6 +434,30 @@ func (s *Service) ensureExecutorsForAuthWithMode(a *coreauth.Auth, forceReplace s.coreManager.RegisterExecutor(executor.NewClaudeExecutor(s.cfg)) case "kimi": s.coreManager.RegisterExecutor(executor.NewKimiExecutor(s.cfg)) + case "kiro": + s.coreManager.RegisterExecutor(executor.NewKiroExecutor(s.cfg)) + case "kilo": + s.coreManager.RegisterExecutor(executor.NewKiloExecutor(s.cfg)) + case "cursor": + s.coreManager.RegisterExecutor(executor.NewCursorExecutor(s.cfg)) + case "github-copilot": + s.coreManager.RegisterExecutor(executor.NewGitHubCopilotExecutor(s.cfg)) + case "codebuddy": + s.coreManager.RegisterExecutor(executor.NewCodeBuddyExecutor(s.cfg)) + case "codebuddy-ai": + s.coreManager.RegisterExecutor(executor.NewCodeBuddyAIExecutor(s.cfg)) + case "codearts": + s.coreManager.RegisterExecutor(executor.NewCodeArtsExecutor(s.cfg)) + case "joycode": + s.coreManager.RegisterExecutor(executor.NewJoyCodeExecutor(s.cfg)) + case "gitlab": + s.coreManager.RegisterExecutor(executor.NewGitLabExecutor(s.cfg)) + case "bt": + s.coreManager.RegisterExecutor(executor.NewBTExecutor(s.cfg)) + case "qoder": + s.coreManager.RegisterExecutor(executor.NewQoderExecutor(s.cfg)) + case "iflow": + s.coreManager.RegisterExecutor(executor.NewIFlowExecutor(s.cfg)) default: providerKey := strings.ToLower(strings.TrimSpace(a.Provider)) if providerKey == "" { @@ -462,6 +496,270 @@ func (s *Service) rebindExecutors() { } } +func (s *Service) applyConfigUpdate(newCfg *config.Config) { + if s == nil { + return + } + + s.configUpdateMu.Lock() + defer s.configUpdateMu.Unlock() + + previousStrategy := "" + var previousSessionAffinity bool + var previousSessionAffinityTTL string + s.cfgMu.RLock() + if s.cfg != nil { + previousStrategy = strings.ToLower(strings.TrimSpace(s.cfg.Routing.Strategy)) + previousSessionAffinity = s.cfg.Routing.SessionAffinity + previousSessionAffinityTTL = s.cfg.Routing.SessionAffinityTTL + } + s.cfgMu.RUnlock() + + if newCfg == nil { + s.cfgMu.RLock() 
+ newCfg = s.cfg + s.cfgMu.RUnlock() + } + if newCfg == nil { + return + } + + nextStrategy := strings.ToLower(strings.TrimSpace(newCfg.Routing.Strategy)) + normalizeStrategy := func(strategy string) string { + switch strategy { + case "fill-first", "fillfirst", "ff": + return "fill-first" + default: + return "round-robin" + } + } + previousStrategy = normalizeStrategy(previousStrategy) + nextStrategy = normalizeStrategy(nextStrategy) + + nextSessionAffinity := newCfg.Routing.SessionAffinity + nextSessionAffinityTTL := newCfg.Routing.SessionAffinityTTL + + selectorChanged := previousStrategy != nextStrategy || + previousSessionAffinity != nextSessionAffinity || + previousSessionAffinityTTL != nextSessionAffinityTTL + + if s.coreManager != nil && selectorChanged { + var selector coreauth.Selector + switch nextStrategy { + case "fill-first": + selector = &coreauth.FillFirstSelector{} + default: + selector = &coreauth.RoundRobinSelector{} + } + + if nextSessionAffinity { + ttl := time.Hour + if ttlStr := strings.TrimSpace(nextSessionAffinityTTL); ttlStr != "" { + if parsed, err := time.ParseDuration(ttlStr); err == nil && parsed > 0 { + ttl = parsed + } + } + selector = coreauth.NewSessionAffinitySelectorWithConfig(coreauth.SessionAffinityConfig{ + Fallback: selector, + TTL: ttl, + }) + } + + s.coreManager.SetSelector(selector) + } + + s.applyRetryConfig(newCfg) + s.applyPprofConfig(newCfg) + if s.server != nil { + s.server.UpdateClients(newCfg) + } + s.cfgMu.Lock() + s.cfg = newCfg + s.cfgMu.Unlock() + if s.coreManager != nil { + s.coreManager.SetConfig(newCfg) + s.coreManager.SetOAuthModelAlias(newCfg.OAuthModelAlias) + } + s.rebindExecutors() +} + +func forceHomeRuntimeConfig(cfg *config.Config) { + if cfg == nil { + return + } + cfg.APIKeys = nil + cfg.UsageStatisticsEnabled = true + cfg.DisableCooling = true + cfg.WebsocketAuth = false + cfg.EnableGeminiCLIEndpoint = false + cfg.RemoteManagement.AllowRemote = false + cfg.RemoteManagement.DisableControlPanel = 
true +} + +func (s *Service) registerHomeExecutors() { + if s == nil || s.coreManager == nil || s.cfg == nil { + return + } + + // Register baseline executors so home-dispatched auth entries can execute without + // requiring any local auth-dir credentials. + s.coreManager.RegisterExecutor(executor.NewCodexAutoExecutor(s.cfg)) + s.coreManager.RegisterExecutor(executor.NewClaudeExecutor(s.cfg)) + s.coreManager.RegisterExecutor(executor.NewGeminiExecutor(s.cfg)) + s.coreManager.RegisterExecutor(executor.NewGeminiVertexExecutor(s.cfg)) + s.coreManager.RegisterExecutor(executor.NewGeminiCLIExecutor(s.cfg)) + s.coreManager.RegisterExecutor(executor.NewAIStudioExecutor(s.cfg, "", s.wsGateway)) + s.coreManager.RegisterExecutor(executor.NewAntigravityExecutor(s.cfg)) + s.coreManager.RegisterExecutor(executor.NewKimiExecutor(s.cfg)) + s.coreManager.RegisterExecutor(executor.NewOpenAICompatExecutor("openai-compatibility", s.cfg)) +} + +func (s *Service) applyHomeOverlay(remoteCfg *config.Config) { + if s == nil || remoteCfg == nil { + return + } + + s.cfgMu.RLock() + baseCfg := s.cfg + s.cfgMu.RUnlock() + if baseCfg == nil { + return + } + + merged := *remoteCfg + merged.Host = baseCfg.Host + merged.Port = baseCfg.Port + merged.TLS = baseCfg.TLS + merged.Home = baseCfg.Home + forceHomeRuntimeConfig(&merged) + + logHomeConfigChanges(baseCfg, &merged) + s.applyConfigUpdate(&merged) +} + +func logHomeConfigChanges(oldCfg, newCfg *config.Config) { + if oldCfg == nil || newCfg == nil || !newCfg.Home.Enabled || (!oldCfg.Debug && !newCfg.Debug) { + return + } + + details := diff.BuildConfigChangeDetails(oldCfg, newCfg) + if len(details) == 0 { + return + } + + if newCfg.Debug && !log.IsLevelEnabled(log.DebugLevel) { + util.SetLogLevel(newCfg) + } + + log.Debugf("home config changes detected:") + for _, detail := range details { + log.Debugf(" %s", detail) + } +} + +func (s *Service) startHomeUsageForwarder(ctx context.Context, client *home.Client) { + if s == nil || client == nil { 
+ return + } + if ctx == nil { + ctx = context.Background() + } + + sleep := func(d time.Duration) bool { + if d <= 0 { + return true + } + timer := time.NewTimer(d) + defer timer.Stop() + select { + case <-ctx.Done(): + return false + case <-timer.C: + return true + } + } + + go func() { + for { + select { + case <-ctx.Done(): + return + default: + } + + if !client.HeartbeatOK() { + if !sleep(time.Second) { + return + } + continue + } + + items := redisqueue.PopOldest(64) + if len(items) == 0 { + if !sleep(500 * time.Millisecond) { + return + } + continue + } + + for i := range items { + if errPush := client.LPushUsage(ctx, items[i]); errPush != nil { + for j := i; j < len(items); j++ { + redisqueue.Enqueue(items[j]) + } + if !sleep(time.Second) { + return + } + break + } + } + } + }() +} + +func (s *Service) startHomeSubscriber(ctx context.Context) { + if s == nil { + return + } + s.cfgMu.RLock() + cfg := s.cfg + s.cfgMu.RUnlock() + if cfg == nil || !cfg.Home.Enabled { + return + } + + if s.homeCancel != nil { + s.homeCancel() + s.homeCancel = nil + } + if s.homeClient != nil { + s.homeClient.Close() + s.homeClient = nil + } + + homeCtx := ctx + if homeCtx == nil { + homeCtx = context.Background() + } + homeCtx, cancel := context.WithCancel(homeCtx) + s.homeCancel = cancel + + client := home.New(cfg.Home) + s.homeClient = client + home.SetCurrent(client) + + go client.StartConfigSubscriber(homeCtx, func(raw []byte) error { + parsed, err := config.ParseConfigBytes(raw) + if err != nil { + log.Warnf("failed to parse home config payload: %v", err) + return err + } + s.applyHomeOverlay(parsed) + return nil + }) + s.startHomeUsageForwarder(homeCtx, client) +} + // Run starts the service and blocks until the context is cancelled or the server stops. // It initializes all components including authentication, file watching, HTTP server, // and starts processing requests. The method blocks until the context is cancelled. 
@@ -480,6 +778,11 @@ func (s *Service) Run(ctx context.Context) error { } usage.StartDefault(ctx) + homeEnabled := s.cfg != nil && s.cfg.Home.Enabled + if homeEnabled { + forceHomeRuntimeConfig(s.cfg) + redisqueue.SetUsageStatisticsEnabled(true) + } shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second) defer shutdownCancel() @@ -489,32 +792,36 @@ func (s *Service) Run(ctx context.Context) error { } }() - if err := s.ensureAuthDir(); err != nil { - return err + if !homeEnabled { + if errEnsureAuthDir := s.ensureAuthDir(); errEnsureAuthDir != nil { + return errEnsureAuthDir + } } s.applyRetryConfig(s.cfg) - if s.coreManager != nil { + if s.coreManager != nil && !homeEnabled { if errLoad := s.coreManager.Load(ctx); errLoad != nil { log.Warnf("failed to load auth store: %v", errLoad) } } - tokenResult, err := s.tokenProvider.Load(ctx, s.cfg) - if err != nil && !errors.Is(err, context.Canceled) { - return err - } - if tokenResult == nil { - tokenResult = &TokenClientResult{} - } + if !homeEnabled { + tokenResult, err := s.tokenProvider.Load(ctx, s.cfg) + if err != nil && !errors.Is(err, context.Canceled) { + return err + } + if tokenResult == nil { + tokenResult = &TokenClientResult{} + } - apiKeyResult, err := s.apiKeyProvider.Load(ctx, s.cfg) - if err != nil && !errors.Is(err, context.Canceled) { - return err - } - if apiKeyResult == nil { - apiKeyResult = &APIKeyClientResult{} + apiKeyResult, err := s.apiKeyProvider.Load(ctx, s.cfg) + if err != nil && !errors.Is(err, context.Canceled) { + return err + } + if apiKeyResult == nil { + apiKeyResult = &APIKeyClientResult{} + } } // legacy clients removed; no caches to refresh @@ -526,6 +833,10 @@ func (s *Service) Run(ctx context.Context) error { s.authManager = newDefaultAuthManager() } + if homeEnabled { + s.startHomeSubscriber(ctx) + } + s.ensureWebsocketGateway() if s.server != nil && s.wsGateway != nil { s.server.AttachWebsocketRoute(s.wsGateway.Path(), s.wsGateway.Handler()) @@ 
-547,6 +858,12 @@ func (s *Service) Run(ctx context.Context) error { }) } + if homeEnabled { + s.registerHomeExecutors() + // Home mode does not expose in-process Redis RESP usage output; usage is forwarded to home instead. + redisqueue.SetEnabled(true) + } + if s.hooks.OnBeforeStart != nil { s.hooks.OnBeforeStart(s.cfg) } @@ -607,107 +924,31 @@ func (s *Service) Run(ctx context.Context) error { s.hooks.OnAfterStart(s) } - var watcherWrapper *WatcherWrapper - reloadCallback := func(newCfg *config.Config) { - previousStrategy := "" - var previousSessionAffinity bool - var previousSessionAffinityTTL string - s.cfgMu.RLock() - if s.cfg != nil { - previousStrategy = strings.ToLower(strings.TrimSpace(s.cfg.Routing.Strategy)) - previousSessionAffinity = s.cfg.Routing.ClaudeCodeSessionAffinity || s.cfg.Routing.SessionAffinity - previousSessionAffinityTTL = s.cfg.Routing.SessionAffinityTTL - } - s.cfgMu.RUnlock() + if !homeEnabled { + var watcherWrapper *WatcherWrapper + reloadCallback := func(newCfg *config.Config) { s.applyConfigUpdate(newCfg) } - if newCfg == nil { - s.cfgMu.RLock() - newCfg = s.cfg - s.cfgMu.RUnlock() - } - if newCfg == nil { - return + watcherWrapper, errCreate := s.watcherFactory(s.configPath, s.cfg.AuthDir, reloadCallback) + if errCreate != nil { + return fmt.Errorf("cliproxy: failed to create watcher: %w", errCreate) } - - nextStrategy := strings.ToLower(strings.TrimSpace(newCfg.Routing.Strategy)) - normalizeStrategy := func(strategy string) string { - switch strategy { - case "fill-first", "fillfirst", "ff": - return "fill-first" - default: - return "round-robin" - } - } - previousStrategy = normalizeStrategy(previousStrategy) - nextStrategy = normalizeStrategy(nextStrategy) - - nextSessionAffinity := newCfg.Routing.ClaudeCodeSessionAffinity || newCfg.Routing.SessionAffinity - nextSessionAffinityTTL := newCfg.Routing.SessionAffinityTTL - - selectorChanged := previousStrategy != nextStrategy || - previousSessionAffinity != nextSessionAffinity || - 
previousSessionAffinityTTL != nextSessionAffinityTTL - - if s.coreManager != nil && selectorChanged { - var selector coreauth.Selector - switch nextStrategy { - case "fill-first": - selector = &coreauth.FillFirstSelector{} - default: - selector = &coreauth.RoundRobinSelector{} - } - - if nextSessionAffinity { - ttl := time.Hour - if ttlStr := strings.TrimSpace(nextSessionAffinityTTL); ttlStr != "" { - if parsed, err := time.ParseDuration(ttlStr); err == nil && parsed > 0 { - ttl = parsed - } - } - selector = coreauth.NewSessionAffinitySelectorWithConfig(coreauth.SessionAffinityConfig{ - Fallback: selector, - TTL: ttl, - }) - } - - s.coreManager.SetSelector(selector) + s.watcher = watcherWrapper + s.ensureAuthUpdateQueue(ctx) + if s.authUpdates != nil { + watcherWrapper.SetAuthUpdateQueue(s.authUpdates) } + watcherWrapper.SetConfig(s.cfg) - s.applyRetryConfig(newCfg) - s.applyPprofConfig(newCfg) - if s.server != nil { - s.server.UpdateClients(newCfg) - } - s.cfgMu.Lock() - s.cfg = newCfg - s.cfgMu.Unlock() - if s.coreManager != nil { - s.coreManager.SetConfig(newCfg) - s.coreManager.SetOAuthModelAlias(newCfg.OAuthModelAlias) + watcherCtx, watcherCancel := context.WithCancel(context.Background()) + s.watcherCancel = watcherCancel + if errStart := watcherWrapper.Start(watcherCtx); errStart != nil { + return fmt.Errorf("cliproxy: failed to start watcher: %w", errStart) } - s.rebindExecutors() + log.Info("file watcher started for config and auth directory changes") } - watcherWrapper, err = s.watcherFactory(s.configPath, s.cfg.AuthDir, reloadCallback) - if err != nil { - return fmt.Errorf("cliproxy: failed to create watcher: %w", err) - } - s.watcher = watcherWrapper - s.ensureAuthUpdateQueue(ctx) - if s.authUpdates != nil { - watcherWrapper.SetAuthUpdateQueue(s.authUpdates) - } - watcherWrapper.SetConfig(s.cfg) - - watcherCtx, watcherCancel := context.WithCancel(context.Background()) - s.watcherCancel = watcherCancel - if err = watcherWrapper.Start(watcherCtx); err != 
nil { - return fmt.Errorf("cliproxy: failed to start watcher: %w", err) - } - log.Info("file watcher started for config and auth directory changes") - // Prefer core auth manager auto refresh if available. - if s.coreManager != nil { + if s.coreManager != nil && !homeEnabled { interval := 15 * time.Minute s.coreManager.StartAutoRefresh(context.Background(), interval) log.Infof("core auth auto-refresh started (interval=%s)", interval) @@ -717,8 +958,8 @@ func (s *Service) Run(ctx context.Context) error { case <-ctx.Done(): log.Debug("service context cancelled, shutting down...") return ctx.Err() - case err = <-s.serverErr: - return err + case errServer := <-s.serverErr: + return errServer } } @@ -741,6 +982,16 @@ func (s *Service) Shutdown(ctx context.Context) error { ctx = context.Background() } + if s.homeCancel != nil { + s.homeCancel() + s.homeCancel = nil + } + if s.homeClient != nil { + s.homeClient.Close() + s.homeClient = nil + } + home.ClearCurrent() + // legacy refresh loop removed; only stopping core auth manager below if s.watcherCancel != nil { @@ -927,6 +1178,42 @@ func (s *Service) registerModelsForAuth(a *coreauth.Auth) { case "kimi": models = registry.GetKimiModels() models = applyExcludedModels(models, excluded) + case "kiro": + models = s.fetchKiroModels(a) + models = applyExcludedModels(models, excluded) + case "kilo": + models = registry.GetStaticModelDefinitionsByChannel("kilo") + models = applyExcludedModels(models, excluded) + case "cursor": + models = registry.GetCursorModels() + models = applyExcludedModels(models, excluded) + case "github-copilot": + models = registry.GetGitHubCopilotModels() + models = applyExcludedModels(models, excluded) + case "codebuddy": + models = registry.GetCodeBuddyModels() + models = applyExcludedModels(models, excluded) + case "codebuddy-ai": + models = registry.GetCodeBuddyAIModels() + models = applyExcludedModels(models, excluded) + case "codearts": + models = 
registry.GetStaticModelDefinitionsByChannel("codearts") + models = applyExcludedModels(models, excluded) + case "joycode": + models = registry.GetStaticModelDefinitionsByChannel("joycode") + models = applyExcludedModels(models, excluded) + case "gitlab": + models = registry.GetStaticModelDefinitionsByChannel("gitlab") + models = applyExcludedModels(models, excluded) + case "bt": + models = registry.GetBTModels() + models = applyExcludedModels(models, excluded) + case "qoder": + models = registry.GetQoderModels() + models = applyExcludedModels(models, excluded) + case "iflow": + models = registry.GetStaticModelDefinitionsByChannel("iflow") + models = applyExcludedModels(models, excluded) default: // Handle OpenAI-compatibility providers by name using config if s.cfg != nil { @@ -968,6 +1255,9 @@ func (s *Service) registerModelsForAuth(a *coreauth.Auth) { } for i := range s.cfg.OpenAICompatibility { compat := &s.cfg.OpenAICompatibility[i] + if compat.Disabled { + continue + } if strings.EqualFold(compat.Name, compatName) { isCompatAuth = true // Convert compatibility models to registry models @@ -1410,7 +1700,7 @@ func buildCodexConfigModels(entry *config.CodexKey) []*ModelInfo { if entry == nil { return nil } - return buildConfigModels(entry.Models, "openai", "openai") + return registry.WithCodexBuiltins(buildConfigModels(entry.Models, "openai", "openai")) } func rewriteModelInfoName(name, oldID, newID string) string { @@ -1542,3 +1832,215 @@ func applyOAuthModelAlias(cfg *config.Config, provider, authKind string, models } return out } + +func (s *Service) fetchKiroModels(a *coreauth.Auth) []*ModelInfo { + if a == nil { + log.Debug("kiro: auth is nil, using static models") + return registry.GetKiroModels() + } + + // Extract token data from auth attributes + tokenData := s.extractKiroTokenData(a) + if tokenData == nil || tokenData.AccessToken == "" { + log.Debug("kiro: no valid token data in auth, using static models") + return registry.GetKiroModels() + } + + // 
Create KiroAuth instance + kAuth := kiroauth.NewKiroAuth(s.cfg) + if kAuth == nil { + log.Warn("kiro: failed to create KiroAuth instance, using static models") + return registry.GetKiroModels() + } + + // Use timeout context for API call + ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second) + defer cancel() + + // Attempt to fetch dynamic models + apiModels, err := kAuth.ListAvailableModels(ctx, tokenData) + if err != nil { + log.Warnf("kiro: failed to fetch dynamic models: %v, using static models", err) + return registry.GetKiroModels() + } + + if len(apiModels) == 0 { + log.Debug("kiro: API returned no models, using static models") + return registry.GetKiroModels() + } + + // Convert API models to ModelInfo + models := convertKiroAPIModels(apiModels) + + // Generate agentic variants + models = generateKiroAgenticVariants(models) + + log.Infof("kiro: successfully fetched %d models from API (including agentic variants)", len(models)) + return models +} + +// extractKiroTokenData extracts KiroTokenData from auth attributes and metadata. +// It supports both config-based tokens (stored in Attributes) and file-based tokens (stored in Metadata). 
+func (s *Service) extractKiroTokenData(a *coreauth.Auth) *kiroauth.KiroTokenData { + if a == nil { + return nil + } + + var accessToken, profileArn, refreshToken string + + // Priority 1: Try to get from Attributes (config.yaml source) + if a.Attributes != nil { + accessToken = strings.TrimSpace(a.Attributes["access_token"]) + profileArn = strings.TrimSpace(a.Attributes["profile_arn"]) + refreshToken = strings.TrimSpace(a.Attributes["refresh_token"]) + } + + // Priority 2: If not found in Attributes, try Metadata (JSON file source) + if accessToken == "" && a.Metadata != nil { + if at, ok := a.Metadata["access_token"].(string); ok { + accessToken = strings.TrimSpace(at) + } + if pa, ok := a.Metadata["profile_arn"].(string); ok { + profileArn = strings.TrimSpace(pa) + } + if rt, ok := a.Metadata["refresh_token"].(string); ok { + refreshToken = strings.TrimSpace(rt) + } + } + + // access_token is required + if accessToken == "" { + return nil + } + + return &kiroauth.KiroTokenData{ + AccessToken: accessToken, + ProfileArn: profileArn, + RefreshToken: refreshToken, + } +} + +// convertKiroAPIModels converts Kiro API models to ModelInfo slice. 
+func convertKiroAPIModels(apiModels []*kiroauth.KiroModel) []*ModelInfo { + if len(apiModels) == 0 { + return nil + } + + now := time.Now().Unix() + models := make([]*ModelInfo, 0, len(apiModels)) + + for _, m := range apiModels { + if m == nil || m.ModelID == "" { + continue + } + + // Create model ID with kiro- prefix + modelID := "kiro-" + normalizeKiroModelID(m.ModelID) + + info := &ModelInfo{ + ID: modelID, + Object: "model", + Created: now, + OwnedBy: "aws", + Type: "kiro", + DisplayName: formatKiroDisplayName(m.ModelName, m.RateMultiplier), + Description: m.Description, + ContextLength: 200000, + MaxCompletionTokens: 64000, + Thinking: ®istry.ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true}, + } + + if m.MaxInputTokens > 0 { + info.ContextLength = m.MaxInputTokens + } + + models = append(models, info) + } + + return models +} + +// normalizeKiroModelID normalizes a Kiro model ID by converting dots to dashes +// and removing common prefixes. +func normalizeKiroModelID(modelID string) string { + // Remove common prefixes + modelID = strings.TrimPrefix(modelID, "anthropic.") + modelID = strings.TrimPrefix(modelID, "amazon.") + + // Replace dots with dashes for consistency + modelID = strings.ReplaceAll(modelID, ".", "-") + + // Replace underscores with dashes + modelID = strings.ReplaceAll(modelID, "_", "-") + + return strings.ToLower(modelID) +} + +// formatKiroDisplayName formats the display name with rate multiplier info. +func formatKiroDisplayName(modelName string, rateMultiplier float64) string { + if modelName == "" { + return "" + } + + displayName := "Kiro " + modelName + if rateMultiplier > 0 && rateMultiplier != 1.0 { + displayName += fmt.Sprintf(" (%.1fx credit)", rateMultiplier) + } + + return displayName +} + +// generateKiroAgenticVariants generates agentic variants for Kiro models. +// Agentic variants have optimized system prompts for coding agents. 
+func generateKiroAgenticVariants(models []*ModelInfo) []*ModelInfo { + if len(models) == 0 { + return models + } + + result := make([]*ModelInfo, 0, len(models)*2) + result = append(result, models...) + + for _, m := range models { + if m == nil { + continue + } + + // Skip if already an agentic variant + if strings.HasSuffix(m.ID, "-agentic") { + continue + } + + // Skip auto models from agentic variant generation + if strings.Contains(m.ID, "-auto") { + continue + } + + // Create agentic variant + agentic := &ModelInfo{ + ID: m.ID + "-agentic", + Object: m.Object, + Created: m.Created, + OwnedBy: m.OwnedBy, + Type: m.Type, + DisplayName: m.DisplayName + " (Agentic)", + Description: m.Description + " - Optimized for coding agents (chunked writes)", + ContextLength: m.ContextLength, + MaxCompletionTokens: m.MaxCompletionTokens, + } + + // Copy thinking support if present + if m.Thinking != nil { + agentic.Thinking = ®istry.ThinkingSupport{ + Min: m.Thinking.Min, + Max: m.Thinking.Max, + ZeroAllowed: m.Thinking.ZeroAllowed, + DynamicAllowed: m.Thinking.DynamicAllowed, + } + } + + result = append(result, agentic) + } + + return result +} + diff --git a/sdk/cliproxy/service_codearts.go b/sdk/cliproxy/service_codearts.go new file mode 100644 index 0000000000..f1e1d477e2 --- /dev/null +++ b/sdk/cliproxy/service_codearts.go @@ -0,0 +1,59 @@ +package cliproxy + +import ( + "time" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" +) + +// getCodeArtsModels returns the hardcoded list of CodeArts models. 
+func getCodeArtsModels() []*ModelInfo { + now := time.Now().Unix() + return []*ModelInfo{ + { + ID: "Glm-5-internal", + Object: "model", + Created: now, + OwnedBy: "huaweicloud", + Type: "codearts", + DisplayName: "GLM-5 Internal", + Thinking: ®istry.ThinkingSupport{Levels: []string{"low", "medium", "high"}}, + }, + { + ID: "GLM-5.1", + Object: "model", + Created: now, + OwnedBy: "huaweicloud", + Type: "codearts", + DisplayName: "GLM-5.1", + Thinking: ®istry.ThinkingSupport{Levels: []string{"low", "medium", "high"}}, + }, + { + ID: "deepseek-v3.2", + Object: "model", + Created: now, + OwnedBy: "huaweicloud", + Type: "codearts", + DisplayName: "DeepSeek V3.2", + Thinking: ®istry.ThinkingSupport{Levels: []string{"low", "medium", "high"}}, + }, + { + ID: "Glm-4.7-internal", + Object: "model", + Created: now, + OwnedBy: "huaweicloud", + Type: "codearts", + DisplayName: "GLM-4.7 Internal", + Thinking: ®istry.ThinkingSupport{Levels: []string{"low", "medium", "high"}}, + }, + { + ID: "GLM-4.7-SFT-Harmony", + Object: "model", + Created: now, + OwnedBy: "huaweicloud", + Type: "codearts", + DisplayName: "GLM-4.7 SFT Harmony", + Thinking: ®istry.ThinkingSupport{Levels: []string{"low", "medium", "high"}}, + }, + } +} diff --git a/sdk/cliproxy/service_codex_executor_binding_test.go b/sdk/cliproxy/service_codex_executor_binding_test.go index bb4fc84e10..20a9cd7c86 100644 --- a/sdk/cliproxy/service_codex_executor_binding_test.go +++ b/sdk/cliproxy/service_codex_executor_binding_test.go @@ -3,8 +3,8 @@ package cliproxy import ( "testing" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func TestEnsureExecutorsForAuth_CodexDoesNotReplaceInNormalMode(t *testing.T) { diff --git a/sdk/cliproxy/service_excluded_models_test.go b/sdk/cliproxy/service_excluded_models_test.go index 
198a5bed73..fc16c09561 100644 --- a/sdk/cliproxy/service_excluded_models_test.go +++ b/sdk/cliproxy/service_excluded_models_test.go @@ -4,8 +4,8 @@ import ( "strings" "testing" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func TestRegisterModelsForAuth_UsesPreMergedExcludedModelsAttribute(t *testing.T) { diff --git a/sdk/cliproxy/service_gitlab_models_test.go b/sdk/cliproxy/service_gitlab_models_test.go new file mode 100644 index 0000000000..c487346a2a --- /dev/null +++ b/sdk/cliproxy/service_gitlab_models_test.go @@ -0,0 +1,86 @@ +package cliproxy + +import ( + "testing" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" +) + +func TestRegisterModelsForAuth_GitLabUsesDiscoveredModels(t *testing.T) { + service := &Service{cfg: &config.Config{}} + auth := &coreauth.Auth{ + ID: "gitlab-auth.json", + Provider: "gitlab", + Status: coreauth.StatusActive, + Metadata: map[string]any{ + "model_details": map[string]any{ + "model_provider": "anthropic", + "model_name": "claude-sonnet-4-5", + }, + }, + } + + reg := registry.GetGlobalRegistry() + reg.UnregisterClient(auth.ID) + t.Cleanup(func() { reg.UnregisterClient(auth.ID) }) + + service.registerModelsForAuth(auth) + models := reg.GetModelsForClient(auth.ID) + if len(models) < 2 { + t.Fatalf("expected stable alias and discovered model, got %d entries", len(models)) + } + + seenAlias := false + seenDiscovered := false + for _, model := range models { + switch model.ID { + case "gitlab-duo": + seenAlias = true + case "claude-sonnet-4-5": + seenDiscovered = true + } + } + if !seenAlias || !seenDiscovered { + t.Fatalf("expected gitlab-duo and discovered model, got %+v", models) + } +} + 
+func TestRegisterModelsForAuth_GitLabIncludesAgenticCatalog(t *testing.T) { + service := &Service{cfg: &config.Config{}} + auth := &coreauth.Auth{ + ID: "gitlab-agentic-auth.json", + Provider: "gitlab", + Status: coreauth.StatusActive, + } + + reg := registry.GetGlobalRegistry() + reg.UnregisterClient(auth.ID) + t.Cleanup(func() { reg.UnregisterClient(auth.ID) }) + + service.registerModelsForAuth(auth) + models := reg.GetModelsForClient(auth.ID) + if len(models) < 5 { + t.Fatalf("expected stable alias plus built-in agentic catalog, got %d entries", len(models)) + } + + required := map[string]bool{ + "gitlab-duo": false, + "duo-chat-opus-4-6": false, + "duo-chat-haiku-4-5": false, + "duo-chat-sonnet-4-5": false, + "duo-chat-opus-4-5": false, + "duo-chat-gpt-5-codex": false, + } + for _, model := range models { + if _, ok := required[model.ID]; ok { + required[model.ID] = true + } + } + for id, seen := range required { + if !seen { + t.Fatalf("expected built-in GitLab Duo model %q, got %+v", id, models) + } + } +} diff --git a/sdk/cliproxy/service_oauth_model_alias_test.go b/sdk/cliproxy/service_oauth_model_alias_test.go index 2caf7a178f..7405f7caca 100644 --- a/sdk/cliproxy/service_oauth_model_alias_test.go +++ b/sdk/cliproxy/service_oauth_model_alias_test.go @@ -3,7 +3,7 @@ package cliproxy import ( "testing" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func TestApplyOAuthModelAlias_Rename(t *testing.T) { diff --git a/sdk/cliproxy/service_stale_state_test.go b/sdk/cliproxy/service_stale_state_test.go index 010218d966..53849eb349 100644 --- a/sdk/cliproxy/service_stale_state_test.go +++ b/sdk/cliproxy/service_stale_state_test.go @@ -5,9 +5,9 @@ import ( "testing" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func TestServiceApplyCoreAuthAddOrUpdate_DeleteReAddDoesNotInheritStaleRuntimeState(t *testing.T) { @@ -99,3 +99,32 @@ func TestServiceApplyCoreAuthAddOrUpdate_DeleteReAddDoesNotInheritStaleRuntimeSt t.Fatalf("expected re-added auth to re-register models in global registry") } } + +func TestForceHomeRuntimeConfigEnablesUsageStatistics(t *testing.T) { + cfg := &config.Config{ + UsageStatisticsEnabled: false, + } + + forceHomeRuntimeConfig(cfg) + + if !cfg.UsageStatisticsEnabled { + t.Fatal("expected home runtime config to force usage statistics enabled") + } +} + +func TestApplyHomeOverlayForcesUsageStatisticsEnabled(t *testing.T) { + baseCfg := &config.Config{} + baseCfg.Home.Enabled = true + service := &Service{cfg: baseCfg} + + service.applyHomeOverlay(&config.Config{ + UsageStatisticsEnabled: false, + }) + + if service.cfg == nil || !service.cfg.UsageStatisticsEnabled { + t.Fatal("expected home overlay to force usage statistics enabled") + } + if !service.cfg.Home.Enabled { + t.Fatal("expected home overlay to preserve local home settings") + } +} diff --git a/sdk/cliproxy/types.go b/sdk/cliproxy/types.go index 1521dffee4..c30b712bdd 100644 --- a/sdk/cliproxy/types.go +++ b/sdk/cliproxy/types.go @@ -6,9 +6,9 @@ package cliproxy import ( "context" - "github.com/router-for-me/CLIProxyAPI/v6/internal/watcher" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) // TokenClientProvider loads clients backed by stored authentication tokens. 
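The overlay tests above pin down the merge rule in `applyHomeOverlay`: the remote config wins field-by-field, except the local listener settings (`Host`, `Port`, `TLS`) and the `Home` block are preserved, and the home-mode runtime flags are then forced on top. A reduced sketch of that merge with stand-in fields (not the real `config.Config`):

```go
package main

import "fmt"

// cfg is a minimal stand-in for the handful of fields the merge touches.
type cfg struct {
	Host                   string
	Port                   int
	HomeEnabled            bool
	APIKeys                []string
	UsageStatisticsEnabled bool
	AllowRemote            bool
}

// applyOverlay mirrors applyHomeOverlay + forceHomeRuntimeConfig:
// start from the remote config, restore local listener/home settings,
// then force the home-mode invariants regardless of what remote sent.
func applyOverlay(base, remote cfg) cfg {
	merged := remote
	merged.Host = base.Host
	merged.Port = base.Port
	merged.HomeEnabled = base.HomeEnabled
	// forceHomeRuntimeConfig equivalent:
	merged.APIKeys = nil
	merged.UsageStatisticsEnabled = true
	merged.AllowRemote = false
	return merged
}

func main() {
	base := cfg{Host: "127.0.0.1", Port: 8317, HomeEnabled: true}
	remote := cfg{Host: "0.0.0.0", Port: 80, APIKeys: []string{"k"}, AllowRemote: true}
	m := applyOverlay(base, remote)
	fmt.Println(m.Host, m.Port, m.HomeEnabled, len(m.APIKeys), m.UsageStatisticsEnabled, m.AllowRemote)
}
```

Forcing the flags after the merge is what makes `TestApplyHomeOverlayForcesUsageStatisticsEnabled` hold even when the remote payload disables usage statistics.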
diff --git a/sdk/cliproxy/usage/manager.go b/sdk/cliproxy/usage/manager.go index 8d24f51f4e..2305d9a484 100644 --- a/sdk/cliproxy/usage/manager.go +++ b/sdk/cliproxy/usage/manager.go @@ -2,6 +2,7 @@ package usage import ( "context" + "strings" "sync" "time" @@ -12,16 +13,25 @@ import ( type Record struct { Provider string Model string + Alias string APIKey string AuthID string AuthIndex string + AuthType string Source string RequestedAt time.Time Latency time.Duration Failed bool + Fail Failure Detail Detail } +// Failure holds HTTP failure metadata for an upstream request attempt. +type Failure struct { + StatusCode int + Body string +} + // Detail holds the token usage breakdown. type Detail struct { InputTokens int64 @@ -31,6 +41,36 @@ type Detail struct { TotalTokens int64 } +type requestedModelAliasContextKey struct{} + +// WithRequestedModelAlias stores the client-requested model name for usage sinks. +func WithRequestedModelAlias(ctx context.Context, alias string) context.Context { + if ctx == nil { + ctx = context.Background() + } + alias = strings.TrimSpace(alias) + if alias == "" { + return ctx + } + return context.WithValue(ctx, requestedModelAliasContextKey{}, alias) +} + +// RequestedModelAliasFromContext returns the client-requested model name stored in ctx. +func RequestedModelAliasFromContext(ctx context.Context) string { + if ctx == nil { + return "" + } + raw := ctx.Value(requestedModelAliasContextKey{}) + switch value := raw.(type) { + case string: + return strings.TrimSpace(value) + case []byte: + return strings.TrimSpace(string(value)) + default: + return "" + } +} + // Plugin consumes usage records emitted by the proxy runtime. 
type Plugin interface { HandleUsage(ctx context.Context, record Record) diff --git a/sdk/cliproxy/watcher.go b/sdk/cliproxy/watcher.go index caeadf19b9..e4a9081b41 100644 --- a/sdk/cliproxy/watcher.go +++ b/sdk/cliproxy/watcher.go @@ -3,9 +3,9 @@ package cliproxy import ( "context" - "github.com/router-for-me/CLIProxyAPI/v6/internal/watcher" - coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - "github.com/router-for-me/CLIProxyAPI/v6/sdk/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/watcher" + coreauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + "github.com/router-for-me/CLIProxyAPI/v7/sdk/config" ) func defaultWatcherFactory(configPath, authDir string, reload func(*config.Config)) (*WatcherWrapper, error) { diff --git a/sdk/config/config.go b/sdk/config/config.go index 14163418f7..d39e512de1 100644 --- a/sdk/config/config.go +++ b/sdk/config/config.go @@ -4,7 +4,7 @@ // embed CLIProxyAPI without importing internal packages. package config -import internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config" +import internalconfig "github.com/router-for-me/CLIProxyAPI/v7/internal/config" type SDKConfig = internalconfig.SDKConfig @@ -41,6 +41,8 @@ func LoadConfigOptional(configFile string, optional bool) (*Config, error) { return internalconfig.LoadConfigOptional(configFile, optional) } +func ParseConfigBytes(data []byte) (*Config, error) { return internalconfig.ParseConfigBytes(data) } + func SaveConfigPreserveComments(configFile string, cfg *Config) error { return internalconfig.SaveConfigPreserveComments(configFile, cfg) } diff --git a/sdk/logging/request_logger.go b/sdk/logging/request_logger.go index ddbda6b8b0..5f8cf754e1 100644 --- a/sdk/logging/request_logger.go +++ b/sdk/logging/request_logger.go @@ -1,7 +1,7 @@ // Package logging re-exports request logging primitives for SDK consumers. 
package logging -import internallogging "github.com/router-for-me/CLIProxyAPI/v6/internal/logging" +import internallogging "github.com/router-for-me/CLIProxyAPI/v7/internal/logging" const defaultErrorLogsMaxFiles = 10 diff --git a/sdk/translator/builtin/builtin.go b/sdk/translator/builtin/builtin.go index 798e43f1a9..f95e65870f 100644 --- a/sdk/translator/builtin/builtin.go +++ b/sdk/translator/builtin/builtin.go @@ -2,9 +2,9 @@ package builtin import ( - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator" ) // Registry exposes the default registry populated with all built-in translators. diff --git a/server b/server new file mode 100755 index 0000000000..3fbaa342cb Binary files /dev/null and b/server differ diff --git a/test/amp_management_test.go b/test/amp_management_test.go index e384ef0e8b..6c694db6fa 100644 --- a/test/amp_management_test.go +++ b/test/amp_management_test.go @@ -10,8 +10,8 @@ import ( "testing" "github.com/gin-gonic/gin" - "github.com/router-for-me/CLIProxyAPI/v6/internal/api/handlers/management" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/api/handlers/management" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" ) func init() { diff --git a/test/builtin_tools_translation_test.go b/test/builtin_tools_translation_test.go index 07d7671544..70ee0ac1b9 100644 --- a/test/builtin_tools_translation_test.go +++ b/test/builtin_tools_translation_test.go @@ -3,9 +3,9 @@ package test import ( "testing" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + sdktranslator 
"github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" ) diff --git a/test/thinking_conversion_test.go b/test/thinking_conversion_test.go index c6ade7b2a6..9173aa0194 100644 --- a/test/thinking_conversion_test.go +++ b/test/thinking_conversion_test.go @@ -2,24 +2,23 @@ package test import ( "fmt" - "strings" "testing" "time" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/translator" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/translator" // Import provider packages to trigger init() registration of ProviderAppliers - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/antigravity" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/claude" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/codex" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/gemini" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/geminicli" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/kimi" - _ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/openai" - - "github.com/router-for-me/CLIProxyAPI/v6/internal/registry" - "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/antigravity" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/claude" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/codex" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/gemini" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/geminicli" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/kimi" + _ "github.com/router-for-me/CLIProxyAPI/v7/internal/thinking/provider/openai" + + "github.com/router-for-me/CLIProxyAPI/v7/internal/registry" + 
"github.com/router-for-me/CLIProxyAPI/v7/internal/thinking" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" "github.com/tidwall/gjson" "github.com/tidwall/sjson" ) @@ -1066,12 +1065,12 @@ func TestThinkingE2EMatrix_Suffix(t *testing.T) { expectErr: false, }, - // Gemini Family Cross-Channel Consistency (Cases 106-114) + // Gemini Family Cross-Channel Consistency (Cases 90-95) // Tests that gemini/gemini-cli/antigravity as same API family should have consistent validation behavior - // Case 106: Gemini to Antigravity, budget 64000 (suffix) → clamped to Max + // Case 90: Gemini to Antigravity, budget 64000 (suffix) → clamped to Max { - name: "106", + name: "90", from: "gemini", to: "antigravity", model: "gemini-budget-model(64000)", @@ -1081,9 +1080,9 @@ func TestThinkingE2EMatrix_Suffix(t *testing.T) { includeThoughts: "true", expectErr: false, }, - // Case 107: Gemini to Gemini-CLI, budget 64000 (suffix) → clamped to Max + // Case 91: Gemini to Gemini-CLI, budget 64000 (suffix) → clamped to Max { - name: "107", + name: "91", from: "gemini", to: "gemini-cli", model: "gemini-budget-model(64000)", @@ -1093,9 +1092,9 @@ func TestThinkingE2EMatrix_Suffix(t *testing.T) { includeThoughts: "true", expectErr: false, }, - // Case 108: Gemini-CLI to Antigravity, budget 64000 (suffix) → clamped to Max + // Case 92: Gemini-CLI to Antigravity, budget 64000 (suffix) → clamped to Max { - name: "108", + name: "92", from: "gemini-cli", to: "antigravity", model: "gemini-budget-model(64000)", @@ -1105,9 +1104,9 @@ func TestThinkingE2EMatrix_Suffix(t *testing.T) { includeThoughts: "true", expectErr: false, }, - // Case 109: Gemini-CLI to Gemini, budget 64000 (suffix) → clamped to Max + // Case 93: Gemini-CLI to Gemini, budget 64000 (suffix) → clamped to Max { - name: "109", + name: "93", from: "gemini-cli", to: "gemini", model: "gemini-budget-model(64000)", @@ -1117,9 +1116,9 @@ func TestThinkingE2EMatrix_Suffix(t *testing.T) { includeThoughts: "true", 
expectErr: false, }, - // Case 110: Gemini to Antigravity, budget 8192 → passthrough (normal value) + // Case 94: Gemini to Antigravity, budget 8192 → passthrough (normal value) { - name: "110", + name: "94", from: "gemini", to: "antigravity", model: "gemini-budget-model(8192)", @@ -1129,9 +1128,9 @@ func TestThinkingE2EMatrix_Suffix(t *testing.T) { includeThoughts: "true", expectErr: false, }, - // Case 111: Gemini-CLI to Antigravity, budget 8192 → passthrough (normal value) + // Case 95: Gemini-CLI to Antigravity, budget 8192 → passthrough (normal value) { - name: "111", + name: "95", from: "gemini-cli", to: "antigravity", model: "gemini-budget-model(8192)", @@ -2167,12 +2166,12 @@ func TestThinkingE2EMatrix_Body(t *testing.T) { expectErr: true, }, - // Gemini Family Cross-Channel Consistency (Cases 106-114) + // Gemini Family Cross-Channel Consistency (Cases 90-95) // Tests that gemini/gemini-cli/antigravity as same API family should have consistent validation behavior - // Case 106: Gemini to Antigravity, thinkingBudget=64000 → exceeds Max error (same family strict validation) + // Case 90: Gemini to Antigravity, thinkingBudget=64000 → exceeds Max error (same family strict validation) { - name: "106", + name: "90", from: "gemini", to: "antigravity", model: "gemini-budget-model", @@ -2180,9 +2179,9 @@ func TestThinkingE2EMatrix_Body(t *testing.T) { expectField: "", expectErr: true, }, - // Case 107: Gemini to Gemini-CLI, thinkingBudget=64000 → exceeds Max error (same family strict validation) + // Case 91: Gemini to Gemini-CLI, thinkingBudget=64000 → exceeds Max error (same family strict validation) { - name: "107", + name: "91", from: "gemini", to: "gemini-cli", model: "gemini-budget-model", @@ -2190,9 +2189,9 @@ func TestThinkingE2EMatrix_Body(t *testing.T) { expectField: "", expectErr: true, }, - // Case 108: Gemini-CLI to Antigravity, thinkingBudget=64000 → exceeds Max error (same family strict validation) + // Case 92: Gemini-CLI to Antigravity, 
thinkingBudget=64000 → exceeds Max error (same family strict validation) { - name: "108", + name: "92", from: "gemini-cli", to: "antigravity", model: "gemini-budget-model", @@ -2200,9 +2199,9 @@ func TestThinkingE2EMatrix_Body(t *testing.T) { expectField: "", expectErr: true, }, - // Case 109: Gemini-CLI to Gemini, thinkingBudget=64000 → exceeds Max error (same family strict validation) + // Case 93: Gemini-CLI to Gemini, thinkingBudget=64000 → exceeds Max error (same family strict validation) { - name: "109", + name: "93", from: "gemini-cli", to: "gemini", model: "gemini-budget-model", @@ -2210,9 +2209,9 @@ func TestThinkingE2EMatrix_Body(t *testing.T) { expectField: "", expectErr: true, }, - // Case 110: Gemini to Antigravity, thinkingBudget=8192 → passthrough (normal value) + // Case 94: Gemini to Antigravity, thinkingBudget=8192 → passthrough (normal value) { - name: "110", + name: "94", from: "gemini", to: "antigravity", model: "gemini-budget-model", @@ -2222,9 +2221,9 @@ func TestThinkingE2EMatrix_Body(t *testing.T) { includeThoughts: "true", expectErr: false, }, - // Case 111: Gemini-CLI to Antigravity, thinkingBudget=8192 → passthrough (normal value) + // Case 95: Gemini-CLI to Antigravity, thinkingBudget=8192 → passthrough (normal value) { - name: "111", + name: "95", from: "gemini-cli", to: "antigravity", model: "gemini-budget-model", diff --git a/test/usage_logging_test.go b/test/usage_logging_test.go index 41c2ee341a..bcf6d19254 100644 --- a/test/usage_logging_test.go +++ b/test/usage_logging_test.go @@ -2,21 +2,22 @@ package test import ( "context" + "encoding/json" "fmt" "net/http" "net/http/httptest" "testing" "time" - "github.com/router-for-me/CLIProxyAPI/v6/internal/config" - runtimeexecutor "github.com/router-for-me/CLIProxyAPI/v6/internal/runtime/executor" - internalusage "github.com/router-for-me/CLIProxyAPI/v6/internal/usage" - cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth" - cliproxyexecutor 
"github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor" - sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator" + "github.com/router-for-me/CLIProxyAPI/v7/internal/config" + "github.com/router-for-me/CLIProxyAPI/v7/internal/redisqueue" + runtimeexecutor "github.com/router-for-me/CLIProxyAPI/v7/internal/runtime/executor" + cliproxyauth "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/auth" + cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v7/sdk/cliproxy/executor" + sdktranslator "github.com/router-for-me/CLIProxyAPI/v7/sdk/translator" ) -func TestGeminiExecutorRecordsSuccessfulZeroUsageInStatistics(t *testing.T) { +func TestGeminiExecutorRecordsSuccessfulZeroUsageInQueue(t *testing.T) { model := fmt.Sprintf("gemini-2.5-flash-zero-usage-%d", time.Now().UnixNano()) source := fmt.Sprintf("zero-usage-%d@example.com", time.Now().UnixNano()) @@ -42,10 +43,15 @@ func TestGeminiExecutorRecordsSuccessfulZeroUsageInStatistics(t *testing.T) { }, } - prevStatsEnabled := internalusage.StatisticsEnabled() - internalusage.SetStatisticsEnabled(true) + prevQueueEnabled := redisqueue.Enabled() + prevUsageEnabled := redisqueue.UsageStatisticsEnabled() + redisqueue.SetEnabled(false) + redisqueue.SetEnabled(true) + redisqueue.SetUsageStatisticsEnabled(true) t.Cleanup(func() { - internalusage.SetStatisticsEnabled(prevStatsEnabled) + redisqueue.SetEnabled(false) + redisqueue.SetEnabled(prevQueueEnabled) + redisqueue.SetUsageStatisticsEnabled(prevUsageEnabled) }) _, err := executor.Execute(context.Background(), auth, cliproxyexecutor.Request{ @@ -59,39 +65,58 @@ func TestGeminiExecutorRecordsSuccessfulZeroUsageInStatistics(t *testing.T) { t.Fatalf("Execute error: %v", err) } - detail := waitForStatisticsDetail(t, "gemini", model, source) - if detail.Failed { - t.Fatalf("detail failed = true, want false") - } - if detail.Tokens.TotalTokens != 0 { - t.Fatalf("total tokens = %d, want 0", detail.Tokens.TotalTokens) - } + 
waitForQueuedUsageModelTotalTokens(t, "gemini", model, 0) } -func waitForStatisticsDetail(t *testing.T, apiName, model, source string) internalusage.RequestDetail { +func waitForQueuedUsageModelTotalTokens(t *testing.T, wantProvider, wantModel string, wantTokens int64) { t.Helper() deadline := time.Now().Add(2 * time.Second) for time.Now().Before(deadline) { - snapshot := internalusage.GetRequestStatistics().Snapshot() - apiSnapshot, ok := snapshot.APIs[apiName] - if !ok { - time.Sleep(10 * time.Millisecond) - continue - } - modelSnapshot, ok := apiSnapshot.Models[model] - if !ok { - time.Sleep(10 * time.Millisecond) - continue - } - for _, detail := range modelSnapshot.Details { - if detail.Source == source { - return detail + items := redisqueue.PopOldest(10) + for _, item := range items { + got, ok := parseQueuedUsagePayload(t, item) + if !ok { + continue } + if got.Provider != wantProvider || got.Model != wantModel { + continue + } + if got.Failed { + t.Fatalf("payload failed = true, want false") + } + if got.Tokens.TotalTokens != wantTokens { + t.Fatalf("payload total tokens = %d, want %d", got.Tokens.TotalTokens, wantTokens) + } + return } time.Sleep(10 * time.Millisecond) } - t.Fatalf("timed out waiting for statistics detail for api=%q model=%q source=%q", apiName, model, source) - return internalusage.RequestDetail{} + t.Fatalf("timed out waiting for queued usage payload for provider=%q model=%q", wantProvider, wantModel) +} + +type queuedUsagePayload struct { + Provider string `json:"provider"` + Model string `json:"model"` + Failed bool `json:"failed"` + Tokens struct { + TotalTokens int64 `json:"total_tokens"` + } `json:"tokens"` +} + +func parseQueuedUsagePayload(t *testing.T, payload []byte) (queuedUsagePayload, bool) { + t.Helper() + + var parsed queuedUsagePayload + if len(payload) == 0 { + return parsed, false + } + if err := json.Unmarshal(payload, &parsed); err != nil { + return parsed, false + } + if parsed.Provider == "" || parsed.Model == "" { 
+ return parsed, false + } + return parsed, true } diff --git a/web/.gitignore b/web/.gitignore new file mode 100644 index 0000000000..5ef6a52078 --- /dev/null +++ b/web/.gitignore @@ -0,0 +1,41 @@ +# See https://help.github.com/articles/ignoring-files/ for more about ignoring files. + +# dependencies +/node_modules +/.pnp +.pnp.* +.yarn/* +!.yarn/patches +!.yarn/plugins +!.yarn/releases +!.yarn/versions + +# testing +/coverage + +# next.js +/.next/ +/out/ + +# production +/build + +# misc +.DS_Store +*.pem + +# debug +npm-debug.log* +yarn-debug.log* +yarn-error.log* +.pnpm-debug.log* + +# env files (can opt-in for committing if needed) +.env* + +# vercel +.vercel + +# typescript +*.tsbuildinfo +next-env.d.ts diff --git a/web/components.json b/web/components.json new file mode 100644 index 0000000000..2a427859ba --- /dev/null +++ b/web/components.json @@ -0,0 +1,25 @@ +{ + "$schema": "https://ui.shadcn.com/schema.json", + "style": "radix-nova", + "rsc": true, + "tsx": true, + "tailwind": { + "config": "", + "css": "src/app/globals.css", + "baseColor": "neutral", + "cssVariables": true, + "prefix": "" + }, + "iconLibrary": "lucide", + "rtl": false, + "aliases": { + "components": "@/components", + "utils": "@/lib/utils", + "ui": "@/components/ui", + "lib": "@/lib", + "hooks": "@/hooks" + }, + "menuColor": "default", + "menuAccent": "subtle", + "registries": {} +} diff --git a/web/eslint.config.mjs b/web/eslint.config.mjs new file mode 100644 index 0000000000..aa6d09d5cf --- /dev/null +++ b/web/eslint.config.mjs @@ -0,0 +1,13 @@ +import nextConfig from "eslint-config-next"; + +const eslintConfig = [ + ...nextConfig, + { + rules: { + "react-hooks/set-state-in-effect": "off", + "react-hooks/static-components": "off", + }, + }, +]; + +export default eslintConfig; diff --git a/web/next.config.ts b/web/next.config.ts new file mode 100644 index 0000000000..dc0f77b073 --- /dev/null +++ b/web/next.config.ts @@ -0,0 +1,7 @@ +import type { NextConfig } from "next"; + +const 
nextConfig: NextConfig = { + output: 'export', +}; + +export default nextConfig; diff --git a/web/package-lock.json b/web/package-lock.json new file mode 100644 index 0000000000..80580d3abd --- /dev/null +++ b/web/package-lock.json @@ -0,0 +1,11486 @@ +{ + "name": "cli-proxy-management", + "version": "0.1.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "cli-proxy-management", + "version": "0.1.0", + "dependencies": { + "class-variance-authority": "^0.7.1", + "clsx": "^2.1.1", + "lucide-react": "^1.14.0", + "next": "16.2.4", + "next-themes": "^0.4.6", + "radix-ui": "^1.4.3", + "react": "19.2.4", + "react-dom": "19.2.4", + "shadcn": "^4.6.0", + "sonner": "^2.0.7", + "tailwind-merge": "^3.5.0", + "tw-animate-css": "^1.4.0" + }, + "devDependencies": { + "@tailwindcss/postcss": "^4", + "@types/node": "^20", + "@types/react": "^19", + "@types/react-dom": "^19", + "eslint": "^9", + "eslint-config-next": "16.2.4", + "tailwindcss": "^4", + "typescript": "^5" + } + }, + "node_modules/@alloc/quick-lru": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/@alloc/quick-lru/-/quick-lru-5.2.0.tgz", + "integrity": "sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@babel/code-frame": { + "version": "7.29.0", + "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.29.0.tgz", + "integrity": "sha512-9NhCeYjq9+3uxgdtp20LSiJXJvN0FeCtNGpJxuMFZ1Kv3cWUNb6DOhJwUvcVCzKGR66cw4njwM6hrJLqgOwbcw==", + "license": "MIT", + "dependencies": { + "@babel/helper-validator-identifier": "^7.28.5", + "js-tokens": "^4.0.0", + "picocolors": "^1.1.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/compat-data": { + "version": "7.29.3", + "resolved": 
"https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.29.3.tgz", + "integrity": "sha512-LIVqM46zQWZhj17qA8wb4nW/ixr2y1Nw+r1etiAWgRM6U1IqP+LNhL1yg440jYZR72jCWcWbLWzIosH+uP1fqg==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/core": { + "version": "7.29.0", + "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.29.0.tgz", + "integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==", + "license": "MIT", + "peer": true, + "dependencies": { + "@babel/code-frame": "^7.29.0", + "@babel/generator": "^7.29.0", + "@babel/helper-compilation-targets": "^7.28.6", + "@babel/helper-module-transforms": "^7.28.6", + "@babel/helpers": "^7.28.6", + "@babel/parser": "^7.29.0", + "@babel/template": "^7.28.6", + "@babel/traverse": "^7.29.0", + "@babel/types": "^7.29.0", + "@jridgewell/remapping": "^2.3.5", + "convert-source-map": "^2.0.0", + "debug": "^4.1.0", + "gensync": "^1.0.0-beta.2", + "json5": "^2.2.3", + "semver": "^6.3.1" + }, + "engines": { + "node": ">=6.9.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/babel" + } + }, + "node_modules/@babel/generator": { + "version": "7.29.1", + "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.29.1.tgz", + "integrity": "sha512-qsaF+9Qcm2Qv8SRIMMscAvG4O3lJ0F1GuMo5HR/Bp02LopNgnZBC/EkbevHFeGs4ls/oPz9v+Bsmzbkbe+0dUw==", + "license": "MIT", + "dependencies": { + "@babel/parser": "^7.29.0", + "@babel/types": "^7.29.0", + "@jridgewell/gen-mapping": "^0.3.12", + "@jridgewell/trace-mapping": "^0.3.28", + "jsesc": "^3.0.2" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-annotate-as-pure": { + "version": "7.27.3", + "resolved": "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.27.3.tgz", + "integrity": "sha512-fXSwMQqitTGeHLBC08Eq5yXz2m37E4pJX1qAU1+2cNedz/ifv/bVXft90VeSav5nFO61EcNgwr0aJxbyPaWBPg==", + "license": 
"MIT", + "dependencies": { + "@babel/types": "^7.27.3" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-compilation-targets": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.28.6.tgz", + "integrity": "sha512-JYtls3hqi15fcx5GaSNL7SCTJ2MNmjrkHXg4FSpOA/grxK8KwyZ5bubHsCq8FXCkua6xhuaaBit+3b7+VZRfcA==", + "license": "MIT", + "dependencies": { + "@babel/compat-data": "^7.28.6", + "@babel/helper-validator-option": "^7.27.1", + "browserslist": "^4.24.0", + "lru-cache": "^5.1.1", + "semver": "^6.3.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-create-class-features-plugin": { + "version": "7.29.3", + "resolved": "https://registry.npmjs.org/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.29.3.tgz", + "integrity": "sha512-RpLYy2sb51oNLjuu1iD3bwBqCBWUzjO0ocp+iaCP/lJtb2CPLcnC2Fftw+4sAzaMELGeWTgExSKADbdo0GFVzA==", + "license": "MIT", + "dependencies": { + "@babel/helper-annotate-as-pure": "^7.27.3", + "@babel/helper-member-expression-to-functions": "^7.28.5", + "@babel/helper-optimise-call-expression": "^7.27.1", + "@babel/helper-replace-supers": "^7.28.6", + "@babel/helper-skip-transparent-expression-wrappers": "^7.27.1", + "@babel/traverse": "^7.29.0", + "semver": "^6.3.1" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-globals": { + "version": "7.28.0", + "resolved": "https://registry.npmjs.org/@babel/helper-globals/-/helper-globals-7.28.0.tgz", + "integrity": "sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-member-expression-to-functions": { + "version": "7.28.5", + "resolved": 
"https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.28.5.tgz", + "integrity": "sha512-cwM7SBRZcPCLgl8a7cY0soT1SptSzAlMH39vwiRpOQkJlh53r5hdHwLSCZpQdVLT39sZt+CRpNwYG4Y2v77atg==", + "license": "MIT", + "dependencies": { + "@babel/traverse": "^7.28.5", + "@babel/types": "^7.28.5" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-module-imports": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.28.6.tgz", + "integrity": "sha512-l5XkZK7r7wa9LucGw9LwZyyCUscb4x37JWTPz7swwFE/0FMQAGpiWUZn8u9DzkSBWEcK25jmvubfpw2dnAMdbw==", + "license": "MIT", + "dependencies": { + "@babel/traverse": "^7.28.6", + "@babel/types": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-module-transforms": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.28.6.tgz", + "integrity": "sha512-67oXFAYr2cDLDVGLXTEABjdBJZ6drElUSI7WKp70NrpyISso3plG9SAGEF6y7zbha/wOzUByWWTJvEDVNIUGcA==", + "license": "MIT", + "dependencies": { + "@babel/helper-module-imports": "^7.28.6", + "@babel/helper-validator-identifier": "^7.28.5", + "@babel/traverse": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-optimise-call-expression": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.27.1.tgz", + "integrity": "sha512-URMGH08NzYFhubNSGJrpUEphGKQwMQYBySzat5cAByY1/YgIRkULnIy3tAMeszlL/so2HbeilYloUmSpd7GdVw==", + "license": "MIT", + "dependencies": { + "@babel/types": "^7.27.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-plugin-utils": { + "version": "7.28.6", + "resolved": 
"https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.28.6.tgz", + "integrity": "sha512-S9gzZ/bz83GRysI7gAD4wPT/AI3uCnY+9xn+Mx/KPs2JwHJIz1W8PZkg2cqyt3RNOBM8ejcXhV6y8Og7ly/Dug==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-replace-supers": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.28.6.tgz", + "integrity": "sha512-mq8e+laIk94/yFec3DxSjCRD2Z0TAjhVbEJY3UQrlwVo15Lmt7C2wAUbK4bjnTs4APkwsYLTahXRraQXhb1WCg==", + "license": "MIT", + "dependencies": { + "@babel/helper-member-expression-to-functions": "^7.28.5", + "@babel/helper-optimise-call-expression": "^7.27.1", + "@babel/traverse": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-skip-transparent-expression-wrappers": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-skip-transparent-expression-wrappers/-/helper-skip-transparent-expression-wrappers-7.27.1.tgz", + "integrity": "sha512-Tub4ZKEXqbPjXgWLl2+3JpQAYBJ8+ikpQ2Ocj/q/r0LwE3UhENh7EUabyHjz2kCEsrRY83ew2DQdHluuiDQFzg==", + "license": "MIT", + "dependencies": { + "@babel/traverse": "^7.27.1", + "@babel/types": "^7.27.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-string-parser": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz", + "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-validator-identifier": { + "version": "7.28.5", + "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.28.5.tgz", + "integrity": 
"sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-validator-option": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-validator-option/-/helper-validator-option-7.27.1.tgz", + "integrity": "sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helpers": { + "version": "7.29.2", + "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.29.2.tgz", + "integrity": "sha512-HoGuUs4sCZNezVEKdVcwqmZN8GoHirLUcLaYVNBK2J0DadGtdcqgr3BCbvH8+XUo4NGjNl3VOtSjEKNzqfFgKw==", + "license": "MIT", + "dependencies": { + "@babel/template": "^7.28.6", + "@babel/types": "^7.29.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/parser": { + "version": "7.29.3", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.29.3.tgz", + "integrity": "sha512-b3ctpQwp+PROvU/cttc4OYl4MzfJUWy6FZg+PMXfzmt/+39iHVF0sDfqay8TQM3JA2EUOyKcFZt75jWriQijsA==", + "license": "MIT", + "dependencies": { + "@babel/types": "^7.29.0" + }, + "bin": { + "parser": "bin/babel-parser.js" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@babel/plugin-syntax-jsx": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.28.6.tgz", + "integrity": "sha512-wgEmr06G6sIpqr8YDwA2dSRTE3bJ+V0IfpzfSY3Lfgd7YWOaAdlykvJi13ZKBt8cZHfgH1IXN+CL656W3uUa4w==", + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-typescript": { + "version": "7.28.6", + "resolved": 
"https://registry.npmjs.org/@babel/plugin-syntax-typescript/-/plugin-syntax-typescript-7.28.6.tgz", + "integrity": "sha512-+nDNmQye7nlnuuHDboPbGm00Vqg3oO8niRRL27/4LYHUsHYh0zJ1xWOz0uRwNFmM1Avzk8wZbc6rdiYhomzv/A==", + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-modules-commonjs": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-commonjs/-/plugin-transform-modules-commonjs-7.28.6.tgz", + "integrity": "sha512-jppVbf8IV9iWWwWTQIxJMAJCWBuuKx71475wHwYytrRGQ2CWiDvYlADQno3tcYpS/T2UUWFQp3nVtYfK/YBQrA==", + "license": "MIT", + "dependencies": { + "@babel/helper-module-transforms": "^7.28.6", + "@babel/helper-plugin-utils": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-typescript": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-typescript/-/plugin-transform-typescript-7.28.6.tgz", + "integrity": "sha512-0YWL2RFxOqEm9Efk5PvreamxPME8OyY0wM5wh5lHjF+VtVhdneCWGzZeSqzOfiobVqQaNCd2z0tQvnI9DaPWPw==", + "license": "MIT", + "dependencies": { + "@babel/helper-annotate-as-pure": "^7.27.3", + "@babel/helper-create-class-features-plugin": "^7.28.6", + "@babel/helper-plugin-utils": "^7.28.6", + "@babel/helper-skip-transparent-expression-wrappers": "^7.27.1", + "@babel/plugin-syntax-typescript": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/preset-typescript": { + "version": "7.28.5", + "resolved": "https://registry.npmjs.org/@babel/preset-typescript/-/preset-typescript-7.28.5.tgz", + "integrity": "sha512-+bQy5WOI2V6LJZpPVxY+yp66XdZ2yifu0Mc1aP5CQKgjn4QM5IN2i5fAZ4xKop47pr8rpVhiAeu+nDQa12C8+g==", + "license": "MIT", + 
"dependencies": { + "@babel/helper-plugin-utils": "^7.27.1", + "@babel/helper-validator-option": "^7.27.1", + "@babel/plugin-syntax-jsx": "^7.27.1", + "@babel/plugin-transform-modules-commonjs": "^7.27.1", + "@babel/plugin-transform-typescript": "^7.28.5" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/template": { + "version": "7.28.6", + "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.28.6.tgz", + "integrity": "sha512-YA6Ma2KsCdGb+WC6UpBVFJGXL58MDA6oyONbjyF/+5sBgxY/dwkhLogbMT2GXXyU84/IhRw/2D1Os1B/giz+BQ==", + "license": "MIT", + "dependencies": { + "@babel/code-frame": "^7.28.6", + "@babel/parser": "^7.28.6", + "@babel/types": "^7.28.6" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/traverse": { + "version": "7.29.0", + "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.29.0.tgz", + "integrity": "sha512-4HPiQr0X7+waHfyXPZpWPfWL/J7dcN1mx9gL6WdQVMbPnF3+ZhSMs8tCxN7oHddJE9fhNE7+lxdnlyemKfJRuA==", + "license": "MIT", + "dependencies": { + "@babel/code-frame": "^7.29.0", + "@babel/generator": "^7.29.0", + "@babel/helper-globals": "^7.28.0", + "@babel/parser": "^7.29.0", + "@babel/template": "^7.28.6", + "@babel/types": "^7.29.0", + "debug": "^4.3.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/types": { + "version": "7.29.0", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.29.0.tgz", + "integrity": "sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==", + "license": "MIT", + "dependencies": { + "@babel/helper-string-parser": "^7.27.1", + "@babel/helper-validator-identifier": "^7.28.5" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@dotenvx/dotenvx": { + "version": "1.64.0", + "resolved": "https://registry.npmjs.org/@dotenvx/dotenvx/-/dotenvx-1.64.0.tgz", + "integrity": 
"sha512-6+xRpZaWuHXEqnhBjae+VmQI9Uaqw5Uzu/ScpO+W7ww9Zp3lHSNBoNjFcUxhrCyc7pRGQzyDjhKzloqrPHERiQ==", + "license": "BSD-3-Clause", + "dependencies": { + "commander": "^11.1.0", + "dotenv": "^17.2.1", + "eciesjs": "^0.4.10", + "execa": "^5.1.1", + "fdir": "^6.2.0", + "ignore": "^5.3.0", + "object-treeify": "1.1.33", + "picomatch": "^4.0.4", + "which": "^4.0.0", + "yocto-spinner": "^1.1.0" + }, + "bin": { + "dotenvx": "src/cli/dotenvx.js" + }, + "funding": { + "url": "https://dotenvx.com" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/commander": { + "version": "11.1.0", + "resolved": "https://registry.npmjs.org/commander/-/commander-11.1.0.tgz", + "integrity": "sha512-yPVavfyCcRhmorC7rWlkHn15b4wDVgVmBA7kV4QVBsF7kv/9TKJAbAXVTxvTnwP8HHKjRCJDClKbciiYS7p0DQ==", + "license": "MIT", + "engines": { + "node": ">=16" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/execa": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/execa/-/execa-5.1.1.tgz", + "integrity": "sha512-8uSpZZocAZRBAPIEINJj3Lo9HyGitllczc27Eh5YYojjMFMn8yHMDMaUHE2Jqfq05D/wucwI4JGURyXt1vchyg==", + "license": "MIT", + "dependencies": { + "cross-spawn": "^7.0.3", + "get-stream": "^6.0.0", + "human-signals": "^2.1.0", + "is-stream": "^2.0.0", + "merge-stream": "^2.0.0", + "npm-run-path": "^4.0.1", + "onetime": "^5.1.2", + "signal-exit": "^3.0.3", + "strip-final-newline": "^2.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sindresorhus/execa?sponsor=1" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/fdir": { + "version": "6.5.0", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz", + "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==", + "license": "MIT", + "engines": { + "node": ">=12.0.0" + }, + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } + } + }, + 
"node_modules/@dotenvx/dotenvx/node_modules/get-stream": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-6.0.1.tgz", + "integrity": "sha512-ts6Wi+2j3jQjqi70w5AlN8DFnkSwC+MqmxEzdEALB2qXZYV3X/b1CTfgPLGJNMeAWxdPfU8FO1ms3NUfaHCPYg==", + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/human-signals": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-2.1.0.tgz", + "integrity": "sha512-B4FFZ6q/T2jhhksgkbEW3HBvWIfDW85snkQgawt07S7J5QXTk6BkNV+0yAeZrM5QpMAdYlocGoljn0sJ/WQkFw==", + "license": "Apache-2.0", + "engines": { + "node": ">=10.17.0" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/is-stream": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz", + "integrity": "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg==", + "license": "MIT", + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/isexe": { + "version": "3.1.5", + "resolved": "https://registry.npmjs.org/isexe/-/isexe-3.1.5.tgz", + "integrity": "sha512-6B3tLtFqtQS4ekarvLVMZ+X+VlvQekbe4taUkf/rhVO3d/h0M2rfARm/pXLcPEsjjMsFgrFgSrhQIxcSVrBz8w==", + "license": "BlueOak-1.0.0", + "engines": { + "node": ">=18" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/npm-run-path": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-4.0.1.tgz", + "integrity": "sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==", + "license": "MIT", + "dependencies": { + "path-key": "^3.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/onetime": { + "version": "5.1.2", + "resolved": 
"https://registry.npmjs.org/onetime/-/onetime-5.1.2.tgz", + "integrity": "sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==", + "license": "MIT", + "dependencies": { + "mimic-fn": "^2.1.0" + }, + "engines": { + "node": ">=6" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/picomatch": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz", + "integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==", + "license": "MIT", + "peer": true, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/signal-exit": { + "version": "3.0.7", + "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.7.tgz", + "integrity": "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==", + "license": "ISC" + }, + "node_modules/@dotenvx/dotenvx/node_modules/strip-final-newline": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-2.0.0.tgz", + "integrity": "sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/@dotenvx/dotenvx/node_modules/which": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/which/-/which-4.0.0.tgz", + "integrity": "sha512-GlaYyEb07DPxYCKhKzplCWBJtvxZcZMrL+4UkrTSJHHPyZU4mYYTv3qaOe77H7EODLSSopAUFAc6W8U4yqvscg==", + "license": "ISC", + "dependencies": { + "isexe": "^3.1.1" + }, + "bin": { + "node-which": "bin/which.js" + }, + "engines": { + "node": "^16.13.0 || >=18.0.0" + } + }, + "node_modules/@ecies/ciphers": { + "version": "0.2.6", + "resolved": 
"https://registry.npmjs.org/@ecies/ciphers/-/ciphers-0.2.6.tgz", + "integrity": "sha512-patgsRPKGkhhoBjETV4XxD0En4ui5fbX0hzayqI3M8tvNMGUoUvmyYAIWwlxBc1KX5cturfqByYdj5bYGRpN9g==", + "license": "MIT", + "engines": { + "bun": ">=1", + "deno": ">=2.7.10", + "node": ">=16" + }, + "peerDependencies": { + "@noble/ciphers": "^1.0.0" + } + }, + "node_modules/@emnapi/core": { + "version": "1.10.0", + "resolved": "https://registry.npmjs.org/@emnapi/core/-/core-1.10.0.tgz", + "integrity": "sha512-yq6OkJ4p82CAfPl0u9mQebQHKPJkY7WrIuk205cTYnYe+k2Z8YBh11FrbRG/H6ihirqcacOgl2BIO8oyMQLeXw==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "@emnapi/wasi-threads": "1.2.1", + "tslib": "^2.4.0" + } + }, + "node_modules/@emnapi/runtime": { + "version": "1.10.0", + "resolved": "https://registry.npmjs.org/@emnapi/runtime/-/runtime-1.10.0.tgz", + "integrity": "sha512-ewvYlk86xUoGI0zQRNq/mC+16R1QeDlKQy21Ki3oSYXNgLb45GV1P6A0M+/s6nyCuNDqe5VpaY84BzXGwVbwFA==", + "license": "MIT", + "optional": true, + "dependencies": { + "tslib": "^2.4.0" + } + }, + "node_modules/@emnapi/wasi-threads": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/@emnapi/wasi-threads/-/wasi-threads-1.2.1.tgz", + "integrity": "sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "tslib": "^2.4.0" + } + }, + "node_modules/@eslint-community/eslint-utils": { + "version": "4.9.1", + "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.9.1.tgz", + "integrity": "sha512-phrYmNiYppR7znFEdqgfWHXR6NCkZEK7hwWDHZUjit/2/U0r6XvkDl0SYnoM51Hq7FhCGdLDT6zxCCOY1hexsQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "eslint-visitor-keys": "^3.4.3" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + }, + "peerDependencies": { + "eslint": "^6.0.0 || ^7.0.0 
|| >=8.0.0" + } + }, + "node_modules/@eslint-community/eslint-utils/node_modules/eslint-visitor-keys": { + "version": "3.4.3", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz", + "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@eslint-community/regexpp": { + "version": "4.12.2", + "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.12.2.tgz", + "integrity": "sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^12.0.0 || ^14.0.0 || >=16.0.0" + } + }, + "node_modules/@eslint/config-array": { + "version": "0.21.2", + "resolved": "https://registry.npmjs.org/@eslint/config-array/-/config-array-0.21.2.tgz", + "integrity": "sha512-nJl2KGTlrf9GjLimgIru+V/mzgSK0ABCDQRvxw5BjURL7WfH5uoWmizbH7QB6MmnMBd8cIC9uceWnezL1VZWWw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/object-schema": "^2.1.7", + "debug": "^4.3.1", + "minimatch": "^3.1.5" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/config-helpers": { + "version": "0.4.2", + "resolved": "https://registry.npmjs.org/@eslint/config-helpers/-/config-helpers-0.4.2.tgz", + "integrity": "sha512-gBrxN88gOIf3R7ja5K9slwNayVcZgK6SOUORm2uBzTeIEfeVaIhOpCtTox3P6R7o2jLFwLFTLnC7kU/RGcYEgw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^0.17.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/core": { + "version": "0.17.0", + "resolved": "https://registry.npmjs.org/@eslint/core/-/core-0.17.0.tgz", + "integrity": 
"sha512-yL/sLrpmtDaFEiUj1osRP4TI2MDz1AddJL+jZ7KSqvBuliN4xqYY54IfdN8qD8Toa6g1iloph1fxQNkjOxrrpQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@types/json-schema": "^7.0.15" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/eslintrc": { + "version": "3.3.5", + "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-3.3.5.tgz", + "integrity": "sha512-4IlJx0X0qftVsN5E+/vGujTRIFtwuLbNsVUe7TO6zYPDR1O6nFwvwhIKEKSrl6dZchmYBITazxKoUYOjdtjlRg==", + "dev": true, + "license": "MIT", + "dependencies": { + "ajv": "^6.14.0", + "debug": "^4.3.2", + "espree": "^10.0.1", + "globals": "^14.0.0", + "ignore": "^5.2.0", + "import-fresh": "^3.2.1", + "js-yaml": "^4.1.1", + "minimatch": "^3.1.5", + "strip-json-comments": "^3.1.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@eslint/js": { + "version": "9.39.4", + "resolved": "https://registry.npmjs.org/@eslint/js/-/js-9.39.4.tgz", + "integrity": "sha512-nE7DEIchvtiFTwBw4Lfbu59PG+kCofhjsKaCWzxTpt4lfRjRMqG6uMBzKXuEcyXhOHoUp9riAm7/aWYGhXZ9cw==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://eslint.org/donate" + } + }, + "node_modules/@eslint/object-schema": { + "version": "2.1.7", + "resolved": "https://registry.npmjs.org/@eslint/object-schema/-/object-schema-2.1.7.tgz", + "integrity": "sha512-VtAOaymWVfZcmZbp6E2mympDIHvyjXs/12LqWYjVw6qjrfF+VK+fyG33kChz3nnK+SU5/NeHOqrTEHS8sXO3OA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/plugin-kit": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.4.1.tgz", + "integrity": "sha512-43/qtrDUokr7LJqoF2c3+RInu/t4zfrpYdoSDfYyhg52rwLV6TnOvdG4fXm7IkSB3wErkcmJS9iEhjVtOSEjjA==", + "dev": true, + 
"license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^0.17.0", + "levn": "^0.4.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@floating-ui/core": { + "version": "1.7.5", + "resolved": "https://registry.npmjs.org/@floating-ui/core/-/core-1.7.5.tgz", + "integrity": "sha512-1Ih4WTWyw0+lKyFMcBHGbb5U5FtuHJuujoyyr5zTaWS5EYMeT6Jb2AuDeftsCsEuchO+mM2ij5+q9crhydzLhQ==", + "license": "MIT", + "dependencies": { + "@floating-ui/utils": "^0.2.11" + } + }, + "node_modules/@floating-ui/dom": { + "version": "1.7.6", + "resolved": "https://registry.npmjs.org/@floating-ui/dom/-/dom-1.7.6.tgz", + "integrity": "sha512-9gZSAI5XM36880PPMm//9dfiEngYoC6Am2izES1FF406YFsjvyBMmeJ2g4SAju3xWwtuynNRFL2s9hgxpLI5SQ==", + "license": "MIT", + "dependencies": { + "@floating-ui/core": "^1.7.5", + "@floating-ui/utils": "^0.2.11" + } + }, + "node_modules/@floating-ui/react-dom": { + "version": "2.1.8", + "resolved": "https://registry.npmjs.org/@floating-ui/react-dom/-/react-dom-2.1.8.tgz", + "integrity": "sha512-cC52bHwM/n/CxS87FH0yWdngEZrjdtLW/qVruo68qg+prK7ZQ4YGdut2GyDVpoGeAYe/h899rVeOVm6Oi40k2A==", + "license": "MIT", + "dependencies": { + "@floating-ui/dom": "^1.7.6" + }, + "peerDependencies": { + "react": ">=16.8.0", + "react-dom": ">=16.8.0" + } + }, + "node_modules/@floating-ui/utils": { + "version": "0.2.11", + "resolved": "https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.11.tgz", + "integrity": "sha512-RiB/yIh78pcIxl6lLMG0CgBXAZ2Y0eVHqMPYugu+9U0AeT6YBeiJpf7lbdJNIugFP5SIjwNRgo4DhR1Qxi26Gg==", + "license": "MIT" + }, + "node_modules/@hono/node-server": { + "version": "1.19.14", + "resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.14.tgz", + "integrity": "sha512-GwtvgtXxnWsucXvbQXkRgqksiH2Qed37H9xHZocE5sA3N8O8O8/8FA3uclQXxXVzc9XBZuEOMK7+r02FmSpHtw==", + "license": "MIT", + "engines": { + "node": ">=18.14.1" + }, + "peerDependencies": { + "hono": "^4" + } + }, + "node_modules/@humanfs/core": { + "version": 
"0.19.2", + "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.2.tgz", + "integrity": "sha512-UhXNm+CFMWcbChXywFwkmhqjs3PRCmcSa/hfBgLIb7oQ5HNb1wS0icWsGtSAUNgefHeI+eBrA8I1fxmbHsGdvA==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@humanfs/types": "^0.15.0" + }, + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/node": { + "version": "0.16.8", + "resolved": "https://registry.npmjs.org/@humanfs/node/-/node-0.16.8.tgz", + "integrity": "sha512-gE1eQNZ3R++kTzFUpdGlpmy8kDZD/MLyHqDwqjkVQI0JMdI1D51sy1H958PNXYkM2rAac7e5/CnIKZrHtPh3BQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@humanfs/core": "^0.19.2", + "@humanfs/types": "^0.15.0", + "@humanwhocodes/retry": "^0.4.0" + }, + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/types": { + "version": "0.15.0", + "resolved": "https://registry.npmjs.org/@humanfs/types/-/types-0.15.0.tgz", + "integrity": "sha512-ZZ1w0aoQkwuUuC7Yf+7sdeaNfqQiiLcSRbfI08oAxqLtpXQr9AIVX7Ay7HLDuiLYAaFPu8oBYNq/QIi9URHJ3Q==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanwhocodes/module-importer": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/@humanwhocodes/module-importer/-/module-importer-1.0.1.tgz", + "integrity": "sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=12.22" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@humanwhocodes/retry": { + "version": "0.4.3", + "resolved": "https://registry.npmjs.org/@humanwhocodes/retry/-/retry-0.4.3.tgz", + "integrity": "sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18" + }, + "funding": { + "type": "github", + "url": 
"https://github.com/sponsors/nzakas" + } + }, + "node_modules/@img/colour": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/@img/colour/-/colour-1.1.0.tgz", + "integrity": "sha512-Td76q7j57o/tLVdgS746cYARfSyxk8iEfRxewL9h4OMzYhbW4TAcppl0mT4eyqXddh6L/jwoM75mo7ixa/pCeQ==", + "license": "MIT", + "optional": true, + "engines": { + "node": ">=18" + } + }, + "node_modules/@img/sharp-darwin-arm64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-darwin-arm64/-/sharp-darwin-arm64-0.34.5.tgz", + "integrity": "sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w==", + "cpu": [ + "arm64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-darwin-arm64": "1.2.4" + } + }, + "node_modules/@img/sharp-darwin-x64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-darwin-x64/-/sharp-darwin-x64-0.34.5.tgz", + "integrity": "sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw==", + "cpu": [ + "x64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-darwin-x64": "1.2.4" + } + }, + "node_modules/@img/sharp-libvips-darwin-arm64": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-darwin-arm64/-/sharp-libvips-darwin-arm64-1.2.4.tgz", + "integrity": "sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g==", + "cpu": [ + "arm64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "darwin" + ], + "funding": { + "url": 
"https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-darwin-x64": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-darwin-x64/-/sharp-libvips-darwin-x64-1.2.4.tgz", + "integrity": "sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg==", + "cpu": [ + "x64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "darwin" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linux-arm": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-linux-arm/-/sharp-libvips-linux-arm-1.2.4.tgz", + "integrity": "sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A==", + "cpu": [ + "arm" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linux-arm64": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-linux-arm64/-/sharp-libvips-linux-arm64-1.2.4.tgz", + "integrity": "sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw==", + "cpu": [ + "arm64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linux-ppc64": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-linux-ppc64/-/sharp-libvips-linux-ppc64-1.2.4.tgz", + "integrity": "sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA==", + "cpu": [ + "ppc64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linux-riscv64": { + "version": 
"1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-linux-riscv64/-/sharp-libvips-linux-riscv64-1.2.4.tgz", + "integrity": "sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA==", + "cpu": [ + "riscv64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linux-s390x": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-linux-s390x/-/sharp-libvips-linux-s390x-1.2.4.tgz", + "integrity": "sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ==", + "cpu": [ + "s390x" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linux-x64": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-linux-x64/-/sharp-libvips-linux-x64-1.2.4.tgz", + "integrity": "sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw==", + "cpu": [ + "x64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linuxmusl-arm64": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@img/sharp-libvips-linuxmusl-arm64/-/sharp-libvips-linuxmusl-arm64-1.2.4.tgz", + "integrity": "sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw==", + "cpu": [ + "arm64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-libvips-linuxmusl-x64": { + "version": "1.2.4", + "resolved": 
"https://registry.npmjs.org/@img/sharp-libvips-linuxmusl-x64/-/sharp-libvips-linuxmusl-x64-1.2.4.tgz", + "integrity": "sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamqA1MF7Dg==", + "cpu": [ + "x64" + ], + "license": "LGPL-3.0-or-later", + "optional": true, + "os": [ + "linux" + ], + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-linux-arm": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-linux-arm/-/sharp-linux-arm-0.34.5.tgz", + "integrity": "sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw==", + "cpu": [ + "arm" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linux-arm": "1.2.4" + } + }, + "node_modules/@img/sharp-linux-arm64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-linux-arm64/-/sharp-linux-arm64-0.34.5.tgz", + "integrity": "sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg==", + "cpu": [ + "arm64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linux-arm64": "1.2.4" + } + }, + "node_modules/@img/sharp-linux-ppc64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-linux-ppc64/-/sharp-linux-ppc64-0.34.5.tgz", + "integrity": "sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA==", + "cpu": [ + "ppc64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + 
"funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linux-ppc64": "1.2.4" + } + }, + "node_modules/@img/sharp-linux-riscv64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-linux-riscv64/-/sharp-linux-riscv64-0.34.5.tgz", + "integrity": "sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw==", + "cpu": [ + "riscv64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linux-riscv64": "1.2.4" + } + }, + "node_modules/@img/sharp-linux-s390x": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-linux-s390x/-/sharp-linux-s390x-0.34.5.tgz", + "integrity": "sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg==", + "cpu": [ + "s390x" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linux-s390x": "1.2.4" + } + }, + "node_modules/@img/sharp-linux-x64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-linux-x64/-/sharp-linux-x64-0.34.5.tgz", + "integrity": "sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ==", + "cpu": [ + "x64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linux-x64": "1.2.4" + } + }, + "node_modules/@img/sharp-linuxmusl-arm64": { + "version": "0.34.5", + "resolved": 
"https://registry.npmjs.org/@img/sharp-linuxmusl-arm64/-/sharp-linuxmusl-arm64-0.34.5.tgz", + "integrity": "sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg==", + "cpu": [ + "arm64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linuxmusl-arm64": "1.2.4" + } + }, + "node_modules/@img/sharp-linuxmusl-x64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-linuxmusl-x64/-/sharp-linuxmusl-x64-0.34.5.tgz", + "integrity": "sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q==", + "cpu": [ + "x64" + ], + "license": "Apache-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-libvips-linuxmusl-x64": "1.2.4" + } + }, + "node_modules/@img/sharp-wasm32": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-wasm32/-/sharp-wasm32-0.34.5.tgz", + "integrity": "sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw==", + "cpu": [ + "wasm32" + ], + "license": "Apache-2.0 AND LGPL-3.0-or-later AND MIT", + "optional": true, + "dependencies": { + "@emnapi/runtime": "^1.7.0" + }, + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-win32-arm64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-win32-arm64/-/sharp-win32-arm64-0.34.5.tgz", + "integrity": "sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g==", + "cpu": [ + "arm64" + ], + "license": "Apache-2.0 
AND LGPL-3.0-or-later", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-win32-ia32": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-win32-ia32/-/sharp-win32-ia32-0.34.5.tgz", + "integrity": "sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg==", + "cpu": [ + "ia32" + ], + "license": "Apache-2.0 AND LGPL-3.0-or-later", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@img/sharp-win32-x64": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/@img/sharp-win32-x64/-/sharp-win32-x64-0.34.5.tgz", + "integrity": "sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw==", + "cpu": [ + "x64" + ], + "license": "Apache-2.0 AND LGPL-3.0-or-later", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + } + }, + "node_modules/@inquirer/ansi": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/@inquirer/ansi/-/ansi-2.0.5.tgz", + "integrity": "sha512-doc2sWgJpbFQ64UflSVd17ibMGDuxO1yKgOgLMwavzESnXjFWJqUeG8saYosqKpHp4kWiM5x1nXvEjbpx90gzw==", + "license": "MIT", + "engines": { + "node": ">=23.5.0 || ^22.13.0 || ^21.7.0 || ^20.12.0" + } + }, + "node_modules/@inquirer/confirm": { + "version": "6.0.12", + "resolved": "https://registry.npmjs.org/@inquirer/confirm/-/confirm-6.0.12.tgz", + "integrity": "sha512-h9FgGun3QwVYNj5TWIZZ+slii73bMoBFjPfVIGtnFuL4t8gBiNDV9PcSfIzkuxvgquJKt9nr1QzszpBzTbH8Og==", + "license": "MIT", + "dependencies": { + "@inquirer/core": "^11.1.9", + "@inquirer/type": "^4.0.5" + }, + "engines": { + "node": ">=23.5.0 
|| ^22.13.0 || ^21.7.0 || ^20.12.0" + }, + "peerDependencies": { + "@types/node": ">=18" + }, + "peerDependenciesMeta": { + "@types/node": { + "optional": true + } + } + }, + "node_modules/@inquirer/core": { + "version": "11.1.9", + "resolved": "https://registry.npmjs.org/@inquirer/core/-/core-11.1.9.tgz", + "integrity": "sha512-BDE4fG22uYh1bGSifcj7JSx119TVYNViMhMu85usp4Fswrzh6M0DV3yld64jA98uOAa2GSQ4Bg4bZRm2d2cwSg==", + "license": "MIT", + "dependencies": { + "@inquirer/ansi": "^2.0.5", + "@inquirer/figures": "^2.0.5", + "@inquirer/type": "^4.0.5", + "cli-width": "^4.1.0", + "fast-wrap-ansi": "^0.2.0", + "mute-stream": "^3.0.0", + "signal-exit": "^4.1.0" + }, + "engines": { + "node": ">=23.5.0 || ^22.13.0 || ^21.7.0 || ^20.12.0" + }, + "peerDependencies": { + "@types/node": ">=18" + }, + "peerDependenciesMeta": { + "@types/node": { + "optional": true + } + } + }, + "node_modules/@inquirer/figures": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/@inquirer/figures/-/figures-2.0.5.tgz", + "integrity": "sha512-NsSs4kzfm12lNetHwAn3GEuH317IzpwrMCbOuMIVytpjnJ90YYHNwdRgYGuKmVxwuIqSgqk3M5qqQt1cDk0tGQ==", + "license": "MIT", + "engines": { + "node": ">=23.5.0 || ^22.13.0 || ^21.7.0 || ^20.12.0" + } + }, + "node_modules/@inquirer/type": { + "version": "4.0.5", + "resolved": "https://registry.npmjs.org/@inquirer/type/-/type-4.0.5.tgz", + "integrity": "sha512-aetVUNeKNc/VriqXlw1NRSW0zhMBB0W4bNbWRJgzRl/3d0QNDQFfk0GO5SDdtjMZVg6o8ZKEiadd7SCCzoOn5Q==", + "license": "MIT", + "engines": { + "node": ">=23.5.0 || ^22.13.0 || ^21.7.0 || ^20.12.0" + }, + "peerDependencies": { + "@types/node": ">=18" + }, + "peerDependenciesMeta": { + "@types/node": { + "optional": true + } + } + }, + "node_modules/@jridgewell/gen-mapping": { + "version": "0.3.13", + "resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.13.tgz", + "integrity": "sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA==", + "license": 
"MIT", + "dependencies": { + "@jridgewell/sourcemap-codec": "^1.5.0", + "@jridgewell/trace-mapping": "^0.3.24" + } + }, + "node_modules/@jridgewell/remapping": { + "version": "2.3.5", + "resolved": "https://registry.npmjs.org/@jridgewell/remapping/-/remapping-2.3.5.tgz", + "integrity": "sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ==", + "license": "MIT", + "dependencies": { + "@jridgewell/gen-mapping": "^0.3.5", + "@jridgewell/trace-mapping": "^0.3.24" + } + }, + "node_modules/@jridgewell/resolve-uri": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz", + "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==", + "license": "MIT", + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/sourcemap-codec": { + "version": "1.5.5", + "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz", + "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==", + "license": "MIT" + }, + "node_modules/@jridgewell/trace-mapping": { + "version": "0.3.31", + "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.31.tgz", + "integrity": "sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==", + "license": "MIT", + "dependencies": { + "@jridgewell/resolve-uri": "^3.1.0", + "@jridgewell/sourcemap-codec": "^1.4.14" + } + }, + "node_modules/@modelcontextprotocol/sdk": { + "version": "1.29.0", + "resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.29.0.tgz", + "integrity": "sha512-zo37mZA9hJWpULgkRpowewez1y6ML5GsXJPY8FI0tBBCd77HEvza4jDqRKOXgHNn867PVGCyTdzqpz0izu5ZjQ==", + "license": "MIT", + "dependencies": { + "@hono/node-server": "^1.19.9", + "ajv": "^8.17.1", + "ajv-formats": "^3.0.1", + "content-type": "^1.0.5", + "cors": 
"^2.8.5", + "cross-spawn": "^7.0.5", + "eventsource": "^3.0.2", + "eventsource-parser": "^3.0.0", + "express": "^5.2.1", + "express-rate-limit": "^8.2.1", + "hono": "^4.11.4", + "jose": "^6.1.3", + "json-schema-typed": "^8.0.2", + "pkce-challenge": "^5.0.0", + "raw-body": "^3.0.0", + "zod": "^3.25 || ^4.0", + "zod-to-json-schema": "^3.25.1" + }, + "engines": { + "node": ">=18" + }, + "peerDependencies": { + "@cfworker/json-schema": "^4.1.1", + "zod": "^3.25 || ^4.0" + }, + "peerDependenciesMeta": { + "@cfworker/json-schema": { + "optional": true + }, + "zod": { + "optional": false + } + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/ajv": { + "version": "8.20.0", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.20.0.tgz", + "integrity": "sha512-Thbli+OlOj+iMPYFBVBfJ3OmCAnaSyNn4M1vz9T6Gka5Jt9ba/HIR56joy65tY6kx/FCF5VXNB819Y7/GUrBGA==", + "license": "MIT", + "dependencies": { + "fast-deep-equal": "^3.1.3", + "fast-uri": "^3.0.1", + "json-schema-traverse": "^1.0.0", + "require-from-string": "^2.0.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/@modelcontextprotocol/sdk/node_modules/json-schema-traverse": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz", + "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==", + "license": "MIT" + }, + "node_modules/@mswjs/interceptors": { + "version": "0.41.8", + "resolved": "https://registry.npmjs.org/@mswjs/interceptors/-/interceptors-0.41.8.tgz", + "integrity": "sha512-pRLMNKTSGRoLq+KnEB/7OY5vijw1XmcheAAOiv6pj7W1FG32kAGqj1C/RK/cqxRGr1Fh+zBi8sDur8kj3EQv6A==", + "license": "MIT", + "dependencies": { + "@open-draft/deferred-promise": "^2.2.0", + "@open-draft/logger": "^0.3.0", + "@open-draft/until": "^2.0.0", + "is-node-process": "^1.2.0", + "outvariant": "^1.4.3", + "strict-event-emitter": "^0.5.1" + }, + "engines": { 
+ "node": ">=18" + } + }, + "node_modules/@mswjs/interceptors/node_modules/@open-draft/deferred-promise": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/@open-draft/deferred-promise/-/deferred-promise-2.2.0.tgz", + "integrity": "sha512-CecwLWx3rhxVQF6V4bAgPS5t+So2sTbPgAzafKkVizyi7tlwpcFpdFqq+wqF2OwNBmqFuu6tOyouTuxgpMfzmA==", + "license": "MIT" + }, + "node_modules/@napi-rs/wasm-runtime": { + "version": "0.2.12", + "resolved": "https://registry.npmjs.org/@napi-rs/wasm-runtime/-/wasm-runtime-0.2.12.tgz", + "integrity": "sha512-ZVWUcfwY4E/yPitQJl481FjFo3K22D6qF0DuFH6Y/nbnE11GY5uguDxZMGXPQ8WQ0128MXQD7TnfHyK4oWoIJQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "@emnapi/core": "^1.4.3", + "@emnapi/runtime": "^1.4.3", + "@tybys/wasm-util": "^0.10.0" + } + }, + "node_modules/@next/env": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/env/-/env-16.2.4.tgz", + "integrity": "sha512-dKkkOzOSwFYe5RX6y26fZgkSpVAlIOJKQHIiydQcrWH6y/97+RceSOAdjZ14Qa3zLduVUy0TXcn+EiM6t4rPgw==", + "license": "MIT" + }, + "node_modules/@next/eslint-plugin-next": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/eslint-plugin-next/-/eslint-plugin-next-16.2.4.tgz", + "integrity": "sha512-tOX826JJ96gYK/go18sPUgMq9FK1tqxBFfUCEufJb5XIkWFFmpgU7mahJANKGkHs7F41ir3tReJ3Lv5La0RvhA==", + "dev": true, + "license": "MIT", + "dependencies": { + "fast-glob": "3.3.1" + } + }, + "node_modules/@next/swc-darwin-arm64": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/swc-darwin-arm64/-/swc-darwin-arm64-16.2.4.tgz", + "integrity": "sha512-OXTFFox5EKN1Ym08vfrz+OXxmCcEjT4SFMbNRsWZE99dMqt2Kcusl5MqPXcW232RYkMLQTy0hqgAMEsfEd/l2A==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 10" + } + }, + "node_modules/@next/swc-darwin-x64": { + "version": "16.2.4", + "resolved": 
"https://registry.npmjs.org/@next/swc-darwin-x64/-/swc-darwin-x64-16.2.4.tgz", + "integrity": "sha512-XhpVnUfmYWvD3YrXu55XdcAkQtOnvaI6wtQa8fuF5fGoKoxIUZ0kWPtcOfqJEWngFF/lOS9l3+O9CcownhiQxQ==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 10" + } + }, + "node_modules/@next/swc-linux-arm64-gnu": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-16.2.4.tgz", + "integrity": "sha512-Mx/tjlNA3G8kg14QvuGAJ4xBwPk1tUHq56JxZ8CXnZwz1Etz714soCEzGQQzVMz4bEnGPowzkV6Xrp6wAkEWOQ==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10" + } + }, + "node_modules/@next/swc-linux-arm64-musl": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-16.2.4.tgz", + "integrity": "sha512-iVMMp14514u7Nup2umQS03nT/bN9HurK8ufylC3FZNykrwjtx7V1A7+4kvhbDSCeonTVqV3Txnv0Lu+m2oDXNg==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10" + } + }, + "node_modules/@next/swc-linux-x64-gnu": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-16.2.4.tgz", + "integrity": "sha512-EZOvm1aQWgnI/N/xcWOlnS3RQBk0VtVav5Zo7n4p0A7UKyTDx047k8opDbXgBpHl4CulRqRfbw3QrX2w5UOXMQ==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10" + } + }, + "node_modules/@next/swc-linux-x64-musl": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-16.2.4.tgz", + "integrity": "sha512-h9FxsngCm9cTBf71AR4fGznDEDx1hS7+kSEiIRjq5kO1oXWm07DxVGZjCvk0SGx7TSjlUqhI8oOyz7NfwAdPoA==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 10" + } + }, + 
"node_modules/@next/swc-win32-arm64-msvc": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-16.2.4.tgz", + "integrity": "sha512-3NdJV5OXMSOeJYijX+bjaLge3mJBlh4ybydbT4GFoB/2hAojWHtMhl3CYlYoMrjPuodp0nzFVi4Tj2+WaMg+Ow==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 10" + } + }, + "node_modules/@next/swc-win32-x64-msvc": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-16.2.4.tgz", + "integrity": "sha512-kMVGgsqhO5YTYODD9IPGGhA6iprWidQckK3LmPeW08PIFENRmgfb4MjXHO+p//d+ts2rpjvK5gXWzXSMrPl9cw==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 10" + } + }, + "node_modules/@noble/ciphers": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/@noble/ciphers/-/ciphers-1.3.0.tgz", + "integrity": "sha512-2I0gnIVPtfnMw9ee9h1dJG7tp81+8Ob3OJb3Mv37rx5L40/b0i7djjCVvGOVqc9AEIQyvyu1i6ypKdFw8R8gQw==", + "license": "MIT", + "peer": true, + "engines": { + "node": "^14.21.3 || >=16" + }, + "funding": { + "url": "https://paulmillr.com/funding/" + } + }, + "node_modules/@noble/curves": { + "version": "1.9.7", + "resolved": "https://registry.npmjs.org/@noble/curves/-/curves-1.9.7.tgz", + "integrity": "sha512-gbKGcRUYIjA3/zCCNaWDciTMFI0dCkvou3TL8Zmy5Nc7sJ47a0jtOeZoTaMxkuqRo9cRhjOdZJXegxYE5FN/xw==", + "license": "MIT", + "dependencies": { + "@noble/hashes": "1.8.0" + }, + "engines": { + "node": "^14.21.3 || >=16" + }, + "funding": { + "url": "https://paulmillr.com/funding/" + } + }, + "node_modules/@noble/hashes": { + "version": "1.8.0", + "resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.8.0.tgz", + "integrity": "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==", + "license": "MIT", + "engines": { + "node": "^14.21.3 || >=16" + }, + "funding": { 
+ "url": "https://paulmillr.com/funding/" + } + }, + "node_modules/@nodelib/fs.scandir": { + "version": "2.1.5", + "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", + "integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==", + "license": "MIT", + "dependencies": { + "@nodelib/fs.stat": "2.0.5", + "run-parallel": "^1.1.9" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nodelib/fs.stat": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz", + "integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==", + "license": "MIT", + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nodelib/fs.walk": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz", + "integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==", + "license": "MIT", + "dependencies": { + "@nodelib/fs.scandir": "2.1.5", + "fastq": "^1.6.0" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nolyfill/is-core-module": { + "version": "1.0.39", + "resolved": "https://registry.npmjs.org/@nolyfill/is-core-module/-/is-core-module-1.0.39.tgz", + "integrity": "sha512-nn5ozdjYQpUCZlWGuxcJY/KpxkWQs4DcbMCmKojjyrYDEAGy4Ce19NN4v5MduafTwJlbKc99UA8YhSVqq9yPZA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12.4.0" + } + }, + "node_modules/@open-draft/deferred-promise": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/@open-draft/deferred-promise/-/deferred-promise-3.0.0.tgz", + "integrity": "sha512-XW375UK8/9SqUVNVa6M0yEy8+iTi4QN5VZ7aZuRFQmy76LRwI9wy5F4YIBU6T+eTe2/DNDo8tqu8RHlwLHM6RA==", + "license": "MIT" + }, + "node_modules/@open-draft/logger": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/@open-draft/logger/-/logger-0.3.0.tgz", + "integrity": 
"sha512-X2g45fzhxH238HKO4xbSr7+wBS8Fvw6ixhTDuvLd5mqh6bJJCFAPwU9mPDxbcrRtfxv4u5IHCEH77BmxvXmmxQ==", + "license": "MIT", + "dependencies": { + "is-node-process": "^1.2.0", + "outvariant": "^1.4.0" + } + }, + "node_modules/@open-draft/until": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/@open-draft/until/-/until-2.1.0.tgz", + "integrity": "sha512-U69T3ItWHvLwGg5eJ0n3I62nWuE6ilHlmz7zM0npLBRvPRd7e6NYmg54vvRtP5mZG7kZqZCFVdsTWo7BPtBujg==", + "license": "MIT" + }, + "node_modules/@radix-ui/number": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/number/-/number-1.1.1.tgz", + "integrity": "sha512-MkKCwxlXTgz6CFoJx3pCwn07GKp36+aZyu/u2Ln2VrA5DcdyCZkASEDBTd8x5whTQQL5CiYf4prXKLcgQdv29g==", + "license": "MIT" + }, + "node_modules/@radix-ui/primitive": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/@radix-ui/primitive/-/primitive-1.1.3.tgz", + "integrity": "sha512-JTF99U/6XIjCBo0wqkU5sK10glYe27MRRsfwoiq5zzOEZLHU3A3KCMa5X/azekYRCJ0HlwI0crAXS/5dEHTzDg==", + "license": "MIT" + }, + "node_modules/@radix-ui/react-accessible-icon": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-accessible-icon/-/react-accessible-icon-1.1.7.tgz", + "integrity": "sha512-XM+E4WXl0OqUJFovy6GjmxxFyx9opfCAIUku4dlKRd5YEPqt4kALOkQOp0Of6reHuUkJuiPBEc5k0o4z4lTC8A==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-visually-hidden": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-accordion": { + "version": "1.2.12", + "resolved": "https://registry.npmjs.org/@radix-ui/react-accordion/-/react-accordion-1.2.12.tgz", + "integrity": 
"sha512-T4nygeh9YE9dLRPhAHSeOZi7HBXo+0kYIPJXayZfvWOWA0+n3dESrZbjfDPUABkUNym6Hd+f2IR113To8D2GPA==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collapsible": "1.1.12", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-alert-dialog": { + "version": "1.1.15", + "resolved": "https://registry.npmjs.org/@radix-ui/react-alert-dialog/-/react-alert-dialog-1.1.15.tgz", + "integrity": "sha512-oTVLkEw5GpdRe29BqJ0LSDFWI3qu0vR1M0mUkOQWDIUnY/QIkLpgDMWuKxP94c2NAC2LGcgVhG1ImF3jkZ5wXw==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-dialog": "1.1.15", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-slot": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-arrow": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-arrow/-/react-arrow-1.1.7.tgz", + "integrity": 
"sha512-F+M1tLhO+mlQaOWspE8Wstg+z6PwxwRd8oQ8IXceWz92kfAmalTRf0EjrouQeo7QssEPfCn05B4Ihs1K9WQ/7w==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-primitive": "2.1.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-aspect-ratio": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-aspect-ratio/-/react-aspect-ratio-1.1.7.tgz", + "integrity": "sha512-Yq6lvO9HQyPwev1onK1daHCHqXVLzPhSVjmsNjCa2Zcxy2f7uJD2itDtxknv6FzAKCwD1qQkeVDmX/cev13n/g==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-primitive": "2.1.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-avatar": { + "version": "1.1.10", + "resolved": "https://registry.npmjs.org/@radix-ui/react-avatar/-/react-avatar-1.1.10.tgz", + "integrity": "sha512-V8piFfWapM5OmNCXTzVQY+E1rDa53zY+MQ4Y7356v4fFz6vqCyUtIz2rUD44ZEdwg78/jKmMJHj07+C/Z/rcog==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-is-hydrated": "0.1.0", + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + 
"@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-checkbox": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-checkbox/-/react-checkbox-1.3.3.tgz", + "integrity": "sha512-wBbpv+NQftHDdG86Qc0pIyXk5IR3tM8Vd0nWLKDcX8nNn4nXFOFwsKuqw2okA/1D/mpaAkmuyndrPJTYDNZtFw==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-previous": "1.1.1", + "@radix-ui/react-use-size": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-collapsible": { + "version": "1.1.12", + "resolved": "https://registry.npmjs.org/@radix-ui/react-collapsible/-/react-collapsible-1.1.12.tgz", + "integrity": "sha512-Uu+mSh4agx2ib1uIGPP4/CKNULyajb3p92LsVXmH2EHVMTfZWpll88XJ0j4W0z3f8NK1eYl1+Mf/szHPmcHzyA==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + 
"optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-collection": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-collection/-/react-collection-1.1.7.tgz", + "integrity": "sha512-Fh9rGN0MoI4ZFUNyfFVNU4y9LUz93u9/0K+yLgA2bwRojxM8JU1DyvvMBabnZPBgMWREAJvU2jjVzq+LrFUglw==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-slot": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-compose-refs": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/@radix-ui/react-compose-refs/-/react-compose-refs-1.1.2.tgz", + "integrity": "sha512-z4eqJvfiNnFMHIIvXP3CY57y2WJs5g2v3X0zm9mEJkrkNv4rDxu+sg9Jh8EkXyeqBkB7SOcboo9dMVqhyrACIg==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-context": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/@radix-ui/react-context/-/react-context-1.1.2.tgz", + "integrity": "sha512-jCi/QKUM2r1Ju5a3J64TH2A5SpKAgh0LpknyqdQ4m6DCV0xJ2HG1xARRwNGPQfi1SLdLWZ1OJz6F4OMBBNiGJA==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-context-menu": { + "version": "2.2.16", + "resolved": 
"https://registry.npmjs.org/@radix-ui/react-context-menu/-/react-context-menu-2.2.16.tgz", + "integrity": "sha512-O8morBEW+HsVG28gYDZPTrT9UUovQUlJue5YO836tiTJhuIWBm/zQHc7j388sHWtdH/xUZurK9olD2+pcqx5ww==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-menu": "2.1.16", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-dialog": { + "version": "1.1.15", + "resolved": "https://registry.npmjs.org/@radix-ui/react-dialog/-/react-dialog-1.1.15.tgz", + "integrity": "sha512-TCglVRtzlffRNxRMEyR36DGBLJpeusFcgMVD9PZEzAKnUs1lKCgX5u9BmC2Yg+LL9MgZDugFFs1Vl+Jp4t/PGw==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-focus-guards": "1.1.3", + "@radix-ui/react-focus-scope": "1.1.7", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-slot": "1.2.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "aria-hidden": "^1.2.4", + "react-remove-scroll": "^2.6.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + 
"optional": true + } + } + }, + "node_modules/@radix-ui/react-direction": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/react-direction/-/react-direction-1.1.1.tgz", + "integrity": "sha512-1UEWRX6jnOA2y4H5WczZ44gOOjTEmlqv1uNW4GAJEO5+bauCBhv8snY65Iw5/VOS/ghKN9gr2KjnLKxrsvoMVw==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-dismissable-layer": { + "version": "1.1.11", + "resolved": "https://registry.npmjs.org/@radix-ui/react-dismissable-layer/-/react-dismissable-layer-1.1.11.tgz", + "integrity": "sha512-Nqcp+t5cTB8BinFkZgXiMJniQH0PsUt2k51FUhbdfeKvc4ACcG2uQniY/8+h1Yv6Kza4Q7lD7PQV0z0oicE0Mg==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-escape-keydown": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-dropdown-menu": { + "version": "2.1.16", + "resolved": "https://registry.npmjs.org/@radix-ui/react-dropdown-menu/-/react-dropdown-menu-2.1.16.tgz", + "integrity": "sha512-1PLGQEynI/3OX/ftV54COn+3Sud/Mn8vALg2rWnBLnRaGtJDduNW/22XjlGgPdpcIbiQxjKtb7BkcjP00nqfJw==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-menu": "2.1.16", + "@radix-ui/react-primitive": "2.1.3", + 
"@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-focus-guards": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-focus-guards/-/react-focus-guards-1.1.3.tgz", + "integrity": "sha512-0rFg/Rj2Q62NCm62jZw0QX7a3sz6QCQU0LpZdNrJX8byRGaGVTqbrW9jAoIAHyMQqsNpeZ81YgSizOt5WXq0Pw==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-focus-scope": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-focus-scope/-/react-focus-scope-1.1.7.tgz", + "integrity": "sha512-t2ODlkXBQyn7jkl6TNaw/MtVEVvIGelJDCG41Okq/KwUsJBwQ4XVZsHAVUkK4mBv3ewiAS3PGuUWuY2BoK4ZUw==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-form": { + "version": "0.1.8", + "resolved": "https://registry.npmjs.org/@radix-ui/react-form/-/react-form-0.1.8.tgz", + "integrity": "sha512-QM70k4Zwjttifr5a4sZFts9fn8FzHYvQ5PiB19O2HsYibaHSVt9fH9rzB0XZo/YcM+b7t/p7lYCT/F5eOeF5yQ==", + "license": "MIT", + "dependencies": { + 
"@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-label": "2.1.7", + "@radix-ui/react-primitive": "2.1.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-hover-card": { + "version": "1.1.15", + "resolved": "https://registry.npmjs.org/@radix-ui/react-hover-card/-/react-hover-card-1.1.15.tgz", + "integrity": "sha512-qgTkjNT1CfKMoP0rcasmlH2r1DAiYicWsDsufxl940sT2wHNEWWv6FMWIQXWhVdmC1d/HYfbhQx60KYyAtKxjg==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-popper": "1.2.8", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-id": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/react-id/-/react-id-1.1.1.tgz", + "integrity": "sha512-kGkGegYIdQsOb4XjsfM97rXsiHaBwco+hFI66oO4s9LU+PLAC5oJ7khdOVFxkhsmlbpUqDAvXw11CluXP+jkHg==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "react": 
"^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-label": { + "version": "2.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-label/-/react-label-2.1.7.tgz", + "integrity": "sha512-YT1GqPSL8kJn20djelMX7/cTRp/Y9w5IZHvfxQTVHrOqa2yMl7i/UfMqKRU5V7mEyKTrUVgJXhNQPVCG8PBLoQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-primitive": "2.1.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-menu": { + "version": "2.1.16", + "resolved": "https://registry.npmjs.org/@radix-ui/react-menu/-/react-menu-2.1.16.tgz", + "integrity": "sha512-72F2T+PLlphrqLcAotYPp0uJMr5SjP5SL01wfEspJbru5Zs5vQaSHb4VB3ZMJPimgHHCHG7gMOeOB9H3Hdmtxg==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-focus-guards": "1.1.3", + "@radix-ui/react-focus-scope": "1.1.7", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-popper": "1.2.8", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-slot": "1.2.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "aria-hidden": "^1.2.4", + "react-remove-scroll": "^2.6.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 
|| ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-menubar": { + "version": "1.1.16", + "resolved": "https://registry.npmjs.org/@radix-ui/react-menubar/-/react-menubar-1.1.16.tgz", + "integrity": "sha512-EB1FktTz5xRRi2Er974AUQZWg2yVBb1yjip38/lgwtCVRd3a+maUoGHN/xs9Yv8SY8QwbSEb+YrxGadVWbEutA==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-menu": "2.1.16", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-navigation-menu": { + "version": "1.2.14", + "resolved": "https://registry.npmjs.org/@radix-ui/react-navigation-menu/-/react-navigation-menu-1.2.14.tgz", + "integrity": "sha512-YB9mTFQvCOAQMHU+C/jVl96WmuWeltyUEpRJJky51huhds5W2FQr1J8D/16sQlf0ozxkPK8uF3niQMdUwZPv5w==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + 
"@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-layout-effect": "1.1.1", + "@radix-ui/react-use-previous": "1.1.1", + "@radix-ui/react-visually-hidden": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-one-time-password-field": { + "version": "0.1.8", + "resolved": "https://registry.npmjs.org/@radix-ui/react-one-time-password-field/-/react-one-time-password-field-0.1.8.tgz", + "integrity": "sha512-ycS4rbwURavDPVjCb5iS3aG4lURFDILi6sKI/WITUMZ13gMmn/xGjpLoqBAalhJaDk8I3UbCM5GzKHrnzwHbvg==", + "license": "MIT", + "dependencies": { + "@radix-ui/number": "1.1.1", + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-effect-event": "0.0.2", + "@radix-ui/react-use-is-hydrated": "0.1.0", + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-password-toggle-field": { + "version": "0.1.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-password-toggle-field/-/react-password-toggle-field-0.1.3.tgz", + "integrity": 
"sha512-/UuCrDBWravcaMix4TdT+qlNdVwOM1Nck9kWx/vafXsdfj1ChfhOdfi3cy9SGBpWgTXwYCuboT/oYpJy3clqfw==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-effect-event": "0.0.2", + "@radix-ui/react-use-is-hydrated": "0.1.0" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-popover": { + "version": "1.1.15", + "resolved": "https://registry.npmjs.org/@radix-ui/react-popover/-/react-popover-1.1.15.tgz", + "integrity": "sha512-kr0X2+6Yy/vJzLYJUPCZEc8SfQcf+1COFoAqauJm74umQhta9M7lNJHP7QQS3vkvcGLQUbWpMzwrXYwrYztHKA==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-focus-guards": "1.1.3", + "@radix-ui/react-focus-scope": "1.1.7", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-popper": "1.2.8", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-slot": "1.2.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "aria-hidden": "^1.2.4", + "react-remove-scroll": "^2.6.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + 
"@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-popper": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/@radix-ui/react-popper/-/react-popper-1.2.8.tgz", + "integrity": "sha512-0NJQ4LFFUuWkE7Oxf0htBKS6zLkkjBH+hM1uk7Ng705ReR8m/uelduy1DBo0PyBXPKVnBA6YBlU94MBGXrSBCw==", + "license": "MIT", + "dependencies": { + "@floating-ui/react-dom": "^2.0.0", + "@radix-ui/react-arrow": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-layout-effect": "1.1.1", + "@radix-ui/react-use-rect": "1.1.1", + "@radix-ui/react-use-size": "1.1.1", + "@radix-ui/rect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-portal": { + "version": "1.1.9", + "resolved": "https://registry.npmjs.org/@radix-ui/react-portal/-/react-portal-1.1.9.tgz", + "integrity": "sha512-bpIxvq03if6UNwXZ+HTK71JLh4APvnXntDc6XOX8UVq4XQOVl7lwok0AvIl+b8zgCw3fSaVTZMpAPPagXbKmHQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-presence": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/@radix-ui/react-presence/-/react-presence-1.1.5.tgz", + 
"integrity": "sha512-/jfEwNDdQVBCNvjkGit4h6pMOzq8bHkopq458dPt2lMjx+eBQUohZNG9A7DtO/O5ukSbxuaNGXMjHicgwy6rQQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-primitive": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.3.tgz", + "integrity": "sha512-m9gTwRkhy2lvCPe6QJp4d3G1TYEUHn/FzJUtq9MjH46an1wJU+GdoGC5VLof8RX8Ft/DlpshApkhswDLZzHIcQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-slot": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-progress": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-progress/-/react-progress-1.1.7.tgz", + "integrity": "sha512-vPdg/tF6YC/ynuBIJlk1mm7Le0VgW6ub6J2UWnTQ7/D23KXcPI1qy+0vBkgKgd38RCMJavBXpB83HPNFMTb0Fg==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-primitive": "2.1.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + 
}, + "node_modules/@radix-ui/react-radio-group": { + "version": "1.3.8", + "resolved": "https://registry.npmjs.org/@radix-ui/react-radio-group/-/react-radio-group-1.3.8.tgz", + "integrity": "sha512-VBKYIYImA5zsxACdisNQ3BjCBfmbGH3kQlnFVqlWU4tXwjy7cGX8ta80BcrO+WJXIn5iBylEH3K6ZTlee//lgQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-previous": "1.1.1", + "@radix-ui/react-use-size": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-roving-focus": { + "version": "1.1.11", + "resolved": "https://registry.npmjs.org/@radix-ui/react-roving-focus/-/react-roving-focus-1.1.11.tgz", + "integrity": "sha512-7A6S9jSgm/S+7MdtNDSb+IU859vQqJ/QAtcYQcfFC6W8RS4IxIZDldLR0xqCFZ6DCyrQLjLPsxtTNch5jVA4lA==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + 
"peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-scroll-area": { + "version": "1.2.10", + "resolved": "https://registry.npmjs.org/@radix-ui/react-scroll-area/-/react-scroll-area-1.2.10.tgz", + "integrity": "sha512-tAXIa1g3sM5CGpVT0uIbUx/U3Gs5N8T52IICuCtObaos1S8fzsrPXG5WObkQN3S6NVl6wKgPhAIiBGbWnvc97A==", + "license": "MIT", + "dependencies": { + "@radix-ui/number": "1.1.1", + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-select": { + "version": "2.2.6", + "resolved": "https://registry.npmjs.org/@radix-ui/react-select/-/react-select-2.2.6.tgz", + "integrity": "sha512-I30RydO+bnn2PQztvo25tswPH+wFBjehVGtmagkU78yMdwTwVf12wnAOF+AeP8S2N8xD+5UPbGhkUfPyvT+mwQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/number": "1.1.1", + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-focus-guards": "1.1.3", + "@radix-ui/react-focus-scope": "1.1.7", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-popper": "1.2.8", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-slot": 
"1.2.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-layout-effect": "1.1.1", + "@radix-ui/react-use-previous": "1.1.1", + "@radix-ui/react-visually-hidden": "1.2.3", + "aria-hidden": "^1.2.4", + "react-remove-scroll": "^2.6.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-separator": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/@radix-ui/react-separator/-/react-separator-1.1.7.tgz", + "integrity": "sha512-0HEb8R9E8A+jZjvmFCy/J4xhbXy3TV+9XSnGJ3KvTtjlIUy/YQ/p6UYZvi7YbeoeXdyU9+Y3scizK6hkY37baA==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-primitive": "2.1.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-slider": { + "version": "1.3.6", + "resolved": "https://registry.npmjs.org/@radix-ui/react-slider/-/react-slider-1.3.6.tgz", + "integrity": "sha512-JPYb1GuM1bxfjMRlNLE+BcmBC8onfCi60Blk7OBqi2MLTFdS+8401U4uFjnwkOr49BLmXxLC6JHkvAsx5OJvHw==", + "license": "MIT", + "dependencies": { + "@radix-ui/number": "1.1.1", + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + 
"@radix-ui/react-use-layout-effect": "1.1.1", + "@radix-ui/react-use-previous": "1.1.1", + "@radix-ui/react-use-size": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-slot": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz", + "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-compose-refs": "1.1.2" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-switch": { + "version": "1.2.6", + "resolved": "https://registry.npmjs.org/@radix-ui/react-switch/-/react-switch-1.2.6.tgz", + "integrity": "sha512-bByzr1+ep1zk4VubeEVViV592vu2lHE2BZY5OnzehZqOOgogN80+mNtCqPkhn2gklJqOpxWgPoYTSnhBCqpOXQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-previous": "1.1.1", + "@radix-ui/react-use-size": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-tabs": { + "version": 
"1.1.13", + "resolved": "https://registry.npmjs.org/@radix-ui/react-tabs/-/react-tabs-1.1.13.tgz", + "integrity": "sha512-7xdcatg7/U+7+Udyoj2zodtI9H/IIopqo+YOIcZOq1nJwXWBZ9p8xiu5llXlekDbZkca79a/fozEYQXIA4sW6A==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-toast": { + "version": "1.2.15", + "resolved": "https://registry.npmjs.org/@radix-ui/react-toast/-/react-toast-1.2.15.tgz", + "integrity": "sha512-3OSz3TacUWy4WtOXV38DggwxoqJK4+eDkNMl5Z/MJZaoUPaP4/9lf81xXMe1I2ReTAptverZUpbPY4wWwWyL5g==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-layout-effect": "1.1.1", + "@radix-ui/react-visually-hidden": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + 
"optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-toggle": { + "version": "1.1.10", + "resolved": "https://registry.npmjs.org/@radix-ui/react-toggle/-/react-toggle-1.1.10.tgz", + "integrity": "sha512-lS1odchhFTeZv3xwHH31YPObmJn8gOg7Lq12inrr0+BH/l3Tsq32VfjqH1oh80ARM3mlkfMic15n0kg4sD1poQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-toggle-group": { + "version": "1.1.11", + "resolved": "https://registry.npmjs.org/@radix-ui/react-toggle-group/-/react-toggle-group-1.1.11.tgz", + "integrity": "sha512-5umnS0T8JQzQT6HbPyO7Hh9dgd82NmS36DQr+X/YJ9ctFNCiiQd6IJAYYZ33LUwm8M+taCz5t2ui29fHZc4Y6Q==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-toggle": "1.1.10", + "@radix-ui/react-use-controllable-state": "1.2.2" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-toolbar": { + "version": "1.1.11", + "resolved": "https://registry.npmjs.org/@radix-ui/react-toolbar/-/react-toolbar-1.1.11.tgz", + "integrity": 
"sha512-4ol06/1bLoFu1nwUqzdD4Y5RZ9oDdKeiHIsntug54Hcr1pgaHiPqHFEaXI1IFP/EsOfROQZ8Mig9VTIRza6Tjg==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-separator": "1.1.7", + "@radix-ui/react-toggle-group": "1.1.11" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-tooltip": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/@radix-ui/react-tooltip/-/react-tooltip-1.2.8.tgz", + "integrity": "sha512-tY7sVt1yL9ozIxvmbtN5qtmH2krXcBCfjEiCgKGLqunJHvgvZG2Pcl2oQ3kbcZARb1BGEHdkLzcYGO8ynVlieg==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-id": "1.1.1", + "@radix-ui/react-popper": "1.2.8", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-slot": "1.2.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-visually-hidden": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-callback-ref": { + "version": "1.1.1", + "resolved": 
"https://registry.npmjs.org/@radix-ui/react-use-callback-ref/-/react-use-callback-ref-1.1.1.tgz", + "integrity": "sha512-FkBMwD+qbGQeMu1cOHnuGB6x4yzPjho8ap5WtbEJ26umhgqVXbhekKUQO+hZEL1vU92a3wHwdp0HAcqAUF5iDg==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-controllable-state": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-controllable-state/-/react-use-controllable-state-1.2.2.tgz", + "integrity": "sha512-BjasUjixPFdS+NKkypcyyN5Pmg83Olst0+c6vGov0diwTEo6mgdqVR6hxcEgFuh4QrAs7Rc+9KuGJ9TVCj0Zzg==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-use-effect-event": "0.0.2", + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-effect-event": { + "version": "0.0.2", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-effect-event/-/react-use-effect-event-0.0.2.tgz", + "integrity": "sha512-Qp8WbZOBe+blgpuUT+lw2xheLP8q0oatc9UpmiemEICxGvFLYmHm9QowVZGHtJlGbS6A6yJ3iViad/2cVjnOiA==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-escape-keydown": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-escape-keydown/-/react-use-escape-keydown-1.1.1.tgz", + "integrity": "sha512-Il0+boE7w/XebUHyBjroE+DbByORGR9KKmITzbR7MyQ4akpORYP/ZmbhAr0DG7RmmBqoOnZdy2QlvajJ2QA59g==", + "license": "MIT", + 
"dependencies": { + "@radix-ui/react-use-callback-ref": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-is-hydrated": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-is-hydrated/-/react-use-is-hydrated-0.1.0.tgz", + "integrity": "sha512-U+UORVEq+cTnRIaostJv9AGdV3G6Y+zbVd+12e18jQ5A3c0xL03IhnHuiU4UV69wolOQp5GfR58NW/EgdQhwOA==", + "license": "MIT", + "dependencies": { + "use-sync-external-store": "^1.5.0" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-layout-effect": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-layout-effect/-/react-use-layout-effect-1.1.1.tgz", + "integrity": "sha512-RbJRS4UWQFkzHTTwVymMTUv8EqYhOp8dOOviLj2ugtTiXRaRQS7GLGxZTLL1jWhMeoSCf5zmcZkqTl9IiYfXcQ==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-previous": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-previous/-/react-use-previous-1.1.1.tgz", + "integrity": "sha512-2dHfToCj/pzca2Ck724OZ5L0EVrr3eHRNsG/b3xQJLA2hZpVCS99bLAX+hm1IHXDEnzU6by5z/5MIY794/a8NQ==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-rect": { + "version": "1.1.1", + "resolved": 
"https://registry.npmjs.org/@radix-ui/react-use-rect/-/react-use-rect-1.1.1.tgz", + "integrity": "sha512-QTYuDesS0VtuHNNvMh+CjlKJ4LJickCMUAqjlE3+j8w+RlRpwyX3apEQKGFzbZGdo7XNG1tXa+bQqIE7HIXT2w==", + "license": "MIT", + "dependencies": { + "@radix-ui/rect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-use-size": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-size/-/react-use-size-1.1.1.tgz", + "integrity": "sha512-ewrXRDTAqAXlkl6t/fkXWNAhFX9I+CkKlw6zjEwk86RSPKwZr3xpBRso655aqYafwtnbpHLj6toFzmd6xdVptQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-use-layout-effect": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-visually-hidden": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-visually-hidden/-/react-visually-hidden-1.2.3.tgz", + "integrity": "sha512-pzJq12tEaaIhqjbzpCuv/OypJY/BPavOofm+dbab+MHLajy277+1lLm6JFcGgF5eskJ6mquGirhXY2GD/8u8Ug==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-primitive": "2.1.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/rect": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/rect/-/rect-1.1.1.tgz", + "integrity": "sha512-HPwpGIzkl28mWyZqG52jiqDJ12waP11Pa1lGoiyUkIEuMLBP0oeK/C89esbXrxsky5we7dfd8U58nm0SgAWpVw==", + "license": 
"MIT" + }, + "node_modules/@rtsao/scc": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/@rtsao/scc/-/scc-1.1.0.tgz", + "integrity": "sha512-zt6OdqaDoOnJ1ZYsCYGt9YmWzDXl4vQdKTyJev62gFhRGKdx7mcT54V9KIjg+d2wi9EXsPvAPKe7i7WjfVWB8g==", + "dev": true, + "license": "MIT" + }, + "node_modules/@sec-ant/readable-stream": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/@sec-ant/readable-stream/-/readable-stream-0.4.1.tgz", + "integrity": "sha512-831qok9r2t8AlxLko40y2ebgSDhenenCatLVeW/uBtnHPyhHOvG0C7TvfgecV+wHzIm5KUICgzmVpWS+IMEAeg==", + "license": "MIT" + }, + "node_modules/@sindresorhus/merge-streams": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/@sindresorhus/merge-streams/-/merge-streams-4.0.0.tgz", + "integrity": "sha512-tlqY9xq5ukxTUZBmoOp+m61cqwQD5pHJtFY3Mn8CA8ps6yghLH/Hw8UPdqg4OLmFW3IFlcXnQNmo/dh8HzXYIQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@swc/helpers": { + "version": "0.5.15", + "resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.15.tgz", + "integrity": "sha512-JQ5TuMi45Owi4/BIMAJBoSQoOJu12oOk/gADqlcUL9JEdHB8vyjUSsxqeNXnmXHjYKMi2WcYtezGEEhqUI/E2g==", + "license": "Apache-2.0", + "dependencies": { + "tslib": "^2.8.0" + } + }, + "node_modules/@tailwindcss/node": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/node/-/node-4.2.4.tgz", + "integrity": "sha512-Ai7+yQPxz3ddrDQzFfBKdHEVBg0w3Zl83jnjuwxnZOsnH9pGn93QHQtpU0p/8rYWxvbFZHneni6p1BSLK4DkGA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@jridgewell/remapping": "^2.3.5", + "enhanced-resolve": "^5.19.0", + "jiti": "^2.6.1", + "lightningcss": "1.32.0", + "magic-string": "^0.30.21", + "source-map-js": "^1.2.1", + "tailwindcss": "4.2.4" + } + }, + "node_modules/@tailwindcss/oxide": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide/-/oxide-4.2.4.tgz", + 
"integrity": "sha512-9El/iI069DKDSXwTvB9J4BwdO5JhRrOweGaK25taBAvBXyXqJAX+Jqdvs8r8gKpsI/1m0LeJLyQYTf/WLrBT1Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 20" + }, + "optionalDependencies": { + "@tailwindcss/oxide-android-arm64": "4.2.4", + "@tailwindcss/oxide-darwin-arm64": "4.2.4", + "@tailwindcss/oxide-darwin-x64": "4.2.4", + "@tailwindcss/oxide-freebsd-x64": "4.2.4", + "@tailwindcss/oxide-linux-arm-gnueabihf": "4.2.4", + "@tailwindcss/oxide-linux-arm64-gnu": "4.2.4", + "@tailwindcss/oxide-linux-arm64-musl": "4.2.4", + "@tailwindcss/oxide-linux-x64-gnu": "4.2.4", + "@tailwindcss/oxide-linux-x64-musl": "4.2.4", + "@tailwindcss/oxide-wasm32-wasi": "4.2.4", + "@tailwindcss/oxide-win32-arm64-msvc": "4.2.4", + "@tailwindcss/oxide-win32-x64-msvc": "4.2.4" + } + }, + "node_modules/@tailwindcss/oxide-android-arm64": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-android-arm64/-/oxide-android-arm64-4.2.4.tgz", + "integrity": "sha512-e7MOr1SAn9U8KlZzPi1ZXGZHeC5anY36qjNwmZv9pOJ8E4Q6jmD1vyEHkQFmNOIN7twGPEMXRHmitN4zCMN03g==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-darwin-arm64": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-darwin-arm64/-/oxide-darwin-arm64-4.2.4.tgz", + "integrity": "sha512-tSC/Kbqpz/5/o/C2sG7QvOxAKqyd10bq+ypZNf+9Fi2TvbVbv1zNpcEptcsU7DPROaSbVgUXmrzKhurFvo5eDg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-darwin-x64": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-darwin-x64/-/oxide-darwin-x64-4.2.4.tgz", + "integrity": "sha512-yPyUXn3yO/ufR6+Kzv0t4fCg2qNr90jxXc5QqBpjlPNd0NqyDXcmQb/6weunH/MEDXW5dhyEi+agTDiqa3WsGg==", + "cpu": [ + "x64" + ], + 
"dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-freebsd-x64": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-freebsd-x64/-/oxide-freebsd-x64-4.2.4.tgz", + "integrity": "sha512-BoMIB4vMQtZsXdGLVc2z+P9DbETkiopogfWZKbWwM8b/1Vinbs4YcUwo+kM/KeLkX3Ygrf4/PsRndKaYhS8Eiw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-linux-arm-gnueabihf": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-arm-gnueabihf/-/oxide-linux-arm-gnueabihf-4.2.4.tgz", + "integrity": "sha512-7pIHBLTHYRAlS7V22JNuTh33yLH4VElwKtB3bwchK/UaKUPpQ0lPQiOWcbm4V3WP2I6fNIJ23vABIvoy2izdwA==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-linux-arm64-gnu": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-arm64-gnu/-/oxide-linux-arm64-gnu-4.2.4.tgz", + "integrity": "sha512-+E4wxJ0ZGOzSH325reXTWB48l42i93kQqMvDyz5gqfRzRZ7faNhnmvlV4EPGJU3QJM/3Ab5jhJ5pCRUsKn6OQw==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-linux-arm64-musl": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-arm64-musl/-/oxide-linux-arm64-musl-4.2.4.tgz", + "integrity": "sha512-bBADEGAbo4ASnppIziaQJelekCxdMaxisrk+fB7Thit72IBnALp9K6ffA2G4ruj90G9XRS2VQ6q2bCKbfFV82g==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-linux-x64-gnu": { + "version": 
"4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-x64-gnu/-/oxide-linux-x64-gnu-4.2.4.tgz", + "integrity": "sha512-7Mx25E4WTfnht0TVRTyC00j3i0M+EeFe7wguMDTlX4mRxafznw0CA8WJkFjWYH5BlgELd1kSjuU2JiPnNZbJDA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-linux-x64-musl": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-linux-x64-musl/-/oxide-linux-x64-musl-4.2.4.tgz", + "integrity": "sha512-2wwJRF7nyhOR0hhHoChc04xngV3iS+akccHTGtz965FwF0up4b2lOdo6kI1EbDaEXKgvcrFBYcYQQ/rrnWFVfA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-wasm32-wasi": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-wasm32-wasi/-/oxide-wasm32-wasi-4.2.4.tgz", + "integrity": "sha512-FQsqApeor8Fo6gUEklzmaa9994orJZZDBAlQpK2Mq+DslRKFJeD6AjHpBQ0kZFQohVr8o85PPh8eOy86VlSCmw==", + "bundleDependencies": [ + "@napi-rs/wasm-runtime", + "@emnapi/core", + "@emnapi/runtime", + "@tybys/wasm-util", + "@emnapi/wasi-threads", + "tslib" + ], + "cpu": [ + "wasm32" + ], + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "@emnapi/core": "^1.8.1", + "@emnapi/runtime": "^1.8.1", + "@emnapi/wasi-threads": "^1.1.0", + "@napi-rs/wasm-runtime": "^1.1.1", + "@tybys/wasm-util": "^0.10.1", + "tslib": "^2.8.1" + }, + "engines": { + "node": ">=14.0.0" + } + }, + "node_modules/@tailwindcss/oxide-win32-arm64-msvc": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-arm64-msvc/-/oxide-win32-arm64-msvc-4.2.4.tgz", + "integrity": "sha512-L9BXqxC4ToVgwMFqj3pmZRqyHEztulpUJzCxUtLjobMCzTPsGt1Fa9enKbOpY2iIyVtaHNeNvAK8ERP/64sqGQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + 
"os": [ + "win32" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/oxide-win32-x64-msvc": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-x64-msvc/-/oxide-win32-x64-msvc-4.2.4.tgz", + "integrity": "sha512-ESlKG0EpVJQwRjXDDa9rLvhEAh0mhP1sF7sap9dNZT0yyl9SAG6T7gdP09EH0vIv0UNTlo6jPWyujD6559fZvw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 20" + } + }, + "node_modules/@tailwindcss/postcss": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/@tailwindcss/postcss/-/postcss-4.2.4.tgz", + "integrity": "sha512-wgAVj6nUWAolAu8YFvzT2cTBIElWHkjZwFYovF+xsqKsW2ADxM/X2opxj5NsF/qVccAOjRNe8X2IdPzMsWyHTg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@alloc/quick-lru": "^5.2.0", + "@tailwindcss/node": "4.2.4", + "@tailwindcss/oxide": "4.2.4", + "postcss": "^8.5.6", + "tailwindcss": "4.2.4" + } + }, + "node_modules/@ts-morph/common": { + "version": "0.27.0", + "resolved": "https://registry.npmjs.org/@ts-morph/common/-/common-0.27.0.tgz", + "integrity": "sha512-Wf29UqxWDpc+i61k3oIOzcUfQt79PIT9y/MWfAGlrkjg6lBC1hwDECLXPVJAhWjiGbfBCxZd65F/LIZF3+jeJQ==", + "license": "MIT", + "dependencies": { + "fast-glob": "^3.3.3", + "minimatch": "^10.0.1", + "path-browserify": "^1.0.1" + } + }, + "node_modules/@ts-morph/common/node_modules/balanced-match": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-4.0.4.tgz", + "integrity": "sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA==", + "license": "MIT", + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/@ts-morph/common/node_modules/brace-expansion": { + "version": "5.0.5", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-5.0.5.tgz", + "integrity": 
"sha512-VZznLgtwhn+Mact9tfiwx64fA9erHH/MCXEUfB/0bX/6Fz6ny5EGTXYltMocqg4xFAQZtnO3DHWWXi8RiuN7cQ==", + "license": "MIT", + "dependencies": { + "balanced-match": "^4.0.2" + }, + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/@ts-morph/common/node_modules/fast-glob": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.3.3.tgz", + "integrity": "sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg==", + "license": "MIT", + "dependencies": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.2", + "merge2": "^1.3.0", + "micromatch": "^4.0.8" + }, + "engines": { + "node": ">=8.6.0" + } + }, + "node_modules/@ts-morph/common/node_modules/glob-parent": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/@ts-morph/common/node_modules/minimatch": { + "version": "10.2.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-10.2.5.tgz", + "integrity": "sha512-MULkVLfKGYDFYejP07QOurDLLQpcjk7Fw+7jXS2R2czRQzR56yHRveU5NDJEOviH+hETZKSkIk5c+T23GjFUMg==", + "license": "BlueOak-1.0.0", + "dependencies": { + "brace-expansion": "^5.0.5" + }, + "engines": { + "node": "18 || 20 || >=22" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/@tybys/wasm-util": { + "version": "0.10.1", + "resolved": "https://registry.npmjs.org/@tybys/wasm-util/-/wasm-util-0.10.1.tgz", + "integrity": "sha512-9tTaPJLSiejZKx+Bmog4uSubteqTvFrVrURwkmHixBo0G4seD0zUxp98E1DzUBJxLQ3NPwXrGKDiVjwx/DpPsg==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "tslib": "^2.4.0" + } + }, + "node_modules/@types/estree": { + "version": 
"1.0.8", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/json-schema": { + "version": "7.0.15", + "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz", + "integrity": "sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/json5": { + "version": "0.0.29", + "resolved": "https://registry.npmjs.org/@types/json5/-/json5-0.0.29.tgz", + "integrity": "sha512-dRLjCWHYg4oaA77cxO64oO+7JwCwnIzkZPdrrC71jQmQtlhM556pwKo5bUzqvZndkVbeFLIIi+9TC40JNF5hNQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/node": { + "version": "20.19.39", + "resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.39.tgz", + "integrity": "sha512-orrrD74MBUyK8jOAD/r0+lfa1I2MO6I+vAkmAWzMYbCcgrN4lCrmK52gRFQq/JRxfYPfonkr4b0jcY7Olqdqbw==", + "license": "MIT", + "peer": true, + "dependencies": { + "undici-types": "~6.21.0" + } + }, + "node_modules/@types/react": { + "version": "19.2.14", + "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.14.tgz", + "integrity": "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w==", + "devOptional": true, + "license": "MIT", + "peer": true, + "dependencies": { + "csstype": "^3.2.2" + } + }, + "node_modules/@types/react-dom": { + "version": "19.2.3", + "resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-19.2.3.tgz", + "integrity": "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==", + "devOptional": true, + "license": "MIT", + "peer": true, + "peerDependencies": { + "@types/react": "^19.2.0" + } + }, + "node_modules/@types/set-cookie-parser": { + "version": "2.4.10", + "resolved": 
"https://registry.npmjs.org/@types/set-cookie-parser/-/set-cookie-parser-2.4.10.tgz", + "integrity": "sha512-GGmQVGpQWUe5qglJozEjZV/5dyxbOOZ0LHe/lqyWssB88Y4svNfst0uqBVscdDeIKl5Jy5+aPSvy7mI9tYRguw==", + "license": "MIT", + "dependencies": { + "@types/node": "*" + } + }, + "node_modules/@types/statuses": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@types/statuses/-/statuses-2.0.6.tgz", + "integrity": "sha512-xMAgYwceFhRA2zY+XbEA7mxYbA093wdiW8Vu6gZPGWy9cmOyU9XesH1tNcEWsKFd5Vzrqx5T3D38PWx1FIIXkA==", + "license": "MIT" + }, + "node_modules/@types/validate-npm-package-name": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/@types/validate-npm-package-name/-/validate-npm-package-name-4.0.2.tgz", + "integrity": "sha512-lrpDziQipxCEeK5kWxvljWYhUvOiB2A9izZd9B2AFarYAkqZshb4lPbRs7zKEic6eGtH8V/2qJW+dPp9OtF6bw==", + "license": "MIT" + }, + "node_modules/@typescript-eslint/eslint-plugin": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.59.1.tgz", + "integrity": "sha512-BOziFIfE+6osHO9FoJG4zjoHUcvI7fTNBSpdAwrNH0/TLvzjsk2oo8XSSOT2HhqUyhZPfHv4UOffoJ9oEEQ7Ag==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/regexpp": "^4.12.2", + "@typescript-eslint/scope-manager": "8.59.1", + "@typescript-eslint/type-utils": "8.59.1", + "@typescript-eslint/utils": "8.59.1", + "@typescript-eslint/visitor-keys": "8.59.1", + "ignore": "^7.0.5", + "natural-compare": "^1.4.0", + "ts-api-utils": "^2.5.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "@typescript-eslint/parser": "^8.59.1", + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/@typescript-eslint/eslint-plugin/node_modules/ignore": { + "version": "7.0.5", + "resolved": 
"https://registry.npmjs.org/ignore/-/ignore-7.0.5.tgz", + "integrity": "sha512-Hs59xBNfUIunMFgWAbGX5cq6893IbWg4KnrjbYwX3tx0ztorVgTDA6B2sxf8ejHJ4wz8BqGUMYlnzNBer5NvGg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + "node_modules/@typescript-eslint/parser": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.59.1.tgz", + "integrity": "sha512-HDQH9O/47Dxi1ceDhBXdaldtf/WV9yRYMjbjCuNk3qnaTD564qwv61Y7+gTxwxRKzSrgO5uhtw584igXVuuZkA==", + "dev": true, + "license": "MIT", + "peer": true, + "dependencies": { + "@typescript-eslint/scope-manager": "8.59.1", + "@typescript-eslint/types": "8.59.1", + "@typescript-eslint/typescript-estree": "8.59.1", + "@typescript-eslint/visitor-keys": "8.59.1", + "debug": "^4.4.3" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/@typescript-eslint/project-service": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.59.1.tgz", + "integrity": "sha512-+MuHQlHiEr00Of/IQbE/MmEoi44znZHbR/Pz7Opq4HryUOlRi+/44dro9Ycy8Fyo+/024IWtw8m4JUMCGTYxDg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/tsconfig-utils": "^8.59.1", + "@typescript-eslint/types": "^8.59.1", + "debug": "^4.4.3" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/@typescript-eslint/scope-manager": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.59.1.tgz", + "integrity": 
"sha512-LwuHQI4pDOYVKvmH2dkaJo6YZCSgouVgnS/z7yBPKBMvgtBvyLqiLy9Z6b7+m/TRcX1NFYUqZetI5Y+aT4GEfg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.59.1", + "@typescript-eslint/visitor-keys": "8.59.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/tsconfig-utils": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.59.1.tgz", + "integrity": "sha512-/0nEyPbX7gRsk0Uwfe4ALwwgxuA66d/l2mhRDNlAvaj4U3juhUtJNq0DsY8M2AYwwb9rEq2hrC3IcIcEt++iJA==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/@typescript-eslint/type-utils": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.59.1.tgz", + "integrity": "sha512-klWPBR2ciQHS3f++ug/mVnWKPjBUo7icEL3FAO1lhAR1Z1i5NQYZ1EannMSRYcq5qCv5wNALlXr6fksRHyYl7w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.59.1", + "@typescript-eslint/typescript-estree": "8.59.1", + "@typescript-eslint/utils": "8.59.1", + "debug": "^4.4.3", + "ts-api-utils": "^2.5.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/@typescript-eslint/types": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.59.1.tgz", + "integrity": 
"sha512-ZDCjgccSdYPw5Bxh+my4Z0lJU96ZDN7jbBzvmEn0FZx3RtU1C7VWl6NbDx94bwY3V5YsgwRzJPOgeY2Q/nLG8A==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/typescript-estree": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.59.1.tgz", + "integrity": "sha512-OUd+vJS05sSkOip+BkZ/2NS8RMxrAAJemsC6vU3kmfLyeaJT0TftHkV9mcx2107MmsBVXXexhVu4F0TZXyMl4g==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/project-service": "8.59.1", + "@typescript-eslint/tsconfig-utils": "8.59.1", + "@typescript-eslint/types": "8.59.1", + "@typescript-eslint/visitor-keys": "8.59.1", + "debug": "^4.4.3", + "minimatch": "^10.2.2", + "semver": "^7.7.3", + "tinyglobby": "^0.2.15", + "ts-api-utils": "^2.5.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/@typescript-eslint/typescript-estree/node_modules/balanced-match": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-4.0.4.tgz", + "integrity": "sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA==", + "dev": true, + "license": "MIT", + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/@typescript-eslint/typescript-estree/node_modules/brace-expansion": { + "version": "5.0.5", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-5.0.5.tgz", + "integrity": "sha512-VZznLgtwhn+Mact9tfiwx64fA9erHH/MCXEUfB/0bX/6Fz6ny5EGTXYltMocqg4xFAQZtnO3DHWWXi8RiuN7cQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": 
"^4.0.2" + }, + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/@typescript-eslint/typescript-estree/node_modules/minimatch": { + "version": "10.2.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-10.2.5.tgz", + "integrity": "sha512-MULkVLfKGYDFYejP07QOurDLLQpcjk7Fw+7jXS2R2czRQzR56yHRveU5NDJEOviH+hETZKSkIk5c+T23GjFUMg==", + "dev": true, + "license": "BlueOak-1.0.0", + "dependencies": { + "brace-expansion": "^5.0.5" + }, + "engines": { + "node": "18 || 20 || >=22" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/@typescript-eslint/typescript-estree/node_modules/semver": { + "version": "7.7.4", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.4.tgz", + "integrity": "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==", + "dev": true, + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/@typescript-eslint/utils": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.59.1.tgz", + "integrity": "sha512-3pIeoXhCeYH9FSCBI8P3iNwJlGuzPlYKkTlen2O9T1DSeeg8UG8jstq6BLk+Mda0qup7mgk4z4XL4OzRaxZ8LA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/eslint-utils": "^4.9.1", + "@typescript-eslint/scope-manager": "8.59.1", + "@typescript-eslint/types": "8.59.1", + "@typescript-eslint/typescript-estree": "8.59.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/@typescript-eslint/visitor-keys": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.59.1.tgz", + "integrity": 
"sha512-LdDNl6C5iJExcM0Yh0PwAIBb9PrSiCsWamF/JyEZawm3kFDnRoaq3LGE4bpyRao/fWeGKKyw7icx0YxrLFC5Cg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.59.1", + "eslint-visitor-keys": "^5.0.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/visitor-keys/node_modules/eslint-visitor-keys": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-5.0.1.tgz", + "integrity": "sha512-tD40eHxA35h0PEIZNeIjkHoDR4YjjJp34biM0mDvplBe//mB+IHCqHDGV7pxF+7MklTvighcCPPZC7ynWyjdTA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@unrs/resolver-binding-android-arm-eabi": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-android-arm-eabi/-/resolver-binding-android-arm-eabi-1.11.1.tgz", + "integrity": "sha512-ppLRUgHVaGRWUx0R0Ut06Mjo9gBaBkg3v/8AxusGLhsIotbBLuRk51rAzqLC8gq6NyyAojEXglNjzf6R948DNw==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ] + }, + "node_modules/@unrs/resolver-binding-android-arm64": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-android-arm64/-/resolver-binding-android-arm64-1.11.1.tgz", + "integrity": "sha512-lCxkVtb4wp1v+EoN+HjIG9cIIzPkX5OtM03pQYkG+U5O/wL53LC4QbIeazgiKqluGeVEeBlZahHalCaBvU1a2g==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ] + }, + "node_modules/@unrs/resolver-binding-darwin-arm64": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-darwin-arm64/-/resolver-binding-darwin-arm64-1.11.1.tgz", + "integrity": 
"sha512-gPVA1UjRu1Y/IsB/dQEsp2V1pm44Of6+LWvbLc9SDk1c2KhhDRDBUkQCYVWe6f26uJb3fOK8saWMgtX8IrMk3g==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@unrs/resolver-binding-darwin-x64": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-darwin-x64/-/resolver-binding-darwin-x64-1.11.1.tgz", + "integrity": "sha512-cFzP7rWKd3lZaCsDze07QX1SC24lO8mPty9vdP+YVa3MGdVgPmFc59317b2ioXtgCMKGiCLxJ4HQs62oz6GfRQ==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@unrs/resolver-binding-freebsd-x64": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-freebsd-x64/-/resolver-binding-freebsd-x64-1.11.1.tgz", + "integrity": "sha512-fqtGgak3zX4DCB6PFpsH5+Kmt/8CIi4Bry4rb1ho6Av2QHTREM+47y282Uqiu3ZRF5IQioJQ5qWRV6jduA+iGw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ] + }, + "node_modules/@unrs/resolver-binding-linux-arm-gnueabihf": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-arm-gnueabihf/-/resolver-binding-linux-arm-gnueabihf-1.11.1.tgz", + "integrity": "sha512-u92mvlcYtp9MRKmP+ZvMmtPN34+/3lMHlyMj7wXJDeXxuM0Vgzz0+PPJNsro1m3IZPYChIkn944wW8TYgGKFHw==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-arm-musleabihf": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-arm-musleabihf/-/resolver-binding-linux-arm-musleabihf-1.11.1.tgz", + "integrity": "sha512-cINaoY2z7LVCrfHkIcmvj7osTOtm6VVT16b5oQdS4beibX2SYBwgYLmqhBjA1t51CarSaBuX5YNsWLjsqfW5Cw==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + 
"node_modules/@unrs/resolver-binding-linux-arm64-gnu": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-arm64-gnu/-/resolver-binding-linux-arm64-gnu-1.11.1.tgz", + "integrity": "sha512-34gw7PjDGB9JgePJEmhEqBhWvCiiWCuXsL9hYphDF7crW7UgI05gyBAi6MF58uGcMOiOqSJ2ybEeCvHcq0BCmQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-arm64-musl": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-arm64-musl/-/resolver-binding-linux-arm64-musl-1.11.1.tgz", + "integrity": "sha512-RyMIx6Uf53hhOtJDIamSbTskA99sPHS96wxVE/bJtePJJtpdKGXO1wY90oRdXuYOGOTuqjT8ACccMc4K6QmT3w==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-ppc64-gnu": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-ppc64-gnu/-/resolver-binding-linux-ppc64-gnu-1.11.1.tgz", + "integrity": "sha512-D8Vae74A4/a+mZH0FbOkFJL9DSK2R6TFPC9M+jCWYia/q2einCubX10pecpDiTmkJVUH+y8K3BZClycD8nCShA==", + "cpu": [ + "ppc64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-riscv64-gnu": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-riscv64-gnu/-/resolver-binding-linux-riscv64-gnu-1.11.1.tgz", + "integrity": "sha512-frxL4OrzOWVVsOc96+V3aqTIQl1O2TjgExV4EKgRY09AJ9leZpEg8Ak9phadbuX0BA4k8U5qtvMSQQGGmaJqcQ==", + "cpu": [ + "riscv64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-riscv64-musl": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-riscv64-musl/-/resolver-binding-linux-riscv64-musl-1.11.1.tgz", + "integrity": 
"sha512-mJ5vuDaIZ+l/acv01sHoXfpnyrNKOk/3aDoEdLO/Xtn9HuZlDD6jKxHlkN8ZhWyLJsRBxfv9GYM2utQ1SChKew==", + "cpu": [ + "riscv64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-s390x-gnu": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-s390x-gnu/-/resolver-binding-linux-s390x-gnu-1.11.1.tgz", + "integrity": "sha512-kELo8ebBVtb9sA7rMe1Cph4QHreByhaZ2QEADd9NzIQsYNQpt9UkM9iqr2lhGr5afh885d/cB5QeTXSbZHTYPg==", + "cpu": [ + "s390x" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-x64-gnu": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-x64-gnu/-/resolver-binding-linux-x64-gnu-1.11.1.tgz", + "integrity": "sha512-C3ZAHugKgovV5YvAMsxhq0gtXuwESUKc5MhEtjBpLoHPLYM+iuwSj3lflFwK3DPm68660rZ7G8BMcwSro7hD5w==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-linux-x64-musl": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-linux-x64-musl/-/resolver-binding-linux-x64-musl-1.11.1.tgz", + "integrity": "sha512-rV0YSoyhK2nZ4vEswT/QwqzqQXw5I6CjoaYMOX0TqBlWhojUf8P94mvI7nuJTeaCkkds3QE4+zS8Ko+GdXuZtA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@unrs/resolver-binding-wasm32-wasi": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-wasm32-wasi/-/resolver-binding-wasm32-wasi-1.11.1.tgz", + "integrity": "sha512-5u4RkfxJm+Ng7IWgkzi3qrFOvLvQYnPBmjmZQ8+szTK/b31fQCnleNl1GgEt7nIsZRIf5PLhPwT0WM+q45x/UQ==", + "cpu": [ + "wasm32" + ], + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "@napi-rs/wasm-runtime": "^0.2.11" + }, + "engines": { + "node": ">=14.0.0" + } 
+ }, + "node_modules/@unrs/resolver-binding-win32-arm64-msvc": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-win32-arm64-msvc/-/resolver-binding-win32-arm64-msvc-1.11.1.tgz", + "integrity": "sha512-nRcz5Il4ln0kMhfL8S3hLkxI85BXs3o8EYoattsJNdsX4YUU89iOkVn7g0VHSRxFuVMdM4Q1jEpIId1Ihim/Uw==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@unrs/resolver-binding-win32-ia32-msvc": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-win32-ia32-msvc/-/resolver-binding-win32-ia32-msvc-1.11.1.tgz", + "integrity": "sha512-DCEI6t5i1NmAZp6pFonpD5m7i6aFrpofcp4LA2i8IIq60Jyo28hamKBxNrZcyOwVOZkgsRp9O2sXWBWP8MnvIQ==", + "cpu": [ + "ia32" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@unrs/resolver-binding-win32-x64-msvc": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/@unrs/resolver-binding-win32-x64-msvc/-/resolver-binding-win32-x64-msvc-1.11.1.tgz", + "integrity": "sha512-lrW200hZdbfRtztbygyaq/6jP6AKE8qQN2KvPcJ+x7wiD038YtnYtZ82IMNJ69GJibV7bwL3y9FgK+5w/pYt6g==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/accepts": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-2.0.0.tgz", + "integrity": "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==", + "license": "MIT", + "dependencies": { + "mime-types": "^3.0.0", + "negotiator": "^1.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/acorn": { + "version": "8.16.0", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.16.0.tgz", + "integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==", + "dev": true, + "license": "MIT", + "peer": true, + "bin": { + "acorn": "bin/acorn" 
+ }, + "engines": { + "node": ">=0.4.0" + } + }, + "node_modules/acorn-jsx": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/acorn-jsx/-/acorn-jsx-5.3.2.tgz", + "integrity": "sha512-rq9s+JNhf0IChjtDXxllJ7g41oZk5SlXtp0LHwyA5cejwn7vKmKp4pPri6YEePv2PU65sAsegbXtIinmDFDXgQ==", + "dev": true, + "license": "MIT", + "peerDependencies": { + "acorn": "^6.0.0 || ^7.0.0 || ^8.0.0" + } + }, + "node_modules/agent-base": { + "version": "7.1.4", + "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz", + "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==", + "license": "MIT", + "engines": { + "node": ">= 14" + } + }, + "node_modules/ajv": { + "version": "6.15.0", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.15.0.tgz", + "integrity": "sha512-fgFx7Hfoq60ytK2c7DhnF8jIvzYgOMxfugjLOSMHjLIPgenqa7S7oaagATUq99mV6IYvN2tRmC0wnTYX6iPbMw==", + "dev": true, + "license": "MIT", + "dependencies": { + "fast-deep-equal": "^3.1.1", + "fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/ajv-formats": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/ajv-formats/-/ajv-formats-3.0.1.tgz", + "integrity": "sha512-8iUql50EUR+uUcdRQ3HDqa6EVyo3docL8g5WJ3FNcWmu62IbkGUue/pEyLBW8VGKKucTPgqeks4fIU1DA4yowQ==", + "license": "MIT", + "dependencies": { + "ajv": "^8.0.0" + }, + "peerDependencies": { + "ajv": "^8.0.0" + }, + "peerDependenciesMeta": { + "ajv": { + "optional": true + } + } + }, + "node_modules/ajv-formats/node_modules/ajv": { + "version": "8.20.0", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.20.0.tgz", + "integrity": "sha512-Thbli+OlOj+iMPYFBVBfJ3OmCAnaSyNn4M1vz9T6Gka5Jt9ba/HIR56joy65tY6kx/FCF5VXNB819Y7/GUrBGA==", + "license": "MIT", + "dependencies": { + "fast-deep-equal": "^3.1.3", + "fast-uri": "^3.0.1", + 
"json-schema-traverse": "^1.0.0", + "require-from-string": "^2.0.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/ajv-formats/node_modules/json-schema-traverse": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz", + "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==", + "license": "MIT" + }, + "node_modules/ansi-regex": { + "version": "6.2.2", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-6.2.2.tgz", + "integrity": "sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/ansi-regex?sponsor=1" + } + }, + "node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/argparse": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz", + "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", + "license": "Python-2.0" + }, + "node_modules/aria-hidden": { + "version": "1.2.6", + "resolved": "https://registry.npmjs.org/aria-hidden/-/aria-hidden-1.2.6.tgz", + "integrity": "sha512-ik3ZgC9dY/lYVVM++OISsaYDeg1tb0VtP5uL3ouh1koGOaUMDPpbFIei4JkFimWUFPn90sbMNMXQAIVOlnYKJA==", + "license": "MIT", + "dependencies": { + "tslib": "^2.0.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/aria-query": { + "version": "5.3.2", + 
"resolved": "https://registry.npmjs.org/aria-query/-/aria-query-5.3.2.tgz", + "integrity": "sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/array-buffer-byte-length": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/array-buffer-byte-length/-/array-buffer-byte-length-1.0.2.tgz", + "integrity": "sha512-LHE+8BuR7RYGDKvnrmcuSq3tDcKv9OFEXQt/HpbZhY7V6h0zlUXutnAD82GiFx9rdieCMjkvtcsPqBwgUl1Iiw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "is-array-buffer": "^3.0.5" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/array-includes": { + "version": "3.1.9", + "resolved": "https://registry.npmjs.org/array-includes/-/array-includes-3.1.9.tgz", + "integrity": "sha512-FmeCCAenzH0KH381SPT5FZmiA/TmpndpcaShhfgEN9eCVjnFBqq3l1xrI42y8+PPLI6hypzou4GXw00WHmPBLQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "define-properties": "^1.2.1", + "es-abstract": "^1.24.0", + "es-object-atoms": "^1.1.1", + "get-intrinsic": "^1.3.0", + "is-string": "^1.1.1", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/array.prototype.findlast": { + "version": "1.2.5", + "resolved": "https://registry.npmjs.org/array.prototype.findlast/-/array.prototype.findlast-1.2.5.tgz", + "integrity": "sha512-CVvd6FHg1Z3POpBLxO6E6zr+rSKEQ9L6rZHAaY7lLfhKsWYUBBOuMs0e9o24oopj6H+geRCX0YJ+TJLBK2eHyQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.2", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0", + "es-shim-unscopables": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + 
}, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/array.prototype.findlastindex": { + "version": "1.2.6", + "resolved": "https://registry.npmjs.org/array.prototype.findlastindex/-/array.prototype.findlastindex-1.2.6.tgz", + "integrity": "sha512-F/TKATkzseUExPlfvmwQKGITM3DGTK+vkAsCZoDc5daVygbJBnjEUCbgkAvVFsgfXfX4YIqZ/27G3k3tdXrTxQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.9", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "es-shim-unscopables": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/array.prototype.flat": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/array.prototype.flat/-/array.prototype.flat-1.3.3.tgz", + "integrity": "sha512-rwG/ja1neyLqCuGZ5YYrznA62D4mZXg0i1cIskIUKSiqF3Cje9/wXAls9B9s1Wa2fomMsIv8czB8jZcPmxCXFg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-shim-unscopables": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/array.prototype.flatmap": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/array.prototype.flatmap/-/array.prototype.flatmap-1.3.3.tgz", + "integrity": "sha512-Y7Wt51eKJSyi80hFrJCePGGNo5ktJCslFuboqJsbf57CCPcm5zztluPlc4/aD8sWsKvlwatezpV4U1efk8kpjg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-shim-unscopables": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/array.prototype.tosorted": { + "version": "1.1.4", + "resolved": 
"https://registry.npmjs.org/array.prototype.tosorted/-/array.prototype.tosorted-1.1.4.tgz", + "integrity": "sha512-p6Fx8B7b7ZhL/gmUsAy0D15WhvDccw3mnGNbZpi3pmeJdxtWsj2jEaI4Y6oo3XiHfzuSgPwKc04MYt6KgvC/wA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.3", + "es-errors": "^1.3.0", + "es-shim-unscopables": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/arraybuffer.prototype.slice": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/arraybuffer.prototype.slice/-/arraybuffer.prototype.slice-1.0.4.tgz", + "integrity": "sha512-BNoCY6SXXPQ7gF2opIP4GBE+Xw7U+pHMYKuzjgCN3GwiaIR09UUeKfheyIry77QtrCBlC0KK0q5/TER/tYh3PQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "array-buffer-byte-length": "^1.0.1", + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6", + "is-array-buffer": "^3.0.4" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/ast-types": { + "version": "0.16.1", + "resolved": "https://registry.npmjs.org/ast-types/-/ast-types-0.16.1.tgz", + "integrity": "sha512-6t10qk83GOG8p0vKmaCr8eiilZwO171AvbROMtvvNiwrTly62t+7XkA8RdIIVbpMhCASAsxgAzdRSwh6nw/5Dg==", + "license": "MIT", + "dependencies": { + "tslib": "^2.0.1" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/ast-types-flow": { + "version": "0.0.8", + "resolved": "https://registry.npmjs.org/ast-types-flow/-/ast-types-flow-0.0.8.tgz", + "integrity": "sha512-OH/2E5Fg20h2aPrbe+QL8JZQFko0YZaF+j4mnQ7BGhfavO7OpSLa8a0y9sBwomHdSbkhTS8TQNayBfnW5DwbvQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/async-function": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/async-function/-/async-function-1.0.0.tgz", + "integrity": 
"sha512-hsU18Ae8CDTR6Kgu9DYf0EbCr/a5iGL0rytQDobUcdpYOKokk8LEjVphnXkDkgpi0wYVsqrXuP0bZxJaTqdgoA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/available-typed-arrays": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/available-typed-arrays/-/available-typed-arrays-1.0.7.tgz", + "integrity": "sha512-wvUjBtSGN7+7SjNpq/9M2Tg350UZD3q62IFZLbRAR1bSMlCo1ZaeW+BJ+D090e4hIIZLBcTDWe4Mh4jvUDajzQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "possible-typed-array-names": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/axe-core": { + "version": "4.11.4", + "resolved": "https://registry.npmjs.org/axe-core/-/axe-core-4.11.4.tgz", + "integrity": "sha512-KunSNx+TVpkAw/6ULfhnx+HWRecjqZGTOyquAoWHYLRSdK1tB5Ihce1ZW+UY3fj33bYAFWPu7W/GRSmmrCGuxA==", + "dev": true, + "license": "MPL-2.0", + "engines": { + "node": ">=4" + } + }, + "node_modules/axobject-query": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/axobject-query/-/axobject-query-4.1.0.tgz", + "integrity": "sha512-qIj0G9wZbMGNLjLmg1PT6v2mE9AH2zlnADJD/2tC6E00hgmhUOfEB6greHPAfLRSufHqROIUTkw6E+M3lH0PTQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/baseline-browser-mapping": { + "version": "2.10.25", + "resolved": "https://registry.npmjs.org/baseline-browser-mapping/-/baseline-browser-mapping-2.10.25.tgz", + "integrity": "sha512-QO/VHsXCQdnzADMfmkeOPvHdIAkoB7i0/rGjINPJEetLx75hNttVWGQ/jycHUDP9zZ9rupbm60WRxcwViB0MiA==", + "license": "Apache-2.0", + "bin": { + "baseline-browser-mapping": "dist/cli.cjs" 
+ }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/body-parser": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.2.2.tgz", + "integrity": "sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA==", + "license": "MIT", + "dependencies": { + "bytes": "^3.1.2", + "content-type": "^1.0.5", + "debug": "^4.4.3", + "http-errors": "^2.0.0", + "iconv-lite": "^0.7.0", + "on-finished": "^2.4.1", + "qs": "^6.14.1", + "raw-body": "^3.0.1", + "type-is": "^2.0.1" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/brace-expansion": { + "version": "1.1.14", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.14.tgz", + "integrity": "sha512-MWPGfDxnyzKU7rNOW9SP/c50vi3xrmrua/+6hfPbCS2ABNWfx24vPidzvC7krjU/RTo235sV776ymlsMtGKj8g==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "node_modules/braces": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz", + "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==", + "license": "MIT", + "dependencies": { + "fill-range": "^7.1.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/browserslist": { + "version": "4.28.2", + "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.28.2.tgz", + "integrity": "sha512-48xSriZYYg+8qXna9kwqjIVzuQxi+KYWp2+5nCYnYKPTr0LvD89Jqk2Or5ogxz0NUMfIjhh2lIUX/LyX9B4oIg==", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/browserslist" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/browserslist" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "peer": true, + "dependencies": { + 
"baseline-browser-mapping": "^2.10.12", + "caniuse-lite": "^1.0.30001782", + "electron-to-chromium": "^1.5.328", + "node-releases": "^2.0.36", + "update-browserslist-db": "^1.2.3" + }, + "bin": { + "browserslist": "cli.js" + }, + "engines": { + "node": "^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7" + } + }, + "node_modules/bundle-name": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/bundle-name/-/bundle-name-4.1.0.tgz", + "integrity": "sha512-tjwM5exMg6BGRI+kNmTntNsvdZS1X8BFYS6tnJ2hdH0kVxM6/eVZ2xy+FqStSWvYmtfFMDLIxurorHwDKfDz5Q==", + "license": "MIT", + "dependencies": { + "run-applescript": "^7.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/bytes": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", + "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/call-bind": { + "version": "1.0.9", + "resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.9.tgz", + "integrity": "sha512-a/hy+pNsFUTR+Iz8TCJvXudKVLAnz/DyeSUo10I5yvFDQJBFU2s9uqQpoSrJlroHUKoKqzg+epxyP9lqFdzfBQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "get-intrinsic": "^1.3.0", + "set-function-length": "^1.2.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/call-bind-apply-helpers": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "function-bind": "^1.1.2" + }, + "engines": { + "node": 
">= 0.4" + } + }, + "node_modules/call-bound": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/callsites": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz", + "integrity": "sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/caniuse-lite": { + "version": "1.0.30001791", + "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001791.tgz", + "integrity": "sha512-yk0l/YSrOnFZk3UROpDLQD9+kC1l4meK/wed583AXrzoarMGJcbRi2Q4RaUYbKxYAsZ8sWmaSa/DsLmdBeI1vQ==", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/browserslist" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/caniuse-lite" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "CC-BY-4.0" + }, + "node_modules/chalk": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz", + "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/class-variance-authority": { + "version": "0.7.1", + "resolved": "https://registry.npmjs.org/class-variance-authority/-/class-variance-authority-0.7.1.tgz", + 
"integrity": "sha512-Ka+9Trutv7G8M6WT6SeiRWz792K5qEqIGEGzXKhAE6xOWAY6pPH8U+9IY3oCMv6kqTmLsv7Xh/2w2RigkePMsg==", + "license": "Apache-2.0", + "dependencies": { + "clsx": "^2.1.1" + }, + "funding": { + "url": "https://polar.sh/cva" + } + }, + "node_modules/cli-cursor": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/cli-cursor/-/cli-cursor-5.0.0.tgz", + "integrity": "sha512-aCj4O5wKyszjMmDT4tZj93kxyydN/K5zPWSCe6/0AV/AA1pqe5ZBIw0a2ZfPQV7lL5/yb5HsUreJ6UFAF1tEQw==", + "license": "MIT", + "dependencies": { + "restore-cursor": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/cli-spinners": { + "version": "2.9.2", + "resolved": "https://registry.npmjs.org/cli-spinners/-/cli-spinners-2.9.2.tgz", + "integrity": "sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg==", + "license": "MIT", + "engines": { + "node": ">=6" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/cli-width": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/cli-width/-/cli-width-4.1.0.tgz", + "integrity": "sha512-ouuZd4/dm2Sw5Gmqy6bGyNNNe1qt9RpmxveLSO7KcgsTnU7RXfsw+/bukWGo1abgBiMAic068rclZsO4IWmmxQ==", + "license": "ISC", + "engines": { + "node": ">= 12" + } + }, + "node_modules/client-only": { + "version": "0.0.1", + "resolved": "https://registry.npmjs.org/client-only/-/client-only-0.0.1.tgz", + "integrity": "sha512-IV3Ou0jSMzZrd3pZ48nLkT9DA7Ag1pnPzaiQhpW7c3RbcqqzvzzVu+L8gfqMp/8IM2MQtSiqaCxrrcfu8I8rMA==", + "license": "MIT" + }, + "node_modules/cliui": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/cliui/-/cliui-8.0.1.tgz", + "integrity": "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==", + "license": "ISC", + "dependencies": { + "string-width": "^4.2.0", + "strip-ansi": "^6.0.1", + "wrap-ansi": "^7.0.0" + }, + "engines": { + 
"node": ">=12" + } + }, + "node_modules/cliui/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/cliui/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "license": "MIT" + }, + "node_modules/cliui/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/cliui/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/clsx": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/clsx/-/clsx-2.1.1.tgz", + "integrity": "sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/code-block-writer": { + "version": "13.0.3", + "resolved": "https://registry.npmjs.org/code-block-writer/-/code-block-writer-13.0.3.tgz", + "integrity": 
"sha512-Oofo0pq3IKnsFtuHqSF7TqBfr71aeyZDVJ0HpmqB7FBM2qEigL0iPONSCZSO9pE9dZTAxANe5XHG9Uy0YMv8cg==", + "license": "MIT" + }, + "node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "license": "MIT" + }, + "node_modules/commander": { + "version": "14.0.3", + "resolved": "https://registry.npmjs.org/commander/-/commander-14.0.3.tgz", + "integrity": "sha512-H+y0Jo/T1RZ9qPP4Eh1pkcQcLRglraJaSLoyOtHxu6AapkjWVCy2Sit1QQ4x3Dng8qDlSsZEet7g5Pq06MvTgw==", + "license": "MIT", + "engines": { + "node": ">=20" + } + }, + "node_modules/concat-map": { + "version": "0.0.1", + "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", + "integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==", + "dev": true, + "license": "MIT" + }, + "node_modules/content-disposition": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-1.1.0.tgz", + "integrity": "sha512-5jRCH9Z/+DRP7rkvY83B+yGIGX96OYdJmzngqnw2SBSxqCFPd0w2km3s5iawpGX8krnwSGmF0FW5Nhr0Hfai3g==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/content-type": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", + "integrity": 
"sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/convert-source-map": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-2.0.0.tgz", + "integrity": "sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==", + "license": "MIT" + }, + "node_modules/cookie": { + "version": "0.7.2", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz", + "integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cookie-signature": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.2.2.tgz", + "integrity": "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg==", + "license": "MIT", + "engines": { + "node": ">=6.6.0" + } + }, + "node_modules/cors": { + "version": "2.8.6", + "resolved": "https://registry.npmjs.org/cors/-/cors-2.8.6.tgz", + "integrity": "sha512-tJtZBBHA6vjIAaF6EnIaq6laBBP9aq/Y3ouVJjEfoHbRBcHBAHYcMh/w8LDrk2PvIMMq8gmopa5D4V8RmbrxGw==", + "license": "MIT", + "dependencies": { + "object-assign": "^4", + "vary": "^1" + }, + "engines": { + "node": ">= 0.10" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/cosmiconfig": { + "version": "9.0.1", + "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-9.0.1.tgz", + "integrity": "sha512-hr4ihw+DBqcvrsEDioRO31Z17x71pUYoNe/4h6Z0wB72p7MU7/9gH8Q3s12NFhHPfYBBOV3qyfUxmr/Yn3shnQ==", + "license": "MIT", + "dependencies": { + "env-paths": "^2.2.1", + "import-fresh": "^3.3.0", + "js-yaml": "^4.1.0", + "parse-json": "^5.2.0" + }, + "engines": { + "node": ">=14" + }, + "funding": { + "url": 
"https://github.com/sponsors/d-fischer" + }, + "peerDependencies": { + "typescript": ">=4.9.5" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + } + } + }, + "node_modules/cross-spawn": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", + "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==", + "license": "MIT", + "dependencies": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/cssesc": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/cssesc/-/cssesc-3.0.0.tgz", + "integrity": "sha512-/Tb/JcjK111nNScGob5MNtsntNM1aCNUDipB/TkwZFhyDrrE47SOx/18wF2bbjgc3ZzCSKW1T5nt5EbFoAz/Vg==", + "license": "MIT", + "bin": { + "cssesc": "bin/cssesc" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/csstype": { + "version": "3.2.3", + "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz", + "integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==", + "devOptional": true, + "license": "MIT" + }, + "node_modules/damerau-levenshtein": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/damerau-levenshtein/-/damerau-levenshtein-1.0.8.tgz", + "integrity": "sha512-sdQSFB7+llfUcQHUQO3+B8ERRj0Oa4w9POWMI/puGtuf7gFywGmkaLCElnudfTiKZV+NvHqL0ifzdrI8Ro7ESA==", + "dev": true, + "license": "BSD-2-Clause" + }, + "node_modules/data-uri-to-buffer": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.1.tgz", + "integrity": "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==", + "license": "MIT", + "engines": { + "node": ">= 12" + } + }, + "node_modules/data-view-buffer": { + "version": "1.0.2", + "resolved": 
"https://registry.npmjs.org/data-view-buffer/-/data-view-buffer-1.0.2.tgz", + "integrity": "sha512-EmKO5V3OLXh1rtK2wgXRansaK1/mtVdTUEiEI0W8RkvgT05kfxaH29PliLnpLP73yYO6142Q72QNa8Wx/A5CqQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "is-data-view": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/data-view-byte-length": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/data-view-byte-length/-/data-view-byte-length-1.0.2.tgz", + "integrity": "sha512-tuhGbE6CfTM9+5ANGf+oQb72Ky/0+s3xKUpHvShfiz2RxMFgFPjsXuRLBVMtvMs15awe45SRb83D6wH4ew6wlQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "is-data-view": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/inspect-js" + } + }, + "node_modules/data-view-byte-offset": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/data-view-byte-offset/-/data-view-byte-offset-1.0.1.tgz", + "integrity": "sha512-BS8PfmtDGnrgYdOonGZQdLZslWIeCGFP9tpan0hi1Co2Zr2NKADsvGYA8XxuG/4UWgJ6Cjtv+YJnB6MM69QGlQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "is-data-view": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/debug": { + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", + "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==", + "license": "MIT", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/dedent": { + "version": "1.7.2", + "resolved": 
"https://registry.npmjs.org/dedent/-/dedent-1.7.2.tgz", + "integrity": "sha512-WzMx3mW98SN+zn3hgemf4OzdmyNhhhKz5Ay0pUfQiMQ3e1g+xmTJWp/pKdwKVXhdSkAEGIIzqeuWrL3mV/AXbA==", + "license": "MIT", + "peerDependencies": { + "babel-plugin-macros": "^3.1.0" + }, + "peerDependenciesMeta": { + "babel-plugin-macros": { + "optional": true + } + } + }, + "node_modules/deep-is": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz", + "integrity": "sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/deepmerge": { + "version": "4.3.1", + "resolved": "https://registry.npmjs.org/deepmerge/-/deepmerge-4.3.1.tgz", + "integrity": "sha512-3sUqbMEc77XqpdNO7FRyRog+eW3ph+GYCbj+rK+uYyRMuwsVy0rMiVtPn+QJlKFvWP/1PYpapqYn0Me2knFn+A==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/default-browser": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/default-browser/-/default-browser-5.5.0.tgz", + "integrity": "sha512-H9LMLr5zwIbSxrmvikGuI/5KGhZ8E2zH3stkMgM5LpOWDutGM2JZaj460Udnf1a+946zc7YBgrqEWwbk7zHvGw==", + "license": "MIT", + "dependencies": { + "bundle-name": "^4.1.0", + "default-browser-id": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/default-browser-id": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/default-browser-id/-/default-browser-id-5.0.1.tgz", + "integrity": "sha512-x1VCxdX4t+8wVfd1so/9w+vQ4vx7lKd2Qp5tDRutErwmR85OgmfX7RlLRMWafRMY7hbEiXIbudNrjOAPa/hL8Q==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/define-data-property": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/define-data-property/-/define-data-property-1.1.4.tgz", + "integrity": 
"sha512-rBMvIzlpA8v6E+SJZoo++HAYqsLrkg7MSfIinMPFhmkorw7X+dOXVJQs+QT69zGkzMyfDnIMN2Wid1+NbL3T+A==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-define-property": "^1.0.0", + "es-errors": "^1.3.0", + "gopd": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/define-lazy-prop": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-3.0.0.tgz", + "integrity": "sha512-N+MeXYoqr3pOgn8xfyRPREN7gHakLYjhsHhWGT3fWAiL4IkAt0iDw14QiiEm2bE30c5XX5q0FtAA3CK5f9/BUg==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/define-properties": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/define-properties/-/define-properties-1.2.1.tgz", + "integrity": "sha512-8QmQKqEASLd5nx0U1B1okLElbUuuttJ/AnYmRXbbbGDWh6uS208EjD4Xqq/I9wK7u0v6O08XhTWnt5XtEbR6Dg==", + "dev": true, + "license": "MIT", + "dependencies": { + "define-data-property": "^1.0.1", + "has-property-descriptors": "^1.0.0", + "object-keys": "^1.1.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/depd": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", + "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/detect-libc": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.1.2.tgz", + "integrity": "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==", + "devOptional": true, + "license": "Apache-2.0", + "engines": { + "node": ">=8" + } + }, + "node_modules/detect-node-es": { + "version": "1.1.0", + "resolved": 
"https://registry.npmjs.org/detect-node-es/-/detect-node-es-1.1.0.tgz", + "integrity": "sha512-ypdmJU/TbBby2Dxibuv7ZLW3Bs1QEmM7nHjEANfohJLvE0XVujisn1qPJcZxg+qDucsr+bP6fLD1rPS3AhJ7EQ==", + "license": "MIT" + }, + "node_modules/diff": { + "version": "8.0.4", + "resolved": "https://registry.npmjs.org/diff/-/diff-8.0.4.tgz", + "integrity": "sha512-DPi0FmjiSU5EvQV0++GFDOJ9ASQUVFh5kD+OzOnYdi7n3Wpm9hWWGfB/O2blfHcMVTL5WkQXSnRiK9makhrcnw==", + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.3.1" + } + }, + "node_modules/doctrine": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/doctrine/-/doctrine-2.1.0.tgz", + "integrity": "sha512-35mSku4ZXK0vfCuHEDAwt55dg2jNajHZ1odvF+8SSr82EsZY4QmXfuWso8oEd8zRhVObSN18aM0CjSdoBX7zIw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "esutils": "^2.0.2" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/dotenv": { + "version": "17.4.2", + "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-17.4.2.tgz", + "integrity": "sha512-nI4U3TottKAcAD9LLud4Cb7b2QztQMUEfHbvhTH09bqXTxnSie8WnjPALV/WMCrJZ6UV/qHJ6L03OqO3LcdYZw==", + "license": "BSD-2-Clause", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://dotenvx.com" + } + }, + "node_modules/dunder-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", + "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.1", + "es-errors": "^1.3.0", + "gopd": "^1.2.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/eciesjs": { + "version": "0.4.18", + "resolved": "https://registry.npmjs.org/eciesjs/-/eciesjs-0.4.18.tgz", + "integrity": "sha512-wG99Zcfcys9fZux7Cft8BAX/YrOJLJSZ3jyYPfhZHqN2E+Ffx+QXBDsv3gubEgPtV6dTzJMSQUwk1H98/t/0wQ==", + "license": "MIT", + "dependencies": { + "@ecies/ciphers": "^0.2.5", + "@noble/ciphers": "^1.3.0", 
+ "@noble/curves": "^1.9.7", + "@noble/hashes": "^1.8.0" + }, + "engines": { + "bun": ">=1", + "deno": ">=2", + "node": ">=16" + } + }, + "node_modules/ee-first": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", + "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==", + "license": "MIT" + }, + "node_modules/electron-to-chromium": { + "version": "1.5.349", + "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.349.tgz", + "integrity": "sha512-QsWVGyRuY07Aqb234QytTfwd5d9AJlfNIQ5wIOl1L+PZDzI9d9+Fn0FRale/QYlFxt/bUnB0/nLd1jFPGxGK1A==", + "license": "ISC" + }, + "node_modules/emoji-regex": { + "version": "9.2.2", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz", + "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", + "dev": true, + "license": "MIT" + }, + "node_modules/encodeurl": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", + "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/enhanced-resolve": { + "version": "5.21.0", + "resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.21.0.tgz", + "integrity": "sha512-otxSQPw4lkOZWkHpB3zaEQs6gWYEsmX4xQF68ElXC/TWvGxGMSGOvoNbaLXm6/cS/fSfHtsEdw90y20PCd+sCA==", + "dev": true, + "license": "MIT", + "dependencies": { + "graceful-fs": "^4.2.4", + "tapable": "^2.3.3" + }, + "engines": { + "node": ">=10.13.0" + } + }, + "node_modules/env-paths": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/env-paths/-/env-paths-2.2.1.tgz", + "integrity": "sha512-+h1lkLKhZMTYjog1VEpJNG7NZJWcuc2DDk/qsqSTRRCOXiLjeQ1d1/udrUGhqMxUgAlwKNZ0cf2uqan5GLuS2A==", + "license": "MIT", + "engines": { + 
"node": ">=6" + } + }, + "node_modules/error-ex": { + "version": "1.3.4", + "resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.4.tgz", + "integrity": "sha512-sqQamAnR14VgCr1A618A3sGrygcpK+HEbenA/HiEAkkUwcZIIB/tgWqHFxWgOyDh4nB4JCRimh79dR5Ywc9MDQ==", + "license": "MIT", + "dependencies": { + "is-arrayish": "^0.2.1" + } + }, + "node_modules/es-abstract": { + "version": "1.24.2", + "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.24.2.tgz", + "integrity": "sha512-2FpH9Q5i2RRwyEP1AylXe6nYLR5OhaJTZwmlcP0dL/+JCbgg7yyEo/sEK6HeGZRf3dFpWwThaRHVApXSkW3xeg==", + "dev": true, + "license": "MIT", + "dependencies": { + "array-buffer-byte-length": "^1.0.2", + "arraybuffer.prototype.slice": "^1.0.4", + "available-typed-arrays": "^1.0.7", + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "data-view-buffer": "^1.0.2", + "data-view-byte-length": "^1.0.2", + "data-view-byte-offset": "^1.0.1", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "es-set-tostringtag": "^2.1.0", + "es-to-primitive": "^1.3.0", + "function.prototype.name": "^1.1.8", + "get-intrinsic": "^1.3.0", + "get-proto": "^1.0.1", + "get-symbol-description": "^1.1.0", + "globalthis": "^1.0.4", + "gopd": "^1.2.0", + "has-property-descriptors": "^1.0.2", + "has-proto": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "internal-slot": "^1.1.0", + "is-array-buffer": "^3.0.5", + "is-callable": "^1.2.7", + "is-data-view": "^1.0.2", + "is-negative-zero": "^2.0.3", + "is-regex": "^1.2.1", + "is-set": "^2.0.3", + "is-shared-array-buffer": "^1.0.4", + "is-string": "^1.1.1", + "is-typed-array": "^1.1.15", + "is-weakref": "^1.1.1", + "math-intrinsics": "^1.1.0", + "object-inspect": "^1.13.4", + "object-keys": "^1.1.1", + "object.assign": "^4.1.7", + "own-keys": "^1.0.1", + "regexp.prototype.flags": "^1.5.4", + "safe-array-concat": "^1.1.3", + "safe-push-apply": "^1.0.0", + "safe-regex-test": "^1.1.0", + "set-proto": "^1.0.0", + 
"stop-iteration-iterator": "^1.1.0", + "string.prototype.trim": "^1.2.10", + "string.prototype.trimend": "^1.0.9", + "string.prototype.trimstart": "^1.0.8", + "typed-array-buffer": "^1.0.3", + "typed-array-byte-length": "^1.0.3", + "typed-array-byte-offset": "^1.0.4", + "typed-array-length": "^1.0.7", + "unbox-primitive": "^1.1.0", + "which-typed-array": "^1.1.19" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/es-define-property": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", + "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-errors": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-iterator-helpers": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/es-iterator-helpers/-/es-iterator-helpers-1.3.2.tgz", + "integrity": "sha512-HVLACW1TppGYjJ8H6/jqH/pqOtKRw6wMlrB23xfExmFWxFquAIWCmwoLsOyN96K4a5KbmOf5At9ZUO3GZbetAw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.9", + "call-bound": "^1.0.4", + "define-properties": "^1.2.1", + "es-abstract": "^1.24.2", + "es-errors": "^1.3.0", + "es-set-tostringtag": "^2.1.0", + "function-bind": "^1.1.2", + "get-intrinsic": "^1.3.0", + "globalthis": "^1.0.4", + "gopd": "^1.2.0", + "has-property-descriptors": "^1.0.2", + "has-proto": "^1.2.0", + "has-symbols": "^1.1.0", + "internal-slot": "^1.1.0", + "iterator.prototype": "^1.1.5", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-object-atoms": { + 
"version": "1.1.1", + "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", + "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-set-tostringtag": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz", + "integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6", + "has-tostringtag": "^1.0.2", + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-shim-unscopables": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/es-shim-unscopables/-/es-shim-unscopables-1.1.0.tgz", + "integrity": "sha512-d9T8ucsEhh8Bi1woXCf+TIKDIROLG5WCkxg8geBCbvk22kzwC5G2OnXVMO6FUsvQlgUUXQ2itephWDLqDzbeCw==", + "dev": true, + "license": "MIT", + "dependencies": { + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-to-primitive": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.3.0.tgz", + "integrity": "sha512-w+5mJ3GuFL+NjVtJlvydShqE1eN3h3PbI7/5LAsYJP/2qtuMXjfL2LpHSRqo4b4eSF5K/DH1JXKUAHSB2UW50g==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-callable": "^1.2.7", + "is-date-object": "^1.0.5", + "is-symbol": "^1.0.4" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/escalade": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", + "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", + "license": "MIT", + "engines": { + 
"node": ">=6" + } + }, + "node_modules/escape-html": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", + "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==", + "license": "MIT" + }, + "node_modules/escape-string-regexp": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz", + "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/eslint": { + "version": "9.39.4", + "resolved": "https://registry.npmjs.org/eslint/-/eslint-9.39.4.tgz", + "integrity": "sha512-XoMjdBOwe/esVgEvLmNsD3IRHkm7fbKIUGvrleloJXUZgDHig2IPWNniv+GwjyJXzuNqVjlr5+4yVUZjycJwfQ==", + "dev": true, + "license": "MIT", + "peer": true, + "dependencies": { + "@eslint-community/eslint-utils": "^4.8.0", + "@eslint-community/regexpp": "^4.12.1", + "@eslint/config-array": "^0.21.2", + "@eslint/config-helpers": "^0.4.2", + "@eslint/core": "^0.17.0", + "@eslint/eslintrc": "^3.3.5", + "@eslint/js": "9.39.4", + "@eslint/plugin-kit": "^0.4.1", + "@humanfs/node": "^0.16.6", + "@humanwhocodes/module-importer": "^1.0.1", + "@humanwhocodes/retry": "^0.4.2", + "@types/estree": "^1.0.6", + "ajv": "^6.14.0", + "chalk": "^4.0.0", + "cross-spawn": "^7.0.6", + "debug": "^4.3.2", + "escape-string-regexp": "^4.0.0", + "eslint-scope": "^8.4.0", + "eslint-visitor-keys": "^4.2.1", + "espree": "^10.4.0", + "esquery": "^1.5.0", + "esutils": "^2.0.2", + "fast-deep-equal": "^3.1.3", + "file-entry-cache": "^8.0.0", + "find-up": "^5.0.0", + "glob-parent": "^6.0.2", + "ignore": "^5.2.0", + "imurmurhash": "^0.1.4", + "is-glob": "^4.0.0", + "json-stable-stringify-without-jsonify": "^1.0.1", + "lodash.merge": "^4.6.2", + "minimatch": "^3.1.5", + 
"natural-compare": "^1.4.0", + "optionator": "^0.9.3" + }, + "bin": { + "eslint": "bin/eslint.js" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://eslint.org/donate" + }, + "peerDependencies": { + "jiti": "*" + }, + "peerDependenciesMeta": { + "jiti": { + "optional": true + } + } + }, + "node_modules/eslint-config-next": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/eslint-config-next/-/eslint-config-next-16.2.4.tgz", + "integrity": "sha512-A6ekXYFj/YQxBPMl45g3e+U8zJo+X2+ZQwcz34pPKjpc/3S4roBA2Rd9xWB4FKuSxhofo1/95WjzmUY+wHrOhg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@next/eslint-plugin-next": "16.2.4", + "eslint-import-resolver-node": "^0.3.6", + "eslint-import-resolver-typescript": "^3.5.2", + "eslint-plugin-import": "^2.32.0", + "eslint-plugin-jsx-a11y": "^6.10.0", + "eslint-plugin-react": "^7.37.0", + "eslint-plugin-react-hooks": "^7.0.0", + "globals": "16.4.0", + "typescript-eslint": "^8.46.0" + }, + "peerDependencies": { + "eslint": ">=9.0.0", + "typescript": ">=3.3.1" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + } + } + }, + "node_modules/eslint-config-next/node_modules/globals": { + "version": "16.4.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-16.4.0.tgz", + "integrity": "sha512-ob/2LcVVaVGCYN+r14cnwnoDPUufjiYgSqRhiFD0Q1iI4Odora5RE8Iv1D24hAz5oMophRGkGz+yuvQmmUMnMw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/eslint-import-resolver-node": { + "version": "0.3.10", + "resolved": "https://registry.npmjs.org/eslint-import-resolver-node/-/eslint-import-resolver-node-0.3.10.tgz", + "integrity": "sha512-tRrKqFyCaKict5hOd244sL6EQFNycnMQnBe+j8uqGNXYzsImGbGUU4ibtoaBmv5FLwJwcFJNeg1GeVjQfbMrDQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "debug": "^3.2.7", + "is-core-module": "^2.16.1", + 
"resolve": "^2.0.0-next.6" + } + }, + "node_modules/eslint-import-resolver-node/node_modules/debug": { + "version": "3.2.7", + "resolved": "https://registry.npmjs.org/debug/-/debug-3.2.7.tgz", + "integrity": "sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.1" + } + }, + "node_modules/eslint-import-resolver-typescript": { + "version": "3.10.1", + "resolved": "https://registry.npmjs.org/eslint-import-resolver-typescript/-/eslint-import-resolver-typescript-3.10.1.tgz", + "integrity": "sha512-A1rHYb06zjMGAxdLSkN2fXPBwuSaQ0iO5M/hdyS0Ajj1VBaRp0sPD3dn1FhME3c/JluGFbwSxyCfqdSbtQLAHQ==", + "dev": true, + "license": "ISC", + "dependencies": { + "@nolyfill/is-core-module": "1.0.39", + "debug": "^4.4.0", + "get-tsconfig": "^4.10.0", + "is-bun-module": "^2.0.0", + "stable-hash": "^0.0.5", + "tinyglobby": "^0.2.13", + "unrs-resolver": "^1.6.2" + }, + "engines": { + "node": "^14.18.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint-import-resolver-typescript" + }, + "peerDependencies": { + "eslint": "*", + "eslint-plugin-import": "*", + "eslint-plugin-import-x": "*" + }, + "peerDependenciesMeta": { + "eslint-plugin-import": { + "optional": true + }, + "eslint-plugin-import-x": { + "optional": true + } + } + }, + "node_modules/eslint-module-utils": { + "version": "2.12.1", + "resolved": "https://registry.npmjs.org/eslint-module-utils/-/eslint-module-utils-2.12.1.tgz", + "integrity": "sha512-L8jSWTze7K2mTg0vos/RuLRS5soomksDPoJLXIslC7c8Wmut3bx7CPpJijDcBZtxQ5lrbUdM+s0OlNbz0DCDNw==", + "dev": true, + "license": "MIT", + "dependencies": { + "debug": "^3.2.7" + }, + "engines": { + "node": ">=4" + }, + "peerDependenciesMeta": { + "eslint": { + "optional": true + } + } + }, + "node_modules/eslint-module-utils/node_modules/debug": { + "version": "3.2.7", + "resolved": "https://registry.npmjs.org/debug/-/debug-3.2.7.tgz", + "integrity": 
"sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.1" + } + }, + "node_modules/eslint-plugin-import": { + "version": "2.32.0", + "resolved": "https://registry.npmjs.org/eslint-plugin-import/-/eslint-plugin-import-2.32.0.tgz", + "integrity": "sha512-whOE1HFo/qJDyX4SnXzP4N6zOWn79WhnCUY/iDR0mPfQZO8wcYE4JClzI2oZrhBnnMUCBCHZhO6VQyoBU95mZA==", + "dev": true, + "license": "MIT", + "peer": true, + "dependencies": { + "@rtsao/scc": "^1.1.0", + "array-includes": "^3.1.9", + "array.prototype.findlastindex": "^1.2.6", + "array.prototype.flat": "^1.3.3", + "array.prototype.flatmap": "^1.3.3", + "debug": "^3.2.7", + "doctrine": "^2.1.0", + "eslint-import-resolver-node": "^0.3.9", + "eslint-module-utils": "^2.12.1", + "hasown": "^2.0.2", + "is-core-module": "^2.16.1", + "is-glob": "^4.0.3", + "minimatch": "^3.1.2", + "object.fromentries": "^2.0.8", + "object.groupby": "^1.0.3", + "object.values": "^1.2.1", + "semver": "^6.3.1", + "string.prototype.trimend": "^1.0.9", + "tsconfig-paths": "^3.15.0" + }, + "engines": { + "node": ">=4" + }, + "peerDependencies": { + "eslint": "^2 || ^3 || ^4 || ^5 || ^6 || ^7.2.0 || ^8 || ^9" + } + }, + "node_modules/eslint-plugin-import/node_modules/debug": { + "version": "3.2.7", + "resolved": "https://registry.npmjs.org/debug/-/debug-3.2.7.tgz", + "integrity": "sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.1" + } + }, + "node_modules/eslint-plugin-jsx-a11y": { + "version": "6.10.2", + "resolved": "https://registry.npmjs.org/eslint-plugin-jsx-a11y/-/eslint-plugin-jsx-a11y-6.10.2.tgz", + "integrity": "sha512-scB3nz4WmG75pV8+3eRUQOHZlNSUhFNq37xnpgRkCCELU3XMvXAxLk1eqWWyE22Ki4Q01Fnsw9BA3cJHDPgn2Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "aria-query": "^5.3.2", + "array-includes": "^3.1.8", + 
"array.prototype.flatmap": "^1.3.2", + "ast-types-flow": "^0.0.8", + "axe-core": "^4.10.0", + "axobject-query": "^4.1.0", + "damerau-levenshtein": "^1.0.8", + "emoji-regex": "^9.2.2", + "hasown": "^2.0.2", + "jsx-ast-utils": "^3.3.5", + "language-tags": "^1.0.9", + "minimatch": "^3.1.2", + "object.fromentries": "^2.0.8", + "safe-regex-test": "^1.0.3", + "string.prototype.includes": "^2.0.1" + }, + "engines": { + "node": ">=4.0" + }, + "peerDependencies": { + "eslint": "^3 || ^4 || ^5 || ^6 || ^7 || ^8 || ^9" + } + }, + "node_modules/eslint-plugin-react": { + "version": "7.37.5", + "resolved": "https://registry.npmjs.org/eslint-plugin-react/-/eslint-plugin-react-7.37.5.tgz", + "integrity": "sha512-Qteup0SqU15kdocexFNAJMvCJEfa2xUKNV4CC1xsVMrIIqEy3SQ/rqyxCWNzfrd3/ldy6HMlD2e0JDVpDg2qIA==", + "dev": true, + "license": "MIT", + "dependencies": { + "array-includes": "^3.1.8", + "array.prototype.findlast": "^1.2.5", + "array.prototype.flatmap": "^1.3.3", + "array.prototype.tosorted": "^1.1.4", + "doctrine": "^2.1.0", + "es-iterator-helpers": "^1.2.1", + "estraverse": "^5.3.0", + "hasown": "^2.0.2", + "jsx-ast-utils": "^2.4.1 || ^3.0.0", + "minimatch": "^3.1.2", + "object.entries": "^1.1.9", + "object.fromentries": "^2.0.8", + "object.values": "^1.2.1", + "prop-types": "^15.8.1", + "resolve": "^2.0.0-next.5", + "semver": "^6.3.1", + "string.prototype.matchall": "^4.0.12", + "string.prototype.repeat": "^1.0.0" + }, + "engines": { + "node": ">=4" + }, + "peerDependencies": { + "eslint": "^3 || ^4 || ^5 || ^6 || ^7 || ^8 || ^9.7" + } + }, + "node_modules/eslint-plugin-react-hooks": { + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/eslint-plugin-react-hooks/-/eslint-plugin-react-hooks-7.1.1.tgz", + "integrity": "sha512-f2I7Gw6JbvCexzIInuSbZpfdQ44D7iqdWX01FKLvrPgqxoE7oMj8clOfto8U6vYiz4yd5oKu39rRSVOe1zRu0g==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/core": "^7.24.4", + "@babel/parser": "^7.24.4", + "hermes-parser": "^0.25.1", + "zod": 
"^3.25.0 || ^4.0.0", + "zod-validation-error": "^3.5.0 || ^4.0.0" + }, + "engines": { + "node": ">=18" + }, + "peerDependencies": { + "eslint": "^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0 || ^8.0.0-0 || ^9.0.0 || ^10.0.0" + } + }, + "node_modules/eslint-scope": { + "version": "8.4.0", + "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-8.4.0.tgz", + "integrity": "sha512-sNXOfKCn74rt8RICKMvJS7XKV/Xk9kA7DyJr8mJik3S7Cwgy3qlkkmyS2uQB3jiJg6VNdZd/pDBJu0nvG2NlTg==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "esrecurse": "^4.3.0", + "estraverse": "^5.2.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint-visitor-keys": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-4.2.1.tgz", + "integrity": "sha512-Uhdk5sfqcee/9H/rCOJikYz67o0a2Tw2hGRPOG2Y1R2dg7brRe1uG0yaNQDHu+TO/uQPF/5eCapvYSmHUjt7JQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/espree": { + "version": "10.4.0", + "resolved": "https://registry.npmjs.org/espree/-/espree-10.4.0.tgz", + "integrity": "sha512-j6PAQ2uUr79PZhBjP5C5fhl8e39FmRnOjsD5lGnWrFU8i2G776tBK7+nP8KuQUTTyAZUwfQqXAgrVH5MbH9CYQ==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "acorn": "^8.15.0", + "acorn-jsx": "^5.3.2", + "eslint-visitor-keys": "^4.2.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/esprima": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/esprima/-/esprima-4.0.1.tgz", + "integrity": "sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==", + "license": "BSD-2-Clause", + "bin": { + "esparse": 
"bin/esparse.js", + "esvalidate": "bin/esvalidate.js" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/esquery": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.7.0.tgz", + "integrity": "sha512-Ap6G0WQwcU/LHsvLwON1fAQX9Zp0A2Y6Y/cJBl9r/JbW90Zyg4/zbG6zzKa2OTALELarYHmKu0GhpM5EO+7T0g==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "estraverse": "^5.1.0" + }, + "engines": { + "node": ">=0.10" + } + }, + "node_modules/esrecurse": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", + "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "estraverse": "^5.2.0" + }, + "engines": { + "node": ">=4.0" + } + }, + "node_modules/estraverse": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/esutils": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz", + "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/etag": { + "version": "1.8.1", + "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", + "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/eventsource": { + "version": "3.0.7", + "resolved": "https://registry.npmjs.org/eventsource/-/eventsource-3.0.7.tgz", + "integrity": 
"sha512-CRT1WTyuQoD771GW56XEZFQ/ZoSfWid1alKGDYMmkt2yl8UXrVR4pspqWNEcqKvVIzg6PAltWjxcSSPrboA4iA==", + "license": "MIT", + "dependencies": { + "eventsource-parser": "^3.0.1" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/eventsource-parser": { + "version": "3.0.8", + "resolved": "https://registry.npmjs.org/eventsource-parser/-/eventsource-parser-3.0.8.tgz", + "integrity": "sha512-70QWGkr4snxr0OXLRWsFLeRBIRPuQOvt4s8QYjmUlmlkyTZkRqS7EDVRZtzU3TiyDbXSzaOeF0XUKy8PchzukQ==", + "license": "MIT", + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/execa": { + "version": "9.6.1", + "resolved": "https://registry.npmjs.org/execa/-/execa-9.6.1.tgz", + "integrity": "sha512-9Be3ZoN4LmYR90tUoVu2te2BsbzHfhJyfEiAVfz7N5/zv+jduIfLrV2xdQXOHbaD6KgpGdO9PRPM1Y4Q9QkPkA==", + "license": "MIT", + "dependencies": { + "@sindresorhus/merge-streams": "^4.0.0", + "cross-spawn": "^7.0.6", + "figures": "^6.1.0", + "get-stream": "^9.0.0", + "human-signals": "^8.0.1", + "is-plain-obj": "^4.1.0", + "is-stream": "^4.0.1", + "npm-run-path": "^6.0.0", + "pretty-ms": "^9.2.0", + "signal-exit": "^4.1.0", + "strip-final-newline": "^4.0.0", + "yoctocolors": "^2.1.1" + }, + "engines": { + "node": "^18.19.0 || >=20.5.0" + }, + "funding": { + "url": "https://github.com/sindresorhus/execa?sponsor=1" + } + }, + "node_modules/express": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/express/-/express-5.2.1.tgz", + "integrity": "sha512-hIS4idWWai69NezIdRt2xFVofaF4j+6INOpJlVOLDO8zXGpUVEVzIYk12UUi2JzjEzWL3IOAxcTubgz9Po0yXw==", + "license": "MIT", + "peer": true, + "dependencies": { + "accepts": "^2.0.0", + "body-parser": "^2.2.1", + "content-disposition": "^1.0.0", + "content-type": "^1.0.5", + "cookie": "^0.7.1", + "cookie-signature": "^1.2.1", + "debug": "^4.4.0", + "depd": "^2.0.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "finalhandler": "^2.1.0", + "fresh": "^2.0.0", + "http-errors": "^2.0.0", + "merge-descriptors": "^2.0.0", + 
"mime-types": "^3.0.0", + "on-finished": "^2.4.1", + "once": "^1.4.0", + "parseurl": "^1.3.3", + "proxy-addr": "^2.0.7", + "qs": "^6.14.0", + "range-parser": "^1.2.1", + "router": "^2.2.0", + "send": "^1.1.0", + "serve-static": "^2.2.0", + "statuses": "^2.0.1", + "type-is": "^2.0.1", + "vary": "^1.1.2" + }, + "engines": { + "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/express-rate-limit": { + "version": "8.4.1", + "resolved": "https://registry.npmjs.org/express-rate-limit/-/express-rate-limit-8.4.1.tgz", + "integrity": "sha512-NGVYwQSAyEQgzxX1iCM978PP9AdO/hW93gMcF6ZwQCm+rFvLsBH6w4xcXWTcliS8La5EPRN3p9wzItqBwJrfNw==", + "license": "MIT", + "dependencies": { + "ip-address": "10.1.0" + }, + "engines": { + "node": ">= 16" + }, + "funding": { + "url": "https://github.com/sponsors/express-rate-limit" + }, + "peerDependencies": { + "express": ">= 4.11" + } + }, + "node_modules/fast-deep-equal": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", + "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", + "license": "MIT" + }, + "node_modules/fast-glob": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.3.1.tgz", + "integrity": "sha512-kNFPyjhh5cKjrUltxs+wFx+ZkbRaxxmZ+X0ZU31SOsxCEtP9VPgtq2teZw1DebupL5GmDaNQ6yKMMVcM41iqDg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.2", + "merge2": "^1.3.0", + "micromatch": "^4.0.4" + }, + "engines": { + "node": ">=8.6.0" + } + }, + "node_modules/fast-glob/node_modules/glob-parent": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + 
"dev": true, + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/fast-json-stable-stringify": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", + "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-levenshtein": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz", + "integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-string-truncated-width": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/fast-string-truncated-width/-/fast-string-truncated-width-3.0.3.tgz", + "integrity": "sha512-0jjjIEL6+0jag3l2XWWizO64/aZVtpiGE3t0Zgqxv0DPuxiMjvB3M24fCyhZUO4KomJQPj3LTSUnDP3GpdwC0g==", + "license": "MIT" + }, + "node_modules/fast-string-width": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/fast-string-width/-/fast-string-width-3.0.2.tgz", + "integrity": "sha512-gX8LrtNEI5hq8DVUfRQMbr5lpaS4nMIWV+7XEbXk2b8kiQIizgnlr12B4dA3ZEx3308ze0O4Q1R+cHts8kyUJg==", + "license": "MIT", + "dependencies": { + "fast-string-truncated-width": "^3.0.2" + } + }, + "node_modules/fast-uri": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/fast-uri/-/fast-uri-3.1.0.tgz", + "integrity": "sha512-iPeeDKJSWf4IEOasVVrknXpaBV0IApz/gp7S2bb7Z4Lljbl2MGJRqInZiUrQwV16cpzw/D3S5j5Julj/gT52AA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/fastify" + }, + { + "type": "opencollective", + "url": "https://opencollective.com/fastify" + } + ], + "license": "BSD-3-Clause" + }, + "node_modules/fast-wrap-ansi": { + "version": "0.2.0", + "resolved": 
"https://registry.npmjs.org/fast-wrap-ansi/-/fast-wrap-ansi-0.2.0.tgz", + "integrity": "sha512-rLV8JHxTyhVmFYhBJuMujcrHqOT2cnO5Zxj37qROj23CP39GXubJRBUFF0z8KFK77Uc0SukZUf7JZhsVEQ6n8w==", + "license": "MIT", + "dependencies": { + "fast-string-width": "^3.0.2" + } + }, + "node_modules/fastq": { + "version": "1.20.1", + "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.20.1.tgz", + "integrity": "sha512-GGToxJ/w1x32s/D2EKND7kTil4n8OVk/9mycTc4VDza13lOvpUZTGX3mFSCtV9ksdGBVzvsyAVLM6mHFThxXxw==", + "license": "ISC", + "dependencies": { + "reusify": "^1.0.4" + } + }, + "node_modules/fetch-blob": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz", + "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/jimmywarting" + }, + { + "type": "paypal", + "url": "https://paypal.me/jimmywarting" + } + ], + "license": "MIT", + "dependencies": { + "node-domexception": "^1.0.0", + "web-streams-polyfill": "^3.0.3" + }, + "engines": { + "node": "^12.20 || >= 14.13" + } + }, + "node_modules/figures": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/figures/-/figures-6.1.0.tgz", + "integrity": "sha512-d+l3qxjSesT4V7v2fh+QnmFnUWv9lSpjarhShNTgBOfA0ttejbQUAlHLitbjkoRiDulW0OPoQPYIGhIC8ohejg==", + "license": "MIT", + "dependencies": { + "is-unicode-supported": "^2.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/file-entry-cache": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz", + "integrity": "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "flat-cache": "^4.0.0" + }, + "engines": { + "node": ">=16.0.0" + } + }, + 
"node_modules/fill-range": { + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz", + "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==", + "license": "MIT", + "dependencies": { + "to-regex-range": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/finalhandler": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.1.tgz", + "integrity": "sha512-S8KoZgRZN+a5rNwqTxlZZePjT/4cnm0ROV70LedRHZ0p8u9fRID0hJUZQpkKLzro8LfmC8sx23bY6tVNxv8pQA==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "on-finished": "^2.4.1", + "parseurl": "^1.3.3", + "statuses": "^2.0.1" + }, + "engines": { + "node": ">= 18.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/find-up": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz", + "integrity": "sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==", + "dev": true, + "license": "MIT", + "dependencies": { + "locate-path": "^6.0.0", + "path-exists": "^4.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/flat-cache": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-4.0.1.tgz", + "integrity": "sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==", + "dev": true, + "license": "MIT", + "dependencies": { + "flatted": "^3.2.9", + "keyv": "^4.5.4" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/flatted": { + "version": "3.4.2", + "resolved": "https://registry.npmjs.org/flatted/-/flatted-3.4.2.tgz", + "integrity": 
"sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==", + "dev": true, + "license": "ISC" + }, + "node_modules/for-each": { + "version": "0.3.5", + "resolved": "https://registry.npmjs.org/for-each/-/for-each-0.3.5.tgz", + "integrity": "sha512-dKx12eRCVIzqCxFGplyFKJMPvLEWgmNtUrpTiJIR5u97zEhRG8ySrtboPHZXx7daLxQVrl643cTzbab2tkQjxg==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-callable": "^1.2.7" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/formdata-polyfill": { + "version": "4.0.10", + "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz", + "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==", + "license": "MIT", + "dependencies": { + "fetch-blob": "^3.1.2" + }, + "engines": { + "node": ">=12.20.0" + } + }, + "node_modules/forwarded": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", + "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/fresh": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-2.0.0.tgz", + "integrity": "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/fs-extra": { + "version": "11.3.4", + "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-11.3.4.tgz", + "integrity": "sha512-CTXd6rk/M3/ULNQj8FBqBWHYBVYybQ3VPBw0xGKFe3tuH7ytT6ACnvzpIQ3UZtB8yvUKC2cXn1a+x+5EVQLovA==", + "license": "MIT", + "dependencies": { + "graceful-fs": "^4.2.0", + "jsonfile": "^6.0.1", + "universalify": "^2.0.0" + }, + "engines": { + "node": ">=14.14" + } + }, + "node_modules/function-bind": { + 
"version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/function.prototype.name": { + "version": "1.1.8", + "resolved": "https://registry.npmjs.org/function.prototype.name/-/function.prototype.name-1.1.8.tgz", + "integrity": "sha512-e5iwyodOHhbMr/yNrc7fDYG4qlbIvI5gajyzPnb5TCwyhjApznQh1BMFou9b30SevY43gCJKXycoCBjMbsuW0Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "functions-have-names": "^1.2.3", + "hasown": "^2.0.2", + "is-callable": "^1.2.7" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/functions-have-names": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/functions-have-names/-/functions-have-names-1.2.3.tgz", + "integrity": "sha512-xckBUXyTIqT97tq2x2AMb+g163b5JFysYk0x4qxNFwbfQkmNZoiRHb6sPzI9/QV33WeuvVYBUIiD4NzNIyqaRQ==", + "dev": true, + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/fuzzysort": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/fuzzysort/-/fuzzysort-3.1.0.tgz", + "integrity": "sha512-sR9BNCjBg6LNgwvxlBd0sBABvQitkLzoVY9MYYROQVX/FvfJ4Mai9LsGhDgd8qYdds0bY77VzYd5iuB+v5rwQQ==", + "license": "MIT" + }, + "node_modules/generator-function": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/generator-function/-/generator-function-2.0.1.tgz", + "integrity": "sha512-SFdFmIJi+ybC0vjlHN0ZGVGHc3lgE0DxPAT0djjVg+kjOnSqclqmj0KQ7ykTOLP6YxoqOvuAODGdcHJn+43q3g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/gensync": { + "version": "1.0.0-beta.2", + "resolved": 
"https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz", + "integrity": "sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/get-caller-file": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "license": "ISC", + "engines": { + "node": "6.* || 8.* || >= 10.*" + } + }, + "node_modules/get-east-asian-width": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/get-east-asian-width/-/get-east-asian-width-1.5.0.tgz", + "integrity": "sha512-CQ+bEO+Tva/qlmw24dCejulK5pMzVnUOFOijVogd3KQs07HnRIgp8TGipvCCRT06xeYEbpbgwaCxglFyiuIcmA==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/get-intrinsic": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "function-bind": "^1.1.2", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-nonce": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/get-nonce/-/get-nonce-1.0.1.tgz", + "integrity": "sha512-FJhYRoDaiatfEkUK8HKlicmu/3SGFD51q3itKDGoSTysQJBnfOcxU5GxnhE1E6soB76MbT0MBtnKJuXyAx+96Q==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + 
"node_modules/get-own-enumerable-keys": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/get-own-enumerable-keys/-/get-own-enumerable-keys-1.0.0.tgz", + "integrity": "sha512-PKsK2FSrQCyxcGHsGrLDcK0lx+0Ke+6e8KFFozA9/fIQLhQzPaRvJFdcz7+Axg3jUH/Mq+NI4xa5u/UT2tQskA==", + "license": "MIT", + "engines": { + "node": ">=14.16" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/get-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", + "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", + "license": "MIT", + "dependencies": { + "dunder-proto": "^1.0.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/get-stream": { + "version": "9.0.1", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-9.0.1.tgz", + "integrity": "sha512-kVCxPF3vQM/N0B1PmoqVUqgHP+EeVjmZSQn+1oCRPxd2P21P2F19lIgbR3HBosbB1PUhOAoctJnfEn2GbN2eZA==", + "license": "MIT", + "dependencies": { + "@sec-ant/readable-stream": "^0.4.1", + "is-stream": "^4.0.1" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/get-symbol-description": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/get-symbol-description/-/get-symbol-description-1.1.0.tgz", + "integrity": "sha512-w9UMqWwJxHNOvoNzSJ2oPF5wvYcvP7jUvYzhp67yEhTi17ZDBBC1z9pTdGuzjD+EFIqLSYRweZjqfiPzQ06Ebg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-tsconfig": { + "version": "4.14.0", + "resolved": "https://registry.npmjs.org/get-tsconfig/-/get-tsconfig-4.14.0.tgz", + "integrity": 
"sha512-yTb+8DXzDREzgvYmh6s9vHsSVCHeC0G3PI5bEXNBHtmshPnO+S5O7qgLEOn0I5QvMy6kpZN8K1NKGyilLb93wA==", + "dev": true, + "license": "MIT", + "dependencies": { + "resolve-pkg-maps": "^1.0.0" + }, + "funding": { + "url": "https://github.com/privatenumber/get-tsconfig?sponsor=1" + } + }, + "node_modules/glob-parent": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", + "integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==", + "dev": true, + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.3" + }, + "engines": { + "node": ">=10.13.0" + } + }, + "node_modules/globals": { + "version": "14.0.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-14.0.0.tgz", + "integrity": "sha512-oahGvuMGQlPw/ivIYBjVSrWAfWLBeku5tpPE2fOPLi+WHffIWbuh2tCjhyQhTBPMf5E9jDEH4FOmTYgYwbKwtQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/globalthis": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/globalthis/-/globalthis-1.0.4.tgz", + "integrity": "sha512-DpLKbNU4WylpxJykQujfCcwYWiV/Jhm50Goo0wrVILAv5jOr9d+H+UR3PhSCD2rCCEIg0uc+G+muBTwD54JhDQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "define-properties": "^1.2.1", + "gopd": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/gopd": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/graceful-fs": { + "version": "4.2.11", + "resolved": 
"https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", + "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==", + "license": "ISC" + }, + "node_modules/graphql": { + "version": "16.13.2", + "resolved": "https://registry.npmjs.org/graphql/-/graphql-16.13.2.tgz", + "integrity": "sha512-5bJ+nf/UCpAjHM8i06fl7eLyVC9iuNAjm9qzkiu2ZGhM0VscSvS6WDPfAwkdkBuoXGM9FJSbKl6wylMwP9Ktig==", + "license": "MIT", + "engines": { + "node": "^12.22.0 || ^14.16.0 || ^16.0.0 || >=17.0.0" + } + }, + "node_modules/has-bigints": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/has-bigints/-/has-bigints-1.1.0.tgz", + "integrity": "sha512-R3pbpkcIqv2Pm3dUwgjclDRVmWpTJW2DcMzcIhEXEx1oh/CEMObMm3KLmRJOdvhM7o4uQBnwr8pzRK2sJWIqfg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/has-property-descriptors": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/has-property-descriptors/-/has-property-descriptors-1.0.2.tgz", + "integrity": "sha512-55JNKuIW+vq4Ke1BjOTjM2YctQIvCT7GFzHwmfZPGo5wnrgkid0YQtnAleFSqumZm4az3n2BS+erby5ipJdgrg==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-define-property": "^1.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-proto": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/has-proto/-/has-proto-1.2.0.tgz", + "integrity": "sha512-KIL7eQPfHQRC8+XluaIw7BHUwwqL19bQn4hzNgdr+1wXoU0KKj6rufu47lhY7KbJR2C6T6+PfyN0Ea7wkSS+qQ==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"dunder-proto": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-symbols": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", + "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-tostringtag": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz", + "integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-symbols": "^1.0.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/hasown": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.3.tgz", + "integrity": "sha512-ej4AhfhfL2Q2zpMmLo7U1Uv9+PyhIZpgQLGT1F9miIGmiCJIoCgSmczFdrc97mWT4kVY72KA+WnnhJ5pghSvSg==", + "license": "MIT", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/headers-polyfill": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/headers-polyfill/-/headers-polyfill-5.0.1.tgz", + "integrity": "sha512-1TJ6Fih/b8h5TIcv+1+Hw0PDQWJTKDKzFZzcKOiW1wJza3XoAQlkCuXLbymPYB8+ZQyw8mHvdw560e8zVFIWyA==", + "license": "MIT", + "dependencies": { + "@types/set-cookie-parser": "^2.4.10", + "set-cookie-parser": "^3.0.1" + } + }, + "node_modules/hermes-estree": { + "version": "0.25.1", + "resolved": "https://registry.npmjs.org/hermes-estree/-/hermes-estree-0.25.1.tgz", + "integrity": "sha512-0wUoCcLp+5Ev5pDW2OriHC2MJCbwLwuRx+gAqMTOkGKJJiBCLjtrvy4PWUGn6MIVefecRpzoOZ/UV6iGdOr+Cw==", + "dev": true, + "license": "MIT" + }, 
+ "node_modules/hermes-parser": { + "version": "0.25.1", + "resolved": "https://registry.npmjs.org/hermes-parser/-/hermes-parser-0.25.1.tgz", + "integrity": "sha512-6pEjquH3rqaI6cYAXYPcz9MS4rY6R4ngRgrgfDshRptUZIc3lw0MCIJIGDj9++mfySOuPTHB4nrSW99BCvOPIA==", + "dev": true, + "license": "MIT", + "dependencies": { + "hermes-estree": "0.25.1" + } + }, + "node_modules/hono": { + "version": "4.12.16", + "resolved": "https://registry.npmjs.org/hono/-/hono-4.12.16.tgz", + "integrity": "sha512-jN0ZewiNAWSe5khM3EyCmBb250+b40wWbwNILNfEvq84VREWwOIkuUsFONk/3i3nqkz7Oe1PcpM2mwQEK2L9Kg==", + "license": "MIT", + "peer": true, + "engines": { + "node": ">=16.9.0" + } + }, + "node_modules/http-errors": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.1.tgz", + "integrity": "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ==", + "license": "MIT", + "dependencies": { + "depd": "~2.0.0", + "inherits": "~2.0.4", + "setprototypeof": "~1.2.0", + "statuses": "~2.0.2", + "toidentifier": "~1.0.1" + }, + "engines": { + "node": ">= 0.8" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/https-proxy-agent": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-7.0.6.tgz", + "integrity": "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==", + "license": "MIT", + "dependencies": { + "agent-base": "^7.1.2", + "debug": "4" + }, + "engines": { + "node": ">= 14" + } + }, + "node_modules/human-signals": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-8.0.1.tgz", + "integrity": "sha512-eKCa6bwnJhvxj14kZk5NCPc6Hb6BdsU9DZcOnmQKSnO1VKrfV0zCvtttPZUsBvjmNDn8rpcJfpwSYnHBjc95MQ==", + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/iconv-lite": { + "version": "0.7.2", + 
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.7.2.tgz", + "integrity": "sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw==", + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/ignore": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz", + "integrity": "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==", + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + "node_modules/import-fresh": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.3.1.tgz", + "integrity": "sha512-TR3KfrTZTYLPB6jUjfx6MF9WcWrHL9su5TObK4ZkYgBdWKPOFoSoQIdEuTuR82pmtxH2spWG9h6etwfr1pLBqQ==", + "license": "MIT", + "dependencies": { + "parent-module": "^1.0.0", + "resolve-from": "^4.0.0" + }, + "engines": { + "node": ">=6" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/imurmurhash": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz", + "integrity": "sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.19" + } + }, + "node_modules/inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", + "license": "ISC" + }, + "node_modules/internal-slot": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/internal-slot/-/internal-slot-1.1.0.tgz", + "integrity": 
"sha512-4gd7VpWNQNB4UKKCFFVcp1AVv+FMOgs9NKzjHKusc8jTMhd5eL1NqQqOpE0KzMds804/yHlglp3uxgluOqAPLw==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "hasown": "^2.0.2", + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/ip-address": { + "version": "10.1.0", + "resolved": "https://registry.npmjs.org/ip-address/-/ip-address-10.1.0.tgz", + "integrity": "sha512-XXADHxXmvT9+CRxhXg56LJovE+bmWnEWB78LB83VZTprKTmaC5QfruXocxzTZ2Kl0DNwKuBdlIhjL8LeY8Sf8Q==", + "license": "MIT", + "engines": { + "node": ">= 12" + } + }, + "node_modules/ipaddr.js": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", + "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", + "license": "MIT", + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/is-array-buffer": { + "version": "3.0.5", + "resolved": "https://registry.npmjs.org/is-array-buffer/-/is-array-buffer-3.0.5.tgz", + "integrity": "sha512-DDfANUiiG2wC1qawP66qlTugJeL5HyzMpfr8lLK+jMQirGzNod0B12cFB/9q838Ru27sBwfw78/rdoU7RERz6A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "get-intrinsic": "^1.2.6" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-arrayish": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz", + "integrity": "sha512-zz06S8t0ozoDXMG+ube26zeCTNXcKIPJZJi8hBrF4idCLms4CG9QtK7qBl1boi5ODzFpjswb5JPmHCbMpjaYzg==", + "license": "MIT" + }, + "node_modules/is-async-function": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-async-function/-/is-async-function-2.1.1.tgz", + "integrity": "sha512-9dgM/cZBnNvjzaMYHVoxxfPj2QXt22Ev7SuuPrs+xav0ukGB0S6d4ydZdEiM48kLx5kDV+QBPrpVnFyefL8kkQ==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"async-function": "^1.0.0", + "call-bound": "^1.0.3", + "get-proto": "^1.0.1", + "has-tostringtag": "^1.0.2", + "safe-regex-test": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-bigint": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-bigint/-/is-bigint-1.1.0.tgz", + "integrity": "sha512-n4ZT37wG78iz03xPRKJrHTdZbe3IicyucEtdRsV5yglwc3GyUfbAfpSeD0FJ41NbUNSt5wbhqfp1fS+BgnvDFQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-bigints": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-boolean-object": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/is-boolean-object/-/is-boolean-object-1.2.2.tgz", + "integrity": "sha512-wa56o2/ElJMYqjCjGkXri7it5FbebW5usLw/nPmCMs5DeZ7eziSYZhSmPRn0txqeW4LnAmQQU7FgqLpsEFKM4A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "has-tostringtag": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-bun-module": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-bun-module/-/is-bun-module-2.0.0.tgz", + "integrity": "sha512-gNCGbnnnnFAUGKeZ9PdbyeGYJqewpmc2aKHUEMO5nQPWU9lOmv7jcmQIv+qHD8fXW6W7qfuCwX4rY9LNRjXrkQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "semver": "^7.7.1" + } + }, + "node_modules/is-bun-module/node_modules/semver": { + "version": "7.7.4", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.4.tgz", + "integrity": "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==", + "dev": true, + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/is-callable": { + "version": "1.2.7", + "resolved": 
"https://registry.npmjs.org/is-callable/-/is-callable-1.2.7.tgz", + "integrity": "sha512-1BC0BVFhS/p0qtw6enp8e+8OD0UrK0oFLztSjNzhcKA3WDuJxxAPXzPuPtKkjEY9UUoEWlX/8fgKeu2S8i9JTA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-core-module": { + "version": "2.16.1", + "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.16.1.tgz", + "integrity": "sha512-UfoeMA6fIJ8wTYFEUjelnaGI67v6+N7qXJEvQuIGa99l4xsCruSYOVSQ0uPANn4dAzm8lkYPaKLrrijLq7x23w==", + "dev": true, + "license": "MIT", + "dependencies": { + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-data-view": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/is-data-view/-/is-data-view-1.0.2.tgz", + "integrity": "sha512-RKtWF8pGmS87i2D6gqQu/l7EYRlVdfzemCJN/P3UOs//x1QE7mfhvzHIApBTRf7axvT6DMGwSwBXYCT0nfB9xw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "get-intrinsic": "^1.2.6", + "is-typed-array": "^1.1.13" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-date-object": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-date-object/-/is-date-object-1.1.0.tgz", + "integrity": "sha512-PwwhEakHVKTdRNVOw+/Gyh0+MzlCl4R6qKvkhuvLtPMggI1WAHt9sOwZxQLSGpUaDnrdyDsomoRgNnCfKNSXXg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "has-tostringtag": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-docker": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-3.0.0.tgz", + "integrity": 
"sha512-eljcgEDlEns/7AXFosB5K/2nCM4P7FQPkGc/DWLy5rmFEWvZayGrik1d9/QIY5nJ4f9YsVvBkA6kJpHn9rISdQ==", + "license": "MIT", + "bin": { + "is-docker": "cli.js" + }, + "engines": { + "node": "^12.20.0 || ^14.13.1 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-finalizationregistry": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/is-finalizationregistry/-/is-finalizationregistry-1.1.1.tgz", + "integrity": "sha512-1pC6N8qWJbWoPtEjgcL2xyhQOP491EQjeUo3qTKcmV8YSDDJrOepfG8pcC7h/QgnQHYSv0mJ3Z/ZWxmatVrysg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-generator-function": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/is-generator-function/-/is-generator-function-1.1.2.tgz", + "integrity": "sha512-upqt1SkGkODW9tsGNG5mtXTXtECizwtS2kA161M+gJPc1xdb/Ax629af6YrTwcOeQHbewrPNlE5Dx7kzvXTizA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.4", + "generator-function": "^2.0.0", + "get-proto": "^1.0.1", + "has-tostringtag": "^1.0.2", + "safe-regex-test": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": 
"https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-glob": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", + "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", + "license": "MIT", + "dependencies": { + "is-extglob": "^2.1.1" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-in-ssh": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-in-ssh/-/is-in-ssh-1.0.0.tgz", + "integrity": "sha512-jYa6Q9rH90kR1vKB6NM7qqd1mge3Fx4Dhw5TVlK1MUBqhEOuCagrEHMevNuCcbECmXZ0ThXkRm+Ymr51HwEPAw==", + "license": "MIT", + "engines": { + "node": ">=20" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-inside-container": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-inside-container/-/is-inside-container-1.0.0.tgz", + "integrity": "sha512-KIYLCCJghfHZxqjYBE7rEy0OBuTd5xCHS7tHVgvCLkx7StIoaxwNW3hCALgEUjFfeRk+MG/Qxmp/vtETEF3tRA==", + "license": "MIT", + "dependencies": { + "is-docker": "^3.0.0" + }, + "bin": { + "is-inside-container": "cli.js" + }, + "engines": { + "node": ">=14.16" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-interactive": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-interactive/-/is-interactive-2.0.0.tgz", + "integrity": "sha512-qP1vozQRI+BMOPcjFzrjXuQvdak2pHNUMZoeG2eRbiSqyvbEf/wQtEOTOX1guk6E3t36RkaqiSt8A/6YElNxLQ==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-map": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/is-map/-/is-map-2.0.3.tgz", + "integrity": "sha512-1Qed0/Hr2m+YqxnM09CjA2d/i6YZNfF6R2oRAOj36eUdS6qIV/huPJNSEpKbupewFs+ZsJlxsjjPbc0/afW6Lw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": 
"https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-negative-zero": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/is-negative-zero/-/is-negative-zero-2.0.3.tgz", + "integrity": "sha512-5KoIu2Ngpyek75jXodFvnafB6DJgr3u8uuK0LEZJjrU19DrMD3EVERaR8sjz8CCGgpZvxPl9SuE1GMVPFHx1mw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-node-process": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/is-node-process/-/is-node-process-1.2.0.tgz", + "integrity": "sha512-Vg4o6/fqPxIjtxgUH5QLJhwZ7gW5diGCVlXpuUfELC62CuxM1iHcRe51f2W1FDy04Ai4KJkagKjx3XaqyfRKXw==", + "license": "MIT" + }, + "node_modules/is-number": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", + "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", + "license": "MIT", + "engines": { + "node": ">=0.12.0" + } + }, + "node_modules/is-number-object": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/is-number-object/-/is-number-object-1.1.1.tgz", + "integrity": "sha512-lZhclumE1G6VYD8VHe35wFaIif+CTy5SJIi5+3y4psDgWu4wPDoBhF8NxUOinEc7pHgiTsT6MaBb92rKhhD+Xw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "has-tostringtag": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-obj": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-obj/-/is-obj-3.0.0.tgz", + "integrity": "sha512-IlsXEHOjtKhpN8r/tRFj2nDyTmHvcfNeu/nrRIcXE17ROeatXchkojffa1SpdqW4cr/Fj6QkEf/Gn4zf6KKvEQ==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-plain-obj": { + "version": "4.1.0", + "resolved": 
"https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-4.1.0.tgz", + "integrity": "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-promise": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-4.0.0.tgz", + "integrity": "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==", + "license": "MIT" + }, + "node_modules/is-regex": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.2.1.tgz", + "integrity": "sha512-MjYsKHO5O7mCsmRGxWcLWheFqN9DJ/2TmngvjKXihe6efViPqc274+Fx/4fYj/r03+ESvBdTXK0V6tA3rgez1g==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "gopd": "^1.2.0", + "has-tostringtag": "^1.0.2", + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-regexp": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/is-regexp/-/is-regexp-3.1.0.tgz", + "integrity": "sha512-rbku49cWloU5bSMI+zaRaXdQHXnthP6DZ/vLnfdSKyL4zUzuWnomtOEiZZOd+ioQ+avFo/qau3KPTc7Fjy1uPA==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-set": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/is-set/-/is-set-2.0.3.tgz", + "integrity": "sha512-iPAjerrse27/ygGLxw+EBR9agv9Y6uLeYVJMu+QNCoouJ1/1ri0mGrcWpfCqFZuzzx3WjtwxG098X+n4OuRkPg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-shared-array-buffer": { + "version": "1.0.4", + "resolved": 
"https://registry.npmjs.org/is-shared-array-buffer/-/is-shared-array-buffer-1.0.4.tgz", + "integrity": "sha512-ISWac8drv4ZGfwKl5slpHG9OwPNty4jOWPRIhBpxOoD+hqITiwuipOQ2bNthAzwA3B4fIjO4Nln74N0S9byq8A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-stream": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-4.0.1.tgz", + "integrity": "sha512-Dnz92NInDqYckGEUJv689RbRiTSEHCQ7wOVeALbkOz999YpqT46yMRIGtSNl2iCL1waAZSx40+h59NV/EwzV/A==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-string": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/is-string/-/is-string-1.1.1.tgz", + "integrity": "sha512-BtEeSsoaQjlSPBemMQIrY1MY0uM6vnS1g5fmufYOtnxLGUZM2178PKbhsk7Ffv58IX+ZtcvoGwccYsh0PglkAA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "has-tostringtag": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-symbol": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/is-symbol/-/is-symbol-1.1.1.tgz", + "integrity": "sha512-9gGx6GTtCQM73BgmHQXfDmLtfjjTUDSyoxTCbp5WtoixAhfgsDirWIcVQ/IHpvI5Vgd5i/J5F7B9cN/WlVbC/w==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "has-symbols": "^1.1.0", + "safe-regex-test": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-typed-array": { + "version": "1.1.15", + "resolved": "https://registry.npmjs.org/is-typed-array/-/is-typed-array-1.1.15.tgz", + "integrity": "sha512-p3EcsicXjit7SaskXHs1hA91QxgTw46Fv6EFKKGS5DRFLD8yKnohjF3hxoju94b/OcMZoQukzpPpBE9uLVKzgQ==", + "dev": 
true, + "license": "MIT", + "dependencies": { + "which-typed-array": "^1.1.16" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-unicode-supported": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-2.1.0.tgz", + "integrity": "sha512-mE00Gnza5EEB3Ds0HfMyllZzbBrmLOX3vfWoj9A9PEnTfratQ/BcaJOuMhnkhjXvb2+FkY3VuHqtAGpTPmglFQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-weakmap": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/is-weakmap/-/is-weakmap-2.0.2.tgz", + "integrity": "sha512-K5pXYOm9wqY1RgjpL3YTkF39tni1XajUIkawTLUo9EZEVUFga5gSQJF8nNS7ZwJQ02y+1YCNYcMh+HIf1ZqE+w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-weakref": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/is-weakref/-/is-weakref-1.1.1.tgz", + "integrity": "sha512-6i9mGWSlqzNMEqpCp93KwRS1uUOodk2OJ6b+sq7ZPDSy2WuI5NFIxp/254TytR8ftefexkWn5xNiHUNpPOfSew==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-weakset": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/is-weakset/-/is-weakset-2.0.4.tgz", + "integrity": "sha512-mfcwb6IzQyOKTs84CQMrOwW4gQcaTOAWJ0zzJCl2WSPDrWk/OzDaImWFH3djXhb24g4eudZfLRozAvPGw4d9hQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "get-intrinsic": "^1.2.6" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-wsl": { + "version": "3.1.1", + "resolved": 
"https://registry.npmjs.org/is-wsl/-/is-wsl-3.1.1.tgz", + "integrity": "sha512-e6rvdUCiQCAuumZslxRJWR/Doq4VpPR82kqclvcS0efgt430SlGIk05vdCN58+VrzgtIcfNODjozVielycD4Sw==", + "license": "MIT", + "dependencies": { + "is-inside-container": "^1.0.0" + }, + "engines": { + "node": ">=16" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/isarray": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.5.tgz", + "integrity": "sha512-xHjhDr3cNBK0BzdUJSPXZntQUx/mwMS5Rw4A7lPJ90XGAO6ISP/ePDNuo0vhqOZU+UD5JoodwCAAoZQd3FeAKw==", + "dev": true, + "license": "MIT" + }, + "node_modules/isexe": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", + "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", + "license": "ISC" + }, + "node_modules/iterator.prototype": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/iterator.prototype/-/iterator.prototype-1.1.5.tgz", + "integrity": "sha512-H0dkQoCa3b2VEeKQBOxFph+JAbcrQdE7KC0UkqwpLmv2EC4P41QXP+rqo9wYodACiG5/WM5s9oDApTU8utwj9g==", + "dev": true, + "license": "MIT", + "dependencies": { + "define-data-property": "^1.1.4", + "es-object-atoms": "^1.0.0", + "get-intrinsic": "^1.2.6", + "get-proto": "^1.0.0", + "has-symbols": "^1.1.0", + "set-function-name": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/jiti": { + "version": "2.6.1", + "resolved": "https://registry.npmjs.org/jiti/-/jiti-2.6.1.tgz", + "integrity": "sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ==", + "dev": true, + "license": "MIT", + "bin": { + "jiti": "lib/jiti-cli.mjs" + } + }, + "node_modules/jose": { + "version": "6.2.3", + "resolved": "https://registry.npmjs.org/jose/-/jose-6.2.3.tgz", + "integrity": "sha512-YYVDInQKFJfR/xa3ojUTl8c2KoTwiL1R5Wg9YCydwH0x0B9grbzlg5HC7mMjCtUJjbQ/YnGEZIhI5tCgfTb4Hw==", + "license": 
"MIT", + "funding": { + "url": "https://github.com/sponsors/panva" + } + }, + "node_modules/js-tokens": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz", + "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==", + "license": "MIT" + }, + "node_modules/js-yaml": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz", + "integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==", + "license": "MIT", + "dependencies": { + "argparse": "^2.0.1" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/jsesc": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-3.1.0.tgz", + "integrity": "sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA==", + "license": "MIT", + "bin": { + "jsesc": "bin/jsesc" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/json-buffer": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.1.tgz", + "integrity": "sha512-4bV5BfR2mqfQTJm+V5tPPdf+ZpuhiIvTuAB5g8kcrXOZpTT/QwwVRWBywX1ozr6lEuPdbHxwaJlm9G6mI2sfSQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-parse-even-better-errors": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz", + "integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==", + "license": "MIT" + }, + "node_modules/json-schema-traverse": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", + "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-schema-typed": { + 
"version": "8.0.2", + "resolved": "https://registry.npmjs.org/json-schema-typed/-/json-schema-typed-8.0.2.tgz", + "integrity": "sha512-fQhoXdcvc3V28x7C7BMs4P5+kNlgUURe2jmUT1T//oBRMDrqy1QPelJimwZGo7Hg9VPV3EQV5Bnq4hbFy2vetA==", + "license": "BSD-2-Clause" + }, + "node_modules/json-stable-stringify-without-jsonify": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz", + "integrity": "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/json5": { + "version": "2.2.3", + "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz", + "integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==", + "license": "MIT", + "bin": { + "json5": "lib/cli.js" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/jsonfile": { + "version": "6.2.1", + "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.2.1.tgz", + "integrity": "sha512-zwOTdL3rFQ/lRdBnntKVOX6k5cKJwEc1HdilT71BWEu7J41gXIB2MRp+vxduPSwZJPWBxEzv4yH1wYLJGUHX4Q==", + "license": "MIT", + "dependencies": { + "universalify": "^2.0.0" + }, + "optionalDependencies": { + "graceful-fs": "^4.1.6" + } + }, + "node_modules/jsx-ast-utils": { + "version": "3.3.5", + "resolved": "https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-3.3.5.tgz", + "integrity": "sha512-ZZow9HBI5O6EPgSJLUb8n2NKgmVWTwCvHGwFuJlMjvLFqlGG6pjirPhtdsseaLZjSibD8eegzmYpUZwoIlj2cQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "array-includes": "^3.1.6", + "array.prototype.flat": "^1.3.1", + "object.assign": "^4.1.4", + "object.values": "^1.1.6" + }, + "engines": { + "node": ">=4.0" + } + }, + "node_modules/keyv": { + "version": "4.5.4", + "resolved": "https://registry.npmjs.org/keyv/-/keyv-4.5.4.tgz", + "integrity": 
"sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw==", + "dev": true, + "license": "MIT", + "dependencies": { + "json-buffer": "3.0.1" + } + }, + "node_modules/kleur": { + "version": "4.1.5", + "resolved": "https://registry.npmjs.org/kleur/-/kleur-4.1.5.tgz", + "integrity": "sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/language-subtag-registry": { + "version": "0.3.23", + "resolved": "https://registry.npmjs.org/language-subtag-registry/-/language-subtag-registry-0.3.23.tgz", + "integrity": "sha512-0K65Lea881pHotoGEa5gDlMxt3pctLi2RplBb7Ezh4rRdLEOtgi7n4EwK9lamnUCkKBqaeKRVebTq6BAxSkpXQ==", + "dev": true, + "license": "CC0-1.0" + }, + "node_modules/language-tags": { + "version": "1.0.9", + "resolved": "https://registry.npmjs.org/language-tags/-/language-tags-1.0.9.tgz", + "integrity": "sha512-MbjN408fEndfiQXbFQ1vnd+1NoLDsnQW41410oQBXiyXDMYH5z505juWa4KUE1LqxRC7DgOgZDbKLxHIwm27hA==", + "dev": true, + "license": "MIT", + "dependencies": { + "language-subtag-registry": "^0.3.20" + }, + "engines": { + "node": ">=0.10" + } + }, + "node_modules/levn": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/levn/-/levn-0.4.1.tgz", + "integrity": "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1", + "type-check": "~0.4.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/lightningcss": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss/-/lightningcss-1.32.0.tgz", + "integrity": "sha512-NXYBzinNrblfraPGyrbPoD19C1h9lfI/1mzgWYvXUTe414Gz/X1FD2XBZSZM7rRTrMA8JL3OtAaGifrIKhQ5yQ==", + "dev": true, + "license": "MPL-2.0", + "dependencies": { + "detect-libc": "^2.0.3" + }, + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": 
"opencollective", + "url": "https://opencollective.com/parcel" + }, + "optionalDependencies": { + "lightningcss-android-arm64": "1.32.0", + "lightningcss-darwin-arm64": "1.32.0", + "lightningcss-darwin-x64": "1.32.0", + "lightningcss-freebsd-x64": "1.32.0", + "lightningcss-linux-arm-gnueabihf": "1.32.0", + "lightningcss-linux-arm64-gnu": "1.32.0", + "lightningcss-linux-arm64-musl": "1.32.0", + "lightningcss-linux-x64-gnu": "1.32.0", + "lightningcss-linux-x64-musl": "1.32.0", + "lightningcss-win32-arm64-msvc": "1.32.0", + "lightningcss-win32-x64-msvc": "1.32.0" + } + }, + "node_modules/lightningcss-android-arm64": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-android-arm64/-/lightningcss-android-arm64-1.32.0.tgz", + "integrity": "sha512-YK7/ClTt4kAK0vo6w3X+Pnm0D2cf2vPHbhOXdoNti1Ga0al1P4TBZhwjATvjNwLEBCnKvjJc2jQgHXH0NEwlAg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "android" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-darwin-arm64": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-darwin-arm64/-/lightningcss-darwin-arm64-1.32.0.tgz", + "integrity": "sha512-RzeG9Ju5bag2Bv1/lwlVJvBE3q6TtXskdZLLCyfg5pt+HLz9BqlICO7LZM7VHNTTn/5PRhHFBSjk5lc4cmscPQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-darwin-x64": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-darwin-x64/-/lightningcss-darwin-x64-1.32.0.tgz", + "integrity": "sha512-U+QsBp2m/s2wqpUYT/6wnlagdZbtZdndSmut/NJqlCcMLTWp5muCrID+K5UJ6jqD2BFshejCYXniPDbNh73V8w==", + "cpu": [ + "x64" + ], + "dev": true, + 
"license": "MPL-2.0", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-freebsd-x64": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-freebsd-x64/-/lightningcss-freebsd-x64-1.32.0.tgz", + "integrity": "sha512-JCTigedEksZk3tHTTthnMdVfGf61Fky8Ji2E4YjUTEQX14xiy/lTzXnu1vwiZe3bYe0q+SpsSH/CTeDXK6WHig==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "freebsd" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-linux-arm-gnueabihf": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-linux-arm-gnueabihf/-/lightningcss-linux-arm-gnueabihf-1.32.0.tgz", + "integrity": "sha512-x6rnnpRa2GL0zQOkt6rts3YDPzduLpWvwAF6EMhXFVZXD4tPrBkEFqzGowzCsIWsPjqSK+tyNEODUBXeeVHSkw==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-linux-arm64-gnu": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-linux-arm64-gnu/-/lightningcss-linux-arm64-gnu-1.32.0.tgz", + "integrity": "sha512-0nnMyoyOLRJXfbMOilaSRcLH3Jw5z9HDNGfT/gwCPgaDjnx0i8w7vBzFLFR1f6CMLKF8gVbebmkUN3fa/kQJpQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-linux-arm64-musl": { + "version": "1.32.0", + "resolved": 
"https://registry.npmjs.org/lightningcss-linux-arm64-musl/-/lightningcss-linux-arm64-musl-1.32.0.tgz", + "integrity": "sha512-UpQkoenr4UJEzgVIYpI80lDFvRmPVg6oqboNHfoH4CQIfNA+HOrZ7Mo7KZP02dC6LjghPQJeBsvXhJod/wnIBg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-linux-x64-gnu": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-linux-x64-gnu/-/lightningcss-linux-x64-gnu-1.32.0.tgz", + "integrity": "sha512-V7Qr52IhZmdKPVr+Vtw8o+WLsQJYCTd8loIfpDaMRWGUZfBOYEJeyJIkqGIDMZPwPx24pUMfwSxxI8phr/MbOA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-linux-x64-musl": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-linux-x64-musl/-/lightningcss-linux-x64-musl-1.32.0.tgz", + "integrity": "sha512-bYcLp+Vb0awsiXg/80uCRezCYHNg1/l3mt0gzHnWV9XP1W5sKa5/TCdGWaR/zBM2PeF/HbsQv/j2URNOiVuxWg==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-win32-arm64-msvc": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-win32-arm64-msvc/-/lightningcss-win32-arm64-msvc-1.32.0.tgz", + "integrity": "sha512-8SbC8BR40pS6baCM8sbtYDSwEVQd4JlFTOlaD3gWGHfThTcABnNDBda6eTZeqbofalIJhFx0qKzgHJmcPTnGdw==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 
12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lightningcss-win32-x64-msvc": { + "version": "1.32.0", + "resolved": "https://registry.npmjs.org/lightningcss-win32-x64-msvc/-/lightningcss-win32-x64-msvc-1.32.0.tgz", + "integrity": "sha512-Amq9B/SoZYdDi1kFrojnoqPLxYhQ4Wo5XiL8EVJrVsB8ARoC1PWW6VGtT0WKCemjy8aC+louJnjS7U18x3b06Q==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MPL-2.0", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">= 12.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/parcel" + } + }, + "node_modules/lines-and-columns": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/lines-and-columns/-/lines-and-columns-1.2.4.tgz", + "integrity": "sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==", + "license": "MIT" + }, + "node_modules/locate-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-6.0.0.tgz", + "integrity": "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-locate": "^5.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/lodash.merge": { + "version": "4.6.2", + "resolved": "https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz", + "integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/log-symbols": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/log-symbols/-/log-symbols-6.0.0.tgz", + "integrity": "sha512-i24m8rpwhmPIS4zscNzK6MSEhk0DUWa/8iYQWxhffV8jkI4Phvs3F+quL5xvS0gdQR0FyTCMMH33Y78dDTzzIw==", + "license": "MIT", + "dependencies": { + "chalk": "^5.3.0", + 
"is-unicode-supported": "^1.3.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/log-symbols/node_modules/chalk": { + "version": "5.6.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-5.6.2.tgz", + "integrity": "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==", + "license": "MIT", + "engines": { + "node": "^12.17.0 || ^14.13 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/log-symbols/node_modules/is-unicode-supported": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-1.3.0.tgz", + "integrity": "sha512-43r2mRvz+8JRIKnWJ+3j8JtjRKZ6GmjzfaE/qiBJnikNnYv/6bagRJ1kUhNk8R5EX/GkobD+r+sfxCPJsiKBLQ==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/loose-envify": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz", + "integrity": "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "js-tokens": "^3.0.0 || ^4.0.0" + }, + "bin": { + "loose-envify": "cli.js" + } + }, + "node_modules/lru-cache": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz", + "integrity": "sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==", + "license": "ISC", + "dependencies": { + "yallist": "^3.0.2" + } + }, + "node_modules/lucide-react": { + "version": "1.14.0", + "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-1.14.0.tgz", + "integrity": "sha512-+1mdWcfSJVUsaTIjN9zoezmUhfXo5l0vP7ekBMPo3jcS/aIkxHnXqAPsByszMZx/Y8oQBRJxJx5xg+RH3urzxA==", + "license": "ISC", + "peerDependencies": { 
+ "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0" + } + }, + "node_modules/magic-string": { + "version": "0.30.21", + "resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.21.tgz", + "integrity": "sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@jridgewell/sourcemap-codec": "^1.5.5" + } + }, + "node_modules/math-intrinsics": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", + "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/media-typer": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-1.1.0.tgz", + "integrity": "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/merge-descriptors": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-2.0.0.tgz", + "integrity": "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/merge-stream": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz", + "integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==", + "license": "MIT" + }, + "node_modules/merge2": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", + "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", + "license": "MIT", + "engines": { 
+ "node": ">= 8" + } + }, + "node_modules/micromatch": { + "version": "4.0.8", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz", + "integrity": "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==", + "license": "MIT", + "dependencies": { + "braces": "^3.0.3", + "picomatch": "^2.3.1" + }, + "engines": { + "node": ">=8.6" + } + }, + "node_modules/mime-db": { + "version": "1.54.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz", + "integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/mime-types": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.2.tgz", + "integrity": "sha512-Lbgzdk0h4juoQ9fCKXW4by0UJqj+nOOrI9MJ1sSj4nI8aI2eo1qmvQEie4VD1glsS250n15LsWsYtCugiStS5A==", + "license": "MIT", + "dependencies": { + "mime-db": "^1.54.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/mimic-fn": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz", + "integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/mimic-function": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/mimic-function/-/mimic-function-5.0.1.tgz", + "integrity": "sha512-VP79XUPxV2CigYP3jWwAUFSku2aKqBH7uTAapFWCBqutsbmDo96KY5o8uh6U+/YSIn5OxJnXp73beVkpqMIGhA==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/minimatch": { + "version": "3.1.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.5.tgz", + "integrity": 
"sha512-VgjWUsnnT6n+NUk6eZq77zeFdpW2LWDzP6zFGrCbHXiYNul5Dzqk2HHQ5uFH2DNW5Xbp8+jVzaeNt94ssEEl4w==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^1.1.7" + }, + "engines": { + "node": "*" + } + }, + "node_modules/minimist": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.8.tgz", + "integrity": "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", + "license": "MIT" + }, + "node_modules/msw": { + "version": "2.14.2", + "resolved": "https://registry.npmjs.org/msw/-/msw-2.14.2.tgz", + "integrity": "sha512-D2bTe0tpuf9nw4DA39wFaqUD/hRPKj0DKpo2lAqu+A47Ifg4+h0hbfn6QxVOsiUY2uhgEN6TTpGSHDsc+ysYNg==", + "hasInstallScript": true, + "license": "MIT", + "dependencies": { + "@inquirer/confirm": "^6.0.11", + "@mswjs/interceptors": "^0.41.3", + "@open-draft/deferred-promise": "^3.0.0", + "@types/statuses": "^2.0.6", + "cookie": "^1.1.1", + "graphql": "^16.13.2", + "headers-polyfill": "^5.0.1", + "is-node-process": "^1.2.0", + "outvariant": "^1.4.3", + "path-to-regexp": "^6.3.0", + "picocolors": "^1.1.1", + "rettime": "^0.11.7", + "statuses": "^2.0.2", + "strict-event-emitter": "^0.5.1", + "tough-cookie": "^6.0.1", + "type-fest": "^5.5.0", + "until-async": "^3.0.2", + "yargs": "^17.7.2" + }, + "bin": { + "msw": "cli/index.js" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/mswjs" + }, + "peerDependencies": { + "typescript": ">= 4.8.x" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + } + } + }, + "node_modules/msw/node_modules/cookie": { + "version": "1.1.1", + "resolved": 
"https://registry.npmjs.org/cookie/-/cookie-1.1.1.tgz", + "integrity": "sha512-ei8Aos7ja0weRpFzJnEA9UHJ/7XQmqglbRwnf2ATjcB9Wq874VKH9kfjjirM6UhU2/E5fFYadylyhFldcqSidQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/mute-stream": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/mute-stream/-/mute-stream-3.0.0.tgz", + "integrity": "sha512-dkEJPVvun4FryqBmZ5KhDo0K9iDXAwn08tMLDinNdRBNPcYEDiWYysLcc6k3mjTMlbP9KyylvRpd4wFtwrT9rw==", + "license": "ISC", + "engines": { + "node": "^20.17.0 || >=22.9.0" + } + }, + "node_modules/nanoid": { + "version": "3.3.12", + "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.12.tgz", + "integrity": "sha512-ZB9RH/39qpq5Vu6Y+NmUaFhQR6pp+M2Xt76XBnEwDaGcVAqhlvxrl3B2bKS5D3NH3QR76v3aSrKaF/Kiy7lEtQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "bin": { + "nanoid": "bin/nanoid.cjs" + }, + "engines": { + "node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1" + } + }, + "node_modules/napi-postinstall": { + "version": "0.3.4", + "resolved": "https://registry.npmjs.org/napi-postinstall/-/napi-postinstall-0.3.4.tgz", + "integrity": "sha512-PHI5f1O0EP5xJ9gQmFGMS6IZcrVvTjpXjz7Na41gTE7eE2hK11lg04CECCYEEjdc17EV4DO+fkGEtt7TpTaTiQ==", + "dev": true, + "license": "MIT", + "bin": { + "napi-postinstall": "lib/cli.js" + }, + "engines": { + "node": "^12.20.0 || ^14.18.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/napi-postinstall" + } + }, + "node_modules/natural-compare": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/natural-compare/-/natural-compare-1.4.0.tgz", + "integrity": "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==", + "dev": true, + "license": "MIT" + }, + "node_modules/negotiator": { + "version": "1.0.0", + "resolved": 
"https://registry.npmjs.org/negotiator/-/negotiator-1.0.0.tgz", + "integrity": "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/next": { + "version": "16.2.4", + "resolved": "https://registry.npmjs.org/next/-/next-16.2.4.tgz", + "integrity": "sha512-kPvz56wF5frc+FxlHI5qnklCzbq53HTwORaWBGdT0vNoKh1Aya9XC8aPauH4NJxqtzbWsS5mAbctm4cr+EkQ2Q==", + "license": "MIT", + "dependencies": { + "@next/env": "16.2.4", + "@swc/helpers": "0.5.15", + "baseline-browser-mapping": "^2.9.19", + "caniuse-lite": "^1.0.30001579", + "postcss": "8.4.31", + "styled-jsx": "5.1.6" + }, + "bin": { + "next": "dist/bin/next" + }, + "engines": { + "node": ">=20.9.0" + }, + "optionalDependencies": { + "@next/swc-darwin-arm64": "16.2.4", + "@next/swc-darwin-x64": "16.2.4", + "@next/swc-linux-arm64-gnu": "16.2.4", + "@next/swc-linux-arm64-musl": "16.2.4", + "@next/swc-linux-x64-gnu": "16.2.4", + "@next/swc-linux-x64-musl": "16.2.4", + "@next/swc-win32-arm64-msvc": "16.2.4", + "@next/swc-win32-x64-msvc": "16.2.4", + "sharp": "^0.34.5" + }, + "peerDependencies": { + "@opentelemetry/api": "^1.1.0", + "@playwright/test": "^1.51.1", + "babel-plugin-react-compiler": "*", + "react": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", + "react-dom": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", + "sass": "^1.3.0" + }, + "peerDependenciesMeta": { + "@opentelemetry/api": { + "optional": true + }, + "@playwright/test": { + "optional": true + }, + "babel-plugin-react-compiler": { + "optional": true + }, + "sass": { + "optional": true + } + } + }, + "node_modules/next-themes": { + "version": "0.4.6", + "resolved": "https://registry.npmjs.org/next-themes/-/next-themes-0.4.6.tgz", + "integrity": "sha512-pZvgD5L0IEvX5/9GWyHMf3m8BKiVQwsCMHfoFosXtXBMnaS0ZnIJ9ST4b4NqLVKDEm8QBxoNNGNaBv2JNF6XNA==", + "license": "MIT", + "peerDependencies": { + "react": "^16.8 || ^17 || ^18 || ^19 || 
^19.0.0-rc", + "react-dom": "^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc" + } + }, + "node_modules/next/node_modules/postcss": { + "version": "8.4.31", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz", + "integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/postcss" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "nanoid": "^3.3.6", + "picocolors": "^1.0.0", + "source-map-js": "^1.0.2" + }, + "engines": { + "node": "^10 || ^12 || >=14" + } + }, + "node_modules/node-domexception": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz", + "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==", + "deprecated": "Use your platform's native DOMException instead", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/jimmywarting" + }, + { + "type": "github", + "url": "https://paypal.me/jimmywarting" + } + ], + "license": "MIT", + "engines": { + "node": ">=10.5.0" + } + }, + "node_modules/node-exports-info": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/node-exports-info/-/node-exports-info-1.6.0.tgz", + "integrity": "sha512-pyFS63ptit/P5WqUkt+UUfe+4oevH+bFeIiPPdfb0pFeYEu/1ELnJu5l+5EcTKYL5M7zaAa7S8ddywgXypqKCw==", + "dev": true, + "license": "MIT", + "dependencies": { + "array.prototype.flatmap": "^1.3.3", + "es-errors": "^1.3.0", + "object.entries": "^1.1.9", + "semver": "^6.3.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/node-fetch": { + "version": "3.3.2", + "resolved": 
"https://registry.npmjs.org/node-fetch/-/node-fetch-3.3.2.tgz", + "integrity": "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==", + "license": "MIT", + "dependencies": { + "data-uri-to-buffer": "^4.0.0", + "fetch-blob": "^3.1.4", + "formdata-polyfill": "^4.0.10" + }, + "engines": { + "node": "^12.20.0 || ^14.13.1 || >=16.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/node-fetch" + } + }, + "node_modules/node-releases": { + "version": "2.0.38", + "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.38.tgz", + "integrity": "sha512-3qT/88Y3FbH/Kx4szpQQ4HzUbVrHPKTLVpVocKiLfoYvw9XSGOX2FmD2d6DrXbVYyAQTF2HeF6My8jmzx7/CRw==", + "license": "MIT" + }, + "node_modules/npm-run-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-6.0.0.tgz", + "integrity": "sha512-9qny7Z9DsQU8Ou39ERsPU4OZQlSTP47ShQzuKZ6PRXpYLtIFgl/DEBYEXKlvcEa+9tHVcK8CF81Y2V72qaZhWA==", + "license": "MIT", + "dependencies": { + "path-key": "^4.0.0", + "unicorn-magic": "^0.3.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/npm-run-path/node_modules/path-key": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-4.0.0.tgz", + "integrity": "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/object-assign": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz", + "integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/object-inspect": { + "version": "1.13.4", + 
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", + "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/object-keys": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz", + "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/object-treeify": { + "version": "1.1.33", + "resolved": "https://registry.npmjs.org/object-treeify/-/object-treeify-1.1.33.tgz", + "integrity": "sha512-EFVjAYfzWqWsBMRHPMAXLCDIJnpMhdWAqR7xG6M6a2cs6PMFpl/+Z20w9zDW4vkxOFfddegBKq9Rehd0bxWE7A==", + "license": "MIT", + "engines": { + "node": ">= 10" + } + }, + "node_modules/object.assign": { + "version": "4.1.7", + "resolved": "https://registry.npmjs.org/object.assign/-/object.assign-4.1.7.tgz", + "integrity": "sha512-nK28WOo+QIjBkDduTINE4JkF/UJJKyf2EJxvJKfblDpyg0Q+pkOHNTL0Qwy6NP6FhE/EnzV73BxxqcJaXY9anw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0", + "has-symbols": "^1.1.0", + "object-keys": "^1.1.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/object.entries": { + "version": "1.1.9", + "resolved": "https://registry.npmjs.org/object.entries/-/object.entries-1.1.9.tgz", + "integrity": "sha512-8u/hfXFRBD1O0hPUjioLhoWFHRmt6tKA4/vZPyckBr18l1KE9uHrFaFaUi8MDRTpi4uak2goyPTSNJLXX2k2Hw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.1.1" + }, + 
"engines": { + "node": ">= 0.4" + } + }, + "node_modules/object.fromentries": { + "version": "2.0.8", + "resolved": "https://registry.npmjs.org/object.fromentries/-/object.fromentries-2.0.8.tgz", + "integrity": "sha512-k6E21FzySsSK5a21KRADBd/NGneRegFO5pLHfdQLpRDETUNJueLXs3WCzyQ3tFRDYgbq3KHGXfTbi2bs8WQ6rQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.2", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/object.groupby": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/object.groupby/-/object.groupby-1.0.3.tgz", + "integrity": "sha512-+Lhy3TQTuzXI5hevh8sBGqbmurHbbIjAi0Z4S63nthVLmLxfbj4T54a4CfZrXIrt9iP4mVAPYMo/v99taj3wjQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/object.values": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/object.values/-/object.values-1.2.1.tgz", + "integrity": "sha512-gXah6aZrcUxjWg2zR2MwouP2eHlCBzdV4pygudehaKXSGW4v2AsRQUK+lwwXhii6KFZcunEnmSUoYp5CXibxtA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/on-finished": { + "version": "2.4.1", + "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", + "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", + "license": "MIT", + "dependencies": { + "ee-first": "1.1.1" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/once": { + "version": "1.4.0", + "resolved": 
"https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "license": "ISC", + "dependencies": { + "wrappy": "1" + } + }, + "node_modules/onetime": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/onetime/-/onetime-7.0.0.tgz", + "integrity": "sha512-VXJjc87FScF88uafS3JllDgvAm+c/Slfz06lorj2uAY34rlUu0Nt+v8wreiImcrgAjjIHp1rXpTDlLOGw29WwQ==", + "license": "MIT", + "dependencies": { + "mimic-function": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/open": { + "version": "11.0.0", + "resolved": "https://registry.npmjs.org/open/-/open-11.0.0.tgz", + "integrity": "sha512-smsWv2LzFjP03xmvFoJ331ss6h+jixfA4UUV/Bsiyuu4YJPfN+FIQGOIiv4w9/+MoHkfkJ22UIaQWRVFRfH6Vw==", + "license": "MIT", + "dependencies": { + "default-browser": "^5.4.0", + "define-lazy-prop": "^3.0.0", + "is-in-ssh": "^1.0.0", + "is-inside-container": "^1.0.0", + "powershell-utils": "^0.1.0", + "wsl-utils": "^0.3.0" + }, + "engines": { + "node": ">=20" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/optionator": { + "version": "0.9.4", + "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz", + "integrity": "sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==", + "dev": true, + "license": "MIT", + "dependencies": { + "deep-is": "^0.1.3", + "fast-levenshtein": "^2.0.6", + "levn": "^0.4.1", + "prelude-ls": "^1.2.1", + "type-check": "^0.4.0", + "word-wrap": "^1.2.5" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/ora": { + "version": "8.2.0", + "resolved": "https://registry.npmjs.org/ora/-/ora-8.2.0.tgz", + "integrity": "sha512-weP+BZ8MVNnlCm8c0Qdc1WSWq4Qn7I+9CJGm7Qali6g44e/PUzbjNqJX5NJ9ljlNMosfJvg1fKEGILklK9cwnw==", + "license": "MIT", + "dependencies": { + "chalk": 
"^5.3.0", + "cli-cursor": "^5.0.0", + "cli-spinners": "^2.9.2", + "is-interactive": "^2.0.0", + "is-unicode-supported": "^2.0.0", + "log-symbols": "^6.0.0", + "stdin-discarder": "^0.2.2", + "string-width": "^7.2.0", + "strip-ansi": "^7.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/ora/node_modules/chalk": { + "version": "5.6.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-5.6.2.tgz", + "integrity": "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==", + "license": "MIT", + "engines": { + "node": "^12.17.0 || ^14.13 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/outvariant": { + "version": "1.4.3", + "resolved": "https://registry.npmjs.org/outvariant/-/outvariant-1.4.3.tgz", + "integrity": "sha512-+Sl2UErvtsoajRDKCE5/dBz4DIvHXQQnAxtQTF04OJxY0+DyZXSo5P5Bb7XYWOh81syohlYL24hbDwxedPUJCA==", + "license": "MIT" + }, + "node_modules/own-keys": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/own-keys/-/own-keys-1.0.1.tgz", + "integrity": "sha512-qFOyK5PjiWZd+QQIh+1jhdb9LpxTF0qs7Pm8o5QHYZ0M3vKqSqzsZaEB6oWlxZ+q2sJBMI/Ktgd2N5ZwQoRHfg==", + "dev": true, + "license": "MIT", + "dependencies": { + "get-intrinsic": "^1.2.6", + "object-keys": "^1.1.1", + "safe-push-apply": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/p-limit": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.1.0.tgz", + "integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "yocto-queue": "^0.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/p-locate": { + 
"version": "5.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-5.0.0.tgz", + "integrity": "sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-limit": "^3.0.2" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/parent-module": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/parent-module/-/parent-module-1.0.1.tgz", + "integrity": "sha512-GQ2EWRpQV8/o+Aw8YqtfZZPfNRWZYkbidE9k5rpl/hC3vtHHBfGm2Ifi6qWV+coDGkrUKZAxE3Lot5kcsRlh+g==", + "license": "MIT", + "dependencies": { + "callsites": "^3.0.0" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/parse-json": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-5.2.0.tgz", + "integrity": "sha512-ayCKvm/phCGxOkYRSCM82iDwct8/EonSEgCSxWxD7ve6jHggsFl4fZVQBPRNgQoKiuV/odhFrGzQXZwbifC8Rg==", + "license": "MIT", + "dependencies": { + "@babel/code-frame": "^7.0.0", + "error-ex": "^1.3.1", + "json-parse-even-better-errors": "^2.3.0", + "lines-and-columns": "^1.1.6" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/parse-ms": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/parse-ms/-/parse-ms-4.0.0.tgz", + "integrity": "sha512-TXfryirbmq34y8QBwgqCVLi+8oA3oWx2eAnSn62ITyEhEYaWRlVZ2DvMM9eZbMs/RfxPu/PK/aBLyGj4IrqMHw==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/parseurl": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", + "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + 
"node_modules/path-browserify": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/path-browserify/-/path-browserify-1.0.1.tgz", + "integrity": "sha512-b7uo2UCUOYZcnF/3ID0lulOJi/bafxa1xPe7ZPsammBSpjSWQkjNxlt635YGS2MiR9GjvuXCtz2emr3jbsz98g==", + "license": "MIT" + }, + "node_modules/path-exists": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", + "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-key": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", + "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-parse": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz", + "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==", + "dev": true, + "license": "MIT" + }, + "node_modules/path-to-regexp": { + "version": "6.3.0", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-6.3.0.tgz", + "integrity": "sha512-Yhpw4T9C6hPpgPeA28us07OJeqZ5EzQTkbfwuhsUg0c237RomFoETJgmp2sa3F/41gfLE6G5cqcYwznmeEeOlQ==", + "license": "MIT" + }, + "node_modules/picocolors": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz", + "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==", + "license": "ISC" + }, + "node_modules/picomatch": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.2.tgz", + "integrity": "sha512-V7+vQEJ06Z+c5tSye8S+nHUfI51xoXIXjHQ99cQtKUkQqqO1kO/KCJUfZXuB47h/YBlDhah2H3hdUGXn8ie0oA==", + "license": "MIT", 
+ "engines": { + "node": ">=8.6" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/pkce-challenge": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/pkce-challenge/-/pkce-challenge-5.0.1.tgz", + "integrity": "sha512-wQ0b/W4Fr01qtpHlqSqspcj3EhBvimsdh0KlHhH8HRZnMsEa0ea2fTULOXOS9ccQr3om+GcGRk4e+isrZWV8qQ==", + "license": "MIT", + "engines": { + "node": ">=16.20.0" + } + }, + "node_modules/possible-typed-array-names": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/possible-typed-array-names/-/possible-typed-array-names-1.1.0.tgz", + "integrity": "sha512-/+5VFTchJDoVj3bhoqi6UeymcD00DAwb1nJwamzPvHEszJ4FpF6SNNbUbOS8yI56qHzdV8eK0qEfOSiodkTdxg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/postcss": { + "version": "8.5.13", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.13.tgz", + "integrity": "sha512-qif0+jGGZoLWdHey3UFHHWP0H7Gbmsk8T5VEqyYFbWqPr1XqvLGBbk/sl8V5exGmcYJklJOhOQq1pV9IcsiFag==", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/postcss" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "nanoid": "^3.3.11", + "picocolors": "^1.1.1", + "source-map-js": "^1.2.1" + }, + "engines": { + "node": "^10 || ^12 || >=14" + } + }, + "node_modules/postcss-selector-parser": { + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-7.1.1.tgz", + "integrity": "sha512-orRsuYpJVw8LdAwqqLykBj9ecS5/cRHlI5+nvTo8LcCKmzDmqVORXtOIYEEQuL9D4BxtA1lm5isAqzQZCoQ6Eg==", + "license": "MIT", + "dependencies": { + "cssesc": "^3.0.0", + "util-deprecate": "^1.0.2" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/powershell-utils": { + "version": "0.1.0", + "resolved": 
"https://registry.npmjs.org/powershell-utils/-/powershell-utils-0.1.0.tgz", + "integrity": "sha512-dM0jVuXJPsDN6DvRpea484tCUaMiXWjuCn++HGTqUWzGDjv5tZkEZldAJ/UMlqRYGFrD/etByo4/xOuC/snX2A==", + "license": "MIT", + "engines": { + "node": ">=20" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/prelude-ls": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/prelude-ls/-/prelude-ls-1.2.1.tgz", + "integrity": "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/pretty-ms": { + "version": "9.3.0", + "resolved": "https://registry.npmjs.org/pretty-ms/-/pretty-ms-9.3.0.tgz", + "integrity": "sha512-gjVS5hOP+M3wMm5nmNOucbIrqudzs9v/57bWRHQWLYklXqoXKrVfYW2W9+glfGsqtPgpiz5WwyEEB+ksXIx3gQ==", + "license": "MIT", + "dependencies": { + "parse-ms": "^4.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/prompts": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz", + "integrity": "sha512-NxNv/kLguCA7p3jE8oL2aEBsrJWgAakBpgmgK6lpPWV+WuOmY6r2/zbAVnP+T8bQlA0nzHXSJSJW0Hq7ylaD2Q==", + "license": "MIT", + "dependencies": { + "kleur": "^3.0.3", + "sisteransi": "^1.0.5" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/prompts/node_modules/kleur": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz", + "integrity": "sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/prop-types": { + "version": "15.8.1", + "resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.8.1.tgz", + "integrity": "sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg==", + 
"dev": true, + "license": "MIT", + "dependencies": { + "loose-envify": "^1.4.0", + "object-assign": "^4.1.1", + "react-is": "^16.13.1" + } + }, + "node_modules/proxy-addr": { + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", + "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", + "license": "MIT", + "dependencies": { + "forwarded": "0.2.0", + "ipaddr.js": "1.9.1" + }, + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/punycode": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", + "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/qs": { + "version": "6.15.1", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.15.1.tgz", + "integrity": "sha512-6YHEFRL9mfgcAvql/XhwTvf5jKcOiiupt2FiJxHkiX1z4j7WL8J/jRHYLluORvc1XxB5rV20KoeK00gVJamspg==", + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/queue-microtask": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz", + "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT" + }, + "node_modules/radix-ui": { + "version": "1.4.3", + "resolved": "https://registry.npmjs.org/radix-ui/-/radix-ui-1.4.3.tgz", + "integrity": 
"sha512-aWizCQiyeAenIdUbqEpXgRA1ya65P13NKn/W8rWkcN0OPkRDxdBVLWnIEDsS2RpwCK2nobI7oMUSmexzTDyAmA==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.3", + "@radix-ui/react-accessible-icon": "1.1.7", + "@radix-ui/react-accordion": "1.2.12", + "@radix-ui/react-alert-dialog": "1.1.15", + "@radix-ui/react-arrow": "1.1.7", + "@radix-ui/react-aspect-ratio": "1.1.7", + "@radix-ui/react-avatar": "1.1.10", + "@radix-ui/react-checkbox": "1.3.3", + "@radix-ui/react-collapsible": "1.1.12", + "@radix-ui/react-collection": "1.1.7", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-context-menu": "2.2.16", + "@radix-ui/react-dialog": "1.1.15", + "@radix-ui/react-direction": "1.1.1", + "@radix-ui/react-dismissable-layer": "1.1.11", + "@radix-ui/react-dropdown-menu": "2.1.16", + "@radix-ui/react-focus-guards": "1.1.3", + "@radix-ui/react-focus-scope": "1.1.7", + "@radix-ui/react-form": "0.1.8", + "@radix-ui/react-hover-card": "1.1.15", + "@radix-ui/react-label": "2.1.7", + "@radix-ui/react-menu": "2.1.16", + "@radix-ui/react-menubar": "1.1.16", + "@radix-ui/react-navigation-menu": "1.2.14", + "@radix-ui/react-one-time-password-field": "0.1.8", + "@radix-ui/react-password-toggle-field": "0.1.3", + "@radix-ui/react-popover": "1.1.15", + "@radix-ui/react-popper": "1.2.8", + "@radix-ui/react-portal": "1.1.9", + "@radix-ui/react-presence": "1.1.5", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-progress": "1.1.7", + "@radix-ui/react-radio-group": "1.3.8", + "@radix-ui/react-roving-focus": "1.1.11", + "@radix-ui/react-scroll-area": "1.2.10", + "@radix-ui/react-select": "2.2.6", + "@radix-ui/react-separator": "1.1.7", + "@radix-ui/react-slider": "1.3.6", + "@radix-ui/react-slot": "1.2.3", + "@radix-ui/react-switch": "1.2.6", + "@radix-ui/react-tabs": "1.1.13", + "@radix-ui/react-toast": "1.2.15", + "@radix-ui/react-toggle": "1.1.10", + "@radix-ui/react-toggle-group": "1.1.11", + "@radix-ui/react-toolbar": 
"1.1.11", + "@radix-ui/react-tooltip": "1.2.8", + "@radix-ui/react-use-callback-ref": "1.1.1", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-effect-event": "0.0.2", + "@radix-ui/react-use-escape-keydown": "1.1.1", + "@radix-ui/react-use-is-hydrated": "0.1.0", + "@radix-ui/react-use-layout-effect": "1.1.1", + "@radix-ui/react-use-size": "1.1.1", + "@radix-ui/react-visually-hidden": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/range-parser": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", + "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/raw-body": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-3.0.2.tgz", + "integrity": "sha512-K5zQjDllxWkf7Z5xJdV0/B0WTNqx6vxG70zJE4N0kBs4LovmEYWJzQGxC9bS9RAKu3bgM40lrd5zoLJ12MQ5BA==", + "license": "MIT", + "dependencies": { + "bytes": "~3.1.2", + "http-errors": "~2.0.1", + "iconv-lite": "~0.7.0", + "unpipe": "~1.0.0" + }, + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/react": { + "version": "19.2.4", + "resolved": "https://registry.npmjs.org/react/-/react-19.2.4.tgz", + "integrity": "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ==", + "license": "MIT", + "peer": true, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/react-dom": { + "version": "19.2.4", + "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.4.tgz", + "integrity": 
"sha512-AXJdLo8kgMbimY95O2aKQqsz2iWi9jMgKJhRBAxECE4IFxfcazB2LmzloIoibJI3C12IlY20+KFaLv+71bUJeQ==", + "license": "MIT", + "peer": true, + "dependencies": { + "scheduler": "^0.27.0" + }, + "peerDependencies": { + "react": "^19.2.4" + } + }, + "node_modules/react-is": { + "version": "16.13.1", + "resolved": "https://registry.npmjs.org/react-is/-/react-is-16.13.1.tgz", + "integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/react-remove-scroll": { + "version": "2.7.2", + "resolved": "https://registry.npmjs.org/react-remove-scroll/-/react-remove-scroll-2.7.2.tgz", + "integrity": "sha512-Iqb9NjCCTt6Hf+vOdNIZGdTiH1QSqr27H/Ek9sv/a97gfueI/5h1s3yRi1nngzMUaOOToin5dI1dXKdXiF+u0Q==", + "license": "MIT", + "dependencies": { + "react-remove-scroll-bar": "^2.3.7", + "react-style-singleton": "^2.2.3", + "tslib": "^2.1.0", + "use-callback-ref": "^1.3.3", + "use-sidecar": "^1.1.3" + }, + "engines": { + "node": ">=10" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/react-remove-scroll-bar": { + "version": "2.3.8", + "resolved": "https://registry.npmjs.org/react-remove-scroll-bar/-/react-remove-scroll-bar-2.3.8.tgz", + "integrity": "sha512-9r+yi9+mgU33AKcj6IbT9oRCO78WriSj6t/cF8DWBZJ9aOGPOTEDvdUDz1FwKim7QXWwmHqtdHnRJfhAxEG46Q==", + "license": "MIT", + "dependencies": { + "react-style-singleton": "^2.2.2", + "tslib": "^2.0.0" + }, + "engines": { + "node": ">=10" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/react-style-singleton": { + "version": "2.2.3", + "resolved": 
"https://registry.npmjs.org/react-style-singleton/-/react-style-singleton-2.2.3.tgz", + "integrity": "sha512-b6jSvxvVnyptAiLjbkWLE/lOnR4lfTtDAl+eUC7RZy+QQWc6wRzIV2CE6xBuMmDxc2qIihtDCZD5NPOFl7fRBQ==", + "license": "MIT", + "dependencies": { + "get-nonce": "^1.0.0", + "tslib": "^2.0.0" + }, + "engines": { + "node": ">=10" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/recast": { + "version": "0.23.11", + "resolved": "https://registry.npmjs.org/recast/-/recast-0.23.11.tgz", + "integrity": "sha512-YTUo+Flmw4ZXiWfQKGcwwc11KnoRAYgzAE2E7mXKCjSviTKShtxBsN6YUUBB2gtaBzKzeKunxhUwNHQuRryhWA==", + "license": "MIT", + "dependencies": { + "ast-types": "^0.16.1", + "esprima": "~4.0.0", + "source-map": "~0.6.1", + "tiny-invariant": "^1.3.3", + "tslib": "^2.0.1" + }, + "engines": { + "node": ">= 4" + } + }, + "node_modules/reflect.getprototypeof": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/reflect.getprototypeof/-/reflect.getprototypeof-1.0.10.tgz", + "integrity": "sha512-00o4I+DVrefhv+nX0ulyi3biSHCPDe+yLv5o/p6d/UVlirijB8E16FtfwSAi4g3tcqrQ4lRAqQSoFEZJehYEcw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.9", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0", + "get-intrinsic": "^1.2.7", + "get-proto": "^1.0.1", + "which-builtin-type": "^1.2.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/regexp.prototype.flags": { + "version": "1.5.4", + "resolved": "https://registry.npmjs.org/regexp.prototype.flags/-/regexp.prototype.flags-1.5.4.tgz", + "integrity": "sha512-dYqgNSZbDwkaJ2ceRd9ojCGjBq+mOm9LmtXnAnEGyHhN/5R7iDW2TRw3h+o/jCFxus3P2LfWIIiwowAjANm7IA==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-errors": "^1.3.0", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "set-function-name": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/require-directory": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", + "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/require-from-string": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz", + "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/resolve": { + "version": "2.0.0-next.6", + "resolved": "https://registry.npmjs.org/resolve/-/resolve-2.0.0-next.6.tgz", + "integrity": "sha512-3JmVl5hMGtJ3kMmB3zi3DL25KfkCEyy3Tw7Gmw7z5w8M9WlwoPFnIvwChzu1+cF3iaK3sp18hhPz8ANeimdJfA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "is-core-module": "^2.16.1", + "node-exports-info": "^1.6.0", + "object-keys": "^1.1.1", + "path-parse": "^1.0.7", + "supports-preserve-symlinks-flag": "^1.0.0" + }, + "bin": { + "resolve": "bin/resolve" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/resolve-from": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz", + "integrity": "sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==", + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/resolve-pkg-maps": { + "version": "1.0.0", + "resolved": 
"https://registry.npmjs.org/resolve-pkg-maps/-/resolve-pkg-maps-1.0.0.tgz", + "integrity": "sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw==", + "dev": true, + "license": "MIT", + "funding": { + "url": "https://github.com/privatenumber/resolve-pkg-maps?sponsor=1" + } + }, + "node_modules/restore-cursor": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/restore-cursor/-/restore-cursor-5.1.0.tgz", + "integrity": "sha512-oMA2dcrw6u0YfxJQXm342bFKX/E4sG9rbTzO9ptUcR/e8A33cHuvStiYOwH7fszkZlZ1z/ta9AAoPk2F4qIOHA==", + "license": "MIT", + "dependencies": { + "onetime": "^7.0.0", + "signal-exit": "^4.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/rettime": { + "version": "0.11.10", + "resolved": "https://registry.npmjs.org/rettime/-/rettime-0.11.10.tgz", + "integrity": "sha512-zzgUfAF20wZtfDs72M15qiX6EutHxgEZ1PAEJUsWQsOi4aOdcK8BJ3/WSDHBhJ7P27fQjd6iOE1+CWIO8t3XOA==", + "license": "MIT" + }, + "node_modules/reusify": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.1.0.tgz", + "integrity": "sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw==", + "license": "MIT", + "engines": { + "iojs": ">=1.0.0", + "node": ">=0.10.0" + } + }, + "node_modules/router": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/router/-/router-2.2.0.tgz", + "integrity": "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.0", + "depd": "^2.0.0", + "is-promise": "^4.0.0", + "parseurl": "^1.3.3", + "path-to-regexp": "^8.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/router/node_modules/path-to-regexp": { + "version": "8.4.2", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-8.4.2.tgz", + "integrity": 
"sha512-qRcuIdP69NPm4qbACK+aDogI5CBDMi1jKe0ry5rSQJz8JVLsC7jV8XpiJjGRLLol3N+R5ihGYcrPLTno6pAdBA==", + "license": "MIT", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/run-applescript": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.1.0.tgz", + "integrity": "sha512-DPe5pVFaAsinSaV6QjQ6gdiedWDcRCbUuiQfQa2wmWV7+xC9bGulGI8+TdRmoFkAPaBXk8CrAbnlY2ISniJ47Q==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/run-parallel": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz", + "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "dependencies": { + "queue-microtask": "^1.2.2" + } + }, + "node_modules/safe-array-concat": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/safe-array-concat/-/safe-array-concat-1.1.4.tgz", + "integrity": "sha512-wtZlHyOje6OZTGqAoaDKxFkgRtkF9CnHAVnCHKfuj200wAgL+bSJhdsCD2l0Qx/2ekEXjPWcyKkfGb5CPboslg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.9", + "call-bound": "^1.0.4", + "get-intrinsic": "^1.3.0", + "has-symbols": "^1.1.0", + "isarray": "^2.0.5" + }, + "engines": { + "node": ">=0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/safe-push-apply": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/safe-push-apply/-/safe-push-apply-1.0.0.tgz", + "integrity": "sha512-iKE9w/Z7xCzUMIZqdBsp6pEQvwuEebH4vdpjcDWnyzaI6yl6O9FHvVpmGelvEHNsoY6wGblkxR6Zty/h00WiSA==", 
+ "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "isarray": "^2.0.5" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/safe-regex-test": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/safe-regex-test/-/safe-regex-test-1.1.0.tgz", + "integrity": "sha512-x/+Cz4YrimQxQccJf5mKEbIa1NzeCRNI5Ecl/ekmlYaampdNLPalVyIcCZNNH3MvmqBugV5TMYZXv0ljslUlaw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "is-regex": "^1.2.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "license": "MIT" + }, + "node_modules/scheduler": { + "version": "0.27.0", + "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.27.0.tgz", + "integrity": "sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5ti/Q==", + "license": "MIT" + }, + "node_modules/semver": { + "version": "6.3.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.1.tgz", + "integrity": "sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==", + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + } + }, + "node_modules/send": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/send/-/send-1.2.1.tgz", + "integrity": "sha512-1gnZf7DFcoIcajTjTwjwuDjzuz4PPcY2StKPlsGAQ1+YH20IRVrBaXSWmdjowTJ6u8Rc01PoYOGHXfP1mYcZNQ==", + "license": "MIT", + "dependencies": { + "debug": "^4.4.3", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "fresh": "^2.0.0", + "http-errors": "^2.0.1", + "mime-types": "^3.0.2", + 
"ms": "^2.1.3", + "on-finished": "^2.4.1", + "range-parser": "^1.2.1", + "statuses": "^2.0.2" + }, + "engines": { + "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/serve-static": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.1.tgz", + "integrity": "sha512-xRXBn0pPqQTVQiC8wyQrKs2MOlX24zQ0POGaj0kultvoOCstBQM5yvOhAVSUwOMjQtTvsPWoNCHfPGwaaQJhTw==", + "license": "MIT", + "dependencies": { + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "parseurl": "^1.3.3", + "send": "^1.2.0" + }, + "engines": { + "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/set-cookie-parser": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/set-cookie-parser/-/set-cookie-parser-3.1.0.tgz", + "integrity": "sha512-kjnC1DXBHcxaOaOXBHBeRtltsDG2nUiUni+jP92M9gYdW12rsmx92UsfpH7o5tDRs7I1ZZPSQJQGv3UaRfCiuw==", + "license": "MIT" + }, + "node_modules/set-function-length": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/set-function-length/-/set-function-length-1.2.2.tgz", + "integrity": "sha512-pgRc4hJ4/sNjWCSS9AmnS40x3bNMDTknHgL5UaMBTMyJnU90EgWh1Rz+MC9eFu4BuN/UwZjKQuY/1v3rM7HMfg==", + "dev": true, + "license": "MIT", + "dependencies": { + "define-data-property": "^1.1.4", + "es-errors": "^1.3.0", + "function-bind": "^1.1.2", + "get-intrinsic": "^1.2.4", + "gopd": "^1.0.1", + "has-property-descriptors": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/set-function-name": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/set-function-name/-/set-function-name-2.0.2.tgz", + "integrity": "sha512-7PGFlmtwsEADb0WYyvCMa1t+yke6daIG4Wirafur5kcf+MhUnPms1UeR0CKQdTZD81yESwMHbtn+TR+dMviakQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "define-data-property": "^1.1.4", + "es-errors": "^1.3.0", + 
"functions-have-names": "^1.2.3", + "has-property-descriptors": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/set-proto": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/set-proto/-/set-proto-1.0.0.tgz", + "integrity": "sha512-RJRdvCo6IAnPdsvP/7m6bsQqNnn1FCBX5ZNtFL98MmFF/4xAIJTIg1YbHW5DC2W5SKZanrC6i4HsJqlajw/dZw==", + "dev": true, + "license": "MIT", + "dependencies": { + "dunder-proto": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/setprototypeof": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", + "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==", + "license": "ISC" + }, + "node_modules/shadcn": { + "version": "4.6.0", + "resolved": "https://registry.npmjs.org/shadcn/-/shadcn-4.6.0.tgz", + "integrity": "sha512-4XeMwFf8ZZxmqQQp+U+Nsq2M+cY4Da8Joo/EaMdHVc4uVuWSTJoeidlZ3gDjyxXCjYB1FLcxYwR4lYQAH8emOg==", + "license": "MIT", + "dependencies": { + "@babel/core": "^7.28.0", + "@babel/parser": "^7.28.0", + "@babel/plugin-transform-typescript": "^7.28.0", + "@babel/preset-typescript": "^7.27.1", + "@dotenvx/dotenvx": "^1.48.4", + "@modelcontextprotocol/sdk": "^1.26.0", + "@types/validate-npm-package-name": "^4.0.2", + "browserslist": "^4.26.2", + "commander": "^14.0.0", + "cosmiconfig": "^9.0.0", + "dedent": "^1.6.0", + "deepmerge": "^4.3.1", + "diff": "^8.0.2", + "execa": "^9.6.0", + "fast-glob": "^3.3.3", + "fs-extra": "^11.3.1", + "fuzzysort": "^3.1.0", + "https-proxy-agent": "^7.0.6", + "kleur": "^4.1.5", + "msw": "^2.10.4", + "node-fetch": "^3.3.2", + "open": "^11.0.0", + "ora": "^8.2.0", + "postcss": "^8.5.6", + "postcss-selector-parser": "^7.1.0", + "prompts": "^2.4.2", + "recast": "^0.23.11", + "stringify-object": "^5.0.0", + "tailwind-merge": "^3.0.1", + "ts-morph": "^26.0.0", + "tsconfig-paths": "^4.2.0", + 
"validate-npm-package-name": "^7.0.1", + "zod": "^3.24.1", + "zod-to-json-schema": "^3.24.6" + }, + "bin": { + "shadcn": "dist/index.js" + } + }, + "node_modules/shadcn/node_modules/fast-glob": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.3.3.tgz", + "integrity": "sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg==", + "license": "MIT", + "dependencies": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.2", + "merge2": "^1.3.0", + "micromatch": "^4.0.8" + }, + "engines": { + "node": ">=8.6.0" + } + }, + "node_modules/shadcn/node_modules/glob-parent": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/shadcn/node_modules/tsconfig-paths": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/tsconfig-paths/-/tsconfig-paths-4.2.0.tgz", + "integrity": "sha512-NoZ4roiN7LnbKn9QqE1amc9DJfzvZXxF4xDavcOWt1BPkdx+m+0gJuPM+S0vCe7zTJMYUP0R8pO2XMr+Y8oLIg==", + "license": "MIT", + "dependencies": { + "json5": "^2.2.2", + "minimist": "^1.2.6", + "strip-bom": "^3.0.0" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/shadcn/node_modules/zod": { + "version": "3.25.76", + "resolved": "https://registry.npmjs.org/zod/-/zod-3.25.76.tgz", + "integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/colinhacks" + } + }, + "node_modules/sharp": { + "version": "0.34.5", + "resolved": "https://registry.npmjs.org/sharp/-/sharp-0.34.5.tgz", + "integrity": 
"sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==", + "hasInstallScript": true, + "license": "Apache-2.0", + "optional": true, + "dependencies": { + "@img/colour": "^1.0.0", + "detect-libc": "^2.1.2", + "semver": "^7.7.3" + }, + "engines": { + "node": "^18.17.0 || ^20.3.0 || >=21.0.0" + }, + "funding": { + "url": "https://opencollective.com/libvips" + }, + "optionalDependencies": { + "@img/sharp-darwin-arm64": "0.34.5", + "@img/sharp-darwin-x64": "0.34.5", + "@img/sharp-libvips-darwin-arm64": "1.2.4", + "@img/sharp-libvips-darwin-x64": "1.2.4", + "@img/sharp-libvips-linux-arm": "1.2.4", + "@img/sharp-libvips-linux-arm64": "1.2.4", + "@img/sharp-libvips-linux-ppc64": "1.2.4", + "@img/sharp-libvips-linux-riscv64": "1.2.4", + "@img/sharp-libvips-linux-s390x": "1.2.4", + "@img/sharp-libvips-linux-x64": "1.2.4", + "@img/sharp-libvips-linuxmusl-arm64": "1.2.4", + "@img/sharp-libvips-linuxmusl-x64": "1.2.4", + "@img/sharp-linux-arm": "0.34.5", + "@img/sharp-linux-arm64": "0.34.5", + "@img/sharp-linux-ppc64": "0.34.5", + "@img/sharp-linux-riscv64": "0.34.5", + "@img/sharp-linux-s390x": "0.34.5", + "@img/sharp-linux-x64": "0.34.5", + "@img/sharp-linuxmusl-arm64": "0.34.5", + "@img/sharp-linuxmusl-x64": "0.34.5", + "@img/sharp-wasm32": "0.34.5", + "@img/sharp-win32-arm64": "0.34.5", + "@img/sharp-win32-ia32": "0.34.5", + "@img/sharp-win32-x64": "0.34.5" + } + }, + "node_modules/sharp/node_modules/semver": { + "version": "7.7.4", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.4.tgz", + "integrity": "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==", + "license": "ISC", + "optional": true, + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/shebang-command": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", + "integrity": 
"sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", + "license": "MIT", + "dependencies": { + "shebang-regex": "^3.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/shebang-regex": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", + "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/side-channel": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", + "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3", + "side-channel-list": "^1.0.0", + "side-channel-map": "^1.0.1", + "side-channel-weakmap": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-list": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.1.tgz", + "integrity": "sha512-mjn/0bi/oUURjc5Xl7IaWi/OJJJumuoJFQJfDDyO46+hBWsfaVM65TBHq2eoZBhzl9EchxOijpkbRC8SVBQU0w==", + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.4" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-map": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", + "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3" + }, + "engines": { + 
"node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-weakmap": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", + "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3", + "side-channel-map": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/signal-exit": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-4.1.0.tgz", + "integrity": "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==", + "license": "ISC", + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/sisteransi": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz", + "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==", + "license": "MIT" + }, + "node_modules/sonner": { + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/sonner/-/sonner-2.0.7.tgz", + "integrity": "sha512-W6ZN4p58k8aDKA4XPcx2hpIQXBRAgyiWVkYhT7CvK6D3iAu7xjvVyhQHg2/iaKJZ1XVJ4r7XuwGL+WGEK37i9w==", + "license": "MIT", + "peerDependencies": { + "react": "^18.0.0 || ^19.0.0 || ^19.0.0-rc", + "react-dom": "^18.0.0 || ^19.0.0 || ^19.0.0-rc" + } + }, + "node_modules/source-map": { + "version": "0.6.1", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", + "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.10.0" 
+ } + }, + "node_modules/source-map-js": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz", + "integrity": "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==", + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/stable-hash": { + "version": "0.0.5", + "resolved": "https://registry.npmjs.org/stable-hash/-/stable-hash-0.0.5.tgz", + "integrity": "sha512-+L3ccpzibovGXFK+Ap/f8LOS0ahMrHTf3xu7mMLSpEGU0EO9ucaysSylKo9eRDFNhWve/y275iPmIZ4z39a9iA==", + "dev": true, + "license": "MIT" + }, + "node_modules/statuses": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.2.tgz", + "integrity": "sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/stdin-discarder": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/stdin-discarder/-/stdin-discarder-0.2.2.tgz", + "integrity": "sha512-UhDfHmA92YAlNnCfhmq0VeNL5bDbiZGg7sZ2IvPsXubGkiNa9EC+tUTsjBRsYUAz87btI6/1wf4XoVvQ3uRnmQ==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/stop-iteration-iterator": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/stop-iteration-iterator/-/stop-iteration-iterator-1.1.0.tgz", + "integrity": "sha512-eLoXW/DHyl62zxY4SCaIgnRhuMr6ri4juEYARS8E6sCEqzKpOiE521Ucofdx+KnDZl5xmvGYaaKCk5FEOxJCoQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "internal-slot": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/strict-event-emitter": { + "version": "0.5.1", + "resolved": "https://registry.npmjs.org/strict-event-emitter/-/strict-event-emitter-0.5.1.tgz", + "integrity": 
"sha512-vMgjE/GGEPEFnhFub6pa4FmJBRBVOLpIII2hvCZ8Kzb7K0hlHo7mQv6xYrBvCL2LtAIBwFUK8wvuJgTVSQ5MFQ==", + "license": "MIT" + }, + "node_modules/string-width": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-7.2.0.tgz", + "integrity": "sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ==", + "license": "MIT", + "dependencies": { + "emoji-regex": "^10.3.0", + "get-east-asian-width": "^1.0.0", + "strip-ansi": "^7.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/string-width/node_modules/emoji-regex": { + "version": "10.6.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-10.6.0.tgz", + "integrity": "sha512-toUI84YS5YmxW219erniWD0CIVOo46xGKColeNQRgOzDorgBi1v4D71/OFzgD9GO2UGKIv1C3Sp8DAn0+j5w7A==", + "license": "MIT" + }, + "node_modules/string.prototype.includes": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/string.prototype.includes/-/string.prototype.includes-2.0.1.tgz", + "integrity": "sha512-o7+c9bW6zpAdJHTtujeePODAhkuicdAryFsfVKwA+wGw89wJ4GTY484WTucM9hLtDEOpOvI+aHnzqnC5lHp4Rg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.3" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/string.prototype.matchall": { + "version": "4.0.12", + "resolved": "https://registry.npmjs.org/string.prototype.matchall/-/string.prototype.matchall-4.0.12.tgz", + "integrity": "sha512-6CC9uyBL+/48dYizRf7H7VAYCMCNTBeM78x/VTUe9bFEaxBepPJDa1Ow99LqI/1yF7kuy7Q3cQsYMrcjGUcskA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.6", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0", + "get-intrinsic": "^1.2.6", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + 
"internal-slot": "^1.1.0", + "regexp.prototype.flags": "^1.5.3", + "set-function-name": "^2.0.2", + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/string.prototype.repeat": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/string.prototype.repeat/-/string.prototype.repeat-1.0.0.tgz", + "integrity": "sha512-0u/TldDbKD8bFCQ/4f5+mNRrXwZ8hg2w7ZR8wa16e8z9XpePWl3eGEcUD0OXpEH/VJH/2G3gjUtR3ZOiBe2S/w==", + "dev": true, + "license": "MIT", + "dependencies": { + "define-properties": "^1.1.3", + "es-abstract": "^1.17.5" + } + }, + "node_modules/string.prototype.trim": { + "version": "1.2.10", + "resolved": "https://registry.npmjs.org/string.prototype.trim/-/string.prototype.trim-1.2.10.tgz", + "integrity": "sha512-Rs66F0P/1kedk5lyYyH9uBzuiI/kNRmwJAR9quK6VOtIpZ2G+hMZd+HQbbv25MgCA6gEffoMZYxlTod4WcdrKA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.2", + "define-data-property": "^1.1.4", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-object-atoms": "^1.0.0", + "has-property-descriptors": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/string.prototype.trimend": { + "version": "1.0.9", + "resolved": "https://registry.npmjs.org/string.prototype.trimend/-/string.prototype.trimend-1.0.9.tgz", + "integrity": "sha512-G7Ok5C6E/j4SGfyLCloXTrngQIQU3PWtXGst3yM7Bea9FRURf1S42ZHlZZtsNque2FN2PoUhfZXYLNWwEr4dLQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.2", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/string.prototype.trimstart": { + "version": "1.0.8", + "resolved": 
"https://registry.npmjs.org/string.prototype.trimstart/-/string.prototype.trimstart-1.0.8.tgz", + "integrity": "sha512-UXSH262CSZY1tfu3G3Secr6uGLCFVPMhIqHjlgCUtCCcgihYc/xKs9djMTMUOb2j1mVSeU8EU6NWc/iQKU6Gfg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/stringify-object": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/stringify-object/-/stringify-object-5.0.0.tgz", + "integrity": "sha512-zaJYxz2FtcMb4f+g60KsRNFOpVMUyuJgA51Zi5Z1DOTC3S59+OQiVOzE9GZt0x72uBGWKsQIuBKeF9iusmKFsg==", + "license": "BSD-2-Clause", + "dependencies": { + "get-own-enumerable-keys": "^1.0.0", + "is-obj": "^3.0.0", + "is-regexp": "^3.1.0" + }, + "engines": { + "node": ">=14.16" + }, + "funding": { + "url": "https://github.com/yeoman/stringify-object?sponsor=1" + } + }, + "node_modules/strip-ansi": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-7.2.0.tgz", + "integrity": "sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w==", + "license": "MIT", + "dependencies": { + "ansi-regex": "^6.2.2" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/strip-ansi?sponsor=1" + } + }, + "node_modules/strip-bom": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/strip-bom/-/strip-bom-3.0.0.tgz", + "integrity": "sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA==", + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/strip-final-newline": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-4.0.0.tgz", + "integrity": 
"sha512-aulFJcD6YK8V1G7iRB5tigAP4TsHBZZrOV8pjV++zdUwmeV8uzbY7yn6h9MswN62adStNZFuCIx4haBnRuMDaw==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/strip-json-comments": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz", + "integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/styled-jsx": { + "version": "5.1.6", + "resolved": "https://registry.npmjs.org/styled-jsx/-/styled-jsx-5.1.6.tgz", + "integrity": "sha512-qSVyDTeMotdvQYoHWLNGwRFJHC+i+ZvdBRYosOFgC+Wg1vx4frN2/RG/NA7SYqqvKNLf39P2LSRA2pu6n0XYZA==", + "license": "MIT", + "dependencies": { + "client-only": "0.0.1" + }, + "engines": { + "node": ">= 12.0.0" + }, + "peerDependencies": { + "react": ">= 16.8.0 || 17.x.x || ^18.0.0-0 || ^19.0.0-0" + }, + "peerDependenciesMeta": { + "@babel/core": { + "optional": true + }, + "babel-plugin-macros": { + "optional": true + } + } + }, + "node_modules/supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/supports-preserve-symlinks-flag": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz", + "integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + 
"funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/tagged-tag": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/tagged-tag/-/tagged-tag-1.0.0.tgz", + "integrity": "sha512-yEFYrVhod+hdNyx7g5Bnkkb0G6si8HJurOoOEgC8B/O0uXLHlaey/65KRv6cuWBNhBgHKAROVpc7QyYqE5gFng==", + "license": "MIT", + "engines": { + "node": ">=20" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/tailwind-merge": { + "version": "3.5.0", + "resolved": "https://registry.npmjs.org/tailwind-merge/-/tailwind-merge-3.5.0.tgz", + "integrity": "sha512-I8K9wewnVDkL1NTGoqWmVEIlUcB9gFriAEkXkfCjX5ib8ezGxtR3xD7iZIxrfArjEsH7F1CHD4RFUtxefdqV/A==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/dcastil" + } + }, + "node_modules/tailwindcss": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.2.4.tgz", + "integrity": "sha512-HhKppgO81FQof5m6TEnuBWCZGgfRAWbaeOaGT00KOy/Pf/j6oUihdvBpA7ltCeAvZpFhW3j0PTclkxsd4IXYDA==", + "dev": true, + "license": "MIT" + }, + "node_modules/tapable": { + "version": "2.3.3", + "resolved": "https://registry.npmjs.org/tapable/-/tapable-2.3.3.tgz", + "integrity": "sha512-uxc/zpqFg6x7C8vOE7lh6Lbda8eEL9zmVm/PLeTPBRhh1xCgdWaQ+J1CUieGpIfm2HdtsUpRv+HshiasBMcc6A==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/webpack" + } + }, + "node_modules/tiny-invariant": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/tiny-invariant/-/tiny-invariant-1.3.3.tgz", + "integrity": "sha512-+FbBPE1o9QAYvviau/qC5SE3caw21q3xkvWKBtja5vgqOWIHHJ3ioaq1VPfn/Szqctz2bU/oYeKd9/z5BL+PVg==", + "license": "MIT" + }, + "node_modules/tinyglobby": { + "version": "0.2.16", + "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.16.tgz", + "integrity": 
"sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg==", + "dev": true, + "license": "MIT", + "dependencies": { + "fdir": "^6.5.0", + "picomatch": "^4.0.4" + }, + "engines": { + "node": ">=12.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/SuperchupuDev" + } + }, + "node_modules/tinyglobby/node_modules/fdir": { + "version": "6.5.0", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz", + "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12.0.0" + }, + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } + } + }, + "node_modules/tinyglobby/node_modules/picomatch": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz", + "integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==", + "dev": true, + "license": "MIT", + "peer": true, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/tldts": { + "version": "7.0.30", + "resolved": "https://registry.npmjs.org/tldts/-/tldts-7.0.30.tgz", + "integrity": "sha512-ELrFxuqsDdHUwoh0XxDbxuLD3Wnz49Z57IFvTtvWy1hJdcMZjXLIuonjilCiWHlT2GbE4Wlv1wKVTzDFnXH1aw==", + "license": "MIT", + "dependencies": { + "tldts-core": "^7.0.30" + }, + "bin": { + "tldts": "bin/cli.js" + } + }, + "node_modules/tldts-core": { + "version": "7.0.30", + "resolved": "https://registry.npmjs.org/tldts-core/-/tldts-core-7.0.30.tgz", + "integrity": "sha512-uiHN8PIB1VmWyS98eZYja4xzlYqeFZVjb4OuYlJQnZAuJhMw4PbKQOKgHKhBdJR3FE/t5mUQ1Kd80++B+qhD1Q==", + "license": "MIT" + }, + "node_modules/to-regex-range": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": 
"sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", + "license": "MIT", + "dependencies": { + "is-number": "^7.0.0" + }, + "engines": { + "node": ">=8.0" + } + }, + "node_modules/toidentifier": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", + "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", + "license": "MIT", + "engines": { + "node": ">=0.6" + } + }, + "node_modules/tough-cookie": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-6.0.1.tgz", + "integrity": "sha512-LktZQb3IeoUWB9lqR5EWTHgW/VTITCXg4D21M+lvybRVdylLrRMnqaIONLVb5mav8vM19m44HIcGq4qASeu2Qw==", + "license": "BSD-3-Clause", + "dependencies": { + "tldts": "^7.0.5" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/ts-api-utils": { + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.5.0.tgz", + "integrity": "sha512-OJ/ibxhPlqrMM0UiNHJ/0CKQkoKF243/AEmplt3qpRgkW8VG7IfOS41h7V8TjITqdByHzrjcS/2si+y4lIh8NA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18.12" + }, + "peerDependencies": { + "typescript": ">=4.8.4" + } + }, + "node_modules/ts-morph": { + "version": "26.0.0", + "resolved": "https://registry.npmjs.org/ts-morph/-/ts-morph-26.0.0.tgz", + "integrity": "sha512-ztMO++owQnz8c/gIENcM9XfCEzgoGphTv+nKpYNM1bgsdOVC/jRZuEBf6N+mLLDNg68Kl+GgUZfOySaRiG1/Ug==", + "license": "MIT", + "dependencies": { + "@ts-morph/common": "~0.27.0", + "code-block-writer": "^13.0.3" + } + }, + "node_modules/tsconfig-paths": { + "version": "3.15.0", + "resolved": "https://registry.npmjs.org/tsconfig-paths/-/tsconfig-paths-3.15.0.tgz", + "integrity": "sha512-2Ac2RgzDe/cn48GvOe3M+o82pEFewD3UPbyoUHHdKasHwJKjds4fLXWf/Ux5kATBKN20oaFGu+jbElp1pos0mg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@types/json5": "^0.0.29", + "json5": "^1.0.2", + 
"minimist": "^1.2.6", + "strip-bom": "^3.0.0" + } + }, + "node_modules/tsconfig-paths/node_modules/json5": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", + "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", + "dev": true, + "license": "MIT", + "dependencies": { + "minimist": "^1.2.0" + }, + "bin": { + "json5": "lib/cli.js" + } + }, + "node_modules/tslib": { + "version": "2.8.1", + "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", + "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", + "license": "0BSD" + }, + "node_modules/tw-animate-css": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/tw-animate-css/-/tw-animate-css-1.4.0.tgz", + "integrity": "sha512-7bziOlRqH0hJx80h/3mbicLW7o8qLsH5+RaLR2t+OHM3D0JlWGODQKQ4cxbK7WlvmUxpcj6Kgu6EKqjrGFe3QQ==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/Wombosvideo" + } + }, + "node_modules/type-check": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", + "integrity": "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/type-fest": { + "version": "5.6.0", + "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-5.6.0.tgz", + "integrity": "sha512-8ZiHFm91orbSAe2PSAiSVBVko18pbhbiB3U9GglSzF/zCGkR+rxpHx6sEMCUm4kxY4LjDIUGgCfUMtwfZfjfUA==", + "license": "(MIT OR CC0-1.0)", + "dependencies": { + "tagged-tag": "^1.0.0" + }, + "engines": { + "node": ">=20" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/type-is": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-2.0.1.tgz", + "integrity": 
"sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw==", + "license": "MIT", + "dependencies": { + "content-type": "^1.0.5", + "media-typer": "^1.1.0", + "mime-types": "^3.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/typed-array-buffer": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/typed-array-buffer/-/typed-array-buffer-1.0.3.tgz", + "integrity": "sha512-nAYYwfY3qnzX30IkA6AQZjVbtK6duGontcQm1WSG1MD94YLqK0515GNApXkoxKOWMusVssAHWLh9SeaoefYFGw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "is-typed-array": "^1.1.14" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/typed-array-byte-length": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/typed-array-byte-length/-/typed-array-byte-length-1.0.3.tgz", + "integrity": "sha512-BaXgOuIxz8n8pIq3e7Atg/7s+DpiYrxn4vdot3w9KbnBhcRQq6o3xemQdIfynqSeXeDrF32x+WvfzmOjPiY9lg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.8", + "for-each": "^0.3.3", + "gopd": "^1.2.0", + "has-proto": "^1.2.0", + "is-typed-array": "^1.1.14" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/typed-array-byte-offset": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/typed-array-byte-offset/-/typed-array-byte-offset-1.0.4.tgz", + "integrity": "sha512-bTlAFB/FBYMcuX81gbL4OcpH5PmlFHqlCCpAl8AlEzMz5k53oNDvN8p1PNOWLEmI2x4orp3raOFB51tv9X+MFQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "available-typed-arrays": "^1.0.7", + "call-bind": "^1.0.8", + "for-each": "^0.3.3", + "gopd": "^1.2.0", + "has-proto": "^1.2.0", + "is-typed-array": "^1.1.15", + "reflect.getprototypeof": "^1.0.9" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/typed-array-length": { + 
"version": "1.0.7", + "resolved": "https://registry.npmjs.org/typed-array-length/-/typed-array-length-1.0.7.tgz", + "integrity": "sha512-3KS2b+kL7fsuk/eJZ7EQdnEmQoaho/r6KUef7hxvltNA5DR8NAUM+8wJMbJyZ4G9/7i3v5zPBIMN5aybAh2/Jg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind": "^1.0.7", + "for-each": "^0.3.3", + "gopd": "^1.0.1", + "is-typed-array": "^1.1.13", + "possible-typed-array-names": "^1.0.0", + "reflect.getprototypeof": "^1.0.6" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/typescript": { + "version": "5.9.3", + "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz", + "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==", + "devOptional": true, + "license": "Apache-2.0", + "peer": true, + "bin": { + "tsc": "bin/tsc", + "tsserver": "bin/tsserver" + }, + "engines": { + "node": ">=14.17" + } + }, + "node_modules/typescript-eslint": { + "version": "8.59.1", + "resolved": "https://registry.npmjs.org/typescript-eslint/-/typescript-eslint-8.59.1.tgz", + "integrity": "sha512-xqDcFVBmlrltH64lklOVp1wYxgJr6LVdg3NamBgH2OOQDLFdTKfIZXF5PfghrnXQKXZGTQs8tr1vL7fJvq8CTQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/eslint-plugin": "8.59.1", + "@typescript-eslint/parser": "8.59.1", + "@typescript-eslint/typescript-estree": "8.59.1", + "@typescript-eslint/utils": "8.59.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.1.0" + } + }, + "node_modules/unbox-primitive": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/unbox-primitive/-/unbox-primitive-1.1.0.tgz", + "integrity": 
"sha512-nWJ91DjeOkej/TA8pXQ3myruKpKEYgqvpw9lz4OPHj/NWFNluYrjbz9j01CJ8yKQd2g4jFoOkINCTW2I5LEEyw==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.3", + "has-bigints": "^1.0.2", + "has-symbols": "^1.1.0", + "which-boxed-primitive": "^1.1.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/undici-types": { + "version": "6.21.0", + "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz", + "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==", + "license": "MIT" + }, + "node_modules/unicorn-magic": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/unicorn-magic/-/unicorn-magic-0.3.0.tgz", + "integrity": "sha512-+QBBXBCvifc56fsbuxZQ6Sic3wqqc3WWaqxs58gvJrcOuN83HGTCwz3oS5phzU9LthRNE9VrJCFCLUgHeeFnfA==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/universalify": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz", + "integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==", + "license": "MIT", + "engines": { + "node": ">= 10.0.0" + } + }, + "node_modules/unpipe": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", + "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/unrs-resolver": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/unrs-resolver/-/unrs-resolver-1.11.1.tgz", + "integrity": "sha512-bSjt9pjaEBnNiGgc9rUiHGKv5l4/TGzDmYw3RhnkJGtLhbnnA/5qJj7x3dNDCRx/PJxu774LlH8lCOlB4hEfKg==", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "dependencies": { 
+ "napi-postinstall": "^0.3.0" + }, + "funding": { + "url": "https://opencollective.com/unrs-resolver" + }, + "optionalDependencies": { + "@unrs/resolver-binding-android-arm-eabi": "1.11.1", + "@unrs/resolver-binding-android-arm64": "1.11.1", + "@unrs/resolver-binding-darwin-arm64": "1.11.1", + "@unrs/resolver-binding-darwin-x64": "1.11.1", + "@unrs/resolver-binding-freebsd-x64": "1.11.1", + "@unrs/resolver-binding-linux-arm-gnueabihf": "1.11.1", + "@unrs/resolver-binding-linux-arm-musleabihf": "1.11.1", + "@unrs/resolver-binding-linux-arm64-gnu": "1.11.1", + "@unrs/resolver-binding-linux-arm64-musl": "1.11.1", + "@unrs/resolver-binding-linux-ppc64-gnu": "1.11.1", + "@unrs/resolver-binding-linux-riscv64-gnu": "1.11.1", + "@unrs/resolver-binding-linux-riscv64-musl": "1.11.1", + "@unrs/resolver-binding-linux-s390x-gnu": "1.11.1", + "@unrs/resolver-binding-linux-x64-gnu": "1.11.1", + "@unrs/resolver-binding-linux-x64-musl": "1.11.1", + "@unrs/resolver-binding-wasm32-wasi": "1.11.1", + "@unrs/resolver-binding-win32-arm64-msvc": "1.11.1", + "@unrs/resolver-binding-win32-ia32-msvc": "1.11.1", + "@unrs/resolver-binding-win32-x64-msvc": "1.11.1" + } + }, + "node_modules/until-async": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/until-async/-/until-async-3.0.2.tgz", + "integrity": "sha512-IiSk4HlzAMqTUseHHe3VhIGyuFmN90zMTpD3Z3y8jeQbzLIq500MVM7Jq2vUAnTKAFPJrqwkzr6PoTcPhGcOiw==", + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/kettanaito" + } + }, + "node_modules/update-browserslist-db": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.2.3.tgz", + "integrity": "sha512-Js0m9cx+qOgDxo0eMiFGEueWztz+d4+M3rGlmKPT+T4IS/jP4ylw3Nwpu6cpTTP8R1MAC1kF4VbdLt3ARf209w==", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/browserslist" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/browserslist" + }, + { + "type": 
"github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "escalade": "^3.2.0", + "picocolors": "^1.1.1" + }, + "bin": { + "update-browserslist-db": "cli.js" + }, + "peerDependencies": { + "browserslist": ">= 4.21.0" + } + }, + "node_modules/uri-js": { + "version": "4.4.1", + "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", + "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "punycode": "^2.1.0" + } + }, + "node_modules/use-callback-ref": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/use-callback-ref/-/use-callback-ref-1.3.3.tgz", + "integrity": "sha512-jQL3lRnocaFtu3V00JToYz/4QkNWswxijDaCVNZRiRTO3HQDLsdu1ZtmIUvV4yPp+rvWm5j0y0TG/S61cuijTg==", + "license": "MIT", + "dependencies": { + "tslib": "^2.0.0" + }, + "engines": { + "node": ">=10" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/use-sidecar": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/use-sidecar/-/use-sidecar-1.1.3.tgz", + "integrity": "sha512-Fedw0aZvkhynoPYlA5WXrMCAMm+nSWdZt6lzJQ7Ok8S6Q+VsHmHpRWndVRJ8Be0ZbkfPc5LRYH+5XrzXcEeLRQ==", + "license": "MIT", + "dependencies": { + "detect-node-es": "^1.1.0", + "tslib": "^2.0.0" + }, + "engines": { + "node": ">=10" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, + "node_modules/use-sync-external-store": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/use-sync-external-store/-/use-sync-external-store-1.6.0.tgz", + "integrity": 
"sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w==", + "license": "MIT", + "peerDependencies": { + "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" + } + }, + "node_modules/util-deprecate": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", + "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==", + "license": "MIT" + }, + "node_modules/validate-npm-package-name": { + "version": "7.0.2", + "resolved": "https://registry.npmjs.org/validate-npm-package-name/-/validate-npm-package-name-7.0.2.tgz", + "integrity": "sha512-hVDIBwsRruT73PbK7uP5ebUt+ezEtCmzZz3F59BSr2F6OVFnJ/6h8liuvdLrQ88Xmnk6/+xGGuq+pG9WwTuy3A==", + "license": "ISC", + "engines": { + "node": "^20.17.0 || >=22.9.0" + } + }, + "node_modules/vary": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", + "integrity": "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==", + "license": "MIT", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/web-streams-polyfill": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz", + "integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==", + "license": "MIT", + "engines": { + "node": ">= 8" + } + }, + "node_modules/which": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "license": "ISC", + "dependencies": { + "isexe": "^2.0.0" + }, + "bin": { + "node-which": "bin/node-which" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/which-boxed-primitive": { + "version": "1.1.1", + "resolved": 
"https://registry.npmjs.org/which-boxed-primitive/-/which-boxed-primitive-1.1.1.tgz", + "integrity": "sha512-TbX3mj8n0odCBFVlY8AxkqcHASw3L60jIuF8jFP78az3C2YhmGvqbHBpAjTRH2/xqYunrJ9g1jSyjCjpoWzIAA==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-bigint": "^1.1.0", + "is-boolean-object": "^1.2.1", + "is-number-object": "^1.1.1", + "is-string": "^1.1.1", + "is-symbol": "^1.1.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/which-builtin-type": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/which-builtin-type/-/which-builtin-type-1.2.1.tgz", + "integrity": "sha512-6iBczoX+kDQ7a3+YJBnh3T+KZRxM/iYNPXicqk66/Qfm1b93iu+yOImkg0zHbj5LNOcNv1TEADiZ0xa34B4q6Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "function.prototype.name": "^1.1.6", + "has-tostringtag": "^1.0.2", + "is-async-function": "^2.0.0", + "is-date-object": "^1.1.0", + "is-finalizationregistry": "^1.1.0", + "is-generator-function": "^1.0.10", + "is-regex": "^1.2.1", + "is-weakref": "^1.0.2", + "isarray": "^2.0.5", + "which-boxed-primitive": "^1.1.0", + "which-collection": "^1.0.2", + "which-typed-array": "^1.1.16" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/which-collection": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/which-collection/-/which-collection-1.0.2.tgz", + "integrity": "sha512-K4jVyjnBdgvc86Y6BkaLZEN933SwYOuBFkdmBu9ZfkcAbdVbpITnDmjvZ/aQjRXQrv5EPkTnD1s39GiiqbngCw==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-map": "^2.0.3", + "is-set": "^2.0.3", + "is-weakmap": "^2.0.2", + "is-weakset": "^2.0.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/which-typed-array": { + "version": "1.1.20", + "resolved": 
"https://registry.npmjs.org/which-typed-array/-/which-typed-array-1.1.20.tgz", + "integrity": "sha512-LYfpUkmqwl0h9A2HL09Mms427Q1RZWuOHsukfVcKRq9q95iQxdw0ix1JQrqbcDR9PH1QDwf5Qo8OZb5lksZ8Xg==", + "dev": true, + "license": "MIT", + "dependencies": { + "available-typed-arrays": "^1.0.7", + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "for-each": "^0.3.5", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-tostringtag": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/word-wrap": { + "version": "1.2.5", + "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz", + "integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/wrap-ansi": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/wrap-ansi/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "license": "MIT" + }, + 
"node_modules/wrap-ansi/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", + "license": "ISC" + }, + "node_modules/wsl-utils": { + "version": "0.3.1", + "resolved": "https://registry.npmjs.org/wsl-utils/-/wsl-utils-0.3.1.tgz", + "integrity": "sha512-g/eziiSUNBSsdDJtCLB8bdYEUMj4jR7AGeUo96p/3dTafgjHhpF4RiCFPiRILwjQoDXx5MqkBr4fwWtR3Ky4Wg==", + "license": "MIT", + "dependencies": { + "is-wsl": "^3.1.0", + "powershell-utils": "^0.1.0" + }, + "engines": { + "node": ">=20" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/y18n": { + "version": "5.0.8", + "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz", + "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==", + "license": "ISC", + "engines": { + "node": ">=10" + } + }, + "node_modules/yallist": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz", + "integrity": 
"sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==", + "license": "ISC" + }, + "node_modules/yargs": { + "version": "17.7.2", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.7.2.tgz", + "integrity": "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==", + "license": "MIT", + "dependencies": { + "cliui": "^8.0.1", + "escalade": "^3.1.1", + "get-caller-file": "^2.0.5", + "require-directory": "^2.1.1", + "string-width": "^4.2.3", + "y18n": "^5.0.5", + "yargs-parser": "^21.1.1" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/yargs-parser": { + "version": "21.1.1", + "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz", + "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==", + "license": "ISC", + "engines": { + "node": ">=12" + } + }, + "node_modules/yargs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/yargs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "license": "MIT" + }, + "node_modules/yargs/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + 
}, + "node_modules/yargs/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/yocto-queue": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz", + "integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/yocto-spinner": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/yocto-spinner/-/yocto-spinner-1.2.0.tgz", + "integrity": "sha512-Yw0hUB6UA3o4YUgKy3oSe9a4cxoaZ9sBfYDw+JSxo6Id0KoJGoxzPA24qqUXYKBWABs/zDSGTz9kww7t3F0XGw==", + "license": "MIT", + "dependencies": { + "yoctocolors": "^2.1.1" + }, + "engines": { + "node": ">=18.19" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/yoctocolors": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/yoctocolors/-/yoctocolors-2.1.2.tgz", + "integrity": "sha512-CzhO+pFNo8ajLM2d2IW/R93ipy99LWjtwblvC1RsoSUMZgyLbYFr221TnSNT7GjGdYui6P459mw9JH/g/zW2ug==", + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/zod": { + "version": "4.4.2", + "resolved": "https://registry.npmjs.org/zod/-/zod-4.4.2.tgz", + "integrity": "sha512-IynmDyxsEsb9RKzO3J9+4SxXnl2FTFSzNBaKKaMV6tsSk0rw9gYw9gs+JFCq/qk2LCZ78KDwyj+Z289TijSkUw==", + "license": "MIT", + "peer": true, + "funding": { + "url": "https://github.com/sponsors/colinhacks" + } + }, + "node_modules/zod-to-json-schema": { + "version": "3.25.2", + 
"resolved": "https://registry.npmjs.org/zod-to-json-schema/-/zod-to-json-schema-3.25.2.tgz", + "integrity": "sha512-O/PgfnpT1xKSDeQYSCfRI5Gy3hPf91mKVDuYLUHZJMiDFptvP41MSnWofm8dnCm0256ZNfZIM7DSzuSMAFnjHA==", + "license": "ISC", + "peerDependencies": { + "zod": "^3.25.28 || ^4" + } + }, + "node_modules/zod-validation-error": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/zod-validation-error/-/zod-validation-error-4.0.2.tgz", + "integrity": "sha512-Q6/nZLe6jxuU80qb/4uJ4t5v2VEZ44lzQjPDhYJNztRQ4wyWc6VF3D3Kb/fAuPetZQnhS3hnajCf9CsWesghLQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18.0.0" + }, + "peerDependencies": { + "zod": "^3.25.0 || ^4.0.0" + } + } + } +} diff --git a/web/package.json b/web/package.json new file mode 100644 index 0000000000..d8c4b806c4 --- /dev/null +++ b/web/package.json @@ -0,0 +1,35 @@ +{ + "name": "cli-proxy-management", + "version": "0.1.0", + "private": true, + "scripts": { + "dev": "next dev", + "build": "next build", + "start": "next start", + "lint": "eslint src" + }, + "dependencies": { + "class-variance-authority": "^0.7.1", + "clsx": "^2.1.1", + "lucide-react": "^1.14.0", + "next": "16.2.4", + "next-themes": "^0.4.6", + "radix-ui": "^1.4.3", + "react": "19.2.4", + "react-dom": "19.2.4", + "shadcn": "^4.6.0", + "sonner": "^2.0.7", + "tailwind-merge": "^3.5.0", + "tw-animate-css": "^1.4.0" + }, + "devDependencies": { + "@tailwindcss/postcss": "^4", + "@types/node": "^20", + "@types/react": "^19", + "@types/react-dom": "^19", + "eslint": "^9", + "eslint-config-next": "16.2.4", + "tailwindcss": "^4", + "typescript": "^5" + } +} diff --git a/web/postcss.config.mjs b/web/postcss.config.mjs new file mode 100644 index 0000000000..61e36849cf --- /dev/null +++ b/web/postcss.config.mjs @@ -0,0 +1,7 @@ +const config = { + plugins: { + "@tailwindcss/postcss": {}, + }, +}; + +export default config; diff --git a/web/public/file.svg b/web/public/file.svg new file mode 100644 index 0000000000..004145cddf --- 
/dev/null +++ b/web/public/file.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/web/public/globe.svg b/web/public/globe.svg new file mode 100644 index 0000000000..567f17b0d7 --- /dev/null +++ b/web/public/globe.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/web/public/next.svg b/web/public/next.svg new file mode 100644 index 0000000000..5174b28c56 --- /dev/null +++ b/web/public/next.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/web/public/vercel.svg b/web/public/vercel.svg new file mode 100644 index 0000000000..7705396033 --- /dev/null +++ b/web/public/vercel.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/web/public/window.svg b/web/public/window.svg new file mode 100644 index 0000000000..b2b2a44f6e --- /dev/null +++ b/web/public/window.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/web/src/app/(dashboard)/api-keys/page.tsx b/web/src/app/(dashboard)/api-keys/page.tsx new file mode 100644 index 0000000000..d9d44b1ea5 --- /dev/null +++ b/web/src/app/(dashboard)/api-keys/page.tsx @@ -0,0 +1,3025 @@ +"use client"; + +import { useCallback, useEffect, useRef, useState } from "react"; +import { + api, + type GeminiKey, + type ClaudeKey, + type CodexKey, + type VertexKey, + type OpenAICompatEntry, + type OpenAICompatAPIKeyEntry, + type AmpModelMapping, + type AmpUpstreamAPIKeyEntry, +} from "@/lib/api"; +import { toast } from "sonner"; +import { + Key, + Eye, + EyeOff, + Plus, + Trash2, + Pencil, + Upload, + X, + Globe, + Bot, + Sparkles, + Cloud, + Layers, + Code2, + ArrowRight, + Save, + Eraser, +} from "lucide-react"; + +import { Button } from "@/components/ui/button"; +import { Badge } from "@/components/ui/badge"; +import { Switch } from "@/components/ui/switch"; +import { Input } from "@/components/ui/input"; +import { Label } from "@/components/ui/label"; +import { Skeleton } from "@/components/ui/skeleton"; +import { + Card, + CardContent, + CardDescription, + CardHeader, + CardTitle, +} from 
"@/components/ui/card"; +import { + Tabs, + TabsContent, + TabsList, + TabsTrigger, +} from "@/components/ui/tabs"; +import { + Table, + TableBody, + TableCell, + TableHead, + TableHeader, + TableRow, +} from "@/components/ui/table"; +import { + Dialog, + DialogContent, + DialogDescription, + DialogFooter, + DialogHeader, + DialogTitle, +} from "@/components/ui/dialog"; +import { + AlertDialog, + AlertDialogAction, + AlertDialogCancel, + AlertDialogContent, + AlertDialogDescription, + AlertDialogFooter, + AlertDialogHeader, + AlertDialogTitle, +} from "@/components/ui/alert-dialog"; + +function maskKey(key: string): string { + if (!key) return "—"; + if (key.length <= 8) return "****"; + return key.slice(0, 4) + "****" + key.slice(-4); +} + +function MaskedKeyCell({ value }: { value: string }) { + const [revealed, setRevealed] = useState(false); + return ( +
+ + {revealed ? value : maskKey(value)} + + +
+ ); +} + +function MaskedValue({ value }: { value: string }) { + const [revealed, setRevealed] = useState(false); + if (!value) return ; + return ( +
+ + {revealed ? value : maskKey(value)} + + +
+ ); +} + +interface HeaderEntry { + key: string; + value: string; +} + +interface ModelAliasEntry { + name: string; + alias: string; +} + +interface ProviderKeyFormData { + apiKey: string; + prefix: string; + baseUrl: string; + proxyUrl: string; + priority: string; + models: ModelAliasEntry[]; + headers: HeaderEntry[]; + excludedModels: string; + websockets: boolean; +} + +function emptyProviderForm(): ProviderKeyFormData { + return { + apiKey: "", + prefix: "", + baseUrl: "", + proxyUrl: "", + priority: "", + models: [], + headers: [], + excludedModels: "", + websockets: false, + }; +} + +function geminiToForm(k: GeminiKey): ProviderKeyFormData { + return { + apiKey: k["api-key"] ?? "", + prefix: k.prefix ?? "", + baseUrl: k["base-url"] ?? "", + proxyUrl: k["proxy-url"] ?? "", + priority: k.priority != null ? String(k.priority) : "", + models: (k.models ?? []).map((m) => ({ name: m.name, alias: m.alias })), + headers: k.headers + ? Object.entries(k.headers).map(([key, value]) => ({ key, value })) + : [], + excludedModels: (k["excluded-models"] ?? []).join(", "), + websockets: false, + }; +} + +function claudeToForm(k: ClaudeKey): ProviderKeyFormData { + return { + apiKey: k["api-key"] ?? "", + prefix: k.prefix ?? "", + baseUrl: k["base-url"] ?? "", + proxyUrl: k["proxy-url"] ?? "", + priority: k.priority != null ? String(k.priority) : "", + models: (k.models ?? []).map((m) => ({ name: m.name, alias: m.alias })), + headers: k.headers + ? Object.entries(k.headers).map(([key, value]) => ({ key, value })) + : [], + excludedModels: (k["excluded-models"] ?? []).join(", "), + websockets: false, + }; +} + +function codexToForm(k: CodexKey): ProviderKeyFormData { + return { + apiKey: k["api-key"] ?? "", + prefix: k.prefix ?? "", + baseUrl: k["base-url"] ?? "", + proxyUrl: k["proxy-url"] ?? "", + priority: k.priority != null ? String(k.priority) : "", + models: (k.models ?? []).map((m) => ({ name: m.name, alias: m.alias })), + headers: k.headers + ? 
Object.entries(k.headers).map(([key, value]) => ({ key, value })) + : [], + excludedModels: (k["excluded-models"] ?? []).join(", "), + websockets: k.websockets ?? false, + }; +} + +function vertexToForm(k: VertexKey): ProviderKeyFormData { + return { + apiKey: k["api-key"] ?? "", + prefix: k.prefix ?? "", + baseUrl: k["base-url"] ?? "", + proxyUrl: k["proxy-url"] ?? "", + priority: k.priority != null ? String(k.priority) : "", + models: (k.models ?? []).map((m) => ({ name: m.name, alias: m.alias })), + headers: k.headers + ? Object.entries(k.headers).map(([key, value]) => ({ key, value })) + : [], + excludedModels: (k["excluded-models"] ?? []).join(", "), + websockets: false, + }; +} + +function formToGemini(f: ProviderKeyFormData): Partial { + const obj: Partial = { "api-key": f.apiKey }; + if (f.prefix) obj.prefix = f.prefix; + if (f.baseUrl) obj["base-url"] = f.baseUrl; + if (f.proxyUrl) obj["proxy-url"] = f.proxyUrl; + if (f.priority) obj.priority = Number(f.priority); + if (f.models.length > 0) obj.models = f.models.map((m) => ({ name: m.name, alias: m.alias })); + if (f.headers.length > 0) { + const h: Record = {}; + for (const e of f.headers) { + if (e.key.trim()) h[e.key] = e.value; + } + if (Object.keys(h).length > 0) obj.headers = h; + } + if (f.excludedModels.trim()) { + obj["excluded-models"] = f.excludedModels.split(",").map((s) => s.trim()).filter(Boolean); + } + return obj; +} + +function formToClaude(f: ProviderKeyFormData): Partial { + const obj: Partial = { "api-key": f.apiKey }; + if (f.prefix) obj.prefix = f.prefix; + if (f.baseUrl) obj["base-url"] = f.baseUrl; + if (f.proxyUrl) obj["proxy-url"] = f.proxyUrl; + if (f.priority) obj.priority = Number(f.priority); + if (f.models.length > 0) obj.models = f.models.map((m) => ({ name: m.name, alias: m.alias })); + if (f.headers.length > 0) { + const h: Record = {}; + for (const e of f.headers) { + if (e.key.trim()) h[e.key] = e.value; + } + if (Object.keys(h).length > 0) obj.headers = h; + } + if 
(f.excludedModels.trim()) { + obj["excluded-models"] = f.excludedModels.split(",").map((s) => s.trim()).filter(Boolean); + } + return obj; +} + +function formToCodex(f: ProviderKeyFormData): Partial { + const obj: Partial = { "api-key": f.apiKey, websockets: f.websockets }; + if (f.prefix) obj.prefix = f.prefix; + if (f.baseUrl) obj["base-url"] = f.baseUrl; + if (f.proxyUrl) obj["proxy-url"] = f.proxyUrl; + if (f.priority) obj.priority = Number(f.priority); + if (f.models.length > 0) obj.models = f.models.map((m) => ({ name: m.name, alias: m.alias })); + if (f.headers.length > 0) { + const h: Record = {}; + for (const e of f.headers) { + if (e.key.trim()) h[e.key] = e.value; + } + if (Object.keys(h).length > 0) obj.headers = h; + } + if (f.excludedModels.trim()) { + obj["excluded-models"] = f.excludedModels.split(",").map((s) => s.trim()).filter(Boolean); + } + return obj; +} + +function formToVertex(f: ProviderKeyFormData): Partial { + const obj: Partial = { "api-key": f.apiKey }; + if (f.prefix) obj.prefix = f.prefix; + if (f.baseUrl) obj["base-url"] = f.baseUrl; + if (f.proxyUrl) obj["proxy-url"] = f.proxyUrl; + if (f.priority) obj.priority = Number(f.priority); + if (f.models.length > 0) obj.models = f.models.map((m) => ({ name: m.name, alias: m.alias })); + if (f.headers.length > 0) { + const h: Record = {}; + for (const e of f.headers) { + if (e.key.trim()) h[e.key] = e.value; + } + if (Object.keys(h).length > 0) obj.headers = h; + } + if (f.excludedModels.trim()) { + obj["excluded-models"] = f.excludedModels.split(",").map((s) => s.trim()).filter(Boolean); + } + return obj; +} + +interface OpenAICompatFormData { + name: string; + prefix: string; + baseUrl: string; + apiKeyEntries: { apiKey: string; proxyUrl: string }[]; + models: ModelAliasEntry[]; + headers: HeaderEntry[]; + priority: string; +} + +function emptyOpenAICompatForm(): OpenAICompatFormData { + return { + name: "", + prefix: "", + baseUrl: "", + apiKeyEntries: [{ apiKey: "", proxyUrl: "" }], + 
models: [], + headers: [], + priority: "", + }; +} + +function openAICompatToForm(e: OpenAICompatEntry): OpenAICompatFormData { + return { + name: e.name ?? "", + prefix: e.prefix ?? "", + baseUrl: e["base-url"] ?? "", + apiKeyEntries: + (e["api-key-entries"] ?? []).map((a) => ({ + apiKey: a["api-key"] ?? "", + proxyUrl: a["proxy-url"] ?? "", + })).length > 0 + ? (e["api-key-entries"] ?? []).map((a) => ({ + apiKey: a["api-key"] ?? "", + proxyUrl: a["proxy-url"] ?? "", + })) + : [{ apiKey: "", proxyUrl: "" }], + models: (e.models ?? []).map((m) => ({ name: m.name, alias: m.alias })), + headers: e.headers + ? Object.entries(e.headers).map(([key, value]) => ({ key, value })) + : [], + priority: e.priority != null ? String(e.priority) : "", + }; +} + +function formToOpenAICompat(f: OpenAICompatFormData): Partial { + const obj: Partial = { name: f.name, "base-url": f.baseUrl }; + if (f.prefix) obj.prefix = f.prefix; + if (f.priority) obj.priority = Number(f.priority); + const cleanEntries = f.apiKeyEntries.filter((a) => a.apiKey.trim() !== ""); + if (cleanEntries.length > 0) { + obj["api-key-entries"] = cleanEntries.map((a) => { + const entry: OpenAICompatAPIKeyEntry = { "api-key": a.apiKey }; + if (a.proxyUrl) entry["proxy-url"] = a.proxyUrl; + return entry; + }); + } + if (f.models.length > 0) obj.models = f.models.map((m) => ({ name: m.name, alias: m.alias })); + if (f.headers.length > 0) { + const h: Record = {}; + for (const e of f.headers) { + if (e.key.trim()) h[e.key] = e.value; + } + if (Object.keys(h).length > 0) obj.headers = h; + } + return obj; +} + +function ProviderKeyForm({ + form, + setForm, + showWebsockets, +}: { + form: ProviderKeyFormData; + setForm: React.Dispatch>; + showWebsockets?: boolean; +}) { + const updateField = (field: keyof ProviderKeyFormData, value: string | boolean) => { + setForm((prev) => ({ ...prev, [field]: value })); + }; + + const addModel = () => { + setForm((prev) => ({ + ...prev, + models: [...prev.models, { name: "", alias: 
"" }], + })); + }; + + const removeModel = (index: number) => { + setForm((prev) => ({ + ...prev, + models: prev.models.filter((_, i) => i !== index), + })); + }; + + const updateModel = (index: number, field: "name" | "alias", value: string) => { + setForm((prev) => ({ + ...prev, + models: prev.models.map((m, i) => (i === index ? { ...m, [field]: value } : m)), + })); + }; + + const addHeader = () => { + setForm((prev) => ({ + ...prev, + headers: [...prev.headers, { key: "", value: "" }], + })); + }; + + const removeHeader = (index: number) => { + setForm((prev) => ({ + ...prev, + headers: prev.headers.filter((_, i) => i !== index), + })); + }; + + const updateHeader = (index: number, field: "key" | "value", value: string) => { + setForm((prev) => ({ + ...prev, + headers: prev.headers.map((h, i) => (i === index ? { ...h, [field]: value } : h)), + })); + }; + + return ( +
+
+ + updateField("apiKey", e.target.value)} + placeholder="Enter API key" + /> +
+
+ + updateField("prefix", e.target.value)} + placeholder="Model prefix" + /> +
+
+ + updateField("baseUrl", e.target.value)} + placeholder="https://api.example.com" + /> +
+
+ + updateField("proxyUrl", e.target.value)} + placeholder="https://proxy.example.com" + /> +
+
+ + updateField("priority", e.target.value)} + placeholder="0" + /> +
+ {showWebsockets && ( +
+ updateField("websockets", checked)} + /> + +
+ )} +
+ +
+ {form.models.map((m, i) => ( +
+ updateModel(i, "name", e.target.value)} + placeholder="Model name" + className="flex-1" + /> + updateModel(i, "alias", e.target.value)} + placeholder="Alias" + className="flex-1" + /> + +
+ ))} + +
+
+
+ +
+ {form.headers.map((h, i) => ( +
+ updateHeader(i, "key", e.target.value)} + placeholder="Header name" + className="flex-1" + /> + updateHeader(i, "value", e.target.value)} + placeholder="Header value" + className="flex-1" + /> + +
+ ))} + +
+
+
+ + updateField("excludedModels", e.target.value)} + placeholder="model1, model2, ..." + /> +
+
+ ); +} + +function OpenAICompatForm({ + form, + setForm, +}: { + form: OpenAICompatFormData; + setForm: React.Dispatch>; +}) { + const updateField = (field: keyof OpenAICompatFormData, value: string) => { + setForm((prev) => ({ ...prev, [field]: value })); + }; + + const addApiKeyEntry = () => { + setForm((prev) => ({ + ...prev, + apiKeyEntries: [...prev.apiKeyEntries, { apiKey: "", proxyUrl: "" }], + })); + }; + + const removeApiKeyEntry = (index: number) => { + setForm((prev) => ({ + ...prev, + apiKeyEntries: prev.apiKeyEntries.filter((_, i) => i !== index), + })); + }; + + const updateApiKeyEntry = (index: number, field: "apiKey" | "proxyUrl", value: string) => { + setForm((prev) => ({ + ...prev, + apiKeyEntries: prev.apiKeyEntries.map((e, i) => + i === index ? { ...e, [field]: value } : e + ), + })); + }; + + const addModel = () => { + setForm((prev) => ({ + ...prev, + models: [...prev.models, { name: "", alias: "" }], + })); + }; + + const removeModel = (index: number) => { + setForm((prev) => ({ + ...prev, + models: prev.models.filter((_, i) => i !== index), + })); + }; + + const updateModel = (index: number, field: "name" | "alias", value: string) => { + setForm((prev) => ({ + ...prev, + models: prev.models.map((m, i) => (i === index ? { ...m, [field]: value } : m)), + })); + }; + + const addHeader = () => { + setForm((prev) => ({ + ...prev, + headers: [...prev.headers, { key: "", value: "" }], + })); + }; + + const removeHeader = (index: number) => { + setForm((prev) => ({ + ...prev, + headers: prev.headers.filter((_, i) => i !== index), + })); + }; + + const updateHeader = (index: number, field: "key" | "value", value: string) => { + setForm((prev) => ({ + ...prev, + headers: prev.headers.map((h, i) => (i === index ? { ...h, [field]: value } : h)), + })); + }; + + return ( +
+
+ + updateField("name", e.target.value)} + placeholder="Provider name" + /> +
+
+ + updateField("prefix", e.target.value)} + placeholder="Model prefix" + /> +
+
+ + updateField("baseUrl", e.target.value)} + placeholder="https://api.example.com" + /> +
+
+ + updateField("priority", e.target.value)} + placeholder="0" + /> +
+
+ +
+ {form.apiKeyEntries.map((entry, i) => ( +
+ updateApiKeyEntry(i, "apiKey", e.target.value)} + placeholder="API key" + className="flex-1" + /> + updateApiKeyEntry(i, "proxyUrl", e.target.value)} + placeholder="Proxy URL (optional)" + className="flex-1" + /> + +
+ ))} + +
+
+
+ +
+ {form.models.map((m, i) => ( +
+ updateModel(i, "name", e.target.value)} + placeholder="Model name" + className="flex-1" + /> + updateModel(i, "alias", e.target.value)} + placeholder="Alias" + className="flex-1" + /> + +
+ ))} + +
+
+
+ +
+ {form.headers.map((h, i) => ( +
+ updateHeader(i, "key", e.target.value)} + placeholder="Header name" + className="flex-1" + /> + updateHeader(i, "value", e.target.value)} + placeholder="Header value" + className="flex-1" + /> + +
+ ))} + +
+
+
+ ); +} + +function TableSkeleton({ cols }: { cols: number }) { + return ( + <> + {Array.from({ length: 3 }).map((_, i) => ( + + {Array.from({ length: cols }).map((_, j) => ( + + + + ))} + + ))} + + ); +} + +function EmptyState({ icon, message }: { icon: React.ReactNode; message: string }) { + return ( +
+
{icon}
+

{message}

+
+ ); +} + +const DELETE_ACTION_CLASS = + "bg-destructive/10 text-destructive hover:bg-destructive/20 focus-visible:border-destructive/40 focus-visible:ring-destructive/20 dark:bg-destructive/20 dark:hover:bg-destructive/30 dark:focus-visible:ring-destructive/40"; + +export default function APIKeysPage() { + // ─── Tab 1: API Keys ─── + const [apiKeys, setApiKeys] = useState([]); + const [apiKeysLoading, setApiKeysLoading] = useState(true); + const [addKeyOpen, setAddKeyOpen] = useState(false); + const [newKeyValue, setNewKeyValue] = useState(""); + const [addKeySaving, setAddKeySaving] = useState(false); + const [deleteKeyTarget, setDeleteKeyTarget] = useState<{ index: number; value: string } | null>(null); + const [deleteKeySaving, setDeleteKeySaving] = useState(false); + + const fetchAPIKeys = useCallback(async () => { + try { + const data = await api.apiKeys.getAPIKeys(); + setApiKeys(data); + } catch (err) { + toast.error("Failed to load API keys", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setApiKeysLoading(false); + } + }, []); + + const handleAddKey = async () => { + if (!newKeyValue.trim()) return; + setAddKeySaving(true); + try { + await api.apiKeys.patchAPIKeys({ value: newKeyValue.trim() }); + toast.success("API key added"); + setNewKeyValue(""); + setAddKeyOpen(false); + await fetchAPIKeys(); + } catch (err) { + toast.error("Failed to add API key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAddKeySaving(false); + } + }; + + const handleDeleteKey = async () => { + if (!deleteKeyTarget) return; + setDeleteKeySaving(true); + try { + await api.apiKeys.deleteAPIKeys({ index: deleteKeyTarget.index }); + toast.success("API key deleted"); + setDeleteKeyTarget(null); + await fetchAPIKeys(); + } catch (err) { + toast.error("Failed to delete API key", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setDeleteKeySaving(false); + } + }; + + // ─── Tab 2: Gemini Keys ─── + const [geminiKeys, setGeminiKeys] = useState([]); + const [geminiLoading, setGeminiLoading] = useState(true); + const [geminiFormOpen, setGeminiFormOpen] = useState(false); + const [geminiEditIndex, setGeminiEditIndex] = useState(null); + const [geminiForm, setGeminiForm] = useState(emptyProviderForm()); + const [geminiSaving, setGeminiSaving] = useState(false); + const [deleteGeminiTarget, setDeleteGeminiTarget] = useState<{ index: number; key: GeminiKey } | null>(null); + const [deleteGeminiSaving, setDeleteGeminiSaving] = useState(false); + + const fetchGeminiKeys = useCallback(async () => { + try { + const data = await api.geminiKeys.getGeminiKeys(); + setGeminiKeys(data); + } catch (err) { + toast.error("Failed to load Gemini keys", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setGeminiLoading(false); + } + }, []); + + const openGeminiAdd = () => { + setGeminiEditIndex(null); + setGeminiForm(emptyProviderForm()); + setGeminiFormOpen(true); + }; + + const openGeminiEdit = (index: number, key: GeminiKey) => { + setGeminiEditIndex(index); + setGeminiForm(geminiToForm(key)); + setGeminiFormOpen(true); + }; + + const handleGeminiSave = async () => { + if (!geminiForm.apiKey.trim()) { + toast.error("API key is required"); + return; + } + setGeminiSaving(true); + try { + const value = formToGemini(geminiForm) as GeminiKey; + if (geminiEditIndex !== null) { + await api.geminiKeys.patchGeminiKey({ index: geminiEditIndex, value }); + toast.success("Gemini key updated"); + } else { + const current = await api.geminiKeys.getGeminiKeys(); + await api.geminiKeys.putGeminiKeys([...current, value]); + toast.success("Gemini key added"); + } + setGeminiFormOpen(false); + await fetchGeminiKeys(); + } catch (err) { + toast.error("Failed to save Gemini key", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setGeminiSaving(false); + } + }; + + const handleDeleteGemini = async () => { + if (!deleteGeminiTarget) return; + setDeleteGeminiSaving(true); + try { + await api.geminiKeys.deleteGeminiKey({ index: deleteGeminiTarget.index }); + toast.success("Gemini key deleted"); + setDeleteGeminiTarget(null); + await fetchGeminiKeys(); + } catch (err) { + toast.error("Failed to delete Gemini key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setDeleteGeminiSaving(false); + } + }; + + // ─── Tab 3: Claude Keys ─── + const [claudeKeys, setClaudeKeys] = useState([]); + const [claudeLoading, setClaudeLoading] = useState(true); + const [claudeFormOpen, setClaudeFormOpen] = useState(false); + const [claudeEditIndex, setClaudeEditIndex] = useState(null); + const [claudeForm, setClaudeForm] = useState(emptyProviderForm()); + const [claudeSaving, setClaudeSaving] = useState(false); + const [deleteClaudeTarget, setDeleteClaudeTarget] = useState<{ index: number; key: ClaudeKey } | null>(null); + const [deleteClaudeSaving, setDeleteClaudeSaving] = useState(false); + + const fetchClaudeKeys = useCallback(async () => { + try { + const data = await api.claudeKeys.getClaudeKeys(); + setClaudeKeys(data); + } catch (err) { + toast.error("Failed to load Claude keys", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setClaudeLoading(false); + } + }, []); + + const openClaudeAdd = () => { + setClaudeEditIndex(null); + setClaudeForm(emptyProviderForm()); + setClaudeFormOpen(true); + }; + + const openClaudeEdit = (index: number, key: ClaudeKey) => { + setClaudeEditIndex(index); + setClaudeForm(claudeToForm(key)); + setClaudeFormOpen(true); + }; + + const handleClaudeSave = async () => { + if (!claudeForm.apiKey.trim()) { + toast.error("API key is required"); + return; + } + setClaudeSaving(true); + try { + const value = formToClaude(claudeForm) as ClaudeKey; + if (claudeEditIndex !== null) { + await api.claudeKeys.patchClaudeKey({ index: claudeEditIndex, value }); + toast.success("Claude key updated"); + } else { + const current = await api.claudeKeys.getClaudeKeys(); + await api.claudeKeys.putClaudeKeys([...current, value]); + toast.success("Claude key added"); + } + setClaudeFormOpen(false); + await fetchClaudeKeys(); + } catch (err) { + toast.error("Failed to save Claude key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setClaudeSaving(false); + } + }; + + const handleDeleteClaude = async () => { + if (!deleteClaudeTarget) return; + setDeleteClaudeSaving(true); + try { + await api.claudeKeys.deleteClaudeKey({ index: deleteClaudeTarget.index }); + toast.success("Claude key deleted"); + setDeleteClaudeTarget(null); + await fetchClaudeKeys(); + } catch (err) { + toast.error("Failed to delete Claude key", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setDeleteClaudeSaving(false); + } + }; + + // ─── Tab 4: Codex Keys ─── + const [codexKeys, setCodexKeys] = useState([]); + const [codexLoading, setCodexLoading] = useState(true); + const [codexFormOpen, setCodexFormOpen] = useState(false); + const [codexEditIndex, setCodexEditIndex] = useState(null); + const [codexForm, setCodexForm] = useState(emptyProviderForm()); + const [codexSaving, setCodexSaving] = useState(false); + const [deleteCodexTarget, setDeleteCodexTarget] = useState<{ index: number; key: CodexKey } | null>(null); + const [deleteCodexSaving, setDeleteCodexSaving] = useState(false); + + const fetchCodexKeys = useCallback(async () => { + try { + const data = await api.codexKeys.getCodexKeys(); + setCodexKeys(data); + } catch (err) { + toast.error("Failed to load Codex keys", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setCodexLoading(false); + } + }, []); + + const openCodexAdd = () => { + setCodexEditIndex(null); + setCodexForm(emptyProviderForm()); + setCodexFormOpen(true); + }; + + const openCodexEdit = (index: number, key: CodexKey) => { + setCodexEditIndex(index); + setCodexForm(codexToForm(key)); + setCodexFormOpen(true); + }; + + const handleCodexSave = async () => { + if (!codexForm.apiKey.trim()) { + toast.error("API key is required"); + return; + } + setCodexSaving(true); + try { + const value = formToCodex(codexForm) as CodexKey; + if (codexEditIndex !== null) { + await api.codexKeys.patchCodexKey({ index: codexEditIndex, value }); + toast.success("Codex key updated"); + } else { + const current = await api.codexKeys.getCodexKeys(); + await api.codexKeys.putCodexKeys([...current, value]); + toast.success("Codex key added"); + } + setCodexFormOpen(false); + await fetchCodexKeys(); + } catch (err) { + toast.error("Failed to save Codex key", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setCodexSaving(false); + } + }; + + const handleDeleteCodex = async () => { + if (!deleteCodexTarget) return; + setDeleteCodexSaving(true); + try { + await api.codexKeys.deleteCodexKey({ index: deleteCodexTarget.index }); + toast.success("Codex key deleted"); + setDeleteCodexTarget(null); + await fetchCodexKeys(); + } catch (err) { + toast.error("Failed to delete Codex key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setDeleteCodexSaving(false); + } + }; + + // ─── Tab 5: Vertex Keys ─── + const [vertexKeys, setVertexKeys] = useState([]); + const [vertexLoading, setVertexLoading] = useState(true); + const [vertexFormOpen, setVertexFormOpen] = useState(false); + const [vertexEditIndex, setVertexEditIndex] = useState(null); + const [vertexForm, setVertexForm] = useState(emptyProviderForm()); + const [vertexSaving, setVertexSaving] = useState(false); + const [deleteVertexTarget, setDeleteVertexTarget] = useState<{ index: number; key: VertexKey } | null>(null); + const [deleteVertexSaving, setDeleteVertexSaving] = useState(false); + const [vertexImportOpen, setVertexImportOpen] = useState(false); + const [vertexImporting, setVertexImporting] = useState(false); + const vertexFileRef = useRef(null); + + const fetchVertexKeys = useCallback(async () => { + try { + const data = await api.vertexKeys.getVertexKeys(); + setVertexKeys(data); + } catch (err) { + toast.error("Failed to load Vertex keys", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setVertexLoading(false); + } + }, []); + + const openVertexAdd = () => { + setVertexEditIndex(null); + setVertexForm(emptyProviderForm()); + setVertexFormOpen(true); + }; + + const openVertexEdit = (index: number, key: VertexKey) => { + setVertexEditIndex(index); + setVertexForm(vertexToForm(key)); + setVertexFormOpen(true); + }; + + const handleVertexSave = async () => { + if (!vertexForm.apiKey.trim()) { + toast.error("API key is required"); + return; + } + setVertexSaving(true); + try { + const value = formToVertex(vertexForm) as VertexKey; + if (vertexEditIndex !== null) { + await api.vertexKeys.patchVertexKey({ index: vertexEditIndex, value }); + toast.success("Vertex key updated"); + } else { + const current = await api.vertexKeys.getVertexKeys(); + await api.vertexKeys.putVertexKeys([...current, value]); + toast.success("Vertex key added"); + } + setVertexFormOpen(false); + await fetchVertexKeys(); + } catch (err) { + toast.error("Failed to save Vertex key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setVertexSaving(false); + } + }; + + const handleDeleteVertex = async () => { + if (!deleteVertexTarget) return; + setDeleteVertexSaving(true); + try { + await api.vertexKeys.deleteVertexKey({ index: deleteVertexTarget.index }); + toast.success("Vertex key deleted"); + setDeleteVertexTarget(null); + await fetchVertexKeys(); + } catch (err) { + toast.error("Failed to delete Vertex key", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setDeleteVertexSaving(false); + } + }; + + const handleVertexImport = async () => { + const file = vertexFileRef.current?.files?.[0]; + if (!file) return; + setVertexImporting(true); + try { + const text = await file.text(); + const json = JSON.parse(text) as Record; + const projectId = json.project_id as string | undefined; + const privateKey = json.private_key as string | undefined; + const clientEmail = json.client_email as string | undefined; + if (!projectId || !privateKey || !clientEmail) { + toast.error("Invalid service account file: missing project_id, private_key, or client_email"); + return; + } + await api.vertexImport({ project_id: projectId, private_key: privateKey, client_email: clientEmail }); + toast.success("Vertex credentials imported"); + setVertexImportOpen(false); + if (vertexFileRef.current) vertexFileRef.current.value = ""; + await fetchVertexKeys(); + } catch (err) { + toast.error("Vertex import failed", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setVertexImporting(false); + } + }; + + // ─── Tab 6: OpenAI Compatibility ─── + const [openAICompat, setOpenAICompat] = useState([]); + const [oacLoading, setOACLoading] = useState(true); + const [oacFormOpen, setOACFormOpen] = useState(false); + const [oacEditIndex, setOACEditIndex] = useState(null); + const [oacForm, setOACForm] = useState(emptyOpenAICompatForm()); + const [oacSaving, setOACSaving] = useState(false); + const [deleteOACTarget, setDeleteOACTarget] = useState<{ index: number; entry: OpenAICompatEntry } | null>(null); + const [deleteOACSaving, setDeleteOACSaving] = useState(false); + + const fetchOpenAICompat = useCallback(async () => { + try { + const data = await api.openAICompat.getOpenAICompat(); + setOpenAICompat(data); + } catch (err) { + toast.error("Failed to load OpenAI compatibility entries", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setOACLoading(false); + } + }, []); + + const openOACAdd = () => { + setOACEditIndex(null); + setOACForm(emptyOpenAICompatForm()); + setOACFormOpen(true); + }; + + const openOACEdit = (index: number, entry: OpenAICompatEntry) => { + setOACEditIndex(index); + setOACForm(openAICompatToForm(entry)); + setOACFormOpen(true); + }; + + const handleOACSave = async () => { + if (!oacForm.name.trim()) { + toast.error("Name is required"); + return; + } + if (!oacForm.baseUrl.trim()) { + toast.error("Base URL is required"); + return; + } + setOACSaving(true); + try { + const value = formToOpenAICompat(oacForm) as OpenAICompatEntry; + if (oacEditIndex !== null) { + await api.openAICompat.patchOpenAICompat({ index: oacEditIndex, value }); + toast.success("OpenAI compatibility entry updated"); + } else { + const current = await api.openAICompat.getOpenAICompat(); + await api.openAICompat.putOpenAICompat([...current, value]); + toast.success("OpenAI compatibility entry added"); + } + setOACFormOpen(false); + await fetchOpenAICompat(); + } catch (err) { + toast.error("Failed to save OpenAI compatibility entry", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setOACSaving(false); + } + }; + + const handleDeleteOAC = async () => { + if (!deleteOACTarget) return; + setDeleteOACSaving(true); + try { + await api.openAICompat.deleteOpenAICompat({ index: deleteOACTarget.index }); + toast.success("OpenAI compatibility entry deleted"); + setDeleteOACTarget(null); + await fetchOpenAICompat(); + } catch (err) { + toast.error("Failed to delete OpenAI compatibility entry", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setDeleteOACSaving(false); + } + }; + + // ─── Tab 7: AmpCode ─── + const [ampUpstreamURL, setAmpUpstreamURL] = useState(""); + const [ampUpstreamAPIKey, setAmpUpstreamAPIKey] = useState(""); + const [ampRestrictLocalhost, setAmpRestrictLocalhost] = useState(false); + const [ampForceMappings, setAmpForceMappings] = useState(false); + const [ampModelMappings, setAmpModelMappings] = useState([]); + const [ampUpstreamAPIKeys, setAmpUpstreamAPIKeys] = useState([]); + const [ampLoading, setAmpLoading] = useState(true); + const [ampUpstreamURLEdit, setAmpUpstreamURLEdit] = useState(""); + const [ampUpstreamAPIKeyEdit, setAmpUpstreamAPIKeyEdit] = useState(""); + const [ampSavingURL, setAmpSavingURL] = useState(false); + const [ampSavingAPIKey, setAmpSavingAPIKey] = useState(false); + const [ampSavingSwitch, setAmpSavingSwitch] = useState(false); + const [ampMappingFormOpen, setAmpMappingFormOpen] = useState(false); + const [ampMappingEditIndex, setAmpMappingEditIndex] = useState(null); + const [ampMappingFrom, setAmpMappingFrom] = useState(""); + const [ampMappingTo, setAmpMappingTo] = useState(""); + const [ampMappingSaving, setAmpMappingSaving] = useState(false); + const [deleteAmpMappingTarget, setDeleteAmpMappingTarget] = useState(null); + const [deleteAmpMappingSaving, setDeleteAmpMappingSaving] = useState(false); + const [ampUpstreamKeyFormOpen, setAmpUpstreamKeyFormOpen] = useState(false); + const [ampUpstreamKeyEditIndex, setAmpUpstreamKeyEditIndex] = useState(null); + const [ampUpstreamKeyValue, setAmpUpstreamKeyValue] = useState(""); + const [ampUpstreamKeyApiKeys, setAmpUpstreamKeyApiKeys] = useState(""); + const [ampUpstreamKeySaving, setAmpUpstreamKeySaving] = useState(false); + const [deleteAmpUpstreamKeyTarget, setDeleteAmpUpstreamKeyTarget] = useState(null); + const [deleteAmpUpstreamKeySaving, setDeleteAmpUpstreamKeySaving] = useState(false); + + const fetchAmpCode = useCallback(async () => { + try { + const 
[url, apiKey, restrict, force, mappings, upstreamKeys] = await Promise.all([ + api.ampCode.getAmpUpstreamURL(), + api.ampCode.getAmpUpstreamAPIKey(), + api.ampCode.getAmpRestrictManagementToLocalhost(), + api.ampCode.getAmpForceModelMappings(), + api.ampCode.getAmpModelMappings(), + api.ampCode.getAmpUpstreamAPIKeys(), + ]); + setAmpUpstreamURL(url); + setAmpUpstreamURLEdit(url); + setAmpUpstreamAPIKey(apiKey); + setAmpUpstreamAPIKeyEdit(apiKey); + setAmpRestrictLocalhost(restrict); + setAmpForceMappings(force); + setAmpModelMappings(mappings); + setAmpUpstreamAPIKeys(upstreamKeys); + } catch (err) { + toast.error("Failed to load AmpCode settings", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAmpLoading(false); + } + }, []); + + const handleSaveAmpURL = async () => { + setAmpSavingURL(true); + try { + if (ampUpstreamURLEdit.trim()) { + await api.ampCode.putAmpUpstreamURL(ampUpstreamURLEdit.trim()); + } else { + await api.ampCode.deleteAmpUpstreamURL(); + } + setAmpUpstreamURL(ampUpstreamURLEdit.trim()); + toast.success("Upstream URL saved"); + } catch (err) { + toast.error("Failed to save upstream URL", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAmpSavingURL(false); + } + }; + + const handleClearAmpURL = async () => { + setAmpSavingURL(true); + try { + await api.ampCode.deleteAmpUpstreamURL(); + setAmpUpstreamURL(""); + setAmpUpstreamURLEdit(""); + toast.success("Upstream URL cleared"); + } catch (err) { + toast.error("Failed to clear upstream URL", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setAmpSavingURL(false); + } + }; + + const handleSaveAmpAPIKey = async () => { + setAmpSavingAPIKey(true); + try { + if (ampUpstreamAPIKeyEdit.trim()) { + await api.ampCode.putAmpUpstreamAPIKey(ampUpstreamAPIKeyEdit.trim()); + } else { + await api.ampCode.deleteAmpUpstreamAPIKey(); + } + setAmpUpstreamAPIKey(ampUpstreamAPIKeyEdit.trim()); + toast.success("Upstream API key saved"); + } catch (err) { + toast.error("Failed to save upstream API key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAmpSavingAPIKey(false); + } + }; + + const handleClearAmpAPIKey = async () => { + setAmpSavingAPIKey(true); + try { + await api.ampCode.deleteAmpUpstreamAPIKey(); + setAmpUpstreamAPIKey(""); + setAmpUpstreamAPIKeyEdit(""); + toast.success("Upstream API key cleared"); + } catch (err) { + toast.error("Failed to clear upstream API key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAmpSavingAPIKey(false); + } + }; + + const handleAmpRestrictToggle = async (checked: boolean) => { + const prev = ampRestrictLocalhost; + setAmpRestrictLocalhost(checked); + setAmpSavingSwitch(true); + try { + await api.ampCode.putAmpRestrictManagementToLocalhost(checked); + toast.success(checked ? "Restrict management to localhost enabled" : "Restrict management to localhost disabled"); + } catch (err) { + setAmpRestrictLocalhost(prev); + toast.error("Failed to toggle setting", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAmpSavingSwitch(false); + } + }; + + const handleAmpForceMappingsToggle = async (checked: boolean) => { + const prev = ampForceMappings; + setAmpForceMappings(checked); + setAmpSavingSwitch(true); + try { + await api.ampCode.putAmpForceModelMappings(checked); + toast.success(checked ? 
"Force model mappings enabled" : "Force model mappings disabled"); + } catch (err) { + setAmpForceMappings(prev); + toast.error("Failed to toggle setting", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAmpSavingSwitch(false); + } + }; + + const openAmpMappingAdd = () => { + setAmpMappingEditIndex(null); + setAmpMappingFrom(""); + setAmpMappingTo(""); + setAmpMappingFormOpen(true); + }; + + const openAmpMappingEdit = (index: number) => { + setAmpMappingEditIndex(index); + setAmpMappingFrom(ampModelMappings[index].from); + setAmpMappingTo(ampModelMappings[index].to); + setAmpMappingFormOpen(true); + }; + + const handleAmpMappingSave = async () => { + if (!ampMappingFrom.trim() || !ampMappingTo.trim()) { + toast.error("Both from and to fields are required"); + return; + } + setAmpMappingSaving(true); + try { + let newMappings: AmpModelMapping[]; + if (ampMappingEditIndex !== null) { + newMappings = ampModelMappings.map((m, i) => + i === ampMappingEditIndex ? { from: ampMappingFrom.trim(), to: ampMappingTo.trim() } : m + ); + } else { + newMappings = [...ampModelMappings, { from: ampMappingFrom.trim(), to: ampMappingTo.trim() }]; + } + await api.ampCode.putAmpModelMappings(newMappings); + setAmpModelMappings(newMappings); + toast.success(ampMappingEditIndex !== null ? "Model mapping updated" : "Model mapping added"); + setAmpMappingFormOpen(false); + } catch (err) { + toast.error("Failed to save model mapping", { + description: err instanceof Error ? 
err.message : undefined, + }); + } finally { + setAmpMappingSaving(false); + } + }; + + const handleDeleteAmpMapping = async () => { + if (deleteAmpMappingTarget === null) return; + setDeleteAmpMappingSaving(true); + try { + const fromKey = ampModelMappings[deleteAmpMappingTarget].from; + await api.ampCode.deleteAmpModelMappings([fromKey]); + setAmpModelMappings(ampModelMappings.filter((_, i) => i !== deleteAmpMappingTarget)); + toast.success("Model mapping deleted"); + setDeleteAmpMappingTarget(null); + } catch (err) { + toast.error("Failed to delete model mapping", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setDeleteAmpMappingSaving(false); + } + }; + + const openAmpUpstreamKeyAdd = () => { + setAmpUpstreamKeyEditIndex(null); + setAmpUpstreamKeyValue(""); + setAmpUpstreamKeyApiKeys(""); + setAmpUpstreamKeyFormOpen(true); + }; + + const openAmpUpstreamKeyEdit = (index: number) => { + setAmpUpstreamKeyEditIndex(index); + setAmpUpstreamKeyValue(ampUpstreamAPIKeys[index]["upstream-api-key"]); + setAmpUpstreamKeyApiKeys(ampUpstreamAPIKeys[index]["api-keys"].join(", ")); + setAmpUpstreamKeyFormOpen(true); + }; + + const handleAmpUpstreamKeySave = async () => { + if (!ampUpstreamKeyValue.trim()) { + toast.error("Upstream API key is required"); + return; + } + setAmpUpstreamKeySaving(true); + try { + const apiKeysList = ampUpstreamKeyApiKeys + .split(",") + .map((s) => s.trim()) + .filter(Boolean); + const entry: AmpUpstreamAPIKeyEntry = { + "upstream-api-key": ampUpstreamKeyValue.trim(), + "api-keys": apiKeysList, + }; + let newKeys: AmpUpstreamAPIKeyEntry[]; + if (ampUpstreamKeyEditIndex !== null) { + newKeys = ampUpstreamAPIKeys.map((k, i) => + i === ampUpstreamKeyEditIndex ? entry : k + ); + } else { + newKeys = [...ampUpstreamAPIKeys, entry]; + } + await api.ampCode.putAmpUpstreamAPIKeys(newKeys); + setAmpUpstreamAPIKeys(newKeys); + toast.success(ampUpstreamKeyEditIndex !== null ? 
"Upstream API key updated" : "Upstream API key added"); + setAmpUpstreamKeyFormOpen(false); + } catch (err) { + toast.error("Failed to save upstream API key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setAmpUpstreamKeySaving(false); + } + }; + + const handleDeleteAmpUpstreamKey = async () => { + if (deleteAmpUpstreamKeyTarget === null) return; + setDeleteAmpUpstreamKeySaving(true); + try { + const key = ampUpstreamAPIKeys[deleteAmpUpstreamKeyTarget]["upstream-api-key"]; + await api.ampCode.deleteAmpUpstreamAPIKeys([key]); + setAmpUpstreamAPIKeys(ampUpstreamAPIKeys.filter((_, i) => i !== deleteAmpUpstreamKeyTarget)); + toast.success("Upstream API key deleted"); + setDeleteAmpUpstreamKeyTarget(null); + } catch (err) { + toast.error("Failed to delete upstream API key", { + description: err instanceof Error ? err.message : undefined, + }); + } finally { + setDeleteAmpUpstreamKeySaving(false); + } + }; + + // ─── Initial data fetch ─── + useEffect(() => { + fetchAPIKeys(); + fetchGeminiKeys(); + fetchClaudeKeys(); + fetchCodexKeys(); + fetchVertexKeys(); + fetchOpenAICompat(); + fetchAmpCode(); + }, [fetchAPIKeys, fetchGeminiKeys, fetchClaudeKeys, fetchCodexKeys, fetchVertexKeys, fetchOpenAICompat, fetchAmpCode]); + + return ( +
+
+ +

API Keys

+
+ + + + + + API Keys + + + + Gemini + + + + Claude + + + + Codex + + + + Vertex + + + + OpenAI Compat + + + + AmpCode + + + + {/* ─── Tab 1: API Keys ─── */} + +
+

+ Simple API key strings for authentication +

+ +
+ + {apiKeysLoading ? ( +
+ + + + # + Key + + + + + + +
+
+ ) : apiKeys.length === 0 ? ( + } + message="No API keys configured. Add a key to get started." + /> + ) : ( +
+ + + + # + Key + + + + + {apiKeys.map((key, i) => ( + + {i + 1} + + + + + + + + ))} + +
+
+ )} +
+ + {/* ─── Tab 2: Gemini Keys ─── */} + +
+

+ Gemini API key entries with optional prefix and proxy +

+ +
+ + {geminiLoading ? ( +
+ + + + API Key + Prefix + Base URL + Proxy URL + + + + + + +
+
+ ) : geminiKeys.length === 0 ? ( + } + message="No Gemini keys configured. Add a key to get started." + /> + ) : ( +
+ + + + API Key + Prefix + Base URL + Proxy URL + + + + + {geminiKeys.map((key, i) => ( + + + + + + {key.prefix ? ( + {key.prefix} + ) : ( + + )} + + + {key["base-url"] || "—"} + + + {key["proxy-url"] || "—"} + + +
+ + +
+
+
+ ))} +
+
+
+ )} +
+ + {/* ─── Tab 3: Claude Keys ─── */} + +
+

+ Claude API key entries with model aliases +

+ +
+ + {claudeLoading ? ( +
+ + + + API Key + Prefix + Base URL + Models + + + + + + +
+
+ ) : claudeKeys.length === 0 ? ( + } + message="No Claude keys configured. Add a key to get started." + /> + ) : ( +
+ + + + API Key + Prefix + Base URL + Models + + + + + {claudeKeys.map((key, i) => ( + + + + + + {key.prefix ? ( + {key.prefix} + ) : ( + + )} + + + {key["base-url"] || "—"} + + + {key.models && key.models.length > 0 ? ( + {key.models.length} + ) : ( + 0 + )} + + +
+ + +
+
+
+ ))} +
+
+
+ )} +
+ + {/* ─── Tab 4: Codex Keys ─── */} + +
+

+ Codex API key entries with WebSocket support +

+ +
+ + {codexLoading ? ( +
+ + + + API Key + Prefix + Base URL + Models + WebSocket + + + + + + +
+
+ ) : codexKeys.length === 0 ? ( + } + message="No Codex keys configured. Add a key to get started." + /> + ) : ( +
+ + + + API Key + Prefix + Base URL + Models + WebSocket + + + + + {codexKeys.map((key, i) => ( + + + + + + {key.prefix ? ( + {key.prefix} + ) : ( + + )} + + + {key["base-url"] || "—"} + + + {key.models && key.models.length > 0 ? ( + {key.models.length} + ) : ( + 0 + )} + + + {key.websockets ? ( + + On + + ) : ( + Off + )} + + +
+ + +
+
+
+ ))} +
+
+
+ )} +
+ + {/* ─── Tab 5: Vertex Keys ─── */} + +
+

+ Vertex AI API key entries with service account import +

+
+ + +
+
+ + {vertexLoading ? ( +
+ + + + API Key + Prefix + Base URL + Models + + + + + + +
+
+ ) : vertexKeys.length === 0 ? ( + } + message="No Vertex keys configured. Add a key or import credentials to get started." + /> + ) : ( +
+ + + + API Key + Prefix + Base URL + Models + + + + + {vertexKeys.map((key, i) => ( + + + + + + {key.prefix ? ( + {key.prefix} + ) : ( + + )} + + + {key["base-url"] || "—"} + + + {key.models && key.models.length > 0 ? ( + {key.models.length} + ) : ( + 0 + )} + + +
+ + +
+
+
+ ))} +
+
+
+ )} +
+ + {/* ─── Tab 6: OpenAI Compatibility ─── */} + +
+

+ OpenAI-compatible provider entries +

+ +
+ + {oacLoading ? ( +
+ + + + Name + Prefix + Base URL + API Keys + + + + + + +
+
+ ) : openAICompat.length === 0 ? ( + } + message="No OpenAI compatibility entries. Add an entry to get started." + /> + ) : ( +
+ + + + Name + Prefix + Base URL + API Keys + + + + + {openAICompat.map((entry, i) => ( + + {entry.name} + + {entry.prefix ? ( + {entry.prefix} + ) : ( + + )} + + + {entry["base-url"]} + + + {entry["api-key-entries"] && entry["api-key-entries"].length > 0 ? ( + {entry["api-key-entries"].length} + ) : ( + 0 + )} + + +
+ + +
+
+
+ ))} +
+
+
+ )} +
+ + {/* ─── Tab 7: AmpCode ─── */} + + {ampLoading ? ( +
+ {Array.from({ length: 4 }).map((_, i) => ( + + + + + + + + + + ))} +
+ ) : ( + <> + + + + + Upstream URL + + + The upstream proxy URL for AmpCode requests + + + +
+ setAmpUpstreamURLEdit(e.target.value)} + placeholder="https://upstream.example.com" + disabled={ampSavingURL} + /> + + +
+
+
+ + + + + + Upstream API Key + + + The API key for upstream AmpCode authentication + + + +
+ setAmpUpstreamAPIKeyEdit(e.target.value)} + placeholder="Enter upstream API key" + disabled={ampSavingAPIKey} + /> + + +
+
+
+ + + + Restrict Management to Localhost + + Only allow management API access from localhost + + + +
+ + + {ampRestrictLocalhost ? "Enabled" : "Disabled"} + +
+
+
+ + + + Force Model Mappings + + Force all model requests through the defined mappings + + + +
+ + + {ampForceMappings ? "Enabled" : "Disabled"} + +
+
+
+ + + +
+
+ + + Model Mappings + + + Map model names from source to target + +
+ +
+
+ + {ampModelMappings.length === 0 ? ( +

+ No model mappings configured +

+ ) : ( +
+ + + + From + To + + + + + {ampModelMappings.map((m, i) => ( + + {m.from} + {m.to} + +
+ + +
+
+
+ ))} +
+
+
+ )} +
+
+ + + +
+
+ + + Upstream API Keys + + + Upstream API key entries with associated local API keys + +
+ +
+
+ + {ampUpstreamAPIKeys.length === 0 ? ( +

+ No upstream API keys configured +

+ ) : ( +
+ + + + Upstream Key + API Keys + + + + + {ampUpstreamAPIKeys.map((entry, i) => ( + + + + + +
+ {entry["api-keys"].length > 0 ? ( + entry["api-keys"].map((k, j) => ( + + {maskKey(k)} + + )) + ) : ( + + )} +
+
+ +
+ + +
+
+
+ ))} +
+
+
+ )} +
+
+ + )} +
+
+ + {/* ─── Dialogs ─── */} + + {/* Add API Key Dialog */} + + + + Add API Key + + Enter a new API key to add to the configuration. + + +
+
+ + setNewKeyValue(e.target.value)} + placeholder="Enter API key" + /> +
+
+ + + + +
+
+ + {/* Delete API Key Confirmation */} + { + if (!open) setDeleteKeyTarget(null); + }} + > + + + Delete API Key + + Are you sure you want to delete this API key? This action cannot be undone. + + + + Cancel + + {deleteKeySaving ? "Deleting..." : "Delete"} + + + + + + {/* Gemini Key Form Dialog */} + + + + + {geminiEditIndex !== null ? "Edit Gemini Key" : "Add Gemini Key"} + + + {geminiEditIndex !== null + ? "Update the Gemini API key configuration." + : "Add a new Gemini API key entry."} + + + + + + + + + + + {/* Delete Gemini Key Confirmation */} + { + if (!open) setDeleteGeminiTarget(null); + }} + > + + + Delete Gemini Key + + Are you sure you want to delete this Gemini key? This action cannot be undone. + + + + Cancel + + {deleteGeminiSaving ? "Deleting..." : "Delete"} + + + + + + {/* Claude Key Form Dialog */} + + + + + {claudeEditIndex !== null ? "Edit Claude Key" : "Add Claude Key"} + + + {claudeEditIndex !== null + ? "Update the Claude API key configuration." + : "Add a new Claude API key entry."} + + + + + + + + + + + {/* Delete Claude Key Confirmation */} + { + if (!open) setDeleteClaudeTarget(null); + }} + > + + + Delete Claude Key + + Are you sure you want to delete this Claude key? This action cannot be undone. + + + + Cancel + + {deleteClaudeSaving ? "Deleting..." : "Delete"} + + + + + + {/* Codex Key Form Dialog */} + + + + + {codexEditIndex !== null ? "Edit Codex Key" : "Add Codex Key"} + + + {codexEditIndex !== null + ? "Update the Codex API key configuration." + : "Add a new Codex API key entry."} + + + + + + + + + + + {/* Delete Codex Key Confirmation */} + { + if (!open) setDeleteCodexTarget(null); + }} + > + + + Delete Codex Key + + Are you sure you want to delete this Codex key? This action cannot be undone. + + + + Cancel + + {deleteCodexSaving ? "Deleting..." : "Delete"} + + + + + + {/* Vertex Key Form Dialog */} + + + + + {vertexEditIndex !== null ? "Edit Vertex Key" : "Add Vertex Key"} + + + {vertexEditIndex !== null + ? 
"Update the Vertex API key configuration." + : "Add a new Vertex API key entry."} + + + + + + + + + + + {/* Delete Vertex Key Confirmation */} + { + if (!open) setDeleteVertexTarget(null); + }} + > + + + Delete Vertex Key + + Are you sure you want to delete this Vertex key? This action cannot be undone. + + + + Cancel + + {deleteVertexSaving ? "Deleting..." : "Delete"} + + + + + + {/* Vertex Import Dialog */} + + + + Import Vertex Credentials + + Upload a Google Cloud service account JSON file to import Vertex AI credentials. + + +
+ +
+ + + + +
+
+ + {/* OpenAI Compat Form Dialog */} + + + + + {oacEditIndex !== null ? "Edit OpenAI Compatibility Entry" : "Add OpenAI Compatibility Entry"} + + + {oacEditIndex !== null + ? "Update the OpenAI compatibility entry configuration." + : "Add a new OpenAI-compatible provider entry."} + + + + + + + + + + + {/* Delete OpenAI Compat Confirmation */} + { + if (!open) setDeleteOACTarget(null); + }} + > + + + Delete OpenAI Compatibility Entry + + Are you sure you want to delete{" "} + + {deleteOACTarget?.entry.name} + + ? This action cannot be undone. + + + + Cancel + + {deleteOACSaving ? "Deleting..." : "Delete"} + + + + + + {/* AmpCode Model Mapping Form Dialog */} + + + + + {ampMappingEditIndex !== null ? "Edit Model Mapping" : "Add Model Mapping"} + + + Map a source model name to a target model name. + + +
+
+ + setAmpMappingFrom(e.target.value)} + placeholder="Source model name" + /> +
+
+ + setAmpMappingTo(e.target.value)} + placeholder="Target model name" + /> +
+
+ + + + +
+
+ + {/* Delete AmpCode Model Mapping Confirmation */} + { + if (!open) setDeleteAmpMappingTarget(null); + }} + > + + + Delete Model Mapping + + Are you sure you want to delete the mapping from{" "} + + {deleteAmpMappingTarget !== null ? ampModelMappings[deleteAmpMappingTarget]?.from : ""} + {" "} + to{" "} + + {deleteAmpMappingTarget !== null ? ampModelMappings[deleteAmpMappingTarget]?.to : ""} + + ? This action cannot be undone. + + + + Cancel + + {deleteAmpMappingSaving ? "Deleting..." : "Delete"} + + + + + + {/* AmpCode Upstream API Key Form Dialog */} + + + + + {ampUpstreamKeyEditIndex !== null ? "Edit Upstream API Key" : "Add Upstream API Key"} + + + Configure an upstream API key with associated local API keys. + + +
+
+ + setAmpUpstreamKeyValue(e.target.value)} + placeholder="Enter upstream API key" + /> +
+
+ + setAmpUpstreamKeyApiKeys(e.target.value)} + placeholder="key1, key2, key3 (comma-separated)" + /> +
+
+ + + + +
+
+ + {/* Delete AmpCode Upstream API Key Confirmation */} + { + if (!open) setDeleteAmpUpstreamKeyTarget(null); + }} + > + + + Delete Upstream API Key + + Are you sure you want to delete this upstream API key? This action cannot be undone. + + + + Cancel + + {deleteAmpUpstreamKeySaving ? "Deleting..." : "Delete"} + + + + +
+ ); +} diff --git a/web/src/app/(dashboard)/auth-files/page.tsx b/web/src/app/(dashboard)/auth-files/page.tsx new file mode 100644 index 0000000000..2e5d97de8b --- /dev/null +++ b/web/src/app/(dashboard)/auth-files/page.tsx @@ -0,0 +1,669 @@ +"use client"; + +import { useCallback, useEffect, useRef, useState } from "react"; +import { api, type AuthFile } from "@/lib/api"; +import { toast } from "sonner"; +import { + Upload, + MoreHorizontal, + Pencil, + Trash2, + Power, + FileKey2, + Plus, + X, +} from "lucide-react"; + +import { Button } from "@/components/ui/button"; +import { Badge } from "@/components/ui/badge"; +import { Switch } from "@/components/ui/switch"; +import { Input } from "@/components/ui/input"; +import { Textarea } from "@/components/ui/textarea"; +import { Label } from "@/components/ui/label"; +import { Skeleton } from "@/components/ui/skeleton"; +import { + Table, + TableBody, + TableCell, + TableHead, + TableHeader, + TableRow, +} from "@/components/ui/table"; +import { + Dialog, + DialogContent, + DialogDescription, + DialogFooter, + DialogHeader, + DialogTitle, +} from "@/components/ui/dialog"; +import { + AlertDialog, + AlertDialogAction, + AlertDialogCancel, + AlertDialogContent, + AlertDialogDescription, + AlertDialogFooter, + AlertDialogHeader, + AlertDialogTitle, +} from "@/components/ui/alert-dialog"; +import { + DropdownMenu, + DropdownMenuContent, + DropdownMenuItem, + DropdownMenuSeparator, + DropdownMenuTrigger, +} from "@/components/ui/dropdown-menu"; + +function statusBadge(status: string) { + const lower = status.toLowerCase(); + if (lower === "active") { + return ( + + active + + ); + } + if (lower === "error") { + return ( + + error + + ); + } + if (lower === "disabled") { + return ( + + disabled + + ); + } + return ( + + {status} + + ); +} + +function providerBadge(provider: string) { + return {provider}; +} + +interface HeaderEntry { + key: string; + value: string; +} + +interface EditFormData { + prefix: string; + 
proxy_url: string;
+  headers: HeaderEntry[];
+  priority: string;
+  note: string;
+}
+
+function fieldsToFormData(fields: Record<string, string>): EditFormData {
+  const headers: HeaderEntry[] = [];
+  if (fields.headers) {
+    try {
+      const parsed = JSON.parse(fields.headers) as Record<string, string>;
+      for (const [k, v] of Object.entries(parsed)) {
+        headers.push({ key: k, value: v });
+      }
+    } catch {
+      headers.push({ key: "", value: "" });
+    }
+  }
+  return {
+    prefix: fields.prefix ?? "",
+    proxy_url: fields.proxy_url ?? fields["proxy-url"] ?? "",
+    headers: headers.length > 0 ? headers : [{ key: "", value: "" }],
+    priority: fields.priority ?? "",
+    note: fields.note ?? "",
+  };
+}
+
+function formDataToFields(data: EditFormData): Record<string, string> {
+  const fields: Record<string, string> = {};
+  if (data.prefix) fields.prefix = data.prefix;
+  if (data.proxy_url) fields.proxy_url = data.proxy_url;
+  const cleanHeaders = data.headers.filter((h) => h.key.trim() !== "");
+  if (cleanHeaders.length > 0) {
+    const headerObj: Record<string, string> = {};
+    for (const h of cleanHeaders) {
+      headerObj[h.key] = h.value;
+    }
+    fields.headers = JSON.stringify(headerObj);
+  }
+  if (data.priority) fields.priority = data.priority;
+  if (data.note) fields.note = data.note;
+  return fields;
+}
+
+export default function AuthFilesPage() {
+  const [files, setFiles] = useState<AuthFile[]>([]);
+  const [loading, setLoading] = useState(true);
+  const fetchIdRef = useRef(0);
+
+  const fetchFiles = useCallback(async () => {
+    const fetchId = ++fetchIdRef.current;
+    try {
+      const data = await api.authFiles.listAuthFiles();
+      if (fetchId === fetchIdRef.current) {
+        setFiles(data);
+      }
+    } catch (err) {
+      if (fetchId === fetchIdRef.current) {
+        toast.error("Failed to load auth files", {
+          description: err instanceof Error ? err.message : undefined,
+        });
+      }
+    } finally {
+      if (fetchId === fetchIdRef.current) {
+        setLoading(false);
+      }
+    }
+  }, []);
+
+  useEffect(() => {
+    fetchFiles();
+  }, [fetchFiles]);
+
+  const [uploadOpen, setUploadOpen] = useState(false);
+  const [uploading, setUploading] = useState(false);
+  const fileInputRef = useRef<HTMLInputElement>(null);
+
+  const handleUpload = async () => {
+    const fileList = fileInputRef.current?.files;
+    if (!fileList || fileList.length === 0) return;
+
+    setUploading(true);
+    try {
+      for (let i = 0; i < fileList.length; i++) {
+        const formData = new FormData();
+        formData.append("file", fileList[i]);
+        await api.authFiles.uploadAuthFile(formData);
+      }
+      toast.success("Auth file(s) uploaded successfully");
+      setUploadOpen(false);
+      if (fileInputRef.current) fileInputRef.current.value = "";
+      await fetchFiles();
+    } catch (err) {
+      toast.error("Upload failed", {
+        description: err instanceof Error ? err.message : undefined,
+      });
+    } finally {
+      setUploading(false);
+    }
+  };
+
+  const [deleteTarget, setDeleteTarget] = useState<AuthFile | null>(null);
+  const [deleteAllOpen, setDeleteAllOpen] = useState(false);
+  const [deleting, setDeleting] = useState(false);
+
+  const handleDelete = async () => {
+    if (!deleteTarget) return;
+    setDeleting(true);
+    try {
+      await api.authFiles.deleteAuthFile({
+        name: deleteTarget.name,
+        provider: deleteTarget.provider,
+      });
+      toast.success("Auth file deleted");
+      setDeleteTarget(null);
+      await fetchFiles();
+    } catch (err) {
+      toast.error("Delete failed", {
+        description: err instanceof Error ? err.message : undefined,
+      });
+    } finally {
+      setDeleting(false);
+    }
+  };
+
+  const handleDeleteAll = async () => {
+    setDeleting(true);
+    try {
+      const results = await Promise.allSettled(
+        files.map((f) =>
+          api.authFiles.deleteAuthFile({ name: f.name, provider: f.provider })
+        )
+      );
+      const failed = results.filter((r) => r.status === "rejected").length;
+      if (failed > 0) {
+        toast.error(`Deleted ${results.length - failed} of ${results.length} files, ${failed} failed`);
+      } else {
+        toast.success("All auth files deleted");
+      }
+      setDeleteAllOpen(false);
+      await fetchFiles();
+    } catch (err) {
+      toast.error("Delete all failed", {
+        description: err instanceof Error ? err.message : undefined,
+      });
+    } finally {
+      setDeleting(false);
+    }
+  };
+
+  const [editTarget, setEditTarget] = useState<AuthFile | null>(null);
+  const [editForm, setEditForm] = useState<EditFormData>({
+    prefix: "",
+    proxy_url: "",
+    headers: [{ key: "", value: "" }],
+    priority: "",
+    note: "",
+  });
+  const [saving, setSaving] = useState(false);
+
+  const openEdit = (file: AuthFile) => {
+    setEditTarget(file);
+    setEditForm(fieldsToFormData(file.fields ?? {}));
+  };
+
+  const handleSaveEdit = async () => {
+    if (!editTarget) return;
+    setSaving(true);
+    try {
+      await api.authFiles.patchAuthFileFields({
+        name: editTarget.name,
+        provider: editTarget.provider,
+        fields: formDataToFields(editForm),
+      });
+      toast.success("Auth file updated");
+      setEditTarget(null);
+      await fetchFiles();
+    } catch (err) {
+      toast.error("Update failed", {
+        description: err instanceof Error ? err.message : undefined,
+      });
+    } finally {
+      setSaving(false);
+    }
+  };
+
+  const handleToggle = async (file: AuthFile) => {
+    const newDisabled = !file.disabled;
+    const originalFiles = [...files];
+    setFiles((prev) =>
+      prev.map((f) =>
+        f.name === file.name && f.provider === file.provider
+          ? { ...f, disabled: newDisabled, status: newDisabled ? "disabled" : "active" }
+          : f
+      )
+    );
+    try {
+      await api.authFiles.patchAuthFileStatus({
+        name: file.name,
+        provider: file.provider,
+        disabled: newDisabled,
+      });
+      toast.success(newDisabled ? "Auth file disabled" : "Auth file enabled");
+    } catch (err) {
+      setFiles(originalFiles);
+      toast.error("Toggle failed", {
+        description: err instanceof Error ? err.message : undefined,
+      });
+    }
+  };
+
+  const addHeaderEntry = () => {
+    setEditForm((prev) => ({
+      ...prev,
+      headers: [...prev.headers, { key: "", value: "" }],
+    }));
+  };
+
+  const removeHeaderEntry = (index: number) => {
+    setEditForm((prev) => ({
+      ...prev,
+      headers: prev.headers.filter((_, i) => i !== index),
+    }));
+  };
+
+  const updateHeaderEntry = (index: number, field: "key" | "value", val: string) => {
+    setEditForm((prev) => ({
+      ...prev,
+      headers: prev.headers.map((h, i) => (i === index ? { ...h, [field]: val } : h)),
+    }));
+  };
+
+  return (
+
+
+ +

Auth Files

+
+
+ {files.length > 0 && ( + + )} + +
+
+ + {loading ? ( +
+ + + + Provider + Name + Status + Email / Account + Priority + Note + Enabled + + + + + {Array.from({ length: 5 }).map((_, i) => ( + + + + + + + + + + + ))} + +
+
+ ) : files.length === 0 ? ( +
+ +

+ No auth files found. Upload an auth file to get started. +

+
+ ) : ( +
+ + + + Provider + Name + Status + Email / Account + Priority + Note + Enabled + + + + + {files.map((file) => ( + + {providerBadge(file.provider)} + {file.name} + {statusBadge(file.status)} + + {file.fields?.email || file.fields?.account || file.fields?.["email-or-account"] || "—"} + + + {file.fields?.priority ?? "—"} + + + {file.fields?.note || "—"} + + + handleToggle(file)} + aria-label={`Toggle ${file.name}`} + /> + + + + + + + + openEdit(file)}> + + Edit + + handleToggle(file)}> + + {file.disabled ? "Enable" : "Disable"} + + + setDeleteTarget(file)} + > + + Delete + + + + + + ))} + +
+
+ )} + + + + + Upload Auth File + + Select one or more .json auth files to upload. + + +
+ +
+ + + + +
+
+ + { + if (!open) setDeleteTarget(null); + }} + > + + + Delete Auth File + + Are you sure you want to delete{" "} + + {deleteTarget?.name} + {" "} + ({deleteTarget?.provider})? This action cannot be undone. + + + + Cancel + + {deleting ? "Deleting..." : "Delete"} + + + + + + + + + Delete All Auth Files + + Are you sure you want to delete all {files.length} auth file(s)? + This action cannot be undone. + + + + Cancel + + {deleting ? "Deleting..." : "Delete All"} + + + + + + { + if (!open) setEditTarget(null); + }}> + + + Edit Auth File + + Modify fields for{" "} + + {editTarget?.name} + {" "} + ({editTarget?.provider}) + + +
+
+ + + setEditForm((prev) => ({ ...prev, prefix: e.target.value })) + } + placeholder="Model prefix" + /> +
+
+ + + setEditForm((prev) => ({ ...prev, proxy_url: e.target.value })) + } + placeholder="https://proxy.example.com" + /> +
+
+ +
+ {editForm.headers.map((entry, i) => ( +
+ updateHeaderEntry(i, "key", e.target.value)} + placeholder="Header name" + className="flex-1" + /> + updateHeaderEntry(i, "value", e.target.value)} + placeholder="Header value" + className="flex-1" + /> + +
+ ))} + +
+
+
+ + + setEditForm((prev) => ({ ...prev, priority: e.target.value })) + } + placeholder="0" + /> +
+
+ +
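
Editor's note: the new `AuthFilesPage` guards its `fetchFiles` callback against out-of-order responses with an incrementing ref (`fetchIdRef`), so that only the most recent request may commit state. A minimal framework-free sketch of that pattern follows; `makeGuardedFetcher`, `AuthFileStub`, and `commit` are illustrative names, not identifiers from the diff.

```typescript
// Sketch of the stale-response guard used in AuthFilesPage: each invocation
// bumps a shared counter, and a response is committed only if no newer
// invocation has started in the meantime.
type AuthFileStub = { name: string; provider: string };

function makeGuardedFetcher(
  fetchImpl: () => Promise<AuthFileStub[]>,
  commit: (files: AuthFileStub[]) => void
) {
  let fetchId = 0; // plays the role of fetchIdRef.current
  return async (): Promise<void> => {
    const id = ++fetchId;
    const data = await fetchImpl();
    // If a newer call started while we were awaiting, this response is
    // stale and must be dropped, otherwise it would overwrite fresh state.
    if (id === fetchId) commit(data);
  };
}
```

Without the guard, a slow first request resolving after a fast second one would clobber the newer list; the page applies the same check in its `catch` and `finally` branches so stale errors and loading flags are discarded too.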