Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: df29e81252
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review"
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```diff
 {
   "code": "upstream_error",
-  "message": "The language model dependency failed.",
+  "message": f"LLM error: {exc}",
```
Keep upstream LLM details out of client-visible errors
Returning `f"LLM error: {exc}"` to the SSE client leaks raw provider error text. `LlmClientError` is raised with full OpenRouter HTTP bodies in `OpenAiLlmClient` (including whatever the upstream sends back), so end users can now receive internal dependency details that were previously hidden behind a generic message. This is a regression in error-surface safety; the upstream detail should be confined to logs while the public `error.message` stays generic.
Useful? React with 👍 / 👎.
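A minimal sketch of the fix the review suggests: log the full upstream exception for operators, but return only the generic message to the client. The `build_sse_error` helper name is hypothetical; `LlmClientError` is from the PR, though its definition here is an assumed stand-in.

```python
import logging

logger = logging.getLogger(__name__)


class LlmClientError(Exception):
    """Stand-in for the PR's exception carrying raw upstream HTTP bodies."""


def build_sse_error(exc: Exception) -> dict:
    # Keep the raw provider detail (e.g. OpenRouter response bodies)
    # in server-side logs only.
    logger.error("LLM dependency failed: %s", exc)
    # The client-visible payload stays generic, as before the PR.
    return {
        "code": "upstream_error",
        "message": "The language model dependency failed.",
    }
```

This restores the previous error-surface behavior: the SSE payload never embeds `str(exc)`, so upstream details cannot reach end users.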
remove ask ai from junction