fix(reasoning): respect user config and Effort::None for models without effort support #2941
Open
tivris wants to merge 2 commits into tailcallhq:main
Conversation
fix(reasoning): respect user config and Effort::None for models without effort support
Two bugs caused reasoning parameters to leak to models that don't
support them (e.g. Claude Haiku 4.5), even when the user explicitly
disabled reasoning in .forge.toml:
1. The reasoning config merge used agent-as-base with overwrite_none,
so the forge agent's hardcoded `enabled: true` silently overrode
the user's `enabled: false`. Reversed the merge direction to match
the compact config pattern (user config as base, agent fills gaps).
2. Effort::None ("skip reasoning entirely") was mapped to
OutputEffort::Low in the Anthropic DTO, actively enabling reasoning.
Now returns (None, None) to skip reasoning, matching the OpenAI path.
Closes tailcallhq#2940, addresses tailcallhq#2924.
Summary
Fixes two interacting bugs that caused reasoning parameters to be sent to models that don't support them (e.g. Claude Haiku 4.5), even when the user explicitly disabled reasoning in `.forge.toml`. Fixes #2924.
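For context, the user-side override in question would look roughly like this in `.forge.toml`. The table and key names are assumptions for illustration, not verified against forge's actual config schema:

```toml
# Hypothetical .forge.toml fragment -- key names are illustrative only.
[reasoning]
enabled = false   # should win over the agent's built-in `enabled: true`
```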
Bug 1: Config merge direction (`agent.rs`)

The reasoning config merge used `overwrite_none` with the agent as base, so the forge agent's hardcoded `enabled: true` silently overrode the user's `enabled: false`. Reversed the merge direction to match the compact config pattern already used in the same file: user config as base, agent defaults fill gaps.

Bug 2: `Effort::None` semantics (`request.rs`)

`Effort::None` is documented as "No reasoning; skips the thinking step entirely", but the Anthropic DTO mapped it to `OutputEffort::Low`, actively enabling reasoning. It now returns `(None, None)` to skip reasoning entirely, matching the OpenAI path, which already handled this case correctly.

Test plan
- User `enabled=false` overrides agent `enabled=true` (the #2924 "Can't use model without effort support" scenario)
- `Effort::None` with `enabled=true` skips reasoning on Anthropic
- `Effort::None` with `enabled=false` skips reasoning on Anthropic
- `Effort::Minimal` still maps to `OutputEffort::Low`
- `RUSTFLAGS='-Dwarnings' cargo check -p forge_app` passes
- `cargo test -p forge_app` (576 tests) passes
- `cargo test -p forge_domain` passes

Note
The Google provider (`crates/forge_app/src/dto/google/request.rs`) has the same class of bug: `Effort::None` + `enabled=true` still emits a `ThinkingConfig`, and `effort` is never mapped to `thinking_level`. That is a pre-existing issue outside the scope of this PR.