
fix: normalize provider tool choice handling and harden tool call dec… #598

Merged

mikehostetler merged 1 commit into agentjido:main from zblanco:zw/provider-toolchoice-contract
Apr 12, 2026

Conversation

@zblanco
Contributor

zblanco commented Apr 12, 2026

This PR tightens ReqLLM's tool-calling contract handling across OpenAI-compatible providers and hardens response decoding for malformed tool arguments.

The main changes are:

  • Normalize tool_choice handling for OpenAI-compatible providers so ReqLLM inputs like :auto, :none, :required, and %{type: "tool", name: ...} are encoded consistently for provider APIs.
  • Fix providers that were previously too narrow or inconsistent in this area:
    • Cerebras
    • Groq
    • Zenmux
    • Google Vertex OpenAI-compatible
    • Amazon Bedrock OpenAI
  • Preserve OpenAI-style function-specific tool_choice objects for Cerebras instead of collapsing them back to "auto".
  • Pass through parallel_tool_calls for Cerebras.
  • Harden default OpenAI-format response decoding so tool calls with invalid or non-object JSON arguments do not crash or produce bad decoded shapes.
  • Preserve malformed raw JSON where needed so ReqLLM's existing structured-output JSON repair path still works.
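The tool_choice normalization described above can be sketched roughly as follows. This is an illustrative Elixir sketch, not the actual ReqLLM implementation; the function name and clause layout are assumptions:

```elixir
# Hypothetical sketch of the normalization: ReqLLM-level tool_choice
# values are mapped to the OpenAI wire format before the request body
# is built for an OpenAI-compatible provider.
defp encode_tool_choice(choice) do
  case choice do
    :auto -> "auto"
    :none -> "none"
    :required -> "required"

    # Forcing a single tool uses OpenAI's function-object shape
    %{type: "tool", name: name} ->
      %{"type" => "function", "function" => %{"name" => name}}

    # Already-OpenAI-shaped maps pass through unchanged rather than
    # collapsing back to "auto" (the Cerebras fix described above)
    %{type: "function"} = openai_shaped ->
      openai_shaped
  end
end
```

With a shared clause set like this, every OpenAI-compatible adapter that builds its body through the defaults encodes the same inputs the same way, which is the consistency the PR is after.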

This came out of a broader provider-contract pass rather than only fixing the original crash case. In particular, the PR addresses:

  • atom tool_choice values flowing into OpenAI-compatible providers
  • forced tool-choice encoding for providers expecting OpenAI's function object shape
  • inconsistent wrapper behavior in OpenAI-compatible adapters that build request bodies through shared defaults
  • response-side handling of malformed/scalar tool arguments
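The response-side hardening can be pictured with a sketch like the one below. Again this is illustrative, not the PR's actual code; it assumes the common Jason JSON library and a hypothetical helper name:

```elixir
# Hypothetical sketch of hardened tool-argument decoding: invalid or
# non-object JSON no longer crashes the decoder, and the raw string is
# preserved so the downstream structured-output JSON repair path can
# still attempt a fix.
defp decode_tool_arguments(raw) when is_binary(raw) do
  case Jason.decode(raw) do
    # Well-formed JSON object: use it as the decoded arguments map
    {:ok, args} when is_map(args) -> {:ok, args}

    # Valid JSON but a scalar or array, which is not a usable
    # arguments object: keep the raw payload
    {:ok, _non_object} -> {:malformed, raw}

    # Broken JSON: preserve the raw payload instead of raising
    {:error, _} -> {:malformed, raw}
  end
end
```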

I also added regression coverage for these provider and decoding paths, including live manual verification against real Groq and Cerebras APIs in local IEx.

Type of Change

  • Bug fix (non-breaking change fixing an issue)

Breaking Changes

N/A.

Testing

  • Tests pass (mix test)
  • Quality checks pass (mix quality)

Additional validation performed:

  • Added/updated provider regression tests for Groq, Cerebras, Zenmux, Google Vertex OpenAI-compatible, Bedrock OpenAI, and default OpenAI-style decoding
  • Manually tested tool-calling with:
    • groq:llama-3.1-8b-instant
    • cerebras:gpt-oss-120b

Observed live behavior:

  • Groq correctly handled tool_choice: :auto
  • Groq correctly handled forced %{type: "tool", name: ...}
  • Cerebras correctly handled tool_choice: :auto
  • Cerebras correctly handled forced %{type: "function", function: %{name: ...}}
  • Both providers returned decoded ReqLLM tool calls with expected arguments and finish_reason == :tool_calls
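A manual check along the lines described above might look like this in IEx. The exact ReqLLM call signature and option names here are assumptions for illustration, not quoted from the library:

```elixir
# Hedged sketch of the live IEx verification; `weather_tool` and the
# generate_text/3 options shown are assumed, not confirmed API.
{:ok, response} =
  ReqLLM.generate_text(
    "groq:llama-3.1-8b-instant",
    "What's the weather in Tokyo?",
    tools: [weather_tool],
    tool_choice: %{type: "tool", name: "get_weather"}
  )

# Expected per the observations above: the forced tool is invoked and
# the response carries decoded tool calls with finish_reason == :tool_calls
```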

Checklist

  • My code follows the project's style guidelines
  • I have updated the documentation accordingly
  • I have added tests that prove my fix/feature works
  • All new and existing tests pass
  • My commits follow conventional commit format
  • I have NOT edited CHANGELOG.md (it is auto-generated by git_ops)

Related Issues

Closes #597

@zblanco zblanco marked this pull request as ready for review April 12, 2026 20:02
@mikehostetler mikehostetler merged commit 9873287 into agentjido:main Apr 12, 2026
7 checks passed


Development

Successfully merging this pull request may close these issues.

OpenAI-compatible providers handle tool_choice inconsistently and decode malformed tool arguments too narrowly
