fix(openai): handle tool calls with empty/null arguments #4060
Merged
markbackman merged 1 commit into pipecat-ai:main on Mar 27, 2026
Conversation
When an LLM returns a tool call with no arguments (`arguments=null` in
the streaming chunks), the tool call is silently dropped because:
1. `tool_call.function.arguments` is None, so nothing is accumulated
and `arguments` stays as "" (empty string)
2. `if function_name and arguments:` treats "" as falsy, skipping the
entire tool call execution
OpenAI always sends arguments="{}" even for parameterless tools,
masking this bug. But vLLM, Ollama, and other OpenAI-compatible
providers may omit arguments entirely when the tool schema has no
required parameters, causing tool calls to be silently ignored.
Fix: check only `function_name` (not `arguments`) and default empty
arguments to "{}" so `json.loads` produces an empty dict. Apply the
same fallback for intermediate tool calls in multi-tool responses.
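A minimal sketch of the fixed control flow (illustrative only — the helper name `resolve_tool_call` is hypothetical, not Pipecat's actual code):

```python
import json

def resolve_tool_call(function_name, arguments):
    # Hypothetical helper mirroring the fix: gate only on the function
    # name, and default empty/None arguments to "{}".
    if not function_name:
        return None
    # Before the fix, `if function_name and arguments:` treated "" as
    # falsy and silently skipped the call.
    return function_name, json.loads(arguments or "{}")
```

With the fallback, a parameterless call like `resolve_tool_call("end_call", "")` yields `("end_call", {})` instead of being dropped.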
markbackman (Contributor) approved these changes on Mar 27, 2026 and left a comment:
Thanks for the well-written issue and clean fix.
Fixes #4059
Problem
When an LLM returns a streaming tool call with no arguments (`arguments=null` in the delta chunks), the tool call is silently dropped. This affects all OpenAI-compatible providers that don't send `arguments: "{}"` for parameterless tools, including vLLM (with Qwen, Llama, Mistral), Ollama, and others.
Root cause in `BaseOpenAILLMService._process_context`:
1. `tool_call.function.arguments` is `None` → nothing is accumulated → `arguments` stays `""`
2. `if function_name and arguments:` → `""` is falsy → tool call execution is skipped entirely
OpenAI masks this because it always sends `arguments: "{}"` even for tools with zero parameters.
Fix
Three lines changed:
1. `arguments_list.append(arguments)` → `arguments_list.append(arguments or "{}")` (intermediate tool calls in multi-tool responses)
2. `if function_name and arguments:` → `if function_name:` (final tool call check)
3. `arguments_list.append(arguments)` → `arguments_list.append(arguments or "{}")` (final tool call append)
Empty arguments default to `"{}"` so `json.loads` produces `{}`. No behavior change for OpenAI or any provider that already sends `"{}"`.
Testing
Verified with vLLM serving `Qwen3-VL-235B` with `--enable-auto-tool-choice --tool-call-parser hermes`. Before this fix, `end_call` tool calls were silently dropped. After, they execute correctly with empty arguments.
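The `or "{}"` fallback in the fix can be sketched in isolation (hypothetical data below, not the actual `_process_context` code):

```python
import json

# Hypothetical per-tool argument strings accumulated from a multi-tool
# streaming response; the second tool's arguments were never sent, so
# its accumulator stayed "".
raw_arguments = ['{"city": "Paris"}', ""]

# The fix appends `arguments or "{}"` so every entry is valid JSON;
# json.loads("") would raise json.JSONDecodeError.
arguments_list = [args or "{}" for args in raw_arguments]
parsed = [json.loads(args) for args in arguments_list]
print(parsed)  # [{'city': 'Paris'}, {}]
```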