
improvement: rewrite ADK adapter to use Portkey Responses API#421

Open
viraj-s15 wants to merge 3 commits into Portkey-AI:main from viraj-s15:feature/adk-responses-endpoint

Conversation

@viraj-s15

@viraj-s15 viraj-s15 commented Mar 7, 2026

Rewrite ADK adapter to use Portkey Responses API

Description

  • Core Migration: Migrated portkey_ai/integrations/adk.py from Chat Completions to the Portkey Responses API (responses.create), replacing the entire request/response translation layer.

  • Reasoning/Thinking Support: Added support for mapping ADK thinking_budget to Responses reasoning.effort (low/medium/high) with automatic summary and encrypted content passthrough.

  • Streaming Refactor: Rewrote streaming from accumulation-based chunk processing (~120 lines) to clean event-driven handling (~30 lines) using the Responses stream events response.reasoning_text.delta, response.output_text.delta, and response.completed.

  • Testing & Documentation: Updated unit tests and added a streaming+thinking example (examples/adk_streaming_thinking_usage.py).
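The thinking_budget to reasoning.effort mapping described above can be sketched as follows. This is an illustrative reduction, not the adapter's actual code; the function name and the token thresholds for low/medium/high are assumptions, and the real cutoffs live in portkey_ai/integrations/adk.py.

```python
# Hypothetical sketch of mapping an ADK thinking_budget (a token count) to a
# Responses API reasoning.effort level. Thresholds are illustrative assumptions.
def map_thinking_budget_to_effort(thinking_budget):
    if thinking_budget is None or thinking_budget <= 0:
        return None  # reasoning disabled; omit the reasoning config entirely
    if thinking_budget < 1024:
        return "low"
    if thinking_budget < 8192:
        return "medium"
    return "high"
```

When an effort level is produced, the adapter also requests reasoning summaries and encrypted content passthrough, per the description above.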

Motivation

This discussion: https://discord.com/channels/1143393887742861333/1462080485583749318/1462080485583749318

TL;DR: The Portkey Responses API has native reasoning support, structured streaming events, and a unified interface across multiple providers.

Known Limitations

Gemini reasoning not surfaced

Gemini reasoning is not currently surfaced via the Portkey Responses endpoint. When using @vertex-ai/gemini-3-flash-preview with reasoning enabled, the model consumes reasoning tokens internally (usage.output_tokens_details.reasoning_tokens > 0), but Portkey does not return reasoning output items for Gemini. The same behavior occurs in LiteLLM (BerriAI/litellm#10227), and there is a related discussion on the Google AI forums (discuss.ai.google.dev). The adapter will surface Gemini reasoning automatically once Portkey starts including these items in the response.

Anthropic streaming + reasoning thoughts not surfaced

When using @anthropic/claude-sonnet-4-6 (or other Claude models) with reasoning enabled in streaming mode, response.reasoning_text.delta events are not emitted by Portkey's Responses API. Non-streaming reasoning works correctly (both thought and text items are returned). This may be a limitation in Portkey's Anthropic adapter: it translates Responses API requests to Anthropic Chat Completions internally, and the thinking deltas are not re-mapped to Responses SSE events during streaming. The adapter code already handles response.reasoning_text.delta events, so Claude streaming thoughts will surface automatically once Portkey's adapter starts emitting them.
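The event-driven streaming handling referenced throughout this PR can be sketched as below. This is an illustrative reduction, not the adapter's actual code: plain dicts stand in for the SDK's typed stream event objects, and the function name is hypothetical.

```python
# Sketch of an event-driven streaming loop over Responses API stream events,
# dispatching on the three event types named in the PR description.
def handle_stream(events):
    thoughts, text, completed = [], [], False
    for event in events:
        etype = event.get("type")
        if etype == "response.reasoning_text.delta":
            thoughts.append(event["delta"])   # partial thinking/reasoning text
        elif etype == "response.output_text.delta":
            text.append(event["delta"])       # partial answer text
        elif etype == "response.completed":
            completed = True                  # final response is available
    return "".join(thoughts), "".join(text), completed
```

In the real adapter, each delta would be emitted as an ADK LlmResponse partial rather than accumulated, but the dispatch structure is the same.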

@viraj-s15 viraj-s15 changed the title integration: rewrite ADK adapter to use Portkey Responses API improvement: rewrite ADK adapter to use Portkey Responses API Mar 7, 2026
@narengogi
Member

Thank you @viraj-s15, we'll get this tested and reviewed.


Copilot AI left a comment


Pull request overview

Migrates the Google ADK adapter to use Portkey’s Responses API (responses.create) instead of Chat Completions, updating request/response translation and streaming handling to align with Responses output items and stream events.

Changes:

  • Rewrites PortkeyAdk request shaping to build Responses input items (messages, tool calls/outputs, and optional reasoning config).
  • Refactors streaming to an event-driven loop based on Responses stream event types and emits ADK LlmResponse partials/final.
  • Updates/expands unit tests and adds an example demonstrating streaming + thinking/reasoning across providers.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 6 comments.

  • portkey_ai/integrations/adk.py: Core migration to Responses API, adds reasoning mapping + new streaming/event translation.
  • tests/integrations/test_adk_adapter.py: Updates tests to mock Responses API output items/events; adds coverage for reasoning + function calls.
  • examples/hello_world_portkey_adk.py: Adjusts example response parsing to the new output/parts behavior.
  • examples/adk_streaming_thinking_usage.py: New example demonstrating streaming + thinking usage across multiple providers/models.


Comment on lines 24 to 29
import logging

from portkey_ai import AsyncPortkey

logger = logging.getLogger("portkey_ai.integrations.adk")


Copilot AI Mar 12, 2026


logging is imported and logger is defined but never used. Since CI runs ruff check ., this will fail with an unused-import/unused-variable lint error; please remove them or use the logger in this module.

Suggested change
-import logging
-
 from portkey_ai import AsyncPortkey
-
-logger = logging.getLogger("portkey_ai.integrations.adk")
Author


Good catch, I added them for debugging and then forgot to remove them.

 try:
     return json.dumps(obj, ensure_ascii=False)
-except (TypeError, OverflowError):
+except (TypeError, ValueError):

Copilot AI Mar 12, 2026


json.dumps(...) can raise OverflowError (e.g., for very large integers) in addition to TypeError/ValueError. The previous version caught OverflowError; please add it back so _safe_json_serialize doesn't unexpectedly propagate on large payloads.

Suggested change
-except (TypeError, ValueError):
+except (TypeError, ValueError, OverflowError):

Author


Accidental regression, adding this back

Comment on lines +169 to +176
tool_outputs.append(
    {
        "type": "function_call_output",
        "call_id": getattr(function_response, "id", None),
        "output": _safe_json_serialize(
            getattr(function_response, "response", None)
        ),
    }

Copilot AI Mar 12, 2026


function_call_output items require a non-empty call_id (per Responses API schema), but this code may send call_id=None when function_response.id is missing. Consider validating/deriving a fallback call_id (or raising a clear error) to avoid hard-to-debug 400s from the API.

Author

@viraj-s15 viraj-s15 Mar 12, 2026


Added an explicit ValueError if FunctionResponse.id is missing, so the caller gets a clear error message instead of a 400 from the API.
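The validation described in this reply can be sketched as follows. The helper name is hypothetical (the real code builds the item inline) and the error message is illustrative; json.dumps stands in for the module's _safe_json_serialize helper.

```python
import json

# Sketch: build a function_call_output item, raising a clear ValueError when
# FunctionResponse.id is missing instead of letting the API return a 400.
def build_function_call_output(function_response):
    call_id = getattr(function_response, "id", None)
    if not call_id:
        raise ValueError(
            "FunctionResponse.id is required to build a function_call_output item"
        )
    return {
        "type": "function_call_output",
        "call_id": call_id,
        "output": json.dumps(getattr(function_response, "response", None)),
    }
```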

Comment on lines +188 to +198
items.append(
    {
        "type": "function_call",
        "id": getattr(function_call, "id", None),
        "call_id": getattr(function_call, "id", None)
        or getattr(function_call, "name", "function_call"),
        "name": getattr(function_call, "name", None),
        "arguments": _safe_json_serialize(
            getattr(function_call, "args", None)
        ),
    }

Copilot AI Mar 12, 2026


Responses API function_call items require name to be a string, but this maps name directly from function_call.name and may send None (invalid). Please ensure name is always populated (e.g., default to a placeholder or raise) and keep call_id/id consistent with that choice.

Author


Same approach: added explicit ValueErrors for both FunctionCall.name and FunctionCall.id.
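The function_call counterpart of that validation can be sketched as below. Again the helper name and messages are illustrative, and json.dumps stands in for _safe_json_serialize.

```python
import json

# Sketch: build a function_call item, failing fast with ValueError when the
# required name or id is missing (both are mandatory per the Responses schema).
def build_function_call_item(function_call):
    name = getattr(function_call, "name", None)
    call_id = getattr(function_call, "id", None)
    if not name:
        raise ValueError("FunctionCall.name is required to build a function_call item")
    if not call_id:
        raise ValueError("FunctionCall.id is required to build a function_call item")
    return {
        "type": "function_call",
        "id": call_id,
        "call_id": call_id,
        "name": name,
        "arguments": json.dumps(getattr(function_call, "args", None) or {}),
    }
```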

Comment on lines +454 to +460
sys_role = str(kwargs.pop("system_role", "developer")).lower()
self._system_role = (

base_kwargs = {k: v for k, v in kwargs.items() if k != "model"}
try:
super().__init__(model=model, **base_kwargs) # type: ignore[misc,call-arg] # BaseLlm may be the stub; real BaseLlm signature varies by adk version
except TypeError:
super().__init__() # type: ignore[misc,call-arg] # fallback for older BaseLlm that takes no args

Copilot AI Mar 12, 2026


base_kwargs is computed before popping Portkey client kwargs (e.g., virtual_key, base_url, strict_open_ai_compliance). If ADK's BaseLlm.__init__ doesn't accept those, it will raise TypeError and trigger the fallback super().__init__() path even when it would have succeeded otherwise. Consider extracting/popping client-related kwargs first, then passing only ADK/BaseLlm-relevant kwargs into super().__init__.

Author

@viraj-s15 viraj-s15 Mar 12, 2026


Moved the client kwarg extraction (virtual_key, base_url, etc.) before base_kwargs is created, so they're no longer accidentally passed to BaseLlm.
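The fix described here can be sketched as a small split helper. The set of client kwarg names comes from the review comment above; the helper name and the exact tuple are assumptions about the real implementation.

```python
# Sketch: pop Portkey-client kwargs out of **kwargs first, so only
# ADK/BaseLlm-relevant kwargs reach super().__init__.
CLIENT_KWARGS = ("virtual_key", "base_url", "strict_open_ai_compliance")

def split_kwargs(kwargs):
    client_kwargs = {k: kwargs.pop(k) for k in list(kwargs) if k in CLIENT_KWARGS}
    base_kwargs = {k: v for k, v in kwargs.items() if k != "model"}
    return client_kwargs, base_kwargs
```

This ordering means a TypeError from BaseLlm.__init__ now only occurs for genuinely unsupported kwargs, so the no-arg fallback path fires less often.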

Comment on lines +511 to +514
if reasoning:
    response_args["reasoning"] = reasoning
    response_args.setdefault("include", ["reasoning.encrypted_content"])
response_args.update(self._additional_args)

Copilot AI Mar 12, 2026


include is not treated as a reserved key, so a user-provided include passed via PortkeyAdk(..., include=...) can overwrite the adapter's include=['reasoning.encrypted_content'] after setdefault(). That would break thought_signature/encrypted_content passthrough when reasoning is enabled. Consider reserving include (and/or merging lists) so the adapter-required include entries can't be accidentally dropped.

Suggested change
-if reasoning:
-    response_args["reasoning"] = reasoning
-    response_args.setdefault("include", ["reasoning.encrypted_content"])
-response_args.update(self._additional_args)
+# Prepare reasoning config and required include entries.
+if reasoning:
+    response_args["reasoning"] = reasoning
+    response_args.setdefault("include", ["reasoning.encrypted_content"])
+# Merge in additional args, taking care not to drop required include entries
+# when reasoning is enabled.
+additional_args: dict[str, Any] = dict(getattr(self, "_additional_args", {}) or {})
+user_include = None
+if reasoning and "include" in additional_args:
+    # Pop user-specified include so it doesn't overwrite adapter-required entries.
+    user_include = additional_args.pop("include")
+response_args.update(additional_args)
+if reasoning:
+    # Merge any user-provided include with the adapter's include, ensuring
+    # "reasoning.encrypted_content" is always present.
+    base_include = response_args.get("include") or []
+    if not isinstance(base_include, list):
+        base_include = [base_include]
+    merged_include = list(base_include)
+    if user_include is not None:
+        if isinstance(user_include, str):
+            user_include_iter = [user_include]
+        elif isinstance(user_include, Iterable):
+            user_include_iter = list(user_include)
+        else:
+            user_include_iter = [user_include]
+        for item in user_include_iter:
+            if item not in merged_include:
+                merged_include.append(item)
+    if "reasoning.encrypted_content" not in merged_include:
+        merged_include.append("reasoning.encrypted_content")
+    response_args["include"] = merged_include
Author


Fixed, but went with a simpler approach than what Copilot suggested. Instead of popping include from _additional_args before the update and then iterating/deduping both lists, we just do the update() as-is and then verify reasoning.encrypted_content is still present in the final include list, re-adding it if it was dropped. The approach Copilot suggested is better if the adapter ever needs to protect multiple required include entries; I don't think that's required right now, and it can be added if needed.
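The simpler approach described in this reply can be sketched as follows. This is an illustrative reduction with a hypothetical function signature, not the adapter's actual code.

```python
# Sketch: apply user-provided additional args via update() as-is, then re-add
# the adapter-required include entry if the update dropped it.
def merge_response_args(response_args, additional_args, reasoning):
    if reasoning:
        response_args["reasoning"] = reasoning
        response_args.setdefault("include", ["reasoning.encrypted_content"])
    response_args.update(additional_args)
    if reasoning:
        include = list(response_args.get("include") or [])
        if "reasoning.encrypted_content" not in include:
            include.append("reasoning.encrypted_content")
        response_args["include"] = include
    return response_args
```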

@viraj-s15
Author

Thank you @viraj-s15, we'll get this tested and reviewed.

Thanks for taking the time to review; I'll address any review comments.
