feat: Add Qwen3.5-9B model and expand CompletionRequest schema #217
Walkthrough

Extended the OpenAPI specification to introduce support for a new model (Qwen/Qwen3.5-9B) and expanded the CompletionRequest schema with nine new fields enabling structured outputs, tool usage, moderation controls, and configurable reasoning behavior.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/libs/Together/openapi.yaml (1)
5940-5950: ⚠️ Potential issue | 🟠 Major — Use request-shaped schemas for `tools` and `tool_choice`.

Line 5943 points `tools` at `ToolsPart`, which does not require `type: function` or `function.name`. Line 5949 points `tool_choice` at `ToolChoice`, which requires response-only fields like `id`, `index`, and `function.arguments`. That will generate request models that both accept invalid tool definitions and reject valid `tool_choice` objects.

🔧 Suggested schema split

         tools:
           type: array
           items:
-            $ref: '#/components/schemas/ToolsPart'
+            $ref: '#/components/schemas/ChatCompletionTool'
           description: 'A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.'
         tool_choice:
           oneOf:
             - type: string
               example: tool_name
-            - $ref: '#/components/schemas/ToolChoice'
+            - $ref: '#/components/schemas/ToolChoiceRequest'
           description: 'Controls which (if any) function is called by the model. By default uses `auto`, which lets the model pick between generating a message or calling a function.'

 components:
   schemas:
+    ToolChoiceRequest:
+      required:
+        - type
+        - function
+      type: object
+      properties:
+        type:
+          enum:
+            - function
+          type: string
+        function:
+          required:
+            - name
+          type: object
+          properties:
+            name:
+              type: string

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/Together/openapi.yaml` around lines 5940 - 5950, The OpenAPI schemas for tools and tool_choice must use request-shaped types: add a new components schema (e.g., ToolChoiceRequest) that requires type with enum ["function"] and a function object requiring name, update ToolsPart or add a ToolsRequest schema so each item enforces type:function and function.name, then change the tools array to reference the request schema (ToolsRequest/ToolsPartRequest) and change tool_choice's oneOf to reference the string example or the new ToolChoiceRequest instead of the response-oriented ToolChoice; update any references to ToolsPart and ToolChoice to their request-shaped counterparts.
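To make the mismatch concrete, here is a minimal sketch in plain Python (not the generated client) that mimics JSON Schema `required` checks; the required-field lists are taken from the finding above, and `get_weather` is a hypothetical function name used only for illustration:

```python
# Sketch: mimic JSON Schema "required" semantics to show why a
# response-shaped ToolChoice schema rejects a valid request payload.
def missing_required(obj: dict, required: list[str]) -> list[str]:
    """Return the required fields that are absent from obj."""
    return [field for field in required if field not in obj]

# Per the finding, the response-oriented ToolChoice requires these fields:
RESPONSE_SHAPE_REQUIRED = ["id", "index", "function"]
# A request-shaped ToolChoiceRequest would require only these:
REQUEST_SHAPE_REQUIRED = ["type", "function"]

# A typical tool_choice object a caller sends in a request:
tool_choice = {"type": "function", "function": {"name": "get_weather"}}

print(missing_required(tool_choice, RESPONSE_SHAPE_REQUIRED))  # → ['id', 'index']
print(missing_required(tool_choice, REQUEST_SHAPE_REQUIRED))   # → []
```

Under the response-shaped schema the perfectly valid request payload fails validation for lacking `id` and `index`; the request-shaped schema accepts it.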
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/libs/Together/openapi.yaml`:
- Around line 964-976: The CompletionRequest.model enum in the OpenAPI spec is
missing the "Qwen/Qwen3.5-9B" entry that your examples use; update the schema so
the CompletionRequest.model enum (and thus the generated CompletionRequestModel)
includes the value "Qwen/Qwen3.5-9B" just like ChatCompletionRequest does, then
regenerate the client code so the generated CompletionRequestModel contains that
value and callers using CompletionRequest.model can reference "Qwen/Qwen3.5-9B".
---
Outside diff comments:
In `@src/libs/Together/openapi.yaml`:
- Around line 5940-5950: The OpenAPI schemas for tools and tool_choice must use
request-shaped types: add a new components schema (e.g., ToolChoiceRequest) that
requires type with enum ["function"] and a function object requiring name,
update ToolsPart or add a ToolsRequest schema so each item enforces
type:function and function.name, then change the tools array to reference the
request schema (ToolsRequest/ToolsPartRequest) and change tool_choice's oneOf to
reference the string example or the new ToolChoiceRequest instead of the
response-oriented ToolChoice; update any references to ToolsPart and ToolChoice
to their request-shaped counterparts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e44d84f5-df4d-4609-a4dc-d140a903564f
⛔ Files ignored due to path filters (10)
- src/libs/Together/Generated/Together.BatchesClient.CreateBatches.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.ChatClient.ChatCompletions.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.ChatClient.ChatCompletionsAsStream.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.IBatchesClient.CreateBatches.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.IChatClient.ChatCompletions.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.IChatClient.ChatCompletionsAsStream.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.Models.BatchJob.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.Models.ChatCompletionRequest.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.Models.ChatCompletionRequestModel.g.cs is excluded by !**/generated/**
- src/libs/Together/Generated/Together.Models.CreateBatchRequest.g.cs is excluded by !**/generated/**
📒 Files selected for processing (1)
src/libs/Together/openapi.yaml
+          source: "# Docs for v1 can be found by changing the above selector ^\nfrom together import Together\nimport os\n\nclient = Together(\n api_key=os.environ.get(\"TOGETHER_API_KEY\"),\n)\n\nresponse = client.completions.create(\n model=\"Qwen/Qwen3.5-9B\",\n prompt=\"The largest city in France is\",\n max_tokens=1\n)\n\nprint(response.choices[0].text)\n"
         - lang: Python
           label: Together AI SDK (v1)
-          source: "from together import Together\nimport os\n\nclient = Together(\n api_key=os.environ.get(\"TOGETHER_API_KEY\"),\n)\n\nresponse = client.completions.create(\n model=\"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo\",\n prompt=\"The largest city in France is\",\n max_tokens=1\n)\n\nprint(response.choices[0].text)\n"
+          source: "from together import Together\nimport os\n\nclient = Together(\n api_key=os.environ.get(\"TOGETHER_API_KEY\"),\n)\n\nresponse = client.completions.create(\n model=\"Qwen/Qwen3.5-9B\",\n prompt=\"The largest city in France is\",\n max_tokens=1\n)\n\nprint(response.choices[0].text)\n"
         - lang: TypeScript
           label: Together AI SDK (TypeScript)
-          source: "import Together from \"together-ai\";\n\nconst client = new Together({\n apiKey: process.env.TOGETHER_API_KEY,\n});\n\nconst response = await client.completions.create({\n model: \"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo\",\n prompt: \"The largest city in France is\",\n max_tokens: 1,\n});\n\nconsole.log(response.choices[0].text);\n"
+          source: "import Together from \"together-ai\";\n\nconst client = new Together({\n apiKey: process.env.TOGETHER_API_KEY,\n});\n\nconst response = await client.completions.create({\n model: \"Qwen/Qwen3.5-9B\",\n prompt: \"The largest city in France is\",\n max_tokens: 1,\n});\n\nconsole.log(response.choices[0].text);\n"
         - lang: JavaScript
           label: Together AI SDK (JavaScript)
-          source: "import Together from \"together-ai\";\n\nconst client = new Together({\n apiKey: process.env.TOGETHER_API_KEY,\n});\n\nconst response = await client.completions.create({\n model: \"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo\",\n prompt: \"The largest city in France is\",\n max_tokens: 1\n});\n\nconsole.log(response.choices[0].text);\n"
+          source: "import Together from \"together-ai\";\n\nconst client = new Together({\n apiKey: process.env.TOGETHER_API_KEY,\n});\n\nconst response = await client.completions.create({\n model: \"Qwen/Qwen3.5-9B\",\n prompt: \"The largest city in France is\",\n max_tokens: 1\n});\n\nconsole.log(response.choices[0].text);\n"
         - lang: Shell
           label: cURL
-          source: "curl -X POST \"https://api.together.xyz/v1/completions\" \\\n -H \"Authorization: Bearer $TOGETHER_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo\",\n \"prompt\": \"The largest city in France is\",\n \"max_tokens\": 1\n }'\n"
+          source: "curl -X POST \"https://api.together.xyz/v1/completions\" \\\n -H \"Authorization: Bearer $TOGETHER_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"Qwen/Qwen3.5-9B\",\n \"prompt\": \"The largest city in France is\",\n \"max_tokens\": 1\n }'\n"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== OpenAPI completions model references =="
rg -n -C2 'CompletionRequest:|Qwen/Qwen3\.5-9B' src/libs/Together/openapi.yaml
echo
echo "== Generated completion model helpers =="
fd 'CompletionRequestModel.*\.g\.cs$' src/libs/Together/Generated | while read -r f; do
echo "FILE: $f"
sed -n '1,180p' "$f"
echo
done

Repository: tryAGI/Together
Length of output: 25191
🏁 Script executed:
sed -n '5518,5700p' src/libs/Together/openapi.yaml

Repository: tryAGI/Together
Length of output: 7405
🏁 Script executed:
sed -n '5826,5880p' src/libs/Together/openapi.yaml

Repository: tryAGI/Together
Length of output: 3233
Add Qwen/Qwen3.5-9B to the CompletionRequest.model enum.
The examples at lines 964–976 advertise Qwen/Qwen3.5-9B, but the OpenAPI schema enum for CompletionRequest.model (lines 5520–5526) does not include it. The generated CompletionRequestModel enum is missing this value. For consistency with the ChatCompletionRequest pattern, which includes this model in both the schema and generated code, add Qwen/Qwen3.5-9B to the CompletionRequest.model enum.
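As a sketch, the fix could mirror the ChatCompletionRequest pattern. Note this is an assumption about the spec's shape, not a verbatim excerpt: only the `Qwen/Qwen3.5-9B` entry comes from this finding, while the `anyOf` nesting and the existing enum value shown are illustrative.

```yaml
# src/libs/Together/openapi.yaml — sketch only; the actual structure and
# enum contents around lines 5520-5526 may differ.
CompletionRequest:
  type: object
  properties:
    model:
      description: The name of the model to query.
      anyOf:
        - type: string
        - type: string
          enum:
            - meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo  # existing entry (assumed)
            - Qwen/Qwen3.5-9B  # new entry, matching ChatCompletionRequest
```

After editing the spec, the client would need to be regenerated so the generated CompletionRequestModel enum picks up the new value.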
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/Together/openapi.yaml` around lines 964 - 976, The
CompletionRequest.model enum in the OpenAPI spec is missing the
"Qwen/Qwen3.5-9B" entry that your examples use; update the schema so the
CompletionRequest.model enum (and thus the generated CompletionRequestModel)
includes the value "Qwen/Qwen3.5-9B" just like ChatCompletionRequest does, then
regenerate the client code so the generated CompletionRequestModel contains that
value and callers using CompletionRequest.model can reference "Qwen/Qwen3.5-9B".
Summary by CodeRabbit
New Features
Documentation