fix: releases SQL placeholder, settings serialization, and Gemini thinking model support #140
Conversation
fix: releases SQL placeholder, settings serialization, and Gemini thinking model support
- server/routes/releases.ts: fix INSERT VALUES having 14 placeholders for 15
columns (id, tag_name, name, body, html_url, published_at, prerelease, draft,
is_read, assets, repo_id, repo_full_name, repo_name, zipball_url, tarball_url),
causing SqliteError on every release sync
- server/routes/configs.ts: serialize object/array values with JSON.stringify
before passing to better-sqlite3; the library treats plain objects as named
parameter maps, which throws RangeError when updating AI/WebDAV settings
- src/services/aiService.ts (Gemini thinking models, e.g. gemini-2.5-pro):
* filter out thought parts (thought: true) from response candidates so only
the actual reply text is returned, not the internal reasoning trace
* raise testConnection maxTokens from 50 to 2048; thinking models allocate
a minimum of ~1024 tokens for reasoning, leaving nothing for output at 50
* extend testConnection timeout to 30 s for gemini api type (was 10 s),
matching the existing treatment of openai-responses and reasoningEffort
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📝 Walkthrough
This PR updates three separate backend components: SQLite value serialization in the settings route to prevent RangeError when storing complex objects, Gemini response filtering and improved test connection timeout handling in the AI service, and a corrected placeholder count in the releases route's SQL INSERT.
Changes: Storage and Service Improvements
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 5 passed
🧹 Nitpick comments (1)
src/services/aiService.ts (1)
503-503: 💤 Low value. Consider adding a comment explaining the maxTokens value.
The increase from 50 to 2048 is necessary for thinking models that consume tokens during their reasoning phase, but this context isn't obvious from the code alone. A brief comment would help future maintainers understand why such a high value is needed for a simple "OK" response.
📝 Suggested comment
```diff
 const content = await this.requestText({
   system: 'You are a connection test assistant.',
   user: 'Reply with exactly one word: OK',
   temperature: 0,
+  // High maxTokens needed for thinking models (e.g., Gemini 2.5 Pro, o1) that consume
+  // tokens during reasoning phase before generating the response
   maxTokens: 2048,
   signal: controller.signal,
 });
```
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@src/services/aiService.ts`:
- Line 503: Add a brief inline comment next to the maxTokens setting in
aiService.ts (the maxTokens: 2048 entry) explaining why the value was increased
from 50 to 2048 (e.g., to accommodate "thinking" reasoning models that consume
many tokens during internal reasoning even for short outputs like "OK"), so
future maintainers understand the rationale and don't reduce it; reference the
maxTokens property in the AI request construction (where maxTokens is set) when
adding the comment.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 37a05880-0868-497a-bb90-de8e818d0e38
📒 Files selected for processing (3)
server/src/routes/configs.ts, server/src/routes/releases.ts, src/services/aiService.ts
Summary
Three independent bugs found during deployment on a self-hosted instance:
1. `server/src/routes/releases.ts` — SQL INSERT crash
The `INSERT OR REPLACE INTO releases` statement lists 15 columns but only has 14 `?` placeholders, causing `SqliteError: 14 values for 15 columns` on every release sync.
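For illustration, a minimal sketch of a placeholder-matched statement with better-sqlite3, using the column list from the commit message; the database file name and statement variable are assumptions, and the route's actual code differs in its surrounding logic:

```ts
import Database from 'better-sqlite3';

const db = new Database('data.db'); // illustrative file name

// 15 named columns require 15 "?" placeholders in VALUES; the bug was a
// 14-placeholder VALUES list bound against this 15-column list.
const insertRelease = db.prepare(`
  INSERT OR REPLACE INTO releases (
    id, tag_name, name, body, html_url, published_at, prerelease, draft,
    is_read, assets, repo_id, repo_full_name, repo_name, zipball_url, tarball_url
  ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);
```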
2. `server/src/routes/configs.ts` — `better-sqlite3` RangeError on settings save
`better-sqlite3` treats a plain object passed as a positional parameter as a named-parameter map, throwing `RangeError: Too few parameter values were provided` when saving AI or WebDAV config objects. Fix: serialize object/array values with `JSON.stringify` before the `stmt.run()` call.
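A minimal sketch of that serialization step, assuming a simple key/value configs table; the table layout and the saveConfig helper are hypothetical, not the route's actual code:

```ts
import Database from 'better-sqlite3';

const db = new Database('data.db'); // illustrative file name
const stmt = db.prepare('UPDATE configs SET value = ? WHERE key = ?');

// better-sqlite3 only binds numbers, strings, bigints, Buffers, and null.
// A plain object passed positionally is read as a named-parameter map and
// throws RangeError, so object/array values are serialized to JSON first.
function saveConfig(key: string, value: string | number | object | null): void {
  const bindable =
    value !== null && typeof value === 'object' ? JSON.stringify(value) : value;
  stmt.run(bindable, key);
}

saveConfig('ai', { provider: 'gemini', model: 'gemini-2.5-pro' });
```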
3. `src/services/aiService.ts` — Gemini thinking models (e.g. `gemini-2.5-pro`)
Three related issues when using a Gemini thinking model:
- `maxTokens: 50` is consumed entirely by the thinking phase, leaving zero tokens for the actual reply — the response returns `finishReason: MAX_TOKENS` with no `parts`. Fix: raise `maxTokens` to 2048.
- The 10 s test connection timeout is too short for thinking models. Fix: extend it to 30 s for `apiType === 'gemini'`.
- `parts` include items with `thought: true` (internal reasoning), so concatenating all parts returns the model's chain-of-thought as part of the answer. Fix: filter out `thought: true` parts before joining (a sketch of the filtering follows below).
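A minimal sketch of the thought-part filtering, assuming the Gemini generateContent response shape described above (candidates → content → parts, with reasoning parts flagged `thought: true`); the interfaces and helper name are illustrative, not the service's actual code:

```ts
// Relevant slice of a Gemini generateContent response. Thinking models such
// as gemini-2.5-pro can return parts flagged thought: true that carry the
// internal reasoning trace rather than the reply.
interface GeminiPart {
  text?: string;
  thought?: boolean;
}

interface GeminiResponse {
  candidates?: Array<{ content?: { parts?: GeminiPart[] } }>;
}

// Join only non-thought text parts so the returned answer does not include
// the model's chain-of-thought.
function extractReplyText(response: GeminiResponse): string {
  const parts = response.candidates?.[0]?.content?.parts ?? [];
  return parts
    .filter((part) => !part.thought && typeof part.text === 'string')
    .map((part) => part.text)
    .join('');
}
```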
Test plan
- Release sync completes without the `SqliteError` crash
- Saving AI/WebDAV settings completes without the `RangeError` crash
- Set API type to `gemini`, model to `gemini-2.5-pro`, click Test Connection — returns "Connection successful" within 30 s
- Generate a summary with `gemini-2.5-pro` — the summary contains only the model's reply, not its internal reasoning trace

🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
Bug Fixes
Chores