
fix: releases SQL placeholder, settings serialization, and Gemini thinking model support#140

Merged
AmintaCCCP merged 1 commit into AmintaCCCP:main from iamvicliu:fix/gemini-thinking-model-and-sqlite-bugs
May 13, 2026
Conversation

@iamvicliu
Contributor

@iamvicliu iamvicliu commented May 13, 2026

Summary

Three independent bugs found during deployment on a self-hosted instance:

1. server/src/routes/releases.ts — SQL INSERT crash

The INSERT OR REPLACE INTO releases statement lists 15 columns but only has 14 ? placeholders, causing SqliteError: 14 values for 15 columns on every release sync.

- ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+ ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
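One way to prevent this class of drift entirely (a hypothetical refactor, not part of this PR's diff) is to derive the placeholder list from the column list so the two counts can never disagree:

```typescript
// Hypothetical refactor sketch: generate placeholders from the column names
// so a column added later automatically gets a matching "?".
const RELEASE_COLUMNS = [
  "id", "tag_name", "name", "body", "html_url", "published_at", "prerelease",
  "draft", "is_read", "assets", "repo_id", "repo_full_name", "repo_name",
  "zipball_url", "tarball_url",
];

const placeholders = RELEASE_COLUMNS.map(() => "?").join(", ");

const sql =
  `INSERT OR REPLACE INTO releases (${RELEASE_COLUMNS.join(", ")}) ` +
  `VALUES (${placeholders})`;
// placeholders now always contains exactly RELEASE_COLUMNS.length "?" marks
```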

2. server/src/routes/configs.ts — better-sqlite3 RangeError on settings save

better-sqlite3 treats a plain object passed as a positional parameter as a named-parameter map, throwing RangeError: Too few parameter values were provided when saving AI or WebDAV config objects. Fix: serialize object/array values with JSON.stringify before the stmt.run() call.
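A minimal sketch of the serialization step (the helper name `toBindable` is illustrative; the actual fix inlines the serialization before `stmt.run()`):

```typescript
// Illustrative sketch: better-sqlite3 only binds numbers, strings, bigints,
// Buffers, and null as positional parameters; a bare object is interpreted
// as a named-parameter map, which triggers the RangeError. Converting
// objects/arrays to JSON strings first makes them bindable.
function toBindable(value: unknown): string | number | bigint | null {
  if (value === null || value === undefined) return null;
  if (typeof value === "object") return JSON.stringify(value); // objects & arrays
  if (typeof value === "boolean") return value ? 1 : 0; // SQLite has no boolean type
  return value as string | number | bigint;
}

// Hypothetical usage in the route handler:
//   stmt.run(key, toBindable(value))   // instead of stmt.run(key, value ?? null)
```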

3. src/services/aiService.ts — Gemini thinking models (e.g. gemini-2.5-pro)

Three related issues when using a Gemini thinking model:

| Issue | Root cause | Fix |
| --- | --- | --- |
| "No content received" on connection test | `maxTokens: 50` is consumed entirely by the thinking phase, leaving zero tokens for the actual reply; the response returns `finishReason: MAX_TOKENS` with no parts | Raise `maxTokens` to 2048 |
| Connection test times out | Timeout was 10 s for all Gemini calls; thinking models routinely take 15–30 s | Extend timeout to 30 s for `apiType === 'gemini'` |
| Thinking trace mixed into output | Response parts include items with `thought: true` (internal reasoning); concatenating all parts returns the model's chain-of-thought as part of the answer | Filter out `thought: true` parts before joining |
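The thought-filtering step can be sketched as follows, assuming the Gemini response shape where each candidate part may carry `thought: true` for internal reasoning (the function name is illustrative, not the identifier used in aiService.ts):

```typescript
// Sketch of filtering internal reasoning out of a Gemini candidate's parts.
// Parts flagged `thought: true` hold the model's chain-of-thought and should
// not be concatenated into the user-visible reply.
interface GeminiPart {
  text?: string;
  thought?: boolean;
}

function extractReplyText(parts: GeminiPart[]): string {
  return parts
    .filter((p) => !p.thought && typeof p.text === "string")
    .map((p) => p.text)
    .join("");
}

const parts: GeminiPart[] = [
  { text: "Let me reason about this request…", thought: true },
  { text: "OK" },
];
// extractReplyText(parts) → "OK" (the thought part is dropped)
```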

Test plan

  • Sync releases from GitHub — no SqliteError crash
  • Save AI or WebDAV settings — no RangeError crash
  • Set API type to gemini, model to gemini-2.5-pro, click Test Connection — returns "Connection successful" within 30 s
  • Run a repository analysis with gemini-2.5-pro — summary contains only the model's reply, not its internal reasoning trace

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • Bug Fixes

    • Fixed data serialization in configuration storage to properly handle complex value types and prevent data corruption
    • Enhanced AI service connectivity testing with intelligent timeout adjustments based on model selection and improved diagnostic response capacity
    • Streamlined Gemini model response processing by filtering unnecessary content for better performance
  • Chores

    • Minor SQL statement formatting refinements

Review Change Stack

…nking model support

- server/routes/releases.ts: fix INSERT VALUES having 14 placeholders for 15
  columns (id, tag_name, name, body, html_url, published_at, prerelease, draft,
  is_read, assets, repo_id, repo_full_name, repo_name, zipball_url, tarball_url),
  causing SqliteError on every release sync

- server/routes/configs.ts: serialize object/array values with JSON.stringify
  before passing to better-sqlite3; the library treats plain objects as named
  parameter maps, which throws RangeError when updating AI/WebDAV settings

- src/services/aiService.ts (Gemini thinking models, e.g. gemini-2.5-pro):
  * filter out thought parts (thought: true) from response candidates so only
    the actual reply text is returned, not the internal reasoning trace
  * raise testConnection maxTokens from 50 to 2048; thinking models allocate
    a minimum of ~1024 tokens for reasoning, leaving nothing for output at 50
  * extend testConnection timeout to 30 s for gemini api type (was 10 s),
    matching the existing treatment of openai-responses and reasoningEffort

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
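The timeout selection described in the last bullet can be sketched as a pure function (constant and function names are hypothetical, not the actual identifiers in aiService.ts):

```typescript
// Illustrative sketch of the testConnection timeout rule: thinking models
// (Gemini, and OpenAI Responses or any call with reasoningEffort set) spend
// 15–30 s reasoning before emitting output, so they get a longer budget.
const DEFAULT_TIMEOUT_MS = 10_000;
const THINKING_TIMEOUT_MS = 30_000;

function testConnectionTimeout(apiType: string, reasoningEffort?: string): number {
  if (apiType === "gemini" || apiType === "openai-responses" || reasoningEffort) {
    return THINKING_TIMEOUT_MS;
  }
  return DEFAULT_TIMEOUT_MS;
}
```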
@coderabbitai
Contributor

coderabbitai Bot commented May 13, 2026

📝 Walkthrough

Walkthrough

This PR updates three separate backend components: SQLite value serialization in the settings route to prevent RangeError when storing complex objects, Gemini response filtering and improved test connection timeout handling in the AI service, and minor SQL formatting in the releases route.

Changes

Storage and Service Improvements

Layer / File(s) Summary
Settings storage value serialization
server/src/routes/configs.ts
The /api/settings PUT route now JSON-serializes non-primitive values (objects/arrays) before passing them to the SQLite INSERT OR REPLACE statement, replacing direct stmt.run(key, value ?? null) calls to prevent RangeError.
AI service Gemini improvements
src/services/aiService.ts
Gemini response text extraction filters out internal "thought" parts before joining content, testConnection extends timeout calculation for gemini models or when reasoningEffort is set, and increases connection-test maxTokens from 50 to 2048.
Releases SQL formatting
server/src/routes/releases.ts
Whitespace adjustment in the PUT /api/releases bulk upsert prepared statement between the column list and VALUES clause.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Possibly related PRs

  • AmintaCCCP/GithubStarsManager#114: Both PRs modify src/services/aiService.ts testConnection logic (timeout/maxTokens) and touch server/src/routes/configs.ts token handling.

Poem

🐰 Serialized tokens safe in SQLite's keep,
Gemini's thoughts now buried deep,
Timeouts extended, tokens spread wide,
Three little fixes, tested with pride! ✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately covers all three main fixes in the PR: SQL placeholder issue in releases, settings serialization in configs, and Gemini thinking model support in aiService.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.




@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/services/aiService.ts (1)

503-503: 💤 Low value

Consider adding a comment explaining the maxTokens value.

The increase from 50 to 2048 is necessary for thinking models that consume tokens during their reasoning phase, but this context isn't obvious from the code alone. A brief comment would help future maintainers understand why such a high value is needed for a simple "OK" response.

📝 Suggested comment
         const content = await this.requestText({
           system: 'You are a connection test assistant.',
           user: 'Reply with exactly one word: OK',
           temperature: 0,
+          // High maxTokens needed for thinking models (e.g., Gemini 2.5 Pro, o1) that consume
+          // tokens during reasoning phase before generating the response
           maxTokens: 2048,
           signal: controller.signal,
         });
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/aiService.ts` at line 503, Add a brief inline comment next to
the maxTokens setting in aiService.ts (the maxTokens: 2048 entry) explaining why
the value was increased from 50 to 2048 (e.g., to accommodate "thinking"
reasoning models that consume many tokens during internal reasoning even for
short outputs like "OK"), so future maintainers understand the rationale and
don't reduce it; reference the maxTokens property in the AI request construction
(where maxTokens is set) when adding the comment.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 37a05880-0868-497a-bb90-de8e818d0e38

📥 Commits

Reviewing files that changed from the base of the PR and between daae883 and 8113a05.

📒 Files selected for processing (3)
  • server/src/routes/configs.ts
  • server/src/routes/releases.ts
  • src/services/aiService.ts

@AmintaCCCP AmintaCCCP merged commit 21cf4aa into AmintaCCCP:main May 13, 2026
5 checks passed
