Skip to content

fix: 修复 3 个后端 BUG + 新增向导恢复/超时重试功能#157

Open
Kobe9312 wants to merge 2 commits intoshenminglinyi:masterfrom
Kobe9312:master
Open

fix: 修复 3 个后端 BUG + 新增向导恢复/超时重试功能#157
Kobe9312 wants to merge 2 commits intoshenminglinyi:masterfrom
Kobe9312:master

Conversation

@Kobe9312
Copy link
Copy Markdown

@Kobe9312 Kobe9312 commented May 3, 2026

概述

本次 PR 包含 3 个后端 BUG 修复2 项前端向导功能增强,共涉及 9 个文件。


一、BUG 修复

BUG-1: 嵌入配置更新接口崩溃

文件: interfaces/api/v1/core/settings.py
问题: update_embedding_config 端点调用 updated.to_api_dict(),但 EmbeddingConfigModel 只有 to_dict() 方法,导致运行时 AttributeError 崩溃。
修复: 将 to_api_dict() 改为 to_dict()

# Before
return updated.to_api_dict()
# After
return updated.to_dict()

BUG-2: OpenAI Responses API 500 错误未兜底

文件: infrastructure/ai/providers/openai_provider.py
问题: OpenAI 兼容接口返回 500 InternalServerError 时,无法触发 Responses API → Chat Completions 的降级,直接抛出异常。
修复: 在 generate()stream_generate() 的异常捕获中新增 openai.InternalServerError,并在字符串匹配中增加 "500" / "Internal Server Error" 的判断。

BUG-3: 章节生成字数不受控

文件:

  • application/workflows/auto_novel_generation_workflow.py
  • interfaces/api/v1/engine/generation.py
  • application/engine/services/autopilot_daemon.py

问题: 用户在创建小说时设置的"每章目标字数" (target_words_per_chapter) 未传递给章节生成流程,LLM 始终按硬编码的 3000-4000 字生成。
修复:

  • API 层读取小说的 target_words_per_chapter,传递给 workflow
  • Prompt 中的章节长度规则动态使用目标字数
  • 自动导航模式下 max_tokens 根据目标字数动态计算

二、前端功能增强

功能 1: 向导恢复入口

文件:

  • frontend/src/composables/useWorkbench.ts
  • frontend/src/views/Workbench.vue
  • frontend/src/components/stats/StatsTopBar.vue

问题: 新书设置向导在刷新页面后消失,无法恢复。用户只能进入工作台手动补全未完成的设定(地点、故事线等)。
新增:

  • 工作台 StatsTopBar 工具栏添加「设置向导」按钮
  • 点击后打开 NovelSetupGuide 组件,自动检测已有进度并跳到对应步骤

功能 2: 向导超时重试

文件: frontend/src/components/onboarding/NovelSetupGuide.vue

问题: LLM 生成较慢时前端轮询超时,仅显示错误提示,无重试入口。即使后台实际已完成生成,用户也无法获取数据。
新增:

  • 超时错误提示旁新增「刷新检查」按钮
  • 点击后先向后台查询已有数据,若已完成则直接展示
  • 若未完成则重新触发生成并启动轮询
  • 向导恢复时自动触发生成(步骤 2/3)

影响范围

层级 改动
接口层 嵌入配置 API、章节生成 API
基础设施层 OpenAI 提供者错误处理
应用层 章节生成 workflow、自动导航
前端 工作台、设置向导组件

所有改动均为增量修复,不影响现有功能。

Summary by CodeRabbit

  • New Features

    • Setup wizard modal accessible from the stats panel
    • Retry flow for character/location generation with "refresh check" control
    • New API to fetch available embedding models
  • Improvements

    • Setup wizard shows clearer error headers, warnings and recovery actions
    • Generation fallbacks broadened for more robust AI requests
    • Chapter generation honors configurable target words per novel (including streaming)

- Fix to_api_dict() crash in embedding config update (settings.py:175)
- Fix OpenAI provider 500 InternalServerError handling for Responses API fallback
- Fix chapter generation word count control: use novel target_words_per_chapter
- Fix autopilot max_tokens based on target words per chapter
- Add 'Setup Wizard' button in workbench StatsTopBar for existing novels
- Add retry/refresh button in wizard when generation times out
- Auto-trigger generation when wizard resumes at character/location steps
@Kobe9312 Kobe9312 requested a review from shenminglinyi as a code owner May 3, 2026 04:14
@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented May 3, 2026

📝 Walkthrough

Walkthrough

Adds configurable target-word plumbing for chapter generation, updates daemon token budgeting to use the target, exposes target words via the generation API, introduces a frontend setup wizard and retry flows, broadens OpenAI provider fallback/error detection, and adds an embedding-model fetch endpoint.

Changes

Chapter Generation Target Words

Layer / File(s) Summary
API Wiring
interfaces/api/v1/engine/generation.py
generate_chapter_stream now depends on novel_service, loads the novel, derives target_words from novel.target_words_per_chapter (fallback 2500), and passes it to workflow.
Workflow Core
application/workflows/auto_novel_generation_workflow.py
Added target_words: int = 2500 to generate_chapter, generate_chapter_stream, and build_chapter_prompt; beat path calls magnify_outline_to_beats(..., target_chapter_words=target_words); non-beat path forwards target_words into prompt builder; prompt length rule uses beat_target_words or target_words.
Daemon Token Budget
application/engine/services/autopilot_daemon.py
Non-beat generation now sets GenerationConfig.max_tokens = int(tw * 1.5) (was fixed 3000), linking token budget to target words.
Tests
tests/unit/application/workflows/test_auto_novel_generation_workflow.py
Mocks magnify_outline_to_beats() and updates prompt-fingerprint test to scan mock_llm_service.generate.call_args_list for the injected fingerprint prompt.

Frontend: Setup Wizard & Onboarding

Layer / File(s) Summary
State Hook
frontend/src/composables/useWorkbench.ts
Adds targetChapters (default 100) and novelId refs; loadDesk() populates them from novelData; they are returned to consumers.
Top Bar / View
frontend/src/components/stats/StatsTopBar.vue, frontend/src/views/Workbench.vue
StatsTopBar adds "设置向导" button emitting open-wizard; Workbench handles open-wizard, adds showSetupWizard state, and conditionally renders NovelSetupGuide with novelId and targetChapters.
Onboarding Modal Logic
frontend/src/components/onboarding/NovelSetupGuide.vue
Character/location error panels changed to warning with header/footer slots; added retry buttons with loading states; new triggerStepGeneration(step) centralizes step generation on mount/show; retryCharacters()/retryLocations() fetch bible, detect early-complete, or re-trigger generation with polling and error handling.

OpenAI Provider Fallbacks

Layer / File(s) Summary
Provider
infrastructure/ai/providers/openai_provider.py
generate() and stream_generate() now include openai.InternalServerError in fallback exception tuples; gateway/error-matching expanded to detect "500" and "Internal Server Error" alongside existing 400/404/account/argument markers.

Embedding Settings API

Layer / File(s) Summary
Settings Endpoint
interfaces/api/v1/core/settings.py
update_embedding_config now returns updated.to_dict() (not to_api_dict()); added POST /settings/embedding/fetch-models which validates base_url, calls workbench model listing, and returns model IDs (or [] when base_url empty).

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant API as Engine API
    participant NovelSvc as NovelService
    participant Workflow as AutoNovelGenerationWorkflow
    participant Daemon as AutopilotDaemon
    participant LLM as LLM Provider

    Client->>API: request generate_chapter_stream(novel_id, ...)
    API->>NovelSvc: get_novel(novel_id)
    NovelSvc-->>API: novel (target_words_per_chapter)
    API->>Workflow: generate_chapter_stream(..., target_words)
    Workflow->>Daemon: request writing (beat or non-beat)
    alt non-beat
        Daemon->>Daemon: compute max_tokens = int(tw * 1.5)
    end
    Daemon->>LLM: stream_generate(prompt, token_budget)
    LLM-->>Daemon: streaming tokens / events
    Daemon-->>Workflow: stream events
    Workflow-->>API: forward stream to Client
Loading

(Note: colored rectangles not used; diagram focuses on component interactions.)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • shenminglinyi

Poem

🐰 I hopped through prompts and word-count fields so wide,
I nudged the daemon's tokens to swell with pride,
The wizard glows, retries hum in the night,
OpenAI now falls back with steadier light,
A rabbit cheers: clean wires and gentle guide.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 71.43% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly summarizes the main changes: 3 backend BUG fixes and 2 frontend wizard feature enhancements for recovery/retry functionality.
Description check ✅ Passed The description is comprehensive and well-structured, covering all required sections: it details 3 backend bugs with files and fixes, 2 frontend enhancements, and impact scope, though testing command output is not included.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Tip

💬 Introducing Slack Agent: The best way for teams to turn conversations into code.

Slack Agent is built on CodeRabbit's deep understanding of your code, so your team can collaborate across the entire SDLC without losing context.

  • Generate code and open pull requests
  • Plan features and break down work
  • Investigate incidents and troubleshoot customer tickets together
  • Automate recurring tasks and respond to alerts with triggers
  • Summarize progress and report instantly

Built for teams:

  • Shared memory across your entire org—no repeating context
  • Per-thread sandboxes to safely plan and execute work
  • Governance built-in—scoped access, auditability, and budget controls

One agent for your entire SDLC. Right inside Slack.

👉 Get your free trial and get 200 agent minutes per Slack user (a $50 value).


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share
Review rate limit: 7/8 reviews remaining, refill in 7 minutes and 30 seconds.

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
frontend/src/components/onboarding/NovelSetupGuide.vue (1)

954-975: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Guard async resume flow against stale open/close cycles.

After await detectWizardProgress(), stale callbacks can still run and call triggerStepGeneration(step) even if the wizard was closed (or reopened), causing unintended backend generation.

Suggested fix
 const currentStep = ref(1)
+const openSessionEpoch = ref(0)

 watch(
   () => props.show,
   async (val) => {
     if (val) {
+      const epoch = ++openSessionEpoch.value
       resetWizardStateForOpen()
       // 检查已有进度,确定从哪一步继续
       const step = await detectWizardProgress()
+      if (!props.show || openSessionEpoch.value !== epoch) return
       currentStep.value = step
       triggerStepGeneration(step)
     } else {
+      openSessionEpoch.value += 1
       stopGenerationOnClose()
     }
   }
 )

 onMounted(async () => {
   if (props.show) {
+    const epoch = ++openSessionEpoch.value
     resetWizardStateForOpen()
     const step = await detectWizardProgress()
+    if (!props.show || openSessionEpoch.value !== epoch) return
     currentStep.value = step
     triggerStepGeneration(step)
   }
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/onboarding/NovelSetupGuide.vue` around lines 954 -
975, After awaiting detectWizardProgress() ensure the wizard is still open
before proceeding: in both the watcher and the onMounted handler, capture the
current visibility (or increment a local generation token/nonce) before the
await and then check that props.show is still true (or the token matches) after
the await; only then set currentStep.value and call triggerStepGeneration(step);
also propagate the same guard to any async paths started by
resetWizardStateForOpen so stale callbacks cannot call triggerStepGeneration
when stopGenerationOnClose has run.
infrastructure/ai/providers/openai_provider.py (1)

126-135: ⚠️ Potential issue | 🟠 Major

Only fall back to Chat Completions before the first chunk is emitted; otherwise re-raise the exception.

The exception handlers at lines 126–132 can execute after the Responses stream has already yielded content (line 124). When they complete without returning or re-raising, execution continues to the fallback logic (lines 137–144), which creates and yields from an entirely new Chat Completions stream. Callers like auto_novel_generation_workflow.py and autopilot_daemon.py iterate over chunks sequentially and will receive mixed or duplicated content from both streams, corrupting the generated output. Add a flag to track whether any chunks have been yielded, and only proceed with fallback if the stream hasn't started; otherwise, re-raise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrastructure/ai/providers/openai_provider.py` around lines 126 - 135, The
Responses stream exception handlers currently fall back to Chat Completions even
if the Responses stream has already emitted chunks; add a local flag (e.g.,
started_emitting = False) in the streaming generator that you set to True
immediately before yielding the first chunk, then in the except blocks (the
handlers that call self.__class__._fallback_to_chat_cache.add and logger.info)
check that flag and only perform the fallback when started_emitting is False; if
started_emitting is True, re-raise the caught exception instead of swallowing it
so callers do not receive mixed/duplicated content. Ensure the flag is
referenced in the same method handling the Responses stream and use the original
exception when re-raising.
application/workflows/auto_novel_generation_workflow.py (1)

737-766: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Forward target_words into _build_prompt().

build_chapter_prompt() accepts the new parameter, but it never passes it through. That makes the API misleading and leaves callers on the default 2500-word prompt text, which breaks the new chapter-length control in helper-based call sites.

🔧 Suggested fix
     return self._build_prompt(
         context,
         outline,
         storyline_context=storyline_context,
@@
         voice_anchors=voice_anchors,
         chapter_draft_so_far=chapter_draft_so_far,
+        target_words=target_words,
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@application/workflows/auto_novel_generation_workflow.py` around lines 737 -
766, The build_chapter_prompt function accepts a target_words parameter but
never forwards it to _build_prompt, so pass target_words through when calling
self._build_prompt (e.g., add target_words=target_words to the _build_prompt
keyword args); also verify the _build_prompt signature (and any downstream
Prompt-building helpers) accepts and uses target_words so the chapter-length
control is honored by build_chapter_prompt and its callers.
application/engine/services/autopilot_daemon.py (1)

699-714: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Compute the chapter target before the non-beat branch.

tw is only assigned inside the beat-generation block, but this branch now uses it too. If context_builder is absent, use_wf can still be true and this path will raise UnboundLocalError. Move the target-word lookup out of the beat-only section and reuse it here.

🛠️ Suggested fix
 async def _handle_writing(self, novel: Novel):
     """处理写作(节拍级幂等落库)"""
     if not self._is_still_running(novel):
         return
 
+    tw = getattr(novel, "target_words_per_chapter", None) or 2500
+
     # 1. 目标控制:达到目标章节数则自动停止(允许用户设置更高的 max_auto_chapters 作为保护上限)
     target_chapters = novel.target_chapters or 50
@@
         # 5. 节拍放大
         beats = []
         if self.context_builder:
-            tw = getattr(novel, "target_words_per_chapter", None) or 2500
             beats = self.context_builder.magnify_outline_to_beats(
                 chapter_num, outline, target_chapter_words=int(tw)
             )
@@
             if use_wf:
                 prompt = self.chapter_workflow.build_chapter_prompt(
                     bundle["context"],
                     outline,
@@
                     voice_anchors=voice_anchors,
                 )
                 cfg = GenerationConfig(max_tokens=int(tw * 1.5), temperature=0.85)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@application/engine/services/autopilot_daemon.py` around lines 699 - 714, The
code uses tw (chapter target words) only inside the beat-generation branch but
later references it in the non-beat branch (causing UnboundLocalError if
context_builder is absent); compute/assign tw (the chapter target lookup) before
the branching so both beat and non-beat paths can use it. Specifically, move the
logic that derives tw (the target word count) out of the beat-only block so it
runs before the if use_wf: / else: branching that calls
chapter_workflow.build_chapter_prompt and GenerationConfig(max_tokens=int(tw *
1.5)), ensuring functions/methods like context_builder, build_chapter_prompt,
GenerationConfig, and _stream_llm_with_stop_watch still receive the same
tw-derived value and preserving the existing _is_still_running(novel) check and
logging.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@frontend/src/components/stats/StatsTopBar.vue`:
- Around line 75-79: The clickable div in StatsTopBar.vue (the element with
class "action-trigger" and `@click`="$emit('open-wizard')") is not
keyboard-accessible; add tabindex="0" and keyboard handlers so Enter and Space
also trigger the same action (either via `@keydown.enter` and `@keydown.space` that
call $emit('open-wizard') or by adding an onKeydown method that emits and
prevents default for Space), keep role="button" and aria-label, and ensure the
handler prevents default for Space to avoid page scrolling.

In `@frontend/src/composables/useWorkbench.ts`:
- Around line 74-75: The code uses || to set defaults for novelId.value and
targetChapters.value which can override valid falsy API values; update the
assignments in useWorkbench (novelId.value = novelData.id || slug and
targetChapters.value = novelData.target_chapters || 100) to use the nullish
coalescing operator (??) so they become novelId.value = novelData.id ?? slug and
targetChapters.value = novelData.target_chapters ?? 100, preserving legitimate
falsy values like 0 or "" from the DTO.

In `@infrastructure/ai/providers/openai_provider.py`:
- Around line 66-73: The code is incorrectly treating transient server errors as
permanent "Responses API unsupported" signals by adding base_url to
_fallback_to_chat_cache on InternalServerError and "500"/"Internal Server Error"
checks; update the exception handling in both generate() and stream_generate()
so that only protocol-mismatch / client errors (e.g., 400, 404, "Not Found",
"Account invalid", "INVALID_ARGUMENT") cause adding base_url to
self.__class__._fallback_to_chat_cache, and remove InternalServerError and any
5xx/"500" string checks from the cache-path so transient 5xx failures are left
to normal retry logic instead of being permanently cached as unsupported.

In `@interfaces/api/v1/core/settings.py`:
- Line 175: The update endpoint currently returns updated.to_dict() which omits
created_at/updated_at and breaks the existing response contract; change the
response in the update handler to include the timestamp fields (matching
get_embedding_config()) by merging or adding created_at and updated_at from the
updated model into the returned dict (use the same keys/format as
get_embedding_config()) so callers continue to receive the timestamps after
saving.

---

Outside diff comments:
In `@application/engine/services/autopilot_daemon.py`:
- Around line 699-714: The code uses tw (chapter target words) only inside the
beat-generation branch but later references it in the non-beat branch (causing
UnboundLocalError if context_builder is absent); compute/assign tw (the chapter
target lookup) before the branching so both beat and non-beat paths can use it.
Specifically, move the logic that derives tw (the target word count) out of the
beat-only block so it runs before the if use_wf: / else: branching that calls
chapter_workflow.build_chapter_prompt and GenerationConfig(max_tokens=int(tw *
1.5)), ensuring functions/methods like context_builder, build_chapter_prompt,
GenerationConfig, and _stream_llm_with_stop_watch still receive the same
tw-derived value and preserving the existing _is_still_running(novel) check and
logging.

In `@application/workflows/auto_novel_generation_workflow.py`:
- Around line 737-766: The build_chapter_prompt function accepts a target_words
parameter but never forwards it to _build_prompt, so pass target_words through
when calling self._build_prompt (e.g., add target_words=target_words to the
_build_prompt keyword args); also verify the _build_prompt signature (and any
downstream Prompt-building helpers) accepts and uses target_words so the
chapter-length control is honored by build_chapter_prompt and its callers.

In `@frontend/src/components/onboarding/NovelSetupGuide.vue`:
- Around line 954-975: After awaiting detectWizardProgress() ensure the wizard
is still open before proceeding: in both the watcher and the onMounted handler,
capture the current visibility (or increment a local generation token/nonce)
before the await and then check that props.show is still true (or the token
matches) after the await; only then set currentStep.value and call
triggerStepGeneration(step); also propagate the same guard to any async paths
started by resetWizardStateForOpen so stale callbacks cannot call
triggerStepGeneration when stopGenerationOnClose has run.

In `@infrastructure/ai/providers/openai_provider.py`:
- Around line 126-135: The Responses stream exception handlers currently fall
back to Chat Completions even if the Responses stream has already emitted
chunks; add a local flag (e.g., started_emitting = False) in the streaming
generator that you set to True immediately before yielding the first chunk, then
in the except blocks (the handlers that call
self.__class__._fallback_to_chat_cache.add and logger.info) check that flag and
only perform the fallback when started_emitting is False; if started_emitting is
True, re-raise the caught exception instead of swallowing it so callers do not
receive mixed/duplicated content. Ensure the flag is referenced in the same
method handling the Responses stream and use the original exception when
re-raising.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: 5643d0b5-38c4-4699-b19c-1e7afc966d92

📥 Commits

Reviewing files that changed from the base of the PR and between 6dab93a and 21e5236.

📒 Files selected for processing (9)
  • application/engine/services/autopilot_daemon.py
  • application/workflows/auto_novel_generation_workflow.py
  • frontend/src/components/onboarding/NovelSetupGuide.vue
  • frontend/src/components/stats/StatsTopBar.vue
  • frontend/src/composables/useWorkbench.ts
  • frontend/src/views/Workbench.vue
  • infrastructure/ai/providers/openai_provider.py
  • interfaces/api/v1/core/settings.py
  • interfaces/api/v1/engine/generation.py

Comment on lines +75 to +79
<div class="action-trigger" @click="$emit('open-wizard')" role="button" aria-label="打开设置向导" title="新书设置向导">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="18" height="18">
<path fill="currentColor" d="M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zm-5 14H7v-2h7v2zm3-4H7v-2h10v2zm0-4H7V7h10v2z"/>
</svg>
</div>
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Make the new wizard trigger keyboard-accessible.

The new interactive control is mouse-only right now. Keyboard users can’t reliably trigger it.

Suggested fix
-      <div class="action-trigger" `@click`="$emit('open-wizard')" role="button" aria-label="打开设置向导" title="新书设置向导">
+      <div
+        class="action-trigger"
+        role="button"
+        tabindex="0"
+        aria-label="打开设置向导"
+        title="新书设置向导"
+        `@click`="$emit('open-wizard')"
+        `@keydown.enter.prevent`="$emit('open-wizard')"
+        `@keydown.space.prevent`="$emit('open-wizard')"
+      >
         <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="18" height="18">
           <path fill="currentColor" d="M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zm-5 14H7v-2h7v2zm3-4H7v-2h10v2zm0-4H7V7h10v2z"/>
         </svg>
       </div>
+.action-trigger:focus-visible {
+  outline: 2px solid rgba(255, 255, 255, 0.55);
+  outline-offset: 2px;
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/stats/StatsTopBar.vue` around lines 75 - 79, The
clickable div in StatsTopBar.vue (the element with class "action-trigger" and
`@click`="$emit('open-wizard')") is not keyboard-accessible; add tabindex="0" and
keyboard handlers so Enter and Space also trigger the same action (either via
`@keydown.enter` and `@keydown.space` that call $emit('open-wizard') or by adding an
onKeydown method that emits and prevents default for Space), keep role="button"
and aria-label, and ensure the handler prevents default for Space to avoid page
scrolling.

Comment on lines +74 to +75
novelId.value = novelData.id || slug
targetChapters.value = novelData.target_chapters || 100
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Use nullish fallback instead of || for DTO defaults.

|| can overwrite valid falsy values from API payloads. ?? is safer here.

Suggested fix
-    novelId.value = novelData.id || slug
-    targetChapters.value = novelData.target_chapters || 100
+    novelId.value = novelData.id ?? slug
+    targetChapters.value = novelData.target_chapters ?? 100
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
novelId.value = novelData.id || slug
targetChapters.value = novelData.target_chapters || 100
novelId.value = novelData.id ?? slug
targetChapters.value = novelData.target_chapters ?? 100
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/composables/useWorkbench.ts` around lines 74 - 75, The code uses
|| to set defaults for novelId.value and targetChapters.value which can override
valid falsy API values; update the assignments in useWorkbench (novelId.value =
novelData.id || slug and targetChapters.value = novelData.target_chapters ||
100) to use the nullish coalescing operator (??) so they become novelId.value =
novelData.id ?? slug and targetChapters.value = novelData.target_chapters ??
100, preserving legitimate falsy values like 0 or "" from the DTO.

Comment on lines +66 to 73
except (openai.NotFoundError, openai.BadRequestError, openai.InternalServerError, RuntimeError) as e:
logger.info(f"Responses API unsupported for {base_url}, falling back to chat completions: {str(e)}")
self.__class__._fallback_to_chat_cache.add(base_url)
except Exception as e:
# 某些网关在路径错误时可能不抛严格的 404 而是抛出其他错误,如果消息含有明确路径错误也尝试降级
if "404" in str(e) or "Not Found" in str(e) or "400" in str(e) or "Account invalid" in str(e) or "INVALID_ARGUMENT" in str(e):
if "404" in str(e) or "Not Found" in str(e) or "400" in str(e) or "500" in str(e) or "Internal Server Error" in str(e) or "Account invalid" in str(e) or "INVALID_ARGUMENT" in str(e):
logger.info(f"Gateway returned error for Responses API ({base_url}), falling back: {str(e)}")
self.__class__._fallback_to_chat_cache.add(base_url)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, examine the file structure and the specific lines mentioned
wc -l infrastructure/ai/providers/openai_provider.py

Repository: shenminglinyi/PlotPilot

Length of output: 118


🏁 Script executed:

# Read the context around lines 66-73 and 126-135
sed -n '50,90p' infrastructure/ai/providers/openai_provider.py

Repository: shenminglinyi/PlotPilot

Length of output: 2193


🏁 Script executed:

# Get broader context to understand the class and _fallback_to_chat_cache usage
sed -n '1,65p' infrastructure/ai/providers/openai_provider.py

Repository: shenminglinyi/PlotPilot

Length of output: 2185


🏁 Script executed:

# Check how _fallback_to_chat_cache is initialized and used throughout the file
rg '_fallback_to_chat_cache' infrastructure/ai/providers/openai_provider.py -B 2 -A 2

Repository: shenminglinyi/PlotPilot

Length of output: 2349


🏁 Script executed:

# Look at stream_generate method
sed -n '110,145p' infrastructure/ai/providers/openai_provider.py

Repository: shenminglinyi/PlotPilot

Length of output: 2183


🏁 Script executed:

# Search for retry logic in the callers mentioned in the scratchpad
rg -l "generate|stream_generate" application/ai/structured_json_pipeline.py application/workflows/auto_novel_generation_workflow.py application/engine/services/autopilot_daemon.py 2>/dev/null | head -5

Repository: shenminglinyi/PlotPilot

Length of output: 214


🏁 Script executed:

# Check one of the callers to see how they handle retries
sed -n '1,150p' application/ai/structured_json_pipeline.py | head -80

Repository: shenminglinyi/PlotPilot

Length of output: 2234


🏁 Script executed:

# Look for retry decorators or mechanisms
rg -i "retry|attempt|tries" infrastructure/ai/providers/openai_provider.py

Repository: shenminglinyi/PlotPilot

Length of output: 49


🏁 Script executed:

# Check if _fallback_to_chat_cache is ever cleared or reset
rg -i "clear|reset|_fallback_to_chat_cache\s*=" infrastructure/ai/providers/openai_provider.py

Repository: shenminglinyi/PlotPilot

Length of output: 49


🏁 Script executed:

# Examine how generate is called by checking if any retry wrapper is used
grep -n "await.*generate\|\.generate(" application/ai/structured_json_pipeline.py | head -20

Repository: shenminglinyi/PlotPilot

Length of output: 189


🏁 Script executed:

# Look at the broader error handling in OpenAIProvider to see if there's recovery
sed -n '1,350p' infrastructure/ai/providers/openai_provider.py | tail -100

Repository: shenminglinyi/PlotPilot

Length of output: 3800


🏁 Script executed:

# Check if there's any process-lifetime reset or TTL mechanism for _fallback_to_chat_cache
rg -i "fallback.*cache|cache.*reset|process.*lifetime" infrastructure/ai/providers/ --type py

Repository: shenminglinyi/PlotPilot

Length of output: 958


🏁 Script executed:

# Verify LLM_MAX_TOTAL_ATTEMPTS and retry policy
cat application/ai/llm_retry_policy.py 2>/dev/null || rg "LLM_MAX_TOTAL_ATTEMPTS" application/ai/

Repository: shenminglinyi/PlotPilot

Length of output: 171


🏁 Script executed:

# Check if OpenAIProvider is instantiated per-request or singleton
rg "OpenAIProvider\(" application/ infrastructure/ --type py | head -20

Repository: shenminglinyi/PlotPilot

Length of output: 228


Don't cache transient 5xx errors as permanent "unsupported API" signals.

InternalServerError and "500" / "Internal Server Error" string checks represent transient server faults, not proof that Responses API is unsupported. Adding these errors to _fallback_to_chat_cache permanently disables Responses API for that base_url for the process lifetime with no recovery mechanism. This bypasses the retryable error detection in callers (e.g., structured_json_pipeline.py), which expect 5xx failures to be retried. Keep the cache limited to protocol-mismatch errors (404, 400, invalid account). The same issue exists in both generate() and stream_generate() methods; for streaming, it also risks splicing partial content from one API with a different API's continuation.

Suggested adjustment
-                except (openai.NotFoundError, openai.BadRequestError, openai.InternalServerError, RuntimeError) as e:
+                except (openai.NotFoundError, openai.BadRequestError, RuntimeError) as e:
                     logger.info(f"Responses API unsupported for {base_url}, falling back to chat completions: {str(e)}")
                     self.__class__._fallback_to_chat_cache.add(base_url)
                 except Exception as e:
-                    if "404" in str(e) or "Not Found" in str(e) or "400" in str(e) or "500" in str(e) or "Internal Server Error" in str(e) or "Account invalid" in str(e) or "INVALID_ARGUMENT" in str(e):
+                    if "404" in str(e) or "Not Found" in str(e) or "400" in str(e) or "Account invalid" in str(e) or "INVALID_ARGUMENT" in str(e):
                         logger.info(f"Gateway returned error for Responses API ({base_url}), falling back: {str(e)}")
                         self.__class__._fallback_to_chat_cache.add(base_url)

Apply the same change to stream_generate() at lines 126–135.

🧰 Tools
🪛 Ruff (0.15.12)

[warning] 67-67: Use explicit conversion flag

Replace with conversion flag

(RUF010)


[warning] 70-70: Comment contains ambiguous (FULLWIDTH COMMA). Did you mean , (COMMA)?

(RUF003)


[warning] 72-72: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrastructure/ai/providers/openai_provider.py` around lines 66 - 73, The
code is incorrectly treating transient server errors as permanent "Responses API
unsupported" signals by adding base_url to _fallback_to_chat_cache on
InternalServerError and "500"/"Internal Server Error" checks; update the
exception handling in both generate() and stream_generate() so that only
protocol-mismatch / client errors (e.g., 400, 404, "Not Found", "Account
invalid", "INVALID_ARGUMENT") cause adding base_url to
self.__class__._fallback_to_chat_cache, and remove InternalServerError and any
5xx/"500" string checks from the cache-path so transient 5xx failures are left
to normal retry logic instead of being permanently cached as unsupported.

model_path=body.model_path,
)
return updated.to_api_dict()
return updated.to_dict()
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Preserve the update response contract.

updated.to_dict() drops created_at / updated_at, so this endpoint no longer matches get_embedding_config() or the prior API payload. That’s a breaking change for any caller that reads the timestamps after saving.

Suggested fix
-    return updated.to_dict()
+    return updated.to_api_dict()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
return updated.to_dict()
return updated.to_api_dict()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interfaces/api/v1/core/settings.py` at line 175, The update endpoint
currently returns updated.to_dict() which omits created_at/updated_at and breaks
the existing response contract; change the response in the update handler to
include the timestamp fields (matching get_embedding_config()) by merging or
adding created_at and updated_at from the updated model into the returned dict
(use the same keys/format as get_embedding_config()) so callers continue to
receive the timestamps after saving.

…fy_outline_to_beats return value to mock_context_builder - Fix fingerprint test to handle keyword args from state_extractor
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick comments (1)
tests/unit/application/workflows/test_auto_novel_generation_workflow.py (1)

37-37: Don't stub magnify_outline_to_beats globally in this shared fixture.

This forces every workflow test in this module down the single-segment path, and there are no dedicated tests elsewhere to cover the beat-generation and target-word plumbing introduced by this PR. Keep the default fixture closer to real behavior and stub the beat path only in tests that intentionally need it, or add a dedicated beat-path test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/application/workflows/test_auto_novel_generation_workflow.py` at
line 37, The shared test fixture currently forces every test down the
single-segment path by stubbing builder.magnify_outline_to_beats.return_value =
[], so remove that global stub from the shared fixture and let
magnify_outline_to_beats run its real behavior by default; for tests that need
the single-segment path, set builder.magnify_outline_to_beats.return_value = []
locally inside those test functions (or use a per-test monkeypatch), and add at
least one dedicated test that exercises the beat-generation path to validate
beat creation and target-word plumbing (asserting behaviors produced by
magnify_outline_to_beats and consumed by the workflow).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/unit/application/workflows/test_auto_novel_generation_workflow.py`:
- Line 37: The shared test fixture currently forces every test down the
single-segment path by stubbing builder.magnify_outline_to_beats.return_value =
[], so remove that global stub from the shared fixture and let
magnify_outline_to_beats run its real behavior by default; for tests that need
the single-segment path, set builder.magnify_outline_to_beats.return_value = []
locally inside those test functions (or use a per-test monkeypatch), and add at
least one dedicated test that exercises the beat-generation path to validate
beat creation and target-word plumbing (asserting behaviors produced by
magnify_outline_to_beats and consumed by the workflow).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: 01f688d7-0229-4e12-86a5-458a6ea137a8

📥 Commits

Reviewing files that changed from the base of the PR and between 21e5236 and dbc29f8.

📒 Files selected for processing (1)
  • tests/unit/application/workflows/test_auto_novel_generation_workflow.py

@Kobe9312
Copy link
Copy Markdown
Author

Kobe9312 commented May 3, 2026

关于 Backend CI 测试失败的说明

经过排查,本次 PR 没有引入任何测试失败。CI 中 Backend CI / test 的失败是主仓库 预存的 42 个测试失败,与本 PR 无关。

证据

1. 本 PR 涉及的文件,对应测试全部通过

改动文件 对应单元测试 结果
application/workflows/auto_novel_generation_workflow.py test_auto_novel_generation_workflow.py ✅ 19/19
infrastructure/ai/providers/openai_provider.py test_openai_provider.py ⚠️ 2 个预存失败
application/engine/services/autopilot_daemon.py 无独立单测
interfaces/api/v1/core/settings.py 无独立单测
interfaces/api/v1/engine/generation.py 无独立单测

2. 预存失败的 42 个测试(与 PR 无关)

在本地执行与 CI 相同命令(pytest tests/unit -q --tb=short --import-mode=importlib --strict-markers)的结果:

42 failed, 712 passed, 5 skipped in 113.28s

预存失败的测试分布:

test_context_builder.py              7 个(build_context 基本场景、storylines、plot_arc、scene_director 等)
test_context_builder_pov_firewall.py 6 个(POV 防火墙相关)
test_chromadb_vector_store_unit.py   6 个(ChromaDB 向量存储)
test_dependencies.py                 6 个(Qdrant / ChromaDB 依赖注入)
test_chapter_indexing_service.py     4 个(章节索引服务)
test_trigger_keyword_bible_integration 3 个(触发关键词集成)
test_hosted_write_service.py         2 个(托管写入服务)
test_state_updater.py                2 个(状态更新器)
test_openai_provider.py              2 个(OpenAI Provider)
test_context_budget_recent_chapters  1 个
test_auto_bible_generator.py         1 个
test_llm_service.py                  1 个
test_settings.py                     1 个

3. 为什么会触发?

主仓库的 Backend CI 触发条件是 PR 中修改了 .py 文件:

on:
  pull_request:
    paths:
      - "**.py"

近期没有修改 Python 文件的 PR,所以没触发 CI。本次 PR 开启了 CI 才发现主仓库存在预存测试失败。

建议

  • 如果接受当前修复:可以跳过 CI 要求直接合并(42 个失败均为预存,不影响本次改动)
  • 如果需要 CI 全绿:需先修复主仓库的 42 个预存测试失败,再进行合并

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant