fix(dashboard): resolve default model from settings for mission interview #49
Conversation
The mission interview route passed `modelProvider`/`modelId` from the request body directly without resolving the configured default model from settings. When "Use default" was selected, both values were `undefined`, causing `createFnAgent` to use pi's internal fallback instead of the user's configured default (e.g. `zai/glm-5.1`). This produced "AI returned no valid JSON" errors.

Use `resolvePlanningSettingsModel()` to resolve the effective model from the settings hierarchy (planning-specific → project → global defaults), with explicit request overrides still taking precedence.

Closes Runfusion#48

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
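The resolution order and override precedence described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the settings field names and the `ModelRef` shape are assumptions inferred from the description and the review diagram below.

```typescript
// Assumed settings shape (field names inferred from this PR, not verified).
interface PlanningSettings {
  planningProvider?: string;       // planning-specific (highest priority)
  planningModelId?: string;
  planningGlobalProvider?: string; // global planning tier
  planningGlobalModelId?: string;
  defaultProvider?: string;        // global default (lowest priority)
  defaultModelId?: string;
}

interface ModelRef {
  provider?: string;
  modelId?: string;
}

// Stand-in for resolvePlanningSettingsModel(): walk the tiers with ??.
function resolvePlanningSettingsModel(s: PlanningSettings): ModelRef {
  return {
    provider: s.planningProvider ?? s.planningGlobalProvider ?? s.defaultProvider,
    modelId: s.planningModelId ?? s.planningGlobalModelId ?? s.defaultModelId,
  };
}

// Route-side logic: explicit request overrides still win over resolved defaults.
function resolveEffectiveModel(
  settings: PlanningSettings,
  modelProvider?: string,
  modelId?: string,
): ModelRef {
  const effectiveModel = resolvePlanningSettingsModel(settings);
  return {
    provider: modelProvider ?? effectiveModel.provider,
    modelId: modelId ?? effectiveModel.modelId,
  };
}
```

With only `defaultProvider: "zai"` and `defaultModelId: "glm-5.1"` configured and no request override, this now yields `zai/glm-5.1` instead of `undefined`/`undefined`.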
📝 Walkthrough

This PR fixes mission interview to respect configured default models by using `resolvePlanningSettingsModel()`.

Changes: Mission Interview Default Model Resolution
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/dashboard/src/mission-routes.ts (1)
402-419: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Normalize model overrides before the `??` fallback to prevent empty-string override bugs.

Empty-string (`""`) values currently bypass the "both-or-neither" intent and still override resolved defaults via `??`, which can pass invalid model settings into `createMissionInterviewSession`.

Suggested fix:
```diff
-    const effectiveModel = resolvePlanningSettingsModel(settings);
-    const resolvedProvider = modelProvider ?? effectiveModel.provider;
-    const resolvedModelId = modelId ?? effectiveModel.modelId;
+    const normalizedProvider =
+      typeof modelProvider === "string" && modelProvider.trim().length > 0
+        ? modelProvider.trim()
+        : undefined;
+    const normalizedModelId =
+      typeof modelId === "string" && modelId.trim().length > 0
+        ? modelId.trim()
+        : undefined;
+
+    if ((normalizedProvider === undefined) !== (normalizedModelId === undefined)) {
+      throw badRequest("Both modelProvider and modelId must be provided together, or neither should be provided");
+    }
+
+    const effectiveModel = resolvePlanningSettingsModel(settings);
+    const resolvedProvider = normalizedProvider ?? effectiveModel.provider;
+    const resolvedModelId = normalizedModelId ?? effectiveModel.modelId;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/dashboard/src/mission-routes.ts` around lines 402 - 419, Normalize incoming model override strings (treat empty string as undefined) before applying the both-or-neither check and the nullish-coalescing fallback: create normalizedModelProvider and normalizedModelId (convert "" to undefined, e.g. trim and if empty set undefined), use those normalized variables in the existing conditional that throws the badRequest and in computing resolvedProvider and resolvedModelId (where resolvePlanningSettingsModel(settings) is used). This ensures empty-string overrides do not bypass validation or win over defaults when computing resolvedProvider/resolvedModelId used for createMissionInterviewSession.
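To see why the suggested normalization matters: `??` falls back only on `null`/`undefined`, so an empty-string override silently wins over the resolved default. A minimal sketch of the pitfall and the fix — the `normalizeOverride`, `buggyResolve`, and `fixedResolve` helpers are hypothetical names for illustration, not code from this PR:

```typescript
// Hypothetical helper: treat empty / whitespace-only override strings
// as "not provided".
function normalizeOverride(value: unknown): string | undefined {
  return typeof value === "string" && value.trim().length > 0 ? value.trim() : undefined;
}

// ?? falls back only on null/undefined, so "" slips through unchanged.
function buggyResolve(override: string | undefined, fallback: string): string | undefined {
  return override ?? fallback; // "" wins here, overriding the resolved default
}

function fixedResolve(override: string | undefined, fallback: string): string | undefined {
  return normalizeOverride(override) ?? fallback; // "" now falls back correctly
}
```

Here `buggyResolve("", "zai")` returns `""`, while `fixedResolve("", "zai")` returns `"zai"`.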
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 5d9c86af-4ccd-4692-a070-aa869db576be
📒 Files selected for processing (2)
- packages/dashboard/src/__tests__/mission-e2e.test.ts
- packages/dashboard/src/mission-routes.ts
Greptile Summary

This fix ensures the mission interview route resolves the user's configured default model from settings instead of passing unresolved request values through.
Confidence Score: 4/5 — Safe to merge. The change is a narrow, well-tested fix that wires an already-existing resolver into the interview route, and the production code path is correct.

Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Route as POST /interview/start
    participant Store as TaskStore
    participant Resolver as resolvePlanningSettingsModel
    participant Session as createMissionInterviewSession
    Client->>Route: { missionTitle, modelProvider?, modelId? }
    Route->>Store: getSettings()
    Store-->>Route: settings
    Route->>Resolver: resolvePlanningSettingsModel(settings)
    Note over Resolver: planningProvider/modelId<br/>→ planningGlobalProvider/modelId<br/>→ defaultProvider/modelId
    Resolver-->>Route: { provider?, modelId? }
    Note over Route: resolvedProvider = modelProvider ?? effectiveModel.provider<br/>resolvedModelId = modelId ?? effectiveModel.modelId
    Route->>Session: (ip, title, rootDir, overrides, resolvedProvider, resolvedModelId)
    Session-->>Route: sessionId
    Route-->>Client: 201 { sessionId }
```
Reviews (1): Last reviewed commit: "fix(dashboard): resolve default model fr..."
```ts
it("POST /api/missions/interview/start uses planning-specific model over global default", async () => {
  // Configure scoped store with both planning-specific and global defaults
  scopedStore = {
    getRootDir: vi.fn().mockReturnValue(scopedRootDir),
    getSettings: vi.fn().mockResolvedValue({
      promptOverrides: {},
      planningProvider: "anthropic",
      planningModelId: "claude-sonnet-4-5",
      defaultProvider: "zai",
      defaultModelId: "glm-5.1",
    }),
    getMissionStore: vi.fn().mockReturnValue(createMockMissionStore()),
  } as unknown as TaskStore;
  vi.spyOn(projectStoreResolver, "getOrCreateProjectStore").mockResolvedValue(scopedStore);

  const createSpy = vi
    .spyOn(missionInterviewModule, "createMissionInterviewSession")
    .mockResolvedValueOnce("planning-model-session-id");

  const { app } = buildApp();
  const res = await request(
    app,
    "POST",
    `/api/missions/interview/start?projectId=${projectId}`,
    JSON.stringify({ missionTitle: "Planning Model Mission" }),
    { "content-type": "application/json" }
  );

  expect(res.status).toBe(201);
  // Planning-specific model should take priority over global default
  expect(createSpy).toHaveBeenCalledWith(
    expect.any(String),
    "Planning Model Mission",
    scopedRootDir,
    {},
    "anthropic",
    "claude-sonnet-4-5",
  );
```
Global-planning tier not covered by route tests
The new tests exercise tier 1 (project-level `planningProvider`/`planningModelId`) and tier 4 (`defaultProvider`/`defaultModelId`), but the intermediate `planningGlobalProvider`/`planningGlobalModelId` tier in `resolvePlanningSettingsModel` has no route-layer coverage. If the resolver's priority order is ever changed, a bug could silently survive. Adding a test with only `planningGlobalProvider`/`planningGlobalModelId` set (and no project-level planning fields) would close this gap.
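The property the missing route test would pin down can be sketched as a standalone check of the tier ordering. The resolver below is a stand-in mirroring the chain documented in this PR (planning-specific → global planning → default), not the real import from the dashboard package:

```typescript
// Stand-in resolver; field names are assumptions taken from this PR's description.
type Settings = {
  planningProvider?: string;
  planningModelId?: string;
  planningGlobalProvider?: string;
  planningGlobalModelId?: string;
  defaultProvider?: string;
  defaultModelId?: string;
};

function resolvePlanningSettingsModel(s: Settings) {
  return {
    provider: s.planningProvider ?? s.planningGlobalProvider ?? s.defaultProvider,
    modelId: s.planningModelId ?? s.planningGlobalModelId ?? s.defaultModelId,
  };
}

// Only the global-planning tier is set (plus a default below it):
// the global-planning values must win over the defaults.
const resolved = resolvePlanningSettingsModel({
  planningGlobalProvider: "anthropic",
  planningGlobalModelId: "claude-sonnet-4-5",
  defaultProvider: "zai",
  defaultModelId: "glm-5.1",
});
```

A real route-layer test would instead assert that these values reach `createMissionInterviewSession`, following the shape of the existing tests above.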
Summary

- When "Use default" is selected, the request sends `undefined` for both `modelProvider` and `modelId`
- The route now resolves the effective model via `resolvePlanningSettingsModel()`, which chains through: planning-specific → project defaults → global defaults

Root Cause

`POST /api/missions/interview/start` in `mission-routes.ts` passed the request body's `modelProvider`/`modelId` directly to `createMissionInterviewSession` without resolving defaults. When "Use default" was selected, both were `undefined`, causing `createFnAgent` to use pi's internal fallback instead of the user's configured default model (e.g. `zai/glm-5.1`). This produced "AI returned no valid JSON. Please try again."

Test plan
Closes #48
🤶 Generated with Claude Code