Streamline featured models dropdown #1682
Conversation
Reduces dropdown friction by surfacing only the three models we want users to reach for first. Promotes Claude Opus 4.7 to the top spot (replacing 4.5), bumps Sonnet 4.5 to 4.6, and removes GPT-5.2, Gemini 2.5 Flash, Gemini 3 Pro, and Grok 4 from the featured list — those remain available in the full model list. Made-with: Cursor
📝 Walkthrough
This pull request updates the featured AI models configuration by replacing six older model entries with three newer ones and reordering their display priority, positioning Anthropic models first and the default model last.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ❌ 1 failed check (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 94e9062b0d
```diff
     {
-      id: "anthropic/claude-opus-4.5",
-      displayName: "Claude Opus 4.5",
+      id: "anthropic/claude-opus-4.7",
```
Update tests for the changed featured model set
Changing the featured list here from openai/gpt-5.2 to anthropic/claude-opus-4.7 changes organizeModels output, but lib/ai/__tests__/organizeModels.test.ts still expects openai/gpt-5.2 to appear in featuredModels. Since .github/workflows/test.yaml runs pnpm test on every PR, this commit will leave CI red until the test fixture/assertion is updated to match the new featured IDs.
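A hedged sketch of what the updated expectation could look like. The exact fixture shape and matchers in `organizeModels.test.ts` are not shown in this thread, so the structure below is illustrative only, and `"openai/gpt-5.4-mini"` is an assumed literal for `DEFAULT_MODEL`:

```typescript
type FeaturedModel = {
  id: string;
  isPro: boolean;
  pill?: string;
};

// Assumed featured configuration after this PR (order matters).
const FEATURED_MODELS: FeaturedModel[] = [
  { id: "anthropic/claude-opus-4.7", isPro: true, pill: "New" },
  { id: "anthropic/claude-sonnet-4.6", isPro: true },
  { id: "openai/gpt-5.4-mini", isPro: false, pill: "Fast" },
];

// The updated test should assert the new IDs in this exact order
// and drop the old expectation of "openai/gpt-5.2".
const ids = FEATURED_MODELS.map((m) => m.id);
if (ids[0] !== "anthropic/claude-opus-4.7") throw new Error("opus should be first");
if (ids[1] !== "anthropic/claude-sonnet-4.6") throw new Error("sonnet should be second");
if (ids.includes("openai/gpt-5.2")) throw new Error("gpt-5.2 is no longer featured");
```

In a vitest/jest file this would become `expect(featuredModels.map(m => m.id)).toEqual([...])` style assertions against the `organizeModels` output.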
No issues found across 1 file
Confidence score: 5/5
- Automated review surfaced no issues in the provided summaries.
- No files require special attention.
Requires human review: manual verification that the new model IDs (Claude Opus 4.7, Sonnet 4.6) exist in the AI Gateway, to prevent them from being filtered out of the UI.
Actionable comments posted: 2
🧹 Nitpick comments (1)
lib/ai/featuredModels.ts (1)
45-50: Avoid metadata drift when using `DEFAULT_MODEL` in featured config.
`id` comes from `DEFAULT_MODEL`, but `displayName`/`description`/`tooltip` are hardcoded. If `DEFAULT_MODEL` changes, this entry can become semantically incorrect in the UI. Consider pinning a literal ID here or deriving the text from model metadata at render time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/ai/featuredModels.ts` around lines 45 - 50, The featuredModels entry uses DEFAULT_MODEL for id but hardcodes displayName/description/tooltip which can drift; either replace id: DEFAULT_MODEL with the concrete literal model ID you intend to pin (so the displayName/description/tooltip remain correct), or remove the hardcoded displayName/description/tooltip from the object in lib/ai/featuredModels.ts and instead resolve those fields at render time from your model metadata lookup (using DEFAULT_MODEL to select the metadata in the UI rendering code); update the code paths that consume this object (e.g., UI component that reads displayName/description/tooltip) accordingly so they pull live metadata when needed.
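One way to address the drift, sketched under the assumption of a metadata lookup keyed by model ID. The `modelMetadata` map, its shape, and the `DEFAULT_MODEL` value below are illustrative, not taken from the repo:

```typescript
// Assumed constant; in the repo this lives in lib/consts.ts.
const DEFAULT_MODEL = "openai/gpt-5.4-mini";

// Hypothetical metadata lookup keyed by model ID, e.g. built from the
// AI Gateway response at render time.
const modelMetadata: Record<string, { displayName: string; description: string }> = {
  "openai/gpt-5.4-mini": {
    displayName: "GPT-5.4 Mini",
    description: "Great for everyday",
  },
};

// Derive display fields from metadata so the featured entry cannot drift
// when DEFAULT_MODEL changes; fall back to the raw ID if unknown.
function featuredDefaultEntry() {
  const meta = modelMetadata[DEFAULT_MODEL];
  return {
    id: DEFAULT_MODEL,
    displayName: meta?.displayName ?? DEFAULT_MODEL,
    description: meta?.description ?? "",
    isPro: false,
    pill: "Fast",
  };
}
```

The alternative the comment suggests, pinning a literal ID instead of `DEFAULT_MODEL`, is simpler but re-introduces a second place to update when the default model changes.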
📒 Files selected for processing (1)
lib/ai/featuredModels.ts
```diff
+      id: "anthropic/claude-opus-4.7",
+      displayName: "Claude Opus 4.7",
       isPro: true,
       pill: "New",
       description: "Great for complex reasoning",
       tooltip: "Anthropic's latest model for demanding reasoning and complex problem solving",
     },
     {
-      id: "anthropic/claude-sonnet-4.5",
-      displayName: "Claude Sonnet 4.5",
+      id: "anthropic/claude-sonnet-4.6",
+      displayName: "Claude Sonnet 4.6",
       isPro: true,
       description: "Great for agents",
       tooltip: "Anthropic's balanced model for agentic workflows",
     },
     {
-      id: "google/gemini-2.5-flash-lite",
-      displayName: "Gemini 2.5 Flash",
+      id: DEFAULT_MODEL,
+      displayName: "GPT-5.4 Mini",
       isPro: false,
       pill: "Fast",
-      description: "Great for speed",
-      tooltip: "Google's fastest model",
-    },
-    {
-      id: "google/gemini-3-pro-preview",
-      displayName: "Gemini 3 Pro",
-      isPro: true,
-      description: "Google's latest model",
-      tooltip: "Google's newest Gemini 3 Pro preview model",
-    },
-    {
-      id: "xai/grok-4",
-      displayName: "Grok 4",
-      isPro: true,
-      description: "Great for writing",
-      tooltip: "Xai's reasoning model",
+      description: "Great for everyday",
+      tooltip: "OpenAI's faster, cost-optimized model",
```
Update organize-models tests to match the new featured set and order.
The current test snippet in lib/ai/__tests__/organizeModels.test.ts still expects openai/gpt-5.2 as featured, which no longer matches this configuration and weakens regression coverage for featured ordering.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/ai/featuredModels.ts` around lines 30 - 50, Update the organize-models
test expectations to reflect the new featured models and their order: replace
assertions that expect "openai/gpt-5.2" with checks for
"anthropic/claude-opus-4.7" first, then "anthropic/claude-sonnet-4.6", and then
the DEFAULT_MODEL entry (displayName "GPT-5.4 Mini") in that exact order; adjust
any checks for isPro/pill/description/tooltip to match the featured entries
(e.g., isPro true for the Anthropic models, pill "Fast" for DEFAULT_MODEL) in
organizeModels.test.ts so the test's featured set and ordering align with the
array in featuredModels.ts.
```diff
+      id: "anthropic/claude-opus-4.7",
+      displayName: "Claude Opus 4.7",
       isPro: true,
       pill: "New",
       description: "Great for complex reasoning",
       tooltip: "Anthropic's latest model for demanding reasoning and complex problem solving",
     },
     {
-      id: "anthropic/claude-sonnet-4.5",
-      displayName: "Claude Sonnet 4.5",
+      id: "anthropic/claude-sonnet-4.6",
+      displayName: "Claude Sonnet 4.6",
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Read-only verification of required featured IDs from the models endpoint.
# Set BASE_URL to the environment you want to validate.
set -euo pipefail
BASE_URL="${BASE_URL:-http://localhost:3000}"
REQUIRED_IDS=(
  "anthropic/claude-opus-4.7"
  "anthropic/claude-sonnet-4.6"
)
ids="$(curl -fsS "${BASE_URL}/api/ai/models" | jq -r '.[].id')"
missing=0
for id in "${REQUIRED_IDS[@]}"; do
  if grep -Fxq "$id" <<< "$ids"; then
    echo "FOUND: $id"
  else
    echo "MISSING: $id"
    missing=1
  fi
done
exit "$missing"
```

Repository: recoupable/chat
Length of output: 148
🏁 Script executed:

```shell
# First, let's examine the key files mentioned in the review
echo "=== Examining featuredModels.ts ===" && \
wc -l lib/ai/featuredModels.ts && \
echo && \
echo "=== Examining organizeModels.ts ===" && \
wc -l lib/ai/organizeModels.ts && \
echo && \
echo "=== Examining consts.ts ===" && \
wc -l lib/consts.ts
```

Length of output: 235

🏁 Script executed:

```shell
# Read the full featuredModels.ts file
cat -n lib/ai/featuredModels.ts
```

Length of output: 2578

🏁 Script executed:

```shell
# Read organizeModels.ts to understand the filtering logic
cat -n lib/ai/organizeModels.ts
```

Length of output: 2252

🏁 Script executed:

```shell
# Check if there's model configuration elsewhere
fd -t f -e ts -e js -e json | grep -i 'model' | head -20
```

Length of output: 353

🏁 Script executed:

```shell
# Search for where availableModelMap is populated or defined
rg 'availableModelMap' --type ts -B 3 -A 3
```

Length of output: 769

🏁 Script executed:

```shell
cat -n lib/ai/__tests__/organizeModels.test.ts
```

Length of output: 1256

🏁 Script executed:

```shell
# Check what DEFAULT_MODEL is set to in consts.ts
grep -n "DEFAULT_MODEL" lib/consts.ts | head -5
```

Length of output: 113

🏁 Script executed:

```shell
# Search for any API response mocking or model fixtures in tests
rg 'claude-opus|claude-sonnet|gpt-5.4-mini' --type ts -B 2 -A 2
```

Length of output: 903

🏁 Script executed:

```shell
# Look for getAvailableModels to understand how models are fetched
cat -n lib/ai/getAvailableModels.ts
```

Length of output: 1304
Verify both new Anthropic model IDs are available in the AI Gateway before merging.
The organizeModels function silently skips featured models that don't exist in the gateway response—missing IDs won't cause errors, they'll just reduce the featured section without any warning. Additionally, the test file uses openai/gpt-5.2 as a featured model example, but the current FEATURED_MODELS config lists anthropic/claude-opus-4.7, anthropic/claude-sonnet-4.6, and openai/gpt-5.4-mini—the tests need updating to reflect the actual featured configuration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/ai/featuredModels.ts` around lines 30 - 39, The FEATURED_MODELS list
contains new Anthropic IDs ("anthropic/claude-opus-4.7",
"anthropic/claude-sonnet-4.6") but organizeModels currently silently drops any
featured model not returned by the AI Gateway; update organizeModels to check
gateway availability for each id in FEATURED_MODELS and emit a clear warning (or
debug log) when a featured model ID is missing so maintainers know to verify
availability, and update the test fixtures that reference featured models
(replace example "openai/gpt-5.2" with one of the actual FEATURED_MODELS like
"openai/gpt-5.4-mini" or the new Anthropic IDs) so tests reflect the real
featured configuration.
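A minimal sketch of the suggested warning, assuming `organizeModels` filters featured entries against the set of IDs returned by the AI Gateway. The function name, parameter shapes, and sample IDs below are illustrative, not the repo's actual signatures:

```typescript
type FeaturedModel = { id: string; displayName: string };

// Filter featured models against gateway availability, but log a warning
// instead of silently dropping entries that the gateway does not return.
function organizeFeatured(
  featured: FeaturedModel[],
  availableIds: Set<string>,
): FeaturedModel[] {
  return featured.filter((model) => {
    const available = availableIds.has(model.id);
    if (!available) {
      // Surface the silent drop so maintainers know to verify the ID.
      console.warn(`Featured model "${model.id}" not returned by the AI Gateway; hiding it.`);
    }
    return available;
  });
}

// Example: sonnet is missing from the gateway response, so only opus survives.
const result = organizeFeatured(
  [
    { id: "anthropic/claude-opus-4.7", displayName: "Claude Opus 4.7" },
    { id: "anthropic/claude-sonnet-4.6", displayName: "Claude Sonnet 4.6" },
  ],
  new Set(["anthropic/claude-opus-4.7"]),
);
```

Keeping the filter behavior unchanged and only adding the log keeps the UI resilient while making a misconfigured featured ID visible in server logs.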
Summary
Featured order is now: Claude Opus 4.7 → Claude Sonnet 4.6 → GPT-5.4 Mini.
Why
Reduce dropdown noise and surface only the three models we want users to reach for first.
Heads up (verify before merging)
The two new Anthropic model IDs must exist in the AI Gateway or `organizeModels.ts` will silently filter them out:
Hit `/api/ai/models` in the running app and confirm both IDs are present in the response before merging.
Test plan
Made with Cursor
Summary by cubic
Streamlined the featured models dropdown to show three models in this order: Claude Opus 4.7, Claude Sonnet 4.6, and GPT-5.4 Mini. Removed GPT-5.2, Gemini 2.5 Flash, Gemini 3 Pro, and Grok 4 from featured; they remain in the full list.
- `anthropic/claude-opus-4.7` and `anthropic/claude-sonnet-4.6` must exist in the AI Gateway or `organizeModels.ts` will filter them out.
- Verify `/api/ai/models` returns both IDs before merging.
Written for commit 94e9062. Summary will update on new commits.