# AI Ensemble Chat

A production-style chat application built with Next.js and the Vercel AI SDK.
This portfolio project builds on the official Vercel AI Chatbot template and emphasizes:
- Multi-model orchestration
- Real-time chat UX
- Clean full-stack architecture
- Practical product features (comparison mode, winner selection, persistence)
Adapted from the official Vercel AI Chatbot template: https://github.com/vercel/ai-chatbot

## Features
- Single-response chat mode
- Quad-response mode with live per-card streaming (4 model outputs from one prompt)
- Winner selection (`Use for Next`) to route the next prompt to the preferred model and switch back to single mode
- Header view-mode toggle (`single`/`quad`) with tooltips
- Smart scrolling behavior:
  - Quad finishes anchored at the top cards
  - Single finishes jump to the newest response
- Auth (guest + regular user)
- Persistent chat history in Postgres
- File upload support via Vercel Blob
- Optional resumable streaming with Redis
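Quad mode's per-card live streaming comes down to fanning one prompt out to several concurrently consumed streams, each mutating only its own card. A minimal, framework-free sketch of that pattern (`CardState`, `fakeStream`, and the callback shape are illustrative assumptions, not this repo's actual code):

```typescript
// Illustrative sketch of quad-mode fan-out: one prompt, N model streams
// consumed concurrently, each updating only its own card's state.
// CardState, fakeStream, and the model names are hypothetical.

type CardState = { model: string; text: string; done: boolean };

// Stand-in for a real token stream from a provider.
async function* fakeStream(chunks: string[]): AsyncGenerator<string> {
  for (const chunk of chunks) yield chunk;
}

async function runQuad(
  models: string[],
  streamFor: (model: string) => AsyncIterable<string>,
  onUpdate: (cards: readonly CardState[]) => void,
): Promise<CardState[]> {
  const cards: CardState[] = models.map((model) => ({ model, text: "", done: false }));
  // All streams run concurrently; each card re-renders as its chunks arrive.
  await Promise.all(
    cards.map(async (card) => {
      for await (const chunk of streamFor(card.model)) {
        card.text += chunk;
        onUpdate(cards);
      }
      card.done = true;
      onUpdate(cards);
    }),
  );
  return cards;
}
```

Winner selection then only needs to read the chosen card's `model` field.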
## Tech Stack

- Next.js (App Router)
- TypeScript
- Vercel AI SDK (`ai`, `@ai-sdk/react`, AI Gateway)
- Auth.js (NextAuth)
- Drizzle ORM + Postgres
- shadcn/ui + Radix + Tailwind CSS
- SWR + Sonner + Framer Motion
## Prerequisites

Install these first:
- Node.js 20+
- pnpm 9+
- A Postgres database
- A Vercel Blob token
- AI Gateway key (for local/non-Vercel usage)
Optional:
- Redis (for resumable streams)
Helpful setup links:
- AI Gateway: https://vercel.com/ai-gateway
- Vercel Blob Store: https://vercel.com/docs/vercel-blob
- Postgres: https://vercel.com/docs/postgres
- Redis: https://vercel.com/docs/redis
## Setup

```bash
git clone <your-repo-url>
cd ai-ensemble-chat
pnpm install
```

If you use Codex/agent workflows, note that cloning this repo and running `pnpm install`
sets up the app, but may not automatically install/sync local agent skills on every machine.
- App runtime setup: covered by this README
- Agent skill setup: managed by your Codex environment/tooling
This repo includes `skills-lock.json` to track skill state, but you may still need to run
your local skill install/sync step when setting up a new workstation.
AI SDK skill reference:
Copy `.env.example` to `.env.local` and fill in values:

```bash
cp .env.example .env.local
```

Required variables:

- `AUTH_SECRET`
- `AI_GATEWAY_API_KEY` (required outside Vercel)
- `POSTGRES_URL`
- `BLOB_READ_WRITE_TOKEN`

Optional:

- `REDIS_URL` (enables resumable stream support)

Run migrations and start the dev server:

```bash
pnpm db:migrate
pnpm dev
```

Open: http://localhost:3000
## Usage

- Use the header toggle:
  - `single`: one assistant response
  - `quad`: four assistant candidates streamed live per card
To compare models:

1. Send a prompt in Quad mode
2. Compare the 4 responses
3. Click `Use for Next` on the best one
4. The selected model becomes active and the mode switches to `single` for the next prompt
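The `Use for Next` step is essentially a single state transition. A hedged sketch of its shape (the `ChatState` type and function name are assumptions for illustration; the app's real state wiring lives in `components/chat.tsx` and will differ):

```typescript
// Hypothetical shape of the winner-selection transition described above.

type ViewMode = "single" | "quad";
type ChatState = { mode: ViewMode; activeModel: string };

// Promoting a quad card routes the next prompt to that card's model
// and drops the UI back to single mode.
function useForNext(state: ChatState, winningModel: string): ChatState {
  return { ...state, mode: "single", activeModel: winningModel };
}
```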
Use the model dropdown in the input toolbar to choose provider/model.
## Scripts

- `pnpm dev` - start the local development server
- `pnpm build` - migrate the DB, then build
- `pnpm start` - start the production server
- `pnpm test` - run Playwright tests
- `pnpm lint` - run linter checks
- `pnpm format` - auto-fix lint/style issues
- `pnpm db:migrate` - apply migrations
- `pnpm db:studio` - open Drizzle Studio
## Environment Variables

See `.env.example` for the full list.

```bash
AUTH_SECRET=
AI_GATEWAY_API_KEY=
BLOB_READ_WRITE_TOKEN=
POSTGRES_URL=
REDIS_URL=
```

## Project Structure

```
app/
  (chat)/
    api/chat/route.ts    # chat API (single + quad modes)
components/
  chat.tsx               # main chat state + transport wiring
  chat-header.tsx        # top utility bar + mode toggle
  message.tsx            # message rendering + quad cards
  multimodal-input.tsx   # input, model selector, uploads
lib/
  ai/models.ts           # curated model list
  ai/providers.ts        # gateway/provider wiring
  ai/entitlements.ts     # rate/usage limits by user type
  db/queries.ts          # persistence layer
```
Areas this project emphasizes:
- Multi-model comparison UX and state synchronization
- Robust request-state handling (avoids stale mode/model payload bugs)
- Practical API/UI design for advanced chat features without over-engineering
- Clear separation of concerns between API routes, UI state, and rendering components
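The "stale mode/model payload" bug class is worth a concrete illustration: an async send handler that captures `mode` by value will submit the old mode if the user toggles before the request body is built. A framework-free sketch of the fix, reading through a single mutable source of truth at send time (all names here are hypothetical, not the repo's code):

```typescript
// Illustrative only: shows why reading mode/model at send time (via a
// shared mutable box) avoids the stale-closure payload bug.

type Mode = "single" | "quad";

function makeChat() {
  const latest = { mode: "single" as Mode, model: "default-model" };

  return {
    setMode(mode: Mode) {
      latest.mode = mode;
    },
    setModel(model: string) {
      latest.model = model;
    },
    async send(prompt: string) {
      await Promise.resolve(); // stand-in for pre-request async work
      // Reading `latest` here (not a value captured when send() was
      // created) picks up any toggle made after the call started.
      return { prompt, mode: latest.mode, model: latest.model };
    },
  };
}
```

In React the same idea is typically expressed with a ref that mirrors the latest state.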
## Troubleshooting

Model responses failing? Check:

- `AI_GATEWAY_API_KEY` is valid (if running locally)
- provider/model IDs in `lib/ai/models.ts` are supported by your gateway setup

Database errors? Check:

- `POSTGRES_URL` is set correctly
- `pnpm db:migrate` ran successfully

File uploads failing? Check:

- `BLOB_READ_WRITE_TOKEN` is set

Resumable streams not working? Set:

- `REDIS_URL`
## License

MIT