TuvixRSS is a modern RSS reader with AI features, built on Cloudflare Workers.
- API: Hono (Cloudflare Workers), tRPC, Drizzle ORM, Cloudflare D1
- Frontend: React, TanStack Router, TanStack Query, Tailwind CSS
- Auth: Better Auth (email/password)
- Observability: Sentry (errors, performance, metrics)
- Email: Resend
- Monorepo: pnpm workspaces (`packages/api`, `packages/app`, `packages/tricorder`)
```
packages/
  api/          # Cloudflare Workers API (Hono + tRPC)
    src/
      routers/  # tRPC route handlers
      services/ # Business logic (RSS fetching, email, etc.)
      auth/     # Better Auth configuration
      db/       # Drizzle schema and migrations
  app/          # React frontend (Vite + TanStack)
  tricorder/    # RSS/Atom feed discovery library
```
⛔ NEVER run production database migrations or modifications without explicit user permission.
This includes but is not limited to:
- `wrangler d1 execute <db> --remote`
- Any SQL migrations against production databases
- Schema alterations on live systems
- Data modifications in production
Required Process:
- Generate migrations locally
- Show the user what will change
- Explain impact and safety
- ASK FOR PERMISSION
- Only after explicit approval, proceed
Rationale: Production database operations are irreversible and can cause data loss, service disruption, or schema conflicts. Always give the user control over these decisions.
Exception: Local/dev database operations (`--local`, `db:migrate:local`) are safe to run without asking.
⛔ NEVER deploy to production. Only local development is allowed.
Deployment is explicitly forbidden and handled by CI/CD pipelines.
✅ Staging deployments are allowed via manual workflow dispatch.
Staging provides a production-like environment for testing PRs before they're merged to main.
How to Deploy to Staging:
- Go to Actions → Deploy to Staging → Run workflow
- Select the branch/PR to deploy (default: `main`)
- Choose whether to seed test data (optional)
- Click Run workflow
What Happens:
- API and App are deployed to staging environment
- Staging database is wiped clean (all data deleted)
- Fresh migrations are applied from scratch
- Optional test data seeding (if selected)
Key Points:
- Staging uses separate infrastructure (D1 database, Worker, Pages project)
- Database starts fresh on every deployment (no migration conflicts)
- Last deployment wins (concurrent deployments are cancelled)
- Perfect for testing PRs in a production-like environment
Staging Secrets Required:
```
STAGING_D1_DATABASE_ID                 # Separate D1 database for staging
STAGING_VITE_API_URL                   # Staging API URL
STAGING_CLOUDFLARE_PAGES_PROJECT_NAME  # Staging Pages project name
```
When to Use Staging:
- Test PRs before merging to main
- Verify database migrations work correctly
- Integration testing with production-like infrastructure
- Demo features to stakeholders
When NOT to Use Staging:
- Local development (use `pnpm dev` instead)
- Quick iteration (too slow compared to local)
- Testing that requires preserving data (staging wipes on each deploy)
Database Migration Workflow:
- Modify schema in `packages/api/src/db/schema.ts`
- Generate migration: `pnpm db:generate`
- Review generated SQL in `packages/api/drizzle/`
- Apply locally: `pnpm db:migrate:local`
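The workflow above, as commands run from the repo root (script names per the `db:*` scripts referenced in this document):

```shell
# 1. Edit packages/api/src/db/schema.ts, then generate a migration
pnpm db:generate

# 2. Review the generated SQL in packages/api/drizzle/ before applying

# 3. Apply the migration to the LOCAL D1 database only (never --remote)
pnpm db:migrate:local
```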
Running Tests:
- API: `pnpm --filter @tuvixrss/api test`
- App: `pnpm --filter @tuvixrss/app test`
- All: `pnpm test`
- `pnpm type-check` - Check all packages
- `pnpm lint` - Lint all packages
- `pnpm format` - Format with Prettier
- Fire-and-forget emails: Email sending doesn't block API responses; uses Sentry spans for tracking
- Admin dashboard: User management at `packages/api/src/routers/admin.ts`
- Security audit logging: All auth events logged to the `security_audit_log` table
- Rate limiting: Cloudflare Workers rate limits the API per plan tier
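The fire-and-forget email pattern mentioned above can be sketched as follows. This is a minimal illustration assuming a Workers-style `ExecutionContext`; `respondAndSendEmail` and `sendEmail` are placeholder names, not the actual service code.

```typescript
// Same shape as the waitUntil() method on Cloudflare Workers' ExecutionContext.
interface ExecutionCtx {
  waitUntil(p: Promise<unknown>): void;
}

function respondAndSendEmail(
  ctx: ExecutionCtx,
  sendEmail: () => Promise<void>,
): Response {
  // Kick off the email without awaiting it; the API response is not blocked.
  ctx.waitUntil(
    sendEmail().catch(() => {
      // Failures are swallowed here; in the real code they would be
      // reported to Sentry rather than surfaced to the caller.
    }),
  );
  return new Response("ok", { status: 200 });
}
```

`waitUntil()` keeps the Worker alive until the email promise settles, so the response returns immediately while delivery completes in the background.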
TuvixRSS includes optional AI-powered features using OpenAI and the Vercel AI SDK.
- AI Category Suggestions: Automatically suggests feed categories based on feed metadata and recent articles
- Model: GPT-4o-mini (via `@ai-sdk/openai`)
- Location: `packages/api/src/services/ai-category-suggester.ts`
AI features are triple-gated for security and cost control:
- Global Setting: `aiEnabled` flag in the `global_settings` table (admin-controlled via the admin dashboard)
- User Plan: Only Pro or Enterprise plan users have access
- Environment: `OPENAI_API_KEY` must be configured
Access check: `packages/api/src/services/limits.ts:checkAiFeatureAccess()`
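The triple gate can be sketched as pure logic like this. This is an illustration only; the real `checkAiFeatureAccess()` in `limits.ts` almost certainly takes different inputs (e.g. a user/context object and a DB lookup), so treat the shape below as an assumption.

```typescript
type Plan = "free" | "pro" | "enterprise";

interface AiGateInput {
  aiEnabled: boolean;               // global_settings.aiEnabled (admin-controlled)
  userPlan: Plan;                   // from the user's subscription
  openAiApiKey: string | undefined; // OPENAI_API_KEY from the environment
}

// All three gates must pass for AI features to be available.
function hasAiAccess({ aiEnabled, userPlan, openAiApiKey }: AiGateInput): boolean {
  if (!aiEnabled) return false;     // gate 1: global kill switch
  if (userPlan !== "pro" && userPlan !== "enterprise") return false; // gate 2: plan tier
  if (!openAiApiKey) return false;  // gate 3: key configured
  return true;
}
```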
Local Development (Docker/Node.js):

```shell
# Add to .env
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
```

Cloudflare Workers (Production/Staging):

```shell
# Use wrangler CLI to set the secret
cd packages/api
npx wrangler secret put OPENAI_API_KEY
# Enter: sk-proj-xxxxxxxxxxxxx
```

GitHub Actions (CI/CD):
Add `OPENAI_API_KEY` to repository secrets for production deployments.
AI calls are automatically tracked by Sentry via the vercelAIIntegration:
- Token usage: Tracked automatically by AI SDK telemetry
- Latency: Per-call duration metrics
- Model info: Model name and version
- Errors: AI SDK errors and failures
- Input/Output: Captured when `experimental_telemetry.recordInputs`/`recordOutputs` is enabled
Configuration:
- Node.js: `packages/api/src/entries/node.ts` (Sentry.init with vercelAIIntegration)
- Cloudflare: `packages/api/src/entries/cloudflare.ts` (withSentry config)
- AI calls: Include `experimental_telemetry` with `functionId` for better tracking
Example:

```typescript
const result = await generateObject({
  model: openai("gpt-4o-mini"),
  // ... schema and prompts
  experimental_telemetry: {
    isEnabled: true,
    functionId: "ai.suggestCategories",
    recordInputs: true,
    recordOutputs: true,
  },
});
```

- Always check access: Use `checkAiFeatureAccess()` before calling AI services
- Graceful degradation: Return `undefined` if AI is unavailable (don't error)
- Add telemetry: Include `experimental_telemetry` in all AI SDK calls
- Function IDs: Use descriptive `functionId` for easier tracking in Sentry
- Cost awareness: AI features are gated to Pro/Enterprise to manage costs
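The "check access, degrade gracefully" guideline can be sketched like this. Names here are hypothetical (`suggestCategoriesSafely` is not the actual service function); the point is the control flow: gated-out or failing AI calls yield `undefined` instead of an error.

```typescript
// If the gate fails or the model call throws, the caller gets `undefined`
// and the AI feature silently disappears rather than breaking the request.
async function suggestCategoriesSafely(
  hasAccess: boolean,
  callModel: () => Promise<string[]>,
): Promise<string[] | undefined> {
  if (!hasAccess) return undefined; // gated out: no error, no suggestion
  try {
    return await callModel();
  } catch {
    // AI failures degrade to "no suggestion"
    return undefined;
  }
}
```

Callers then treat `undefined` as "feature unavailable" and render the UI without the suggestion.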
TuvixRSS uses Sentry for comprehensive observability: error tracking, performance monitoring, and custom metrics.
Add Sentry instrumentation to:
- Database-heavy operations - Complex queries, aggregations, bulk operations
- External API calls - RSS fetching, email sending, favicon fetching
- Business-critical paths - User registration, authentication, feed subscriptions
- Performance-sensitive endpoints - Feed fetching, article retrieval
Located in `packages/api/src/utils/metrics.ts`:
Wraps async functions to measure execution time and emit distribution metrics:
```typescript
import { withTiming } from "@/utils/metrics";

.query(async ({ ctx, input }) => {
  return withTiming(
    'admin.getUserGrowth',
    async () => {
      // Your logic here
      const data = await fetchData();
      return data;
    },
    { days: input.days } // Optional attributes for filtering
  );
})
```

Automatically tracks:
- Execution duration (milliseconds)
- Success/failure status
- Distribution metrics (p50, p95, p99) in Sentry

Use for: Database queries, API endpoints, external service calls
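For intuition, a minimal sketch of what a `withTiming`-style wrapper could do under the hood. The `recorded` array stands in for Sentry metric emission; the real helper in `packages/api/src/utils/metrics.ts` integrates with Sentry and may differ in signature and behavior.

```typescript
type Attrs = Record<string, string | number | boolean>;

// Stand-in for Sentry distribution metrics: just collect emissions locally.
const recorded: { name: string; ms: number; attrs: Attrs }[] = [];
function emitDistributionStub(name: string, ms: number, attrs: Attrs) {
  recorded.push({ name, ms, attrs });
}

async function withTiming<T>(
  name: string,
  fn: () => Promise<T>,
  attrs: Attrs = {},
): Promise<T> {
  const start = performance.now();
  try {
    const result = await fn();
    emitDistributionStub(name, performance.now() - start, { ...attrs, status: "success" });
    return result;
  } catch (err) {
    emitDistributionStub(name, performance.now() - start, { ...attrs, status: "failure" });
    throw err; // timing is observed, but errors still propagate to the caller
  }
}
```

Note the `finally`-like symmetry: both the success and failure paths emit a duration tagged with a status attribute, which is what makes p50/p95/p99 breakdowns by outcome possible.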
Creates named spans for distributed tracing with nested operations:
```typescript
import * as Sentry from "@/utils/sentry";

return Sentry.startSpan(
  {
    name: "auth.signup",
    op: "auth.register",
    attributes: {
      "auth.method": "email_password",
      "auth.has_username": !!input.username,
    },
  },
  async (parentSpan) => {
    // Main logic
    const user = await createUser();

    // Nested span
    await Sentry.startSpan(
      { name: "auth.send_welcome_email", op: "email.send" },
      async () => {
        await sendWelcomeEmail(user);
      }
    );

    return user;
  }
);
```

Use for: Complex operations with multiple steps, distributed tracing
Direct metric emission for counters, gauges, and distributions:
```typescript
import { emitCounter, emitGauge, emitDistribution } from "@/utils/metrics";

// Count occurrences
emitCounter("email.sent", 1, {
  type: "verification",
  status: "success",
});

// Track current state
emitGauge("subscriptions.active", activeCount, {
  plan: "free",
});

// Measure value distribution
emitDistribution("rss.fetch_time", 150, "millisecond", {
  format: "atom",
  domain: "example.com",
});
```

- Start simple - Use `withTiming` for most cases
- Add attributes - Include contextual data (user plan, operation type, resource count)
- Avoid over-instrumentation - Focus on critical paths and performance bottlenecks
- Low-volume endpoints - Admin endpoints can use lighter instrumentation
- High-volume endpoints - Use sampling or metrics instead of full spans
Database query timing:

```typescript
return withTiming('feeds.getUserFeeds', async () => {
  return await ctx.db.query.feeds.findMany({ where: ... });
}, { userId: ctx.user.id });
```

Multi-step operation:

```typescript
return Sentry.startSpan({ name: "rss.fetch", op: "http.fetch" }, async () => {
  const feed = await fetchFeed(url);
  return await Sentry.startSpan({ name: "rss.parse" }, async () => {
    return parseFeed(feed);
  });
});
```