Pre-send email testing — create variants, collect real human signals, and compare outcomes before you send.
AptivAI started as a broad AI email analytics platform. The strongest direction to emerge from that work is a focused pre-send testing product: Signal Lab. Users create email variants, generate tokenized reviewer links, capture behavioral signals from real readers, and compare outcomes using direct survey feedback — before any email reaches a production audience. AI remains part of the platform, but human evidence comes first.
https://aptiv-analytics.vercel.app
Stack: Vercel (frontend) · Render (backend) · MongoDB Atlas
Most email optimization tools analyze content in isolation. They predict engagement without measuring how real people actually respond. AptivAI is being built around a different premise: test before you send, not after.
Signal Lab gives teams a structured way to collect real human-response signals — scroll behavior, CTA clicks, reading completion, and direct survey feedback — across email variants, before launch. The goal is to close the gap between what a model predicts and what a person actually experiences.
The first end-to-end testing flow is live in the deployed app:
- Signal Lab in the sidebar — dedicated product lane at `/lab`
- Test creation — create a test with a title, goal, and audience context; add 2–5 email variants (A/B/C/D/E), each with subject line, preview text, body, and CTA
- Reviewer link generation — per-variant tokenized links with configurable expiration (1–30 days, default 7 days); see the first sketch after this list
- Public reviewer flow — recipients open `/review/:token` with no login required; the token is validated server-side, expires on schedule, and tracks stateful session progress
- Interaction event logging — five events captured per session: `opened_email`, `scroll_progress` (at 25/50/75/100%), `cta_click`, `completed_review`, `abandon`
- Token security — SHA-256 hashed storage, server-side expiration, and specific error states for invalid, expired, already-completed, and inactive sessions
- AI Chat Assistant — Gemini 2.5 Flash-Lite, available on every authenticated page; handles text and voice input with function-calling against live campaign data (see the second sketch after this list)
- Campaign analytics dashboard — performance metrics, trend charts, engagement heatmaps, and campaign comparisons
- Content Optimizer — AI-driven subject line and email body optimization with configurable audience targeting
- Emotional Impact Analyzer — multimodal email analysis (subject, body, image); optional client-side facial reaction detection via MediaPipe FaceLandmarker (all facial processing runs in-browser)
- Audience segments — CRUD management with targeting criteria
- Firebase Auth — email/password login with per-user data isolation across all endpoints
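Two sketches referenced above. First, reviewer-link issuance: a minimal sketch assuming Node's built-in `crypto` module and an illustrative `ReviewerSession` shape (the field names here are assumptions, not the deployed schema):

```ts
import { randomBytes, createHash } from "crypto";

// Hypothetical shape of a stored reviewer session; the real schema may differ.
interface ReviewerSession {
  tokenHash: string;   // SHA-256 of the raw token; the raw token is never stored
  variantId: string;
  expiresAt: Date;
  status: "active" | "completed" | "inactive";
}

// Issue a tokenized reviewer link for one variant.
// Expiration is configurable between 1 and 30 days (default 7), per the feature list.
function createReviewerLink(variantId: string, days = 7): { url: string; session: ReviewerSession } {
  const clamped = Math.min(Math.max(days, 1), 30);
  const rawToken = randomBytes(32).toString("hex");  // exists only in the URL
  const tokenHash = createHash("sha256").update(rawToken).digest("hex");
  const expiresAt = new Date(Date.now() + clamped * 24 * 60 * 60 * 1000);
  return {
    url: `https://aptiv-analytics.vercel.app/review/${rawToken}`,
    session: { tokenHash, variantId, expiresAt, status: "active" },
  };
}
```

Storing only the hash means a leaked database cannot resurrect live reviewer links.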
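Second, the chat assistant's function-calling: roughly how `@google/generative-ai` wires a tool against campaign data. The `getCampaignStats` tool and its schema are invented for illustration, not the app's actual tool set:

```ts
import { GoogleGenerativeAI, SchemaType } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_GEMINI_API_KEY!);

// Declare a tool the model may call. The tool name and schema are illustrative.
const model = genAI.getGenerativeModel({
  model: "gemini-2.5-flash-lite",
  tools: [{
    functionDeclarations: [{
      name: "getCampaignStats",
      description: "Fetch live metrics for one campaign",
      parameters: {
        type: SchemaType.OBJECT,
        properties: { campaignId: { type: SchemaType.STRING } },
        required: ["campaignId"],
      },
    }],
  }],
});

async function ask(question: string): Promise<string> {
  const chat = model.startChat();
  const result = await chat.sendMessage(question);
  const calls = result.response.functionCalls();
  if (calls?.length) {
    // Run the requested function against live campaign data, then send the
    // result back to the model so the final answer is grounded in real numbers.
  }
  return result.response.text();
}
```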
Implemented on the active feature branch, currently in review for merge:
- Post-read survey — a short survey shown after the reviewer finishes reading; collects trust, clarity, and action intent ratings (1–5), a recall question, and an optional comment
- Survey persistence — responses saved to `SurveyResponse`, linked to the reviewer session and variant; one response per session; session marked complete on submission
- Results aggregation — `GET /api/lab/tests/:id/results` computes on demand: per-variant session counts, completion rates, CTA click counts, survey averages, and a signal status (`no_data`, `insufficient_data`, `too_close`, `early_leader`)
- Results page — `/lab/results/:id` shows the comparison payoff: overall session stats, per-variant rating bars, reviewer quotes, and an early-leader indicator when the evidence supports one
- Results navigation — "View Results" button accessible from test detail for any published test
The results layer is intentionally honest. A minimum of 3 survey responses per variant is required before an early leader is surfaced. When evidence is insufficient, the page says so clearly.
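As an illustration of that rule, the status computation might look like the sketch below. The lead threshold and helper names are assumptions, not the deployed logic:

```ts
type SignalStatus = "no_data" | "insufficient_data" | "too_close" | "early_leader";

const MIN_RESPONSES = 3;  // minimum survey responses per variant before a leader is surfaced
const MIN_LEAD = 0.5;     // hypothetical minimum gap in average rating to call a leader

// Each entry: average survey rating and response count for one variant (tests have 2–5).
function signalStatus(variants: { avgRating: number; responses: number }[]): SignalStatus {
  if (variants.every(v => v.responses === 0)) return "no_data";
  if (variants.some(v => v.responses < MIN_RESPONSES)) return "insufficient_data";
  const sorted = [...variants].sort((a, b) => b.avgRating - a.avgRating);
  const lead = sorted[0].avgRating - sorted[1].avgRating;
  return lead >= MIN_LEAD ? "early_leader" : "too_close";
}
```

Anything short of the gap threshold reports `too_close` rather than forcing a winner, which is the "intentionally honest" behavior described above.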
End-to-end flow — steps 1–5 are live; steps 6–8 are implemented and in review:
1. Create test → set title, goal, context
2. Add variants → 2–5 variants, each with subject, preview, body, CTA
3. Generate links → one tokenized reviewer link per variant
4. Share links → reviewer opens /review/:token, no login required
5. Read + track → behavioral events captured as reviewer reads (see the sketch after this list)
6. Complete survey → trust, clarity, action intent, recall (in review)
7. View results → return to test detail, open results page (in review)
8. Compare outcomes → per-variant stats, signal status, quotes (in review)
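Step 5's behavioral tracking could be captured client-side along these lines. The endpoint path matches the route map below, but the payload shape is an assumption:

```ts
// Report each scroll milestone (25/50/75/100%) once as the reviewer reads.
function trackScroll(token: string): void {
  const milestones = [25, 50, 75, 100];
  const reported = new Set<number>();
  window.addEventListener("scroll", () => {
    const doc = document.documentElement;
    const max = doc.scrollHeight - window.innerHeight;
    const progress = max > 0 ? Math.round((window.scrollY / max) * 100) : 100;
    for (const m of milestones) {
      if (progress >= m && !reported.has(m)) {
        reported.add(m);
        fetch(`/api/lab/review/${token}/events`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ type: "scroll_progress", value: m }),
        });
      }
    }
  });
}
```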
The platform is being extended additively — nothing is being removed.
Near term
- Evidence-backed AI rewrite suggestions grounded in observed human signals, not generic optimization
- Reviewer cohort context — device type, session labels
- Improved results visualization
Later
- Human vs. AI disagreement view — where observed human response diverged from model prediction
- Cross-test learning — pattern recognition across tests from the same sender
- Release gate recommendation — a structured go/no-go signal before a campaign launches
Frontend
- React 18 + TypeScript
- Tailwind CSS (dark mode)
- TanStack React Query
- React Router v6
- Chart.js / react-chartjs-2
- Firebase Auth (client SDK)
- MediaPipe Tasks Vision (WASM, fully client-side)
Backend
- Node.js + Express
- MongoDB + Mongoose
- Firebase Admin SDK
- Google Gemini API (`@google/generative-ai`, model: `gemini-2.5-flash-lite`)
- Google Cloud Text-to-Speech
- Multer (multipart uploads)
- Helmet, CORS, express-rate-limit, express-mongo-sanitize
- Frontend: Vercel (auto-deploys from `development`)
- Backend: Render (auto-deploys from `development`)
- Database: MongoDB Atlas
- Auth: Firebase Authentication
- CI/CD: GitHub Actions — lint, typecheck, tests, build on every PR to `development` and `main`
- Node version: 20 (`.nvmrc`)
The frontend is a React SPA with client-side routing and Firebase-gated protected routes. Reviewer flows at `/review/:token` are fully public — no authentication required, by design.
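A gated route in this setup could look roughly like the following (React Router v6 plus the Firebase client SDK; the `useAuthState` hook is a stand-in for whatever auth hook the app actually uses):

```tsx
import { useEffect, useState } from "react";
import { Navigate, Outlet } from "react-router-dom";
import { getAuth, onAuthStateChanged, User } from "firebase/auth";

// Minimal auth hook: tracks the current Firebase user. Illustrative only.
function useAuthState(): { user: User | null; loading: boolean } {
  const [user, setUser] = useState<User | null>(null);
  const [loading, setLoading] = useState(true);
  useEffect(
    () => onAuthStateChanged(getAuth(), u => { setUser(u); setLoading(false); }),
    []
  );
  return { user, loading };
}

// Wraps authenticated routes; /review/:token stays outside this guard by design.
export function ProtectedRoute() {
  const { user, loading } = useAuthState();
  if (loading) return null;
  return user ? <Outlet /> : <Navigate to="/login" replace />;
}
```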
The backend is an Express API responsible for AI orchestration, campaign CRUD, Signal Lab session lifecycle, event ingestion, survey persistence, and on-demand results computation. Reviewer tokens are stored as SHA-256 hashes — raw tokens exist only in the URL and are never written to the database.
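Validation on the server side then reduces to hashing the presented token and checking session state, roughly like this Express sketch. The `ReviewerSession` model import is assumed, and the error codes mirror the states listed in the feature section above:

```ts
import { createHash } from "crypto";
import { Router } from "express";
import { ReviewerSession } from "../models/ReviewerSession"; // hypothetical model

const router = Router();

// GET /api/lab/review/:token/validate — the raw token never touches the database.
router.get("/review/:token/validate", async (req, res) => {
  const tokenHash = createHash("sha256").update(req.params.token).digest("hex");
  const session = await ReviewerSession.findOne({ tokenHash });

  if (!session) return res.status(404).json({ error: "invalid" });
  if (session.status === "inactive") return res.status(403).json({ error: "inactive" });
  if (session.status === "completed") return res.status(409).json({ error: "already_completed" });
  if (session.expiresAt < new Date()) return res.status(410).json({ error: "expired" });

  res.json({ ok: true, variantId: session.variantId });
});
```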
```
Frontend                      Backend
├── /login                    ├── /api/lab/tests (CRUD)
├── / (Dashboard)             ├── /api/lab/tests/:id/results
├── /campaigns                ├── /api/lab/tests/:id/reviewer-link
├── /analytics                ├── /api/lab/review/:token/validate
├── /content-optimizer        ├── /api/lab/review/:token/events
├── /emotional-impact         ├── /api/lab/review/:token/survey
├── /audiences                ├── /api/lab/review/:token/complete
├── /settings                 ├── /api/ai/chat-sync
│                             ├── /api/ai/analyze-emotions
├── /lab (Signal Lab)         ├── /api/ai/optimize-content
├── /lab/tests/new            ├── /api/ai/synthesize-speech
├── /lab/tests/:id            ├── /api/campaigns (CRUD)
├── /lab/results/:id          ├── /api/segments (CRUD)
└── /review/:token (public)   ├── /api/analytics/*
                              └── /api/health
```
Prerequisites
- Node.js 20+
- MongoDB (local instance or Atlas)
- Firebase project with Authentication enabled
- Google Cloud project with Gemini API enabled
- (Optional) Google Cloud Text-to-Speech API for voice features
Quick start

```bash
# Install frontend dependencies
npm install

# Install backend dependencies
cd server && npm install && cd ..

# Configure environment
cp .env.example .env
# Fill in Firebase, MongoDB, and Gemini credentials

# Seed the database with sample data (optional)
SEED_USER_ID=your_firebase_uid npm run seed

# Start backend (Terminal 1)
npm run server

# Start frontend (Terminal 2)
npm start
```

- Frontend: `http://localhost:3000`
- Backend: `http://localhost:3001`
Create a `.env` file at the project root (use `.env.example` as a template):

```bash
# Frontend
REACT_APP_FIREBASE_API_KEY=
REACT_APP_FIREBASE_AUTH_DOMAIN=
REACT_APP_FIREBASE_PROJECT_ID=
REACT_APP_FIREBASE_STORAGE_BUCKET=
REACT_APP_FIREBASE_MESSAGING_SENDER_ID=
REACT_APP_FIREBASE_APP_ID=
REACT_APP_API_URL=http://localhost:3001/api

# Backend
MONGODB_URI=mongodb://localhost:27017/email_dashboard_db
GOOGLE_GEMINI_API_KEY=

# Firebase Admin — one of the following:
FIREBASE_SERVICE_ACCOUNT_BASE64=   # Base64-encoded service account JSON (recommended for production)
GOOGLE_APPLICATION_CREDENTIALS=./path/to/service-account.json   # File path (local development)

PORT=3001
FRONTEND_URL=https://your-app.vercel.app   # Required in production for CORS
NODE_ENV=production
GOOGLE_APPLICATION_CREDENTIALS_JSON=   # Base64-encoded service account for TTS
SEED_USER_ID=   # Firebase UID for seed data ownership
```

Scripts

| Command | Description |
|---|---|
| `npm start` | Start the React development server |
| `npm run build` | Create a production frontend build |
| `npm run lint` | Lint the frontend source |
| `npm run lint:fix` | Auto-fix lint issues |
| `npm run typecheck` | TypeScript type check |
| `npm run test` | Run frontend tests (watch mode) |
| `npm run test:ci` | Run frontend tests (CI mode) |
| `npm run verify` | Run typecheck + lint + tests |
| `npm run build:ci` | Production build for CI |
| `npm run server` | Start the Express backend |
| `npm run test:server` | Run backend tests |
| `npm run seed` | Seed MongoDB with sample campaign data |
| `npm run clean-install` | Clean install all dependencies |
Vercel (frontend)
- Import the repository into Vercel
- Set all `REACT_APP_*` environment variables in the Vercel dashboard
- Set `REACT_APP_API_URL` to your deployed backend URL (e.g., `https://your-app.onrender.com/api`)
- Deploy — Vercel builds automatically via `npm run build`
Render (backend)
- Create a Web Service connected to the repository
- Set root directory to `server`
- Build command: `npm install`
- Start command: `node src/server.js`
- Set environment variables:
  - `MONGODB_URI` — Atlas connection string
  - `GOOGLE_GEMINI_API_KEY`
  - `FIREBASE_SERVICE_ACCOUNT_BASE64` — Base64-encoded Firebase service account
  - `FRONTEND_URL` — comma-separated Vercel deployment URLs (for CORS)
  - `NODE_ENV=production`
MongoDB Atlas
- Create an Atlas cluster
- Create a database user with an alphanumeric password (avoids URL-encoding issues)
- Add `0.0.0.0/0` to the IP access list (or restrict to Render's IPs)
- Set `MONGODB_URI` in your environment
- Seed if needed: `SEED_USER_ID=your_firebase_uid MONGODB_URI=your_atlas_uri npm run seed`

Testing

```bash
# Frontend unit tests
npm run test:ci

# Backend tests
npm run test:server

# Full verification (typecheck + lint + tests)
npm run verify
```

CI runs on every pull request to `development` and `main` via GitHub Actions.
License: MIT