
AptivAI

Pre-send email testing — create variants, collect real human signals, and compare outcomes before you send.

AptivAI started as a broad AI email analytics platform. The strongest direction to emerge from that work is a focused pre-send testing product: Signal Lab. Users create email variants, generate tokenized reviewer links, capture behavioral signals from real readers, and compare outcomes using direct survey feedback — before any email reaches a production audience. AI remains part of the platform, but human evidence comes first.

Live Demo

https://aptiv-analytics.vercel.app

Stack: Vercel (frontend) · Render (backend) · MongoDB Atlas


Why This Direction

Most email optimization tools analyze content in isolation. They predict engagement without measuring how real people actually respond. AptivAI is being built around a different premise: test before you send, not after.

Signal Lab gives teams a structured way to collect real human-response signals — scroll behavior, CTA clicks, reading completion, and direct survey feedback — across email variants, before launch. The goal is to close the gap between what a model predicts and what a person actually experiences.


What Is Live Now

Signal Lab — core testing loop

The first end-to-end testing flow is live in the deployed app:

  • Signal Lab in the sidebar — dedicated product lane at /lab
  • Test creation — create a test with a title, goal, and audience context; add 2–5 email variants (A/B/C/D/E), each with subject line, preview text, body, and CTA
  • Reviewer link generation — per-variant tokenized links with configurable expiration (1–30 days, default 7 days)
  • Public reviewer flow — recipients open /review/:token with no login required; token is validated server-side, expires on schedule, and tracks stateful session progress
  • Interaction event logging — five events captured per session: opened_email, scroll_progress (at 25/50/75/100%), cta_click, completed_review, abandon
  • Token security — SHA-256 hashed storage, server-side expiration, and specific error states for invalid, expired, already-completed, and inactive sessions
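
A minimal sketch of the tokenized-link pattern described above: issue the raw token once, persist only its SHA-256 hash. The model and field names here (ReviewSession, tokenHash, expiresAt) are assumptions, not the repository's actual schema:

import { createHash, randomBytes } from "crypto";

// Issue a reviewer link: the raw token appears only in the returned URL.
// Only its SHA-256 hash and an expiration date are persisted (field names assumed).
function createReviewerLink(variantId: string, expiresInDays = 7) {
  const rawToken = randomBytes(32).toString("hex");
  const tokenHash = createHash("sha256").update(rawToken).digest("hex");
  const expiresAt = new Date(Date.now() + expiresInDays * 24 * 60 * 60 * 1000);

  // e.g. await ReviewSession.create({ variantId, tokenHash, expiresAt, active: true });
  return { url: `/review/${rawToken}`, tokenHash, expiresAt };
}

// Validating /review/:token later means hashing the incoming token the same way
// and looking up the stored hash, so the database never holds a usable raw link.
function hashIncomingToken(rawToken: string): string {
  return createHash("sha256").update(rawToken).digest("hex");
}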

Broader platform

  • AI Chat Assistant — Gemini 2.5 Flash-Lite, available on every authenticated page; handles text and voice input with function-calling against live campaign data (see the sketch after this list)
  • Campaign analytics dashboard — performance metrics, trend charts, engagement heatmaps, and campaign comparisons
  • Content Optimizer — AI-driven subject line and email body optimization with configurable audience targeting
  • Emotional Impact Analyzer — multimodal email analysis (subject, body, image); optional client-side facial reaction detection via MediaPipe FaceLandmarker (all facial processing runs in-browser)
  • Audience segments — CRUD management with targeting criteria
  • Firebase Auth — email/password login with per-user data isolation across all endpoints
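
The function-calling loop behind the chat assistant can be sketched roughly as follows. The tool name and parameters (getCampaignMetrics, campaignName) are illustrative assumptions rather than the repository's actual declarations; the SDK calls follow @google/generative-ai:

import { GoogleGenerativeAI, SchemaType } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_GEMINI_API_KEY!);

// Declare a tool the model may call; the name and schema are illustrative.
const model = genAI.getGenerativeModel({
  model: "gemini-2.5-flash-lite",
  tools: [{
    functionDeclarations: [{
      name: "getCampaignMetrics",
      description: "Fetch engagement metrics for one of the user's campaigns",
      parameters: {
        type: SchemaType.OBJECT,
        properties: { campaignName: { type: SchemaType.STRING } },
        required: ["campaignName"],
      },
    }],
  }],
});

async function ask(question: string) {
  const chat = model.startChat();
  const result = await chat.sendMessage(question);
  const call = result.response.functionCalls()?.[0];
  if (!call) return result.response.text();

  // Look up live campaign data, then hand the result back to the model.
  const data = { openRate: 0.42 }; // placeholder for a real database query
  const followUp = await chat.sendMessage([
    { functionResponse: { name: call.name, response: data } },
  ]);
  return followUp.response.text();
}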

In Progress

Signal Lab — survey and results layer

Implemented on the active feature branch, currently in review for merge:

  • Post-read survey — a short survey shown after the reviewer finishes reading; collects trust, clarity, and action intent ratings (1–5), a recall question, and an optional comment
  • Survey persistence — responses saved to SurveyResponse, linked to the reviewer session and variant; one response per session; session marked complete on submission
  • Results aggregation — GET /api/lab/tests/:id/results computes on demand: per-variant session counts, completion rates, CTA click counts, survey averages, and a signal status (no_data, insufficient_data, too_close, early_leader)
  • Results page — /lab/results/:id shows the comparison payoff: overall session stats, per-variant rating bars, reviewer quotes, and an early-leader indicator when the evidence supports one
  • Results navigation — "View Results" button accessible from test detail for any published test

The results layer is intentionally honest. A minimum of 3 survey responses per variant is required before an early leader is surfaced. When evidence is insufficient, the page says so clearly.
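
A rough sketch of that decision, using the four status values and the 3-response minimum named above; the aggregate shape and the 0.5-point lead margin are assumptions:

type SignalStatus = "no_data" | "insufficient_data" | "too_close" | "early_leader";

interface VariantAggregate {
  variant: string;        // e.g. "A", "B"
  surveyCount: number;    // survey responses received for this variant
  avgRating: number;      // mean of trust / clarity / action-intent ratings (1–5)
}

const MIN_RESPONSES = 3;  // minimum per variant, as described above
const LEAD_MARGIN = 0.5;  // assumed rating gap required to call an early leader

function computeSignalStatus(variants: VariantAggregate[]): SignalStatus {
  if (variants.every(v => v.surveyCount === 0)) return "no_data";
  if (variants.some(v => v.surveyCount < MIN_RESPONSES)) return "insufficient_data";

  const sorted = [...variants].sort((a, b) => b.avgRating - a.avgRating);
  const lead = sorted[0].avgRating - sorted[1].avgRating;
  return lead >= LEAD_MARGIN ? "early_leader" : "too_close";
}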


Current Product Loop

End-to-end flow — steps 1–5 are live; steps 6–8 are implemented and in review:

1. Create test       →  set title, goal, context
2. Add variants      →  2–5 variants, each with subject, preview, body, CTA
3. Generate links    →  one tokenized reviewer link per variant
4. Share links       →  reviewer opens /review/:token, no login required
5. Read + track      →  behavioral events captured as reviewer reads
6. Complete survey   →  trust, clarity, action intent, recall           (in review)
7. View results      →  return to test detail, open results page        (in review)
8. Compare outcomes  →  per-variant stats, signal status, quotes        (in review)
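
Step 5 maps to the public /api/lab/review/:token/events endpoint listed in the route overview below. A minimal sketch of how a reviewer page might report one of the five events; the payload shape is an assumption, not the documented contract:

// Post a behavioral event from the public reviewer page (no auth header needed).
// Event names match the list above; the request body shape is assumed.
async function logReviewEvent(
  token: string,
  type: "opened_email" | "scroll_progress" | "cta_click" | "completed_review" | "abandon",
  detail?: { percent?: number },
) {
  await fetch(`${process.env.REACT_APP_API_URL}/lab/review/${token}/events`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type, ...detail, at: new Date().toISOString() }),
  });
}

// e.g. logReviewEvent(token, "scroll_progress", { percent: 50 });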

What Comes Next

The platform is being extended additively — nothing is being removed.

Near term

  • Evidence-backed AI rewrite suggestions grounded in observed human signals, not generic optimization
  • Reviewer cohort context — device type, session labels
  • Improved results visualization

Later

  • Human vs. AI disagreement view — where observed human response diverged from model prediction
  • Cross-test learning — pattern recognition across tests from the same sender
  • Release gate recommendation — a structured go/no-go signal before a campaign launches

Tech Stack

Frontend

  • React 18 + TypeScript
  • Tailwind CSS (dark mode)
  • TanStack React Query
  • React Router v6
  • Chart.js / react-chartjs-2
  • Firebase Auth (client SDK)
  • MediaPipe Tasks Vision (WASM, fully client-side)

Backend

  • Node.js + Express
  • MongoDB + Mongoose
  • Firebase Admin SDK
  • Google Gemini API (@google/generative-ai, model: gemini-2.5-flash-lite)
  • Google Cloud Text-to-Speech
  • Multer (multipart uploads)
  • Helmet, CORS, express-rate-limit, express-mongo-sanitize

Infrastructure

  • Frontend: Vercel (auto-deploys from development)
  • Backend: Render (auto-deploys from development)
  • Database: MongoDB Atlas
  • Auth: Firebase Authentication
  • CI/CD: GitHub Actions — lint, typecheck, tests, build on every PR to development and main
  • Node version: 20 (.nvmrc)

Architecture

The frontend is a React SPA with client-side routing and Firebase-gated protected routes. Reviewer flows at /review/:token are fully public — no authentication required, by design.

The backend is an Express API responsible for AI orchestration, campaign CRUD, Signal Lab session lifecycle, event ingestion, survey persistence, and on-demand results computation. Reviewer tokens are stored as SHA-256 hashes — raw tokens exist only in the URL and are never written to the database.
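
Per-user isolation of this kind typically comes down to verifying the Firebase ID token on each authenticated route and scoping queries to the decoded uid. A minimal sketch, assuming a middleware named requireAuth and the FIREBASE_SERVICE_ACCOUNT_BASE64 variable documented under Environment Variables:

import express from "express";
import admin from "firebase-admin";

// Initialize Firebase Admin from the Base64-encoded service account JSON.
const serviceAccount = JSON.parse(
  Buffer.from(process.env.FIREBASE_SERVICE_ACCOUNT_BASE64!, "base64").toString("utf8"),
);
admin.initializeApp({ credential: admin.credential.cert(serviceAccount) });

// Hypothetical middleware: verify the ID token and attach the uid to the request.
async function requireAuth(req: express.Request, res: express.Response, next: express.NextFunction) {
  const idToken = req.headers.authorization?.replace("Bearer ", "");
  if (!idToken) return res.status(401).json({ error: "Missing token" });
  try {
    const decoded = await admin.auth().verifyIdToken(idToken);
    (req as any).uid = decoded.uid; // downstream handlers filter queries by this uid
    next();
  } catch {
    res.status(401).json({ error: "Invalid token" });
  }
}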

Route Overview

Frontend                           Backend
├── /login                         ├── /api/lab/tests (CRUD)
├── / (Dashboard)                  ├── /api/lab/tests/:id/results
├── /campaigns                     ├── /api/lab/tests/:id/reviewer-link
├── /analytics                     ├── /api/lab/review/:token/validate
├── /content-optimizer             ├── /api/lab/review/:token/events
├── /emotional-impact              ├── /api/lab/review/:token/survey
├── /audiences                     ├── /api/lab/review/:token/complete
├── /settings                      ├── /api/ai/chat-sync
│                                  ├── /api/ai/analyze-emotions
├── /lab (Signal Lab)              ├── /api/ai/optimize-content
├── /lab/tests/new                 ├── /api/ai/synthesize-speech
├── /lab/tests/:id                 ├── /api/campaigns (CRUD)
├── /lab/results/:id               ├── /api/segments (CRUD)
└── /review/:token (public)        ├── /api/analytics/*
                                   └── /api/health

Local Development

Prerequisites

  • Node.js 20+
  • MongoDB (local instance or Atlas)
  • Firebase project with Authentication enabled
  • Google Cloud project with Gemini API enabled
  • (Optional) Google Cloud Text-to-Speech API for voice features

Setup

# Install frontend dependencies
npm install

# Install backend dependencies
cd server && npm install && cd ..

# Configure environment
cp .env.example .env
# Fill in Firebase, MongoDB, and Gemini credentials

# Seed the database with sample data (optional)
SEED_USER_ID=your_firebase_uid npm run seed

# Start backend (Terminal 1)
npm run server

# Start frontend (Terminal 2)
npm start

  • Frontend: http://localhost:3000
  • Backend: http://localhost:3001

Environment Variables

Create a .env file at the project root (use .env.example as a template):

Frontend

REACT_APP_FIREBASE_API_KEY=
REACT_APP_FIREBASE_AUTH_DOMAIN=
REACT_APP_FIREBASE_PROJECT_ID=
REACT_APP_FIREBASE_STORAGE_BUCKET=
REACT_APP_FIREBASE_MESSAGING_SENDER_ID=
REACT_APP_FIREBASE_APP_ID=
REACT_APP_API_URL=http://localhost:3001/api
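
These values are inlined at build time by Create React App. A minimal sketch of the client-side Firebase initialization they feed (standard Firebase modular SDK; the module layout is an assumption):

import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";

// CRA inlines REACT_APP_* variables at build time.
const app = initializeApp({
  apiKey: process.env.REACT_APP_FIREBASE_API_KEY,
  authDomain: process.env.REACT_APP_FIREBASE_AUTH_DOMAIN,
  projectId: process.env.REACT_APP_FIREBASE_PROJECT_ID,
  storageBucket: process.env.REACT_APP_FIREBASE_STORAGE_BUCKET,
  messagingSenderId: process.env.REACT_APP_FIREBASE_MESSAGING_SENDER_ID,
  appId: process.env.REACT_APP_FIREBASE_APP_ID,
});

export const auth = getAuth(app);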

Backend (required)

MONGODB_URI=mongodb://localhost:27017/email_dashboard_db
GOOGLE_GEMINI_API_KEY=

# Firebase Admin — one of the following:
FIREBASE_SERVICE_ACCOUNT_BASE64=         # Base64-encoded service account JSON (recommended for production)
GOOGLE_APPLICATION_CREDENTIALS=./path/to/service-account.json  # File path (local development)

Backend (optional)

PORT=3001
FRONTEND_URL=https://your-app.vercel.app  # Required in production for CORS
NODE_ENV=production
GOOGLE_APPLICATION_CREDENTIALS_JSON=      # Base64-encoded service account for TTS
SEED_USER_ID=                             # Firebase UID for seed data ownership

Available Scripts

Command                  Description
npm start                Start the React development server
npm run build            Create a production frontend build
npm run lint             Lint the frontend source
npm run lint:fix         Auto-fix lint issues
npm run typecheck        TypeScript type check
npm run test             Run frontend tests (watch mode)
npm run test:ci          Run frontend tests (CI mode)
npm run verify           Run typecheck + lint + tests
npm run build:ci         Production build for CI
npm run server           Start the Express backend
npm run test:server      Run backend tests
npm run seed             Seed MongoDB with sample campaign data
npm run clean-install    Clean install all dependencies

Deployment

Frontend (Vercel)

  1. Import the repository into Vercel
  2. Set all REACT_APP_* environment variables in the Vercel dashboard
  3. Set REACT_APP_API_URL to your deployed backend URL (e.g., https://your-app.onrender.com/api)
  4. Deploy — Vercel builds automatically via npm run build

Backend (Render)

  1. Create a Web Service connected to the repository
  2. Set root directory to server
  3. Build command: npm install
  4. Start command: node src/server.js
  5. Set environment variables:
    • MONGODB_URI — Atlas connection string
    • GOOGLE_GEMINI_API_KEY
    • FIREBASE_SERVICE_ACCOUNT_BASE64 — Base64-encoded Firebase service account
    • FRONTEND_URL — comma-separated Vercel deployment URLs (for CORS)
    • NODE_ENV=production

Database (MongoDB Atlas)

  1. Create an Atlas cluster
  2. Create a database user with an alphanumeric password (avoids URL-encoding issues)
  3. Add 0.0.0.0/0 to the IP access list (or restrict to Render's IPs)
  4. Set MONGODB_URI in your environment
  5. Seed if needed:
SEED_USER_ID=your_firebase_uid MONGODB_URI=your_atlas_uri npm run seed

Testing

# Frontend unit tests
npm run test:ci

# Backend tests
npm run test:server

# Full verification (typecheck + lint + tests)
npm run verify

CI runs on every pull request to development and main via GitHub Actions.


License

MIT
