Real-time engagement intelligence for live tutor–student sessions. Analyzes eye contact and speaking time for both participants, provides non-intrusive coaching nudges, and generates a post-session summary.
- Node.js 20+
- A LiveKit Cloud account (free tier works)
```sh
make setup
```
This single command installs dependencies, copies WASM files, creates `.env.local` from `.env.example` (if missing), and downloads the MediaPipe face model. Then edit `.env.local` with your LiveKit credentials.
```sh
make run
# or: npm run dev
```
Open http://localhost:3000.
- Open two browser tabs/windows
- First tab: join as tutor (or use default identity)
- Second tab: join as student
- Both participants will see each other's video and live metrics
- Coaching nudges appear when engagement signals drop
- Click End Session to view the post-session report
```sh
make test
# or: npm test
```
One full-flow test: upload → start → play through the entire video → report with metric bands:
```sh
make test:e2e
# or: npm run test:e2e
```
Add two videos to `e2e/fixtures/videos/` (`tutor.mp4`, `student.mp4`). For longer clips, set `E2E_VIDEO_DURATION_SEC` (default 90). See `e2e/fixtures/videos/README.md`.
```sh
make build
# or: npm run build
```

| Variable | Description |
|---|---|
| `LIVEKIT_URL` | LiveKit server WebSocket URL (e.g. `wss://your-project.livekit.cloud`) |
| `LIVEKIT_API_KEY` | LiveKit API key |
| `LIVEKIT_API_SECRET` | LiveKit API secret |
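For reference, a filled-in `.env.local` might look like the following sketch. All values below are placeholders; use the URL and credentials from your own LiveKit Cloud project.

```sh
# Placeholder values: replace with your own LiveKit Cloud credentials.
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret
```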
Room status is updated via 1-second polling against `/api/room/status`.
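A minimal client-side loop for that polling could look like this sketch. The response shape `{ status }` and the helper name are assumptions, not the project's actual API; adjust them to the real endpoint.

```typescript
// Hypothetical polling helper; the /api/room/status response shape is assumed.
type RoomStatus = { status: string };

function pollRoomStatus(
  onUpdate: (s: RoomStatus) => void,
  intervalMs = 1000,
): () => void {
  const id = setInterval(async () => {
    try {
      const res = await fetch("/api/room/status");
      if (res.ok) onUpdate((await res.json()) as RoomStatus);
    } catch {
      // Ignore transient network errors; the next tick retries.
    }
  }, intervalMs);
  // Return a stopper so callers can end polling (e.g. on unmount).
  return () => clearInterval(id);
}
```

Returning the stop function keeps the helper usable from a React `useEffect` cleanup.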
- Frontend: Next.js 14 (App Router), React, Tailwind CSS
- Video analysis: MediaPipe Face Landmarker (browser WASM) — eye contact scoring at 1–2 Hz
- Audio analysis: Silero VAD via `@ricky0123/vad-web` — per-participant talk time
- Coaching engine: Rule-based triggers with cooldowns
- Post-session: Template-based summary and recommendations
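As a rough illustration of rule-based triggers with cooldowns, the sketch below fires a nudge when an engagement signal drops and then suppresses that rule until its cooldown elapses. The names, thresholds, and metrics shape here are hypothetical, not the project's actual API.

```typescript
// Illustrative metrics shape; the real engine's inputs may differ.
interface Metrics {
  eyeContact: number; // 0..1 share of frames with eye contact
  talkShare: number;  // 0..1 share of speaking time
}

interface Rule {
  id: string;
  cooldownMs: number;
  // Returns a nudge message when the signal warrants one, else null.
  check: (m: Metrics) => string | null;
}

class NudgeEngine {
  private lastFired = new Map<string, number>();

  constructor(private rules: Rule[]) {}

  evaluate(metrics: Metrics, now = Date.now()): string[] {
    const nudges: string[] = [];
    for (const rule of this.rules) {
      const last = this.lastFired.get(rule.id) ?? -Infinity;
      if (now - last < rule.cooldownMs) continue; // still cooling down
      const msg = rule.check(metrics);
      if (msg) {
        this.lastFired.set(rule.id, now);
        nudges.push(msg);
      }
    }
    return nudges;
  }
}
```

Passing `now` explicitly keeps the cooldown logic deterministic and easy to unit-test.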
All video/audio processing runs entirely in the browser — no data is sent to any server.
Deploy to Vercel, Railway, Fly.io, or any standard host. LiveKit Cloud handles WebRTC.
```sh
fly launch
fly deploy
```
Ensure HTTPS is enabled (required for `getUserMedia` and WebRTC in production).