SessionLens

Real-time engagement intelligence for live tutor–student sessions. Analyzes eye contact and speaking time for both participants, provides non-intrusive coaching nudges, and generates a post-session summary.

Prerequisites

  • Node.js 18.17+ (required by Next.js 14)
  • GNU Make (or run the underlying npm scripts directly)
  • A LiveKit Cloud project (or self-hosted LiveKit server) for credentials

Setup

make setup

This single command installs dependencies, copies WASM files, creates .env.local from .env.example (if missing), and downloads the MediaPipe face model. Then edit .env.local with your LiveKit credentials.

Running

make run
# or: npm run dev

Open http://localhost:3000.

Two-participant flow

  1. Open two browser tabs/windows
  2. First tab: join as tutor (or use default identity)
  3. Second tab: join as student
  4. Both participants will see each other's video and live metrics
  5. Coaching nudges appear when engagement signals drop
  6. Click End Session to view the post-session report

Testing

make test
# or: npm test

E2E (Playwright)

One full-flow test: upload → start → play through entire video → report with metric bands:

make test:e2e
# or: npm run test:e2e

Add two videos to e2e/fixtures/videos/ (tutor.mp4, student.mp4). For longer clips, set E2E_VIDEO_DURATION_SEC (default 90). See e2e/fixtures/videos/README.md.

Building

make build
# or: npm run build

Environment Variables

Variable            Description
------------------  -----------------------------------------------------------------
LIVEKIT_URL         LiveKit server WebSocket URL (e.g. wss://your-project.livekit.cloud)
LIVEKIT_API_KEY     LiveKit API key
LIVEKIT_API_SECRET  LiveKit API secret
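A minimal .env.local might look like this (placeholder values shown — substitute your own LiveKit project's credentials):

```shell
# .env.local — values come from your LiveKit project settings
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=APIxxxxxxxxxxxx
LIVEKIT_API_SECRET=secretxxxxxxxxxxxxxxxx
```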

Room status is updated by polling /api/room/status once per second.
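The polling loop can be sketched as a small helper like the one below (an assumed shape — the repo's actual hook may differ; the endpoint /api/room/status is the one named above):

```typescript
// Sketch: poll a status endpoint once per second and hand results to a callback.
type RoomStatus = { participants: number; live: boolean };

function pollRoomStatus(
  fetchStatus: () => Promise<RoomStatus>, // e.g. () => fetch("/api/room/status").then((r) => r.json())
  onUpdate: (s: RoomStatus) => void,
  intervalMs = 1000,
): () => void {
  const id = setInterval(async () => {
    try {
      onUpdate(await fetchStatus());
    } catch {
      // Ignore transient network errors; the next tick retries.
    }
  }, intervalMs);
  // Return a stop function so callers can cancel on unmount / End Session.
  return () => clearInterval(id);
}
```

Returning a cancel function keeps the poller easy to tear down from a React effect cleanup.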

Architecture

  • Frontend: Next.js 14 (App Router), React, Tailwind CSS
  • Video analysis: MediaPipe Face Landmarker (browser WASM) — eye contact scoring at 1–2 Hz
  • Audio analysis: Silero VAD via @ricky0123/vad-web — per-participant talk-time
  • Coaching engine: Rule-based triggers with cooldowns
  • Post-session: Template-based summary and recommendations
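A rule-based trigger with cooldowns, as named in the list above, might be structured like this sketch (the rule names, metric fields, and class API here are illustrative, not the repo's actual code):

```typescript
// Sketch: evaluate engagement rules, suppressing repeat nudges during a cooldown.
interface Metrics {
  eyeContact: number; // 0..1 score from face landmark analysis
  talkShare: number;  // 0..1 fraction of recent speech by this participant
}

interface Rule {
  id: string;
  cooldownMs: number;
  fires: (m: Metrics) => boolean;
}

class NudgeEngine {
  private lastFired = new Map<string, number>();

  constructor(private rules: Rule[]) {}

  // Returns the ids of rules that fire now and are outside their cooldown.
  evaluate(metrics: Metrics, now = Date.now()): string[] {
    const nudges: string[] = [];
    for (const rule of this.rules) {
      const last = this.lastFired.get(rule.id) ?? -Infinity;
      if (now - last >= rule.cooldownMs && rule.fires(metrics)) {
        this.lastFired.set(rule.id, now);
        nudges.push(rule.id);
      }
    }
    return nudges;
  }
}
```

The per-rule cooldown is what keeps nudges "non-intrusive": a rule that keeps firing only produces one nudge per cooldown window.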

All video/audio analysis runs entirely in the browser — no analysis data is sent to any server (media transport itself goes through LiveKit).
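Per-participant talk time from VAD speech-start/end events can be accumulated as in this sketch (illustrative — not the repo's actual tracker):

```typescript
// Sketch: turn VAD speech-start/end events into per-participant talk-time shares.
class TalkTimeTracker {
  private totals = new Map<string, number>();  // participant id -> total ms spoken
  private started = new Map<string, number>(); // participant id -> current segment start

  onSpeechStart(id: string, now: number): void {
    this.started.set(id, now);
  }

  onSpeechEnd(id: string, now: number): void {
    const start = this.started.get(id);
    if (start === undefined) return; // end without a matching start: ignore
    this.started.delete(id);
    this.totals.set(id, (this.totals.get(id) ?? 0) + (now - start));
  }

  // Fraction of all recorded speech attributed to `id` (0 when nobody has spoken).
  share(id: string): number {
    let sum = 0;
    for (const v of this.totals.values()) sum += v;
    return sum === 0 ? 0 : (this.totals.get(id) ?? 0) / sum;
  }
}
```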

Deployment

Deploy to Vercel, Railway, Fly.io, or any standard host. LiveKit Cloud handles WebRTC.

fly launch
fly deploy

Ensure HTTPS is enabled (required for getUserMedia and WebRTC in production).
