Vibe Learning is a fully client-side web app that helps students convert handwritten or photographed solutions into LLM-ready text. It streamlines OCR, prompt tuning, and step-by-step reasoning so that you can plug in an OpenAI-compatible endpoint, press go, and start reviewing the model’s thought process on any mobile browser.
- ✅ OpenAI-Compatible API – Works with OpenAI-style endpoints and supports streaming responses out of the box.
- 📸 Multi-format Image Recognition – Upload HEIC, JPEG, PNG, WEBP, or other common formats for automatic OCR.
- 🤖 Thinking Trace Visualisation – Stream the assistant’s reasoning in real time and convert it to Markdown + LaTeX (MathJax v3) with one click.
- 💾 Local Storage Only – Problems, reasoning steps, and history live in IndexedDB; no server is required.
- 🌓 Mobile-First Dark Mode – Tailwind CSS UI with true black OLED support and touch-friendly layouts.
- 🌍 Bilingual UI – English and Chinese translations ship with the app.
- 📱 Installable PWA – Add to the home screen and continue working offline for core tasks.
- 🔒 Privacy First – API keys never leave the browser.
- ✅ Memory-Friendly Rendering – A manual `Render` control keeps resource usage low (currently disabled while LaTeX rendering is being repaired).
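The streaming support mentioned above relies on the Server-Sent Events format that OpenAI-style endpoints emit. As a minimal sketch (the function name and frame handling are illustrative, not the app's actual code), each `data:` frame carries a JSON delta whose text content is accumulated:

```typescript
// Extract the text deltas from a chunk of OpenAI-style SSE data.
// Illustrative sketch; a real client would also buffer partial frames
// that arrive split across network chunks.
function extractDeltas(sseChunk: string): string {
  let text = "";
  for (const line of sseChunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    try {
      const json = JSON.parse(payload);
      text += json.choices?.[0]?.delta?.content ?? "";
    } catch {
      // Incomplete JSON frame; skip it in this simplified sketch.
    }
  }
  return text;
}
```

Feeding the chunks of a `fetch` response body through a parser like this is what lets the reasoning trace appear incrementally instead of all at once.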
- Framework: Vite + React 18 + TypeScript
- Styling: Tailwind CSS (mobile-first + dark mode)
- State Management: Zustand with persistence
- Storage: IndexedDB via Dexie
- Internationalisation: i18next
- Math Rendering: MathJax v3 (lazy loaded)
- PWA: vite-plugin-pwa
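Since problems and reasoning live entirely in IndexedDB via Dexie, each session boils down to a plain record. A hypothetical record shape (field names are assumptions for illustration, not the app's actual schema) might look like:

```typescript
// Hypothetical shape of a problem record persisted in IndexedDB via
// Dexie. Field names here are illustrative assumptions.
interface ProblemRecord {
  id: string;
  createdAt: number;    // Unix epoch, milliseconds
  imageText: string;    // OCR result for the uploaded image
  notes: string;        // extra user-supplied context
  reasoning: string[];  // streamed reasoning chunks
}

function newProblemRecord(imageText: string): ProblemRecord {
  return {
    // Simple unique-enough ID for a local-only store.
    id: Date.now().toString(36) + Math.random().toString(36).slice(2),
    createdAt: Date.now(),
    imageText,
    notes: "",
    reasoning: [],
  };
}
```

Keeping records this flat is what makes a serverless, local-only history straightforward to query and export.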
- Fix MathJax rendering for LaTeX (temporary workaround: https://jupyter.org/try-jupyter/lab/).
- Add multimodal toggle controls.
Requires Node.js 18+ and npm.
```bash
# Install dependencies
npm install --legacy-peer-deps

# Start the development server
npm run dev

# Build for production
npm run build

# Preview the production build
npm run preview
```

If you hit npm cache permission errors:

```bash
sudo chown -R $(id -u):$(id -g) "$HOME/.npm"
npm install --legacy-peer-deps
```

- Settings – Enter the API base URL, token, and model name on the Settings page.
- Prepare – Upload or capture a problem image, then click `Recognize` to run OCR (optionally add steps, official answers, or notes).
- Supplement – Provide any extra context that will help the LLM reason correctly.
- Start Solving – Tap `Start Solving` to stream the assistant’s reasoning and responses.
- Render – Press `Render` to convert the stream into Markdown + LaTeX (currently disabled until the pipeline fix ships).
- History – Open `History` to review past sessions, copy results, use `Ask Again`, or send follow-up questions.
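The Settings values feed directly into the request URL. As a minimal sketch, the configured base URL can be joined to the OpenAI-style chat path (the `/chat/completions` convention is an assumption about typical OpenAI-compatible servers, not a statement about the app's internals):

```typescript
// Build a chat-completions URL from the configured base URL.
// The "/chat/completions" suffix follows the common OpenAI-style
// convention; whether the app appends it itself is an assumption.
function chatCompletionsUrl(baseUrl: string): string {
  // Strip any trailing slashes so the join never doubles them.
  return baseUrl.replace(/\/+$/, "") + "/chat/completions";
}
```

Normalizing trailing slashes like this avoids the classic `…/v1//chat/completions` failure when users paste base URLs in slightly different forms.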
```
/src
  /app        - App.tsx and routing
  /pages      - Home, Settings, Prepare, Verify, Solve, History
  /components - Reusable UI components
  /state      - Zustand stores
  /lib        - Core utilities: OCR, LLM, storage, etc.
  /types      - TypeScript type definitions
  /styles     - Global styles & Tailwind config
```
Ideal for static hosting services (GitHub Pages, Netlify, etc.):

```bash
npm run build
```

Deploy the contents of `dist/` to your hosting provider.
For a CI/CD example, see .github/workflows/deploy.yml.
- API keys stay in the browser’s localStorage.
- Requests go directly from the browser to your configured LLM endpoint—no intermediary server.
- No analytics or tracking scripts are bundled.
- Exported history automatically excludes credentials.
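The credential-free export can be pictured as serializing only session content; the names below are illustrative, not the app's actual export code:

```typescript
// Illustrative sketch: exported history contains only session data.
// API settings (base URL, token) live separately in localStorage and
// are never referenced here, so the export cannot leak them.
interface Session {
  id: string;
  question: string;
  answer: string;
}

function exportHistory(sessions: Session[]): string {
  return JSON.stringify({ version: 1, sessions });
}
```

Excluding credentials by construction, rather than filtering them out after the fact, means there is no code path through which a key could end up in an exported file.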
Issues and pull requests are welcome.
Check out CLAUDE.md for development guidance.
Licensed under MIT.