Vibe Learning

Chinese README (中文说明)

Vibe Learning is a fully client-side web app that helps students convert handwritten or photographed solutions into LLM-ready text. It streamlines OCR, prompt tuning, and step-by-step reasoning: plug in an OpenAI-compatible endpoint, start a run, and review the model's reasoning from any mobile browser.

Core Features

  • OpenAI-Compatible API – Works with OpenAI-style endpoints and supports streaming responses out of the box.
  • 📸 Multi-format Image Recognition – Upload HEIC, JPEG, PNG, WEBP, or other common formats for automatic OCR.
  • 🤖 Thinking Trace Visualisation – Stream the assistant’s reasoning in real time and convert it to Markdown + LaTeX (MathJax v3) with one click.
  • 💾 Local Storage Only – Problems, reasoning steps, and history live in IndexedDB; no server is required.
  • 🌓 Mobile-First Dark Mode – Tailwind CSS UI with true black OLED support and touch-friendly layouts.
  • 🌍 Bilingual UI – English and Chinese translations ship with the app.
  • 📱 Installable PWA – Add to the home screen and continue working offline for core tasks.
  • 🔒 Privacy First – API keys never leave the browser.
  • Memory-Friendly Rendering – Manual Render control keeps resource usage low (currently disabled while LaTeX rendering is being repaired).
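The streaming support in the first bullet follows the OpenAI-style server-sent-events wire format. As a rough sketch (the function and field names below are illustrative, not the app's actual code), each `data:` line carries a JSON chunk whose `choices[0].delta.content` holds the next text fragment:

```typescript
// Minimal parser for OpenAI-style SSE chunks (illustrative sketch only).
// Each event line looks like:
//   data: {"choices":[{"delta":{"content":"Hi"}}]}
// and the stream ends with:  data: [DONE]
export function extractDeltas(sseText: string): string[] {
  const deltas: string[] = [];
  for (const line of sseText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blanks and comments
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;            // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    const content = chunk.choices?.[0]?.delta?.content;
    if (typeof content === "string") deltas.push(content);
  }
  return deltas;
}
```

In a real client the chunks arrive incrementally from a `fetch` response body rather than as one string, but the per-line parsing is the same.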

Tech Stack Overview

  • Framework: Vite + React 18 + TypeScript
  • Styling: Tailwind CSS (mobile-first + dark mode)
  • State Management: Zustand with persistence
  • Storage: IndexedDB via Dexie
  • Internationalisation: i18next
  • Math Rendering: MathJax v3 (lazy loaded)
  • PWA: vite-plugin-pwa

Pending Vibe-Coding Work

Installation & Run

Requires Node.js 18+ and npm.

# Install dependencies
npm install --legacy-peer-deps

# Start the development server
npm run dev

# Build for production
npm run build

# Preview the production build
npm run preview

If you hit npm cache permission errors:

sudo chown -R $(id -u):$(id -g) "$HOME/.npm"
npm install --legacy-peer-deps

How to Use

  1. Settings – Enter the API base URL, token, and model name on the Settings page.
  2. Prepare – Upload or capture a problem image, then click Recognize to run OCR (optionally add steps, official answers, or notes).
  3. Supplement – Provide any extra context that will help the LLM reason correctly.
  4. Start Solving – Tap Start Solving to stream the assistant’s reasoning and responses.
  5. Render – Press Render to convert the stream into Markdown + LaTeX (currently disabled until the pipeline fix ships).
  6. History – Open History to review past sessions, copy their contents, re-run them with Ask Again, or send follow-up questions.

Project Structure (Overview)

/src
  /app        - App.tsx and routing
  /pages      - Home, Settings, Prepare, Verify, Solve, History
  /components - Reusable UI components
  /state      - Zustand stores
  /lib        - Core utilities: OCR, LLM, storage, etc.
  /types      - TypeScript type definitions
  /styles     - Global styles & Tailwind config

Deployment Tips

Ideal for static hosting services (GitHub Pages, Netlify, etc.):

npm run build

Deploy the contents of dist/ to your hosting provider.
For a CI/CD example, see .github/workflows/deploy.yml.

Security & Privacy

  • API keys stay in the browser’s localStorage.
  • Requests go directly from the browser to your configured LLM endpoint—no intermediary server.
  • No analytics or tracking scripts are bundled.
  • Exported history automatically excludes credentials.
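The last bullet can be pictured as a small sanitizer run over each record before export. The record shape and field names below are assumptions for illustration, not the app's actual types:

```typescript
// Illustrative sketch: drop credential fields from a session record
// before it is exported. The record shape is assumed, not taken from
// the app's source.
type SessionRecord = {
  problem: string;
  answer: string;
  apiKey?: string; // must never appear in an export
};

export function sanitizeForExport(record: SessionRecord) {
  const { apiKey, ...safe } = record; // strip the key, keep the rest
  return safe;
}
```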

Contributing

Issues and pull requests are welcome.
Check out CLAUDE.md for development guidance.
Licensed under MIT.
