
system-sifu

A command-line tool to practice system design interviews with AI-powered feedback using LangChain and Ollama.

🚀 Quick Start

1. Install Ollama (Recommended - Free & Local)

macOS:

brew install ollama

Linux:

curl -fsSL https://ollama.ai/install.sh | sh

Windows: Download from https://ollama.ai

2. Start Ollama and Download Model

# Start Ollama service (keep this terminal open)
ollama serve

# In a new terminal, download the recommended model
ollama pull gemma3

# Verify installation
ollama list

3. Install and Run the CLI

# Clone and setup
git clone git@github.com:yltw27/system-sifu.git
cd system-sifu
npm install
npm run build

# Configure settings (first time only)
npm start config

# Start your first interview!
npm start interview

✨ Features

  • 🆓 Local AI with Ollama - No API keys required, runs offline
  • 🤖 Multiple Providers - Supports Ollama, OpenAI, and Anthropic
  • 🎯 Experience-based Questions - Tailored for Junior to Staff+ levels
  • 📊 Structured Interview Flow - Mirrors real FAANG interview process
  • 💬 Detailed AI Feedback - Comprehensive performance analysis
  • ⚙️ Persistent Configuration - Saves your preferences

📋 Interview Process

The CLI follows a structured interview format used by major tech companies:

1. Clarifying Questions (5 minutes)

  • Understand functional requirements
  • Identify non-functional requirements
  • Clarify scope and constraints

2. Back-of-envelope Estimation (10 minutes)

  • Calculate system scale (users, requests/sec)
  • Estimate storage requirements
  • Determine bandwidth needs
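As an illustration of the kind of arithmetic this phase expects, here is a quick sketch for a hypothetical URL shortener. All figures (user counts, record size, read/write ratio) are example assumptions, not output of the tool:

```typescript
// Illustrative back-of-envelope math for a hypothetical URL shortener.
// Every figure below is an assumption chosen for the example.
const DAU = 100_000_000;           // daily active users (assumed)
const writesPerUserPerDay = 0.1;   // new short links per user per day (assumed)
const readsPerWrite = 10;          // read-heavy workload (assumed)

const writesPerSec = (DAU * writesPerUserPerDay) / 86_400; // seconds per day
const readsPerSec = writesPerSec * readsPerWrite;

const bytesPerRecord = 500;        // short URL + metadata (assumed)
const storagePerYearTB =
  (DAU * writesPerUserPerDay * bytesPerRecord * 365) / 1e12;

console.log(`writes/sec ~ ${Math.round(writesPerSec)}`);        // ~116
console.log(`reads/sec  ~ ${Math.round(readsPerSec)}`);         // ~1157
console.log(`storage/yr ~ ${storagePerYearTB.toFixed(1)} TB`);  // ~1.8 TB
```

Round numbers and order-of-magnitude accuracy are the goal here, not precision.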

3. High-level System Design (15 minutes)

  • Create architecture diagram using Mermaid syntax
  • Identify major components
  • Show data flow
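A minimal Mermaid sketch of what such a diagram might look like (the component names here are illustrative, not produced by the tool):

```mermaid
graph LR
    Client --> LB[Load Balancer]
    LB --> App[App Servers]
    App --> Cache[(Cache)]
    App --> DB[(Database)]
```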

4. Technical Deep Dive (15 minutes)

  • Discuss technology choices
  • Address scalability concerns
  • Explore trade-offs and alternatives

5. AI-Powered Feedback

  • Detailed performance analysis
  • Specific improvement suggestions
  • Score across multiple dimensions

⚙️ Configuration

Run npm start config to customize:

  • Experience Level: Junior, Mid, Senior, Staff+
  • Interview Duration: 15-90 minutes (< 30 mins = speed round, skips diagram)
  • AI Provider: Ollama (local), OpenAI, or Anthropic
  • Model Selection: Choose based on your preferences

🔧 Commands

# Start a practice interview
npm start interview

# Update your configuration
npm start config

# Show help
npm start --help

# Development mode
npm run dev interview

🛠️ Troubleshooting

Ollama Connection Issues

Check if Ollama is running:

curl http://localhost:11434/api/tags

Common solutions:

# Restart Ollama service
ollama serve

# List available models
ollama list

# Pull missing model
ollama pull gemma3

# Check system resources
ollama ps

🌟 Why Use This Tool?

For Job Seekers

  • Practice realistic system design interviews
  • Get instant, detailed feedback
  • Build confidence before real interviews
  • Track improvement over time

For Privacy-Conscious Users

  • Everything runs locally with Ollama
  • No data sent to external APIs
  • Practice with proprietary system designs safely

For Cost-Conscious Users

  • Completely free with Ollama
  • No API usage limits or charges
  • Practice as much as you want

🤝 Contributing

Contributions welcome! This project uses:

  • TypeScript for type safety
  • LangChain for AI integration
  • Commander.js for CLI interface
  • Inquirer for interactive prompts

📄 License

MIT License - see LICENSE for details.

🙏 Acknowledgments

  • Built with LangChain for AI orchestration
  • Powered by Ollama for local AI inference
  • Inspired by real system design interview practices
