proto is a multi-model AI agent for the terminal. It connects to any OpenAI-compatible, Anthropic, or Gemini API endpoint. It's a fork of Qwen Code (which is based on Gemini CLI), rebuilt as a model-agnostic coding agent with features from the protoLabs Studio ecosystem.
- Multi-model: Connect any OpenAI-compatible, Anthropic, or Gemini endpoint
- beads_rust task management: SQLite-backed persistent tasks via the `br` CLI with task claiming
- MCP support: Configure MCP servers in settings for tool extensions
- Plugin discovery: Auto-discovers Claude Code plugins from `~/.claude/plugins/`
- Agentic workflow: Rich built-in tools (Skills, SubAgents, Plan Mode)
- Agent teams: Background agent spawning, shared task list, inter-agent mailbox, `/team` command
- Hooks system: Command, HTTP, and prompt hook types with async execution and fine-grained `if` filtering
- Harness engineering: Pre-flight baseline check, post-edit verification, 9-section compression, memory consolidation
- Terminal-first, IDE-friendly: Built for developers who live in the command line
- Runtime: Node.js 20+
- Language: TypeScript 5.3+
- Package Manager: npm with workspaces
- Build Tool: esbuild
- Testing: Vitest
- Linting: ESLint + Prettier
- UI Framework: Ink (React for CLI)
- React Version: 19.x
- Task Management: beads_rust (`br` CLI, SQLite + JSONL)
```
├── packages/
│   ├── cli/                   # Command-line interface (main entry point)
│   ├── core/                  # Core backend logic and tool implementations
│   ├── sdk-typescript/        # TypeScript SDK
│   ├── test-utils/            # Shared testing utilities
│   ├── vscode-ide-companion/  # VS Code extension companion
│   ├── webui/                 # Web UI components
│   └── zed-extension/         # Zed editor extension
├── scripts/                   # Build and utility scripts
├── docs/                      # Documentation source
├── docs-site/                 # Documentation website (Next.js)
├── integration-tests/         # End-to-end integration tests
└── eslint-rules/              # Custom ESLint rules
```
The main CLI package (`packages/cli`), providing:
- Interactive terminal UI using Ink/React
- Non-interactive/headless mode
- Authentication handling (OAuth, API keys)
- Configuration management
- Command system (`/help`, `/clear`, `/compress`, etc.)
The core library (`packages/core`), containing:
- Tools: File operations (read, write, edit, glob, grep), shell execution, web fetch, LSP integration, MCP client, task management (beads_rust)
- Memory: File-per-memory system with YAML frontmatter, 4-type taxonomy (user/feedback/project/reference), auto-extraction agent, MEMORY.md index
- Subagents: Task delegation with `disallowedTools`, `permissionMode`, background execution
- Agent Teams: Coordinator agent, TeamMailbox, shared task claiming, team config + lifecycle
- Skills: 18 bundled skills for agentic workflows (incl. adversarial-verification, coding-agent-standards)
- Hooks: Command, HTTP, and prompt hook types; async execution; `if` field for arg filtering; team lifecycle events
- Models: Model configuration and registry for any OpenAI-compatible API
- Services: Git integration, file discovery, session management
- LSP Support: Language Server Protocol integration
- MCP: Model Context Protocol client for tool extensions
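As a purely hypothetical sketch of how the hook types listed above could compose in settings — the field names here are illustrative assumptions, not proto's actual hook schema:

```json
{
  "hooks": [
    {
      "type": "command",
      "command": "npm run lint",
      "if": { "tool": "edit" },
      "async": true
    },
    {
      "type": "http",
      "url": "https://hooks.example.internal/notify",
      "if": { "event": "team-lifecycle" }
    }
  ]
}
```

Check the hooks documentation for the real configuration shape; the point here is only that command and HTTP hooks carry an `if` filter and can run asynchronously.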
- Node.js: ~20.19.0 for development (use nvm to manage versions)
- Git
- Rust toolchain (optional, for beads_rust task management)
```shell
git clone https://github.com/protoLabsAI/protoCLI.git
cd protoCLI
npm install

# Build and link globally (recommended)
npm run ship
```
```shell
# Build all packages
npm run build

# Development mode (runs from source, no build needed)
npm run dev
```

```shell
# Start interactive CLI
proto

# One-shot mode
proto -p "your question"

# Or from the repo
npm start
```

```shell
npm run test       # Unit tests
npm run test:e2e   # Integration tests
```

```shell
npm run preflight  # All checks (lint, format, build, test)
npm run lint       # ESLint
npm run format     # Prettier
npm run typecheck  # Type check
```

Settings live in `~/.proto/settings.json` (global) and `.proto/settings.json` (per-project).
Key config sections:
- `modelProviders`: API endpoints and model definitions
- `mcpServers`: MCP server configurations
- `env`: API keys and environment variables
- `model.name`: Default model
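As an illustration, a minimal `~/.proto/settings.json` might look like the sketch below. The top-level keys (`modelProviders`, `mcpServers`, `env`, `model.name`) are the documented sections; the nested field names and values are assumptions for illustration, not proto's verified schema:

```json
{
  "modelProviders": {
    "local": {
      "baseUrl": "http://localhost:8000/v1",
      "models": ["my-local-model"]
    }
  },
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  },
  "env": {
    "LOCAL_API_KEY": "sk-placeholder"
  },
  "model": {
    "name": "my-local-model"
  }
}
```

Per-project `.proto/settings.json` files follow the same shape and override the global file.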
Set `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` (plus the optional `LANGFUSE_BASE_URL`) to trace all sessions in Langfuse. Tracing activates automatically; no other configuration is needed. Traces cover session turns, LLM calls across all providers (with token counts), tool calls, and subagents.
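For example, exporting the variables before launching proto is enough to enable tracing (the key values below are placeholders; substitute your real Langfuse project credentials):

```shell
# Illustrative placeholder keys; substitute your real Langfuse credentials.
export LANGFUSE_PUBLIC_KEY="pk-lf-your-public-key"
export LANGFUSE_SECRET_KEY="sk-lf-your-secret-key"
# Optional: only needed for self-hosted Langfuse instances.
export LANGFUSE_BASE_URL="https://langfuse.example.internal"
# Then launch proto as usual; tracing activates automatically:
# proto -p "your question"
```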
For any file search or grep in this repo, use the fff MCP tools instead of the built-in Glob/Grep tools. FFF is faster, typo-resistant, and builds frecency memory, so results improve over time.
- `fff__grep`: search file contents by identifier (default choice)
- `fff__find_files`: find files by name/topic when you don't have a specific identifier
- `fff__multi_grep`: search multiple patterns at once (e.g. snake_case + PascalCase variants)
Proto uses a file-per-memory architecture with YAML frontmatter. Each memory is a `.md` file in `.proto/memory/` (project) or `~/.proto/memory/` (global). Four types: user, feedback, project, reference.

A `MEMORY.md` index is auto-generated and loaded into the system prompt. After each turn, a background extraction agent reviews recent messages and auto-creates memories.
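As a sketch, a project-scoped memory file might look like the following. The `type` value comes from the four-type taxonomy above; the other frontmatter field names are illustrative assumptions, not proto's actual schema:

```markdown
---
type: project
name: build-and-link
---
Build and link the CLI globally with `npm run ship`.
```

Files like this are what the auto-generated `MEMORY.md` index enumerates and what the background extraction agent writes after each turn.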
- `/help`: Display available commands
- `/model`: Switch models
- `/skills`: List available skills
- `/memory show`: Display loaded memory content
- `/memory list`: List all memories with metadata
- `/memory add <fact>`: Save a memory
- `/memory forget <name>`: Delete a memory
- `/memory refresh`: Reload from disk
- `/clear`: Clear conversation history
- `/compress`: Compress history to save tokens
- `/stats`: Show session information
- `/exit` or `/quit`: Exit proto