A fun and functional terminal AI assistant using ChatGPT, Claude, or local models via Ollama.
Just type `ai do something` and let DMC help you drop command-line hits. Works with cloud-based and local AI models on Linux, macOS, and Windows (WSL).
See features.md for a complete list of features and capabilities.
```bash
bash <(curl -s https://raw.githubusercontent.com/kenshub/ai-run-cmd/main/scripts/install.sh)
```

Clones the repo, installs dependencies, sets up `.env`, and updates your `.bashrc` or `.zshrc`. On macOS, Homebrew will be installed automatically if not already present.
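The rc-file update amounts to an idempotent append. A minimal sketch, assuming a `~/.ai-run-cmd/ai.sh` install path (that path is an illustration, and the demo writes to a temp file rather than your real `~/.bashrc`):

```bash
# Idempotently add a "source" line to a shell rc file.
# Demo uses a temp file; the real installer targets ~/.bashrc or ~/.zshrc,
# and the sourced path below is an assumed install location.
RC_FILE="$(mktemp)"
SOURCE_LINE='source "$HOME/.ai-run-cmd/ai.sh"'
grep -qxF "$SOURCE_LINE" "$RC_FILE" || echo "$SOURCE_LINE" >> "$RC_FILE"
grep -qxF "$SOURCE_LINE" "$RC_FILE" || echo "$SOURCE_LINE" >> "$RC_FILE"  # re-running is a no-op
grep -cxF "$SOURCE_LINE" "$RC_FILE"  # the line appears exactly once
```

The `grep -qxF` guard is what makes re-running the installer safe: the line is only appended when an exact match isn't already present.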
Download and run `install.ps1` in PowerShell:

```powershell
irm https://raw.githubusercontent.com/kenshub/ai-run-cmd/main/install.ps1 | iex
```

Installs WSL with Ubuntu if needed (requires a reboot), then runs the standard installer inside WSL.
See install.md for manual setup instructions.
| Provider | Requires |
|---|---|
| OpenAI (default) | API key |
| Anthropic (Claude) | API key |
| Groq | API key |
| Mistral | API key |
| Google Gemini | API key |
| Ollama (local) | No key — runs on your machine |
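For the cloud providers, keys go in the `.env` file the installer creates. A minimal sketch, assuming conventional variable names (the exact keys your version reads are documented in install.md):

```bash
# Write a minimal .env (variable names and the placeholder key
# are assumptions for illustration; see install.md for the real ones).
cat > .env <<'EOF'
AI_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
EOF
grep '^AI_PROVIDER=' .env
```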
Switch providers anytime:

```bash
ai provider ollama
ai provider openai
```

Run AI entirely on your machine, with no internet connection or API key required:

```bash
ai install-local
```

This installs Ollama and lets you choose a model:
| Model | Size | Best for |
|---|---|---|
| `qwen2.5:0.5b` | ~400MB | Fastest, everyday commands |
| `llama3.2:1b` | ~1.3GB | Good balance |
| `phi3:mini` | ~2.2GB | More capable responses |
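As a rough guide, the right model tracks how much memory your machine has. A sketch of that heuristic (the thresholds are assumptions for illustration, not the installer's actual logic):

```bash
# Pick a local model by total system memory (thresholds are illustrative).
MEM_GB=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo 2>/dev/null || echo 8)
MEM_GB=${MEM_GB:-8}   # fall back to 8 GB where /proc/meminfo is absent (e.g. macOS)
if [ "$MEM_GB" -ge 8 ]; then
  MODEL="phi3:mini"          # room for the more capable model
elif [ "$MEM_GB" -ge 4 ]; then
  MODEL="llama3.2:1b"        # balanced middle option
else
  MODEL="qwen2.5:0.5b"       # smallest footprint
fi
echo "$MODEL"
```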
Example commands:

```bash
ai restart apache
ai run docker prune
ai list files by size
```

Special commands:

```bash
ai explain tar -czpf     # explain what a command does
ai rap hard drive space  # get the answer in rap format
ai install-local         # set up a local LLM
ai provider ollama       # switch AI provider
ai debug on              # enable debug output
```

If this helped you out:
Or just ⭐ the repo and tell a friend.
