A local AI assistant built on top of Ollama, enabled via MCP
- Clone the repository
  - run `git clone <repo-url>`
- Install `uv`
  - Use your package manager of choice, or install it directly
- Set up your virtual environment (venv)
  - run `uv venv` in the root directory
- Sync dependencies
  - run `uv sync` to download and sync the dependencies
- Install Ollama on your computer (from the command line, using your package manager)
- Pull the model we are using (5.2 GB)
  - run `ollama pull qwen3` (this can take up to an hour)
- Run the CLI
  - `cd` into `src`
  - run `uv run main.py server.py` to initialize the CLI
    - type `ping` to test the tool, or `quit` to end the chat
  - run `uv run main.py server.py --gui` to use the assistant with a GUI
    - click Disconnect, then press `Ctrl+C` in the terminal to stop the GUI
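For reference, the steps above can be run end to end as a single sequence. This is a sketch assuming a Unix-like shell with `uv` and Ollama already installed; `<repo-url>` stands in for the actual repository URL, and the directory name after cloning depends on the repository:

```shell
# Clone the project and enter it (replace <repo-url> with the real URL)
git clone <repo-url>
cd <repo-directory>   # whatever directory the clone created

# Create the virtual environment and sync dependencies with uv
uv venv
uv sync

# Download the qwen3 model (~5.2 GB; this can take a while)
ollama pull qwen3

# Launch the CLI (add --gui for the graphical interface)
cd src
uv run main.py server.py
```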
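Independently of the CLI, you can sanity-check that Ollama is serving the model. This is a minimal sketch using only the Python standard library, assuming Ollama's default local endpoint (`http://localhost:11434`) and the `qwen3` model pulled above:

```python
import json
import urllib.request

# Ollama's default local HTTP endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming /api/generate request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def ask(prompt: str, model: str = "qwen3") -> str:
    """Send a one-shot prompt to the local Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server to be running):
#   print(ask("Reply with the single word: pong"))
```

If this returns a reply, the model is available and the assistant's CLI/GUI should be able to reach it as well.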