A local AI companion with memory, voice, plugins, and a desktop-like web interface
🇷🇺 Русский | 🇬🇧 English
What is this • Features • Quick Start • Architecture • Configuration • Plugins • Contributors
## What is this

DARIA (Desktop AI Reactive Intelligent Assistant) is a local AI companion built to run on your own machine with Ollama. It is designed to feel like a persistent desktop assistant rather than a generic chat window.
Out of the box, DARIA ships with Dasha: a warm Russian-speaking persona with her own tone, mood, preferences, and long-term memory.
It combines:
- 🧠 Long-term memory that keeps track of facts, preferences, and prior conversations
- 💬 Emotion-aware dialogue with mood shifts, emotion classification, and softer response shaping
- 🖥️ Desktop-style web UI with windows, widgets, and notifications in the browser
- 🎙️ Voice support with speech-to-text and text-to-speech backends
- 📚 RAG over local files, URLs, and RSS sources
- 🔌 Plugins for extending behavior without touching the core app
- 🖱️ Optional desktop control for screenshot capture and system interaction
- 🔧 CLI mode for a lighter workflow without the web UI
In short, DARIA is a customizable local companion project: part assistant, part character system, part personal AI sandbox.
## Features

| Feature | Description |
|---|---|
| Local LLM | Ollama-powered, fully offline. Supports any model (gemma, llama, mistral…) |
| Emotional system | 17 moods, ML emotion classifier (scikit-learn), natural transitions |
| Long-term memory | SQLite + vector search. Remembers facts, preferences, past events |
| Memory Gatekeeper | LLM-based gate — retrieves memories only when relevant |
| Thinking pipeline | Evaluator → Dasha → Checker for higher-quality responses |
| Voice | STT via faster-whisper, TTS via QwenTTS/Silero, optional voice-clone integration |
| RAG | Files (.md/.txt/.pdf), URLs, RSS. Sentence-transformers + TF-IDF + BM25 |
| Desktop control | Screenshots + mouse/keyboard via mss/pyautogui/xdotool (permission-gated) |
| Plugin system | Lifecycle hooks, isolated venvs, GitHub catalog, ZIP installs |
| MCP connectors | SQLite / PostgreSQL via Model Context Protocol |
| Multi-user mode | Per-user memory isolation for shared deployments |
| Streaming | SSE streaming for web + CLI |
| Mobile UI | Responsive layout for phones |
| SSH tunnels | Cloudflare / localhost.run / serveo / VPS reverse tunneling |
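The RAG row above mentions BM25 as one of the retrieval signals. As an illustrative sketch only (a pure-Python scorer, not DARIA's actual retriever, which combines this with sentence-transformers and TF-IDF), BM25 ranking looks like:

```python
# Illustrative BM25 scorer of the kind used in hybrid RAG retrieval.
# Pure-Python sketch for clarity; toy documents, not DARIA's code.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc (a list of token lists) against the query tokens."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                 # term frequency in this doc
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            num = tf[term] * (k1 + 1)
            den = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * num / den
        scores.append(score)
    return scores

docs = [
    "dasha remembers user preferences".split(),
    "ollama runs the local model".split(),
    "rss feeds are ingested for rag".split(),
]
print(bm25_scores("local model".split(), docs))
```

Only the document containing both query terms receives a positive score; the hybrid retriever would blend these scores with dense-embedding similarity.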
## Quick Start

Requirements: Python 3.10+ and Ollama running locally.
```bash
# 1. Clone
git clone https://github.com/your-org/daria.git
cd daria

# 2. Install
python install.py

# 3. Pull a model
ollama pull gemma3:4b

# 4. Run
python run.py
```

Open http://localhost:7777 in your browser.
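DARIA depends on a locally running Ollama instance. As a quick sanity check independent of DARIA itself, you can call Ollama's own REST API on its default port 11434. The endpoint and payload fields below follow Ollama's public API; the helper functions themselves are not part of DARIA:

```python
# Sanity-check helper for a local Ollama instance (http://localhost:11434).
# Uses only Ollama's public /api/generate endpoint; not DARIA's own code.
import json
import urllib.request

def build_generate_payload(model, prompt):
    """Payload for Ollama's POST /api/generate (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model, prompt, host="http://localhost:11434"):
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running):
#   print(ask_ollama("gemma3:4b", "Say hi in one word."))
```

If this call fails, DARIA will not be able to reach a model either, so it is a useful first debugging step.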
CLI mode:

```bash
python cli.py
python cli.py --model gemma3:4b
echo "Привет!" | python cli.py   # pipe mode
```

Updating:

```bash
python run.py --update       # interactive
python run.py --update-yes   # non-interactive / CI
```

## Architecture

```
daria/
├── core/
│   ├── brain.py               # DariaBrain — central response engine
│   ├── memory.py              # WorkingMemory, LongTermMemory, MemoryGatekeeper
│   ├── llm.py                 # Ollama provider (streaming, vision, embeddings)
│   ├── config.py              # DariaConfig — YAML + env loading
│   ├── emotion_classifier.py  # scikit-learn ML emotion classifier (14 classes)
│   ├── desktop.py             # Physical desktop capture & control
│   ├── plugins.py             # Plugin loader, lifecycle, venvs
│   └── debug.py               # Per-module debug (DARIA_DEBUG_*)
├── web/                       # Flask server, SSE streaming, REST API
├── voice/                     # STT (faster-whisper), TTS (QwenTTS/Silero)
├── rag/                       # KnowledgeStore, Ingester, Retriever
├── mcp/                       # Model Context Protocol connectors
├── plugins/                   # Built-in plugins (tamagotchi, telegram-bot, …)
├── config/
│   ├── config.yaml            # Main settings
│   ├── persona.yaml           # Dasha's system prompt template
│   ├── character_memory.yaml  # Dasha's self-knowledge
│   └── prompts.yaml           # User-editable prompt variables (hot-reloaded)
├── cli.py                     # CLI REPL
├── run.py                     # Unified entry point
└── tests/                     # Regression suite (pytest)
```
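The Evaluator → Dasha → Checker thinking pipeline from the feature table can be sketched as a simple chain of model calls around DariaBrain's role. This is an illustrative shape only: the `call_llm` stub and the stage prompts are hypothetical, not the project's actual code or prompt files.

```python
# Illustrative three-stage pipeline: Evaluator → Dasha → Checker.
# call_llm is a stub standing in for an Ollama request; the stage
# prompts are hypothetical and do not mirror DARIA's real prompts.
def call_llm(prompt):
    # Stub: a real implementation would query Ollama here.
    return f"[llm:{prompt[:20]}…]"

def respond(user_message):
    # 1. Evaluator: decide what the message needs (facts, mood, memories).
    plan = call_llm(f"Evaluate this message and plan a reply: {user_message}")
    # 2. Dasha: draft the in-character answer using the plan.
    draft = call_llm(f"As Dasha, reply following this plan: {plan}")
    # 3. Checker: verify tone and quality before sending.
    final = call_llm(f"Check and fix this draft: {draft}")
    return final

print(respond("Привет, помнишь мои любимые книги?"))
```

Each stage sees only the previous stage's output, which is what lets a cheap local model self-correct before the reply reaches the user.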
## Configuration

`config/config.yaml` is safe to publish as a template, but keep real secrets and deployment-specific values out of git.
Public-safe example:
```yaml
web:
  host: "127.0.0.1"
  port: 7777

llm:
  provider: "ollama"
  model: "llama3.2-vision:11b"

voice:
  stt_enabled: false
  tts_enabled: false

rag:
  enabled: false

server:
  enabled: false
  admin_username: "admin"
  admin_password: ""
  secret_key: ""

tunnel:
  enabled: false
  backend: "cloudflare"
```

Use environment variables or a local-only config for secrets such as `DARIA_ADMIN_PASSWORD`, API keys, and tunnel credentials.
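`DariaConfig` merges YAML settings with environment variables. A minimal sketch of the env-override idea follows; the `DARIA_<SECTION>_<KEY>` naming convention here is an assumption for illustration, not necessarily the project's exact scheme:

```python
# Sketch: override nested config values from DARIA_* environment
# variables, e.g. DARIA_WEB_PORT=8080 → config["web"]["port"] = "8080".
# The DARIA_<SECTION>_<KEY> convention is illustrative only.
import os

def apply_env_overrides(config, prefix="DARIA_"):
    for name, value in os.environ.items():
        if not name.startswith(prefix):
            continue
        parts = name[len(prefix):].lower().split("_", 1)
        if len(parts) != 2:
            continue                      # e.g. DARIA_DEBUG has no section/key pair
        section, key = parts
        if section in config and key in config[section]:
            config[section][key] = value  # env always wins over YAML
    return config

cfg = {"web": {"host": "127.0.0.1", "port": 7777}}
os.environ["DARIA_WEB_PORT"] = "8080"
print(apply_env_overrides(cfg)["web"]["port"])  # prints: 8080
```

Keeping secrets in the environment this way means the YAML file stays publishable as-is.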
config/prompts.yaml — editable without touching code, hot-reloaded:
```yaml
extra_rules: ""        # Additional behavior rules
forbidden_topics: []   # Topics Dasha won't discuss
response_style: warm   # warm | neutral | professional
use_emoji: false
prompt_prefix: ""      # Prepended to every system prompt
user_context_hint: ""  # Context about the user
```

Per-module debug flags:

```bash
DARIA_DEBUG=1 python run.py            # all modules
DARIA_DEBUG_BRAIN=1 python run.py      # brain only
DARIA_DEBUG_MEMORY=1 python run.py     # memory only
DARIA_DEBUG_FALLBACK=1 python run.py   # fallback events
```

## Plugins

| Plugin | Description |
|---|---|
| tamagotchi | Virtual pet with 5 needs + XP system |
| telegram-bot | Telegram interface for DARIA |
| voice-call | Voice call interface |
Install from the catalog or from a ZIP archive:

```
# In-app: Settings → Plugins → Catalog
```

Plugin API hooks: `on_load`, `on_chat_message`, `on_chat_response`.
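The three hooks can be sketched as a minimal plugin class. Only the hook names come from the plugin API above; the class shape, method signatures, and example behavior are assumptions for illustration:

```python
# Minimal plugin sketch using the documented lifecycle hooks.
# Hook names (on_load, on_chat_message, on_chat_response) are from the
# plugin API; signatures and class shape here are illustrative only.
class GreeterPlugin:
    def on_load(self):
        # Called once when the plugin is loaded into DARIA.
        self.seen = 0

    def on_chat_message(self, text):
        # Called for every incoming user message.
        self.seen += 1
        return text

    def on_chat_response(self, text):
        # Called with Dasha's reply before it reaches the user.
        return text + " 🌿" if self.seen == 1 else text

plugin = GreeterPlugin()
plugin.on_load()
plugin.on_chat_message("hello")
print(plugin.on_chat_response("Привет!"))  # prints: Привет! 🌿
```

Because plugins run in isolated venvs, a real plugin would also declare its own dependencies rather than relying on DARIA's.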
- Changelog (EN) — full version history
- Changelog (RU) — Russian version of the change history
- `tests/` — regression test suite (run with `pytest tests/`)
## Contributors

Special thanks to @qtbn5jsrk9-hash for giving Dasha her voice and helping shape her personality, profile, and instructions.
Made with care. Runs offline. Remembers you. 🌿

