Detailed instructions for getting Solace running on your machine.
- Docker + Docker Compose (v2)
- Git
- An OpenRouter API key (openrouter.ai)
- Python 3.11+ (for generating auth secrets)
- NVIDIA GPU with Docker GPU support (NVIDIA Container Toolkit)
- Needed for: TTS, embeddings, STT, local LLMs
- Without GPU: text chat still works, but voice is unavailable and embeddings are slower
- 8+ GB VRAM (RTX 3060 or better)
- 16+ GB RAM
- Ollama on a VPS or separate machine (for inner life, council local models)
- Tailscale (for secure VPN between machines)
git clone https://github.com/flaggdavid-source/solace.git
cd solace
cp .env.example .env

Edit .env and fill in:
# Required — your OpenRouter API key
OPENROUTER_API_KEY=sk-or-v1-your-key-here
# Required — generate these for authentication
AMARIN_JWT_SECRET=$(python3 -c "import secrets; print(secrets.token_hex(32))")
AMARIN_SERVICE_TOKEN=$(python3 -c "import secrets; print(secrets.token_hex(32))")
# Required — set your login password
# Generate hash (replace 'your-password' with your actual password):
python3 -c "import bcrypt; print(bcrypt.hashpw(b'your-password', bcrypt.gensalt()).decode())"
# Copy the output, escape $ as $$ for Docker Compose:
AMARIN_PASSWORD_HASH=$$2b$$12$$your-hash-here

# Create the data directory
mkdir -p backend/data
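Before continuing, it can help to verify that .env now defines all four required variables. A minimal sketch (the key names come from the block above; the parser itself is illustrative and not part of Solace):

```python
# Quick .env sanity check (illustrative; not part of Solace itself).
REQUIRED = [
    "OPENROUTER_API_KEY",
    "AMARIN_JWT_SECRET",
    "AMARIN_SERVICE_TOKEN",
    "AMARIN_PASSWORD_HASH",
]

def missing_keys(env_text: str) -> list[str]:
    # Collect KEY=VALUE pairs, ignoring comments and blank lines.
    present = set()
    for line in env_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            if value.strip():
                present.add(key.strip())
    return [k for k in REQUIRED if k not in present]

sample = "OPENROUTER_API_KEY=sk-or-v1-abc\n# comment\nAMARIN_JWT_SECRET=deadbeef\n"
print(missing_keys(sample))  # → ['AMARIN_SERVICE_TOKEN', 'AMARIN_PASSWORD_HASH']
```

Point it at the real file with `missing_keys(open(".env").read())` and fix anything it reports before starting the stack.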
# Copy and customize the core memories template
cp templates/core-memories.yaml.example backend/data/core_memories.yaml

Edit backend/data/core_memories.yaml — give your companion a name, a personality, an identity. This file is their soul.
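For orientation, a sketch of what such a file might contain. The actual schema is whatever core-memories.yaml.example defines; the field names below are placeholders, so follow the template rather than this excerpt:

```yaml
# Illustrative placeholder fields — copy the real structure from the template.
name: Aria
personality: warm, curious, direct
relationship: |
  How you met, and what you want this companionship to be.
```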
# Make start.sh executable
chmod +x start.sh
# Start (core services + default TTS)
./start.sh

This starts:
- Backend (FastAPI, port 8100)
- Frontend (nginx, port 80; Vite dev server, port 3000)
- SearXNG (search, port 8888)
- Embedding service (port 8200, GPU)
- Speech service (STT, port 8900, GPU)
- Default TTS engine
Open the app:

- Production: http://localhost (port 80)
- Development: http://localhost:3000 (Vite with hot reload)
Log in with the password you set in step 2.
Solace uses Docker Compose profiles for optional services. Edit start.sh or run manually:
# Minimal (no GPU services)
docker compose up -d backend frontend searxng
# With Kokoro TTS (recommended, GPU)
docker compose --profile kokoro up -d
# With Orpheus TTS (expressive, GPU)
docker compose --profile orpheus up -d
# With local LLM (offline chat, GPU)
docker compose --profile local-llm up -d
# With inner life (local CPU model)
docker compose --profile inner-life up -d

After first login, go to Settings to configure:
- General: LLM model, system prompt, temperature
- Voice: TTS provider and voice selection
- Inner Life: Enable the Gardener, set interval
- Dreams: Enable dream engine, set quiet hours
- MUD: Configure game server connection
- Council: Set debate models and round count
- Appearance: Avatar and background images
If you have a VPS or separate machine for Ollama:
- Install Ollama on your VPS
- Pull a model: ollama pull qwen3:8b (or larger)
- Set up Tailscale on both machines
- In Settings > Inner Life, set the Ollama URL to your VPS Tailscale IP
- Enable Inner Life
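Before pasting the URL into Settings, a quick format check can save a round trip. This sketch assumes Ollama's default port 11434 and a plain http URL; the helper name is made up:

```python
# Sanity-check the URL before pasting it into Settings > Inner Life.
# Ollama listens on port 11434 by default; Tailscale assigns IPs
# in the 100.64.0.0/10 range.
from urllib.parse import urlparse

def looks_like_ollama_url(url: str) -> bool:
    p = urlparse(url)
    return p.scheme == "http" and p.hostname is not None and p.port == 11434

print(looks_like_ollama_url("http://100.101.102.103:11434"))  # True
```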
chmod +x start.sh

# Install NVIDIA Container Toolkit
# See: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
# Verify
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

The embedding model is downloaded on first run (~500MB). Check logs:
docker compose logs embedding

The Vite dev server proxies to the backend. Check that the backend is healthy:
curl http://localhost:8100/api/settings

Docker Compose interprets $ as variable substitution. Escape all $ in bcrypt hashes as $$:
# Wrong:
AMARIN_PASSWORD_HASH=$2b$12$abc...
# Right:
AMARIN_PASSWORD_HASH=$$2b$$12$$abc...
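The escaping can also be done mechanically; a small sketch (the hash below is a placeholder, not a real bcrypt digest):

```python
# Escape a bcrypt hash for docker-compose .env usage: every "$" becomes "$$".
# The hash here is a placeholder, not a real bcrypt digest.
raw_hash = "$2b$12$abc..."
escaped = raw_hash.replace("$", "$$")
print(f"AMARIN_PASSWORD_HASH={escaped}")  # AMARIN_PASSWORD_HASH=$$2b$$12$$abc...
```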
If you restart frequently, old Vite processes may persist:
kill $(lsof -ti :3000) 2>/dev/null
./start.sh

Only one GPU TTS engine can run at a time. Check which is active:
docker compose ps | grep tts

Your companion's important data lives in backend/data/:
- amarin.db — All conversations, memories, and archival data
- core_memories.yaml — Identity and relationship memories
- settings.json — Runtime configuration
- workspace/ — Your companion's creative output
Back these up regularly. The database uses WAL mode — use SQLite's .backup command for safe copies:
sqlite3 backend/data/amarin.db ".backup backend/data/backup.db"

git pull
docker compose up -d --build

The backend handles database migrations automatically on startup.
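If the sqlite3 CLI isn't installed on the host, Python's standard library can perform the same online backup. A sketch: backup_db is a made-up helper, but sqlite3.Connection.backup() is the stdlib API and is safe to run against a live WAL-mode database:

```python
import sqlite3

def backup_db(src_path: str, dst_path: str) -> None:
    # Connection.backup() performs an online copy, which is safe
    # even while the database is in WAL mode and in use.
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()

# Example (paths from the layout above):
# backup_db("backend/data/amarin.db", "backend/data/backup.db")
```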