{hatori}

Local-first personal agent with UI + API, auditable learning, and retrieval over your own knowledge.


Quick Start · Installation · Run Modes · API & Integration · Docs

Product Overview

{hatori} is an offline-first assistant platform designed for two-way operation:

  • You chat with {hatori} in the local UI (port 23571) and teach it through real interactions.
  • External apps call the API (port 23572) to ingest content, request replies, and report delivery outcomes (sent_as_is / edited_then_sent), so {hatori} learns from what was actually sent.
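The outcome half of this loop can be sketched as a payload builder. The endpoint and the external_outcome_id idempotency key come from the API contract in this README; the other body field names (status, final_text) are illustrative assumptions, not the confirmed schema:

```python
# Sketch of building a body for POST /v1/agent/outcome.
# Allowed delivery outcomes per the learning loop described above.
ALLOWED_STATUSES = {"sent_as_is", "edited_then_sent"}

def build_outcome_payload(external_outcome_id, status, final_text):
    """Build an outcome report; field names beyond the
    idempotency key are assumptions for illustration."""
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown delivery status: {status}")
    return {
        "external_outcome_id": external_outcome_id,  # idempotency key
        "status": status,
        "final_text": final_text,  # what was actually sent
    }
```

Reporting edited_then_sent together with the final text is what lets {hatori} learn from real sends rather than from its own drafts.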

Core capabilities:

  • Local UI chat and history
  • API-first integration for upstream apps
  • RAG pipeline with PostgreSQL + pgvector
  • Auditable learning loop (learning_events, delivery_events)
  • Model gateway strategy (MLX preferred, Ollama fallback)
  • Localhost-first security and collision-safe service orchestration

Why Teams Use It

  • Local and private by default
  • No hidden cloud dependency for base operation
  • Reliable ingestion/reply/outcome loop for “learn from real sends” behavior
  • Clear contract for integrations (idempotency + auth + rate limits)

Quick Start

git clone https://github.com/moldovancsaba/hatori.git
cd hatori
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r ui/requirements.txt
./tools/scripts/hatori_env_init.sh
make run

Open:

  • UI: http://127.0.0.1:23571/chat
  • API health: http://127.0.0.1:23572/v1/health

Installation

Prerequisites

Required:

  • Python 3.11+
  • Docker engine (or Colima on macOS)
  • Bash-compatible shell

Optional:

  • Ollama (recommended local generator fallback)
  • MLX-LM (Apple Silicon MLX backend)
  • Node.js (for clients/hatori-client examples)

Environment Bootstrap

Create local env + token:

./tools/scripts/hatori_env_init.sh

File created:

  • ~/.config/hatori/hatori.env (permissions: 600)

Default keys include:

  • HATORI_API_TOKEN
  • UI_PORT=23571
  • API_PORT=23572
  • HATORI_GENERATOR_ORDER=mlx,ollama

Database

make up

This starts the local PostgreSQL container hatori-pg (image pgvector/pgvector:pg16).

Run Modes

1) Foreground local stack

make run

Starts/reuses DB + API + UI with safe port ownership checks.

2) Background macOS service (auto-start friendly)

make install-service
make service-status
make service-logs

Stop only {hatori} listeners:

make stop

Uninstall:

make uninstall-service

3) macOS menu bar app

make install-HatoriMenubar
make run-HatoriMenubar

App path:

  • ~/Applications/HatoriMenubar.app

API and Integration

The following references live under docs/:

  • Canonical API contract
  • Integration guide for external apps
  • Local operations runbook

Stable API Endpoints (v1)

  • GET /v1/health
  • POST /v1/agent/respond
  • POST /v1/agent/feedback
  • POST /v1/agent/outcome
  • POST /v1/ingest/event
  • POST /v1/artefacts/upload
  • POST /v1/artefacts/ingest_path (default disabled)
  • GET /v1/search
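A sketch of assembling an authenticated ingest request. The X-Hatori-Token header and external_event_id idempotency key are part of the contract described in this README; the remaining body field names (source, text) are assumptions for illustration:

```python
def build_ingest_request(token, external_event_id, source, text):
    """Assemble headers and body for POST /v1/ingest/event.

    Header name and idempotency key are from the contract; body
    field names beyond external_event_id are illustrative.
    """
    headers = {
        "X-Hatori-Token": token,          # write auth
        "Content-Type": "application/json",
    }
    body = {
        "external_event_id": external_event_id,  # idempotency key
        "source": source,
        "text": text,
    }
    return headers, body
```

Because the server deduplicates on external_event_id, a client can safely retry this request on timeouts without double-ingesting the event.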

/v1/agent/respond safety behavior:

  • If local model output is unavailable, unsafe, or internal scaffolding, the API returns a deterministic, send-ready fallback text instead of surfacing model error text to integrators.
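The gating logic can be sketched roughly as follows. The fallback string, the scaffold markers, and the check itself are all hypothetical placeholders; only the behavior (deterministic fallback instead of raw model errors) comes from the contract above:

```python
# Hypothetical placeholder text; the real fallback is defined server-side.
FALLBACK_TEXT = "Thanks for reaching out. I will follow up shortly."

# Illustrative patterns that would indicate leaked internal scaffolding.
SCAFFOLD_MARKERS = ("<|", "[INST]", "### SYSTEM")

def choose_reply(model_output):
    """Return send-ready text: the model output if it looks usable,
    otherwise the deterministic fallback."""
    # Missing or empty output -> fallback, never an error string.
    if not model_output or not model_output.strip():
        return FALLBACK_TEXT
    # Internal scaffold leaked into the text -> fallback.
    if any(marker in model_output for marker in SCAFFOLD_MARKERS):
        return FALLBACK_TEXT
    return model_output.strip()
```

The point of the design is that integrators always receive text they could send as-is, even when the local model misbehaves.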

WebSocket Status

Current stable contract is HTTP-only. No public WebSocket endpoint is exposed in v1.

Security and Reliability

  • Default bind: 127.0.0.1
  • API write auth: X-Hatori-Token (HATORI_API_TOKEN)
  • Idempotency keys:
    • ingest: external_event_id
    • outcome: external_outcome_id
  • Token-scoped rate limits on key endpoints
  • Collision-safe startup:
    • reuse if {hatori} already owns the port
    • refuse if foreign process owns the port
    • never kill non-{hatori} services
  • Service ownership rule:
    • if the UI/API were started manually (python -m uvicorn ...), launchd service mode reports them as foreign owners.
    • run make stop, then make install-service, to return to service-managed operation.
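The idempotency keys listed above only work if a client derives the same key every time it retries the same event. One illustrative scheme (any stable derivation works; this exact format is not prescribed by the contract) is to hash the upstream identifiers:

```python
import hashlib

def make_external_event_id(source, message_id):
    """Derive a stable external_event_id from upstream identifiers.

    Retries of the same (source, message_id) pair always produce the
    same key, so the server can deduplicate the ingest.
    """
    digest = hashlib.sha256(f"{source}:{message_id}".encode()).hexdigest()
    return f"{source}-{digest[:16]}"
```

The same idea applies to external_outcome_id on the outcome endpoint: derive it from the upstream delivery record, not from a fresh random value per attempt.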

Validation and Testing

Run full suite:

make test

Run planning gate:

./tools/scripts/planning_check.sh

Run integration smoke:

make reply-smoke

Run strict integrator acceptance gate (ingest/respond/outcome + idempotency + auth):

make integration-acceptance

Versioning and Releases

The current version and the release/SemVer policy are tracked in the repository docs.

Rule summary:

  • Patch: fixes/docs/non-breaking internal updates
  • Minor: backward-compatible features
  • Major: breaking changes
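The rule summary above maps directly onto version arithmetic; a minimal sketch:

```python
def bump(version, change):
    """Apply the SemVer rule summary: patch, minor, or major bump."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":      # breaking changes
        return f"{major + 1}.0.0"
    if change == "minor":      # backward-compatible features
        return f"{major}.{minor + 1}.0"
    if change == "patch":      # fixes/docs/non-breaking internal updates
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")
```

For example, a backward-compatible feature on 1.4.2 releases as 1.5.0, while a breaking change releases as 2.0.0.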

Repository Structure

  • api/ - API service
  • ui/ - UI service
  • hatori/ - core runtime and adapters
  • pks/ - schema and migrations
  • tools/ - scripts, launchd, menu app tooling
  • tests/ - golden tests and fixtures
  • docs/ - product, architecture, API, runbooks
  • clients/hatori-client/ - minimal TypeScript integration client

Documentation Map

See docs/ for product, architecture, API, and runbook documentation.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Run:
    • make test
    • ./tools/scripts/planning_check.sh
  4. Open a PR with verification output

License

No license file is currently included. Add a license before broad redistribution.

Reproducible Setup Anywhere

One-command local bootstrap:

make bootstrap

This performs:

  • env + token initialization (~/.config/hatori/hatori.env)
  • venv creation and dependency install
  • DB startup and reset
  • Ollama model pull for configured routes
  • service install and status check

Additional helpers:

make models-pull   # pull all route models from env
make doctor        # environment + service diagnostics

Per-Task Routing Structure

{hatori} supports per-task routing with primary and fallback model lanes.

Default route map in hatori.env:

  • Writer lane (reply_write, plan_write, rewrite_polish):
    • primary: mlx (Apertus model id in HATORI_ROUTE_*_MODEL)
    • fallback: ollama:gemma2:2b
  • Drafter lane (classify_intent, extract_fields, context_pack, retrieval_query_build, edit_pattern_cluster):
    • primary: ollama:granite4:350m (IBM Granite Nano, the lightest lane)
    • fallback: ollama:gemma3:1b
  • Judge lane (answer_score, quality_gate):
    • primary: ollama:llama3.2:3b
    • fallback: ollama:gemma2:2b

Core implementation points:

  • task router: hatori/model.py (get_task_model_adapter(task))
  • UI/API use per-task routes for planning and reply generation
  • drafter context pack is internal-only and injected into prompt context
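The shape of the router can be sketched as a route table of (primary, fallback) pairs. The function name comes from hatori/model.py as noted above, but this signature, the availability check, and the trimmed-down route table are assumptions for illustration:

```python
# Route table mirroring a subset of the default lanes above;
# values are (primary, fallback) adapter specs.
ROUTES = {
    "reply_write": ("mlx", "ollama:gemma2:2b"),          # writer lane
    "classify_intent": ("ollama:granite4:350m", "ollama:gemma3:1b"),  # drafter lane
    "answer_score": ("ollama:llama3.2:3b", "ollama:gemma2:2b"),       # judge lane
}

def get_task_model_adapter(task, primary_available=True):
    """Return the model spec for a task, falling back when the
    primary backend is unavailable (availability check is a sketch)."""
    primary, fallback = ROUTES[task]
    return primary if primary_available else fallback
```

This keeps heavy writer-lane models off cheap classification tasks while still guaranteeing every task has a working fallback.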

Environment keys are generated by:

  • tools/scripts/hatori_env_init.sh
