Identity, memory, and soul for AI agents. Runs standalone or as part of a TPS office.
Agents forget everything between sessions. Flair gives them a persistent sense of self — who they are, what they know, how they think — backed by cryptographic identity and semantic search.
Built on Harper. Single process. No sidecars. Zero external API calls for embeddings.
Every agent framework gives you chat history. None of them give you identity.
An agent that can't remember what it learned yesterday, can't prove who it is to another agent, and loses its personality on restart isn't really an agent. It's a stateless function with a system prompt.
Flair fixes that:
- Identity — Ed25519 key pairs. Agents sign every request. No passwords, no API keys, no shared secrets.
- Memory — Persistent knowledge with semantic search. Write a lesson learned today, find it six months from now by meaning, not keywords.
- Soul — Personality, values, procedures. The stuff that makes an agent that agent, not just another LLM wrapper.
Flair is a native Harper v5 application. Harper handles HTTP, persistence (RocksDB), and application logic in a single process.
```
Agent ──[Ed25519-signed request]──▶ Flair (Harper)
                                      ├── Auth middleware (verify signature)
                                      ├── Identity (Agent + Integration tables)
                                      ├── Memory (write → auto-embed → store)
                                      ├── Soul (permanent personality/values)
                                      └── Search (semantic + keyword, ranked)
```
No external dependencies at runtime. Embeddings are generated in-process using nomic-embed-text via a Harper plugin. Model runs on CPU or GPU (Metal, CUDA). No API calls, no sidecar processes, no network hops.
Every agent has an Ed25519 key pair. Requests are signed with `agentId:timestamp:nonce:METHOD:/path` and verified against the agent's registered public key. 30-second replay window with nonce deduplication.
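The verification steps above can be sketched with Node's built-in Ed25519 support. This is an illustrative sketch, not Flair's actual middleware — `verifyRequest` and `seenNonces` are hypothetical names, and a real implementation would expire nonces alongside the replay window:

```typescript
import {
  createPublicKey, generateKeyPairSync, randomUUID, sign, verify,
} from 'node:crypto'

const REPLAY_WINDOW_MS = 30_000
const seenNonces = new Set<string>()  // in practice, entries expire with the window

function verifyRequest(
  agentId: string, timestamp: number, nonce: string,
  method: string, path: string, signature: Buffer, publicKeyPem: string,
): boolean {
  if (Math.abs(Date.now() - timestamp) > REPLAY_WINDOW_MS) return false  // stale
  if (seenNonces.has(nonce)) return false                                // replay
  seenNonces.add(nonce)
  const message = `${agentId}:${timestamp}:${nonce}:${method}:${path}`
  // Ed25519 takes no digest algorithm, so the first argument is null
  return verify(null, Buffer.from(message), createPublicKey(publicKeyPem), signature)
}

// Demo: an agent signs a request, the middleware verifies it
const { publicKey, privateKey } = generateKeyPairSync('ed25519')
const pem = publicKey.export({ type: 'spki', format: 'pem' }).toString()
const ts = Date.now()
const nonce = randomUUID()
const sig = sign(null, Buffer.from(`mybot:${ts}:${nonce}:POST:/SemanticSearch`), privateKey)
console.log(verifyRequest('mybot', ts, nonce, 'POST', '/SemanticSearch', sig, pem))  // true
console.log(verifyRequest('mybot', ts, nonce, 'POST', '/SemanticSearch', sig, pem))  // false (replayed nonce)
```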
Memories are automatically embedded on write using nomic-embed-text (768 dimensions). Search by meaning:
```bash
# Write a memory
flair memory write "Harper v5 sandbox blocks node:module but process.dlopen works"

# Find it later by concept, not exact words
flair memory search "native addon loading in sandboxed runtimes"
# → [0.67] Harper v5 sandbox blocks node:module but process.dlopen works
```

Not all memories are equal:
| Durability | Delete | TTL | Use Case |
|---|---|---|---|
| `permanent` | ❌ Rejected | None | Identity, values, core knowledge |
| `persistent` | ✅ Allowed | None | Daily logs, project context |
| `standard` | ✅ Allowed | None | Working memory (default) |
| `ephemeral` | ✅ Allowed | 24h | Scratch space, temp context |
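The rules in the table reduce to two checks. A minimal sketch — `canDelete` and `isExpired` are illustrative names, not Flair's internal API:

```typescript
type Durability = 'permanent' | 'persistent' | 'standard' | 'ephemeral'

const EPHEMERAL_TTL_MS = 24 * 60 * 60 * 1000  // the 24h TTL from the table

// Only the permanent tier rejects deletion
function canDelete(durability: Durability): boolean {
  return durability !== 'permanent'
}

// Only the ephemeral tier carries a TTL
function isExpired(durability: Durability, createdAt: number, now = Date.now()): boolean {
  return durability === 'ephemeral' && now - createdAt > EPHEMERAL_TTL_MS
}

console.log(canDelete('permanent'))                                    // false
console.log(isExpired('ephemeral', Date.now() - 25 * 60 * 60 * 1000))  // true
```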
Memories can be time-bounded with validFrom and validTo fields. Expired memories are excluded from search and bootstrap automatically — no manual cleanup.
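The exclusion semantics amount to a simple validity window. A sketch of the filter — the `validFrom`/`validTo` field names come from the doc; `isCurrentlyValid` and the record shape are illustrative:

```typescript
interface TimedMemory { content: string; validFrom?: number; validTo?: number }

// A memory is visible only while now falls inside [validFrom, validTo];
// an unset bound is open-ended.
function isCurrentlyValid(m: TimedMemory, now = Date.now()): boolean {
  if (m.validFrom !== undefined && now < m.validFrom) return false  // not yet active
  if (m.validTo !== undefined && now > m.validTo) return false      // expired
  return true
}

const memories: TimedMemory[] = [
  { content: 'on-call rotation for Q1', validTo: Date.now() - 1000 },  // expired
  { content: 'current deploy procedure' },                             // unbounded
]
console.log(memories.filter(isCurrentlyValid).map(m => m.content))
// → [ 'current deploy procedure' ]
```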
Entity-to-entity triples with temporal bounds. Model structured knowledge — "Flint works-with Anvil since 2024-01-01" — queryable alongside semantic memory.
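The "Flint works-with Anvil" example maps naturally onto a triple record with optional bounds. A hypothetical shape — the field names here are illustrative, not Flair's schema:

```typescript
interface Triple {
  subject: string; predicate: string; object: string
  validFrom?: string; validTo?: string  // temporal bounds as ISO dates
}

const edge: Triple = {
  subject: 'Flint', predicate: 'works-with', object: 'Anvil',
  validFrom: '2024-01-01',
}

// Query sketch: relationships touching an entity that are active on a given date.
// ISO date strings compare correctly with plain string comparison.
function activeEdges(triples: Triple[], entity: string, today: string): Triple[] {
  return triples.filter(t =>
    (t.subject === entity || t.object === entity) &&
    (t.validFrom === undefined || t.validFrom <= today) &&
    (t.validTo === undefined || today <= t.validTo))
}

console.log(activeEdges([edge], 'Flint', '2025-06-01').length)  // 1
```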
Subscribe to memory or soul changes via WebSocket/SSE. Useful for dashboards, cross-agent sync, or audit trails.
One Flair instance serves any number of agents. Each agent has its own keys, memories, and soul. Agents can't read each other's data without explicit access grants.
Built-in OAuth 2.1 server with PKCE, dynamic client registration, and a standards-compliant token endpoint. Agents and services can delegate auth to Flair without a separate IdP.
IdP integration for Google Workspace, Azure AD, and Okta. Bind agent identities to enterprise accounts — access follows your org's user lifecycle.
Server-rendered admin UI for managing principals, connectors, IdPs, and instance configuration. No separate dashboard service.
Hub-and-spoke sync between Flair instances using signed requests and pairing tokens. Originator enforcement prevents replay across federated nodes. Share memories across offices without giving any node raw access.
Context-signal-aware preloading. Bootstrap reads active project context, recent activity, and agent role to select the most relevant memories — not just the most recent ones.
Passive extraction of entities from memory content on write. Entities are indexed automatically — no tagging required. Feeds the relationship graph without agent intervention.
Pluggable import/export to foreign memory systems. Every agent-memory format (agentic-stack, Mem0, Letta, Anthropic memory, the next viral one) shouldn't need its own Flair PR. Bridges give you one contract with two shapes — a YAML descriptor for file-format targets or a code plugin for API targets — and a scaffold/test loop that lets an agent ship a working adapter in one pass. See docs/bridges.md.
```bash
# Install
npm install -g @tpsdev-ai/flair

# Bootstrap a Flair instance (installs Harper, creates database, starts service)
flair init

# Register your first agent
flair agent add mybot --name "My Bot" --role assistant

# Check everything is working
flair status

# Lifecycle management
flair stop                # Stop the Flair instance
flair restart             # Restart the Flair instance
flair uninstall           # Remove the service (keeps data)
flair uninstall --purge   # Remove everything including data and keys
```

That's it. Your agent now has identity and memory.
Flair works with any agent runtime. Pick the path that fits yours.
Use the flair CLI directly from any agent that can run shell commands.
```bash
# Write a memory
flair memory add --agent mybot --content "learned something important"

# Search by meaning
flair memory search --agent mybot --q "that important thing"

# Set personality
flair soul set --agent mybot --key role --value "Security reviewer"

# Cold-start bootstrap (soul + recent memories)
flair bootstrap --agent mybot --max-tokens 4000

# Backup / restore
flair backup --admin-pass "$FLAIR_ADMIN_PASS"
flair restore ./backup.json --admin-pass "$FLAIR_ADMIN_PASS"
```

One command. Zero config.
```bash
openclaw plugins install @tpsdev-ai/openclaw-flair
```

The plugin auto-detects your agent identity, provides `memory_store`/`memory_recall`/`memory_get` tools, and injects relevant memories at session start. See the plugin README for details.
One MCP server, many CLIs. Install the MCP server for native tool integration in any MCP-capable client:
```jsonc
// .mcp.json in your project root (Claude Code / Cursor format)
{
  "mcpServers": {
    "flair": {
      "command": "npx",
      "args": ["-y", "@tpsdev-ai/flair-mcp"],
      "env": { "FLAIR_AGENT_ID": "mybot" }
    }
  }
}
```

Add to your CLAUDE.md:

```
At the start of every session, run mcp__flair__bootstrap before responding.
```
Your agent's memory follows it across CLIs — same Flair instance, same agent identity; switch from Claude Code to Gemini CLI to Codex CLI without losing state. The MCP server exposes `memory_store`, `memory_search`, `memory_get`, `memory_delete`, `bootstrap`, `soul_set`, and `soul_get`.
For per-CLI config snippets (Gemini CLI's `~/.gemini/settings.json`, Codex CLI's `~/.codex/config.toml`, etc.), see docs/mcp-clients.md. For a deeper Claude Code walk-through with CLAUDE.md patterns, see docs/claude-code.md.
Use Flair as the memory backend for n8n's AI Agent. Same memories readable from Claude Code and OpenClaw — that's the point.
```
# In n8n: Settings → Community Nodes → Install
@tpsdev-ai/n8n-nodes-flair
```

Two nodes ship: Flair Chat Memory (Memory port, conversation buffer) and Flair Search (Tool port, semantic search + get-by-subject). Setup walkthrough, subject/sessionId patterns, and security guidance in docs/n8n.md.
For custom integrations, use the lightweight client — no Harper, no embeddings, just HTTP + auth:
```bash
npm install @tpsdev-ai/flair-client
```

```typescript
import { FlairClient } from '@tpsdev-ai/flair-client'

const flair = new FlairClient({
  url: 'http://localhost:19926',  // or remote: https://flair.example.com
  agentId: 'mybot',
  // key auto-resolved from ~/.flair/keys/mybot.key
})

// Write a memory
await flair.memory.write('Harper v5 sandbox blocks bare imports')

// Search by meaning
const results = await flair.memory.search('native module loading')

// Cold-start bootstrap
const ctx = await flair.bootstrap({ maxTokens: 4000 })

// Set personality
await flair.soul.set('role', 'Security reviewer')
```

See the client README for the full API.
Flair is a pure HTTP API. Use it from Python, Go, Rust, shell scripts — anything that can make HTTP requests and sign with Ed25519.
```bash
# Search memories
curl -H "Authorization: TPS-Ed25519 mybot:$TS:$NONCE:$SIG" \
  -X POST http://localhost:19926/SemanticSearch \
  -d '{"agentId": "mybot", "q": "deployment procedure", "limit": 5}'

# Write a memory
curl -H "Authorization: TPS-Ed25519 mybot:$TS:$NONCE:$SIG" \
  -X PUT http://localhost:19926/Memory/mybot-123 \
  -d '{"id": "mybot-123", "agentId": "mybot", "content": "...", "durability": "standard"}'

# Bootstrap (soul + recent memories)
curl -H "Authorization: TPS-Ed25519 mybot:$TS:$NONCE:$SIG" \
  -X POST http://localhost:19926/BootstrapMemories \
  -d '{"agentId": "mybot", "maxTokens": 4000}'
```

Auth is Ed25519 — sign `agentId:timestamp:nonce:METHOD:/path` with your private key. See SECURITY.md for the full protocol.
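The `$TS`, `$NONCE`, and `$SIG` placeholders can be produced with a few lines of Node. A hedged sketch: the signing string matches the format documented above, but the base64 signature encoding and the key path are assumptions — check SECURITY.md for the exact wire format:

```typescript
import { KeyObject, generateKeyPairSync, randomUUID, sign } from 'node:crypto'

// Build the Authorization header used in the curl examples above.
function authHeader(agentId: string, method: string, path: string, privateKey: KeyObject): string {
  const ts = Date.now()
  const nonce = randomUUID()
  const message = `${agentId}:${ts}:${nonce}:${method}:${path}`
  const sig = sign(null, Buffer.from(message), privateKey).toString('base64')  // encoding assumed
  return `TPS-Ed25519 ${agentId}:${ts}:${nonce}:${sig}`
}

// Demo with a throwaway key; a real agent would load its registered key
// (e.g. from ~/.flair/keys/) instead of generating one.
const { privateKey } = generateKeyPairSync('ed25519')
console.log(authHeader('mybot', 'POST', '/SemanticSearch', privateKey))
// → TPS-Ed25519 mybot:<timestamp>:<nonce>:<base64 signature>
```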
```
flair/
├── src/cli.ts                   # CLI: init, agent, status, backup, grant
├── config.yaml                  # Harper app configuration
├── schemas/
│   ├── agent.graphql            # Agent + Integration + MemoryGrant tables
│   └── memory.graphql           # Memory + Soul tables
├── resources/
│   ├── auth-middleware.ts       # Ed25519 verification + agent scoping
│   ├── embeddings-provider.ts   # In-process nomic embeddings
│   ├── Memory.ts                # Durability enforcement + auto-embed
│   ├── Soul.ts                  # Permanent-by-default personality
│   ├── SemanticSearch.ts        # Hybrid semantic + keyword search
│   ├── MemoryBootstrap.ts       # Cold start context assembly
│   └── MemoryFeed.ts            # Real-time memory changes
├── plugins/
│   └── openclaw-flair/          # @tpsdev-ai/openclaw-flair plugin
└── SECURITY.md                  # Threat model + auth documentation
```
- Harper-native — No Express, no middleware frameworks. Harper IS the runtime.
- In-process embeddings — Native nomic-embed-text (768 dimensions) via llama.cpp. Runs on CPU or GPU (Metal, CUDA). No API calls, no OpenAI key needed.
- Schema-driven — GraphQL schemas with `@table` `@export` auto-generate REST CRUD. Custom resources extend behavior (durability guards, auto-embedding, search).
- Zero admin tokens on disk — Admin credentials come from the `HDB_ADMIN_PASSWORD` environment variable only. Never stored on the filesystem.
```bash
flair init
```

Your data stays on your machine. Best for personal agents, dev teams, and privacy-first setups. Flair runs as a single Harper process — no Docker, no cloud, no external services.
If the default port (19926) is already in use, initialize with a custom port:
```bash
flair init --port 8000
```

Flair remembers this port for future CLI commands by saving it to `~/.flair/config.yaml`.
Run Flair on a VPS or cloud instance. Agents connect over HTTPS:
```bash
# On the server
flair init --port 19926

# Agents connect with:
FLAIR_URL=https://your-server:19926 flair agent add mybot
```

Good for teams with multiple machines or always-on agents.
Managed multi-region deployment via Harper Fabric. Data replication, automatic failover, web dashboard. Enterprise scale without ops overhead.
See SECURITY.md for the full security model, threat analysis, and recommendations.
Key points:
- Ed25519 cryptographic identity — agents sign every request
- Collection-level data isolation — agents can't read each other's memories
- Admin credentials never stored on disk — environment variables only
- Key rotation via `flair agent rotate-key`
- Cross-agent access requires explicit grants
```bash
bun install     # Install dependencies
bun run build   # Compile TypeScript → dist/
bun test        # Run unit + integration tests
```

Integration tests spin up a real Harper instance on a random port, run the test suite, and tear down. No mocks for the database layer.
203+ unit tests across 19 test files, backed by 7 CI checks on every commit.
| Category | Tests | What's covered |
|---|---|---|
| Auth & Identity | auth-middleware, auth-scoping, key-paths-and-rotation | Ed25519 signature verification, agent isolation, key rotation |
| Memory | data-scoping, backup-restore, agent-remove-and-grants | Cross-agent access denied, data durability, grant lifecycle |
| Content Safety | content-safety | Prompt injection detection, identity hijacking, format injection, exfiltration patterns |
| Search | temporal-scoring, embeddings | Temporal decay, relevance scoring, embedding generation |
| Rate Limiting | rate-limiter | Per-agent rate limiting, bucket isolation |
| Integration | smoke, durability-guard | End-to-end write/search/bootstrap, durability tier enforcement |
| CLI | cli-v2, cli-api, first-run-soul | Full CLI command coverage, API layer, soul onboarding |
CI pipeline: unit tests, integration tests, type check, dependency audit, Semgrep SAST, CodeQL SAST, Docker from-scratch validation.
Note: Flair uses Harper v5, currently in beta. We run it in production daily and track upstream closely. Pin your Harper version.
Flair is in active development and daily use. We dogfood it — the agents that build Flair use Flair for their own memory and identity.
What works:
- ✅ Ed25519 agent identity and auth
- ✅ CLI: init, agent add/remove/rotate-key, status, backup/restore, export/import, grant/revoke
- ✅ Memory CRUD with durability enforcement and near-duplicate detection
- ✅ In-process semantic embeddings (768-dim nomic-embed-text via harper-fabric-embeddings)
- ✅ Hybrid search (semantic + keyword + temporal intent detection)
- ✅ Soul (permanent personality/values)
- ✅ Real-time feeds (WebSocket/SSE)
- ✅ Agent-scoped data isolation
- ✅ Cold start bootstrap with adaptive time window
- ✅ Predictive bootstrap (context-signal-aware preloading)
- ✅ Temporal validity (`validFrom`/`validTo` on memories)
- ✅ Relationship graph (entity-to-entity triples with temporal bounds)
- ✅ Auto entity detection (passive extraction from memory content)
- ✅ OAuth 2.1 authorization server (PKCE, dynamic client registration, token endpoint)
- ✅ XAA enterprise authorization (Google Workspace, Azure AD, Okta)
- ✅ Web admin UI (principals, connectors, IdPs, instance config)
- ✅ Federation (hub-and-spoke sync with signed requests and pairing tokens)
- ✅ OpenClaw memory plugin
- ✅ MCP server for Claude Code / Cursor / Windsurf
- ✅ Lightweight client library (`@tpsdev-ai/flair-client`)
- ✅ Portable agent identity (export/import between instances)
- ✅ `flair --version`, `flair upgrade`
What's next:
- First-run soul wizard (interactive personality setup)
- Git-backed memory sync
- Encryption at rest (opt-in AES-256-GCM per memory)
- Harper Fabric deployment (managed multi-office)