# AI Memory for Chatbots, Agents, and LLM Apps

Your AI forgets everything. CLAIV fixes that.
If you're building with OpenAI, Claude, or LangChain — you've seen this:
- Your chatbot forgets users between sessions
- Your AI agent loses context after a few steps
- Your app relies on fragile chat history or prompt stuffing
- Your RAG setup is inconsistent and unreliable
👉 This is not a model problem — it's a memory problem
CLAIV is a drop-in memory API for AI applications.
It gives your AI:
- 🧠 Persistent user memory across conversations
- 📚 Reliable document understanding (not just chunk retrieval)
- ⏱️ Temporal awareness — what happened and when
- 🎯 Context injection ready for LLM prompts
- 🧹 Controlled forgetting — GDPR-safe deletion
### ❌ Without memory

```
User: My name is Alex and I run a fitness business.
...later...
AI: How can I help you today?
```

### ✅ With CLAIV

```
AI: How is your fitness business going, Alex?
```
- Send events to `POST /v6/ingest` — CLAIV extracts structured memory asynchronously
- Call `POST /v6/recall` before each LLM response
- Inject `llm_context.text` into your system prompt

That's it.
## Python

```shell
pip install claiv-memory
```

```python
from claiv import ClaivClient

client = ClaivClient(api_key="your_key")

client.ingest({
    "user_id": "user_123",
    "conversation_id": "conv_abc",
    "type": "message",
    "role": "user",
    "content": "I'm building an AI agent for compliance review."
})

context = client.recall({
    "user_id": "user_123",
    "conversation_id": "conv_abc",
    "query": "What is the user building?"
})

print(context["llm_context"]["text"])
```

System prompt:

```
User memory:
{{ llm_context.text }}
```
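The injection step can be sketched as a small helper that splices the recalled memory into a system prompt. `render_system_prompt` is a hypothetical helper, not part of the SDK; it only assumes the `llm_context.text` shape shown above:

```python
def render_system_prompt(llm_context: dict, base: str = "You are a helpful assistant.") -> str:
    """Build a system prompt, appending recalled memory when present."""
    memory = (llm_context or {}).get("text", "").strip()
    if not memory:
        return base  # no memory yet (e.g. ingest still processing)
    return f"{base}\n\nUser memory:\n{memory}"

# With a recalled context like the one returned by client.recall(...):
prompt = render_system_prompt({"text": "Alex runs a fitness business."})
```

Pass `prompt` as the system message of your chat completion call; the empty-memory fallback keeps the prompt valid when recall returns nothing.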
JavaScript / TypeScript
npm install @claiv/memoryimport { ClaivClient } from '@claiv/memory';
const client = new ClaivClient({ apiKey: 'your_key' });
await client.ingest({
user_id: 'user_123',
conversation_id: 'conv_abc',
type: 'message',
role: 'user',
content: "I'm building an AI agent for compliance review.",
});
const context = await client.recall({
user_id: 'user_123',
conversation_id: 'conv_abc',
query: 'What is the user building?',
});Upload documents and CLAIV will:
- Parse structure into sections and spans
- Embed content with pgvector
- Retrieve relevant sections automatically
- Inject them into your LLM context
```python
doc = client.upload_document({
    "user_id": "user_123",
    "project_id": "proj_abc",
    "document_name": "Product Manual",
    "content": open("manual.md").read(),
})

print(f"Indexed {doc['spans_created']} spans across {len(doc['sections'])} sections")
```

Supports:
- Knowledge copilots over internal documentation
- Compliance and research tools
- Support assistants that reason over policies and manuals
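To make the parse step concrete, here is a simplified client-side sketch of splitting a markdown document into heading-delimited sections and fixed-size spans. This is an illustration only; CLAIV's actual server-side parsing and pgvector embedding are more involved:

```python
import re

def split_sections(markdown: str) -> list[dict]:
    """Split markdown on heading lines; each section keeps its title and body."""
    sections, title, body = [], "Preamble", []
    for line in markdown.splitlines():
        heading = re.match(r"^#+\s+(.*)", line)
        if heading:
            if "".join(body).strip():
                sections.append({"title": title, "body": "\n".join(body).strip()})
            title, body = heading.group(1), []
        else:
            body.append(line)
    if "".join(body).strip():
        sections.append({"title": title, "body": "\n".join(body).strip()})
    return sections

def split_spans(text: str, max_chars: int = 200) -> list[str]:
    """Chunk a section body into fixed-size spans ready for embedding."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

manual = "# Setup\nInstall the unit near a power outlet.\n# Safety\nUnplug before cleaning."
sections = split_sections(manual)
spans = [span for s in sections for span in split_spans(s["body"])]
```

Splitting on structure first (sections) and size second (spans) is what lets retrieval return coherent passages instead of arbitrary chunks.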
| Feature | Vector DB | CLAIV |
|---|---|---|
| Remembers users over time | ❌ | ✅ |
| Structured fact extraction | ❌ | ✅ |
| Temporal reasoning | ❌ | ✅ |
| Token-efficient recall | ❌ | ✅ |
| Document + conversation memory | ❌ | ✅ |
| Controlled forgetting | ❌ | ✅ |
- `POST /v6/ingest` → Store memory events
- `POST /v6/recall` → Retrieve context
- `POST /v6/documents` → Upload documents
- `POST /v6/forget` → Delete memory

Full reference → claiv.io/docs
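If you prefer raw HTTP over an SDK, the endpoints map to ordinary JSON POSTs. Below is a sketch of assembling (not sending) an ingest request; the base URL and bearer-token header are assumptions here, so confirm both against claiv.io/docs:

```python
import json

BASE_URL = "https://api.claiv.io"  # assumed base URL; check the docs

def build_request(api_key: str, endpoint: str, payload: dict) -> dict:
    """Assemble a request for one of the /v6 endpoints without sending it."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}{endpoint}",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

req = build_request("your_key", "/v6/ingest", {
    "user_id": "user_123",
    "type": "message",
    "role": "user",
    "content": "I'm building an AI agent for compliance review.",
})
```

Hand the resulting dict to any HTTP client, e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`.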
LoCoMo 10-dialogue J-score: 75.0%
| Category | Score |
|---|---|
| Single-hop | 68.8% |
| Temporal | 74.2% |
| Multi-hop | 55.2% |
| Open-domain | 79.7% |
LoCoMo is a widely used benchmark for evaluating long-term conversational memory in AI systems.
- AI chatbots with persistent memory
- AI agents with multi-step workflows
- SaaS apps with per-user context
- Customer support systems
- Internal knowledge copilots
- Research and compliance tools
- Ingest is async — memory may not appear instantly
- Always include `user_id` and `conversation_id`
- Inject `llm_context.text` into your system prompt
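Because ingest is asynchronous, a recall issued right after an ingest can miss the newest facts. One way to handle this is a short exponential-backoff poll. This is a sketch: `recall_fn` stands in for any callable that performs the recall, such as `client.recall`:

```python
import time

def recall_with_retry(recall_fn, params: dict, attempts: int = 5, delay: float = 0.5) -> dict:
    """Poll recall until llm_context.text is non-empty or attempts run out."""
    context: dict = {}
    for attempt in range(attempts):
        context = recall_fn(params)
        if context.get("llm_context", {}).get("text"):
            return context
        time.sleep(delay * (2 ** attempt))  # exponential backoff between polls
    return context  # caller decides how to handle still-empty memory

# Demo with a stand-in recall that succeeds on the second poll:
responses = iter([
    {"llm_context": {"text": ""}},
    {"llm_context": {"text": "Alex runs a fitness business."}},
])
result = recall_with_retry(lambda p: next(responses), {"user_id": "user_123"}, delay=0.01)
```

Only retry on the request paths where fresh memory actually matters; for most chat turns, slightly stale memory is fine and a single recall is enough.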
- API key authentication with tenant isolation
- Strict schema validation on all endpoints
- Scoped deletion via `POST /v6/forget`
| Repo | Description |
|---|---|
| sdk-js | JavaScript / TypeScript SDK |
| sdk-py | Python SDK |
| template-openai-nodejs | OpenAI Node.js template |
| template-openai-python | OpenAI Python template |
| template-claude-python | Claude template |
| template-langchain | LangChain template |
| template-nextjs | Next.js chat app |
| template-document-rag-python | Document RAG (Python) |
| template-document-rag-nextjs | Document RAG (Next.js) |
👉 Get API key: claiv.io
👉 Docs: claiv.io/docs