Website · Dashboard · Docs · Setup Guide
AI assistants like Claude Desktop, Cursor, and Cline forget everything between sessions. You end up repeating the same preferences, project context, and decisions over and over.
MemContext solves this by providing a persistent memory layer that AI agents can access via the Model Context Protocol (MCP). Your preferences, facts, and decisions are stored as searchable memories that any connected AI assistant can retrieve automatically through hybrid search (vector similarity + full-text keyword search). Memories can evolve over time with versioning, temporal expiry, and feedback loops.
Get up and running in under 2 minutes:
- Sign up at app.memcontext.in (Google or GitHub OAuth)
- Create an API key from the dashboard (starts with `mc_`)
- Connect your AI assistant using the config below
- Add the agent instructions so your assistant knows when to save and search
That's it. Your assistant now has persistent memory across sessions.
Replace `<your-api-key>` with your actual API key from the dashboard.
Claude Code (CLI)
Add MemContext globally (available across all projects):
```
claude mcp add memcontext --scope user -- npx -y mcp-remote https://mcp.memcontext.in/mcp --header "MEMCONTEXT-API-KEY:<your-api-key>"
```

Or for a specific project only:

```
claude mcp add memcontext -- npx -y mcp-remote https://mcp.memcontext.in/mcp --header "MEMCONTEXT-API-KEY:<your-api-key>"
```

Verify installation:

```
claude mcp list
```

Claude Desktop
Add to your `claude_desktop_config.json`:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "memcontext": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.memcontext.in/mcp",
        "--header",
        "MEMCONTEXT-API-KEY:<your-api-key>"
      ]
    }
  }
}
```

Cursor
Add to your Cursor MCP config:
- Global: `~/.cursor/mcp.json`
- Project: `.cursor/mcp.json` in your project root
```json
{
  "mcpServers": {
    "memcontext": {
      "type": "http",
      "url": "https://mcp.memcontext.in/mcp",
      "headers": {
        "MEMCONTEXT-API-KEY": "<your-api-key>"
      }
    }
  }
}
```

OpenCode
Add to your opencode.json config:
- Global: `~/.config/opencode/opencode.json`
- Project: `opencode.json` in your project root
```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "memcontext": {
      "type": "local",
      "command": [
        "npx",
        "-y",
        "mcp-remote",
        "https://mcp.memcontext.in/mcp",
        "--header",
        "MEMCONTEXT-API-KEY:<your-api-key>"
      ],
      "enabled": true
    }
  }
}
```

Codex CLI (OpenAI)
Add to your `~/.codex/config.toml`:
```toml
[mcp_servers.memcontext]
url = "https://mcp.memcontext.in/mcp"

[mcp_servers.memcontext.http_headers]
MEMCONTEXT-API-KEY = "<your-api-key>"
```

Verify installation:

```
codex mcp list
```

Windsurf / Other MCP Clients
For clients that support Streamable HTTP transport directly:
```json
{
  "mcpServers": {
    "memcontext": {
      "type": "http",
      "url": "https://mcp.memcontext.in/mcp",
      "headers": {
        "MEMCONTEXT-API-KEY": "<your-api-key>"
      }
    }
  }
}
```

For clients that only support stdio transport, use the mcp-remote bridge:
```json
{
  "mcpServers": {
    "memcontext": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.memcontext.in/mcp",
        "--header",
        "MEMCONTEXT-API-KEY:<your-api-key>"
      ]
    }
  }
}
```

After connecting MCP, add these instructions to your AI assistant so it knows when to save and search memories. The dashboard setup page has copy-paste configs for each agent.
| Agent | Instructions File |
|---|---|
| Claude Code | `~/.claude/CLAUDE.md` |
| Cursor | Settings > Rules and Commands > User Rules |
| OpenCode | `~/.config/opencode/AGENTS.md` |
| Codex CLI | `~/.codex/instructions.md` |
Add this to the relevant file:

```markdown
# MemContext
At conversation start, ALWAYS call search_memory to load user context - do not skip.
Before making decisions or assumptions, search_memory to check for past context.
SAVE immediately (do not defer) when any of these happen:
- User shares a preference -> save_memory(category: "preference")
- A technology or architecture decision is made -> save_memory(category: "decision")
- User corrects you or says "remember" -> save_memory(category: "fact")
- Important project fact learned -> save_memory(category: "fact", project: "<name>")
- Significant work completed that creates useful future context -> save_memory(category: "context")
Duplicates are handled automatically - when in doubt, save useful durable context.
Memory persists across all sessions - use project param for project-specific context only.
Omit validUntil by default. Only pass validUntil for an exact known expiry/deadline; otherwise MemContext auto-TTL handles expiry.
```

- You tell your AI assistant something worth remembering
- The assistant saves it to MemContext via MCP
- Next session, when relevant context is needed, the assistant searches MemContext
- Your stored memories are retrieved and used automatically
The system uses hybrid search — vector embeddings (1536-dim) for semantic similarity combined with PostgreSQL full-text search for exact keyword matching, merged via Reciprocal Rank Fusion. Both vector search and full-text search run across the original query and the generated query variants, improving recall for wording-sensitive searches like "caching system" vs "Upstash Redis" or "frontend migration" vs "App Router".

When saving, the system automatically detects similar existing memories and classifies the relationship as saved, updated, extended, or duplicate. Larger notes may be accepted for background extraction into atomic memories and return `accepted` with a `jobId`.

Memories support temporal validity (`validUntil`) — either set explicitly or auto-detected by the system during save. Time-sensitive information is automatically excluded from search results when expired. Search results are also ranked using feedback signals — memories marked "wrong" or "outdated" are demoted, while "helpful" memories get a boost.
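To make the fusion step concrete, here is a minimal Reciprocal Rank Fusion sketch in TypeScript. This is illustrative only, not MemContext's actual code: the function name is invented, and `k = 60` is the conventional RRF damping constant, assumed rather than confirmed here.

```typescript
// Reciprocal Rank Fusion: merge ranked lists (e.g. vector results and
// keyword results) into one list. Each item scores sum(1 / (k + rank))
// across the lists it appears in; items ranked well in BOTH lists win.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      // index is 0-based, so rank = index + 1; lower rank contributes more.
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + index + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// "m2" is ranked highly by both searches, so fusion puts it first even
// though neither individual list agrees on the overall order.
const vectorResults = ["m1", "m2", "m3"];
const keywordResults = ["m2", "m4", "m1"];
console.log(reciprocalRankFusion([vectorResults, keywordResults]));
// → ["m2", "m1", "m4", "m3"]
```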
The MCP server exposes four tools to AI assistants:
`save_memory`: Save a memory with optional category, project grouping, and temporal expiry. Short memories are saved immediately; larger notes may be accepted for background extraction into atomic memories.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `content` | string | Yes | Clear, atomic memory to save (1-10,000 chars) |
| `category` | enum | No | `preference`, `fact`, `decision`, or `context` |
| `project` | string | No | Project grouping (lowercase, no spaces). Omit when unsure |
| `validUntil` | string | No | Exact ISO 8601 expiry. Omit by default for auto-TTL |
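As a concrete example, an MCP `tools/call` request for `save_memory` would look roughly like this. The JSON-RPC envelope follows the MCP specification; the argument values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "save_memory",
    "arguments": {
      "content": "Prefers TypeScript over JavaScript",
      "category": "preference"
    }
  }
}
```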
MCP tools intentionally do not expose scope; they operate on unscoped assistant memory with optional project grouping. Use the REST API or SDK scope field when building multi-user or multi-tenant apps that need hard isolation.
`search_memory`: Search for relevant memories using hybrid search (vector + keyword).
| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | Yes | Natural language search query (use complete sentences) |
| `limit` | number | No | Results to return, 1-10 (default: 5) |
| `category` | enum | No | Filter by `preference`, `fact`, `decision`, or `context` |
| `project` | string | No | Filter to a specific project. Omit to search all |
| `threshold` | number | No | Similarity threshold 0-1. Higher = broader. Default 0.6 |
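Similarly, a hypothetical `search_memory` call filtered to one project (again, envelope per the MCP spec, values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_memory",
    "arguments": {
      "query": "What database did we choose for this project?",
      "project": "memcontext",
      "limit": 5
    }
  }
}
```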
Rate a retrieved memory to improve future retrieval quality.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `memoryId` | string | Yes | The memory ID (from search results) |
| `type` | enum | Yes | `helpful`, `not_helpful`, `outdated`, or `wrong` |
| `context` | string | No | Why this feedback is being given |
Delete a memory by ID.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `memoryId` | string | Yes | The memory ID (from search results) |
| Category | Purpose | Example |
|---|---|---|
| `preference` | User likes, dislikes, style choices | "Prefers TypeScript over JavaScript" |
| `fact` | Objective info about projects or users | "Uses macOS with Homebrew" |
| `decision` | Technical or project decisions | "Chose PostgreSQL for the database" |
| `context` | General background information | "Working on an e-commerce app" |
When you save a memory, the system automatically checks for similar existing memories:
| Relation | Meaning |
|---|---|
| `saved` | New memory, no similar memories found |
| `updated` | Replaces an existing memory (contradicting information) |
| `extended` | Adds detail to an existing memory |
Start free, scale as your AI memory grows. See memcontext.in/pricing for full details.
| | Free | Hobby | Pro |
|---|---|---|---|
| Price | $0/month | $5/month | $15/month |
| Memories | 300 | 2,000 | 10,000 |
| Memory retrieval | Limited | Unlimited | Unlimited |
| Projects | Unlimited | Unlimited | Unlimited |
| MCP integration | Yes | Yes | Yes |
| Support | Community | Priority | Priority |
| Early access | - | - | Yes |
| Operation | Limit |
|---|---|
| Save memory | 30 requests/min |
| Search memory | 60 requests/min |
| Feedback | 30 requests/min |
| Global (dashboard) | 100 requests/min |
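Clients that batch many saves may want to stay under these limits proactively. The sketch below is a client-side sliding-window limiter matching the 30 saves/min budget; it is a convenience pattern, not part of MemContext, and all names here are invented:

```typescript
// Sliding-window rate limiter: allows at most `limit` calls per `windowMs`.
// makeLimiter(30, 60_000) matches the 30 requests/min save limit.
function makeLimiter(limit: number, windowMs: number) {
  const stamps: number[] = [];
  // Returns true if a call at time `now` (ms) may proceed, false if it must wait.
  return (now: number): boolean => {
    // Evict timestamps that have fallen out of the window.
    while (stamps.length > 0 && now - stamps[0] >= windowMs) stamps.shift();
    if (stamps.length >= limit) return false;
    stamps.push(now);
    return true;
  };
}

const allowSave = makeLimiter(30, 60_000);
// The first 30 saves in a minute pass; the 31st is rejected until the window slides.
```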
MemContext is open source and can be self-hosted. The project is a Turborepo monorepo with the following structure:
| Package | Description |
|---|---|
| `apps/api` | Hono backend - all business logic, database access, and AI processing |
| `apps/mcp` | MCP server - thin wrapper that translates MCP calls into API requests |
| `apps/dashboard` | Next.js dashboard - manage memories, API keys, subscriptions |
| `apps/website` | Marketing landing page |
| `packages/types` | Shared TypeScript type definitions |
| `packages/sdk` | Published TypeScript SDK (`memcontext-sdk`) |
| `docs/` | Public Mintlify documentation (docs.memcontext.in) |
| Component | Technology |
|---|---|
| Runtime | Node.js 20.9+ |
| Package Manager | pnpm 9.0 |
| Build System | Turborepo 2.7 |
| Language | TypeScript 5.9.2 |
| API Framework | Hono 4.7 |
| Frontend | Next.js 16.1, React 19, Tailwind CSS 4 |
| Database | Neon (PostgreSQL with pgvector) |
| ORM | Drizzle ORM 0.45 |
| Cache | Upstash Redis |
| Auth | Better Auth (Google + GitHub OAuth) |
| Payments | Dodo Payments |
| AI/Embeddings | OpenRouter, Vercel AI SDK |
| MCP | Model Context Protocol SDK |
- Node.js 20.9+
- pnpm 9.0+
- PostgreSQL database with pgvector extension (e.g. Neon)
- Upstash Redis account
- OpenRouter API key
- Google and/or GitHub OAuth credentials
```
pnpm install
```

Create `.env` files in `apps/api`, `apps/mcp`, `apps/dashboard`, and `apps/website` based on their respective `.env.example` files.

API (`apps/api/.env`):
| Variable | Description |
|---|---|
| `DATABASE_URL` | PostgreSQL connection string (with pgvector) |
| `OPENROUTER_API_KEY` | For embeddings and LLM classification |
| `UPSTASH_REDIS_REST_URL` | Redis for rate limiting and caching |
| `UPSTASH_REDIS_REST_TOKEN` | Redis auth token |
| `BETTER_AUTH_SECRET` | Auth secret (min 32 chars) |
| `BETTER_AUTH_URL` | API URL (e.g. http://localhost:3000) |
| `DASHBOARD_URL` | Dashboard URL (e.g. http://localhost:3020) |
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Google OAuth credentials |
| `GITHUB_CLIENT_ID` / `GITHUB_CLIENT_SECRET` | GitHub OAuth credentials |
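Pulled together from the table above, a local `apps/api/.env` might look like this. Every value is a placeholder to replace with your own:

```
DATABASE_URL=postgresql://user:password@localhost:5432/memcontext
OPENROUTER_API_KEY=<your-openrouter-key>
UPSTASH_REDIS_REST_URL=https://<your-instance>.upstash.io
UPSTASH_REDIS_REST_TOKEN=<your-redis-token>
BETTER_AUTH_SECRET=<random-string-of-at-least-32-chars>
BETTER_AUTH_URL=http://localhost:3000
DASHBOARD_URL=http://localhost:3020
GOOGLE_CLIENT_ID=<google-client-id>
GOOGLE_CLIENT_SECRET=<google-client-secret>
GITHUB_CLIENT_ID=<github-client-id>
GITHUB_CLIENT_SECRET=<github-client-secret>
```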
MCP (`apps/mcp/.env`):
| Variable | Description |
|---|---|
| `MEMCONTEXT_API_KEY` | Your API key from the dashboard |
| `MEMCONTEXT_API_URL` | API URL (defaults to http://localhost:3000) |
```
pnpm dev                            # Run all apps
pnpm dev --filter=@memcontext/api   # API only
pnpm dev --filter=@memcontext/mcp   # MCP only
```

Build all apps:

```
pnpm build
```

Full API docs: docs.memcontext.in
| Method | Path | Description |
|---|---|---|
| POST | `/api/memories` | Save a memory |
| GET | `/api/memories/search` | Hybrid search memories |
| GET | `/api/memories/profile` | Pre-aggregated user context |
| GET | `/api/memories/graph` | Memory graph data |
| GET | `/api/memories` | List memories (with filters) |
| GET | `/api/memories/:id` | Get a single memory |
| GET | `/api/memories/:id/history` | Get memory version history |
| PATCH | `/api/memories/:id` | Update a memory |
| DELETE | `/api/memories/:id` | Delete a memory |
| POST | `/api/memories/:id/forget` | Soft-delete (forget) a memory |
| POST | `/api/memories/:id/feedback` | Submit feedback on a memory |
| Method | Path | Auth | Description |
|---|---|---|---|
| GET | `/health` | None | Health check |
| POST | `/api/api-keys` | Session only | Create an API key |
| GET | `/api/api-keys` | Session only | List API keys |
| DELETE | `/api/api-keys/:id` | Session only | Revoke an API key |
| GET | `/api/user/profile` | Session only | Get user profile |
| GET | `/api/user/subscription` | Session only | Get subscription info |
| GET | `/api/user/dashboard-stats` | Session only | Get dashboard statistics |
| GET | `/api/user/memory-hierarchy` | Session only | Get scope/project tree |
| POST | `/api/subscription/change-plan` | Session only | Change subscription plan |
| GET | `/api/subscription/current` | Session only | Get current subscription |
| POST | `/api/waitlist` | None | Join waitlist |
REST and SDK clients can pass `scope` on memory operations for hard isolation. The dashboard renders memories as Global/unscoped first, then named scopes, with projects nested inside the selected scope. MCP tools intentionally omit `scope` to avoid agents inventing isolation IDs.
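For instance, a multi-tenant app saving a memory on behalf of one end user might POST a body like the following. This assumes the REST body mirrors the MCP tool parameters plus scope; the field values are hypothetical:

```json
{
  "content": "Prefers dark mode in the web UI",
  "category": "preference",
  "scope": "user_42"
}
```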
MemContext stands on the shoulders of two incredible open-source projects in the AI memory space:
- Mem0 (50k+ stars) - The pioneering universal memory layer for AI agents. Mem0's work on intelligent memory extraction, user profiling, and their published research on scalable long-term memory laid the groundwork for how AI memory systems should work. Apache 2.0 licensed.
- Supermemory (17k+ stars) - A blazing-fast memory engine ranking #1 on LongMemEval, LoCoMo, and ConvoMem benchmarks. Their open-source plugins for Claude Code, OpenCode, and OpenClaw, along with their MCP-first approach, have been a huge inspiration. MIT licensed.
Both projects proved that persistent AI memory is not just possible but essential. We built MemContext to bring a focused, MCP-native memory layer that's simple to set up and works across every major AI coding assistant.
Contributions are welcome. Please feel free to submit a Pull Request.
This project is licensed under the GNU General Public License v3.0. See the LICENSE file for details.