
Commit f7d1210

v0.3.0: tray app UX overhaul, dashboard logs, orphan cleanup
Tray app:
- Reordered menu with MongoDB gating
- Configure Grove Access dialog (Azure AI Inference)
- Credential checkmarks on menu items
- Fix double-click Start/Stop bug via uiGrace
- 36 unit tests for tray app

Daemon:
- Kill orphaned llama-server on startup
- Rewire() for hot-swapping after delayed MongoDB connect
- Synthesizer picks up keychain credentials after retry

Dashboard:
- Logs tab with auto-refreshing viewer
- /api/logs endpoint (GET ?lines=N)
1 parent baa7346 commit f7d1210

13 files changed: 2342 additions & 644 deletions


.github/copilot-instructions.md

Lines changed: 3 additions & 13 deletions
````diff
@@ -1,24 +1,14 @@
-# memoryd — Copilot Instructions
+# memoryd — Codebase Reference for Copilot
 
-> Persistent memory for coding agents. A local daemon that gives Claude Code (and other agents) long-term memory via transparent RAG.
-
-## What This Is
-
-memoryd is a Go daemon that sits between a coding agent and the Anthropic API. It intercepts every request, enriches it with relevant context from a MongoDB vector store, and stores useful information from responses — all transparently. The agent never knows it's there.
-
-```
-Developer → Claude Code → memoryd (127.0.0.1:7432) → Anthropic API
-
-MongoDB (Atlas or Local)
-```
+> Shared memory/IP/project context is in the parent `.github/copilot-instructions.md` and `PROJECT_CONTEXT.md`. This file covers memoryd-specific architecture.
 
 Module: `github.com/memory-daemon/memoryd`
 Go version: 1.26+
 Config: `~/.memoryd/config.yaml`
 
 ---
 
-## Architecture
+## Codebase Reference
 
 ### Package Map
 
````
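The overview removed in this commit described memoryd intercepting each request and enriching it with relevant context from the vector store before forwarding it upstream. A minimal sketch of such an enrichment step; the `Memory` type and `enrich` function are illustrative assumptions, not memoryd's actual API:

```go
package main

import "fmt"

// Memory is a hypothetical retrieved-context record.
type Memory struct {
	Content string
	Score   float64
}

// enrich prepends retrieved memories to the prompt as a delimited
// context preamble, leaving the original prompt text intact for the
// upstream API. The tag names are illustrative.
func enrich(prompt string, memories []Memory) string {
	if len(memories) == 0 {
		return prompt // nothing retrieved: pass the request through untouched
	}
	ctx := "<memory-context>\n"
	for _, m := range memories {
		ctx += "- " + m.Content + "\n"
	}
	ctx += "</memory-context>\n\n"
	return ctx + prompt
}

func main() {
	out := enrich("Add pagination to the API", []Memory{
		{Content: "API uses cursor-based pagination", Score: 0.9},
	})
	fmt.Println(out)
}
```

The empty-retrieval fast path is what keeps the proxy transparent: when nothing relevant is found, the agent's request reaches the API byte-for-byte unchanged.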

CLAUDE.md

Lines changed: 51 additions & 62 deletions
````diff
@@ -1,62 +1,51 @@
-# memoryd — Persistent Memory for Claude Code
-
-You have access to a long-term memory system via MCP tools. Use it aggressively — search at the start of every task, store after every meaningful piece of work. The system automatically deduplicates and filters noise, so you cannot over-store.
-
-## When to search memory
-
-- **At the start of EVERY task** — always call `memory_search` before doing any work. Prior sessions almost certainly have relevant context.
-- **When you encounter unfamiliar code** — search for prior notes about the module, pattern, or architecture.
-- **Before making decisions** — check if the decision was already made. Avoid contradicting prior work.
-- **When debugging** — search for known issues, workarounds, or environment gotchas.
-
-## When to store memory
-
-Store liberally. The system deduplicates automatically — if you store something that already exists, it gets silently skipped. Prefer storing too much over too little.
-
-- **After completing any task** — summarize what was done, what decisions were made, and why.
-- **Discoveries** — architecture patterns, naming conventions, tricky configs, hidden dependencies.
-- **User preferences** — coding style, preferred libraries, workflow expectations, communication style.
-- **Debugging insights** — root causes, workarounds, environment quirks, version-specific issues.
-- **Decisions and rationale** — always include "why", not just "what". Future sessions need the reasoning.
-- **Gotchas and failures** — approaches that didn't work and why, so future sessions don't repeat them.
-
-## Available tools
-
-| Tool | Purpose |
-|------|---------|
-| `memory_search` | Search memory with a natural language query |
-| `memory_store` | Store content (auto-deduped, auto-filtered) |
-| `memory_list` | List stored memories (optional text filter) |
-| `memory_delete` | Delete a memory by ID |
-| `source_ingest` | Crawl a URL and ingest as a knowledge source |
-| `source_list` | List all ingested sources |
-| `source_remove` | Remove a source and its memories |
-| `quality_stats` | Check adaptive learning status |
-
-## Example workflow
-
-1. User asks: "Add pagination to the API"
-2. **Search first**: `memory_search` with "pagination API implementation"
-3. Memory returns notes from a prior session about API structure and pagination preferences
-4. You proceed informed by prior context
-5. **Store after**: `memory_store` with a concise summary of what was implemented and design decisions
-
-## Writing good memories
-
-- Concise, structured bullet points — not prose paragraphs
-- Include the "why" behind every decision
-- Tag with the relevant area: `[auth]`, `[api]`, `[deploy]`, `[config]`, etc.
-- One topic per store call — don't combine unrelated facts
-- Be specific: "Uses MongoDB Atlas vector search with cosine similarity, 1024-dim voyage-4-nano embeddings" not "Uses a database with search"
-
-## Source ingestion
-
-You can ingest external documentation (company wikis, internal docs) as knowledge sources. Use `source_ingest` with a name and base URL — the system will crawl, chunk, embed, and store all pages. Source memories are tagged `source:NAME|URL` and automatically deduplicated on refresh.
-
-When you store a new memory that's similar (but not identical) to an existing source memory, the system tags it as an "extension" rather than a duplicate. This builds on reference material with project-specific context.
-
-## Adaptive quality learning
-
-The system tracks which memories get retrieved and how often. While in "learning mode" (< 50 retrieval events), it keeps everything. Use `quality_stats` to check the current learning status. Over time, memories that are never retrieved will score lower, helping the system learn what's worth keeping.
-
-The system also learns what **noise** looks like. Exchanges rejected by the pre-filter or synthesizer are accumulated in a ring buffer. Every 25 rejections, the assistant texts are re-embedded as noise prototypes and hot-swapped into the content scorer. This means the system adapts to your team's specific noise patterns — the more it sees procedural chatter, the better it gets at filtering it before spending an LLM call.
````
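The adaptive-noise text above describes a ring buffer of rejected exchanges that triggers a re-embedding pass every 25 rejections. A small sketch of that accumulation logic; the type and method names are hypothetical, and the actual embedding work is out of scope here:

```go
package main

import "fmt"

// rejectionBuffer accumulates rejected assistant texts and reports when
// a noise-prototype rebuild is due. Names are hypothetical; this only
// models the buffering and cadence described above, not the embedding.
type rejectionBuffer struct {
	cap   int      // maximum texts retained (ring capacity)
	items []string // most recent rejections, oldest first
	count int      // total rejections ever seen
}

// Add records one rejected text, dropping the oldest entry once the
// ring is full, and returns true every flushEvery-th rejection to
// signal that the noise prototypes should be re-embedded.
func (b *rejectionBuffer) Add(text string, flushEvery int) bool {
	b.items = append(b.items, text)
	if len(b.items) > b.cap {
		b.items = b.items[1:] // ring behaviour: forget the oldest rejection
	}
	b.count++
	return b.count%flushEvery == 0
}

func main() {
	b := &rejectionBuffer{cap: 100}
	due := false
	for i := 0; i < 25; i++ {
		due = b.Add(fmt.Sprintf("rejected %d", i), 25)
	}
	fmt.Println(due) // the 25th rejection signals a rebuild
}
```

Counting total rejections separately from the ring contents means the rebuild cadence stays regular even after old entries have been evicted.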
````diff
+# memoryd — Codebase Reference for Claude Code
+
+> Shared memory/IP/project context is in the parent `CLAUDE.md` and `PROJECT_CONTEXT.md`. This file covers memoryd-specific architecture.
+
+Module: `github.com/memory-daemon/memoryd`
+Go version: 1.26+
+Config: `~/.memoryd/config.yaml`
+
+## CLI Commands
+
+```
+memoryd start      Start daemon (foreground). Creates config on first run.
+memoryd mcp        Start as MCP stdio server (for Claude Code MCP integration)
+memoryd status     Ping health endpoint
+memoryd search     Regex search on memory content
+memoryd forget     Delete one memory by hex ID
+memoryd wipe       Delete all memories (confirmation required)
+memoryd env        Print ANTHROPIC_BASE_URL export
+memoryd version    Print version
+memoryd ingest     Crawl a URL and store as source
+memoryd sources    List ingested sources
+memoryd export     Export memories to markdown
+```
+
+## Build & Test
+
+```bash
+make build     # → bin/memoryd
+go test ./...  # all unit tests (no external deps needed)
+go vet ./...   # static analysis
+```
+
+## Conventions
+
+- Standard Go: `gofmt`, `go vet`, no external linters
+- Interfaces defined in the package that uses them
+- Functional options pattern for configuration (e.g., `proxy.WithStore()`)
+- Errors logged at the boundary, not propagated through async paths
+- Unit tests use in-memory mocks, test files live next to their code
+- Write pipeline runs in goroutines — errors logged, never returned to caller
+- `redact.Clean()` strips secrets BEFORE embedding — secrets never enter the vector store
+- Daemon binds to 127.0.0.1 only
+
+## Gotchas
+
+1. **Embedding dim is 1024, not 512.** voyage-4-nano produces 1024-dim vectors. The vector index must match.
+2. **Atlas Local doesn't support `$search` or `$vectorSearch` filters.** Those are Atlas-proper features.
+3. **New memories have quality_score 0.** AtlasStore uses `$or` to avoid filtering out unscored memories.
+4. **SSE streaming.** The proxy buffers the full response for the write path while streaming to the client. Don't break the streaming path for write-path changes.
+5. **Config path expansion.** `~` in `model_path` is expanded by the config loader. Use the config package's path handling.
+6. **Content score pre-gate does NOT feed rejection store.** Only QuickFilter and synthesizer rejections feed back. Prevents positive feedback loop.
````
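The Conventions section mentions the functional options pattern for configuration (e.g., `proxy.WithStore()`). A generic, self-contained sketch of that pattern under assumed names; everything here besides the pattern itself (`Proxy`, `WithAddr`, `memStore`, the default address) is hypothetical rather than memoryd's real API:

```go
package main

import "fmt"

// Store is a placeholder for the daemon's storage interface.
type Store interface{ Name() string }

type memStore struct{}

func (memStore) Name() string { return "in-memory" }

// Proxy holds configuration assembled from options.
type Proxy struct {
	addr  string
	store Store
}

// Option mutates a Proxy during construction.
type Option func(*Proxy)

// WithStore injects a Store implementation (hypothetical signature).
func WithStore(s Store) Option { return func(p *Proxy) { p.store = s } }

// WithAddr overrides the default listen address (hypothetical).
func WithAddr(addr string) Option { return func(p *Proxy) { p.addr = addr } }

// New applies options over safe defaults, so zero-option construction
// still yields a usable value.
func New(opts ...Option) *Proxy {
	p := &Proxy{addr: "127.0.0.1:7432"}
	for _, o := range opts {
		o(p)
	}
	return p
}

func main() {
	p := New(WithStore(memStore{}))
	fmt.Println(p.addr, p.store.Name()) // prints "127.0.0.1:7432 in-memory"
}
```

The pattern keeps constructors backward-compatible: new options can be added without changing `New`'s signature or breaking existing call sites.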
