This release introduces intelligent context retrieval that uses LLM analysis to select the most relevant frames for any query.
- Natural language queries: Ask for context in plain English
- LLM-driven analysis: Intelligently selects relevant frames based on query semantics
- Token budget management: Stays within specified token limits
- Auditable reasoning: Every retrieval decision is explained
- Heuristic fallback: Works even without an LLM provider
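To make the fallback path concrete, here is a minimal sketch of heuristic retrieval under a token budget. All names (`Frame`, `heuristicRetrieve`, the field names) are illustrative assumptions, not the package's actual API:

```typescript
// Hypothetical frame shape; the real type lives in src/core/retrieval/types.ts.
interface Frame {
  id: string;
  topic: string;
  tokens: number; // estimated token cost of including this frame
}

interface RetrievalResult {
  frames: Frame[];
  reasoning: string; // auditable explanation of the selection
  usedFallback: boolean;
}

// Heuristic fallback: rank frames by keyword overlap with the query,
// then take top-ranked frames until the token budget would be exceeded.
function heuristicRetrieve(
  query: string,
  frames: Frame[],
  tokenBudget: number
): RetrievalResult {
  const words = query.toLowerCase().split(/\s+/);
  const ranked = frames
    .map((f) => ({
      f,
      score: words.filter((w) => f.topic.toLowerCase().includes(w)).length,
    }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score);

  const picked: Frame[] = [];
  let used = 0;
  for (const { f } of ranked) {
    if (used + f.tokens > tokenBudget) break; // stay within the budget
    picked.push(f);
    used += f.tokens;
  }
  return {
    frames: picked,
    reasoning: `keyword match on [${words.join(", ")}]; ${used}/${tokenBudget} tokens used`,
    usedFallback: true,
  };
}
```

When an LLM provider is configured, the same budget check would apply to the frames the model proposes; the fallback simply substitutes keyword scoring for semantic analysis.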
- Recent session summary: Frames, operations, files touched, errors
- Historical patterns: Topic counts, key decisions, recurring issues
- Queryable indices: By error, time, contributor, topic, file
- Summary statistics: Frame counts, event counts, anchor totals
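The summary statistics above can be sketched as a small aggregation pass. The type and function names here are assumptions for illustration, not the actual `summary-generator.ts` interface:

```typescript
// Illustrative shape for the summary statistics; field names are assumptions.
interface SummaryStats {
  frameCount: number;
  eventCount: number;
  anchorTotal: number;
  topicCounts: Record<string, number>; // feeds the historical-patterns view
}

// Aggregate per-frame data into the compressed summary's statistics.
function summarize(
  frames: { topic: string; events: number; anchors: number }[]
): SummaryStats {
  const topicCounts: Record<string, number> = {};
  let eventCount = 0;
  let anchorTotal = 0;
  for (const f of frames) {
    topicCounts[f.topic] = (topicCounts[f.topic] ?? 0) + 1;
    eventCount += f.events;
    anchorTotal += f.anchors;
  }
  return { frameCount: frames.length, eventCount, anchorTotal, topicCounts };
}
```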
```yaml
context_retrieval:
  compressed_summary:
    recent_session: frames, operations, files, errors
    historical_patterns: topic counts, key decisions, recurring issues
    queryable_indices: by error, timeframe, contributor
  llm_analysis:
    inputs: current_query, compressed_summary, token_budget
    output: reasoning (auditable), frames_to_retrieve, confidence_score
```
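Since the LLM's output is parsed from free-form model text, a validation step keeps the retrieval decision auditable. This is a hypothetical validator mirroring the `llm_analysis` output fields above, not the package's actual implementation:

```typescript
// Field names mirror the llm_analysis output sketch; the validator itself
// is an illustrative assumption.
interface AnalysisOutput {
  reasoning: string;         // auditable explanation of why frames were chosen
  framesToRetrieve: string[]; // frame ids to pull into context
  confidenceScore: number;    // expected in [0, 1]
}

function parseAnalysisOutput(raw: string): AnalysisOutput {
  const o = JSON.parse(raw);
  if (typeof o.reasoning !== "string" || o.reasoning.length === 0) {
    throw new Error("missing auditable reasoning");
  }
  if (!Array.isArray(o.framesToRetrieve)) {
    throw new Error("framesToRetrieve must be an array of frame ids");
  }
  if (
    typeof o.confidenceScore !== "number" ||
    o.confidenceScore < 0 ||
    o.confidenceScore > 1
  ) {
    throw new Error("confidenceScore out of range");
  }
  return o as AnalysisOutput;
}
```

Rejecting malformed output here is what lets every accepted retrieval decision carry a non-empty, inspectable `reasoning` string.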
| Tool | Description |
|---|---|
| `smart_context` | LLM-driven context retrieval with natural-language queries |
| `get_summary` | Compressed summary of project memory |
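From an MCP client, calling `smart_context` would look like a standard `tools/call` request. The JSON-RPC envelope below follows the MCP convention; the `query` and `token_budget` argument names are assumptions, not confirmed parameter names:

```typescript
// Hypothetical smart_context invocation as an MCP tools/call request.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "smart_context",
    arguments: {
      query: "What did we work on related to authentication?",
      token_budget: 4000, // assumed cap on returned context size
    },
  },
};
```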
- Trace Detection: Improved persistence and bundling
- Model-Aware Compaction: Handlers for context window management
- Linear Sync: Enhanced sync manager for Linear integration
- Query Parser: Extended natural language query parsing
- `src/core/retrieval/` - Complete retrieval system
  - `types.ts` - Type definitions
  - `summary-generator.ts` - Compressed summary generation
  - `llm-context-retrieval.ts` - Main retrieval orchestrator
  - `index.ts` - Module exports
- `src/core/context/compaction-handler.ts` - Autocompaction detection
- `src/core/context/model-aware-compaction.ts` - Model-specific handling
- `src/core/trace/trace-store.ts` - Trace persistence
- `src/integrations/linear/sync-manager.ts` - Enhanced Linear sync
```shell
npm install -g @stackmemoryai/stackmemory@0.2.8
```

```
# In Claude Desktop or MCP client:
smart_context "What did we work on related to authentication?"
get_summary
```

Built with LLM-driven context retrieval