
feat: session compression + adaptive extraction throttling#318

Open
AliceLJY wants to merge 3 commits into CortexReach:master from AliceLJY:feat/session-compression

Conversation

@AliceLJY
Collaborator

Summary

  • Session Compression (src/session-compressor.ts): Scores conversation texts by information density (tool calls > corrections > decisions > substantive > questions > acknowledgments) and compresses them to fit within the extraction budget. Always preserves first/last text (session boundaries), handles paired tool call + result texts, and falls back to keeping recent texts when all scores are low.
  • Adaptive Extraction Throttling (in src/smart-extractor.ts + index.ts): Adds estimateConversationValue() to skip low-value conversations (< 0.2) before LLM extraction, plus a sliding-window rate limiter (default 30 extractions/hour) to prevent runaway costs during rapid-fire sessions.
  • Config: New sessionCompression and extractionThrottle sections in openclaw.plugin.json with sensible defaults (both enabled by default).
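The density-scoring-plus-budget idea can be sketched as follows. This is a hypothetical illustration, not the PR's implementation: only the names `scoreText` and `compressTexts` come from the PR; the signatures, the `ConversationText` shape, and the scoring heuristics are assumptions.

```typescript
// Sketch only — role names, scores, and regexes are illustrative assumptions.
interface ConversationText {
  role: "user" | "assistant" | "tool_call" | "tool_result";
  content: string;
}

// Score by information density, highest first:
// tool calls > corrections > decisions > substantive > questions > acknowledgments.
function scoreText(t: ConversationText): number {
  if (t.role === "tool_call" || t.role === "tool_result") return 1.0;
  const c = t.content.trim();
  if (/\b(actually|correction|i meant|that's wrong)\b/i.test(c)) return 0.9;
  if (/\b(decided|let's go with|we will use)\b/i.test(c)) return 0.8;
  if (c.length > 200) return 0.6;                          // substantive content
  if (c.endsWith("?")) return 0.4;                         // questions
  if (/^(ok|okay|thanks|got it)\b/i.test(c)) return 0.1;   // acknowledgments
  return c.length === 0 ? 0 : 0.3;
}

// Keep the highest-scoring texts until the char budget is reached, always
// preserving the first and last text; filter() restores chronological order.
function compressTexts(texts: ConversationText[], maxChars: number): ConversationText[] {
  if (texts.length <= 2) return texts;
  const keep = new Set<number>([0, texts.length - 1]);
  let used = texts[0].content.length + texts[texts.length - 1].content.length;
  const ranked = texts
    .map((t, i) => ({ i, score: scoreText(t) }))
    .filter(({ i }) => !keep.has(i))
    .sort((a, b) => b.score - a.score);
  for (const { i } of ranked) {
    const len = texts[i].content.length;
    if (used + len > maxChars) continue;
    keep.add(i);
    used += len;
  }
  return texts.filter((_, i) => keep.has(i));
}
```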

Refs: https://github.com/CortexReach/memory-lancedb-pro-enhancements (Features 1 & 7)

Changed files

| File | Change |
| --- | --- |
| src/session-compressor.ts | New — `scoreText`, `compressTexts`, `estimateConversationValue` |
| src/smart-extractor.ts | Added `createExtractionRateLimiter` at end of file |
| index.ts | Import new modules; add rate limit + value check + compression in `agent_end` hook; parse new config fields |
| openclaw.plugin.json | Schema + uiHints for `sessionCompression` and `extractionThrottle` |
| test/session-compressor.test.mjs | 31 tests covering all new functionality |

Test plan

  • npx jiti test/session-compressor.test.mjs — 31 tests pass (score ordering, budget enforcement, chronological re-ordering, first+last preservation, paired text handling, all-low-score fallback, conversation value estimation, rate limiting)
  • Manual verification: enable sessionCompression and extractionThrottle in a live OpenClaw instance, confirm debug logs show compression stats and throttling decisions
  • Verify existing auto-capture behavior is unchanged when both features are disabled via config

🤖 Generated with Claude Code

AliceLJY and others added 3 commits on March 23, 2026 at 18:31
Implement two features to improve memory quality and reduce LLM cost:

Feature 1 — Session Compression (src/session-compressor.ts):
- Score conversation texts by information density (tool calls > corrections
  > decisions > substantive content > questions > acknowledgments > empty)
- Compress texts to fit within extractMaxChars budget, always preserving
  first and last text (session boundaries)
- Handle paired texts (tool call + result kept/dropped together)
- Fall back to keeping last N texts when all scores are low
- Integrated into agent_end auto-capture hook before smart extraction

Feature 7 — Adaptive Extraction Throttling:
- estimateConversationValue() heuristic to skip low-value conversations
  (value < 0.2) before extraction
- Extraction rate limiter (createExtractionRateLimiter) with sliding
  one-hour window, default 30 extractions/hour
- Both checks run before any LLM calls in the auto-capture pipeline
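A sliding-window limiter like the one described could look roughly like this. Only the name `createExtractionRateLimiter` is from the commit; the returned `allow`/`charge` shape and parameters are assumptions for illustration.

```typescript
// Hypothetical sketch of a sliding one-hour-window rate limiter.
function createExtractionRateLimiter(maxPerWindow = 30, windowMs = 60 * 60 * 1000) {
  let timestamps: number[] = [];
  return {
    // True when another extraction is allowed right now.
    allow(now: number = Date.now()): boolean {
      timestamps = timestamps.filter((t) => now - t < windowMs); // evict expired
      return timestamps.length < maxPerWindow;
    },
    // Record one extraction against the current window.
    charge(now: number = Date.now()): void {
      timestamps.push(now);
    },
  };
}
```

Splitting `allow` from `charge` lets the caller decide when the quota is consumed (e.g. only after a successful extraction rather than on every attempt).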

Config additions (openclaw.plugin.json):
- sessionCompression.enabled (default: true)
- sessionCompression.minScoreToKeep (default: 0.3)
- extractionThrottle.skipLowValue (default: true)
- extractionThrottle.maxExtractionsPerHour (default: 30)
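With the defaults listed above, the user-facing settings in openclaw.plugin.json would look roughly like this (a sketch of the values, not the plugin's schema; the exact nesting is an assumption):

```json
{
  "sessionCompression": {
    "enabled": true,
    "minScoreToKeep": 0.3
  },
  "extractionThrottle": {
    "skipLowValue": true,
    "maxExtractionsPerHour": 30
  }
}
```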

Tests: 31 tests covering score ordering, budget enforcement,
chronological re-ordering, first+last preservation, paired text
handling, all-low-score fallback, conversation value estimation,
and rate limiting window behavior.

Refs: https://github.com/CortexReach/memory-lancedb-pro-enhancements

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ractor-branches test

The new session compression and adaptive throttling features skip
short/low-value conversations by default. The existing test uses
4 short Chinese messages that score 0.0 on the conversation value
heuristic, causing the extraction to be silently skipped.

Fix: explicitly disable both features in the test's mock config.

1. [High] Add memory-intent keywords (remember/recall/记住) to conversation
   value estimation — prevents skipLowValue from dropping explicit memory requests
2. [Medium] Remove over-broad tool call patterns (fenced code blocks, "$ ")
   that misclassified normal code as tool calls with score 1.0
3. [Medium] Fix paired text handling: only tool_call (not tool_result) can
   initiate a pair, preventing result lines from pulling unrelated neighbors
4. [Medium] Move rate limiter charge to AFTER successful extraction so
   no-op sessions don't consume the hourly quota
5. [Medium] Change defaults to opt-in (enabled: false, skipLowValue: false)
   to preserve backward compatibility for existing users
6. [Medium] Hard cap budget: even first/last text cannot exceed maxChars
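The pairing fix in point 3 can be sketched as follows — a hypothetical illustration (the role names and function are assumptions, not the PR's code) of the rule that only a tool_call may start a pair, so a stray tool_result never drags an unrelated neighbor along:

```typescript
type Role = "user" | "assistant" | "tool_call" | "tool_result";

// Return [callIndex, resultIndex] pairs; only a tool_call initiates a pair.
function pairIndices(roles: Role[]): Array<[number, number]> {
  const pairs: Array<[number, number]> = [];
  for (let i = 0; i < roles.length - 1; i++) {
    if (roles[i] === "tool_call" && roles[i + 1] === "tool_result") {
      pairs.push([i, i + 1]);
      i++; // the result is consumed by this pair, skip it
    }
  }
  return pairs;
}
```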

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>