feat: session compression + adaptive extraction throttling #318
Open
AliceLJY wants to merge 3 commits into CortexReach:master from
Conversation
Implement two features to improve memory quality and reduce LLM cost:

Feature 1 — Session Compression (src/session-compressor.ts):
- Score conversation texts by information density (tool calls > corrections > decisions > substantive content > questions > acknowledgments > empty)
- Compress texts to fit within the extractMaxChars budget, always preserving the first and last text (session boundaries)
- Handle paired texts (tool call + result kept/dropped together)
- Fall back to keeping the last N texts when all scores are low
- Integrated into the agent_end auto-capture hook before smart extraction

Feature 7 — Adaptive Extraction Throttling:
- estimateConversationValue() heuristic to skip low-value conversations (value < 0.2) before extraction
- Extraction rate limiter (createExtractionRateLimiter) with a sliding one-hour window, default 30 extractions/hour
- Both checks run before any LLM calls in the auto-capture pipeline

Config additions (openclaw.plugin.json):
- sessionCompression.enabled (default: true)
- sessionCompression.minScoreToKeep (default: 0.3)
- extractionThrottle.skipLowValue (default: true)
- extractionThrottle.maxExtractionsPerHour (default: 30)

Tests: 31 tests covering score ordering, budget enforcement, chronological re-ordering, first+last preservation, paired text handling, all-low-score fallback, conversation value estimation, and rate-limiting window behavior.

Refs: https://github.com/CortexReach/memory-lancedb-pro-enhancements

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
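The density-scoring and budget-compression steps described above can be sketched as follows. This is a minimal illustration of the technique, not the actual src/session-compressor.ts code: the scoring weights, regex patterns, and the `tool_call:` text format are assumptions for demonstration (the PR's scoreText and compressTexts likely differ in detail, and per review item 6 the real version additionally hard-caps even first/last texts at maxChars).

```typescript
// Score a text by information density: tool calls rank highest,
// acknowledgments and empty texts lowest. Weights/patterns are illustrative.
function scoreText(text: string): number {
  const t = text.trim();
  if (t.length === 0) return 0.0;                                 // empty
  if (/^tool_call:/.test(t)) return 1.0;                          // tool calls
  if (/\b(actually|correction|instead)\b/i.test(t)) return 0.8;   // corrections
  if (/\b(decided|let's go with|we will)\b/i.test(t)) return 0.7; // decisions
  if (/^(ok|okay|thanks|got it)\.?$/i.test(t)) return 0.1;        // acknowledgments
  if (t.endsWith("?")) return 0.4;                                // questions
  return 0.5;                                                     // substantive content
}

// Keep the highest-scoring middle texts that fit in the char budget,
// always preserving the first and last text (session boundaries),
// then emit the survivors in chronological order.
function compressTexts(texts: string[], maxChars: number, minScore = 0.3): string[] {
  if (texts.length <= 2) return texts;
  const first = 0;
  const last = texts.length - 1;
  const middle = texts
    .map((t, i) => ({ t, i, score: scoreText(t) }))
    .slice(1, -1)                           // boundaries are kept unconditionally
    .filter((e) => e.score >= minScore)     // drop low-density texts
    .sort((a, b) => b.score - a.score);     // spend budget on dense texts first

  let budget = maxChars - texts[first].length - texts[last].length;
  const keep = new Set<number>([first, last]);
  for (const e of middle) {
    if (e.t.length <= budget) {
      keep.add(e.i);
      budget -= e.t.length;
    }
  }
  // Filtering by original index restores chronological order.
  return texts.filter((_, i) => keep.has(i));
}
```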
…ractor-branches test

The new session compression and adaptive throttling features skip short/low-value conversations by default. The existing test uses 4 short Chinese messages that score 0.0 on the conversation value heuristic, causing the extraction to be silently skipped.

Fix: explicitly disable both features in the test's mock config.
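The test fix above hinges on short conversations scoring near 0.0 on the value heuristic. A minimal sketch of such a heuristic is below; the signal choices and weights are assumptions for illustration, not the PR's actual estimateConversationValue() (which, per the review, also gained memory-intent keywords so explicit requests are never skipped).

```typescript
// Illustrative conversation-value heuristic: the auto-capture pipeline
// skips extraction entirely when the value falls below 0.2.
function estimateConversationValue(texts: string[]): number {
  const joined = texts.join("\n");
  let value = 0;
  // Length signal: very short conversations carry little worth extracting.
  if (joined.length > 500) value += 0.2;
  if (joined.length > 2000) value += 0.2;
  // Memory-intent keywords: an explicit "remember/recall/记住" request
  // alone clears the 0.2 skip threshold.
  if (/\b(remember|recall)\b|记住/i.test(joined)) value += 0.5;
  // Decision/correction signals suggest durable facts worth extracting.
  if (/\b(decided|agreed|instead|actually)\b/i.test(joined)) value += 0.2;
  return Math.min(value, 1);
}
```

Under this sketch, four short greetings score 0.0 and get skipped, which is exactly the failure mode the test fix works around.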
1. [High] Add memory-intent keywords (remember/recall/记住) to conversation value estimation — prevents skipLowValue from dropping explicit memory requests
2. [Medium] Remove over-broad tool call patterns (fenced code blocks, "$ ") that misclassified normal code as tool calls with score 1.0
3. [Medium] Fix paired text handling: only tool_call (not tool_result) can initiate a pair, preventing result lines from pulling unrelated neighbors
4. [Medium] Move the rate limiter charge to AFTER successful extraction so no-op sessions don't consume the hourly quota
5. [Medium] Change defaults to opt-in (enabled: false, skipLowValue: false) to preserve backward compatibility for existing users
6. [Medium] Hard cap the budget: even the first/last text cannot exceed maxChars

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
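Review item 4 above separates checking quota from consuming it, so a session that extracts nothing never burns a slot. A sliding-window limiter with that split might look like the sketch below; the factory name matches the PR's createExtractionRateLimiter, but the tryAcquire/charge method names and the timestamp-array design are illustrative assumptions.

```typescript
// Sliding one-hour-window rate limiter. tryAcquire() only checks capacity;
// charge() is called after a successful extraction (review item 4), so
// no-op sessions don't consume the hourly quota.
function createExtractionRateLimiter(maxPerHour = 30, windowMs = 60 * 60 * 1000) {
  const timestamps: number[] = []; // charge times, oldest first

  // Drop charges that have slid out of the window.
  const prune = (now: number) => {
    while (timestamps.length > 0 && now - timestamps[0] >= windowMs) {
      timestamps.shift();
    }
  };

  return {
    // May an extraction proceed? Does NOT consume quota.
    tryAcquire(now = Date.now()): boolean {
      prune(now);
      return timestamps.length < maxPerHour;
    },
    // Consume one slot; call only after extraction actually wrote memories.
    charge(now = Date.now()): void {
      prune(now);
      timestamps.push(now);
    },
  };
}
```

In the auto-capture hook, tryAcquire() would gate the LLM call and charge() would run only once memories were successfully stored.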
Summary
- Session compression (src/session-compressor.ts): Scores conversation texts by information density (tool calls > corrections > decisions > substantive > questions > acknowledgments) and compresses them to fit within the extraction budget. Always preserves first/last text (session boundaries), handles paired tool call + result texts, and falls back to keeping recent texts when all scores are low.
- Adaptive extraction throttling (src/smart-extractor.ts + index.ts): Adds estimateConversationValue() to skip low-value conversations (< 0.2) before LLM extraction, plus a sliding-window rate limiter (default 30 extractions/hour) to prevent runaway costs during rapid-fire sessions.
- Config: New sessionCompression and extractionThrottle sections in openclaw.plugin.json with sensible defaults (both enabled by default).

Refs: https://github.com/CortexReach/memory-lancedb-pro-enhancements (Features 1 & 7)
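For reference, the new openclaw.plugin.json sections might look like the sketch below. The key names and values come from this PR's description and review notes (the review's item 5 flipped the two feature toggles to opt-in, i.e. false); the surrounding plugin schema is assumed, so treat this as a fragment rather than a complete config.

```json
{
  "sessionCompression": {
    "enabled": false,
    "minScoreToKeep": 0.3
  },
  "extractionThrottle": {
    "skipLowValue": false,
    "maxExtractionsPerHour": 30
  }
}
```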
Changed files
- src/session-compressor.ts: scoreText, compressTexts, estimateConversationValue
- src/smart-extractor.ts: createExtractionRateLimiter at end of file
- index.ts: agent_end hook, parse new config fields
- openclaw.plugin.json: sessionCompression and extractionThrottle sections
- test/session-compressor.test.mjs

Test plan
- npx jiti test/session-compressor.test.mjs: 31 tests pass (score ordering, budget enforcement, chronological re-ordering, first+last preservation, paired text handling, all-low-score fallback, conversation value estimation, rate limiting)
- Enable sessionCompression and extractionThrottle in a live OpenClaw instance, confirm debug logs show compression stats and throttling decisions

🤖 Generated with Claude Code