Problem
Currently, intermediate analysis results are stored in /tmp/rlm-sessions/, which is ephemeral and cleared on reboot. If a long-running RLM analysis crashes mid-dispatch, all progress is lost, and there is no way to inspect what the agent was thinking during intermediate steps.
Proposal (from AIGNE paper analysis)
Create persistent scratchpad at /rlm/scratchpad/{session_id}/ that:
- Stores intermediate chunk analysis results during dispatch
- Captures LLM reasoning steps before synthesis (for debugging/provenance)
- Auto-prunes after session finalization (results are kept for 7 days)
- Supports selective promotion: the agent is asked whether scratchpad content should become permanent memory
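The per-chunk persistence behind resumability could be sketched as follows. This is a minimal illustration, not an existing API: `SCRATCHPAD_ROOT`, `save_chunk_result`, and `load_completed_chunks` are all hypothetical names, and the on-disk JSON layout is an assumption.

```python
import json
import time
from pathlib import Path

# Hypothetical root; the proposal places sessions under /rlm/scratchpad/{session_id}/
SCRATCHPAD_ROOT = Path("/rlm/scratchpad")

def save_chunk_result(session_id: str, chunk_id: int, result: dict) -> Path:
    """Persist one chunk's intermediate findings so a crashed dispatch can resume."""
    session_dir = SCRATCHPAD_ROOT / session_id
    session_dir.mkdir(parents=True, exist_ok=True)
    path = session_dir / f"chunk-{chunk_id:04d}.json"
    payload = {"chunk_id": chunk_id, "saved_at": time.time(), "result": result}
    path.write_text(json.dumps(payload, indent=2))
    return path

def load_completed_chunks(session_id: str) -> dict:
    """On resume, discover which chunks already finished and skip them."""
    session_dir = SCRATCHPAD_ROOT / session_id
    done = {}
    if session_dir.exists():
        for f in sorted(session_dir.glob("chunk-*.json")):
            payload = json.loads(f.read_text())
            done[payload["chunk_id"]] = payload["result"]
    return done
```

On restart, dispatch would call `load_completed_chunks(session_id)` first and only re-process the chunks missing from the returned map.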
Implementation
- Create scratchpad directory structure
- Modify dispatch logic to save intermediate findings
- Add CLI commands:
rlm scratchpad list/view/promote/prune
- Add auto-pruning cron job or hook
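The auto-pruning step above might look like this, whether invoked from cron or as a post-finalization hook. The function name and the mtime-based staleness check are assumptions; only the 7-day retention window comes from the proposal.

```python
import shutil
import time
from pathlib import Path

SCRATCHPAD_ROOT = Path("/rlm/scratchpad")
RETENTION_SECONDS = 7 * 24 * 3600  # keep finalized sessions for 7 days

def prune_stale_sessions(root: Path = SCRATCHPAD_ROOT, now: float = None) -> list:
    """Delete session directories untouched for longer than the retention window."""
    now = time.time() if now is None else now
    pruned = []
    if not root.exists():
        return pruned
    for session_dir in root.iterdir():
        if not session_dir.is_dir():
            continue
        if now - session_dir.stat().st_mtime > RETENTION_SECONDS:
            shutil.rmtree(session_dir)
            pruned.append(session_dir.name)
    return pruned
```

Using directory mtime keeps the check cheap; a stricter variant could read a finalization marker file per session so in-flight sessions are never pruned regardless of age.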
Impact
- Enables resumable long-running analyses
- Better debugging (see agent reasoning)
- Captures useful intermediate results
Effort
1-2 days
Related
Context Engineering Pipeline from 'Everything is Context' paper (arxiv 2512.05470)