Reusable project memory, search, and context bundle engine for Codex and Claude Code.
Large projects slow down when every session has to reload too much context.
Harnessing keeps high-value project memory small, searchable, and reusable by combining:
- project docs (including memento docs)
- derived memory entries
- transcript-derived memories
- compact context bundles
Concretely, Harnessing:
- ingests project docs into a local SQLite/FTS index
- derives reusable memory entries from status, testing, troubleshooting, and memento docs
- ingests transcript files into searchable memory entries
- emits `lean`, `work`, and `deep` context bundles with token-aware budgets
- prefers reusable memory over duplicate raw doc sections
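The ingestion step above can be pictured with a minimal SQLite/FTS sketch. The table and column names here are illustrative only, not Harnessing's actual schema:

```python
import sqlite3

# Illustrative schema: an FTS5 virtual table holding doc sections.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE sections USING fts5(doc, heading, body)")
conn.executemany(
    "INSERT INTO sections VALUES (?, ?, ?)",
    [
        ("STATUS.md", "Current Status", "diagnostics requestId propagation landed"),
        ("TESTING.md", "Smoke Test", "run init, ingest, stats, then search"),
    ],
)

# Full-text query, ordered by FTS5's built-in relevance rank.
rows = conn.execute(
    "SELECT doc, heading FROM sections WHERE sections MATCH ? ORDER BY rank",
    ("diagnostics",),
).fetchall()
print(rows)  # -> [('STATUS.md', 'Current Status')]
```

The same pattern extends to derived memory entries and transcript memories: each becomes a row in a searchable virtual table rather than raw text reloaded every session.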
Harnessing is already good enough for session recovery after a fresh /new start.
Current level:
- durable enough to recover project direction and current status
- not yet fully automatic or self-healing
Still needed for stronger continuity:
- startup presets per consumer repo
- stronger transcript and tool-write promotion
- query aliases and better retrieval guidance
- consumer-side auto-invoke workflows
Consumer Repo Artifacts
-> Ingestion (sections, derived memories, transcript memories)
-> Retrieval (search, context, bundle)
-> Agent Consumption (Codex, Claude Code)
Harnessing starts as a CLI-first core engine because that keeps the memory layer easy to verify and easy to embed later. A CLI-first core is:
- testable one step at a time
- immediately callable from local agent workflows
- a clean foundation for later HTTP APIs, hooks, and editor integrations
Harnessing is currently Windows-first.
- development and verification start on Windows
- macOS and Linux are planned later
- the core stays portable where practical, but non-Windows support is not yet a delivery target
python src/harnessing/cli.py init
python src/harnessing/cli.py ingest
python src/harnessing/cli.py stats
python src/harnessing/cli.py search "document delta"
python src/harnessing/cli.py context "diagnostics requestId" --limit 3 --mode work
python src/harnessing/cli.py bundle "backend actions externalevent" --limit 4 --mode lean
python src/harnessing/cli.py transcript ingest --source <path>

| Mode | Use | Goal |
|---|---|---|
| `lean` | session startup | minimum token cost |
| `work` | active implementation | balanced detail |
| `deep` | debugging and review | fuller context |
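A token-aware budget per mode can be sketched as a greedy packer over ranked items. The budget numbers and function name here are hypothetical, chosen only to illustrate why `lean` stays cheap and `deep` stays fuller:

```python
# Hypothetical per-mode character budgets; Harnessing's real limits may differ.
MODE_BUDGETS = {"lean": 4_000, "work": 12_000, "deep": 32_000}

def select_items(ranked_items: list[str], mode: str) -> list[str]:
    """Greedily pack already-ranked items until the mode's budget is spent."""
    budget = MODE_BUDGETS[mode]
    selected: list[str] = []
    used = 0
    for item in ranked_items:
        if used + len(item) > budget:
            break  # stop before blowing the budget for this mode
        selected.append(item)
        used += len(item)
    return selected
```

Under this sketch, the same ranked result list yields a shorter bundle in `lean` mode than in `deep` mode, which is the behavior the table above describes.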
Each bundle reports:
- selected item count
- char usage
- estimated token count
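The estimated token count is typically derived from character usage. A common heuristic (an assumption here, not necessarily Harnessing's exact formula) is roughly four characters per token for English prose:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; always report at least 1.
    return max(1, len(text) // 4)

bundle_text = "derived memory: diagnostics requestId flows through backend actions"
print(len(bundle_text), estimate_tokens(bundle_text))
```

Reporting both characters and an estimate lets consumers sanity-check bundle cost before pasting it into a model context.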
- package: `src/harnessing`
- docs: `docs/`
- local runtime state: `.harnessing/`
ontology-for-cm is the proving-ground consumer for Harnessing.
The working model is:
ontology-for-cm proves a useful pattern
-> Harnessing generalizes it
-> ontology-for-cm consumes it again
- Documentation System
- Architecture
- Platform Support
- Dual-Track Operating Model
- Consumer Integration Contract
- Transcript Ingestion Spec
- Session Restart Workflow
- Smoke Test
- Transcript Ingestion Test
- strengthen transcript parsing and memory promotion
- add tool-write and edit-event ingestion
- add machine-readable context bundle export
- keep token usage low while improving retrieval quality
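For the machine-readable export item on the roadmap, one plausible shape is a small JSON document. The field names below are illustrative guesses, not a committed format:

```python
import json

# Hypothetical export shape for a context bundle (roadmap item).
bundle = {
    "query": "backend actions externalevent",
    "mode": "lean",
    "items": [
        {"source": "memory", "title": "externalevent handler status", "chars": 220},
    ],
    "estimated_tokens": 55,
}
print(json.dumps(bundle, indent=2))
```

A structured export like this would let consumer-side auto-invoke workflows feed bundles to agents without re-parsing CLI text output.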