This repository is an early open-source MVP.
Contributions are welcome, but changes should preserve the current project shape:
- runbook semantics live in metadata, not in hardcoded runner branches
- adapter normalization lives in adapter metadata, not in runbook files
- replayability and inspectability matter more than cleverness
- benchmark and check should stay green
Read these first:
- `README.md`
- `docs/architecture.md`
- `docs/runbook-spec.md`
- `docs/tool-adapter-spec.md`
- `docs/open-source-readiness.md`
Good first contribution areas:
- add new replayable fixture cases
- add hard cases that stress selector ambiguity
- improve metadata validation in `scripts/check.mjs`
- improve documentation clarity
- add new adapter normalization metadata for future mock operations
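To give a feel for what "improve metadata validation" might look like, here is a minimal sketch of a validation helper in the style `scripts/check.mjs` could use. The required field names (`runbook`, `match`) are invented for illustration and are not the real selector schema; check `docs/runbook-spec.md` for the actual format.

```javascript
// Sketch of a metadata validation helper. The required fields below
// are hypothetical examples, not the real selector metadata schema.
function validateSelectorMetadata(selector) {
  if (typeof selector !== "object" || selector === null) {
    return ["selector metadata must be a JSON object"];
  }
  const errors = [];
  // Hypothetical required fields, for illustration only.
  for (const field of ["runbook", "match"]) {
    if (!(field in selector)) {
      errors.push(`missing required field: ${field}`);
    }
  }
  return errors;
}

// A selector missing its "match" field fails validation.
console.log(validateSelectorMetadata({ runbook: "order-task-missing" }));
```

Returning a list of errors rather than throwing on the first one keeps check output readable when several metadata files are broken at once.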
Contribution areas that need more caution:
- changing runbook metadata format
- changing evidence finding taxonomy
- changing benchmark scoring semantics
- adding write or repair behavior
Please preserve these invariants:
- runbook selection comes from `runbooks/*.selector.json`
- execution order comes from `runbooks/*.execution.json`
- conclusion logic and report wording come from `runbooks/*.decision.json`
- raw-response normalization comes from `adapters/*/*.normalization.json`
- cross-source derived evidence comes from `evidence-policies/*.json`
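As a concrete illustration of the first boundary, a selector file might look roughly like this. The field names here are invented for illustration, not the real schema; `docs/runbook-spec.md` is authoritative.

```json
{
  "runbook": "order-task-missing",
  "match": {
    "symptom": "task not found",
    "sources": ["orders", "tasks"]
  }
}
```

The point of the boundary is that the runner reads this file to pick a runbook; it never hardcodes the mapping in a branch.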
If a change breaks one of these boundaries, explain why in the PR or issue.
Run these before proposing changes:
npm run check
npm run benchmark

Optional sanity checks:
npm run demo:order-task-missing
npm run check:demo-fail

When adding a runbook, keep the repository consistent:
- add `runbooks/<name>.yaml`
- add `runbooks/<name>.selector.json`
- add `runbooks/<name>.execution.json`
- add `runbooks/<name>.decision.json`
- add or reuse adapter normalization metadata for every referenced operation
- add at least one fixture case under `fixtures/cases/`
- make sure `npm run check` and `npm run benchmark` still pass
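A fixture case is what makes a runbook replayable. For illustration, a minimal case file under `fixtures/cases/` might look roughly like the sketch below; every field name here is hypothetical, so match the existing cases in the repository rather than this shape.

```json
{
  "name": "order-task-missing-basic",
  "runbook": "order-task-missing",
  "responses": {
    "orders.get_order": { "status": "open" },
    "tasks.find_task": { "found": false }
  },
  "expected_findings": ["task_missing"]
}
```

Recording the raw responses alongside the expected findings lets the benchmark replay the case without any live systems.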
When adding a new operation:
- choose the adapter directory under `adapters/`
- add `<operation>.normalization.json`
- make sure the operation name matches what runbook execution metadata references
- update fixtures so the new operation has replayable response data if it is exercised
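For illustration, a hypothetical `<operation>.normalization.json` could map raw adapter response fields onto the normalized shape the runner expects. The field names and path syntax below are invented, not the real spec; see `docs/tool-adapter-spec.md` for the actual format.

```json
{
  "operation": "get_order",
  "fields": {
    "status": "raw.order.state",
    "updated_at": "raw.order.last_modified"
  }
}
```

Keeping this mapping in adapter metadata, not in runbook files, is what preserves the second invariant above.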
A good contribution should make these easy to review:
- what changed
- which layer changed: runbook, adapter, evidence policy, runner, or docs
- whether new findings or operations were introduced
- whether `npm run check` and `npm run benchmark` passed
This repository is not accepting changes that turn the MVP into:
- a production operator
- a write-enabled self-healing system
- an unrestricted SQL or shell agent
- a vendor-specific closed integration demo