Find contradictions in your company's knowledge. Fix them. Train a model on the fixes.
Your docs say one thing. Your wiki says another. Slack says a third.
Ody scans your documentation, finds the contradictions, and gives you a health score. When you fix a finding, it becomes training data for a model that actually knows your company.
Runs locally. Your data stays on your machine.
```
$ npx ody-refine ./docs/

ody refine · 47 files · 1m 42s

Health: 62/100
3 contradictions · 5 warnings · 4 info

Contradiction  Rate limits: 1,000 req/min in API docs vs 500 in handbook
Stale          "Weekly design sync" last mentioned 4 months ago
Time bomb      SLA with Acme Corp expires in 14 days, no renewal doc
```
npm (recommended):

```sh
npm install -g ody-refine
```

Or run without installing:

```sh
npx ody-refine ./docs/
```

Individual packages (if you want to build on top of Ody):

```sh
npm install @useody/platform-core   # Types, interfaces, providers
npm install @useody/detectors       # Detection engine
npm install @useody/export          # JSONL, HTML reports
npm install @useody/feedback        # Correction signals, preference pairs
```

| Detector | What it catches |
|---|---|
| Contradictions | API docs say 1,000 req/min, handbook says 500 |
| Staleness | "Weekly sync" last mentioned 4 months ago |
| Duplicates | PTO policy in wiki AND handbook with different rules |
| Time bombs | Contract expires in 14 days, no renewal doc |
| Undocumented | "We dropped Python support" only exists in Slack, never written down |
Detectors are composable functions: `(nodes, edges, llm?) -> Detection[]`. You can write your own.
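A minimal custom detector might look like the sketch below. It follows the `(nodes, edges, llm?) -> Detection[]` shape described above, but the `Node`, `Edge`, and `Detection` field names here are illustrative assumptions, not the actual `@useody/platform-core` types:

```typescript
// Illustrative types -- the real interfaces live in @useody/platform-core.
interface Node {
  id: string;
  text: string;
}

interface Edge {
  from: string;
  to: string;
}

interface Detection {
  type: string;
  severity: "info" | "warning" | "critical";
  nodeIds: string[];
  message: string;
}

// Example detector: flag documents that mention a deadline-like phrase
// but have no outgoing edge to any follow-up document.
function deadlineDetector(nodes: Node[], edges: Edge[]): Detection[] {
  const hasFollowUp = new Set(edges.map((e) => e.from));
  return nodes
    .filter((n) => /expires|deadline|due by/i.test(n.text))
    .filter((n) => !hasFollowUp.has(n.id))
    .map((n) => ({
      type: "time-bomb",
      severity: "warning",
      nodeIds: [n.id],
      message: `Deadline mentioned in ${n.id} with no linked follow-up`,
    }));
}
```

Because a detector is just a pure function over the graph, it can be unit-tested without running a scan.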
Pull from where your team works:
| Source | Auth | Status |
|---|---|---|
| Local files (Markdown, PDF, text) | none | Stable |
| Notion | OAuth | Stable |
| Slack | OAuth | Stable |
| Confluence | OAuth | Stable |
| Jira | OAuth | Stable |
| Linear | API key | Stable |
| Gmail | OAuth | Stable |
| GitHub | OAuth | Stable |
| Microsoft Teams | OAuth | Stable |
```sh
ody-refine connect notion   # Authenticate, then scan
ody-refine connect slack    # Pick channels to scan
ody-refine scan ./docs/     # Local files
```

Ody needs an LLM for detection. Bring your own.

Local with Ollama (free, nothing leaves your machine):

```sh
ollama pull nomic-embed-text && ollama pull qwen2.5:7b
ody-refine ./docs/
```

Cloud via OpenRouter (any model):

```sh
OPENROUTER_API_KEY=sk-or-... ody-refine ./docs/
```

Also works with OpenAI, Anthropic, or any OpenAI-compatible endpoint.
Add a knowledge health check to pull requests:

```yaml
# .github/workflows/knowledge-check.yml
name: Knowledge Health
on:
  pull_request:
    paths: ['docs/**', '*.md']
jobs:
  refine:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ufukkaraca/ody-refine/.github/actions/refine@main
        with:
          path: './docs'
          fail-on: 'critical'
          min-health: '70'
```

CLI mode:

```sh
ody-refine ci ./docs/ --fail-on critical --min-health 70
```

Every resolved finding becomes a preference pair (DPO format). Export them:
```sh
ody-refine export --format sft -o training-data.jsonl
```

Use the JSONL to fine-tune your own model. The more you fix, the smarter your model gets.
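As a rough sketch of what consuming the export might look like: each line of the JSONL file is one JSON object, so a preference pair can be read line by line. The `prompt`/`chosen`/`rejected` field names below are the conventional DPO pair layout, assumed here for illustration rather than taken from the actual `@useody/export` schema:

```typescript
// Assumed DPO-style pair shape -- check the real export schema before relying on it.
interface PreferencePair {
  prompt: string;   // the question the finding raised
  chosen: string;   // the version you confirmed when resolving it
  rejected: string; // the contradicting version you rejected
}

// Parse an exported .jsonl string: one JSON object per non-empty line.
function parsePairs(jsonl: string): PreferencePair[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as PreferencePair);
}
```

Most fine-tuning toolchains accept exactly this one-object-per-line layout, so the file can usually be fed in without further conversion.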
This is a monorepo. Each package is published independently to npm.

```
ody-refine/
  apps/refine/    CLI (ody-refine)
  packages/
    core/         Types, store, providers
    detectors/    Detection engine
    eval/         Benchmarks
    export/       Reports, JSONL
    feedback/     Signals, rewards
    training/     Training orchestrator
  examples/       Sample docs, demo scripts
```

```sh
git clone https://github.com/ufukkaraca/ody-refine.git
cd ody-refine
pnpm install
pnpm verify   # lint + typecheck + test
```

See CONTRIBUTING.md for code style, adding detectors, and PR process.
Ody -- built by Rodyr