TruthGit tracks claims and their verification status across multiple AI validators. It classifies the type of disagreement, not just the level of agreement. Part of the LumenSyntax research project.
```
$ pip install truthgit
$ truthgit init
$ truthgit claim "Water boils at 100°C at sea level" --domain physics
$ truthgit verify
[OLLAMA:HERMES3] 95% - Accurate under standard atmospheric pressure
[OLLAMA:NEMOTRON] 94% - True at 1 atm, varies with altitude
✓ PASSED Consensus: 95%
Verification: a7f3b2c1
```

| Capability | Description |
|---|---|
| Consensus tracking | Records what multiple AI validators say about a claim |
| Disagreement classification | Distinguishes between logical errors, philosophical mysteries, and knowledge gaps |
| Audit trail | Immutable history of verification decisions |
| Governance API | Risk-aware verification for autonomous agents (proceed/abort/escalate/revise) |
| Pattern-based heuristics | Basic detection of common fallacy patterns |

| Misconception | Reality |
|---|---|
| "Verifies absolute truth" | It verifies AI consensus, not objective truth. If all AIs share the same bias, consensus can be wrong. |
| "Proves facts" | Cryptographic proofs verify data integrity (the hash matches), not factual accuracy. |
| "Deep semantic analysis" | Fallacy detection uses regex pattern matching, not NLU. |
| "Prevents AI hallucinations" | Multiple AIs can hallucinate the same thing if trained on similar data. |
```
LLM A says: "X is true" (90%)
LLM B says: "X is true" (85%)
LLM C says: "X is true" (88%)
→ Consensus: PASSED
```

But if all LLMs learned from the same biased data, consensus can be a consensus of error.
TruthGit detects inconsistency between validators, not ground truth.
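This limitation is easy to see numerically. The sketch below is a hypothetical mean-and-threshold check, not the TruthGit API: when every validator inherits the same training bias, the average confidence stays high and the check passes even though the shared answer is wrong.

```python
# Minimal sketch of a mean-and-threshold consensus check (hypothetical,
# not the TruthGit API). If every validator inherits the same bias, the
# mean stays high and the check passes even when the shared answer is wrong.

def consensus(confidences, threshold=0.80):
    """Average the validators' confidences and compare to a pass threshold."""
    avg = sum(confidences) / len(confidences)
    return avg, avg >= threshold

# Three validators trained on similar data agree on the same (possibly wrong) claim.
avg, passed = consensus([0.90, 0.85, 0.88])
print(f"Consensus: {avg:.0%} -> {'PASSED' if passed else 'FAILED'}")  # Consensus: 88% -> PASSED
```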
Most verification systems ask: "How much agreement?" TruthGit also asks: "What type of disagreement?"
| Type | Meaning | Action |
|---|---|---|
| PASSED | Validators agree above threshold | Claim recorded as verified |
| LOGICAL_ERROR | One validator shows fallacy patterns | Flag outlier, recalculate |
| MYSTERY | Legitimate philosophical disagreement | Preserve all positions |
| GAP | Needs external data or human judgment | Escalate |
For philosophical questions, disagreement isn't failure. It's information.
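A rough illustration of how such a classification could work, using only the spread of validator scores. This heuristic and its thresholds are assumptions for the sake of the example, not TruthGit's actual logic:

```python
# Hypothetical heuristic: classify disagreement by score spread.
# Thresholds and rules here are illustrative assumptions, not TruthGit's logic.
from statistics import mean, pstdev

def classify(scores, threshold=0.80, outlier_gap=0.30):
    """Map validator confidence scores to one of the four outcome types."""
    avg = mean(scores)
    if avg >= threshold and pstdev(scores) < 0.05:
        return "PASSED"          # tight agreement above threshold
    if max(scores) - min(scores) > outlier_gap:
        return "LOGICAL_ERROR"   # one validator far from the rest: flag the outlier
    if avg >= 0.40:
        return "MYSTERY"         # genuine split: preserve all positions
    return "GAP"                 # low confidence everywhere: escalate

print(classify([0.95, 0.94, 0.93]))  # PASSED
print(classify([0.92, 0.90, 0.40]))  # LOGICAL_ERROR
print(classify([0.55, 0.60, 0.50]))  # MYSTERY
```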
```
# Core
pip install truthgit

# With local validation (Ollama, no API keys)
pip install truthgit[local]

# With cloud APIs (optional)
pip install truthgit[cloud]
```

```
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull models
ollama pull llama3
ollama pull mistral

# Verify
truthgit validators --local
```

| Command | Description |
|---|---|
| `truthgit init` | Initialize a new repository |
| `truthgit claim "..." --domain x` | Create a claim to verify |
| `truthgit verify [--local]` | Verify with ontological consensus |
| `truthgit status` | Show repository status |
| `truthgit log` | Show verification history |
| `truthgit validators` | Show available validators |
| `truthgit safe-verify "..." --risk high` | Governance check with risk profile |
Built for autonomous agents: instead of asking "Is this true?", it answers "Should the agent act?"
| Profile | Confidence Required | Use Case |
|---|---|---|
| `low` | >= 60% | Internal logging, non-critical ops |
| `medium` | >= 75% | Standard operations (default) |
| `high` | >= 90% | Financial, medical, irreversible actions |
| Action | Meaning |
|---|---|
| `proceed` | Safe to act |
| `abort` | Do not act |
| `escalate` | Requires human judgment |
| `revise` | Claim needs reformulation |
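The two tables can be combined into a single decision sketch. The helper name and the outlier/data flags are hypothetical, not the TruthGit governance API:

```python
# Decision sketch combining the risk-profile thresholds and the four actions.
# Helper names and flags are hypothetical, not the TruthGit governance API.
THRESHOLDS = {"low": 0.60, "medium": 0.75, "high": 0.90}

def decide(consensus, risk="medium", needs_data=False, has_outlier=False):
    """Pick an action for an agent given consensus level and risk profile."""
    if needs_data:
        return "escalate"   # GAP: needs external data or human judgment
    if has_outlier:
        return "revise"     # claim or validator set needs reformulation
    return "proceed" if consensus >= THRESHOLDS[risk] else "abort"

print(decide(0.95, risk="high"))   # proceed
print(decide(0.80, risk="high"))   # abort
```

Note that the same 80% consensus that passes at `medium` risk aborts at `high` risk: the threshold tracks the cost of being wrong, not the claim itself.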
```
truthgit safe-verify "Aspirin reduces fever" --risk medium
truthgit safe-verify "Delete all records older than 30 days" --risk high --context "Production database"
```

```python
from truthgit.governance import GovernanceEngine, RiskProfile

engine = GovernanceEngine()
result = engine.verify(
    claim="User requested account deletion",
    risk_profile=RiskProfile.HIGH,
    domain="user_data",
    context="GDPR compliance request",
)

if result.action == "proceed":
    execute_deletion()
elif result.action == "escalate":
    notify_human_reviewer(result.reason)
```

```
curl -X POST https://api.lumensyntax.com/api/governance/verify \
  -H "Content-Type: application/json" \
  -d '{"claim": "Water boils at 100°C", "risk_profile": "low", "domain": "physics"}'
```

```python
from truthgit import TruthRepository

repo = TruthRepository()
repo.init()

claim = repo.claim(content="E=mc²", domain="physics")

verification = repo.verify(
    verifier_results={
        "HERMES3": (0.95, "Mass-energy equivalence"),
        "NEMOTRON": (0.92, "Einstein's equation"),
    },
    claim_content="E=mc²",
    claim_domain="physics",
)

print(f"Consensus: {verification.consensus.value:.0%}")
print(f"Type: {verification.ontological_consensus.disagreement_type}")
```

TruthGit includes an MCP server for Claude Desktop integration.
Add to `~/.config/claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "truthgit": {
      "command": "python",
      "args": ["-m", "truthgit.mcp_server"]
    }
  }
}
```

| Tool | Description |
|---|---|
| `truthgit_verify_claim` | Multi-validator consensus check |
| `truthgit_prove` | Generate integrity hash |
| `truthgit_governance_verify` | Governance verification for agents |
| `truthgit_search` | Search verified claims |
| `truthgit_status` | Repository status |
```
.truth/
├── objects/
│   ├── cl/   # Claims
│   ├── ax/   # Axioms
│   ├── ct/   # Contexts
│   └── vf/   # Verifications
├── refs/
│   ├── consensus/
│   └── perspectives/
└── HEAD
```
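Assuming the store follows git-style content addressing (hash of the content, with a two-character fan-out directory; this is an assumption for illustration, not documented behavior), an object's path could be derived like this:

```python
# Sketch of content-addressed storage under .truth/ (assumption: git-style
# addressing via SHA-256 of the content with a two-character fan-out
# directory; TruthGit's actual scheme may differ).
import hashlib
from pathlib import Path

PREFIXES = {"claim": "cl", "axiom": "ax", "context": "ct", "verification": "vf"}

def object_path(kind: str, content: str) -> Path:
    """Return where an object of the given kind would live in the store."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return Path(".truth") / "objects" / PREFIXES[kind] / digest[:2] / digest[2:]

print(object_path("claim", "Water boils at 100°C at sea level"))
```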
```
        ┌─────────────┐
        │    Claim    │
        │  + Domain   │
        └──────┬──────┘
               │
      ┌────────┼────────┐
      ▼        ▼        ▼
  ┌──────┐ ┌──────┐ ┌──────┐
  │ LLM A│ │ LLM B│ │ LLM C│
  └──────┘ └──────┘ └──────┘
               │
               ▼
     ┌──────────────────┐
     │  Classification  │
     │  PASSED/MYSTERY  │
     │    GAP/ERROR     │
     └──────────────────┘
```
TruthGit is part of the LumenSyntax research project investigating structural failure modes in AI safety systems. The related paper documents how evaluation frameworks can systematically misclassify epistemic humility as failure.
Paper: *The Instrument Trap: Why Identity-as-Authority Breaks AI Safety Systems*
Benchmark: instrument-trap-benchmark
MIT
Rafael Rodriguez · Independent Researcher · lumensyntax.com