A framework for verifiable reasoning with language models.
Updated Mar 25, 2026 - Python
A framework for detecting hallucinations in LLM chain-of-thought reasoning. Features synthetic data corruption, transformer-based classifiers, a Streamlit UI, and a FastAPI backend.
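The "synthetic data corruption" this repo describes can be sketched roughly as follows: perturb a numeric value inside an otherwise-correct reasoning step to manufacture a labeled hallucination example for classifier training. This is an illustrative sketch, not the repo's actual pipeline; the function name and corruption scheme are assumptions.

```python
import random
import re

def corrupt_step(step: str, rng: random.Random):
    """Perturb one number in a reasoning step to create a synthetic
    hallucination. Returns (possibly-corrupted step, label), where
    label 1 = corrupted, 0 = left clean. Hypothetical scheme."""
    matches = list(re.finditer(r"\d+", step))
    if not matches:
        return step, 0  # no numbers to corrupt; keep as a clean example
    m = rng.choice(matches)                # pick one numeric span
    delta = rng.choice([-2, -1, 1, 2])     # small, nonzero perturbation
    corrupted = step[:m.start()] + str(int(m.group()) + delta) + step[m.end():]
    return corrupted, 1

rng = random.Random(0)
clean = "Then 12 * 3 = 36, so the total is 36."
corrupted, label = corrupt_step(clean, rng)
```

Pairs of `(clean, 0)` and `(corrupted, 1)` examples like these could then be fed to a transformer-based classifier as a supervised hallucination-detection task.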
ΑΩ+: Speculative reasoning verification for LLMs. Models reasoning as a potential field over token embeddings, directly linking to attention geometry. Implements tetralectic logic & Φ-harmonic stability. Efficient batched implementation with Hutchinson trace estimation, masking for variable-length sequences, and MPS/CPU hybrid for Apple Silicon.
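The Hutchinson trace estimation mentioned above is a standard randomized technique: the trace of a matrix is estimated from matrix-vector products alone, using tr(A) = E[zᵀAz] for probe vectors z with E[zzᵀ] = I. A minimal NumPy sketch (the function name and sample count are illustrative, not this repo's API):

```python
import numpy as np

def hutchinson_trace(matvec, dim, num_samples=64, seed=None):
    """Estimate tr(A) using only the map v -> A @ v.

    Uses Rademacher (+/-1) probes, which have E[z z^T] = I and
    low variance for this estimator."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        total += z @ matvec(z)                 # one sample of z^T A z
    return total / num_samples

# Usage: a small symmetric matrix with known trace 2 + 3 = 5.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
est = hutchinson_trace(lambda v: A @ v, dim=2, num_samples=2000, seed=0)
```

This matters when A (for instance a Hessian or Jacobian of a large model) is never materialized and only matrix-vector products are affordable; batching the probes is the usual efficiency trick.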