
Bryan — Independent ML Researcher


Building behavioral auditing and alignment tools for LLMs. Try the demo →


Tools

rho-eval — Drop-in behavioral audit for any LLM. Measures 8 dimensions, no internet required. Apple Silicon MLX + CUDA + CPU.

```bash
pip install rho-eval

# Audit any model
rho-eval Qwen/Qwen2.5-7B-Instruct --behaviors all

# One-command behavioral repair
rho-surgery Qwen/Qwen2.5-7B-Instruct -o ./repaired-7b/

# Diagnose expression gaps in base models
rho-unlock diagnose Qwen/Qwen2.5-1.5B --behaviors mmlu,arc,truthfulqa

# Train a modular adapter (zero knowledge damage)
snap-on train --model Qwen/Qwen2.5-1.5B --mode logit --save_dir ./adapter
```
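The "logit-space adapter" idea behind `snap-on` can be illustrated with a self-contained toy sketch: instead of fine-tuning base weights, a small additive correction is applied to the output logits, so detaching the adapter recovers the base model exactly. The names and shapes below are illustrative only, not the snap-on API.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
vocab = 8

# Stand-in for a frozen base model's logits at one position.
base_logits = rng.normal(size=vocab)

# A logit-space adapter learns only an additive delta; the base
# weights are never touched (zero knowledge damage).
adapter_delta = np.zeros(vocab)
adapter_delta[3] = 2.0  # boost expression of a target token

p_base = softmax(base_logits)
p_adapted = softmax(base_logits + adapter_delta)

# Removing the adapter restores the base distribution bit-for-bit.
print(p_base[3], p_adapted[3])  # adapted probability of token 3 is higher
```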

noethersolve — Conservation law monitoring, discovery, and scientific auditing. 20 tools across physics, genetics, and unsolved mathematics. No ML required at runtime.

```bash
pip install noethersolve
```

```python
# Monitor vortex dynamics conservation laws
from noethersolve import VortexMonitor

# Audit drug interactions
from noethersolve import audit_drug_list

# Verify number theory conjectures
from noethersolve import verify_goldbach, verify_collatz
```
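To make the number-theory checks concrete, here is a self-contained plain-Python sketch of what a Goldbach / Collatz verifier does (this is an illustration, not the noethersolve implementation):

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small ranges."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n):
    """Goldbach's conjecture: every even n > 2 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def collatz_reaches_one(n, max_steps=10_000):
    """Collatz conjecture: iterating n -> n/2 (even) or 3n+1 (odd) reaches 1."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False

print(all(goldbach_holds(n) for n in range(4, 1000, 2)))   # True
print(all(collatz_reaches_one(n) for n in range(1, 1000)))  # True
```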

Papers

  1. Rho-Guided SFT — Post-training repair of calibration damage in LLMs. DOI: 10.5281/zenodo.18854943
  2. Grassmann Geometry of Behavioral Entanglement — Surgery compresses subspaces, doesn't rotate them. DOI: 10.5281/zenodo.18865861
  3. Behavioral Phase Transitions — Geometric scaffolding precedes behavioral emergence. DOI: 10.5281/zenodo.18865198
  4. Confidence Cartography — Teacher-forced probability as a false-belief sensor. DOI: 10.5281/zenodo.18703505
  5. CF90 — Knowledge-preserving SVD compression for LLMs. DOI: 10.5281/zenodo.18718545
  6. Contrastive Pretraining Teaches Format Generation, Not Behavioral Knowledge — 5% injection breaks the behavioral wall at 7M. DOI: 10.5281/zenodo.18870555
  7. Small Language Models Already Know More Than They Can Say — The 41% universal constant and the generation bottleneck. DOI: 10.5281/zenodo.18895248
  8. Snap-On Communication Modules — Logit-space adapters that preserve base model knowledge. DOI: 10.5281/zenodo.18902617
  9. STEM Truth Oracle — Log-probability ranking reveals and corrects scale-invariant factual biases. DOI: 10.5281/zenodo.19005729
  10. Breaking Frozen Priors — Teaching LLMs to discover conservation laws from numerical simulation. Three-phase pipeline achieves Spearman rho = 0.932 physics ranking. DOI: 10.5281/zenodo.19017290
  11. NoetherSolve Toolkit — 20 conservation law monitoring, discovery, and scientific auditing tools across physics, genetics, and mathematics. 777 tests, 275 oracle-verified facts. DOI: 10.5281/zenodo.19029880

Repos

| Repo | What it does |
| --- | --- |
| knowledge-fidelity | Behavioral auditing + alignment toolkit. PyPI. |
| noethersolve | Autonomous scientific discovery: 20 tools across physics, genetics, and unsolved math. 275/275 facts taught to LLMs. PyPI. Dashboard. |
| confidence-cartography | Teacher-forced confidence as a false-belief sensor. |
| intelligent-svd | Knowledge-preserving SVD compression for LLMs. |
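The theme of `intelligent-svd` and the CF90 paper, knowledge-preserving low-rank compression, rests on truncated SVD of weight matrices. A generic NumPy sketch of the underlying operation (not the CF90 algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for an LLM weight matrix: rank-16 structure plus small noise.
W = rng.normal(size=(64, 16)) @ rng.normal(size=(16, 64)) \
    + 0.01 * rng.normal(size=(64, 64))

def compress(W, rank):
    """Keep only the top-`rank` singular directions of W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank reconstruction

W16 = compress(W, 16)
rel_err = np.linalg.norm(W - W16) / np.linalg.norm(W)
# Storage drops from 64*64 = 4096 floats to 2*64*16 + 16 = 2064,
# while rank 16 captures nearly all of W.
print(rel_err)
```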

Pinned

  1. knowledge-fidelity (Public) — Behavioral auditing & repair toolkit for LLMs. Measures 8 dimensions via confidence probes. Python.

  2. NoetherSolve (Public) — Find conserved quantities that LLMs don't recognize — then close those gaps with targeted adapters. Python.