Last modified: 2026-03-21 14:47 EDT
Batchalign3 is a CLI-first tool. The primary interface is the `batchalign3`
command-line program. Python is used internally as a stateless ML model server;
it is not a public API surface.
Install with uv:

```sh
uv venv .venv
uv pip install batchalign3
```

All processing is done through the CLI:
```sh
batchalign3 transcribe input/ output/ --lang eng
batchalign3 morphotag input/ output/ --lang eng
batchalign3 align input/ output/ --lang eng
```

For programmatic use from Python, call the CLI via subprocess:
```python
import subprocess

subprocess.run([
    "batchalign3", "morphotag",
    "input/", "output/",
    "--lang", "eng",
], check=True)
```

`batchalign.providers` re-exports the typed worker payload surface:
- `BatchInferRequest`
- `BatchInferResponse`
- `InferResponse`
- `InferTask`
- `Worker`
- `JSONValue`
Use those types when you build provider adapters or tests around the worker contract.
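For example, an adapter or test module would import these names from the public re-export surface rather than any private module. The guarded import below is a minimal sketch so the snippet also loads in environments where batchalign3 is not installed; in real adapter code you would import unconditionally:

```python
# Prefer the public re-export surface over private modules
# such as batchalign.worker._* (which are not stable API).
try:
    from batchalign.providers import (
        BatchInferRequest,
        BatchInferResponse,
        InferResponse,
        InferTask,
        Worker,
        JSONValue,
    )
    HAVE_BATCHALIGN = True
except ImportError:
    # Package not installed; adapter code paths should be skipped.
    HAVE_BATCHALIGN = False
```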
- Do not build new integrations against `batchalign.worker._*`.
- Do not assume undocumented inference modules are stable public API.
- The old `BatchalignPipeline`, `WhisperEngine`, `CHATFile`, `Document`, `ParsedChat`, `run_pipeline()`, and `batchalign.compat` surfaces have been removed. Use the CLI instead.
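The replacement pattern for the removed Python entry points is a thin subprocess wrapper around the CLI. This is a minimal sketch; the `build_command` and `run_task` names are ours, and only the subcommands documented above (`transcribe`, `morphotag`, `align`) are assumed:

```python
import subprocess
from typing import List

def build_command(task: str, input_dir: str, output_dir: str, lang: str) -> List[str]:
    """Assemble the batchalign3 argv for one processing task."""
    return ["batchalign3", task, input_dir, output_dir, "--lang", lang]

def run_task(task: str, input_dir: str, output_dir: str, lang: str = "eng") -> None:
    """Invoke the CLI; raises CalledProcessError on a non-zero exit code."""
    subprocess.run(build_command(task, input_dir, output_dir, lang), check=True)

# Building the argv is a pure step, so it can be inspected or tested
# without actually invoking the binary:
cmd = build_command("morphotag", "input/", "output/", "eng")
```

Keeping the argv construction separate from the invocation makes the wrapper easy to unit-test without batchalign3 on the PATH.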
The following Python APIs were removed as part of the pyo3 slimdown. The Rust server now owns all CHAT manipulation natively, making these redundant:
- `batchalign.compat` — BA2 compatibility shim (`CHATFile`, `Document`, `BatchalignPipeline`). Was deprecated and emitting warnings. Use the CLI.
- `batchalign.pipeline_api` — `run_pipeline()`, `LocalProviderInvoker`, `PipelineOperation`. The Rust server handles pipeline orchestration.
- `batchalign.inference.benchmark` — `compute_wer()` Python wrapper. WER scoring is available via `batchalign3 compare`.
- `batchalign_core.ParsedChat` — opaque CHAT handle with mutation methods. The Rust server uses `ChatFile` directly; no Python-side handle is needed.
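`batchalign3 compare` is the supported scoring path. If you need a quick local sanity check of the kind of number `compute_wer()` used to report, word error rate is just word-level edit distance divided by reference length. The function below is an illustrative stand-in using the standard definition, not the removed API:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over word sequences,
    # keeping only the previous row to stay O(len(hyp)) in memory.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(
                prev[j] + 1,              # deletion
                cur[j - 1] + 1,           # insertion
                prev[j - 1] + (r != h),   # substitution (free on match)
            ))
        prev = cur
    return prev[-1] / max(len(ref), 1)

# wer("the cat sat", "the cat sat") -> 0.0
```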