Releases: gleachkr/Lectic
v0.0.3
Added
- Native "thinking" support for Anthropic and Gemini models, including Anthropic adaptive thinking plus configurable `thinking_effort` and `thinking_budget` settings.
- Serialized thought blocks in conversation logs, with LSP hover and folding support.
- Configurable icons for tools, hooks, and inline attachments, which are preserved in the XML and displayed in editor UIs.
- `limit` field for `exec` tools to cap the total characters returned from stdout and stderr.
- New `install.sh` script for streamlined installation and automated install-test coverage in CI.
- New `lectic-task` plugin at `extra/plugins/lectic-task` providing a task-board TUI and persistent task tracking.
- Migrated `lectic-skills` to a plugin at `extra/plugins/skills` and added new built-in skills for local serving and TUI checklists.
- Support for `icon` and `name` attributes on `:attach` and `:hook` directives to control UI presentation.
- Autocomplete and diagnostics for `use:` references in the YAML header.
- `user_first` hook alias for running hooks before the first user message.
- `prompt` override field for `exec` tools.
- Support for loading `output_schema` from external files via `file:` paths.
- Edit/create-via-editor functionality in the task plugin.
- Plugin loading from `LECTIC_RUNTIME` directories.
- Configurable labels for autocomplete entries.
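As a sketch of how the new thinking and exec settings compose in a header (the field names `thinking_effort`, `thinking_budget`, and `limit` come from the notes above; the surrounding structure and values are illustrative and may differ from Lectic's actual schema):

```yaml
interlocutor:
  name: Sage
  provider: anthropic
  # Thinking controls; values here are illustrative.
  thinking_effort: medium
  thinking_budget: 2048
  tools:
    - exec: ./scripts/build.sh
      # New in v0.0.3: cap total characters returned from stdout/stderr.
      limit: 4000
```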
Changed
- Non-zero hook exits now abort the current run by default unless `allow_failure: true` is set.
- Consolidated CLI output controls around `--format`; legacy `-s`, `-S`, and `-q` remain as deprecated aliases.
- Updated the context compaction recipe and documentation to reflect recent pattern improvements.
- Improved tool-calling stability and signature handling for Gemini models.
- Refined timeout error messages to be less redundant.
- `extra/tab_complete` now handles subcommand discovery more robustly.
- Exec tools now always return the exit code.
- More flexible schemas for exec tool parameters.
- Bumped default models.
- Performance improvements for Neovim syntax highlighting.
- Performance improvements for LSP on long lectics.
- Improved spacing in streamed output.
- Better sanitization of user and assistant messages.
Fixed
- Gemini tool calling signatures for complex parameter sets.
- `lectic parse` flag handling and reconstruction correctness.
- Restored `-i`/`--inplace` as a boolean flag so `lectic -if <file>` and `lectic -i -f <file>` work correctly.
- LSP folding and symbols for large configuration headers.
- Cursor positioning in `lectic.nvim`.
- Reversed boilerplate flag.
Removed
- Legacy `think` tool.
v0.0.2
Added
- Structured output support via `interlocutor.output_schema`, including validation and backend support.
- Expanded JSON Schema support for structured outputs, plus a dedicated reference page and cookbook recipe.
- Top-level `imports` support for modular config, including recursive imports, optional imports, directory imports (`<dir>/lectic.yaml`), and cycle errors.
- `local:./...` and `local:../...` support for import paths, plus `file:local:...` forms in external prompt fields.
- Named reusable definitions for hooks, env maps, and sandboxes via `hook_defs`, `env_defs`, `sandbox_defs`, and `use:` references.
- New hook lifecycle events and aliases: `assistant_final`, `assistant_intermediate`, `tool_use_post`, `run_start`, `run_end`, and `error` (alias of a failing `run_end`).
- Additional hook environment context, including run metadata, token usage, tool duration, and serialized tool success/error payloads.
- `init_sql` support for the SQLite tool to initialize missing databases.
- `MESSAGE_TEXT` in macro expansion environment variables.
- Programmable macro argument completions in the LSP via inline lists, `file:` sources, or `exec:` sources.
- `LECTIC_RUNTIME` environment variable to override recursive custom subcommand discovery roots.
- Experimental Nix sandbox plugin at `extra/plugins/nix-sandbox`.
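The modular-config features might combine along these lines (a hedged sketch: `imports`, `local:` paths, `sandbox_defs`, and `use:` are named in the notes, but the exact nesting of the sandbox reference is illustrative):

```yaml
# Top-level modular config: imports are recursive, may be optional,
# and a directory import resolves to <dir>/lectic.yaml.
imports:
  - local:./shared

# A named, reusable sandbox definition, referenced with use: below.
sandbox_defs:
  bwrap: ./sandbox/bubblewrap.sh

interlocutor:
  name: Sage
  tools:
    - exec: ./scripts/cleanup.sh
      sandbox:
        use: bwrap
```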
Changed
- Release artifacts now use a stable `<tag>-<platform>-<arch>` naming scheme and include platform tarballs for package manager distribution.
- Subcommand discovery now follows symlinks.
- Release pipeline now publishes `SHA256SUMS` and supports optional CI-driven Homebrew and AUR publishing.
- Linux tarball artifacts are built with Bun on Ubuntu runners and validated with an `ldd` dependency sanity check.
- Provider naming: `chatgpt` is now `codex`.
- Custom subcommand discovery now searches `$LECTIC_CONFIG` and `$LECTIC_DATA` recursively before checking `$PATH`.
- Bash completion discovery now mirrors runtime subcommand resolution and loads adjacent `.completion.bash` files for discovered commands.
- Expanded docs for hooks, configuration imports, external prompt path handling, structured outputs, and custom subcommands.
Fixed
- Header validation and LSP diagnostics for `use:` references on hooks, sandbox, and tool/interlocutor `sandbox` fields.
- Minor header validation and documentation link fixes.
- Added CI link checking and corrected link-check invocation flags.
v0.0.1
Added
- A2A support: `lectic a2a` server mode (JSON-RPC + SSE) exposing configured agents.
- Monitoring endpoints for agents and tasks.
- Optional bearer token auth and resubscribe support.
- A2A client tool support, including `tasks/get` polling.
- Built-in directives: `:attach`, `:env`, `:fetch`, `:merge_yaml`, and `:temp_merge_yaml`.
- Recursive macros with `pre`/`post` phases for more advanced automation.
- `lectic run` subcommand plus bash completion.
- Global `sandbox` configuration key and improved Bubblewrap wrapper.
Changed
- `lectic script` now bundles JS/TS/JSX/TSX scripts (with `https://` imports) and supports React TSX/JSX via bundling.
- MCP updates:
  - Streamable HTTP support includes custom headers and OAuth.
  - New `only` allowlist support for server tools.
  - Removed deprecated MCP transport code paths.
- Documentation overhaul, including expanded automation/tooling references and
cookbook recipes.
Removed / Deprecated
- Deprecated `:cmd` inline attachments in favor of `:attach`.
- Removed legacy A2A aliases/endpoints.
Fixed
- Agent tool and backend stability fixes (including error handling).
- SQLite tool hardening and safety improvements.
- Macro/directive expansion edge cases.
- LSP correctness fixes (highlighting, completions, code actions).
v0.0.0-beta8
New Features
- Custom subcommands: Lectic now supports git-style custom subcommands. Any executable named `lectic-command` in your `PATH`, config, or data directory can be invoked directly as `lectic command`.
- Usage and cost tracking: new `lectic usage` subcommand tracks token consumption and calculates costs. It features time-granularity graphs, filtering by model, and automatic price data retrieval from llm-prices.com.
- Inline hook content: hooks with `inline: true` can now inject content directly into the conversation. For `user_message` events, output is provided as context to the LLM; for `assistant_message`, it is appended to the response, enabling automated multi-turn workflows.
- Hook control headers: hooks can return `LECTIC:KEY:VALUE` headers (like `LECTIC:reset` or `LECTIC:final`) to clear conversation context or prevent the assistant from responding to hook output.
- Scoped hooks: hooks can now be defined at the interlocutor or individual tool level, providing granular control over automation and human-in-the-loop confirmations.
- Worktree sandboxing: new `lectic-worktree` subcommand/sandbox wrapper creates isolated git worktrees for tool execution. This allows LLMs to perform git operations and file edits in a safe, namespaced environment without polluting your main working tree.
- Parser utility: new `lectic parse` subcommand converts `.lec` files into JSON or YAML structures and can reconstruct original files from that representation, facilitating programmatic analysis.
- Script runner: new `lectic script` command runs ES module files using Lectic's internal Bun runtime. It supports hashbang execution, making it easier to write portable hooks and custom tools in TypeScript or JavaScript.
- LSP jump-to-definition: the LSP now supports navigating to the definitions of kits and interlocutors referenced in the document header, including those defined in workspace or system configuration files.
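The custom-subcommand mechanism can be exercised with a trivial script (the name `lectic-hello` is hypothetical; any `lectic-<command>` executable found on `PATH` or in the config/data directories works the same way):

```shell
#!/bin/sh
# Hypothetical custom subcommand. Save this as `lectic-hello` somewhere on
# your PATH (or in Lectic's config or data directory), mark it executable,
# and it becomes invocable as `lectic hello <args>`.
echo "hello from a custom subcommand: $*"
```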
Improvements
- A batteries-included nix installation: `nix profile add github:gleachkr/Lectic#lectic-full` now installs lectic and the full set of subcommands from the `extra/` directory.
- Overhauled tab completion: the bash completion script has been rewritten for better reliability and now supports extensible completion for custom subcommands.
- OpenAI tool compatibility: significant improvements to OpenAI tool serialization, including support for `strict: true` parameters and more robust JSON schema handling for complex object shapes.
- Enhanced hook context: `assistant_message` hooks now receive the full conversation body on standard input. New environment variables like `TOOL_USE_DONE` and `TOKEN_USAGE_CACHED` allow hooks to react more intelligently to the assistant's state.
- LSP folding and hovers: tool call and inline attachment folds now display descriptive text when collapsed. Hovering over tool call XML blocks provides a cleaner, more readable preview of the contents.
- Macro flexibility: macros can now expand into first-class Lectic directives, allowing for more complex abstractions and reusable conversation patterns.
- `:cmd` directive behavior: newlines are now stripped from `:cmd` directives to prevent unintended command fragmentation when directives are line-wrapped in a text editor.
- Documentation generation: documentation is now automatically consolidated into a single-file markdown version (`llms-full.md`) for easier consumption by LLMs and search tools.
Bug Fixes
- Path resolution: fixed a bug in diagnostic and link handling where paths starting with `~/` were not correctly expanded to the user's home directory.
- Subcommand discovery: corrected the globbing logic used to find custom subcommands, ensuring they are reliably detected across different platforms and installation methods.
- Tool calling loop: addressed an edge case where the tool calling loop could fail to terminate properly or omit the final assistant response.
- Header validation: improved the accuracy of diagnostics for YAML header fields, particularly around required properties in kit and interlocutor definitions.
- Tool call serialization: ensured that tool call and attachment blocks are correctly parsed as HTML blocks to avoid interference with markdown formatting in certain providers.
v0.0.0-beta7
New Features
- LSP server: experimental stdio LSP (`lectic lsp`) with completions, hovers, diagnostics, folding, document and workspace symbols, code actions, go-to-definition, and semantic tokens.
- YAML header autocomplete: completions for interlocutor mappings (`interlocutor`/`interlocutors`), including top-level fields like `name`, `prompt`, `provider`, `model`, `thinking_effort`, and `thinking_budget`; completions for tool kinds, kits, agents, and native tool types. Model names are sourced at runtime from the provider backends.
- Directive autocomplete: directives, including macros and agent calls, now autocomplete and provide information on hover.
- YAML diagnostics: YAML header validation is now shared between runtime and LSP. The LSP surfaces missing prompts, type errors (e.g., non-numeric `max_tokens`), invalid hooks, kit validation issues, unknown interlocutor properties, and parse/merge errors from system and workspace configs.
- Link diagnostics and hovers: links to local files get diagnostics for missing paths, empty globs, and non-absolute `file://` URLs, plus hover previews of text file heads. Hovers on `<tool-call>` and `<inline-attachment>` blocks show argument and result contents, using `contentMediaType` and result mimetypes for syntax hints.
- Folding is now provided by the LSP, so it will be available in any editor that supports LSP folding.
- LSP code actions and symbols: code actions include "Add header" when a document lacks frontmatter, plus other small helpers.
- Workspace symbol search finds macro and interlocutor definitions across Lectic files.
- Document symbol search indexes messages and events in the current conversation.
- Tool kits: define shared tool sets once under a top-level `kits` array and reuse them with `kit: name` inside interlocutor `tools`. Kits are validated, referenced in LSP completions, and expanded transitively at runtime.
- Model discovery: new `lectic models` subcommand lists available models for providers with configured API keys (Anthropic, Gemini, OpenAI Responses, OpenRouter).
- Thinking controls: new interlocutor fields `thinking_effort` and `thinking_budget` integrate with Anthropic, Gemini (including Gemini 3), and the OpenAI Responses API for structured reasoning and thinking-budget configuration.
- MCP media and resources: MCP tools now support text, resource links, embedded resources, and media blocks (image and audio). Non-text results become URIs or `data:` URLs that Lectic converts into attachments for Anthropic, Gemini, OpenAI, and other providers.
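The kit feature described above might look like the following in a header (a sketch: `kits`, `kit:`, `tools`, and `readonly` are from these notes; the specific tool entries and names are hypothetical):

```yaml
# Define a shared tool set once, at top level...
kits:
  - name: db-tools
    tools:
      - sqlite: ./project.db
        readonly: true

interlocutor:
  name: Analyst
  # ...and reuse it by name inside the interlocutor's tools.
  tools:
    - kit: db-tools
```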
Improvements
- Cached `:cmd[...]` attachments: `:cmd` directives now emit inline `<inline-attachment>` blocks at the top of the next assistant message. These cache command results so prior `:cmd` calls are not re-executed on later runs and can be previewed in the LSP.
- Tool call serialization: `ToolCallResult` now records a mimetype and serializes `<result type="…">…</result>`. String-typed schema properties can carry `contentMediaType`, which is propagated into `<tool-call>` argument tags so editors can infer languages for previews.
- JSON Schema handling: schemas now support `null` and `anyOf`, including a heuristic ordering when deserializing to prefer object and array shapes over raw strings. Gemini's `parametersJsonSchema` handling is simplified and made more robust.
- Exec tool behavior: single-line commands are parsed into argv with simple quote handling, without a shell. Multi-line `exec:` scripts are written to temporary files and executed via their shebang. CLI output is sanitized by normalizing CRLF, stripping ANSI escape sequences, and collapsing `\r`-based progress updates.
- Exec tool environment: confirmation and sandbox paths go through environment expansion, including `LECTIC_INTERLOCUTOR`. When a schema is provided, named parameters become environment variables; when absent, the call uses an `argv` array for positional arguments.
- SQLite tool: new `readonly: true` option opens the database in read-only mode. The `query` parameter is tagged with `contentMediaType: text/sql` for better editor tooling. Existing behavior around atomic transactions and YAML results remains.
- MCP configuration: MCP tool specs now use `name` for the server name, aligning with other tools. A new optional `exclude` list lets you hide specific server-provided tools by name.
- Glob expansion for local files expands `~` to `$HOME` and avoids double expansion when concrete matches are found.
- OpenAI SDK and caching: OpenAI usage moves to the newer SDK, adds `prompt_cache_retention: "24h"` to both Responses and Chat backends, and deduplicates common backend logic.
- Anthropic backend: attachments from tools and user messages are merged into user messages in the order Anthropic expects. The `thinking_budget` field is wired into Anthropic's `thinking` configuration.
- Agent tool results: agent tool outputs are sanitized so raw tool result XML does not leak back into the calling interlocutor unless explicitly requested.
- Editor plugins: the Neovim plugin gains a minimal LSP client integration and defers folding to the LSP. The VS Code extension likewise drops its custom folding provider and relies on the external LSP server.
- Configuration and merging: header merging normalizes cases where the active `interlocutor` shares a name with an entry in `interlocutors`, combining their properties. Kits, hooks, and macros participate in the same merged config hierarchy as interlocutors.
Bug Fixes
- Thinking output: fixed issues where Gemini thought segments could leak into the visible output, and where non-thought text could be omitted around thinking segments.
- Exec tool output: fixed handling of `\r` in stdout so progress lines and overwrites no longer leave confusing artifacts in tool outputs. ANSI escape sequences and OSC codes are stripped rather than fed to the model.
- Content references: fixed environment variable expansion for non-URL links and normalized behavior around `$PWD`, `file://` URLs, and globs. The LSP now reports empty glob matches and missing local paths explicitly.
- MCP non-text results: fixed handling of MCP tool calls that return resource URIs with non-text mimetypes, preventing crashes and ensuring binary results can be consumed as attachments.
- OpenAI PDFs: corrected `file_data` construction for PDFs in the `openai/chat` backend so PDF attachments are delivered via the expected `data:` URLs.
- Gemini parameters: allowed `null` where the provider's JSON Schema permits it, avoiding spurious type errors, and adjusted model listing logic to filter only models that support `generateContent`.
- SQLite readonly mode: fixed a bug where the `readonly` option could be ignored or misapplied; read-only databases now behave as documented.
- LSP XML parsing: tightened XML parsing for hovers over tool calls and inline attachments, fixing cases where malformed blocks could crash hover handling.
- LSP worker paths: fixed inconsistencies in worker script resolution between tests and production builds, so the parser worker loads reliably in both environments.
- Diagnostics stability: addressed a diagnostics bug where messages could be duplicated or mapped to incorrect ranges, and added debouncing to avoid diagnostic thrash while typing.
- Agent tool output sanitation: ensured that agent tool results do not surface raw `<tool-call>` XML back to the user by default, avoiding confusing nested transcripts.
v0.0.0-beta6
New Features
- Hook system: define hooks on user_message, assistant_message, and error events; support multiple events via an array; assistant_message hook environment includes LECTIC_INTERLOCUTOR.
- Conversation control: new :reset[] directive to reset conversation context from that point forward.
- Exec tool: customizable parameter schemas; when a schema is provided, parameters are passed via environment variables to the command or script; add per-tool timeoutSeconds.
- SQLite tool: load SQLite extensions via an extensions field (string or array); expand environment variables in database paths.
- MCP: reuse MCP connections across tools; namespace tool names per server; add a server-specific list_resources tool when server_name is set.
- Configuration: load a per-project lectic.yaml from the current directory.
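A hook definition under the beta6 event model might be sketched like this (hedged: the event names `user_message`, `assistant_message`, and `error`, and array-valued event subscriptions, come from the notes above, but the `on`/`do` key names are illustrative guesses, not Lectic's documented schema):

```yaml
interlocutor:
  name: Sage
  hooks:
    # One hook can subscribe to several events via an array.
    - on: [user_message, assistant_message]
      do: exec:./hooks/log-turn.sh
    # A separate hook for the error event.
    - on: error
      do: exec:./hooks/notify.sh
```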
Improvements
- SQLite: return results as YAML for readability; trim trailing newlines to avoid empty statements; iterate results more robustly; correct size checks and improve error messages.
- Exec tool: expand environment variables in exec, sandbox, and confirm paths; expose LECTIC_INTERLOCUTOR to tool environments; confirmation displays the full parameter object when using schemas.
- MCP: expand environment variables in confirm and sandbox paths; detect and error on duplicate server names.
- OpenAI/Ollama: use “system” role for the developer message for broader compatibility (e.g., Ollama) while remaining compatible with OpenAI; bump OpenAI dependencies.
- Gemini: add a linebreak after native interpreter calls in generated content.
- Macros: include attributes from :macro[] directives in the expansion environment; fix predicate name for macro matching.
- Configuration and loader: expose LECTIC_FILE when using --inplace/--file; allow an optional env parameter to loader functions; document new environment variables and timeoutSeconds.
- Core structure: make Lectic a class to clarify responsibilities and enable reuse.
- Editor plugins: Neovim disables spellcheck inside LecticBlocks.
- Documentation: migrate to Quarto; add build automation; fix links and warning callouts; ignore build artifacts; fill in missing content.
- Maintenance: add markdown tests; clean up type idioms; improve .gitignore.
Bug Fixes
- SQLite: fix newline-related parsing that could introduce empty statements; correct result size checks; improve handling of unreadable BLOB columns.
- MCP: prevent ambiguous routing by erroring on duplicate server_name.
- OpenAI/Ollama: compatibility fix for role handling.
- Tests and typing: fix array element check; minor type lint fixes.
- Typo fixes and small cleanups across the codebase.
v0.0.0-beta5
New Features
- Macro system with `:macro[...]` expansion, documentation, and tests.
- New `:aside[...]` directive for a one-round interlocutor switch.
- File and URL fragments, including PDF page and range fragments (via `pdf-lib`).
- Environment variable expansion in `file:` and `exec:` sources, applied before globbing.
- Extract images as content references.
- SQLite tool: support multi-statement scripts; make calls atomic.
- Exec tool: support inline scripts; add `env` field for per-tool vars.
- Pass `LECTIC_*` directories into subprocess environments.
- New sandboxing options: nsjail script and nix/docker sandbox.
- New `anthropic/bedrock` backend.
- Streaming HTTP support for responses.
Improvements
- OpenAI: use the Responses API by default; default model set to `gpt-5`.
- Preserve OpenAI encrypted reasoning blocks.
- Anthropic: add `nocache` option for compatibility with older models.
- Configuration and paths:
  - Use standard config paths; add XDG utilities.
  - Add `--Include` flag; support `LECTIC_CONFIG` env variable.
  - Improve header merging; merge main interlocutor with a same-named entry in the `interlocutors` array.
- Validate tool calls locally against JSON schemas.
- Editor plugins use the document's directory as the working directory.
- Performance: optimize folding for long responses.
- Refactors and maintenance:
  - Factor out loader; add utilities for merging and replacement.
  - Strengthen types; address TypeScript warnings.
  - Dependency bumps.
- Defaults: relax default `max_tokens` (set explicitly if needed).
Bug Fixes
- Fix header merging logic.
- Fix fallthrough in interlocutor validation.
- Fix globbing when environment variables are present.
- Fix intermittent timing issues; improve script cleanup.
- Do not include empty `userParts` on the Anthropic backend.
- Handle calls to non-existent tools properly.
- Handle/deserialize hallucinated tools without crashing.
- Remove unnecessary regex flag and leftover “memories” reference.
- Typo fixes and small cleanup across the codebase.
v0.0.0-beta4
New Features
- Agent Tool: a new "agent" tool has been added, allowing one interlocutor to call another as a tool. This enables more complex, multi-agent workflows within a single conversation.
- Parallel Tool Calling: Lectic can now execute multiple tool calls in parallel. This should lead to faster responses when multiple tools are needed. Combines well with the Agent tool.
- External Configuration: prompts and other configuration fields can now be loaded from external files or the output of commands using `file:` and `exec:` prefixes. For example: `prompt: file:./my-prompt.txt` or `prompt: exec:./my-prompt.sh`.
- Configurable `max_tool_use`: you can now set a `max_tool_use` limit in the header to specify how many times an LLM can use tools in a single turn.
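Putting the new external-configuration and tool-limit fields together in a header (the `prompt: file:` / `prompt: exec:` forms are taken verbatim from the notes above; the limit value is illustrative):

```yaml
interlocutor:
  name: Sage
  # Load the prompt from a file, or from a command's output.
  prompt: file:./my-prompt.txt
  # Alternatively: prompt: exec:./my-prompt.sh
  # Cap how many times the LLM may use tools in a single turn.
  max_tool_use: 5
```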
Improvements
- Tool Usage:
  - The `exec` tool is now less "noisy" and provides better error feedback, including `stderr`.
  - The `sqlite` tool now gives the LLM a better view of the database schema.
- Editor Integration: the Neovim plugin now has a keybinding to cancel in-progress text generation.
- General:
  - The application now shuts down more gracefully if you interrupt it (e.g., with Ctrl+C).
  - Each interlocutor in a multi-party conversation now gets their own private set of tools for better security and isolation.
  - The integration with Ollama has been updated to use their OpenAI-compatible API. This should make it more maintainable.
Bug Fixes
- Numerous fixes to the tool-calling implementations for the Anthropic, Gemini, and OpenAI backends, improving reliability.
- Fixed a bug where line breaks could be lost when using Anthropic's native web search tool.
- Correctly handle system prompts when using the `openai/responses` provider.
- Resolved an issue where clearing the conversation history with `-Hi` could fail.
v0.0.0-beta3
version bump
v0.0.0-beta2
Bump version