Releases: editor-code-assistant/eca
0.132.0
- `variantsByModel` entries now support an optional `:api` filter (string or vector) to restrict variant matching by provider API type.
- Custom commands and skills now expose `:arguments` metadata inferred from their content. Previously they always reported empty arguments.
- Native `skill-create`, `plugin-install`, and `plugin-uninstall` commands now declare `:required true` on their arguments in the command listing.
- Fix documentation link in `--help` output.
- Add built-in variants for `deepseek-v4-pro` (`none`, `high`, `max`).
- Improve skill tool description to resolve file paths and scripts mentioned in skill content against the skill's base directory.
- Add built-in `eca-info` skill that exposes the running ECA's information for debugging ECA itself.
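As a hypothetical illustration of the new `:api` filter (only `variantsByModel` and the `api` key come from the notes; the surrounding variant-entry shape and names are assumptions), a variant restricted to Responses-API providers might look like:

```json
{
  "variantsByModel": {
    ".*gpt-5.*": [
      { "name": "high", "api": "openai-responses" }
    ]
  }
}
```

Since the filter accepts a string or a vector, a value such as `"api": ["openai-responses", "anthropic"]` would presumably match either API type.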
0.131.1
- MCP tools that return image content blocks (e.g. an MCP image-generation/edit server) now render those images in the chat UI as `ChatImageContent` and replay them back to the LLM as image inputs on follow-up turns when the model supports vision. Implemented for `openai-responses` (synthetic user-role `input_image` after the `function_call_output`) and `anthropic` (mixed text + image blocks inside `tool_result.content`); `openai-chat` and `ollama` continue to receive a text placeholder until a parallel pattern is implemented there.
- Bugfix: MCP tools without a `description` (which the MCP spec marks optional) no longer break Anthropic chat requests with `tools.<n>.custom.description: Input should be a valid string`. Missing/empty descriptions now fall back to the tool's `title`, then to a synthesized `MCP tool: <name>` string at the MCP boundary so all providers receive a non-null string.
- Hook `matcher` now supports an object form keyed by tool selectors with per-tool `argsMatchers`; legacy string regex matchers remain supported.
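A hypothetical sketch of the object matcher form (the hook shape and key names other than `matcher` and `argsMatchers` are assumptions, not taken from the release notes):

```json
{
  "hooks": {
    "preToolCall": [
      {
        "matcher": {
          "eca__shell_command": { "argsMatchers": { "command": "npm .*" } }
        },
        "command": "notify-send 'npm command about to run'"
      }
    ]
  }
}
```

A legacy string regex such as `"matcher": "eca__.*"` should continue to work unchanged.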
0.131.0
- Add `${plugin:root}` dynamic interpolation for plugin-provided config, hooks, commands, and rules.
- Support the OpenAI built-in `image_generation` tool via the Responses API for capable models (`openai/gpt-5.x`, `openai/gpt-4.1`). Generated images are streamed back as a new `image` chat content carrying `mediaType` + base64. Available on every provider whose api is `openai-responses` (`openai`, `github-copilot` responses-api models, `litellm`, custom providers).
- Support image edits via the same `image_generation` tool: assistant-generated images now persist to chat history so subsequent turns can iterate ("now make it blue, smaller, with a red border"), resumed chats replay previously generated images, and clients can attach source images either by file path (the existing `FileContext`) or via a new inline base64 `ImageContext` request type for clients without filesystem access.
- Fix inline completion crash when renewing auth tokens before completion requests. #437
- Bugfix: avoid a `Divide by zero` crash in chat auto-compact when models.dev reports `0` for a model's context/output limits (e.g. `openai/chatgpt-image-latest`); such limits are now normalized to `nil`, and `auto-compact?` skips models without a known positive context window.
- Bugfix: image edit follow-up turns no longer fail on the OpenAI Responses API when prior generations are replayed; generated images are now persisted under a dedicated `image_generation_call` history role and replayed as a user-role `input_image` data URL across providers.
- Support regex patterns in markdown agent tool entries (e.g. `eca__shell_command(npm run .*)`) for fine-grained tool approval, currently limited to `eca__shell_command`.
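A hedged sketch of a markdown agent using the regex form (the frontmatter keys shown are assumptions for illustration; only the `eca__shell_command(npm run .*)` entry syntax comes from the notes):

```markdown
---
name: npm-runner
tools:
  - eca__shell_command(npm run .*)
---

You may only run npm scripts; any other shell command requires approval.
```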
0.130.1
- Add configurable skill paths and recursive directory loading for configured rules, commands, and skills; local skills are also discovered from `.agents/skills`. #423
- Bugfix: `/prompt-show` now renders the system instructions as plain text instead of a raw `{:static :dynamic}` map.
- Fix the MCP OAuth success/error page never rendering in the browser by sending the local-callback HTML response before invoking caller-supplied `on-success`/`on-error` hooks; previously the MCP callback synchronously stopped the Jetty server inside `on-success`, racing the response flush.
0.130.0
- Improve rules with frontmatter filters, condition variables, path-scoped loading, enforcement support, and clearer documentation. #222
- `preToolCall` hooks now receive `approval: "ask"` for the native `ask_user` tool so notification hooks (e.g. matching `.approval == "ask"`) also fire when the chat is blocked waiting for a user answer, regardless of trust mode.
- New `${cmd:some command}` dynamic string backend that resolves to the trimmed stdout of a shell command, useful for password managers like `pass` or `op`. On macOS the user's interactive shell `$PATH` is queried once so GUI-launched ECA picks up Homebrew, `mise`/`asdf` shims, etc. #430
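For example, a provider API key could be resolved from `pass` at startup instead of being stored in plain text (the config keys below are an assumption for illustration; only the `${cmd:...}` syntax comes from the notes):

```json
{
  "providers": {
    "openai": {
      "key": "${cmd:pass show openai/api-key}"
    }
  }
}
```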
0.129.2
0.129.1
0.129.0
- Restore the model used at chat creation when resuming a chat: `chat/open` and the `/resume` slash command now emit `config/updated` to realign the client's selected model to the persisted chat's `:model`, and the next `chat/prompt` prefers that stored model over the agent/global default (stale models still fall through gracefully). #417
- Fix `rewrite` hanging on large files by windowing the inlined file context around the selection instead of sending the whole file; configurable via `rewrite.fullFileMaxLines` (default 2000). #418
- Prefix plugin-sourced commands and skills with their plugin name (`/<plugin-name>:<name>`) to avoid collisions across plugins. When the plugin name and the command/skill name are equal, the prefix is dropped. #420
- Fix empty `.sha256` for the macOS aarch64 release artifact by using `shasum -a 256` (portable across macOS runners) and enabling `pipefail` so silent pipe failures aren't hidden.
- Fix install page `eca-desktop` download buttons navigating to the wrong artifact (e.g. the Linux/x86_64 AppImage leading to `eca-mac-arm64.dmg`), caused by hidden OS/arch panels still intercepting clicks on top of the visible panel; hidden primary tab and OS panels now use `display: none` so their nested `:checked` rules can't re-activate and leak clicks.
- Fix custom tools hanging on Windows by running them through the same platform-aware shell used by `shell_command`, respecting `toolCall.shellCommand`. #421
0.128.2
- Sign and notarize macOS native binaries in release CI.
- Disable the `ask_user` tool for subagents since they run non-interactively. #416
- Fix low-quality chat titles on 3rd-message retitle (e.g. a literal "Understand" on Opus) by flattening the conversation into a single user message so the title model can't mirror prior planning-mode section headers, adding negative rules/examples to the title prompt, and hardening `sanitize-title` to skip a bare leading markdown header when more content follows.
0.128.1
- Fix stale system prompt being reused after switching agent mid-chat by scoping the chat-level prompt cache and the OpenAI Responses `prompt_cache_key` per active agent. #411
- Improve chat title quality on 3rd-message retitle by filtering tool calls, tool results, reasoning and flag entries from the history passed to the title LLM, and by respecting the last compact marker.
- Add the `/model` command, allowing the user to change the model directly from the chat.