This is the command dispatch and execution layer — the bridge between user input and the engines/shell. It's the largest and most complex module so far.
## Architecture

```
dispatch_command()
├── @scrub / @flush → handle_scrub()
├── @efficient → handle_efficient()
├── @sequence → handle_sequence()
│   └── dispatch_command()  [recursive, parallel]
├── @sh → handle_sh()
│   ├── _build_sh_command()
│   ├── subprocess.run()
│   └── _format_artifact_{text,json}()
└── @<model> → handle_ai_interaction()
    └── engine.call()
```
## Critical Issues

### 1. `@sequence` recursively calls `dispatch_command` in threads — no thread safety on engine state
```python
with ThreadPoolExecutor(max_workers=len(step_tasks)) as executor:
    future_to_task = {
        executor.submit(dispatch_command, task): t_idx
        ...
    }
```
If two parallel tasks target the same engine, they'll concurrently mutate `engine.history` (a plain `list`). Individual `append` calls happen to be atomic under CPython's GIL, but the user/assistant append pair inside `call()` is not one atomic operation, so interleaved `call()` executions will corrupt the conversation history:
```
Thread A: engine.history.append(user_A)
Thread B: engine.history.append(user_B)      # interleaved
Thread A: engine.history.append(assistant_A)
Thread B: engine.history.append(assistant_B)
# History is now: [user_A, user_B, assistant_A, assistant_B] — broken pairs
```
Fix: Either add a per-engine lock, or deep-copy engines for parallel tasks, or reject parallel tasks targeting the same engine.
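The per-engine lock option can be sketched as follows. The `Engine` class here is a minimal stand-in with hypothetical names (`history`, `call()`); the key point is holding the lock across the whole user/assistant exchange so the two appends can never interleave:

```python
import threading


class Engine:
    """Minimal stand-in for the engine described above (names hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.history = []
        self._lock = threading.Lock()  # serializes call() per engine

    def call(self, prompt):
        # Hold the lock for the entire exchange, not just each append,
        # so the user/assistant pair stays adjacent in history.
        with self._lock:
            self.history.append(("user", prompt))
            response = f"echo: {prompt}"  # placeholder for the real API call
            self.history.append(("assistant", response))
            return response
```

A coarse per-engine lock serializes calls to the same engine while still letting parallel tasks on *different* engines proceed, which is likely the behavior `@sequence` users expect.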
### 2. Shell injection via `--shell` mode
```python
if parsed.use_shell:
    return parsed.command, True  # passed directly to subprocess with shell=True
```

```python
result = subprocess.run(
    cmd,
    shell=use_shell,  # ← True
    env=os.environ.copy(),
    ...
)
```
When use_shell is True, the raw command string is passed to the system shell. This is by design (the user explicitly passed --shell), but there's no warning, confirmation, or sandboxing. Combined with @sequence which can automate execution, this creates a pipeline where a crafted sequence file could execute arbitrary shell commands without interactive confirmation.
Recommendation: At minimum, add a confirmation prompt for --shell mode, or log a security warning.
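A confirmation guard could look like the sketch below. The helper name and behavior are hypothetical; the idea is to prompt when a TTY is attached and refuse outright in non-interactive contexts such as an automated `@sequence` run:

```python
import sys


def confirm_shell_execution(command, interactive=None):
    """Return True if a raw shell=True command may run (hypothetical helper).

    Non-interactive contexts (e.g. automated @sequence pipelines) are
    refused rather than silently executed.
    """
    if interactive is None:
        interactive = sys.stdin.isatty()
    if not interactive:
        print(f"[!] --shell refused in non-interactive mode: {command!r}")
        return False
    answer = input(f"[!] Run with shell=True? {command!r} [y/N] ")
    return answer.strip().lower() == "y"
```

`handle_sh` would call this before the `subprocess.run(..., shell=True)` branch and skip execution when it returns `False`.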
### 3. `handle_sh` returns `exit_code == 0`, but the artifact is always written

This means `dispatch_command` → `handle_sh` returns `False` for non-zero exit codes, which halts `@sequence` pipelines. But the artifact file is still written before returning. This is actually reasonable behavior, but it's undocumented and could surprise users — a failing `@sh` step writes its artifact, then the sequence halts, leaving a partial artifact trail.
## Significant Issues

### 4. `handle_efficient` argument parsing is ambiguous
```python
if parts[1].lower() in (list(engines.keys()) + ["all"]):
    target = parts[1].lower()
    filename = parts[2] if len(parts) > 2 else None
else:
    target = "all"
    filename = parts[1]
```
If an engine is named the same as a filename (unlikely but possible), or if someone has a persona file named "all.txt" and types @efficient all, it will be interpreted as target "all" with no filename instead of target "all" with filename "all".
More practically: @efficient gemini (forgetting the filename) silently sets filename = None, then hits the check below. But @efficient all without a filename also falls through to filename = None. The error handling works, but the parsing logic is fragile.
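One way to make the parsing explicit is to treat a lone target name as an error instead of guessing. This is a sketch with a hypothetical helper name, not the module's actual code:

```python
def parse_efficient_args(parts, engine_names):
    """Parse '@efficient [<engine>|all] <filename>' unambiguously (sketch).

    parts is the whitespace-split command, e.g. ["@efficient", "all", "p.txt"].
    """
    args = parts[1:]
    if not args:
        raise ValueError("usage: @efficient [<engine>|all] <filename>")
    if len(args) >= 2:
        target, filename = args[0].lower(), args[1]
        if target != "all" and target not in engine_names:
            raise ValueError(f"unknown target {target!r}")
        return target, filename
    # Single argument: a bare target name is a loud error (the user forgot
    # the filename); anything else is a filename with the default target.
    if args[0].lower() == "all" or args[0].lower() in engine_names:
        raise ValueError(f"missing filename after target {args[0]!r}")
    return "all", args[0]
```

With this shape, `@efficient gemini` fails immediately with a usage error instead of silently setting `filename = None`; a file literally named `all` still needs the two-argument form, but that limitation is now explicit rather than accidental.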
### 5. No output when `handle_ai_interaction` writes to file

When `-w` is used, the AI response is only written to file — never printed to console:
```python
if parsed.write_file:
    # ... write to file ...
    safe_print(f"[*] Result saved to '{parsed.write_file}' ...")
else:
    safe_print(f"\n--- {engine.name} ---\n{result}\n")
```
This is probably intentional for pipeline use, but means a user doing @gemini "explain X" -w out.txt gets zero feedback about what the AI said. Consider an option for tee-style output (both console and file).
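A tee option could be as small as the sketch below; the `tee` flag and `emit_result` helper are hypothetical names, and `print` stands in for the module's `safe_print`:

```python
def emit_result(engine_name, result, write_file=None, tee=False):
    """Route an AI response to console, file, or both (sketch).

    tee=True mirrors the written result to the console as well.
    """
    if write_file:
        with open(write_file, "w", encoding="utf-8") as f:
            f.write(result.strip())
        print(f"[*] Result saved to '{write_file}'")
        if not tee:
            return
    print(f"\n--- {engine_name} ---\n{result}\n")
```

This keeps the file-only default for pipeline use while giving interactive users a `--tee`-style escape hatch.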
### 6. `os.fsync` on every file write
```python
with open(out_path, "w", encoding="utf-8") as f:
    f.write(final_out.strip())
    f.flush()
    os.fsync(f.fileno())
```
os.fsync() forces a physical disk write, which is expensive and unusual for a CLI tool. This appears in both handle_ai_interaction and handle_sh. Unless crash recovery of artifacts is a hard requirement, this is unnecessary overhead.
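If durability matters for some artifacts but not all, the sync could be made opt-in. A minimal sketch (the `durable` flag is a hypothetical addition):

```python
import os


def write_artifact(path, content, durable=False):
    """Write an artifact; only force a physical disk write when asked (sketch)."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content.strip())
        if durable:
            f.flush()
            os.fsync(f.fileno())  # expensive: forces the data to disk now
```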
### 7. `_resolve_runner` has a case-sensitivity hack
```python
def _resolve_runner(filename):
    ext = Path(filename).suffix.lower()
    if Path(filename).suffix == ".R":
        ext = ".R"
    return RUNNER_MAP.get(ext)
```
This calls Path(filename).suffix twice and has a special case for .R vs .r. The RUNNER_MAP already has both .r and .R mapped to Rscript, so the .lower() call followed by the override is redundant — .r would already match after lowering.
```python
# Simplified:
def _resolve_runner(filename):
    ext = Path(filename).suffix
    return RUNNER_MAP.get(ext) or RUNNER_MAP.get(ext.lower())
```
## Moderate Issues

### 8. `handle_sh` timeout is hardcoded to 300 seconds
This should be configurable via INI. A data processing script might legitimately need more than 5 minutes, while a simple ls shouldn't wait that long.
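Reading the timeout from the INI with a fallback is a few lines of `configparser`. The `[sh]` section and `timeout` key names are assumptions for illustration:

```python
import configparser

DEFAULT_SH_TIMEOUT = 300  # seconds, used when the INI has no override


def load_sh_timeout(ini_path):
    """Read the @sh timeout from an INI file (section/key names hypothetical)."""
    config = configparser.ConfigParser()
    config.read(ini_path)  # silently skips missing files
    return config.getint("sh", "timeout", fallback=DEFAULT_SH_TIMEOUT)
```

`getint(..., fallback=...)` handles both a missing file and a missing key, so callers always get a usable integer.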
### 9. Stdout/stderr truncation thresholds are hardcoded
```python
max_lines = 50  # stdout
max_lines = 30  # stderr
```
These should be configurable, or at least named constants.
### 10. `_format_artifact_text` outputs Markdown, not plain text

The function name says "text" but the output uses Markdown formatting (`#`, `**`, triple backticks). This is fine functionally, but the naming is misleading.
### 11. `dispatch_command` doesn't handle `@sequence` calling `@sequence`

There's no recursion guard. A sequence file could contain `@sequence -e`, which would open another editor, whose content could contain another `@sequence -e`, creating unbounded recursion (limited only by thread pool exhaustion or stack depth).
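A depth counter threaded through the recursive calls is enough to cap the nesting. This is a toy sketch, not the real dispatcher: `MAX_SEQUENCE_DEPTH` is a hypothetical limit, and `load_sequence_steps` is stubbed to always return another `@sequence`, the pathological case:

```python
MAX_SEQUENCE_DEPTH = 3  # hypothetical cap on @sequence nesting


def load_sequence_steps(command):
    # Stub: a real implementation would read steps from a file or editor.
    # Here every sequence contains another @sequence, the worst case.
    return ["@sequence nested"]


def dispatch_command(command, _depth=0):
    """Toy dispatcher showing a depth guard for nested @sequence (sketch)."""
    if command.startswith("@sequence"):
        if _depth >= MAX_SEQUENCE_DEPTH:
            print(f"[!] @sequence nested deeper than {MAX_SEQUENCE_DEPTH}; refusing")
            return False
        for step in load_sequence_steps(command):
            # Each step inherits an incremented depth, so nesting is bounded.
            if not dispatch_command(step, _depth=_depth + 1):
                return False
        return True
    return True  # non-sequence commands always "succeed" in this sketch
```

The private `_depth` parameter keeps the public call signature unchanged while making runaway nesting fail loudly instead of exhausting the thread pool or the stack.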
## Code Duplication

The stdout and stderr display blocks in `handle_sh` are nearly identical:
```python
# This pattern appears twice with only variable names changed
if stdout.strip():
    display = stdout.rstrip()
    max_lines = 50
    lines = display.splitlines()
    if len(lines) > max_lines:
        print(f"--- stdout (showing first {max_lines}/{len(lines)} lines) ---")
        print("\n".join(lines[:max_lines]))
        print(f"--- (truncated, {len(lines) - max_lines} more lines) ---")
    else:
        print("--- stdout ---")
        print(display)
    print("--- end stdout ---")
```
Extract to a helper:
```python
def _print_stream(label: str, content: str, max_lines: int):
    content = content.rstrip()
    if not content:
        return
    lines = content.splitlines()
    if len(lines) > max_lines:
        print(f"--- {label} (showing first {max_lines}/{len(lines)} lines) ---")
        print("\n".join(lines[:max_lines]))
        print(f"--- (truncated, {len(lines) - max_lines} more lines) ---")
    else:
        print(f"--- {label} ---")
        print(content)
    print(f"--- end {label} ---")
```
Summary Table
| Severity |
Issue |
Location |
| 🔴 Critical |
Parallel tasks corrupt shared engine history |
handle_sequence() |
| 🔴 Critical |
Unguarded shell=True in automated pipelines |
handle_sh() |
| 🟠 High |
No recursion guard for nested @sequence |
handle_sequence() → dispatch_command() |
| 🟠 High |
Ambiguous argument parsing |
handle_efficient() |
| 🟡 Medium |
No console output when -w flag is used |
handle_ai_interaction() |
| 🟡 Medium |
Hardcoded timeout (300s) |
handle_sh() |
| 🟡 Medium |
os.fsync on every write |
handle_ai_interaction(), handle_sh() |
| 🟡 Medium |
Duplicated stdout/stderr display code |
handle_sh() |
| 🟢 Low |
Misleading function name (_format_artifact_text → Markdown) |
_format_artifact_text() |
| 🟢 Low |
Redundant .R handling |
_resolve_runner() |