Conversation
Tuned OpenBrain recall queries to use type-filtered searches (decision, plan, convention) instead of generic keyword matching. Increased content truncation from 300 to 600 chars. Added roadmap/priorities query. Renamed MCP server from openbrain to core in mcp.json. Co-Authored-By: Virgil <virgil@lethean.io>
Multistage Dockerfile (FrankenPHP + Octane + Horizon + Reverb) with
docker-compose wiring 6 services: app, mariadb, qdrant, ollama, redis,
traefik. All data mounts to .core/vm/mnt/{config,data,log}. Traefik
routes *.lthn.sh with self-signed TLS. Setup script handles first-run
bootstrap including cert generation and embedding model pull.
Co-Authored-By: Virgil <virgil@lethean.io>
Move agent persona definitions from agents/ to prompts/personas/. Add plan templates (bug-fix, code-review, feature-port, new-feature, refactor) from the PHP PlanTemplateService into prompts/templates/. Co-Authored-By: Virgil <virgil@lethean.io>
New slash commands for the agentic system:
- /dispatch — dispatch subagent to sandboxed workspace
- /status — show workspace status (running/blocked/completed)
- /recall — search OpenBrain memories
- /scan — find actionable Forge issues
Fix MCP URL: api.lthn.sh/mcp → mcp.lthn.sh (subdomain routing).
Co-Authored-By: Virgil <virgil@lethean.io>
/dispatch → /core:dispatch, /status → /core:status, etc. Prevents collision with built-in Claude commands. Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
max_concurrent: 2 limits parallel agent processes. Agent identities: cladius (dispatch/review), athena (worker), charon (worker/review, inactive), clotho (worker, inactive). Co-Authored-By: Virgil <virgil@lethean.io>
…sona
New commands:
- /core:review — review agent workspace output, diff, merge options
- /core:sweep — batch dispatch audit across all Go repos
Updated /core:dispatch with persona argument support.
Co-Authored-By: Virgil <virgil@lethean.io>
Plugin commands derive namespace from directory (claude/code/) not the name field. Commands now: /code:dispatch, /code:status, /code:recall, /code:review, /code:sweep, /code:scan. Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
…tically Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
… modes Co-Authored-By: Virgil <virgil@lethean.io>
New scan templates:
- dependency-audit: find code rolling its own vs using framework
- dead-code: unreachable functions, unused exports, orphaned files
- test-gaps: untested functions, missing error path coverage
- api-consistency: endpoint naming, response shapes, error formats
- doc-sync: documentation vs code drift
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Removed:
- prefer-core.sh: blocked raw go/php commands unnecessarily
- post-commit-check.sh: noisy warnings after every commit
- block-docs.sh: blocked writing specs and RFCs
- capture-context.sh, extract-actionables.sh, pr-created.sh, suggest-compact.sh: orphaned scripts not referenced by any hook
Kept:
- go-format.sh, php-format.sh: auto-format after edits (helpful)
- check-debug.sh: warns about dd()/fmt.Print* (lightweight)
- session-start.sh, pre-compact.sh, session-save.sh: essential
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Add Issue, Sprint, IssueComment models with workspace scoping, soft deletes, activity logging, and MCP context support. Migration creates issues, sprints, and issue_comments tables. API routes registered with read/write permission scopes. Controller stubs for IssueController and SprintController. Co-Authored-By: Virgil <virgil@lethean.io>
…rints
Issue Actions: CreateIssue, GetIssue, ListIssues, UpdateIssue, ArchiveIssue, AddIssueComment with full validation and workspace scoping.
Sprint Actions: CreateSprint, GetSprint, ListSprints, UpdateSprint, ArchiveSprint with status lifecycle management.
IssueController: REST endpoints with filtering by status, type, priority, sprint, and label. Comment sub-resource endpoints.
SprintController: REST endpoints with progress tracking.
Co-Authored-By: Virgil <virgil@lethean.io>
… actions
IssueTest covers: model CRUD, status transitions, slug generation, sprint association, comments, label management, scopes, MCP context, and all Issue Action classes with validation.
SprintTest covers: model lifecycle, progress calculation, scopes, MCP context, and all Sprint Action classes with validation.
Co-Authored-By: Virgil <virgil@lethean.io>
AgentApiAuth middleware sets workspace_id attribute, not workspace. Controllers were trying to read ->id on null. Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
- Add paths.go with WorkspaceRoot(), CoreRoot(), PlansRoot()
- All workspace paths now check CORE_WORKSPACE env var first
- Fallback: ~/Code/.core/workspace (works on any machine)
- Remove committed core-agent and mcp binaries from tracking
- Add .gitignore for compiled binaries
Co-Authored-By: Virgil <virgil@lethean.io>
….core
Scripts now use ${CORE_WORKSPACE:-$HOME/Code/.core} instead of
hardcoded host-uk path. Works on both Cladius and Charon.
Co-Authored-By: Virgil <virgil@lethean.io>
Uses just "core-agent" (no absolute paths) — works on any machine where core-agent is on PATH. Co-Authored-By: Virgil <virgil@lethean.io>
Plugin's mcp.json now uses local "core-agent mcp" via stdio instead of remote HTTPS. This means installing the plugin auto-registers the MCP server — no manual .mcp.json needed. Co-Authored-By: Virgil <virgil@lethean.io>
- Plugin now provides MCP server via mcp.json (local core-agent stdio)
- No manual .mcp.json needed — plugin auto-registers
- Portable paths (CORE_WORKSPACE env var, no hardcoded machine paths)
Co-Authored-By: Virgil <virgil@lethean.io>
…ve invalid hooks from plugin.json Co-Authored-By: Virgil <virgil@lethean.io>
Config now found at:
1. ~/Code/.core/agents.yaml (shared, portable)
2. ~/Code/core/agent/config/agents.yaml (repo fallback)
Both Cladius and Charon find the same config.
Co-Authored-By: Virgil <virgil@lethean.io>
- orchestrator: queue dir uses CoreRoot()/queue
- cmd/agent: queue dir resolved by config, not hardcoded
- cmd/workspace: specs path uses ~/Code/specs (no host-uk)
- Zero remaining /home/claude or host-uk references in Go
Co-Authored-By: Virgil <virgil@lethean.io>
Removed:
- pkg/loop/ — superseded by Claude native tool use
- pkg/lifecycle/ — 14K lines, old PHP API polling client
- pkg/jobrunner/ — old CodeRabbit orchestration (rebuilt in verify.go)
- pkg/orchestrator/ — old AgentCI config (replaced by agents.yaml)
- pkg/workspace/ — empty stub
- pkg/plugin/ — empty stub
- cmd/agent/ — old fleet management CLI
- cmd/dispatch/ — old polling dispatcher
- cmd/workspace/ — unused CLI
- cmd/tasks/ — unused CLI
- cmd/taskgit/ — unused CLI
120 files deleted, 28,780 lines removed.
Remaining: 31 Go files, 6,666 lines — cmd/core-agent + pkg/agentic + pkg/brain + pkg/monitor.
All functionality preserved in the new MCP-native architecture.
Co-Authored-By: Virgil <virgil@lethean.io>
Critical:
- Fix token leaked in URL query string (getLabelID) — now uses Authorization header
- Fix unchecked type assertion panic in parseMessages — safe toInt() helper
High:
- Remove dead canDispatch() function
- Remove time.Now import hack from mirror.go
- Remove invalid --no-force git flag from mirror.go
- Tidy go.mod (remove unused deps from deleted packages)
Co-Authored-By: Virgil <virgil@lethean.io>
- os.ReadFile/WriteFile → coreio.Local in monitor and remote packages
- Deduplicate agentName() → shared agentic.AgentName()
- Deduplicate workspaceRoot() → shared agentic.WorkspaceRoot()
- fileExists uses IsFile() instead of reading whole file
- Fix SPDX-Licence → SPDX-License typo
- Remove time.Now import hack
- Fix hardcoded PR #1 in review queue
Co-Authored-By: Virgil <virgil@lethean.io>
Important:
- Fix nil pointer dereference on resp.StatusCode when err != nil (8 sites)
- Replace curl shell-out with net/http in monitor inbox check
- Handle clone failure in prep.go (was silently swallowed)
- Use GitHubOrg() instead of hardcoded "dAppCore"
Medium:
- Fix JSONL append race (read+write → os.OpenFile O_APPEND)
- Remove dead google/mcp/ directory
Co-Authored-By: Virgil <virgil@lethean.io>
Simplifier pass (-38 lines):
- Consolidate status update branches in spawnAgent (3 → 1 write)
- Remove 6 duplicate defer resp.Body.Close() calls
- Fix nil err reference in non-200 error paths (scan.go, pr.go)
- Remove redundant plansDir() and workspaceRoot() wrappers
- Simplify countRunningByAgent to use baseAgent() helper
- Extract markMerged in verify.go to remove duplication
- Clean imports and remove dead code
Co-Authored-By: Virgil <virgil@lethean.io>
Dispatched agents must now run a 4-pass polish cycle before committing:
1. Self-review (conventions, nil pointers, hardcoded paths)
2. Build + test (fix failures, add tests for critical paths)
3. Simplify (consolidate, deduplicate, reduce complexity)
4. Final check (build clean → commit)
This captures the review→fix→simplify→re-review flow that produces polished code. Goal: CodeRabbit finds zero issues on agent output.
Co-Authored-By: Virgil <virgil@lethean.io>
…fier
Plugin agents for the closeout sequence:
- agent-task-code-review: conventions, security, nil pointers, confidence scoring
- agent-task-code-simplifier: consolidate, deduplicate, flatten, no behaviour change
Namespace: agent-task-code-* (SASE boundary pattern)
Co-Authored-By: Virgil <virgil@lethean.io>
- Move prompts/ → pkg/prompts/lib/ (112 templates + personas)
- pkg/prompts provides Template(), Persona(), ListTemplates(), ListPersonas()
- prep.go uses embedded prompts instead of filesystem reads
- No external file dependency — prompts compiled into binary
- Remove old prompts/ directory
Co-Authored-By: Virgil <virgil@lethean.io>
engineering/engineering-security-engineer.md → engineering/security-engineer.md
testing/testing-api-tester.md → testing/api-tester.md
etc. across all 68 persona files.
Path is now: prompts.Persona("engineering/security-engineer")
Folder provides the namespace, filename provides the role.
Co-Authored-By: Virgil <virgil@lethean.io>
engineering/security-* family:
- security-senior: full-stack security (was security-engineer)
- security-developer: code-level review, OWASP, fixes code
- security-devops: Docker, Traefik, Ansible, CI/CD, TLS
- security-secops: incident response, monitoring, forensics
- security-architect: threat modelling, STRIDE, trust boundaries
- security-junior: checklist-based scanning, batch convention checks
Each persona is a system prompt attached via dispatch:
agentic_dispatch persona=engineering/security-developer
Folder = domain, filename = function, template = task type.
Co-Authored-By: Virgil <virgil@lethean.io>
- marketing/ → smm/ (social media marketing)
- Add smm/security-developer (OAuth, API keys, account security)
- Add smm/security-secops (account compromise, brand hijacking, incidents)
Pattern: same role name in different domains produces domain-specific expertise.
engineering/security-developer → reviews Go/PHP code
smm/security-developer → reviews social media API integrations
14 domains, 95 personas. Path has semantic meaning.
Co-Authored-By: Virgil <virgil@lethean.io>
New domain: devops/ (3 personas — security-developer, senior, junior)
Cross-cutting security-developer role now in 7 domains:
engineering/ — Go/PHP code security, nil pointers, injection
devops/ — Ansible, Docker, Traefik, CI/CD security
smm/ — OAuth tokens, platform API keys, account security
support/ — customer incident investigation, data exposure
testing/ — security test writing, fuzzing, auth bypass tests
design/ — XSS, CSRF, CSP, clickjacking, template escaping
product/ — feature security review, threat models, privacy
Same role name, different domain knowledge. Path = context, file = lens.
16 domains, 116 personas.
Co-Authored-By: Virgil <virgil@lethean.io>
CLAUDE.md:
- Complete rewrite reflecting current architecture (30 files, 6.5K lines)
- Documents all 33 MCP tools, 7 agent types, dispatch flow
- Removes all references to deleted packages
Tests:
- pkg/agentic: 20 tests covering paths, extractPRNumber, truncate, countFindings, parseRetryAfter, resolveHost, baseAgent, validPlanStatus, generatePlanID, extractJSONField
- pkg/prompts: 7 tests covering Template, Persona, ListTemplates, ListPersonas, prefix duplication check
Fix: rename support/support-responder → support/responder (caught by test)
Co-Authored-By: Virgil <virgil@lethean.io>
Extract 4 hardcoded templates from prep.go raw strings into
pkg/prompts/lib/templates/ as markdown files:
- coding.md — main coding template with closeout sequence
- conventions.md — convention audit (report only)
- security.md — security review
- verify.md — PR verification
- default.md — fallback
prep.go now calls prompts.Template("coding") instead of
maintaining 120 lines of raw Go string literals.
Backticks now work properly in templates — no more
concatenation hacks for inline code formatting.
Co-Authored-By: Virgil <virgil@lethean.io>
Domain renames:
- engineering/ → code/ (write/review code)
- paid-media/ → ads/ (advertising)
- project-management/ → plan/ (planning, coordination)
- spatial-computing/ → spatial/ (XR, Metal)
New domains:
- secops/ — security operations (from engineering/security-*)
- blockchain/ — chain, consensus, crypto (from specialized/)
Dissolved:
- specialized/ → split into blockchain/, code/, support/, smm/, testing/
- strategy/ → merged into plan/ (playbooks, runbooks, coordination)
- examples/ → removed (reference only, not functional)
secops/ names cleaned: security-developer → developer, security-senior → senior, etc.
13 domains, 111 personas. Path = context, file = lens.
secops/developer — code-level security
secops/architect — threat modelling
secops/devops — infrastructure security
code/backend-architect — system design
smm/tiktok-strategist — TikTok content
Co-Authored-By: Virgil <virgil@lethean.io>
Three distinct concepts in lib/:
prompts/ — System prompts (PROMPT.md, HOW to work)
tasks/ — Structured task plans (PLAN.md, WHAT to do)
flows/ — Multi-phase workflows (orchestration)
personas/ — Domain/role system prompts (WHO you are)
API updated:
prompts.Prompt("coding") — system prompt
prompts.Task("bug-fix") — task plan
prompts.Flow("prod-push-polish") — workflow
prompts.Template() — backwards compat (searches both)
templates/ dir reserved for future output templates
(CodeRabbit report formatting, CLI output parsing, etc.)
Co-Authored-By: Virgil <virgil@lethean.io>
flows/git.md — commit workflow
flows/go.md — build, vet, test, tidy
flows/php.md — pint, phpstan, pest, audit
flows/ts.md — tsc, eslint, test, build
flows/cpp.md — cmake configure, build, test
flows/py.md — venv, ruff, pytest
flows/docker.md — build, smoke test, compose
flows/npm.md — lint, test, build, publish
flows/release.md — tag, push, artefacts, downstream deps
Each flow is a reminder of the standard build sequence for that stack.
Co-Authored-By: Virgil <virgil@lethean.io>
Plurals → singular:
prompts/ → prompt/
tasks/ → task/
flows/ → flow/
personas/ → persona/
Nested task namespace:
task/code/review.yaml
task/code/refactor.yaml
task/code/dead-code.yaml
task/code/test-gaps.yaml
Mirrors agent naming: core:agent-task-code-review → task/code/review
API: prompts.Task("code/review"), prompts.Persona("secops/developer")
Co-Authored-By: Virgil <virgil@lethean.io>
Pattern: task name maps to file + bundle directory
task/code/review.md ← the task definition
task/code/review/ ← additional context files
conventions.md ← checklist
severity.md ← rating guide
plan.yaml ← structured phases
TaskBundle("code/review") returns (main, bundle, err)
where bundle is map[filename]content of everything in review/.
Agent name maps to lib path:
core:agent-task-code-review → lib/task/code/review + review/
Converted code/ tasks from YAML plans to markdown task prompts
with optional bundles. Simplifier gets patterns.md bundle.
Co-Authored-By: Virgil <virgil@lethean.io>
- orchestrate skill: full plan→dispatch→review→fix→verify pipeline - prompts skill: browse/read personas, tasks, flows, templates - review-pipeline skill: multi-stage security→fix→simplify→architecture→verify - local-dispatch.sh: dispatch agents without MCP (clone, run CLI, capture) - workspace-status.sh: check all workspace statuses - main.go: fix ServiceFor removal (use c.Service() + type assert) Co-Authored-By: Virgil <virgil@lethean.io>
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟡 Minor comments (18)
pkg/agentic/verify.go, lines 331-338 (1)

⚠️ Potential issue | 🟡 Minor

**Make PR number extraction robust to trailing slash/query.**

Current parsing fails for URLs like `/pulls/123/` or `/pulls/123?foo=bar`.

Suggested fix:

```diff
 func extractPRNumber(prURL string) int {
-	parts := strings.Split(prURL, "/")
+	clean := strings.TrimSpace(prURL)
+	clean = strings.SplitN(clean, "?", 2)[0]
+	clean = strings.TrimRight(clean, "/")
+	parts := strings.Split(clean, "/")
 	if len(parts) == 0 {
 		return 0
 	}
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `pkg/agentic/verify.go` around lines 331-338: the extractPRNumber function fails on URLs with trailing slashes or query strings; update it to robustly obtain the last path segment and parse the integer. Parse the URL (using net/url.Parse) or at minimum strip any query string after '?' and any trailing '/' before splitting, then take the final path segment (or use path.Base on the parsed URL.Path) and Atoi it, returning 0 on parse errors, so inputs like "/pulls/123/", "/pulls/123?foo=bar" and full URLs all work.

pkg/agentic/verify.go, lines 61-72 (1)
⚠️ Potential issue | 🟡 Minor

**Use the retry outcome when flagging for review.**

`flagForReview(..., result)` uses the first attempt result even if the second attempt fails differently, so the posted reason can be wrong.

Suggested fix:

```diff
-	result := s.attemptVerifyAndMerge(srcDir, org, st.Repo, st.Branch, prNum)
-	if result == mergeSuccess {
+	result := s.attemptVerifyAndMerge(srcDir, org, st.Repo, st.Branch, prNum)
+	if result == mergeSuccess {
 		markMerged()
 		return
 	}
@@
-	if result == mergeConflict || result == testFailed {
+	finalResult := result
+	if result == mergeConflict || result == testFailed {
 		if s.rebaseBranch(srcDir, st.Branch) {
-			if s.attemptVerifyAndMerge(srcDir, org, st.Repo, st.Branch, prNum) == mergeSuccess {
+			finalResult = s.attemptVerifyAndMerge(srcDir, org, st.Repo, st.Branch, prNum)
+			if finalResult == mergeSuccess {
 				markMerged()
 				return
 			}
 		}
 	}
@@
-	s.flagForReview(org, st.Repo, prNum, result)
+	s.flagForReview(org, st.Repo, prNum, finalResult)
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `pkg/agentic/verify.go` around lines 61-72: the code currently passes the original result into flagForReview even if the second attempt (rebase + attemptVerifyAndMerge) produced a different outcome; update the logic to capture the outcome of the second attempt and pass that final result to flagForReview instead of the initial result. Concretely, assign the return value of attemptVerifyAndMerge to a variable (or overwrite result) in the rebase-branch flow (referencing s.rebaseBranch and s.attemptVerifyAndMerge), then use that final outcome when invoking s.flagForReview(org, st.Repo, prNum, ...). Ensure flagForReview receives the second attempt's result when the second attempt fails.

pkg/agentic/review_queue.go, lines 187-188 (1)
⚠️ Potential issue | 🟡 Minor

**Keep training verdict detection in lockstep with the runtime verdict.**

Line 187 recognises `LGTM` as clean, but line 336 does not. A clean `LGTM` review will therefore be recorded in reviews.jsonl as findings, which will skew the training corpus. Please share one `isCleanReviewOutput()` helper between both paths.

Also applies to: lines 329-337

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `pkg/agentic/review_queue.go` around lines 187-188: create a single helper function isCleanReviewOutput(output string) bool that encapsulates the clean-detection logic (checking for "No findings", "no issues", "LGTM", etc.) and replace the duplicated inline checks at the two call sites (the one setting result.Verdict = "clean" and the other around the reviews.jsonl path near lines 329-337) with calls to this helper so the runtime verdict and training-record logic stay in lockstep; ensure the helper treats "LGTM" the same as the other clean indicators.

docs/plans/2026-03-16-issue-tracker.md, lines 76-83 (1)
⚠️ Potential issue | 🟡 Minor

**Specify a fenced code language for the routes snippet.**

This currently triggers markdownlint MD040 and hurts rendering/tooling consistency.

🔧 Suggested fix:

````diff
-```
+```php
 Route::apiResource('issues', IssueController::class);
 Route::post('issues/{issue}/comments', [IssueController::class, 'addComment']);
 Route::get('issues/{issue}/comments', [IssueController::class, 'listComments']);
 Route::apiResource('sprints', SprintController::class)->only(['index', 'store']);
 Route::post('sprints/{sprint}/start', [SprintController::class, 'start']);
 Route::post('sprints/{sprint}/complete', [SprintController::class, 'complete']);
````

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `docs/plans/2026-03-16-issue-tracker.md` around lines 76-83: the fenced code block showing the routing definitions lacks a language tag, which triggers MD040; update the block to use a PHP language fence by adding "php" after the opening triple backticks so the snippet containing the Route::apiResource and Route::post definitions for issues, comments, and sprints is fenced as PHP.

claude/core/commands/review.md, lines 14-15 (1)

⚠️ Potential issue | 🟡 Minor

**Use `CORE_WORKSPACE` env var for path consistency with production scripts.**

Documentation should reference `${CORE_WORKSPACE:-$HOME/Code/.core}` like other workspace-aware scripts in the codebase (check-inbox.sh, workspace-status.sh). Hardcoding `.core/workspace` bypasses the established env var convention. The `main` branch reference is safe; the codebase consistently uses `main` as the default branch.

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed.
In `claude/core/commands/review.md` around lines 14-15: replace the hardcoded ".core/workspace/{workspace}/src" path in the two git commands with the CORE_WORKSPACE-aware expansion used elsewhere; update the occurrences of "git -C .core/workspace/{workspace}/src log --oneline main..HEAD" and "git -C .core/workspace/{workspace}/src diff --stat main" to use "${CORE_WORKSPACE:-$HOME/Code/.core}/workspace/{workspace}/src" so the docs follow the same environment variable convention as check-inbox.sh and workspace-status.sh.

claude/core/scripts/workspace-status.sh, lines 24-25 (1)

⚠️ Potential issue | 🟡 Minor

**Handle missing log files gracefully.**

When no `agent-*.log` files exist, `$LOG` is empty, causing `basename "" .log` to output `.log` and `sed` to produce `.log` as the agent name. This results in confusing output like `workspace | .log | completed | 0b log | 0 commits`.

🛠️ Proposed fix:

```diff
 # Check for agent log
 LOG=$(ls "$ws"/agent-*.log 2>/dev/null | head -1)
-AGENT=$(basename "$LOG" .log 2>/dev/null | sed 's/agent-//')
+if [ -n "$LOG" ]; then
+  AGENT=$(basename "$LOG" .log | sed 's/agent-//')
+else
+  AGENT="none"
+fi
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `claude/core/scripts/workspace-status.sh` around lines 24-25: the code computes LOG and then unconditionally runs basename/sed, which yields ".log" when LOG is empty; handle missing log files by checking LOG before deriving AGENT. If LOG is non-empty, set AGENT using basename "$LOG" .log and sed 's/agent-//'; otherwise set AGENT to an empty string or a sensible placeholder (e.g. "none") so that downstream output doesn't show ".log" and basename is never called with an empty argument.

pkg/agentic/remote_client.go, lines 64-72 (1)
⚠️ Potential issue | 🟡 Minor

**Errors silently ignored during initialized notification.**

Line 64 ignores the error from `http.NewRequestWithContext`, and lines 67-70 silently succeed even if the notification request fails. If the notification fails, the MCP session may be in an inconsistent state.

🛠️ Proposed fix:

```diff
-	notifReq, _ := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(notifBody))
+	notifReq, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(notifBody))
+	if err != nil {
+		return "", coreerr.E("mcpInitialize", "create notification request", err)
+	}
 	setHeaders(notifReq, token, sessionID)
 	notifResp, err := client.Do(notifReq)
-	if err == nil {
-		notifResp.Body.Close()
+	if err != nil {
+		return "", coreerr.E("mcpInitialize", "notification failed", err)
 	}
+	notifResp.Body.Close()
 	return sessionID, nil
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `pkg/agentic/remote_client.go` around lines 64-72: check and propagate errors when building and sending the initialized notification; do not ignore the error returned by http.NewRequestWithContext when creating notifReq (validate err and return it), and check the result of client.Do(notifReq) for both network errors and non-2xx HTTP status codes; ensure notifResp.Body is closed only if notifResp != nil and return a descriptive error (rather than silently succeeding) so the caller can detect a failed notification; reference notifReq, setHeaders, client.Do, notifResp, and sessionID when making these changes.

claude/core/agents/agent-task-code-simplifier.md, line 13 (1)
⚠️ Potential issue | 🟡 Minor

**Clarify the Git range semantics in the prompt.**

`origin/main..HEAD` shows commits in HEAD not in origin/main (branch-relative changes), not "the last commit." The semantics are:

- `HEAD^..HEAD` — last commit only (fails in this repository; no parent commit)
- `origin/main..HEAD` — changes in HEAD relative to origin/main (works; appropriate for PR review)
- `origin/main...HEAD` — symmetric difference between branches (fails; no merge base)

The command in the prompt is correct for showing PR changes, but the prose "since the last commit" is misleading. Reword to clarify that this shows files changed relative to the main branch, not relative to the immediately preceding commit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `claude/core/agents/agent-task-code-simplifier.md` at line 13: update the prose to remove "since the last commit" and instead clarify that `git diff --name-only origin/main..HEAD` lists files changed in HEAD relative to the main branch (branch-relative changes), not the immediately preceding commit; mention the correct semantics and examples (`HEAD^..HEAD` for the last commit, `origin/main..HEAD` for PR/branch changes, and `origin/main...HEAD` for symmetric difference) so the prompt accurately describes which range each form shows.

docker/scripts/setup.sh, line 89 (1)
⚠️ Potential issue | 🟡 Minor

**Avoid printing secrets to terminal output.**
Echoing the Brain API key to stdout may expose it in terminal scrollback, CI logs, or screen recordings. Consider writing it to a file or instructing the user where to find it instead.
🛡️ Suggested improvement:

```diff
-echo "Brain API key: $(grep CORE_BRAIN_KEY "$DOCKER_DIR/.env" | cut -d= -f2)"
+echo "Brain API key: See $DOCKER_DIR/.env (CORE_BRAIN_KEY)"
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `docker/scripts/setup.sh` at line 89: the script currently echoes the Brain API key by reading CORE_BRAIN_KEY from "$DOCKER_DIR/.env" and printing it to stdout; stop printing secrets by removing that echo and either write the full key to a secure file with restrictive permissions (e.g. create a file like "$DOCKER_DIR/brain_key.txt" and use umask/chmod 600 before saving the cut output), print an instruction pointing users to the .env location (without showing the value), or print a masked value; update the code referencing CORE_BRAIN_KEY/DOCKER_DIR in setup.sh accordingly so the key is never logged to stdout or CI logs.

CLAUDE.md, lines 45-58 (1)
⚠️ Potential issue | 🟡 Minor

**The MCP tool count in this heading is off by one.**
The table below lists 34 tools, not 33. Small issue, but this file is used as runtime context and the mismatch makes the guidance less trustworthy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `CLAUDE.md` around lines 45-58: update the heading that currently reads "MCP Tools (33)" to reflect the actual count "MCP Tools (34)" so the header matches the 34 items listed; locate the heading text "MCP Tools (33)" in the file (near the tools table) and change the number to 34.

cmd/core-agent/main.go, lines 84-85 (1)
⚠️ Potential issue | 🟡 Minor

**Handle `os.UserHomeDir` failure before building the PID path.**

If `HOME` is unset, `filepath.Join("", ".core", "core-agent.pid")` resolves to a relative path, so PID tracking silently moves into the current working directory.

💡 Proposed fix:

```diff
-	home, _ := os.UserHomeDir()
+	home, err := os.UserHomeDir()
+	if err != nil {
+		return cli.Wrap(err, "resolve home directory")
+	}
 	pidFile := filepath.Join(home, ".core", "core-agent.pid")
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `cmd/core-agent/main.go` around lines 84-85: the code calls os.UserHomeDir() and ignores the error, which can produce a relative pidFile path; check the returned error from os.UserHomeDir() before using the home value and handle it (return error/exit or fall back to a safe absolute directory), then build pidFile with filepath.Join(home, ".core", "core-agent.pid"); update the block around the os.UserHomeDir call in main (where home and pidFile are set) to validate the error and ensure pidFile is always an absolute, well-defined path.

pkg/brain/direct.go, lines 133-151 (1)
133-151:⚠️ Potential issue | 🟡 MinorMissing fields in remember request payload.
The
RememberInputstruct (frompkg/brain/tools.go) includesConfidence,Supersedes, andExpiresInfields, but these are not forwarded to the API call. This means callers cannot use these features when storing memories via the direct subsystem.🔧 Proposed fix to include missing fields
result, err := s.apiCall(ctx, "POST", "/v1/brain/remember", map[string]any{ "content": input.Content, "type": input.Type, "tags": input.Tags, "project": input.Project, "agent_id": agentName(), + "confidence": input.Confidence, + "supersedes": input.Supersedes, + "expires_in": input.ExpiresIn, })🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/brain/direct.go` around lines 133 - 151, the remember handler is not forwarding three fields from RememberInput to the API; update the payload in DirectSubsystem.remember to include input.Confidence, input.Supersedes and input.ExpiresIn in the map passed to s.apiCall (use the same snake_case key style as the existing payload, e.g. "confidence", "supersedes", "expires_in") so the API receives those values when creating a memory; keep existing behavior for other fields and return values unchanged.

pkg/agentic/mirror.go (1)
260-274: ⚠️ Potential issue | 🟡 Minor

Fragile JSON field extraction.

`extractJSONField` uses simple string manipulation that doesn't handle escaped quotes, nested objects, or arrays. For example, a URL containing `\"`, or a field value with `"url"` as a substring, could cause incorrect parsing. Consider using `encoding/json` for reliable extraction.

🔧 More robust approach using json.Unmarshal

```diff
 // extractJSONField extracts a simple string field from JSON array output.
-func extractJSONField(jsonStr, field string) string {
-	// Quick and dirty — works for gh CLI output like [{"url":"https://..."}]
-	key := fmt.Sprintf(`"%s":"`, field)
-	idx := strings.Index(jsonStr, key)
-	if idx < 0 {
-		return ""
-	}
-	start := idx + len(key)
-	end := strings.Index(jsonStr[start:], `"`)
-	if end < 0 {
-		return ""
-	}
-	return jsonStr[start : start+end]
-}
+func extractJSONField(jsonStr, field string) string {
+	var items []map[string]any
+	if err := json.Unmarshal([]byte(jsonStr), &items); err != nil || len(items) == 0 {
+		return ""
+	}
+	if val, ok := items[0][field].(string); ok {
+		return val
+	}
+	return ""
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/mirror.go` around lines 260 - 274, the current extractJSONField function is brittle because it uses string searches to parse JSON; replace its implementation to parse jsonStr with encoding/json (e.g., json.Unmarshal into []map[string]interface{} or a defined struct) and then safely read the requested field from the first element, returning "" if the JSON is invalid or the field is missing; update extractJSONField to handle escaped characters and nested structures by using the parsed value (type-assert to string) instead of manual substring operations.

pkg/agentic/prep.go (1)
191-193: ⚠️ Potential issue | 🟡 Minor

Git branch creation error ignored.

If `git checkout -b` fails (e.g., the branch already exists, or another git error occurs), the error is silently ignored. Workspace preparation continues but may be in an unexpected state.

🔧 Consider logging or handling the error

```diff
 branchCmd := exec.CommandContext(ctx, "git", "checkout", "-b", branchName)
 branchCmd.Dir = srcDir
-branchCmd.Run()
+if err := branchCmd.Run(); err != nil {
+	// Branch may already exist, try switching to it
+	switchCmd := exec.CommandContext(ctx, "git", "checkout", branchName)
+	switchCmd.Dir = srcDir
+	switchCmd.Run()
+}
 out.Branch = branchName
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/prep.go` around lines 191 - 193, the git branch creation error is currently ignored; update the code around the exec.CommandContext call that creates branchCmd (exec.CommandContext(ctx, "git", "checkout", "-b", branchName)) to capture and handle the returned error from branchCmd.Run(): check the error value, log a clear message including the branchName and underlying error (using the existing logger or processLogger), and return or propagate the error (or fallback behavior) so workspace preparation does not continue silently on failure.

pkg/agentic/scan.go (1)
102-127: ⚠️ Potential issue | 🟡 Minor

Ignored errors in HTTP request creation and JSON decoding.

At line 104, the `http.NewRequestWithContext` error is discarded. At line 120, the `json.Decode` error is silently ignored, which could result in an empty slice being returned without any indication of parse failure. This pattern repeats in `listRepoIssues`.

🔧 Proposed fix for listOrgRepos

```diff
 func (s *PrepSubsystem) listOrgRepos(ctx context.Context, org string) ([]string, error) {
 	url := fmt.Sprintf("%s/api/v1/orgs/%s/repos?limit=50", s.forgeURL, org)
-	req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
+	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
+	if err != nil {
+		return nil, coreerr.E("scan.listOrgRepos", "failed to create request", err)
+	}
 	req.Header.Set("Authorization", "token "+s.forgeToken)
 	resp, err := s.client.Do(req)
 	// ...
 	var repos []struct {
 		Name string `json:"name"`
 	}
-	json.NewDecoder(resp.Body).Decode(&repos)
+	if err := json.NewDecoder(resp.Body).Decode(&repos); err != nil {
+		return nil, coreerr.E("scan.listOrgRepos", "failed to decode response", err)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/scan.go` around lines 102 - 127, in listOrgRepos (and similarly in listRepoIssues) stop ignoring errors from http.NewRequestWithContext and json.NewDecoder.Decode: assign the returned error from NewRequestWithContext (req, err := http.NewRequestWithContext(...)) and if non-nil return coreerr.E("scan.listOrgRepos", "failed to create request", err); after reading the response check the JSON decode error (err = json.NewDecoder(resp.Body).Decode(&repos)) and if non-nil return coreerr.E("scan.listOrgRepos", "failed to decode response", err); apply the same pattern and error messages in listRepoIssues to surface request/parse failures instead of silently returning empty results.

pkg/agentic/prep.go (1)
176-189: ⚠️ Potential issue | 🟡 Minor

Task slug could become empty after sanitisation.

The slug sanitisation replaces invalid characters with `-`, then truncates to 40 chars, then trims `-` from both ends. If the task contains only invalid characters (e.g., all spaces or symbols), the result would be an empty string, leading to a branch name of just `"agent/"`.

🔧 Add fallback for empty slug

```diff
 taskSlug = strings.Trim(taskSlug, "-")
+if taskSlug == "" {
+	taskSlug = fmt.Sprintf("task-%d", time.Now().Unix())
+}
 branchName := fmt.Sprintf("agent/%s", taskSlug)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/prep.go` around lines 176 - 189, after sanitising input.Task into taskSlug (in the taskSlug mapping and trimming logic) add a fallback if taskSlug is empty: generate a non-empty slug (e.g., "untitled" or a short deterministic fallback such as "task-<short-hash-of-input.Task>") and assign it to taskSlug before building branchName; update the code around taskSlug and branchName (fmt.Sprintf("agent/%s", taskSlug)) so branchName is never "agent/" even when input.Task contains only invalid characters.

pkg/agentic/plan.go (1)
230-232: ⚠️ Potential issue | 🟡 Minor

Normalise phase fields on update, not only on create.

When phases are replaced at line 230, the defaults applied in the create flow (status/number) are skipped. This allows inconsistent phase data after updates.

♻️ Suggested fix

```diff
 if input.Phases != nil {
 	plan.Phases = input.Phases
+	for i := range plan.Phases {
+		if plan.Phases[i].Status == "" {
+			plan.Phases[i].Status = "pending"
+		}
+		if plan.Phases[i].Number == 0 {
+			plan.Phases[i].Number = i + 1
+		}
+	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/plan.go` around lines 230 - 232, the code currently replaces plan.Phases with input.Phases but skips the create-flow normalization (defaulting fields like status and number), causing inconsistent data after updates; after assigning plan.Phases = input.Phases ensure you run the same normalization used on create, e.g., iterate plan.Phases and for each phase set default Status (e.g., "pending") if empty and set Number to its index+1 if zero, or invoke the existing normalizePhases/normalizePhase helper used during creation so updated phases get the same defaults.

pkg/agentic/plan.go (1)
344-346: ⚠️ Potential issue | 🟡 Minor

Handle `rand.Read` error in ID generation.

The `rand.Read` call at line 345 ignores errors. If entropy reading fails, the generated ID will use uninitialized bytes, degrading suffix quality and increasing collision risk.

🛠️ Suggested fix

```diff
 // Append short random suffix for uniqueness
 b := make([]byte, 3)
-rand.Read(b)
+if _, err := rand.Read(b); err != nil {
+	return fmt.Sprintf("%s-%d", slug, time.Now().UnixNano())
+}
 return slug + "-" + hex.EncodeToString(b)
```

Note: the `fmt` package import must be added if not already present.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/plan.go` around lines 344 - 346, The call to rand.Read(b) must check for errors: replace the ignored call with n, err := rand.Read(b) and if err != nil log the error (e.g., using fmt.Printf or process logger) and return a safe fallback suffix (for example hex.EncodeToString([]byte{0,0,0}) or another deterministic value) to avoid using uninitialized bytes; otherwise use hex.EncodeToString(b) as before — update imports to include fmt if you use fmt.Printf. Ensure you edit the rand.Read, b, and hex.EncodeToString usage in this function accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 01b2d38d-df4a-49f9-8aad-ee7daf8ec958
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (299)
💤 Files with no reviewable changes (58)
- claude/code/scripts/pr-created.sh
- claude/code/hooks/hooks.json
- google/mcp/testdata/core
- pkg/jobrunner/forgejo/source_extra_test.go
- google/mcp/README.md
- cmd/mcp/ethics.go
- google/mcp/main_test.go
- claude/verify/hooks.json
- pkg/jobrunner/forgejo/source_phase3_test.go
- cmd/tasks/cmd.go
- agents/examples/workflow-startup-mvp.md
- cmd/mcp/main.go
- claude/code/scripts/suggest-compact.sh
- agents/examples/nexus-spatial-discovery.md
- cmd/taskgit/cmd.go
- cmd/mcp/marketplace_test.go
- pkg/jobrunner/handlers/completion.go
- cmd/mcp/ethics_test.go
- claude/code/scripts/extract-actionables.sh
- cmd/tasks/updates.go
- pkg/jobrunner/forgejo/source_test.go
- cmd/mcp/util.go
- claude/code/scripts/block-docs.sh
- pkg/jobrunner/handlers/completion_test.go
- agents/examples/README.md
- agents/examples/workflow-with-memory.md
- claude/code/scripts/capture-context.sh
- cmd/mcp/marketplace.go
- google/mcp/main.go
- cmd/mcp/plugin_info.go
- cmd/mcp/types.go
- claude/code/scripts/post-commit-check.sh
- pkg/jobrunner/forgejo/signals_test.go
- claude/code/hooks/prefer-core.sh
- claude/code/.claude-plugin/plugin.json
- agents/examples/workflow-landing-page.md
- cmd/dispatch/ratelimit.go
- cmd/agent/cmd.go
- cmd/mcp/core_cli.go
- cmd/workspace/cmd_agent.go
- claude/code/scripts/session-start.sh
- cmd/workspace/cmd_task_test.go
- cmd/workspace/cmd_agent_test.go
- pkg/jobrunner/coverage_boost_test.go
- cmd/workspace/cmd.go
- pkg/jobrunner/forgejo/signals.go
- pkg/jobrunner/forgejo/source.go
- cmd/mcp/core_cli_test.go
- claude/verify/.claude-plugin/plugin.json
- cmd/workspace/cmd_task.go
- claude/review/.claude-plugin/plugin.json
- claude/review/hooks.json
- cmd/mcp/server.go
- pkg/jobrunner/forgejo/source_supplementary_test.go
- cmd/dispatch/cmd.go
- cmd/workspace/config.go
- cmd/workspace/cmd_prep.go
- cmd/workspace/cmd_workspace.go
```go
func planPath(dir, id string) string {
	return filepath.Join(dir, id+".json")
}
```
Block path traversal by validating plan IDs before path join.

Line 315 joins `id` directly into the filesystem path. Because `input.ID` is user-controlled in the read/update/delete flows, values like `../` can escape `PlansRoot()` and allow arbitrary file read/delete.

🔐 Suggested fix

```diff
+func validatePlanID(id string) error {
+	if id == "" || strings.Contains(id, "..") || strings.ContainsRune(id, '/') || strings.ContainsRune(id, '\\') {
+		return coreerr.E("validatePlanID", "invalid plan id", nil)
+	}
+	return nil
+}
+
 func planPath(dir, id string) string {
 	return filepath.Join(dir, id+".json")
 }

 func (s *PrepSubsystem) planRead(_ context.Context, _ *mcp.CallToolRequest, input PlanReadInput) (*mcp.CallToolResult, PlanReadOutput, error) {
 	if input.ID == "" {
 		return nil, PlanReadOutput{}, coreerr.E("planRead", "id is required", nil)
 	}
+	if err := validatePlanID(input.ID); err != nil {
+		return nil, PlanReadOutput{}, err
+	}
```

Apply the same validatePlanID check in planUpdate and planDelete.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/plan.go` around lines 315 - 317, The planPath helper currently
joins the raw id into a filesystem path, allowing path traversal; add and use
validatePlanID to reject IDs containing path separators or traversal patterns
before constructing paths and apply this validation in planUpdate and planDelete
flows (and any other callers) so planPath only receives a validated id, and
return/handle an error when validatePlanID fails to prevent
reading/updating/deleting files outside PlansRoot.
```go
wsDir := filepath.Join(WorkspaceRoot(), input.Workspace)
srcDir := filepath.Join(wsDir, "src")

if _, err := os.Stat(srcDir); err != nil {
	return nil, CreatePROutput{}, coreerr.E("createPR", "workspace not found: "+input.Workspace, nil)
}
```
🧩 Analysis chain

Verification scripts executed against the repository (dAppCore/agent); per-script output-length metadata omitted:

```sh
# First, find and examine the relevant file
find . -name "pr.go" -type f | head -20
# Check the complete createPR function implementation
rg -A 30 "func.*createPR" pkg/agentic/
# Look for WorkspaceRoot definition
rg -B 2 -A 5 "func.*WorkspaceRoot" .
# Check for existing path validation on input.Workspace
rg -B 5 -A 5 "input\.Workspace" pkg/agentic/
# Check how git commands are executed with srcDir
rg -B 3 -A 3 "git.*rev-parse\|git.*push" pkg/agentic/
# Find the git push command in pr.go
rg -B 5 -A 10 "git.*push" pkg/agentic/pr.go
# Check if there's any path validation utility in the codebase
rg -i "filepath.*clean\|validate.*path\|secure.*path" pkg/agentic/
# Look for tests to understand intended behaviour
fd -e "*test.go" pkg/agentic/ -exec grep -l "Workspace" {} \;
# Check status.go to see how it handles workspace names
cat -n pkg/agentic/status.go | head -60
# Check input struct definitions for CreatePRInput
rg -B 5 -A 5 "type CreatePRInput" pkg/agentic/
# Check if there's any validation of workspace names in the subsystem initialization
rg -B 10 -A 10 "type PrepSubsystem" pkg/agentic/
# Look for any tests that verify workspace path safety
rg "createPR\|Workspace.*\\.\\." pkg/agentic/ -t go
# Check how workspace names are set/managed - look at dispatch or creation logic
rg -B 3 -A 8 "NewWorkspace\|CreateWorkspace" pkg/agentic/
# Check if this is an MCP-exposed interface and where workspace names come from
rg -B 5 -A 15 "RegisterTool.*createPR\|agentic_create_pr" pkg/agentic/
# Look for where workspace names are generated/validated at creation time
rg -B 5 -A 10 "dispatch\|workspace.*name" pkg/agentic/ -i | head -80
# Verify filepath.Join behaviour with .. components to be absolutely sure
python3 << 'EOF'
import os
import posixpath
# Simulate Go's filepath.Join behavior on Unix
root = "/tmp/workspace"
workspace = "../../etc"
result = posixpath.normpath(posixpath.join(root, workspace))
print(f"Root: {root}")
print(f"Input: {workspace}")
print(f"Result: {result}")
print(f"Escaped root? {not result.startswith(root)}")
print()
# More realistic examples
examples = [
    ("workspace1", "/tmp/workspace/workspace1"),
    ("../../../etc", "/etc"),
    ("ws/../../../etc", "/etc"),
    ("ws/normal", "/tmp/workspace/ws/normal"),
]
for inp, expected in examples:
    result = posixpath.normpath(posixpath.join(root, inp))
    print(f"Input: {inp:20} → {result:30} (Expected: {expected}, Match: {result == expected})")
EOF
# Look for dispatch function and how workspace names are generated
rg -B 10 -A 20 "func.*dispatch\|Workspace.*=" pkg/agentic/ | head -120
# Check the MCP tool definitions to see what fields are exposed
rg -B 2 -A 8 "\"workspace\".*json" pkg/agentic/pr.go
# Verify if workspace names are user-controlled input via MCP or system-generated
rg -B 5 -A 10 "dispatch.*Input\|DispatchInput" pkg/agentic/
# Find all files in pkg/agentic to understand the codebase structure
fd . pkg/agentic/ -type f -name "*.go" | sort
# Check dispatch.go or similar files
cat -n pkg/agentic/dispatch.go 2>/dev/null | head -100
# Look at the MCP tool registration and what inputs are exposed
rg "RegisterTool\|AddTool" pkg/agentic/ -A 5 | head -60
# Search for where input.Workspace is used - trace back to caller
rg "createPR\(" pkg/agentic/ -B 3 -A 3
# Check the dispatch function to see how workspace directory is created
rg -B 5 -A 30 "func.*dispatch.*ctx.*context" pkg/agentic/dispatch.go | head -80
# Look for workspace directory creation and naming
rg "WorkspaceRoot\|wsDir.*=" pkg/agentic/dispatch.go -B 3 -A 3 | head -50
# Check if there's any input validation or sanitization applied to workspace names
rg -i "clean\|sanitize\|filepath.Base\|basename" pkg/agentic/
# Find the prepWorkspace function to see where workspace naming happens
rg -B 5 -A 40 "func.*prepWorkspace" pkg/agentic/
# Look for workspace name generation (likely using timestamp or random)
rg "workspace.*uuid\|workspace.*time\|workspace.*rand\|fmt.Sprintf.*workspace" pkg/agentic/ -i
# Check the full createPR function to understand the complete flow
sed -n '40,120p' pkg/agentic/pr.go
```
Validate `input.Workspace` against path traversal before using it in Git operations.

The createPR function accepts `input.Workspace` as untrusted caller input. Using `filepath.Join(WorkspaceRoot(), input.Workspace)` without validation allows path traversal: an attacker can pass `"../../../tmp"` or similar to escape `WorkspaceRoot()` and redirect Git operations (`git rev-parse`, `git push`) to arbitrary filesystem locations. The `os.Stat()` check only verifies path existence, not bounds.

Apply `filepath.Base()` to strip directory components, or validate that `input.Workspace` matches the expected workspace name format (e.g., `{repo}-{timestamp}`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/pr.go` around lines 57 - 62, The code uses input.Workspace
directly when building wsDir/srcDir in createPR which allows path traversal;
sanitize and validate input.Workspace before joining with WorkspaceRoot: first
reduce it to a single segment (e.g., workspaceName :=
filepath.Base(input.Workspace)) or run filepath.Clean and then require it
matches an allowed pattern (e.g., regex for "{repo}-{timestamp}"), then
recompute wsDir := filepath.Join(WorkspaceRoot(), workspaceName) and (for extra
safety) resolve abs paths and verify the final wsDir has WorkspaceRoot as its
prefix before proceeding with os.Stat or any Git operations.
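The remediation described above can be sketched in isolation. This is a minimal, hedged example: `validateWorkspace` and the `{repo}-{timestamp}` pattern are illustrative names, not the repository's actual API.

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
	"regexp"
)

// wsNamePattern is a hypothetical allow-list for "{repo}-{timestamp}" names.
var wsNamePattern = regexp.MustCompile(`^[a-zA-Z0-9_.-]+-\d+$`)

// validateWorkspace reduces the input to a single path segment and checks it
// against the expected workspace-name format before it is joined to the root.
func validateWorkspace(name string) (string, error) {
	// filepath.Base strips any directory components ("../../../tmp" -> "tmp"),
	// so any name that changes under Base/Clean contained path structure.
	base := filepath.Base(filepath.Clean(name))
	if base != name || !wsNamePattern.MatchString(base) {
		return "", errors.New("invalid workspace name: " + name)
	}
	return base, nil
}

func main() {
	for _, n := range []string{"agent-1714070000", "../../../tmp", "/etc/passwd"} {
		ws, err := validateWorkspace(n)
		fmt.Println(n, "->", ws, err)
	}
}
```

With this shape, the `os.Stat()` existence check only ever runs on a path that is provably a single segment under `WorkspaceRoot()`.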
if input.State == "" {
	input.State = "open"
}
if input.Limit == 0 {
	input.Limit = 20
}
🧩 Analysis chain
🏁 Script executed:
find . -name "pr.go" -path "*/agentic/*" | head -5
🏁 Script executed:
cat -n pkg/agentic/pr.go | sed -n '255,310p'
Validate and enforce the limit consistently to prevent panics and honour user requests.

Two issues require fixing:

- Negative limits cause a panic: negative input.Limit values bypass the zero-check at line 264 and reach line 294's slice operation (allPRs[:input.Limit]), causing a panic that can crash the handler.
- Requested limits above 10 are silently ignored: the listRepoPRs function hard-codes limit=10 in the API URL (line 305), regardless of the user's requested limit. For single-repo requests, users asking for 30 or 50 PRs receive at most 10 because the loop cannot fetch more from a single repository.

Required changes:

- Change the line 264 condition from == 0 to <= 0 to reject negative values
- Pass the remaining limit to listRepoPRs instead of hard-coding 10
- Update the listRepoPRs signature to accept and use the limit parameter
Suggested fix
- if input.Limit == 0 {
+ if input.Limit <= 0 {
input.Limit = 20
}
@@
- prs, err := s.listRepoPRs(ctx, input.Org, repo, input.State)
+ remaining := input.Limit - len(allPRs)
+ prs, err := s.listRepoPRs(ctx, input.Org, repo, input.State, remaining)
if err != nil {
continue
}
@@
-func (s *PrepSubsystem) listRepoPRs(ctx context.Context, org, repo, state string) ([]PRInfo, error) {
-	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/pulls?state=%s&limit=10",
-		s.forgeURL, org, repo, state)
+func (s *PrepSubsystem) listRepoPRs(ctx context.Context, org, repo, state string, limit int) ([]PRInfo, error) {
+	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/pulls?state=%s&limit=%d",
+		s.forgeURL, org, repo, state, limit)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/pr.go` around lines 261 - 266, The handler currently allows
negative input.Limit and always uses a hard-coded page size in listRepoPRs;
change the zero-check to "input.Limit <= 0" so negatives are normalized/rejected
(e.g., set to default 20 or treat as invalid) and update calls to listRepoPRs to
pass the remaining limit instead of 10; modify listRepoPRs' function signature
(e.g., listRepoPRs(..., limit int)) and ensure its URL/query uses that limit
when building the API request so slicing allPRs[:input.Limit] cannot panic and
the API fetch honors the requested limit.
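To see why an `== 0` check alone is insufficient, the clamping the review asks for can be sketched standalone (the function and variable names here are hypothetical, not the repository's actual identifiers):

```go
package main

import "fmt"

// clampLimit normalizes a caller-supplied limit the way the review suggests:
// non-positive values fall back to the default instead of reaching a slice op.
func clampLimit(limit, def int) int {
	if limit <= 0 { // an "== 0" check alone would let -5 through
		return def
	}
	return limit
}

func main() {
	prs := []string{"pr1", "pr2", "pr3"}
	limit := -5
	// prs[:limit] with limit == -5 would panic: slice bounds out of range.
	safe := clampLimit(limit, 20)
	if safe > len(prs) {
		safe = len(prs) // also cap at the slice length before slicing
	}
	fmt.Println(prs[:safe]) // [pr1 pr2 pr3]
}
```

Capping at `len(allPRs)` before slicing is the second half of the fix: even a valid limit larger than the collected set would otherwise panic.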
wsDir := filepath.Join(WorkspaceRoot(), input.Workspace)
srcDir := filepath.Join(wsDir, "src")

// Verify workspace exists
if _, err := os.Stat(srcDir); err != nil {
	return nil, ResumeOutput{}, coreerr.E("resume", "workspace not found: "+input.Workspace, nil)
}
🧩 Analysis chain
🏁 Script executed:
cd /repo && find . -name "resume.go" -type f | head -20
🏁 Script executed:
cd /repo && cat -n pkg/agentic/resume.go | head -100
🏁 Script executed:
cd /repo && wc -l pkg/agentic/resume.go
🏁 Script executed:
find . -name "resume.go" -type f
🏁 Script executed:
cat -n pkg/agentic/resume.go | head -120
🏁 Script executed:
rg "filepath.Join.*WorkspaceRoot" -A 5 -B 5
🏁 Script executed:
cat > /tmp/test_filepath.go << 'EOF'
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Test 1: Normal join
	result1 := filepath.Join("/workspace", "myws")
	fmt.Printf("Test 1 (normal): %s\n", result1)
	// Test 2: Join with .. segment
	result2 := filepath.Join("/workspace", "../etc")
	fmt.Printf("Test 2 (with ..): %s\n", result2)
	// Test 3: Multiple .. segments
	result3 := filepath.Join("/workspace", "../../root")
	fmt.Printf("Test 3 (multiple ..): %s\n", result3)
	// Test 4: Absolute path in second argument (joined under the first, not overriding it)
	result4 := filepath.Join("/workspace", "/etc/passwd")
	fmt.Printf("Test 4 (absolute second): %s\n", result4)
}
EOF
go run /tmp/test_filepath.go
🏁 Script executed:
rg "input.Workspace" pkg/agentic/resume.go -B 2 -A 2
🏁 Script executed:
rg "WorkspaceRoot" -A 2 -B 2 --type-add 'go:*.go' | head -50
🏁 Script executed:
rg "CallToolRequest" -B 5 -A 5 --type-add 'go:*.go' | head -80
🏁 Script executed:
rg "mcp.AddTool" -B 2 -A 10 | head -60
🏁 Script executed:
cat pkg/agentic/resume.go | grep -A 15 "registerResumeTool"
Reject workspace names that escape the workspace root.
The code at line 46 uses filepath.Join(WorkspaceRoot(), input.Workspace) without validating the input. Go's filepath.Join normalises .. segments, so input.Workspace of ../etc/passwd would escape the workspace root entirely. This allows an attacker to write ANSWER.md to arbitrary directories (line 72–76) and spawn an agent process in an arbitrary location (line 96). Add validation to reject workspace names containing / or .. segments, or use filepath.IsLocal to ensure the resolved path remains under the workspace root.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/resume.go` around lines 46 - 52, Validate and reject workspace
names that escape the workspace root before joining paths: ensure
input.Workspace does not contain "/" or ".." segments and/or resolve the
absolute path of wsDir (using WorkspaceRoot() and input.Workspace) and verify it
is a child of WorkspaceRoot() (e.g., using filepath.Abs + filepath.Rel or
filepath.Clean) before creating srcDir; if validation fails return the existing
error instead of proceeding to create/write ANSWER.md or spawn the agent
process. Target the variables and calls wsDir, srcDir, WorkspaceRoot(), and
input.Workspace when adding the checks so the code never allows a path outside
the workspace root.
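The `filepath.IsLocal` approach mentioned above (available since Go 1.20) can be sketched as a small guard; `safeJoin` is a hypothetical helper name, not code from the repository:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin joins name under root only when name is a single, local path
// segment, mirroring the check the review recommends for resume.go.
func safeJoin(root, name string) (string, error) {
	// IsLocal rejects absolute paths and any path that lexically escapes the
	// current directory (e.g. "../etc/passwd", "a/../.."); the separator check
	// additionally restricts the name to one segment.
	if !filepath.IsLocal(name) || strings.ContainsRune(name, filepath.Separator) {
		return "", fmt.Errorf("workspace name escapes root: %q", name)
	}
	return filepath.Join(root, name), nil
}

func main() {
	root := "/data/workspaces"
	for _, n := range []string{"agent-123", "../etc/passwd", "/etc/passwd", "a/../.."} {
		p, err := safeJoin(root, n)
		fmt.Println(n, "->", p, err)
	}
}
```

Because the check is purely lexical, it runs before any filesystem access, so `os.Stat`, the `ANSWER.md` write, and the agent spawn all operate on a path already proven to sit under the root.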
func (s *PrepSubsystem) pushAndMerge(ctx context.Context, repoDir, repo string) error {
	// Push to dev
	pushCmd := exec.CommandContext(ctx, "git", "push", "github", "HEAD:refs/heads/dev", "--force")
	pushCmd.Dir = repoDir
	if out, err := pushCmd.CombinedOutput(); err != nil {
		return coreerr.E("pushAndMerge", "push failed: "+string(out), err)
Use a leased push for github/dev.
Line 238 uses git push --force, which can overwrite commits added after candidate selection or by another queue worker. This is a non-idempotent external write on a shared branch; please use --force-with-lease at minimum, or avoid rewriting dev entirely.
Suggested fix
- pushCmd := exec.CommandContext(ctx, "git", "push", "github", "HEAD:refs/heads/dev", "--force")
+ pushCmd := exec.CommandContext(
+ 	ctx,
+ 	"git", "push", "github", "HEAD:refs/heads/dev",
+ 	"--force-with-lease=refs/heads/dev",
+ )

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (s *PrepSubsystem) pushAndMerge(ctx context.Context, repoDir, repo string) error {
	// Push to dev
	pushCmd := exec.CommandContext(
		ctx,
		"git", "push", "github", "HEAD:refs/heads/dev",
		"--force-with-lease=refs/heads/dev",
	)
	pushCmd.Dir = repoDir
	if out, err := pushCmd.CombinedOutput(); err != nil {
		return coreerr.E("pushAndMerge", "push failed: "+string(out), err)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/review_queue.go` around lines 236 - 241, In pushAndMerge, replace
the non-idempotent forced push that uses "git push ... --force" (constructed in
pushCmd) with a leased push to avoid overwriting concurrent commits: update the
git args to use "--force-with-lease" (or implement a safeguard that avoids
rewriting "dev" entirely) so the push command will fail if the remote branch has
new commits; ensure the change is applied to the exec.CommandContext invocation
that builds pushCmd in the pushAndMerge function.
func (s *PrepSubsystem) runVerification(srcDir string) verifyResult {
	if fileExists(filepath.Join(srcDir, "go.mod")) {
		return s.runGoTests(srcDir)
	}
	if fileExists(filepath.Join(srcDir, "composer.json")) {
		return s.runPHPTests(srcDir)
	}
	if fileExists(filepath.Join(srcDir, "package.json")) {
		return s.runNodeTests(srcDir)
	}
	return verifyResult{passed: true, testCmd: "none", output: "No test runner detected"}
}
Fail closed when verification cannot run.
Several “no runner / unreadable config” paths return passed: true, which allows auto-merge without executing any tests.
Suggested fix
- return verifyResult{passed: true, testCmd: "none", output: "No test runner detected"}
+ return verifyResult{passed: false, testCmd: "none", output: "No supported test runner detected"}
@@
- return verifyResult{passed: true, testCmd: "none", output: "Could not read package.json"}
+ return verifyResult{passed: false, testCmd: "none", output: "Could not read package.json"}
@@
- return verifyResult{passed: true, testCmd: "none", output: "No test script in package.json"}
+ return verifyResult{passed: false, testCmd: "none", output: "No test script in package.json"}

Also applies to: 274-276, 281-283
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/verify.go` around lines 218 - 229, The verification paths
currently "fail open" by returning verifyResult{passed:true,...} when no test
runner or config is detected; change these to fail closed by returning
passed:false, set testCmd to "none" (or the attempted runner name), and set
output to a clear error message like "No test runner detected" or "Unreadable
config" so merges are blocked; update the runVerification function and any
similar branches in the related verification helpers (e.g., the
unreadable-config branches in the runGoTests/runPHPTests/runNodeTests flows) to
return passed:false instead of true.
cmd2 := exec.Command("./vendor/bin/pest", "--no-interaction")
cmd2.Dir = srcDir
out2, err2 := cmd2.CombinedOutput()
if err2 != nil {
	return verifyResult{passed: true, testCmd: "none", output: "No PHP test runner found"}
}
return verifyResult{passed: true, output: string(out2), exitCode: 0, testCmd: "vendor/bin/pest"}
}
Pest failures are currently reported as passes.
In the fallback path, any err2 (including test assertion failures with a non-zero exit code) returns passed: true.
Suggested fix
cmd2 := exec.Command("./vendor/bin/pest", "--no-interaction")
cmd2.Dir = srcDir
out2, err2 := cmd2.CombinedOutput()
if err2 != nil {
- return verifyResult{passed: true, testCmd: "none", output: "No PHP test runner found"}
+ if exitErr, ok := err2.(*exec.ExitError); ok {
+ return verifyResult{
+ passed: false,
+ output: string(out2),
+ exitCode: exitErr.ExitCode(),
+ testCmd: "vendor/bin/pest",
+ }
+ }
+ return verifyResult{passed: false, testCmd: "vendor/bin/pest", output: fmt.Sprintf("Failed to run pest: %v", err2), exitCode: 1}
}
return verifyResult{passed: true, output: string(out2), exitCode: 0, testCmd: "vendor/bin/pest"}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
cmd2 := exec.Command("./vendor/bin/pest", "--no-interaction")
cmd2.Dir = srcDir
out2, err2 := cmd2.CombinedOutput()
if err2 != nil {
	if exitErr, ok := err2.(*exec.ExitError); ok {
		return verifyResult{
			passed:   false,
			output:   string(out2),
			exitCode: exitErr.ExitCode(),
			testCmd:  "vendor/bin/pest",
		}
	}
	return verifyResult{passed: false, testCmd: "vendor/bin/pest", output: fmt.Sprintf("Failed to run pest: %v", err2), exitCode: 1}
}
return verifyResult{passed: true, output: string(out2), exitCode: 0, testCmd: "vendor/bin/pest"}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/verify.go` around lines 259 - 266, The fallback block in
verify.go treats any err2 from cmd2.CombinedOutput() as "no test runner" and
returns passed: true; instead detect whether err2 is an exec.ExitError (non-zero
test exit) and treat that as a failing test: set passed: false, include the
CombinedOutput() string as output and the actual exit code (from
ExitError.ExitCode()) and set testCmd to "vendor/bin/pest"; only treat the case
where the binary is missing (e.g., os.IsNotExist or other exec-related absence)
as "no PHP test runner found" with passed: true. Update the logic around cmd2,
err2, and verifyResult accordingly so test failures are reported correctly.
// resolveWorkspaceDir converts a workspace name to full path.
func (s *PrepSubsystem) resolveWorkspaceDir(name string) string {
	if filepath.IsAbs(name) {
		return name
	}
	return filepath.Join(WorkspaceRoot(), name)
}
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the watch.go file around the target lines and understand the full context
cat -n pkg/agentic/watch.go | head -200 | tail -50
🏁 Script executed:
# Get the full watch.go file to understand the broader context
wc -l pkg/agentic/watch.go
🏁 Script executed:
# Search for all uses of resolveWorkspaceDir to see how it's called
rg "resolveWorkspaceDir" --context 3
🏁 Script executed:
# Check how WorkspaceRoot is defined and what it returns
rg "func WorkspaceRoot" -A 5
🏁 Script executed:
# Also check imports to understand filepath usage
head -30 pkg/agentic/watch.go
🏁 Script executed:
# Find where 'remaining' is populated in the watch function
rg "remaining" pkg/agentic/watch.go -B 5 -A 2
🏁 Script executed:
# Check the full Watch function to understand input handling
cat -n pkg/agentic/watch.go | head -140
🏁 Script executed:
# Search for input validation on WatchInput
rg "WatchInput" -A 10 pkg/agentic/watch.go | head -50
🏁 Script executed:
# Test filepath.Join behavior with .. segments in Go
python3 << 'EOF'
import subprocess

# Test Go's filepath.Join behavior with .. segments
test_code = '''
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	wsRoot := "/home/user/workspace"
	// Test 1: absolute path
	test1 := filepath.Join(wsRoot, "/etc/passwd")
	fmt.Printf("Test 1 (absolute): %s\\n", test1)
	// Test 2: relative path with ..
	test2 := filepath.Join(wsRoot, "../../../etc/passwd")
	fmt.Printf("Test 2 (relative with ..): %s\\n", test2)
	// Test 3: relative path with nested ..
	test3 := filepath.Join(wsRoot, "myworkspace/../../etc/passwd")
	fmt.Printf("Test 3 (relative with nested ..): %s\\n", test3)
	// Test 4: normal relative path
	test4 := filepath.Join(wsRoot, "myworkspace")
	fmt.Printf("Test 4 (normal relative): %s\\n", test4)
	// Test 5: check if path traversal is possible
	test5 := filepath.Join(wsRoot, "workspace", "..", "..", "etc", "passwd")
	fmt.Printf("Test 5 (multiple ..): %s\\n", test5)
}
'''

with open('/tmp/test.go', 'w') as f:
    f.write(test_code)

# Run the test
result = subprocess.run(['go', 'run', '/tmp/test.go'], capture_output=True, text=True, timeout=10)
print(result.stdout)
if result.stderr:
    print("STDERR:", result.stderr)
EOF
🏁 Script executed:
# Find the readStatus function to see if it validates the path
rg "func readStatus" -A 15
🏁 Script executed:
# Check if there's any path validation or cleaning in the codebase
rg "filepath.Clean" pkg/agentic/
🏁 Script executed:
# Look at readStatus more closely to understand what it does with the path
rg "readStatus" -B 2 -A 10 pkg/agentic/
🏁 Script executed:
# Check the Go filepath documentation behavior by examining actual usage patterns
# Let's test filepath.Join with .. manually using a simpler approach
cat > /tmp/filepath_test.go << 'EOF'
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	wsRoot := "/home/user/workspace"
	// Test with ..
	result := filepath.Join(wsRoot, "../../../etc/passwd")
	fmt.Println("filepath.Join with ..: ", result)
	// Check whether the joined path stays within the root
	abs, _ := filepath.Abs(result)
	fmt.Println("Absolute: ", abs)
}
EOF
go run /tmp/filepath_test.go
🏁 Script executed:
# Test filepath.Join behaviour with Go code (without test file syntax)
cat > /tmp/test_join.go << 'EOF'
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	root := "/home/user/workspace"
	// Relative path with ..
	result1 := filepath.Join(root, "../../../../etc/passwd")
	fmt.Printf("filepath.Join with ..: %s\n", result1)
	// Absolute path check
	result2 := filepath.Join(root, "/etc/passwd")
	fmt.Printf("filepath.Join with absolute: %s\n", result2)
	// Normal relative
	result3 := filepath.Join(root, "myworkspace")
	fmt.Printf("filepath.Join normal: %s\n", result3)
}
EOF
cd /tmp && go run test_join.go
🏁 Script executed:
# Check if there's any validation elsewhere in the codebase
rg "filepath.Clean|filepath.Abs|filepath.EvalSymlinks" pkg/agentic/
🏁 Script executed:
# Check how the input flows from WatchInput to resolveWorkspaceDir
rg "WatchInput|input.Workspaces" pkg/agentic/watch.go
Validate that resolved workspace paths remain within WorkspaceRoot().
resolveWorkspaceDir allows callers to escape the managed workspace tree: absolute paths are returned unchanged, and relative paths with .. segments can traverse upwards (e.g. "../../../../etc/passwd" resolves to /etc/passwd). Add validation to reject any path that does not resolve beneath WorkspaceRoot() before calling readStatus.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agentic/watch.go` around lines 188 - 194, resolveWorkspaceDir currently
returns absolute or joined paths without checking they live under
WorkspaceRoot(), allowing escapes; change resolveWorkspaceDir (or add a
validation step before readStatus) to clean and make the path absolute, get the
absolute WorkspaceRoot(), compute filepath.Rel(root, resolved) and reject
(return an error) if the relative path begins with ".." or equals ".." (i.e. the
resolved path is outside the workspace); ensure callers such as readStatus
handle the new error instead of operating on an out-of-tree path.
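The containment check the prompt describes (filepath.Abs plus filepath.Rel) can be sketched standalone; `resolveUnderRoot` is a hypothetical replacement shape for `resolveWorkspaceDir`, not code from the repository:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveUnderRoot resolves name (relative or absolute) and rejects any
// result outside root, as the review recommends for resolveWorkspaceDir.
func resolveUnderRoot(root, name string) (string, error) {
	absRoot, err := filepath.Abs(root)
	if err != nil {
		return "", err
	}
	candidate := name
	if !filepath.IsAbs(candidate) {
		candidate = filepath.Join(absRoot, candidate)
	}
	candidate = filepath.Clean(candidate)
	// If the path relative to the root starts with "..", the candidate
	// escaped the workspace tree and must be rejected.
	rel, err := filepath.Rel(absRoot, candidate)
	if err != nil {
		return "", err
	}
	if rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", fmt.Errorf("path %q escapes workspace root", name)
	}
	return candidate, nil
}

func main() {
	root := "/home/user/workspace"
	for _, n := range []string{"myws", "../../../../etc/passwd", "/etc/passwd"} {
		p, err := resolveUnderRoot(root, n)
		fmt.Println(n, "->", p, err)
	}
}
```

Returning an error (instead of the current string-only signature) forces callers such as readStatus to handle rejection explicitly rather than operating on an out-of-tree path.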
Forge → GitHub Sync
Commits: 58
Files changed: 329
Large sync — may need splitting for CodeRabbit review.
Co-Authored-By: Virgil <virgil@lethean.io>