16 changes: 16 additions & 0 deletions .claude/settings.json
@@ -0,0 +1,16 @@
{
"$schema": "https://json.schemastore.org/claude-code-settings.json",
"env": {},
"companyAnnouncements": [
"Welcome! Here is scroll-tech",
"Just ask me about what can help"
],
"permissions": {
"allow": [
"Bash(pwd)",
"Bash(ls *)",
"Bash(cat *)"
],
"deny": []
Comment on lines +9 to +14
⚠️ Potential issue | 🟠 Major

Bash(cat *) with an empty deny list allows reading any file — add deny rules for sensitive paths.

Bash(cat *) is pre-approved without any corresponding deny guard. Anyone running Claude Code in this repo will have it silently cat config files, .env files, SSH keys, or credential files without an interactive prompt. This matters especially for the integration-test-helper skill, which operates on live config directories containing DB credentials and decryption keys (per ProverE2E.md).

.env files, SSH keys, API tokens, and cloud credentials are the priority targets to protect — systematically add deny rules to prevent Claude Code from reading or modifying these files. The official Claude Code docs themselves include deny entries such as Read(./.env), Read(./.env.*), and Read(./secrets/**) as baseline protection examples.

🛡️ Proposed fix: add deny rules for sensitive paths
  "permissions": {
    "allow": [
      "Bash(pwd)",
      "Bash(ls *)",
      "Bash(cat *)"
    ],
-   "deny": []
+   "deny": [
+     "Bash(cat */.env)",
+     "Bash(cat */.env.*)",
+     "Bash(cat *credentials*)",
+     "Bash(cat *secret*)",
+     "Read(**/.env)",
+     "Read(**/.env.*)",
+     "Read(**/secrets/**)"
+   ]
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/settings.json around lines 9 - 14, The settings currently allow
"Bash(cat *)" while "deny" is empty; update the .claude/settings.json deny list
to block reading/modifying sensitive files by adding deny rules such as
Read(./.env), Read(./.env.*), Read(./secrets/**), Read(./**/.ssh/**),
Read(./**/*.pem), Read(./**/id_*) and Write(./.env) (and any other write/modify
denies for secrets) so the "Bash(cat *)" allowance cannot access credentials or
keys; locate the "allow" array and replace the empty "deny" array with these
deny entries to explicitly prevent disclosure of .env, SSH keys, PEMs, secrets
folders, and similar sensitive paths.

}
}
36 changes: 36 additions & 0 deletions .claude/skills/db-query/SKILL.md
@@ -0,0 +1,36 @@
---
name: db-query
description: Do query from database for common task
model: sonnet
allowed-tools: Bash(psql *)
---

User could like to know about the status of L2 data blocks and proving task, following is their request:

$ARGUMENTS

(If you find there is nothing in the request above, just tell "nothing to do" and stop)

You should have known the data sheme of our database, if not yet, read it from the `.sql` files under `database/migrate/migrations`.

According to use's request, generate the corresponding SQL expression and query the database. For example, if user ask "list the assigned chunks", it means "query records from `chunk` table with proving_status=2 (assigned)", or the SQL expression 'SELECT * from chunk where proving_status=2;'. If it is not clear, you can ask user which col they are indicating to, and list some possible options.
Comment on lines +14 to +16

⚠️ Potential issue | 🟡 Minor

Minor typos and phrasing issues.

  • Line 14: "data sheme" → "data schema"; "You should have known" → "You should already know"
  • Line 16: "use's request" → "user's request"
✏️ Proposed fix
-You should have known the data sheme of our database, if not yet, read it from the `.sql` files under `database/migrate/migrations`.
+You should already know the data schema of our database. If not, read it from the `.sql` files under `database/migrate/migrations`.

-According to use's request, generate the corresponding SQL expression and query the database.
+According to user's request, generate the corresponding SQL expression and query the database.
🧰 Tools
🪛 LanguageTool

[grammar] ~14-~14: Ensure spelling is correct
Context: ...d stop) You should have known the data sheme of our database, if not yet, read it fr...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/db-query/SKILL.md around lines 14 - 16, Fix minor typos and
improve phrasing in .claude/skills/db-query/SKILL.md by replacing "data sheme"
with "data schema", changing "You should have known" to "You should already
know", and changing "use's request" to "user's request"; update the surrounding
sentence so it reads smoothly (e.g., "You should already know the data schema of
our database..." and "According to the user's request, generate the
corresponding SQL expression...") while preserving the examples and intent.


For the generated SQL, following rules MUST be obey:

+ Limit the number of records to 20, unless user has a specification explicitly like "show me ALL chunks".
+ Following cols can not be read by human and contain very large texts, they MUST be excluded in the SQL expression:
+ For all table, any col named "proof"
+ "header" and "transactions" in `l2_block` table
+ "calldata" in `l1_message`
+ Always omit the `deleted_at` col, never include them in query or use in where condition
+ Without explicit specification, the records should be ordered by the `updated_at` col, the most recent one first.

When you has decided the SQL expression, always print it out.

You use psql client to query from our PostgreSQL db. When launching psql, always with "-w" options, and use "-o" to send all ouput to `query_report.txt` file under system's temporary dir, like /tmp. You MUST NOT read the generated report.

⚠️ Potential issue | 🟡 Minor

Fixed output path risks silent overwrite on concurrent runs.

query_report.txt under /tmp is a fixed filename. If two developers (or two terminals) invoke the skill simultaneously on a shared machine, the second run will silently clobber the first report. Consider using a timestamped or mktemp-generated filename, e.g., query_report_$(date +%s).txt.

✏️ Proposed fix
-You use psql client to query from our PostgreSQL db. When launching psql, always with "-w" options, and use "-o" to send all ouput to `query_report.txt` file under system's temporary dir, like /tmp. You MUST NOT read the generated report.
+You use psql client to query from our PostgreSQL db. When launching psql, always with "-w" options, and use "-o" to send all output to a uniquely named file under the system's temporary dir, e.g., `/tmp/query_report_<timestamp>.txt` (use the current epoch seconds for the timestamp). You MUST NOT read the generated report.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/db-query/SKILL.md at line 30, The hardcoded output file
"query_report.txt" in the psql invocation (the "-o query_report.txt" usage and
reference to /tmp) risks silent overwrites; change the psql invocation to write
to a unique temp filename (use mktemp or append a timestamp like
query_report_$(date +%s).txt) and update any related references to use that
variable, ensuring the skill continues to pass "-w" to psql and still does NOT
read the generated report; update the code that builds the psql command (where
"-o query_report.txt" is inserted) to generate and use the unique temp path
instead.
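
The unique-filename approach can be sketched in shell as follows (the `mktemp` template and the commented-out `psql` line are illustrative; a live DSN would be required to actually run the query):

```shell
# mktemp guarantees a fresh file per invocation, so two concurrent skill
# runs on a shared machine write separate reports instead of clobbering
# a single fixed /tmp/query_report.txt.
A="$(mktemp /tmp/query_report.XXXXXX)"
B="$(mktemp /tmp/query_report.XXXXXX)"
[ "$A" != "$B" ] && echo "distinct reports"
# psql -w -o "$A" "$DSN" -c "$SQL"   # hypothetical invocation; DSN from config.json
rm -f "$B"
```

`mktemp` also creates the file atomically, which avoids the race window that a purely timestamp-based name still leaves open when two runs start in the same second.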


If the psql failed since authentication, remind user to prepare their `.pgpass` file under home dir.

You should have known the endpoint of the database before, in the form of PostgreSQL DSN. If not, try to read it from the `db.dsn` field inside of `coordinator/build/bin/conf/config.json`. If still not able to get the data, ask via Ask User Question to get the endpoint.
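
Taken together, the query rules above imply a shape like this hedged sketch (the `chunk` column names other than `proving_status` and `updated_at` are assumptions, not verified against the migrations):

```shell
# Build the SQL first and print it (per the "always print it out" rule):
# explicit column list so "proof" and deleted_at never appear, newest
# updated_at first, capped at 20 rows.
SQL='SELECT hash, proving_status, created_at, updated_at FROM chunk WHERE proving_status = 2 ORDER BY updated_at DESC LIMIT 20;'
echo "$SQL"
# psql -w -o /tmp/query_report.txt "$DSN" -c "$SQL"   # hypothetical; DSN from config.json
```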


7 changes: 7 additions & 0 deletions .claude/skills/integration-test-helper/ProverE2E.md
@@ -0,0 +1,7 @@
## Notes for handling ProverE2E

+ Ensure the `conf` dir has been correctly linked and remind user which path it currently links to.

+ If some files are instructed to be generated, but they have been existed, NEVER refer the content before the generation. They may be left from different setup and contain wrong message for current process.

+ In step 4, if the `l2.validium_mode` is set to true, MUST Ask User for decryption key to fill the `sequencer.decryption_key` field. The key must be a hex string WITHOUT "0x" prefix.
62 changes: 62 additions & 0 deletions .claude/skills/integration-test-helper/SKILL.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,62 @@
---
name: integration-test-helper
description: Assist with the process described in the specified directory to prepare or advance integration tests. The target directory and instruction section can be specified, like "tests/prover-e2e test".
model: sonnet
allowed-tools: Bash(make *), Bash(tee *), Bash(jq *)
---

This skill helps launching the full process described in the instructions, also investigate and report the results.

## Target directory

The **target directory** under which the setup process being run is: $ARGUMENTS[0].
Under the target dir there are the stuff and instructions. If the target dir above is empty, just use !`pwd`.

## Instructions

First read `README.md` under target directory, instructions should be under heading named ($ARGUMENTS[1]). If there is no such a heading name, just try the "Test" heading.

In additional, there are two optional places for more knowledge about current instructions:

+ An .md file under current skill dir, named from the top header of the `README.md` file or the name of target directory.
For example, if the target dir is `tests/prover-e2e`, the top header in `README.md` has "ProverE2E", so there may be a .md file named as `prover-e2e.md` or `ProverE2E.md`

+ All files under `experience` path (if it existed) of target dir contains additional experience, which is specialized for current host

## Run each step listed in instructions

The instructions often contain multiple steps which should be completed in sequence. Following are some rules MUST be obey while handling each step:

### "Must do" while executing commands in steps

Any command mentioned in steps should be executed by Bash tool, with following MUST DO for handling the outputs:

+ Use "| tee <log_file>" to capture output of bash tool into local file for investigating later. The file name of log should be in format as `<desc_of_ccommand>_<day>_<time>.log`
+ Do not read all output, after "| tee", use "|tail -n 50" to only catch the possible error message. That should be enough for common case.
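
The tee-then-tail pattern these bullets describe can be sketched like this (the step command and log name are placeholders, not from the repo):

```shell
# Full output is preserved in a timestamped log for later investigation,
# while only the last lines reach the console.
LOG="/tmp/demo_step_$(date +%Y%m%d_%H%M%S).log"
printf '%s\n' "step starting" "lots of output" "error: something failed" | tee "$LOG" | tail -n 2
```

All three lines land in `$LOG`, but only the final two are shown, which keeps the captured context small without losing anything.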

It may need to jump to other directories for executing a step. We MUST go back to target directory after every step has been completed. Also, DO NOT change anything outside of target directy.

### When error raised
Command execution should get success return. If error raised while executing, do following process:

1. Try to analysis the reason of error, first from the caught error message. If there is no enough data, grep useful information from the log file of whole output just captured.

2. Ask User for next action, options are:
+ Retry with resolution derived from error analyst
+ Retry, with user provide tips to resolve the issue
+ Just retry, user has resolved the issue by theirself
+ Stop here, discard current and following steps, do after completion

Error often caused by some mismacthing of configruation in current host. Here are some tips which may help:

* Install the missing tools / libs via packet manager
* Fix the typo, or complete missed fields in configuration files
* Copy missed files, it may be just put in some place of the project or can be downloaded according to some documents.

## After completion

When every step has done, or the process stop by user, make following materials before stop:

+ Package all log files generated before into a tarball and save it in tempoaray path. Then clear all log files.
Comment on lines +34 to +60

⚠️ Potential issue | 🟡 Minor

Fix typos and a piping syntax error in the skill instructions.

As a prompt consumed by Claude, clarity matters. Several typos and one shell-syntax issue are present:

| Line | Current | Fix |
| --- | --- | --- |
| 34 | `<desc_of_ccommand>` | `<desc_of_command>` |
| 35 | `\|tail -n 50` | `\| tail -n 50` (missing space) |
| 37 | target directy (×2) | target directory |
| 45 | error analyst | error analysis |
| 47 | theirself | themselves |
| 50 | mismacthing of configruation | mismatching of configuration |
| 60 | tempoaray path | temporary path |
✍️ Proposed fix
-+ Use "| tee <log_file>" to capture output of bash tool into local file for investigating later. The file name of log should be in format as `<desc_of_ccommand>_<day>_<time>.log`
-+ Do not read all output, after "| tee", use "|tail -n 50" to only catch the possible error message. That should be enough for common case.
++ Use "| tee <log_file>" to capture output of bash tool into local file for investigating later. The file name of log should be in format as `<desc_of_command>_<day>_<time>.log`
++ Do not read all output, after "| tee", use "| tail -n 50" to only catch the possible error message. That should be enough for common case.
 
-It may need to jump to other directories for executing a step. We MUST go back to target directory after every step has been completed. Also, DO NOT change anything outside of target directy.
+It may need to jump to other directories for executing a step. We MUST go back to target directory after every step has been completed. Also, DO NOT change anything outside of target directory.
 
    + Retry with resolution derived from error analyst
-   + Just retry, user has resolved the issue by theirself
+   + Retry with resolution derived from error analysis
+   + Just retry, user has resolved the issue by themselves
 
-Error often caused by some mismacthing of configruation in current host.
+Error often caused by some mismatching of configuration in current host.
 
-+ Package all log files generated before into a tarball and save it in tempoaray path. Then clear all log files.
++ Package all log files generated before into a tarball and save it in temporary path. Then clear all log files.
🧰 Tools
🪛 LanguageTool

[grammar] ~45-~45: Ensure spelling is correct
Context: ...etry with resolution derived from error analyst + Retry, with user provide tips to resolve...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)


[grammar] ~50-~50: Ensure spelling is correct
Context: ... completion Error often caused by some mismacthing of configruation in current host. Here ...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/integration-test-helper/SKILL.md around lines 34 - 60, Fix
the typos and the shell-pipe spacing in the SKILL.md text: replace the token
"<desc_of_ccommand>" with "<desc_of_command>", change "\|tail -n 50" to "| tail
-n 50" (add the missing space after the pipe), correct both occurrences of
"target directy" to "target directory", change "error analyst" to "error
analysis", replace "theirself" with "themselves", fix "mismacthing of
configruation" to "mismatching of configuration", and change "tempoaray path" to
"temporary path"; ensure the surrounding sentences (the lines containing "Use
\"| tee <log_file>\"", "Do not read all output, after \"| tee\"", "We MUST go
back to target directory", and the three error-handling bullets) keep their
original meaning while applying these exact textual corrections.

+ Generate a report file under target directory, with file name like `report_<day>_<time>.txt`.
+ For steps once failed and being resolved later, record the resolution into a file under `experience` path in target dir.
3 changes: 3 additions & 0 deletions .gitignore
@@ -23,5 +23,8 @@ coverage.txt
sftp-config.json
*~

# AI skills
**/experience

target
zkvm-prover/config.json
7 changes: 7 additions & 0 deletions CLAUDE.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
The mono repo for scroll-tech's services. See @README.md to know about the project.

Skills has been set to help some process being handled easily. When asked by "what can you help", list following skills, along with the skill-description and invoke cost estimation here:

1. `db-query`: ~$0.1 per query
2. `integration-test-helper` Now ready for following target:
+ `tests/prover-e2e`: ~$1.0 per process
Comment on lines +3 to +7

⚠️ Potential issue | 🟡 Minor

Fix grammar issues in the instructions.

A few issues in the CLAUDE.md content that Claude Code loads as context:

  • Line 3: "Skills has been set" → "Skills have been set"; "When asked by" → "When asked"
  • Line 6: "Now ready for following target" → "Now ready for the following target"
✍️ Proposed fix
-Skills has been set to help some process being handled easily. When asked by "what can you help", list following skills, along with the skill-description and invoke cost estimation here:
+Skills have been set to assist with processes that benefit from contextual guidance. When asked "what can you help", list the following skills along with the skill description and estimated cost:
 
 1. `db-query`: ~$0.1 per query
-2. `integration-test-helper` Now ready for following target:
+2. `integration-test-helper` Now ready for the following target:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CLAUDE.md` around lines 3 - 7, Update the CLAUDE.md text to correct grammar:
change "Skills has been set" to "Skills have been set", change "When asked by"
to "When asked", and change "Now ready for following target" to "Now ready for
the following target"; ensure the listed items (`db-query` and
`integration-test-helper` / `tests/prover-e2e`) and their cost estimates remain
unchanged and the punctuation/spacing around bullets stays consistent.

2 changes: 1 addition & 1 deletion coordinator/Makefile
@@ -34,7 +34,7 @@ coordinator_cron:
coordinator_tool:
go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_tool ./cmd/tool

localsetup: coordinator_api ## Local setup: build coordinator_api, copy config, and setup releases
localsetup: libzkp coordinator_api ## Local setup: build coordinator_api, copy config, and setup releases
mkdir -p build/bin/conf
@echo "Copying configuration files..."
@if [ -f "$(PWD)/conf/config.template.json" ]; then \
4 changes: 3 additions & 1 deletion tests/prover-e2e/.gitignore
@@ -1,3 +1,5 @@
build/*
testset.json
conf
conf
*.log
*.txt

⚠️ Potential issue | 🟡 Minor

*.txt is broad — may silently swallow future tracked files.

The pattern ignores all .txt files in tests/prover-e2e. If any static test fixture or documentation file with a .txt extension is added later, it will be silently excluded from git without an explicit ! negation. Consider narrowing to the known artifact pattern to make intent explicit:

💡 Proposed narrower pattern
-*.txt
+report_*.txt
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/prover-e2e/.gitignore` at line 5, The .gitignore entry using the broad
pattern "*.txt" will silently ignore any .txt added later; replace it with a
narrower, explicit pattern (e.g., a specific artifact directory or filename glob
like "artifacts/*.txt" or "build/*.txt") or add explicit negations for tracked
.txt fixtures (use "!docs/*.txt" or similar) so only intended generated/test
artifact files are ignored; update the "*.txt" line in the .gitignore to the
chosen specific pattern.

22 changes: 15 additions & 7 deletions tests/prover-e2e/README.md
@@ -1,16 +1,24 @@
## A new e2e test tool to setup a local environment for testing coordinator and prover.
# ProverE2E: A new e2e test tool to setup a local environment for testing coordinator and prover.

It contains data from some blocks in a specified testnet, and helps to generate a series of chunks/batches/bundles from these blocks, filling the DB for the coordinator, so an e2e test (from chunk to bundle) can be run completely local

Prepare:
## Prepare
link the staff dir as "conf" from one of the dir with staff set, currently we have following staff sets:
+ sepolia: with blocks from scroll sepolia
+ cloak-xen: with blocks from xen sepolia, which is a cloak network

Steps:
## Test
1. run `make all` under `tests/prover-e2e`, it would launch a postgreSql db in local docker container, which is ready to be used by coordinator (include some chunks/batches/bundles waiting to be proven)
2. setup assets by run `make coordinator_setup`
3. in `coordinator/build/bin/conf`, update necessary items in `config.template.json` and rename it as `config.json`
4. build and launch `coordinator_api` service locally
5. setup the `config.json` for zkvm prover to connect with the locally launched coordinator api
6. in `zkvm-prover`, launch `make test_e2e_run`, which would specific prover run locally, connect to the local coordinator api service according to the `config.json`, and prove all tasks being injected to db in step 1.
3. come into `coordinator/build/bin` for following steps:
+ rename `conf/config.template.json` as `conf/config.json`
+ if the `l2.validium_mode` is set to true in `config.json`, the `sequencer.decryption_key` must be set

⚠️ Potential issue | 🟡 Minor

No guidance on how to obtain sequencer.decryption_key.

The README mandates setting sequencer.decryption_key when l2.validium_mode is true but doesn't indicate where this key comes from (e.g., a secret store, the test environment's staff directory, or a generated value). This will block developers running validium-mode tests for the first time.

Consider adding a brief note pointing to the source (e.g., "find this key in the staff set directory under conf/" or similar).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/prover-e2e/README.md` at line 15, The README line about
l2.validium_mode requiring sequencer.decryption_key lacks source guidance;
update the README to state exactly where to obtain that key (e.g., the project’s
staff/staging secret store, the team’s staff-set directory, or by generating it
with the project’s key-generation utility) and include a short note on the
expected format and any steps to retrieve or generate it so developers running
validium-mode tests can find or create sequencer.decryption_key quickly;
reference the l2.validium_mode and sequencer.decryption_key symbols in the added
note.

+ launch `coordinator_api` service by executing the file
4. come into `zkvm-prover` for following steps:
+ copy `config.template.json` to `config.json`,
+ set the `sdk_config.coordinator.base_url` field in `config.json`, so zkvm prover would connect with the locally launched coordinator api,
for common case the url is `http://localhost:8390` (the default listening port of coordinator api)
+ launch `make test_e2e_run`, which would specific prover run locally, connect to the local coordinator api service according to the `config.json`, and prove all tasks being injected to db in step 1.
Comment on lines +19 to +21

⚠️ Potential issue | 🟡 Minor

Default port in README conflicts with ChangePort.md and both test reports.

Line 20 states the default coordinator API URL is http://localhost:8390, but tests/prover-e2e/experience/ChangePort.md explicitly instructs changing the port to 18390, and both report_20260225_1113.txt and report_20260225_1156.txt confirm port 18390 was used in practice. A developer following only the README will likely encounter connection failures.

Either update the default URL here to http://localhost:18390, or add a cross-reference to ChangePort.md at this step.

✏️ Proposed fix
-    for common case the url is `http://localhost:8390` (the default listening port of coordinator api)
+    for common case the url is `http://localhost:18390` (see `experience/ChangePort.md` for why port 18390 is used instead of the default 8390)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/prover-e2e/README.md` around lines 19 - 21, Update the README’s
instruction about sdk_config.coordinator.base_url in config.json to match the
actual default port used in tests and docs: change the example URL from
http://localhost:8390 to http://localhost:18390 (or alternatively add a clear
cross-reference to tests/prover-e2e/experience/ChangePort.md) and mention that
reports (report_20260225_1113.txt and report_20260225_1156.txt) and
ChangePort.md use port 18390 so readers won’t encounter connection failures.
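
The port correction itself is a one-line edit; a hedged sketch (the file path and JSON shape are illustrative stand-ins for the real `zkvm-prover` config):

```shell
# Point the prover's coordinator URL at the non-default port 18390.
cat > /tmp/demo_config.json <<'EOF'
{"sdk_config": {"coordinator": {"base_url": "http://localhost:8390"}}}
EOF
# -i.bak keeps a backup of the original; works with both GNU and BSD sed.
sed -i.bak 's#localhost:8390#localhost:18390#' /tmp/demo_config.json
grep -o 'localhost:18390' /tmp/demo_config.json
```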


## AI Helper
The test process can be run with the help of `integration-test-helper` skill (~$1.0 for each full process)
1 change: 1 addition & 0 deletions tests/prover-e2e/experience/ChangePort.md
@@ -0,0 +1 @@
Let coordiantor api listen at port 18390 to avoid security restriction or port confliction. Also change the corresponding field in `config.json`

⚠️ Potential issue | 🟡 Minor

Typo: "coordiantor" → "coordinator".

✏️ Proposed fix
-Let coordiantor api listen at port 18390 to avoid security restriction or port confliction. Also change the corresponding field in `config.json`
+Let coordinator api listen at port 18390 to avoid security restriction or port confliction. Also change the corresponding field in `config.json`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/prover-e2e/experience/ChangePort.md` at line 1, Fix the typo in the
ChangePort.md text by replacing "coordiantor" with "coordinator" (update the
sentence that currently reads "Let coordiantor api listen..." to "Let
coordinator api listen..."); do not change meaning or port number and ensure any
corresponding mention in config.json remains consistent.

1 change: 1 addition & 0 deletions zkvm-prover/.work/.gitignore
@@ -1,4 +1,5 @@
*.vmexe
*.elf
openvm.toml
*.bin
*.sol