
fix: pre-audit fixes and correctness improvements #40

Open

panos-xyz wants to merge 35 commits into main from panos/pre-audit-fixes

Conversation

panos-xyz (Contributor) commented Mar 9, 2026

Summary

Pre-audit batch of fixes, security hardening, and correctness improvements for morph-reth.

Security & Correctness Fixes

  • Pre-audit security hardening: block size validation, MorphTx version checks, fee refund safety, txpool validation improvements
  • BLOCKHASH opcode alignment: implement Morph's custom keccak256(chain_id || block_number) semantics to match go-eth
  • SLOAD original_value fix: scope fix to token fee txs using save-restore pattern to prevent state corruption
  • Token fee handling: decouple token fee logs from handler pipeline to survive tx reverts; use evm_call for ERC20 balanceOf; discard balanceOf logs to match go-eth StaticCall semantics
  • Token transfer check alignment: align with go-eth behavior and clean up txpool removal logic
  • L1BlockInfo: read per-tx instead of per-block cache to ensure correctness across block boundaries
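As a rough illustration of the custom BLOCKHASH semantics above: instead of looking up a stored ancestor hash, the opcode hashes a preimage built from the chain id and the requested block number. A minimal std-only sketch of that preimage construction follows; the 32-byte big-endian word layout is an assumption for illustration, not necessarily the node's exact encoding:

```rust
// Hypothetical preimage for Morph's BLOCKHASH: keccak256(chain_id || block_number).
// Word width and padding are assumed; the real implementation may differ.
fn blockhash_preimage(chain_id: u64, block_number: u64) -> [u8; 64] {
    let mut buf = [0u8; 64];
    // Left-pad each value into a 32-byte big-endian word.
    buf[24..32].copy_from_slice(&chain_id.to_be_bytes());
    buf[56..64].copy_from_slice(&block_number.to_be_bytes());
    buf
}
```

The opcode would then return the keccak256 of this buffer (e.g. via a hashing helper such as alloy_primitives::keccak256) rather than performing the canonical ancestor-hash lookup.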

Refactoring & Performance

  • TxMorph Compact codec: rewrite using derive helper struct pattern for correctness and maintainability
  • Consensus validation: deduplicate MorphTx validation logic
  • Precompile contracts: load once instead of per-block reconstruction
  • Reth persistence threshold: tune to reduce disk I/O contention during sync
  • Reth fork bump: bump to f137577 (unwind state root warning fix)

Tooling

  • Hoodi testnet scripts: add local-test-hoodi/ scripts for Hoodi testnet sync testing
  • Stress test script: add for load testing

Test plan

  • cargo test --all passes
  • cargo clippy --all --all-targets -- -D warnings passes
  • Hoodi testnet sync tested with local-test-hoodi/sync-test.sh
  • Mainnet sync verified against go-eth at matching block heights

Summary by CodeRabbit

  • New Features

    • BLS12-381 and P256 precompiles; expanded hardfork precompile sets
    • Per-transaction L1-data and token-fee caching with pre/post-fee logs
    • Local "hoodi" test/orchestration scripts (prepare/start/stop/status/sync/etc.)
  • Bug Fixes

    • Fixes for token-fee/storage and ERC20 transfer validation
    • Enforced cross-block L1 message monotonicity, stricter timestamp/gas-limit rules
    • Earlier validation of transaction format and block continuity
  • Performance

    • Reduced cloning, fewer allocations, and improved preallocation
  • Tests

    • Expanded coverage for hardforks, fees, validation, and compact encoding

coderabbitai bot commented Mar 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Updated workspace reth dependencies; reworked consensus L1 validation and MorphTx gating; refactored REVM/EVM to cache per-tx L1/token fee info, replace RLP extraction with envelope-based MorphTx handling, add opcodes/precompiles, and adjust block/receipt/executor flows; txpool, primitives, payload builder, and many local hoodi scripts were also updated.

Changes

  • Workspace dependencies (Cargo.toml): Bumped many reth-* git revs and added/adjusted default-features = false / features for multiple reth crates.
  • Consensus (crates/consensus/src/validation.rs): Split header vs body L1 validation; added in-block L1 ordering, per-tx MorphTx gating, monotonic header checks, and hardfork-aware timestamp/gas rules; constructor became non-const and a chain_spec() accessor was added.
  • Engine API / RPC (crates/engine-api/src/builder.rs, crates/engine-api/src/rpc.rs): Introduced an EngineStateTracker block tag cache; specific discontinuity errors on canonical violations; new_safe_l2_block takes mut SafeL2Data; records the canonical head; adjusted RPC logging targets.
  • EVM block & receipts (crates/evm/src/block/..., crates/evm/src/block/receipt.rs): Per-block hardfork cached in the executor; per-tx cached L1 fee; receipts now include pre/post fee logs; adjusted get_morph_tx_fields signature and commit flow.
  • MorphEvm core (crates/evm/src/evm.rs, crates/revm/src/evm.rs): MorphEvm now wraps an inner revm instance; exposes cached token/L1 fee and pre/post fee logs; added helpers to access/take these caches; replaced PhantomData with a hardfork field.
  • REVM handler / token-fee / tx (crates/revm/src/handler.rs, crates/revm/src/token_fee.rs, crates/revm/src/tx.rs): Per-tx L1/token-fee caching and reimbursement refactor; added evm_call helpers and ERC20 balance checks; mapping-slot APIs switched to fixed-size keys; removed the RLP extraction path and fee_slot_original_values.
  • Precompiles & L1 block info (crates/revm/src/precompiles.rs, crates/revm/src/l1block.rs): Added BLS12-381 precompile addresses and input-size wrappers; L1BlockInfo fields made non-optional; added had_token_fee_deduction and an L1 fee capping constant.
  • Primitives & encoding (crates/primitives/src/...): MorphReceiptEnvelope.as_receipt returns &Receipt; MorphTxEnvelope.memo returns Option<&Bytes>; TxMorph.gas_limit changed u128→u64; compact encoding refactored to pack memo+input.
  • Payload builder & errors (crates/payload/builder/src/*): Preallocated Vec capacities for hot paths; removed two MorphPayloadBuilderError variants.
  • Tx pool & validation (crates/txpool/src/*): Pool bindings tightened to MorphPooledTransaction/Typed2718; validation now uses morph_tx.validate() and a new InvalidFormat error; removal now purges descendants; reduced cloning via trait-based helpers.
  • Node components & validator (crates/node/src/*): Pool builder disables the inner balance check for MorphTx; small tracing target renames in the validator.
  • RPC & morph-reth binary (crates/rpc/src/eth/transaction.rs, bin/morph-reth/src/main.rs, crates/revm/Cargo.toml): RPC clones the memo and performs early MorphTx validation; expanded main.rs comment; removed alloy-rlp.workspace = true from crates/revm/Cargo.toml.
  • Local testing tooling (local-test-hoodi/*, local-test/*): Added many hoodi/local scripts for prepare/start/stop/reset/status/sync-test/start-all/stop-all; updated .gitignore; adjusted shared local-test defaults.

Sequence Diagram(s)

sequenceDiagram
participant Client
participant RPC
participant EngineAPI
participant REVM_Handler as REVM
participant MorphEvm as EVM
participant DB

Client->>RPC: submit / build MorphTx
RPC->>EngineAPI: validate_l2_block / new_safe_l2_block
EngineAPI->>REVM: assemble payload, deduct fees
REVM->>EVM: validate_and_deduct_eth/token_fee (cache fee info)
EVM->>DB: read ERC20 balance / storage (via evm_call)
EVM-->>REVM: execution result + pre/post fee logs
REVM->>EngineAPI: return payload/import result
EngineAPI->>RPC: final response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested reviewers

  • anylots
  • chengwenxi

Poem

🐰 I nibbled bytes and hopped through trees of code,

Cached fees in burrows, stamped each hardfork road.
Envelopes snug, precompiles bloom in light,
Hoodi scripts hum, the node wakes from night. 🥕✨

🚥 Pre-merge checks: 3 passed

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'fix: pre-audit fixes and correctness improvements' is directly related to the changeset, which includes security hardening, correctness improvements, refactors, performance tweaks, and tooling additions.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch panos/pre-audit-fixes


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/rpc/src/eth/transaction.rs (1)

244-260: ⚠️ Potential issue | 🟠 Major

fee_limit still bypasses the new MorphTx validation path.

morph_tx.validate() only runs after this helper decides the request is Morph-specific, but that gate ignores fee_limit. A request like feeTokenID = 0, feeLimit > 0 still returns Ok(None) here, so the send/simulate paths silently downgrade it to a standard tx while try_into_tx_env() keeps the Morph-only fields.

Suggested fix
 fn try_build_morph_tx_from_request(
     req: &alloy_rpc_types_eth::TransactionRequest,
     fee_token_id: U64,
     fee_limit: U256,
     reference: Option<alloy_primitives::B256>,
     memo: Option<alloy_primitives::Bytes>,
 ) -> Result<Option<TxMorph>, &'static str> {
     let fee_token_id_u16 = u16::try_from(fee_token_id.to::<u64>()).map_err(|_| "invalid token")?;
 
     // Check if this should be a MorphTx
     let has_fee_token = fee_token_id_u16 > 0;
+    let has_fee_limit = fee_limit > U256::ZERO;
     let has_reference = reference.is_some();
     let has_memo = memo.as_ref().is_some_and(|m| !m.is_empty());
 
     if !has_fee_token && !has_reference && !has_memo {
+        if has_fee_limit {
+            return Err("fee_limit requires fee_token_id > 0");
+        }
         // No Morph-specific fields → standard Ethereum tx
         return Ok(None);
     }

Also applies to: 278-299

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/src/eth/transaction.rs` around lines 244 - 260, The helper
try_build_morph_tx_from_request currently treats requests with feeTokenID == 0
and feeLimit > 0 as non-Morph and returns Ok(None), so ensure fee_limit is
considered a Morph-specific field: add a has_fee_limit check (e.g. let
has_fee_limit = fee_limit > U256::ZERO) and include it in the condition that
decides Morph vs standard tx (replace if !has_fee_token && !has_reference &&
!has_memo with if !has_fee_token && !has_reference && !has_memo &&
!has_fee_limit), and apply the same change to the analogous branch around the
code referenced at 278-299 so morph_tx.validate() runs when fee_limit is set.
🧹 Nitpick comments (9)
crates/engine-api/src/builder.rs (1)

784-800: Test helper lacks FeeVault beneficiary override logic present in production code.

The test helper apply_executable_data_overrides directly sets beneficiary = data.miner (line 788), while the production code at lines 643-649 overrides beneficiary to Address::ZERO when FeeVault is enabled. This divergence may cause issues if tests later need to verify FeeVault-related behavior.

Consider adding a chain_spec parameter or a boolean flag to optionally apply the FeeVault override for consistency, or document that this helper intentionally excludes FeeVault logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/engine-api/src/builder.rs` around lines 784 - 800, The test helper
apply_executable_data_overrides currently sets header.inner.beneficiary =
data.miner but must mirror production FeeVault logic; update the helper (the
map_header closure that builds the MorphHeader before
RecoveredBlock::new_unhashed) to accept a chain_spec or a boolean
fee_vault_enabled flag and, when FeeVault is enabled, set
header.inner.beneficiary = Address::ZERO instead of data.miner (otherwise keep
data.miner), ensuring the other normalized fields remain unchanged so test
behavior matches production.
local-test-hoodi/common.sh (2)

39-39: Missing blank line before function definition.

There's no newline between the last environment variable default and the check_binary function, which reduces readability.

Add blank line
 : "${MORPH_GETH_RPC_URL:=}"
+
 check_binary() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/common.sh` at line 39, Add a blank line between the
environment variable default assignment for MORPH_GETH_RPC_URL and the start of
the check_binary function to improve readability; locate the line with ":
\"${MORPH_GETH_RPC_URL:=}\"" and insert one newline before the "check_binary"
function definition.

50-53: Glob expansion may fail with set -e if no files match.

The glob {[0-9]*,*.log*} on line 52 may fail if no files match and set -e is active. Consider adding || true or using shopt -s nullglob.

Make glob expansion safe
 cleanup_runtime_logs() {
   rm -f "${NODE_LOG_FILE}" "${RETH_LOG_FILE}"
-  rm -rf "$(dirname "${RETH_LOG_FILE}")"/{[0-9]*,*.log*}
+  rm -rf "$(dirname "${RETH_LOG_FILE}")"/{[0-9]*,*.log*} 2>/dev/null || true
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/common.sh` around lines 50 - 53, The rm -rf with the glob in
cleanup_runtime_logs can cause the script to exit under set -e when the pattern
"{[0-9]*,*.log*}" doesn't match anything; update cleanup_runtime_logs to make
the glob safe by either enabling nullglob around the removal (e.g., shopt -s
nullglob before the removal and restore after) or by appending a fallback to the
rm command (e.g., run the rm -rf "$(dirname "${RETH_LOG_FILE}")"/{[0-9]*,*.log*}
|| true) so that no-match expansion doesn't cause the script to fail; keep
references to NODE_LOG_FILE and RETH_LOG_FILE and ensure the dirname usage
remains the same.
local-test/node-start.sh (2)

26-34: Remove unnecessary trailing backslashes in array declaration.

Inside a Bash array declaration ( ... ), newlines are already handled as element separators. The trailing backslashes on lines 27-33 are unnecessary and could cause confusion.

Suggested cleanup
 args=(
-  --home "${NODE_HOME}" \
-  --l2.jwt-secret "${JWT_SECRET}" \
-  --l2.eth "http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" \
-  --l2.engine "http://${RETH_AUTHRPC_ADDR}:${RETH_AUTHRPC_PORT}" \
-  --l1.rpc "${MORPH_NODE_L1_RPC}" \
-  --sync.depositContractAddr "${MORPH_NODE_DEPOSIT_CONTRACT}" \
+  --home "${NODE_HOME}"
+  --l2.jwt-secret "${JWT_SECRET}"
+  --l2.eth "http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}"
+  --l2.engine "http://${RETH_AUTHRPC_ADDR}:${RETH_AUTHRPC_PORT}"
+  --l1.rpc "${MORPH_NODE_L1_RPC}"
+  --sync.depositContractAddr "${MORPH_NODE_DEPOSIT_CONTRACT}"
   --log.filename "${NODE_LOG_FILE}"
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test/node-start.sh` around lines 26 - 34, The args array declaration
uses unnecessary trailing backslashes which are only needed for multi-line shell
commands, not inside a Bash array; update the args=( ... ) block (the args
variable assignment) by removing the backslashes after each element (lines
containing NODE_HOME, JWT_SECRET, RETH_HTTP_ADDR, RETH_HTTP_PORT,
RETH_AUTHRPC_ADDR, RETH_AUTHRPC_PORT, MORPH_NODE_L1_RPC,
MORPH_NODE_DEPOSIT_CONTRACT, NODE_LOG_FILE) so each element is a plain quoted
string separated by newlines/whitespace within the array.

36-40: Word splitting approach for extra flags is acceptable but fragile.

The # shellcheck disable=SC2206 and subsequent word splitting on MORPH_NODE_EXTRA_FLAGS will fail on flags containing spaces (e.g., --flag "value with spaces"). This is acceptable if users are expected to only pass simple flags, but worth documenting the limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test/node-start.sh` around lines 36 - 40, The current word-splitting of
MORPH_NODE_EXTRA_FLAGS into extra_flags and then appending to args is fragile
for values with spaces; update the block around MORPH_NODE_EXTRA_FLAGS /
extra_flags / args to either (A) document this limitation clearly in an inline
comment above the code and instruct users to pass flags as separate env entries
or a shell array (e.g., export MORPH_NODE_EXTRA_FLAGS as a bash array), or (B)
change the parsing to preserve quoted values by using a safe eval-style
reconstruction (e.g., build extra_flags with eval
"extra_flags=($MORPH_NODE_EXTRA_FLAGS)") if you accept the eval risk — pick one
approach, add the explanatory comment, and ensure MORPH_NODE_EXTRA_FLAGS,
extra_flags, and args are adjusted accordingly.
local-test-hoodi/node-start.sh (1)

37-40: Use an array-safe format for MORPH_NODE_EXTRA_FLAGS.

extra_flags=(${MORPH_NODE_EXTRA_FLAGS}) does shell word-splitting and glob expansion, so quoted values or paths with spaces/* will not reach the node intact. If this knob needs arbitrary flags, prefer newline-delimited entries or a sourced Bash array.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/node-start.sh` around lines 37 - 40, Replace the unsafe
word-splitting assignment extra_flags=(${MORPH_NODE_EXTRA_FLAGS}) with a
newline-safe array loader such as: use mapfile -t extra_flags <<<
"$MORPH_NODE_EXTRA_FLAGS" (or IFS=$'\n' read -r -a extra_flags <<<
"$MORPH_NODE_EXTRA_FLAGS") to preserve quoted values and spaces and avoid glob
expansion, then keep args+=("${extra_flags[@]}") to append them; update the
block that checks MORPH_NODE_EXTRA_FLAGS accordingly.
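For reference, a small demo of the newline-delimited approach suggested above; the flag values here are made up for illustration:

```shell
# Newline-delimited flags survive embedded spaces, unlike plain word-splitting.
MORPH_NODE_EXTRA_FLAGS=$'--log.level debug\n--datadir /tmp/my node'
mapfile -t extra_flags <<< "${MORPH_NODE_EXTRA_FLAGS}"
printf '%s\n' "${#extra_flags[@]}"    # count of parsed flags
printf '[%s]\n' "${extra_flags[1]}"   # embedded space preserved
```

With the plain word-splitting version, the same input would produce four fragments and the node would receive a broken --datadir value.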
crates/txpool/src/morph_tx_validation.rs (1)

237-242: Prefer matching the error variant instead of the full reason string.

This assertion is tightly coupled to the exact wording returned by validate(), so harmless text edits will break the test. Checking MorphTxError::InvalidFormat { .. } plus a stable substring would keep the intent with less churn.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/txpool/src/morph_tx_validation.rs` around lines 237 - 242, The test
currently asserts the full error string from validate() which is brittle; update
the assertion to match the MorphTxError::InvalidFormat variant (e.g., using
matches! or a match on err for MorphTxError::InvalidFormat { reason }) and then
assert that the reason contains a stable substring such as "FeeLimit" or
"FeeTokenID is 0" (e.g., assert!(reason.contains("FeeLimit"))), so the test
verifies the variant and a small stable clue instead of exact wording.
crates/revm/src/tx.rs (1)

299-306: Consider making DecodedMorphTxFields dead-code free.

The struct name DecodedMorphTxFields suggests RLP decoding, but it's now used for direct extraction from the envelope. Consider renaming to MorphTxFields or similar to better reflect its current purpose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/revm/src/tx.rs` around lines 299 - 306, The struct
DecodedMorphTxFields is misnamed for its current use; rename it to MorphTxFields
(or another concise name) to reflect that it holds extracted envelope fields
rather than RLP-decoded data, and update all references/usages (e.g., in
functions, type annotations, constructors, and pattern matches that mention
DecodedMorphTxFields) to the new identifier; ensure existing derives/field types
(version, fee_token_id, fee_limit, reference, memo) and visibility remain
unchanged so compilation and any trait impls still work.
local-test-hoodi/sync-test.sh (1)

310-322: Guard against division by zero in ETA calculation.

The bc expression ${MAINNET_TIP} / ${GETH_AVG_BPS} will fail with a division by zero error if GETH_AVG_BPS is exactly "0" (not just less than or equal to zero). The current check $(echo "${GETH_AVG_BPS} > 0" | bc -l) handles this, but bc may return ".00" for zero which could pass the check depending on locale.

🛡️ Safer zero check
-if [[ $(echo "${GETH_AVG_BPS} > 0" | bc -l) -eq 1 ]]; then
+if [[ -n "${GETH_AVG_BPS}" && "${GETH_AVG_BPS}" != "0" && $(echo "${GETH_AVG_BPS} > 0" | bc -l) -eq 1 ]]; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/sync-test.sh` around lines 310 - 322, The ETA division can
still hit division-by-zero when GETH_AVG_BPS or RETH_AVG_BPS is exactly "0";
update the guards to explicitly prevent division by zero by checking the
variable is non-zero before computing MAINNET_TIP / ... For both GETH_AVG_BPS
and RETH_AVG_BPS replace the current bc-only condition with an explicit non-zero
check (e.g. ensure "${GETH_AVG_BPS}" != "0" && $(echo "${GETH_AVG_BPS} > 0" | bc
-l) -eq 1) and the analogous check for RETH_AVG_BPS before calculating
geth_eta_seconds/reth_eta_seconds and calling format_duration, so the division
never runs when the avg bps is zero.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/primitives/src/transaction/morph_transaction.rs`:
- Around line 937-965: The from_compact implementation must validate helper.data
before slicing to avoid panics: add a check that helper.data.len() > 0, read
memo_len = helper.data[0] as usize only after that, and ensure 1 + memo_len <=
helper.data.len() before creating memo and input slices; if the checks fail,
return a clear error (e.g., change from_compact to return Result<(Self, &[u8]),
DecodeError> or propagate the crate's decode error) so malformed/truncated
packed data yields a controlled error instead of a panic; keep existing field
mappings (chain_id, nonce, version as u8, fee_token_id as u16, etc.) unchanged.
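A std-only sketch of the bounds-checked unpacking this prompt asks for; the function name and error type are illustrative, not the crate's actual API:

```rust
// Hypothetical bounds-checked version of the memo+input unpacking in
// from_compact: the packed layout is [memo_len: u8][memo][input], and the
// guards return an error instead of panicking on empty or truncated data.
fn unpack_memo_input(data: &[u8]) -> Result<(Option<Vec<u8>>, Vec<u8>), &'static str> {
    let (&len_byte, rest) = data.split_first().ok_or("empty packed data")?;
    let memo_len = len_byte as usize;
    if rest.len() < memo_len {
        return Err("truncated memo");
    }
    let (memo, input) = rest.split_at(memo_len);
    // A zero-length memo is represented as None, matching the None/Some split above.
    let memo = (!memo.is_empty()).then(|| memo.to_vec());
    Ok((memo, input.to_vec()))
}
```

The real from_compact would then wrap these slices in Bytes and fill in the remaining helper fields; only the slicing guards are sketched here.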

In `@local-test-hoodi/reset.sh`:
- Around line 38-41: The script currently swallows shutdown failures by
appending "|| true" after calling "${SCRIPT_DIR}/stop-all.sh" and pm2_stop
"morph-geth", which can let processes remain alive while the datadirs are
removed; remove the "|| true" from the pm2_stop invocation and from the
stop-all.sh invocation, check the exit status of "${SCRIPT_DIR}/stop-all.sh" and
pm2_stop "morph-geth" (or retry/wait) and abort (non-zero exit) if either fails
before proceeding to rm -rf, ensuring functions like pm2_stop and the call to
stop-all.sh are allowed to surface real errors and the script exits instead of
deleting live data.

In `@local-test-hoodi/start-all.sh`:
- Around line 25-33: The readiness curl and final printed RPC endpoint use the
bind address variable RETH_HTTP_ADDR (which may be 0.0.0.0); introduce a
separate client-facing host variable (e.g., RETH_HTTP_HOST or
RETH_HTTP_CLIENT_ADDR) defaulting to 127.0.0.1/localhost, update the curl
invocation that currently references RETH_HTTP_ADDR:RETH_HTTP_PORT to use the
new client host and port, and update any RPC-printing logic and the similar
occurrence near the end of the script (the other reference mentioned) to use the
client host variable so readiness checks and printed RPC URLs are usable when
the bind address is 0.0.0.0.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 72f0331d-bf6b-4cc9-8135-410519514b02

📥 Commits

Reviewing files that changed from the base of the PR and between 7454ea1 and b3d73fb.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (45)
  • Cargo.toml
  • bin/morph-reth/src/main.rs
  • crates/consensus/src/validation.rs
  • crates/engine-api/src/builder.rs
  • crates/engine-api/src/rpc.rs
  • crates/evm/src/block/mod.rs
  • crates/evm/src/block/receipt.rs
  • crates/evm/src/evm.rs
  • crates/node/src/components/pool.rs
  • crates/node/src/validator.rs
  • crates/payload/builder/src/builder.rs
  • crates/payload/builder/src/error.rs
  • crates/primitives/src/receipt/envelope.rs
  • crates/primitives/src/transaction/envelope.rs
  • crates/primitives/src/transaction/morph_transaction.rs
  • crates/revm/Cargo.toml
  • crates/revm/src/evm.rs
  • crates/revm/src/handler.rs
  • crates/revm/src/l1block.rs
  • crates/revm/src/precompiles.rs
  • crates/revm/src/token_fee.rs
  • crates/revm/src/tx.rs
  • crates/rpc/src/eth/transaction.rs
  • crates/txpool/src/error.rs
  • crates/txpool/src/maintain.rs
  • crates/txpool/src/morph_tx_validation.rs
  • crates/txpool/src/validator.rs
  • local-test-hoodi/.gitignore
  • local-test-hoodi/common.sh
  • local-test-hoodi/geth-start.sh
  • local-test-hoodi/geth-stop.sh
  • local-test-hoodi/node-start.sh
  • local-test-hoodi/node-stop.sh
  • local-test-hoodi/prepare.sh
  • local-test-hoodi/reset.sh
  • local-test-hoodi/reth-start.sh
  • local-test-hoodi/reth-stop.sh
  • local-test-hoodi/start-all.sh
  • local-test-hoodi/status.sh
  • local-test-hoodi/stop-all.sh
  • local-test-hoodi/sync-test.sh
  • local-test/common.sh
  • local-test/node-start.sh
  • local-test/reth-start.sh
  • local-test/sync-test.sh
💤 Files with no reviewable changes (2)
  • crates/revm/Cargo.toml
  • crates/payload/builder/src/error.rs

Comment on lines +937 to +965
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
    let (helper, remaining) = TxMorphCompact::from_compact(buf, len);

    // Unpack memo + input from the combined data field
    let memo_len = helper.data[0] as usize;
    let memo = if memo_len == 0 {
        None
    } else {
        Some(Bytes::copy_from_slice(&helper.data[1..1 + memo_len]))
    };
    let input = Bytes::copy_from_slice(&helper.data[1 + memo_len..]);

    let tx = Self {
        chain_id: helper.chain_id,
        nonce: helper.nonce,
        gas_limit: helper.gas_limit,
        max_fee_per_gas: helper.max_fee_per_gas,
        max_priority_fee_per_gas: helper.max_priority_fee_per_gas,
        to: helper.to,
        value: helper.value,
        access_list: helper.access_list,
        version: helper.version as u8,
        fee_token_id: helper.fee_token_id as u16,
        fee_limit: helper.fee_limit,
        reference: helper.reference,
        memo,
        input,
    };
    (tx, remaining)
}

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cd crates/primitives/src/transaction && head -20 morph_transaction.rs && echo "---" && wc -l morph_transaction.rs

Repository: morph-l2/morph-reth

Length of output: 841


🏁 Script executed:

# Get the lines around the reviewed section
sed -n '930,980p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Find TxMorphCompact definition
rg "struct TxMorphCompact|impl.*Compact.*TxMorph" crates/primitives/src/transaction/morph_transaction.rs -A 20



🏁 Script executed:

# Check if there's any validation logic around data handling
rg "helper\.data|from_compact" crates/primitives/src/transaction/morph_transaction.rs -B 3 -A 3



🏁 Script executed:

# Look at the Compact trait definition and other implementations
fd "compact" crates/primitives/src --type f



🏁 Script executed:

# Find where TxMorphCompact implements Compact
rg "impl Compact for TxMorphCompact" crates/primitives/src/transaction/morph_transaction.rs -A 30



🏁 Script executed:

# Check if reth_codecs provides Compact trait and its guarantees
fd "compact" crates --type f | head -20



🏁 Script executed:

# Search for any validation or bounds checks around data unpacking
rg "data\[0\]|memo_len|from_compact" crates/primitives/src/transaction/morph_transaction.rs -B 5 -A 5



🏁 Script executed:

# Look at tests to see what edge cases are tested
rg "fn test.*compact|#\[test\]" crates/primitives/src/transaction/morph_transaction.rs -A 10



🏁 Script executed:

# Check what reth_codecs::Compact trait signature looks like
find crates -name "*.rs" -type f -exec grep -l "trait Compact" {} \;



🏁 Script executed:

# Check the Compact derive macro and TxMorphCompact definition more carefully
sed -n '880,930p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Look for where from_compact is called and if there's any validation wrapper
rg "from_compact" crates/primitives/src/transaction/morph_transaction.rs -B 10 | head -60



🏁 Script executed:

# Check if there's validation before unpacking memo_len
rg "memo_len|data\[0\]" crates/primitives/src/transaction/morph_transaction.rs -B 3 -A 8



🏁 Script executed:

# Check the to_compact side - does it enforce memo constraints?
sed -n '900,930p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Search for any bounds checking or defensive code in from_compact
ast-grep --pattern 'fn from_compact($_, $_) -> $_ {
  $$$
}'



🏁 Script executed:

# Check if there's memo validation in the validate method
rg "validate|memo" crates/primitives/src/transaction/morph_transaction.rs | head -30



🏁 Script executed:

# Look for any MAX_MEMO_LENGTH constant
rg "MAX_MEMO|memo.*64|64.*memo" crates/primitives/src/transaction/morph_transaction.rs



🏁 Script executed:

# Check if there's a test case for malformed/empty data
rg "empty|truncat|malform" crates/primitives/src/transaction/morph_transaction.rs -A 5



🏁 Script executed:

# Look at the broader storage context - how is from_compact called?
rg "TxMorph::from_compact|from_compact.*buf" crates --type rs | head -20



🏁 Script executed:

rg "TxMorph::from_compact" crates --type=rust



🏁 Script executed:

# Check how decode works for the RLP path to see comparison
sed -n '795,850p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Look for any defensive code or error handling in related compact decoders
rg "from_compact.*data\[|memo_len.*panic|bounds|empty" crates --type=rust -A 2



🏁 Script executed:

# Check the envelope implementation to see context of from_compact usage
sed -n '375,425p' crates/primitives/src/transaction/envelope.rs | cat -n



🏁 Script executed:

# Check if there's any DB layer validation before calling from_compact
rg "Compact.*from_compact|decode.*from_compact" crates --type=rust -B 5 -A 5 | head -80



🏁 Script executed:

# Look for any schema or format validation in storage/DB code
fd "storage\|db" crates --type d



🏁 Script executed:

# Check if there are any tests for corrupted compact data
rg "corrupt|truncat|malform|panic" crates/primitives/src/transaction/morph_transaction.rs



🏁 Script executed:

# Check if Bytes field can be empty and what TxMorphCompact::from_compact does
rg "impl Compact for.*Compact|derive.*Compact" crates/primitives/src/transaction/morph_transaction.rs -B 2 -A 15



🏁 Script executed:

# Look at validation in the RLP decode path for memo_len bounds
sed -n '590,650p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Check if there's a test that covers empty data or memo_len > 64
rg "test.*compact|test.*memo" crates/primitives/src/transaction/morph_transaction.rs | head -10



🏁 Script executed:

# Look at the full test for compact roundtrip to see what data looks like
sed -n '2045,2075p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Check if the auto-derived Compact for TxMorphCompact guarantees non-empty Bytes
rg "Bytes::new|data.is_empty" crates/primitives/src/transaction/morph_transaction.rs



🏁 Script executed:

# See how the version/fee_token_id narrowing cast affects real values
sed -n '900,930p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Check what happens after from_compact unpacking - is there validation?
sed -n '937,980p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Look at whether the Compact trait is supposed to panic on invalid data
rg "reth_codecs|Compact trait" crates --type=rust | head -20



🏁 Script executed:

# Check if there's validation logic in validate() that catches narrowing issues
sed -n '350,380p' crates/primitives/src/transaction/morph_transaction.rs | cat -n



🏁 Script executed:

# Double-check: can version/fee_token_id exceed u8/u16 bounds in normal operation?
rg "version.*=|fee_token_id.*=" crates/primitives/src/transaction/morph_transaction.rs | grep -v "self.version\|self.fee_token_id" | head -20



🏁 Script executed:

# Check if there are any validation constraints on version and fee_token_id
rg "version.*0x|fee_token_id.*0x|MORPH_TX_VERSION|MAX_FEE" crates/primitives/src/transaction/morph_transaction.rs | head -20



🏁 Script executed:

# Check if validate() is called on decoded transactions
rg "from_compact.*validate|\.validate\(\)" crates/primitives/src/transaction/morph_transaction.rs



🏁 Script executed:

# Look at how MorphReceipt handles similar compact decoding
sed -n '570,595p' crates/primitives/src/receipt/mod.rs | cat -n



🏁 Script executed:

# Check if there's documentation about the compact codec design philosophy
rg "compact encoding.*DB\|database.*compact\|infallible" crates/primitives/src/transaction/envelope.rs -B 2 -A 2



Add bounds validation before unpacking memo and input from the packed data field.

helper.data[0] and the derived slice ranges will panic on empty or truncated payloads. Because this PR intentionally changes the on-disk format, stale or corrupted rows could turn into crashes instead of clean errors. Add a check that data.len() > 0 and that 1 + memo_len does not exceed data.len() before attempting to unpack.

The narrowing casts (as u8, as u16) are acceptable given the Compact trait's design philosophy (panics signal DB corruption), but the packed data format requires explicit validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/primitives/src/transaction/morph_transaction.rs` around lines 937 -
965, The from_compact implementation must validate helper.data before slicing to
avoid panics: add a check that helper.data.len() > 0, read memo_len =
helper.data[0] as usize only after that, and ensure 1 + memo_len <=
helper.data.len() before creating memo and input slices; if the checks fail,
return a clear error (e.g., change from_compact to return Result<(Self, &[u8]),
DecodeError> or propagate the crate's decode error) so malformed/truncated
packed data yields a controlled error instead of a panic; keep existing field
mappings (chain_id, nonce, version as u8, fee_token_id as u16, etc.) unchanged.

Contributor Author


The Compact trait is intentionally infallible by reth convention — panics signal unrecoverable DB corruption. See doc comment in envelope.rs:376-381. Compact encoding is only used for internal DB storage, never for untrusted network data (which uses RLP + Result).


🧩 Analysis chain

🏁 Script executed:

sed -n '370,390p' crates/primitives/src/transaction/envelope.rs



@panos-xyz, that's confirmed — envelope.rs:376-381 has an explicit doc comment stating exactly that:

"The Compact trait is infallible by design (reth convention) — compact encoding is only used for internal DB storage, never for untrusted network data (which goes through RLP and returns Result). An unknown identifier here indicates database corruption, which is unrecoverable."

My concern was misplaced. The panics-on-corruption behaviour is intentional and consistent with how reth uses Compact throughout. Withdrawing the comment.


✏️ Learnings added
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: crates/primitives/src/transaction/morph_transaction.rs:937-965
Timestamp: 2026-03-10T01:34:23.059Z
Learning: In morph-reth (and reth generally), the `Compact` trait (reth_codecs::Compact) is intentionally infallible by design. Panics in `from_compact` implementations signal unrecoverable DB corruption, not bugs. Compact encoding is exclusively used for internal DB storage; untrusted network data uses RLP with `Result`-based error handling. Suggesting `Result`-returning `from_compact` or defensive bounds checks in Compact impls is incorrect for this codebase. See doc comment at `crates/primitives/src/transaction/envelope.rs:376-381` for the canonical explanation.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 20
File: crates/rpc/Cargo.toml:12-17
Timestamp: 2026-02-05T06:48:53.195Z
Learning: The morph-rpc crate in crates/rpc/Cargo.toml intentionally enables both `serde-bincode-compat` and `reth-codec` features for morph-primitives to support standalone compilation (cargo build -p morph-rpc), even though full workspace compilation would enable some features transitively.

Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 10
File: crates/payload/builder/src/builder.rs:288-381
Timestamp: 2026-01-21T08:07:53.562Z
Learning: In the Morph payload builder (crates/payload/builder/src/builder.rs), L1 messages (sequencer transactions from payload attributes) are mandatory inclusions that bypass DA (data availability) limits. Only pool transactions should be subject to max_da_block_size enforcement. The cumulative_da_bytes_used field should only track pool transaction bytes, not sequencer transaction bytes.

Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 21
File: crates/evm/src/assemble.rs:66-89
Timestamp: 2026-02-09T07:34:14.207Z
Learning: In Morph L2 (morph-reth), the `extra_data` field in block headers should remain empty (Default::default()) in the MorphBlockAssembler. This is intentional L2 behavior; L2s don't populate extra_data like L1 chains do. File: crates/evm/src/assemble.rs

Learnt from: anylots
Repo: morph-l2/morph-reth PR: 8
File: crates/revm/src/handler.rs:305-362
Timestamp: 2026-01-20T02:14:54.857Z
Learning: In Morph's token fee implementation (crates/revm/src/handler.rs), the refund calculation using `calculate_token_amount` (which rounds up) for both charging and refunding is consistent with geth's implementation and is intentional behavior.

Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 12
File: crates/engine-api/src/validator.rs:10-10
Timestamp: 2026-01-23T10:00:04.805Z
Learning: In Rust, trait methods only become available when the trait is in scope. For files that call methods from a trait on implementing types, ensure you import that trait so calls like is_mpt_fork_active_at_timestamp() on MorphChainSpec instances compile. Apply this rule broadly to all Rust files in the morph-reth repository (i.e., pattern: **/*.rs).

Comment on lines +38 to +41
"${SCRIPT_DIR}/stop-all.sh" || true
pm2_stop "morph-geth" 2>/dev/null || true

rm -rf "${RETH_DATA_DIR}/db" "${RETH_DATA_DIR}/static_files" "${GETH_DATA_DIR}/geth" "${NODE_HOME}/data"

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🟠 Major

Do not ignore shutdown failures before deleting the datadirs.

pm2_stop already treats “not running” as harmless. The extra || true also hides real stop failures, so this script can rm -rf live state while a process is still using it.

🩹 Proposed fix
-"${SCRIPT_DIR}/stop-all.sh" || true
-pm2_stop "morph-geth" 2>/dev/null || true
+"${SCRIPT_DIR}/stop-all.sh"
+if command -v pm2 >/dev/null 2>&1; then
+  pm2_stop "morph-geth"
+fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"${SCRIPT_DIR}/stop-all.sh" || true
pm2_stop "morph-geth" 2>/dev/null || true
rm -rf "${RETH_DATA_DIR}/db" "${RETH_DATA_DIR}/static_files" "${GETH_DATA_DIR}/geth" "${NODE_HOME}/data"
"${SCRIPT_DIR}/stop-all.sh"
if command -v pm2 >/dev/null 2>&1; then
pm2_stop "morph-geth"
fi
rm -rf "${RETH_DATA_DIR}/db" "${RETH_DATA_DIR}/static_files" "${GETH_DATA_DIR}/geth" "${NODE_HOME}/data"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/reset.sh` around lines 38 - 41, The script currently
swallows shutdown failures by appending "|| true" after calling
"${SCRIPT_DIR}/stop-all.sh" and pm2_stop "morph-geth", which can let processes
remain alive while the datadirs are removed; remove the "|| true" from the
pm2_stop invocation and from the stop-all.sh invocation, check the exit status
of "${SCRIPT_DIR}/stop-all.sh" and pm2_stop "morph-geth" (or retry/wait) and
abort (non-zero exit) if either fails before proceeding to rm -rf, ensuring
functions like pm2_stop and the call to stop-all.sh are allowed to surface real
errors and the script exits instead of deleting live data.

Contributor Author


Dev-only script. The || true is intentional — stop commands fail when processes aren't running (normal after crash/manual stop). Reset should clean up unconditionally.


@panos-xyz, that makes sense — for a dev reset script, failing to stop a process that isn't running shouldn't block the cleanup. Acknowledged, I'll stand down on this one.


✏️ Learnings added
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/reset.sh:38-41
Timestamp: 2026-03-10T01:33:59.039Z
Learning: In `local-test-hoodi/reset.sh`, the `|| true` after `stop-all.sh` and `pm2_stop "morph-geth"` is intentional. This is a dev-only reset script where stop commands are expected to fail when processes aren't running (e.g., after a crash or manual stop), and the cleanup should proceed unconditionally regardless of process state.



🧠 Learnings used
Learnt from: anylots
Repo: morph-l2/morph-reth PR: 8
File: crates/revm/src/handler.rs:305-362
Timestamp: 2026-01-20T02:14:54.857Z
Learning: In Morph's token fee implementation (crates/revm/src/handler.rs), the refund calculation using `calculate_token_amount` (which rounds up) for both charging and refunding is consistent with geth's implementation and is intentional behavior.

panos-xyz added 21 commits March 9, 2026 18:28
Fixes from internal security audit before external audit engagement:

- consensus: eliminate AtomicU64 call-order coupling in L1 message validation,
  split into independent stateless checks (validate_l1_messages_in_block +
  header monotonicity check)
- consensus: add timestamp validation for pre-Emerald (strict >) vs
  post-Emerald (>=)
- revm/handler: add disable_fee_charge check in token fee path for eth_call
  simulation correctness
- revm/handler: handle missing cached_token_fee_info in reimburse_caller
  when fee charge was disabled
- revm/handler: use checked_sub in transfer_erc20_with_slot for from balance
- revm/l1block: cap L1 data fee to u64::MAX for circuit compatibility
- engine-api: fix ValidateL2Block error semantics (internal vs invalid)
- engine-api: fix FeeVault coinbase override
- rpc: use TxMorph::validate() instead of validate_version() for complete
  validation at RPC layer
- primitives: minor TxMorph validation improvements
- txpool: improve error types for MorphTx validation
- consensus: use checked_add for queue_index overflow protection (C-H1)
- handler: add ETH value balance check in token fee path (X-6)
- handler: checked_add for token recipient balance overflow (R-2)
- handler: propagate error from effective_balance_spending (R-3)
- handler: skip self-transfer post-check when from == to (GPT-3)
- txpool: add TxMorph::validate() in shared validation path (GPT-4)
- txpool: add InvalidFormat error variant for structural violations
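Several of the items above replace unchecked or saturating arithmetic with checked variants. A minimal sketch of the pattern, using simplified u64 balances rather than the handler's real U256 types:

```rust
// Sketch of the checked-arithmetic pattern: checked_sub catches sender
// underflow (R-3 style) and checked_add catches recipient overflow (R-2).
// Function and parameter names are illustrative, not the real handler API.
fn transfer_checked(from: u64, to: u64, amount: u64) -> Result<(u64, u64), &'static str> {
    let from_new = from.checked_sub(amount).ok_or("balance underflow")?;
    let to_new = to.checked_add(amount).ok_or("balance overflow")?;
    Ok((from_new, to_new))
}
```

Unlike `saturating_sub`, a failed `checked_sub` surfaces the inconsistency as an error instead of silently clamping the balance to zero.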
Precompile divergences with go-eth (H-1 ~ H-4):
- Add modexp 32-byte input limit for Bernoulli~Viridian hardforks
- Add BN256 pairing 4-pair limit from Morph203 onwards
- Rewrite emerald() to explicitly list precompiles, excluding 0x0a (KZG)
- Upgrade emerald modexp to eip7823+eip7883 (removing 32-byte limit)

Balance check fix (M-1):
- Use max_fee_per_gas for balance validation, effective_gas_price for
  deduction, matching go-eth's buyGas() semantics

Token transfer safety (M-5):
- Replace saturating_sub with checked_sub in ERC20 balance verification
  to catch underflow instead of silently returning zero

Transaction pool (D-1):
- Add disable_balance_check() to MorphPoolBuilder so MorphTx with
  fee_token_id > 0 can pass inner EthTransactionValidator

Compact decode documentation (M-2):
- Document panic! in MorphTxType::from_compact as intentional for DB
  corruption (Compact trait is infallible by reth convention)

Defensive error handling (L-2):
- Replace .expect() with .ok_or() in L1 message queue index validation

Documentation (L-5):
- Improve take_revert_logs() comment explaining the workaround context
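The M-1 balance-check split can be sketched as follows. This is a simplified model of go-eth's buyGas() ordering with u128 arithmetic and illustrative names; the real code operates on U256 balances:

```rust
// Sketch of M-1: validate the balance against the worst case
// (gas_limit * max_fee_per_gas + value), but deduct at the effective price.
fn buy_gas(
    balance: u128,
    gas_limit: u128,
    max_fee_per_gas: u128,
    effective_gas_price: u128,
    value: u128,
) -> Result<u128, &'static str> {
    let required = gas_limit
        .checked_mul(max_fee_per_gas)
        .and_then(|fee| fee.checked_add(value))
        .ok_or("fee overflow")?;
    if balance < required {
        return Err("insufficient funds");
    }
    // Only the effective price is actually deducted up front.
    Ok(balance - gas_limit * effective_gas_price)
}
```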
…efund)

- Add block payload size validation in consensus (M-1), matching go-eth
  IsValidBlockSize(block.PayloadSize())
- Add MorphTx version validation in consensus (M-2), matching go-eth
  ValidateMorphTxVersion() with Jade fork gating for V1
- Fix token fee refund to silently continue on failure, matching go-eth
  "refund should not cause transaction to fail" semantics
- Add defensive .max(0) for gas.refunded() i64→u64 cast (M-4)
- Remove from==to skip in fee token transfer validation to match
  go-ethereum's transferAltTokenByEVM (self-transfers should fail)
- Use remove_transactions_and_descendants in txpool maintain to clean
  up nonce-dependent orphans immediately
- Remove unnecessary RECEIPT_CAPACITY_HINT pre-allocation
Set --engine.persistence-threshold 256 and --engine.memory-block-buffer-target 16
to batch MDBX writes and avoid competing with Tendermint's LevelDB fsyncs.
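The tuning corresponds to passing flags like the following at startup. The binary name and subcommand here are placeholders; only the two engine flags come from the commit:

```shell
# Placeholder invocation — other required flags elided.
morph-reth node \
  --engine.persistence-threshold 256 \
  --engine.memory-block-buffer-target 16
```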
…ceipt

ERC20 Transfer event logs were lost during token fee deduction/reimbursement
because revm's system_call pipeline drains logs from the journal via
execution_result() → take_logs(). This caused receipt root mismatch at
blocks with MorphTx (0x7F) using ERC20 token gas payment.

Replace system_call_one_with_caller with evm_call() that only runs the
handler's execution() phase, keeping logs in the journal naturally —
matching go-ethereum's evm.Call() semantics.

Also remove dead code: evm.logs field, take_logs(), take_revert_logs().
go-ethereum's StateDB.logs is independent of the state snapshot/revert
mechanism, so Transfer events from buyAltTokenGas() and refundAltTokenGas()
always appear in receipts regardless of whether the main tx reverts.

revm's ExecutionResult::Revert has no logs field — the handler pipeline's
take_logs() discards all logs on revert. Previously, fee Transfer logs
were kept in the journal and restored via a saved_logs save/restore hack,
which only worked for the success path.

This commit caches fee Transfer logs in MorphEvm (pre_fee_logs for
deduction, post_fee_logs for reimbursement) separately from the journal.
The receipt builder merges them in chronological order:
  [deduct Transfer] + [main tx logs] + [refund Transfer]

This handles both success and revert uniformly without overriding
execution_result, and removes the saved_logs save/restore workaround.
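The chronological merge described above can be sketched generically; the log type and names are stand-ins (the real code uses revm's Log):

```rust
// Sketch of the receipt-log merge:
// [deduct Transfer] + [main tx logs] + [refund Transfer].
fn merge_receipt_logs<T>(pre_fee: Vec<T>, main_tx: Vec<T>, post_fee: Vec<T>) -> Vec<T> {
    let mut logs = Vec::with_capacity(pre_fee.len() + main_tx.len() + post_fee.len());
    logs.extend(pre_fee);  // token fee deduction Transfer
    logs.extend(main_tx);  // empty if the main tx reverted
    logs.extend(post_fee); // token fee refund Transfer
    logs
}
```

Because the fee logs are cached outside the journal, the merge produces the same shape whether `main_tx` is populated (success) or empty (revert).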
…semantic

go-ethereum uses evm.StaticCall() for balanceOf queries which cannot
emit events. Our evm_call() runs a regular CALL, so a non-standard
token could emit logs that leak into pre_fee_logs or post_fee_logs.

Truncate journal logs after evm_call_balance_of to match the read-only
semantic. Also fix a misplaced doc comment on EVM_CALL_GAS_LIMIT.
- Pre-allocate receipt log Vec with exact capacity instead of
  growing through repeated extend calls
- Use direct assignment for post_fee_logs instead of extend
  (the Vec is always empty at that point)
Replace inline version/field validation in validate_morph_txs() with
a call to TxMorph::validate(), which already implements the same checks
(version constraints, memo length, fee_limit rules, gas price ordering).

Only the hardfork-gated V1 rejection remains in the consensus layer,
as it requires chain_spec context unavailable in the primitive layer.
revm's mark_warm_with_transaction_id() resets original_value = present_value
on cold→warm transitions. After token fee deduction marks slots cold, the
main tx's first SLOAD corrupts original_value, making EIP-2200 SSTORE see
"clean" slots (2900 gas) instead of "dirty" (100 gas) — a 2800 gas mismatch
vs go-eth at block 19720219.

Fix: custom SLOAD reads the true committed value from DB (O(1) cache hit)
on cold loads and restores original_value if corrupted. Only triggers on
cold SLOADs; zero overhead on warm accesses.
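A simplified model shows why original_value drives the pricing. This sketch abbreviates the EIP-2200/EIP-2929 warm-SSTORE rules (revm's full logic has more branches):

```rust
// A slot is "clean" when original == current (not yet written this tx) and
// "dirty" otherwise. Resetting original_value to present_value makes a dirty
// slot look clean: 2_900 gas instead of 100 — the 2_800 mismatch above.
fn warm_sstore_gas(original: u64, current: u64, new: u64) -> u64 {
    const SSTORE_SET: u64 = 20_000;  // clean slot, original == 0
    const SSTORE_RESET: u64 = 2_900; // clean slot, original != 0 (post-EIP-2929)
    const DIRTY_OR_NOOP: u64 = 100;  // dirty slot or no-op write
    if current == new {
        DIRTY_OR_NOOP
    } else if original == current {
        if original == 0 { SSTORE_SET } else { SSTORE_RESET }
    } else {
        DIRTY_OR_NOOP
    }
}
```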
…tore

The previous SLOAD override (d71106e) unconditionally read database.storage()
on every cold SLOAD to fix original_value corruption from token fee deduction's
mark_cold(). This had two problems:

1. For non-token-fee transactions, it incorrectly overwrote the correct
   original_value set by mark_warm_with_transaction_id() during cross-tx
   slot access, causing SSTORE gas miscalculation in multi-tx blocks.

2. database.storage() returns the beginning-of-block value, which is wrong
   for the 2nd+ transaction in a block where a prior tx modified the same
   slot (go-eth's originStorage reflects the post-previous-tx value).

Fix: add a had_token_fee_deduction flag on L1BlockInfo (the chain context
field) to scope the fix to only token fee transactions. Replace the
database.storage() read with a save-restore pattern: save the slot's
original_value from journal state BEFORE sload_skip_cold_load corrupts it
via mark_warm_with_transaction_id, then restore after. The journal state's
original_value at that point correctly reflects the post-previous-tx
committed value, matching go-eth's originStorage semantics.
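The save-restore pattern can be illustrated with a toy journal. The HashMap of (original, present) pairs here is a stand-in for revm's journaled state, and the warm-marking step is simulated inline:

```rust
use std::collections::HashMap;

type Journal = HashMap<u64, (u64, u64)>; // slot -> (original_value, present_value)

// Sketch: save original_value BEFORE the warm transition can clobber it,
// then put it back afterwards.
fn sload_with_restore(journal: &mut Journal, slot: u64) -> u64 {
    // 1. Save original_value before it can be corrupted.
    let saved = journal.get(&slot).map(|&(orig, _)| orig);
    // 2. Simulate mark_warm_with_transaction_id resetting original = present.
    if let Some(entry) = journal.get_mut(&slot) {
        entry.0 = entry.1;
    }
    // 3. Restore the true post-previous-tx committed value.
    if let (Some(orig), Some(entry)) = (saved, journal.get_mut(&slot)) {
        entry.0 = orig;
    }
    journal.get(&slot).map(|&(_, present)| present).unwrap_or(0)
}
```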
The manual impl Compact for TxMorph was fundamentally broken:
- to_compact wrote variable-length fields without flags bytes
- from_compact passed total buffer length to each field's decoder,
  causing u64::from_compact(buf, huge_len) to panic on arr[8-len]

This made stage unwind and any static file read of MorphTx crash.

Fix: follow reth's standard pattern (same as TxEip1559 in reth-codecs):
- TxMorphCompact helper struct with #[derive(Compact)] for automatic
  bitfield/flags generation
- u8/u16 fields stored as u64 (reth_codecs doesn't support u8/u16)
- memo + input packed into single Bytes field (derive allows only one,
  must be last): [memo_len: u8][memo_bytes][input_bytes]

Note: new encoding is incompatible with old broken data in static files.
Existing reth databases with MorphTx must be rebuilt.
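The memo + input packing can be illustrated standalone, using plain Vec<u8> in place of Bytes:

```rust
// Sketch of the [memo_len: u8][memo_bytes][input_bytes] layout used to fit
// two variable-length fields into the single trailing Bytes the derive allows.
fn pack(memo: Option<&[u8]>, input: &[u8]) -> Vec<u8> {
    let memo = memo.unwrap_or(&[]);
    assert!(memo.len() <= u8::MAX as usize, "memo length must fit in one byte");
    let mut data = Vec::with_capacity(1 + memo.len() + input.len());
    data.push(memo.len() as u8);
    data.extend_from_slice(memo);
    data.extend_from_slice(input);
    data
}

fn unpack(data: &[u8]) -> (Option<Vec<u8>>, Vec<u8>) {
    let memo_len = data[0] as usize;
    let memo = (memo_len > 0).then(|| data[1..1 + memo_len].to_vec());
    (memo, data[1 + memo_len..].to_vec())
}
```

The length prefix is what makes the two fields separable again on decode; an empty memo round-trips as None.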
Update reth dependency to include the MerkleUnwind patch that
downgrades state root validation errors to warnings during unwind,
allowing unwind past pre-MPTFork blocks with ZK-trie roots.
Adapted from local-test (mainnet) with Hoodi-specific config:
- chain: hoodi (chain ID 2910)
- L1 RPC: ethereum-hoodi-rpc.publicnode.com
- deposit contract: 0xd7f39d837f...
- rollup contract: 0x57e0e6dde8...
- config bundle from morph-l2/run-morph-node hoodi/
The L1 Gas Price Oracle can be updated by a regular L2 transaction
(from the external gas-oracle service) within the same block.
Caching L1BlockInfo once per block in apply_pre_execution_changes
caused subsequent user txs to use stale fee parameters, leading to
"lack of funds" errors (observed at mainnet block 152685).

Restore per-tx L1BlockInfo::try_fetch() in the handler's
validate_and_deduct_eth_fee / validate_and_deduct_token_fee,
matching go-ethereum's per-tx read behavior.
During rebase onto main, PR#39's fee_slot_saves SLOAD fix was superseded
by the branch's save-restore SLOAD approach (had_token_fee_deduction flag).
This commit removes the leftover code from PR#39:
- Duplicate sload_morph function and SLOAD registration in evm.rs
- fee_slot_saves collection in handler.rs slot-based transfer path
- fee_slot_original_values field on MorphTxEnv in tx.rs
@panos-xyz panos-xyz force-pushed the panos/pre-audit-fixes branch from b3d73fb to 68a1bdc Compare March 9, 2026 10:42
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/revm/src/handler.rs (1)

322-365: ⚠️ Potential issue | 🟠 Major

Short-circuit ETH fee charging when fee charging is disabled.

Unlike validate_and_deduct_token_fee(), this path still reads L1BlockInfo and deducts gas + l1_data_fee when cfg.is_fee_charge_disabled() is true. That changes execution-time balances in eth_call / estimateGas and can fail just because the oracle account is unavailable.
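The suggested short-circuit amounts to something like this sketch, with simplified types standing in for the real handler's U256 balances and revm cfg:

```rust
// Sketch of the early return: when fee charging is disabled
// (eth_call / estimateGas), skip the oracle read and leave balances untouched.
fn deduct_eth_fee(
    balance: u64,
    total_fee: u64,
    fee_charge_disabled: bool,
) -> Result<u64, &'static str> {
    if fee_charge_disabled {
        return Ok(balance); // no L1BlockInfo fetch, no deduction
    }
    balance.checked_sub(total_fee).ok_or("insufficient funds for fee")
}
```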

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/revm/src/handler.rs` around lines 322 - 365,
validate_and_deduct_eth_fee currently always fetches L1BlockInfo and
computes/deducts L2+L1 fees even when fee charging is disabled; change
validate_and_deduct_eth_fee to check cfg.is_fee_charge_disabled() early (using
evm.ctx_ref().cfg() or the cfg from evm.ctx().all_mut()) and if true, set
evm.cached_l1_data_fee = 0 (or equivalent zero value) and return Ok(()) without
calling L1BlockInfo::try_fetch, l1_block_info.calculate_tx_l1_cost, or
calculate_caller_fee_with_l1_cost so that eth_call/estimateGas paths do not
mutate balances or depend on the oracle.
♻️ Duplicate comments (1)
crates/primitives/src/transaction/morph_transaction.rs (1)

937-947: ⚠️ Potential issue | 🟠 Major

Add bounds validation before unpacking memo and input from the packed data field.

The from_compact implementation accesses helper.data[0] and derives slices without verifying that the data buffer has sufficient length. If helper.data is empty or truncated, this will panic.

🛡️ Proposed fix to add bounds validation
 fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
     let (helper, remaining) = TxMorphCompact::from_compact(buf, len);

+    // Validate packed data has at least the memo length byte
+    if helper.data.is_empty() {
+        panic!("corrupted compact data: missing memo length byte");
+    }
+
     // Unpack memo + input from the combined data field
     let memo_len = helper.data[0] as usize;
+
+    // Validate data has enough bytes for memo and input
+    if helper.data.len() < 1 + memo_len {
+        panic!("corrupted compact data: truncated memo/input");
+    }
+
     let memo = if memo_len == 0 {
         None
     } else {
         Some(Bytes::copy_from_slice(&helper.data[1..1 + memo_len]))
     };
     let input = Bytes::copy_from_slice(&helper.data[1 + memo_len..]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/primitives/src/transaction/morph_transaction.rs` around lines 937 -
947, from_compact currently reads helper.data[0] and slices helper.data without
validating lengths, which can panic on empty/truncated buffers; update
from_compact (the function that calls TxMorphCompact::from_compact and then
reads helper.data, memo_len, and constructs memo/input via
Bytes::copy_from_slice) to first check that helper.data.len() >= 1, then compute
memo_len and verify helper.data.len() >= 1 + memo_len before slicing for the
memo, and finally ensure there are bytes remaining for the input slice (or allow
empty input if intended); return an appropriate error or handle the malformed
case instead of slicing when bounds fail.
🧹 Nitpick comments (2)
crates/txpool/src/error.rs (1)

127-128: Consider adding test coverage for InvalidFormat error handling.

The new InvalidFormat variant is added and handled in is_bad_transaction, but the test module doesn't appear to include a test case verifying this behavior.

🧪 Suggested test addition
#[test]
fn test_invalid_format_is_bad_transaction() {
    let err = MorphTxError::InvalidFormat {
        reason: "test reason".to_string(),
    };
    assert!(err.is_bad_transaction());
    assert!(err.to_string().contains("invalid MorphTx format"));
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/txpool/src/error.rs` around lines 127 - 128, Add a unit test that
covers the new MorphTxError::InvalidFormat variant to ensure
is_bad_transaction() returns true and the Display/ToString contains the expected
message; specifically, in the existing test module add a test (e.g.,
test_invalid_format_is_bad_transaction) that constructs
MorphTxError::InvalidFormat { reason: "test reason".to_string() }, asserts
err.is_bad_transaction() is true, and asserts err.to_string() contains "invalid
MorphTx format" (or the exact wording used in Display for InvalidFormat).
local-test-hoodi/sync-test.sh (1)

60-61: Dead code: sample_count and total_bps are unused.

These variables are accumulated during sampling but never used. The final average is correctly computed from total_blocks / duration at line 93, which is more accurate. Consider removing the unused variables.

🧹 Proposed cleanup
-  local elapsed=0 sample_count=0
-  local total_bps=0 peak_bps=0
+  local elapsed=0
+  local peak_bps=0

And in the loop:

-    sample_count=$((sample_count + 1))
-    total_bps=$(echo "${total_bps} + ${bps}" | bc)
-
     # Track peak

Also applies to: 78-79

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/sync-test.sh` around lines 60 - 61, Remove the dead
accumulation of sample_count and total_bps: delete their declarations ("local
elapsed=0 sample_count=0" / "local total_bps=0") and remove any increments/uses
of sample_count and total_bps inside the sampling loop; leave peak_bps and
total_blocks/duration-based average logic intact (the final average should
continue to be computed from total_blocks and duration). This keeps symbols
sample_count and total_bps out of the script and avoids unused-variable noise
while retaining total_blocks and duration for the correct average calculation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/revm/src/handler.rs`:
- Around line 802-859: Before invoking evm_call, create a journal/checkpoint on
the EVM state and, after the call, if any of the post-call validations (ABI bool
check using frame_result.interpreter_result().output, balance verification via
evm_call_balance_of, or any other validation that currently only restores
evm.tx) fail, revert the EVM to that checkpoint (in addition to restoring
evm.tx) so logs/balances are rolled back; only clear/commit the checkpoint after
all validations succeed (and ensure failures propagated through
reimburse_caller_token_fee() still trigger the revert path).
- Around line 745-766: The helper evm_call_balance_of currently performs a
normal internal call and only truncates logs, which lets malicious balanceOf
calls mutate storage; change it to take a journal snapshot before calling
evm_call (or use a read-only execution path if available), execute the call,
decode the output as now (using result.instruction_result() and
result.interpreter_result().output), then revert the journal to the snapshot so
any storage/log changes are discarded; update places that used log_count_before
and logs.truncate to instead snapshot/revert around the evm_call invocation in
evm_call_balance_of.

In `@local-test-hoodi/common.sh`:
- Line 32: The default RPC bind address currently sets RETH_HTTP_ADDR to 0.0.0.0
which exposes the HTTP RPC on all interfaces; change the default to 127.0.0.1 by
updating the RETH_HTTP_ADDR default assignment in local-test-hoodi/common.sh
(the : "${RETH_HTTP_ADDR:=0.0.0.0}" line) so local reth-start.sh/geth-start.sh
instances bind only to localhost.
- Around line 56-68: pm2_check currently only verifies pm2 but pm2_is_running
relies on jq; extend pm2_check to also verify jq (use command -v jq &>/dev/null)
and emit a clear error/installation hint (e.g., "ERROR: jq is not installed;
install with: apt-get install -y jq or brew install jq" or similar) and return
non-zero when missing so callers like pm2_is_running won't silently fail; update
the function name references pm2_check and pm2_is_running so reviewers can
locate the changes.
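
A minimal sketch of the extended dependency check (assuming bash; `require_cmds` and the install-hint wording are illustrative, not existing code in the repo):

```shell
# Generic dependency probe; `require_cmds` is a hypothetical helper name.
# It reports every missing command before failing, so the user sees all
# install hints at once instead of one per run.
require_cmds() {
  local missing=0 cmd
  for cmd in "$@"; do
    if ! command -v "${cmd}" &>/dev/null; then
      echo "ERROR: ${cmd} is not installed (e.g. apt-get install -y ${cmd} or brew install ${cmd})" >&2
      missing=1
    fi
  done
  return "${missing}"
}

# pm2_check then becomes a thin wrapper covering both tools, so
# pm2_is_running's jq dependency fails loudly instead of silently.
pm2_check() { require_cmds pm2 jq; }
```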

In `@local-test-hoodi/geth-start.sh`:
- Around line 32-40: The script currently unconditionally adds the flags
--http.corsdomain "*" and --http.vhosts "*" which should be opt-in; change the
logic that builds the reth command so those flags are appended only when
explicit env vars are set (e.g. RETH_ALLOW_WILDCARD_CORS="true" and
RETH_ALLOW_WILDCARD_VHOSTS="true"), otherwise omit --http.corsdomain and
--http.vhosts; use the existing RETH_* env var pattern (like
RETH_HTTP_ADDR/RETH_HTTP_PORT) to detect the opt-in and check for a value of
"true" (case-insensitive) before including the corresponding flags.

In `@local-test-hoodi/prepare.sh`:
- Around line 24-25: The script's preparation path logs a warning when required
files under NODE_HOME (config/config.toml, config/genesis.json,
data/priv_validator_state.json) are missing and DOWNLOAD_CONFIG_IF_MISSING is
"0", but it still proceeds and prints "Preparation finished." causing later
runtime failures; update prepare.sh to perform a final postcondition check after
the existing conditional blocks (including the similar block around lines 82-85)
that verifies those three files exist and, if any are missing, emit a clear
error via echo/processLogger and exit with a non-zero status to fail fast;
reference the variables DOWNLOAD_CONFIG_IF_MISSING and NODE_HOME and the file
checks used in the original if-statement to locate where to add the
postcondition and exit.
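
The fail-fast postcondition could be sketched as below; `check_prepared` is a hypothetical helper name and the error wording is illustrative:

```shell
# Hypothetical postcondition for prepare.sh: verify the three required
# files exist under NODE_HOME and fail fast if any is missing.
check_prepared() {
  local node_home="$1" missing=0 f
  for f in config/config.toml config/genesis.json data/priv_validator_state.json; do
    if [[ ! -f "${node_home}/${f}" ]]; then
      echo "ERROR: required file missing: ${node_home}/${f}" >&2
      missing=1
    fi
  done
  if [[ "${missing}" -ne 0 ]]; then
    echo "Preparation incomplete; rerun with DOWNLOAD_CONFIG_IF_MISSING=1 or provide the files manually." >&2
    return 1
  fi
  echo "Preparation finished."
}
```

prepare.sh would then end with something like `check_prepared "${NODE_HOME}" || exit 1` instead of printing the success message unconditionally.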

In `@local-test-hoodi/start-all.sh`:
- Around line 25-33: The readiness loop in start-all.sh only checks curl's exit
code and discards the response body, so non-JSON-RPC or error responses can
falsely mark RPC as ready; update the loop to capture the curl response (against
"http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" for the eth_chainId call), then
validate the JSON-RPC .result field (like the existing get_block_number()
pattern in sync-test.sh) using jq -e (or equivalent) to ensure a valid chainId
is returned before breaking the loop and proceeding.

In `@local-test-hoodi/status.sh`:
- Around line 24-27: The assignments like chain_id in status.sh abort the script
under set -euo pipefail when curl or jq fail; create a helper (e.g.,
probe_json_rpc or safe_probe) that runs the curl/jq pipeline but absorbs
failures (use curl ... || true and capture exit-safe output), return a default
token like "unavailable" or "error" on failure, and replace the direct
assignments (chain_id and the other JSON-RPC probe variables and the Tendermint
/status and /net_info probes) to call this helper so the script can branch on
the captured output instead of exiting immediately.

In `@local-test-hoodi/sync-test.sh`:
- Around line 275-277: The grep pipeline that reads lines matching "Block Sync
Rate" from "${NODE_LOG_FILE}" can cause the script to exit when no matches are
found (because grep returns non‑zero), so modify the pipeline to ignore grep's
non‑zero exit (e.g., wrap the grep in a command group or append a "|| true")
before piping into tail and the while loop; specifically update the line
invoking grep ("grep \"Block Sync Rate\" \"${NODE_LOG_FILE}\" 2>/dev/null") so
its failure is neutralized (for example: { grep ... 2>/dev/null || true; } |
tail -5 | while read -r line; do ...) to prevent script termination when there
are no matches.
- Around line 213-215: The grep pipeline using grep "Block Sync Rate"
"${NODE_LOG_FILE}" can return exit code 1 when there are no matches, causing the
script to exit under set -eo pipefail; change the pipeline so grep's non-zero
exit won't abort the pipeline—for example, group the grep with a fallback (use {
grep "Block Sync Rate" "${NODE_LOG_FILE}" 2>/dev/null || true; } | tail -5 |
while read -r line; do ...) so the rest of the pipeline (tail and the while
loop) always runs; update the line referencing NODE_LOG_FILE and the
grep/tail/while pipeline accordingly.
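
The `|| true` grouping can be exercised in isolation; the temporary file below is a stand-in for `NODE_LOG_FILE`:

```shell
set -euo pipefail

# Stand-in log with no "Block Sync Rate" lines: exactly the case where
# bare grep exits 1 and would kill the script under pipefail.
logfile=$(mktemp)
echo "unrelated log line" > "${logfile}"

# Grouping grep with `|| true` neutralizes the no-match exit status, so
# tail and the read loop still run (over zero lines) and the script
# continues past the pipeline instead of terminating.
{ grep "Block Sync Rate" "${logfile}" 2>/dev/null || true; } | tail -5 |
  while read -r line; do
    echo "rate sample: ${line}"
  done

survived=yes
rm -f "${logfile}"
```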

In `@local-test/node-start.sh`:
- Around line 36-40: The current code re-splits MORPH_NODE_EXTRA_FLAGS with
extra_flags=(${MORPH_NODE_EXTRA_FLAGS}) which breaks quoted flags and triggers
globbing; instead initialize MORPH_NODE_EXTRA_FLAGS as a Bash array in common.sh
(e.g., MORPH_NODE_EXTRA_FLAGS=(--mainnet), guarded with
[[ -v MORPH_NODE_EXTRA_FLAGS ]] || MORPH_NODE_EXTRA_FLAGS=(--mainnet) so callers
can still override it) and in node-start.sh append it directly with
args+=("${MORPH_NODE_EXTRA_FLAGS[@]}") (remove the unquoted array re-splitting
line) so quoted multi-word flags and patterns are preserved when passed to
morph-node.
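
A sketch of the array-based flag passing; `--log.format "json pretty"` is a made-up multi-word flag, included only to show that quoting survives:

```shell
# In common.sh: define the extra flags as a real array. --mainnet comes
# from the prompt; --log.format "json pretty" is a hypothetical
# multi-word flag used to demonstrate quoting.
MORPH_NODE_EXTRA_FLAGS=(--mainnet --log.format "json pretty")

# In node-start.sh: append the array element-by-element instead of
# re-splitting an unquoted expansion.
args=(--home ./node-home)
args+=("${MORPH_NODE_EXTRA_FLAGS[@]}")

# A real script would `exec morph-node "${args[@]}"`; print one element
# per line instead to show the multi-word value stayed intact.
printf '[%s]\n' "${args[@]}"
```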

---

Outside diff comments:
In `@crates/revm/src/handler.rs`:
- Around line 322-365: validate_and_deduct_eth_fee currently always fetches
L1BlockInfo and computes/deducts L2+L1 fees even when fee charging is disabled;
change validate_and_deduct_eth_fee to check cfg.is_fee_charge_disabled() early
(using evm.ctx_ref().cfg() or the cfg from evm.ctx().all_mut()) and if true, set
evm.cached_l1_data_fee = 0 (or equivalent zero value) and return Ok(()) without
calling L1BlockInfo::try_fetch, l1_block_info.calculate_tx_l1_cost, or
calculate_caller_fee_with_l1_cost so that eth_call/estimateGas paths do not
mutate balances or depend on the oracle.

---

Duplicate comments:
In `@crates/primitives/src/transaction/morph_transaction.rs`:
- Around line 937-947: from_compact currently reads helper.data[0] and slices
helper.data without validating lengths, which can panic on empty/truncated
buffers; update from_compact (the function that calls
TxMorphCompact::from_compact and then reads helper.data, memo_len, and
constructs memo/input via Bytes::copy_from_slice) to first check that
helper.data.len() >= 1, then compute memo_len and verify helper.data.len() >= 1
+ memo_len before slicing for the memo, and finally ensure there are bytes
remaining for the input slice (or allow empty input if intended); return an
appropriate error or handle the malformed case instead of slicing when bounds
fail.

---

Nitpick comments:
In `@crates/txpool/src/error.rs`:
- Around line 127-128: Add a unit test that covers the new
MorphTxError::InvalidFormat variant to ensure is_bad_transaction() returns true
and the Display/ToString contains the expected message; specifically, in the
existing test module add a test (e.g., test_invalid_format_is_bad_transaction)
that constructs MorphTxError::InvalidFormat { reason: "test reason".to_string()
}, asserts err.is_bad_transaction() is true, and asserts err.to_string()
contains "invalid MorphTx format" (or the exact wording used in Display for
InvalidFormat).

In `@local-test-hoodi/sync-test.sh`:
- Around line 60-61: Remove the dead accumulation of sample_count and total_bps:
delete their declarations ("local elapsed=0 sample_count=0" / "local
total_bps=0") and remove any increments/uses of sample_count and total_bps
inside the sampling loop; leave peak_bps and total_blocks/duration-based average
logic intact (the final average should continue to be computed from total_blocks
and duration). This keeps symbols sample_count and total_bps out of the script
and avoids unused-variable noise while retaining total_blocks and duration for
the correct average calculation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4b6ba3fe-891d-4944-bf36-a7fdae4c7906

📥 Commits

Reviewing files that changed from the base of the PR and between b3d73fb and aba9c71.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (45)
  • Cargo.toml
  • bin/morph-reth/src/main.rs
  • crates/consensus/src/validation.rs
  • crates/engine-api/src/builder.rs
  • crates/engine-api/src/rpc.rs
  • crates/evm/src/block/mod.rs
  • crates/evm/src/block/receipt.rs
  • crates/evm/src/evm.rs
  • crates/node/src/components/pool.rs
  • crates/node/src/validator.rs
  • crates/payload/builder/src/builder.rs
  • crates/payload/builder/src/error.rs
  • crates/primitives/src/receipt/envelope.rs
  • crates/primitives/src/transaction/envelope.rs
  • crates/primitives/src/transaction/morph_transaction.rs
  • crates/revm/Cargo.toml
  • crates/revm/src/evm.rs
  • crates/revm/src/handler.rs
  • crates/revm/src/l1block.rs
  • crates/revm/src/precompiles.rs
  • crates/revm/src/token_fee.rs
  • crates/revm/src/tx.rs
  • crates/rpc/src/eth/transaction.rs
  • crates/txpool/src/error.rs
  • crates/txpool/src/maintain.rs
  • crates/txpool/src/morph_tx_validation.rs
  • crates/txpool/src/validator.rs
  • local-test-hoodi/.gitignore
  • local-test-hoodi/common.sh
  • local-test-hoodi/geth-start.sh
  • local-test-hoodi/geth-stop.sh
  • local-test-hoodi/node-start.sh
  • local-test-hoodi/node-stop.sh
  • local-test-hoodi/prepare.sh
  • local-test-hoodi/reset.sh
  • local-test-hoodi/reth-start.sh
  • local-test-hoodi/reth-stop.sh
  • local-test-hoodi/start-all.sh
  • local-test-hoodi/status.sh
  • local-test-hoodi/stop-all.sh
  • local-test-hoodi/sync-test.sh
  • local-test/common.sh
  • local-test/node-start.sh
  • local-test/reth-start.sh
  • local-test/sync-test.sh
💤 Files with no reviewable changes (2)
  • crates/revm/Cargo.toml
  • crates/payload/builder/src/error.rs
🚧 Files skipped from review as they are similar to previous changes (14)
  • bin/morph-reth/src/main.rs
  • local-test-hoodi/reset.sh
  • local-test/common.sh
  • local-test-hoodi/node-start.sh
  • crates/evm/src/block/receipt.rs
  • local-test-hoodi/stop-all.sh
  • crates/node/src/validator.rs
  • local-test/reth-start.sh
  • crates/node/src/components/pool.rs
  • crates/txpool/src/morph_tx_validation.rs
  • crates/payload/builder/src/builder.rs
  • local-test-hoodi/.gitignore
  • local-test/sync-test.sh
  • local-test-hoodi/reth-stop.sh

: "${RETH_BIN:=./target/release/morph-reth}"
: "${RETH_DATA_DIR:=./local-test-hoodi/reth-data}"
: "${RETH_LOG_FILE:=./local-test-hoodi/reth.log}"
: "${RETH_HTTP_ADDR:=0.0.0.0}"

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🟠 Major

Default the RPC bind address to localhost.

RETH_HTTP_ADDR is reused by both reth-start.sh and geth-start.sh. With 0.0.0.0 as the default, those scripts expose debug-capable HTTP RPC on every interface unless the caller remembers to override it. For a local harness, 127.0.0.1 should be the safe default.

🩹 Suggested fix
-: "${RETH_HTTP_ADDR:=0.0.0.0}"
+: "${RETH_HTTP_ADDR:=127.0.0.1}"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
: "${RETH_HTTP_ADDR:=0.0.0.0}"
: "${RETH_HTTP_ADDR:=127.0.0.1}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/common.sh` at line 32, The default RPC bind address
currently sets RETH_HTTP_ADDR to 0.0.0.0 which exposes the HTTP RPC on all
interfaces; change the default to 127.0.0.1 by updating the RETH_HTTP_ADDR
default assignment in local-test-hoodi/common.sh (the :
"${RETH_HTTP_ADDR:=0.0.0.0}" line) so local reth-start.sh/geth-start.sh
instances bind only to localhost.

panos-xyz (author):

Dev-only script for local testing. 0.0.0.0 is intentional — makes the RPC accessible from Docker containers and the host machine during multi-process test setups.

coderabbitai bot:

@panos-xyz, understood — binding to 0.0.0.0 makes sense for a multi-process local test harness where Docker containers and the host machine all need to reach the RPC endpoint. Thanks for the context!


✏️ Learnings added
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/common.sh:32-32
Timestamp: 2026-03-10T01:38:12.316Z
Learning: In `local-test-hoodi/common.sh`, `RETH_HTTP_ADDR` defaults to `0.0.0.0` intentionally. This is a dev-only local testing script designed for multi-process setups (Docker containers + host machine), so the RPC must be accessible on all interfaces, not just localhost.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/reset.sh:38-41
Timestamp: 2026-03-10T01:33:59.039Z
Learning: In `local-test-hoodi/reset.sh`, the `|| true` after `stop-all.sh` and `pm2_stop "morph-geth"` is intentional. This is a dev-only reset script where stop commands are expected to fail when processes aren't running (e.g., after a crash or manual stop), and the cleanup should proceed unconditionally regardless of process state.

Learnt from: anylots
Repo: morph-l2/morph-reth PR: 8
File: crates/revm/src/handler.rs:305-362
Timestamp: 2026-01-20T02:14:54.857Z
Learning: In Morph's token fee implementation (crates/revm/src/handler.rs), the refund calculation using `calculate_token_amount` (which rounds up) for both charging and refunding is consistent with geth's implementation and is intentional behavior.

Comment on lines +32 to +40
--http \
--http.addr "${RETH_HTTP_ADDR}" \
--http.port "${RETH_HTTP_PORT}" \
--http.corsdomain "*" \
--http.vhosts "*" \
--http.api "web3,eth,debug,txpool,net,morph,engine" \
--authrpc.addr "${RETH_AUTHRPC_ADDR}" \
--authrpc.port "${RETH_AUTHRPC_PORT}" \
--authrpc.vhosts "*" \

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🟠 Major

Make wildcard CORS/vhosts opt-in.

This script always enables --http.corsdomain "*" --http.vhosts "*". If someone exposes the bind address beyond localhost, browser access to the debug/txpool RPC surface is wide open. Gate these behind explicit env vars instead of defaulting to *.

🩹 Suggested fix
-  --http.corsdomain "*" \
-  --http.vhosts "*" \
+  --http.corsdomain "${GETH_HTTP_CORSDOMAIN:-http://127.0.0.1}" \
+  --http.vhosts "${GETH_HTTP_VHOSTS:-localhost,127.0.0.1}" \
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/geth-start.sh` around lines 32 - 40, The script currently
unconditionally adds the flags --http.corsdomain "*" and --http.vhosts "*" which
should be opt-in; change the logic that builds the reth command so those flags
are appended only when explicit env vars are set (e.g.
RETH_ALLOW_WILDCARD_CORS="true" and RETH_ALLOW_WILDCARD_VHOSTS="true"),
otherwise omit --http.corsdomain and --http.vhosts; use the existing RETH_* env
var pattern (like RETH_HTTP_ADDR/RETH_HTTP_PORT) to detect the opt-in and check
for a value of "true" (case-insensitive) before including the corresponding
flags.

panos-xyz (author):

Dev-only script for local testing — not for production deployment. Wildcard CORS is intentional for dev convenience with browser-based tools.

coderabbitai bot:

@panos-xyz, understood — noted that the wildcard --http.corsdomain "*" and --http.vhosts "*" settings in local-test-hoodi/geth-start.sh are intentional for dev convenience with browser-based tools, and this is a local-testing-only script not intended for production. I'll keep this in mind for future reviews.

ʕ •ᴥ•ʔ noted and filed away in the burrow!


✏️ Learnings added
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/geth-start.sh:32-40
Timestamp: 2026-03-10T01:38:17.613Z
Learning: In `local-test-hoodi/geth-start.sh`, the flags `--http.corsdomain "*"` and `--http.vhosts "*"` are intentional. This is a dev-only local testing script (not for production), and wildcard CORS/vhosts are set deliberately for convenience with browser-based RPC tools. Do not flag these as security issues in this file.



🧠 Learnings used
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/reset.sh:38-41
Timestamp: 2026-03-10T01:33:59.039Z
Learning: In `local-test-hoodi/reset.sh`, the `|| true` after `stop-all.sh` and `pm2_stop "morph-geth"` is intentional. This is a dev-only reset script where stop commands are expected to fail when processes aren't running (e.g., after a crash or manual stop), and the cleanup should proceed unconditionally regardless of process state.

Learnt from: anylots
Repo: morph-l2/morph-reth PR: 8
File: crates/revm/src/handler.rs:305-362
Timestamp: 2026-01-20T02:14:54.857Z
Learning: In Morph's token fee implementation (crates/revm/src/handler.rs), the refund calculation using `calculate_token_amount` (which rounds up) for both charging and refunding is consistent with geth's implementation and is intentional behavior.

Comment on lines +25 to +33
# Wait for RPC to be ready
echo "Waiting for RPC..."
max_retries=60
retry_count=0
while [[ ${retry_count} -lt ${max_retries} ]]; do
if curl -s -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
"http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" >/dev/null 2>&1; then

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify that the readiness probe ignores the JSON-RPC body today.
sed -n '25,33p' local-test-hoodi/start-all.sh
rg -n '\.result|jq' local-test-hoodi/start-all.sh

Repository: morph-l2/morph-reth

Length of output: 405


🏁 Script executed:

# Check for other readiness probes in the codebase
rg -n 'curl.*POST.*jsonrpc|readiness|health.*check' --type sh -i

Repository: morph-l2/morph-reth

Length of output: 256


🏁 Script executed:

# Check the broader context of the start-all.sh file to understand the workflow
sed -n '20,45p' local-test-hoodi/start-all.sh | cat -n

Repository: morph-l2/morph-reth

Length of output: 929


🏁 Script executed:

# Check what happens after the RPC readiness check (after line 33)
sed -n '33,50p' local-test-hoodi/start-all.sh | cat -n

Repository: morph-l2/morph-reth

Length of output: 630


🏁 Script executed:

# Verify curl behavior: does -s flag suppress HTTP error exit codes?
# Looking for curl usage patterns in the repo
rg -A 2 'curl.*-s' local-test-hoodi/ --type sh | head -40

Repository: morph-l2/morph-reth

Length of output: 2136


🏁 Script executed:

# Verify how sync-test.sh validates the JSON-RPC response
sed -n '1,30p' local-test-hoodi/sync-test.sh | cat -n

Repository: morph-l2/morph-reth

Length of output: 1455


🏁 Script executed:

# Search for the specific line in sync-test.sh that captures and validates the curl result
rg -B 2 -A 5 'eth_chainId' local-test-hoodi/sync-test.sh

Repository: morph-l2/morph-reth

Length of output: 352


Validate JSON-RPC response before treating RPC as ready.

This loop only checks curl exit status and discards the response body. If another process is bound to RETH_HTTP_PORT, or the endpoint returns an HTTP error or JSON-RPC error response, the script still proceeds. The codebase already has a validated pattern in sync-test.sh's get_block_number() function (lines 22–30) that extracts and validates .result. Apply the same approach here.

🩹 Suggested fix
 while [[ ${retry_count} -lt ${max_retries} ]]; do
-  if curl -s -X POST \
+  if curl -fsS -X POST \
     -H "Content-Type: application/json" \
     --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
-    "http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" >/dev/null 2>&1; then
+    "http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" 2>/dev/null | jq -e '.result != null' >/dev/null; then
     echo "RPC ready"
     break
   fi
📝 Committable suggestion


Suggested change
# Wait for RPC to be ready
echo "Waiting for RPC..."
max_retries=60
retry_count=0
while [[ ${retry_count} -lt ${max_retries} ]]; do
if curl -s -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
"http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" >/dev/null 2>&1; then
# Wait for RPC to be ready
echo "Waiting for RPC..."
max_retries=60
retry_count=0
while [[ ${retry_count} -lt ${max_retries} ]]; do
if curl -fsS -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
"http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" 2>/dev/null | jq -e '.result != null' >/dev/null; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/start-all.sh` around lines 25 - 33, The readiness loop in
start-all.sh only checks curl's exit code and discards the response body, so
non-JSON-RPC or error responses can falsely mark RPC as ready; update the loop
to capture the curl response (against
"http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" for the eth_chainId call), then
validate the JSON-RPC .result field (like the existing get_block_number()
pattern in sync-test.sh) using jq -e (or equivalent) to ensure a valid chainId
is returned before breaking the loop and proceeding.

panos-xyz (author):

Dev-only script. The readiness probe only needs to confirm the RPC is accepting connections — the response body doesn't matter for a startup check.

coderabbitai bot:

@panos-xyz, understood — for a dev-only startup script, a connection-level check is sufficient and adding jq parsing would be unnecessary overhead. Acknowledged.


✏️ Learnings added
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/start-all.sh:25-33
Timestamp: 2026-03-10T01:38:22.274Z
Learning: In `local-test-hoodi/start-all.sh`, the RPC readiness probe in the `start-all.sh` loop intentionally uses only curl's exit code to confirm the endpoint is accepting connections. This is a dev-only script and validating the JSON-RPC response body (e.g., `.result`) is not required — a successful TCP connection is sufficient for the startup check.



🧠 Learnings used
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/reset.sh:38-41
Timestamp: 2026-03-10T01:33:59.039Z
Learning: In `local-test-hoodi/reset.sh`, the `|| true` after `stop-all.sh` and `pm2_stop "morph-geth"` is intentional. This is a dev-only reset script where stop commands are expected to fail when processes aren't running (e.g., after a crash or manual stop), and the cleanup should proceed unconditionally regardless of process state.

Learnt from: anylots
Repo: morph-l2/morph-reth PR: 8
File: crates/revm/src/handler.rs:305-362
Timestamp: 2026-01-20T02:14:54.857Z
Learning: In Morph's token fee implementation (crates/revm/src/handler.rs), the refund calculation using `calculate_token_amount` (which rounds up) for both charging and refunding is consistent with geth's implementation and is intentional behavior.

Comment on lines +24 to +27
chain_id=$(curl -s -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
"http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" 2>/dev/null | jq -r '.result // "error"')

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Demonstrate the errexit behavior and show the affected assignments.
if bash -c 'set -euo pipefail; out=$(false); echo unreachable' >/dev/null 2>&1; then
  echo "unexpected: command substitution did not abort"
else
  echo "expected: failing command substitution aborts with set -e"
fi

sed -n '22,67p' local-test-hoodi/status.sh
sed -n '78,99p' local-test-hoodi/status.sh

Repository: morph-l2/morph-reth

Length of output: 2563


🏁 Script executed:

head -n 25 local-test-hoodi/status.sh

Repository: morph-l2/morph-reth

Length of output: 696


🏁 Script executed:

# Verify all affected line ranges mentioned in the review
sed -n '24,27p' local-test-hoodi/status.sh | cat -n
echo "---"
sed -n '38,41p' local-test-hoodi/status.sh | cat -n
echo "---"
sed -n '51,54p' local-test-hoodi/status.sh | cat -n
echo "---"
sed -n '64,67p' local-test-hoodi/status.sh | cat -n
echo "---"
sed -n '80p' local-test-hoodi/status.sh | cat -n
echo "---"
sed -n '96p' local-test-hoodi/status.sh | cat -n

Repository: morph-l2/morph-reth

Length of output: 1287


These probes abort status.sh instead of falling back.

With set -euo pipefail, any unreachable endpoint or jq parse failure inside these assignments terminates the script before the "unavailable" / "RPC not available" branches run. Wrap the probe in a helper that absorbs failures and branch on the captured output instead.

🩹 Suggested fix
+rpc_result() {
+  local url="$1"
+  local body="$2"
+  curl -fsS -X POST \
+    -H "Content-Type: application/json" \
+    --data "${body}" \
+    "${url}" 2>/dev/null | jq -r '.result // empty' 2>/dev/null || true
+}
+
 # Chain ID
 echo -n "Chain ID: "
-chain_id=$(curl -s -X POST \
-  -H "Content-Type: application/json" \
-  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
-  "http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" 2>/dev/null | jq -r '.result // "error"')
-if [[ "${chain_id}" != "error" && "${chain_id}" != "null" ]]; then
+chain_id=$(rpc_result "http://${RETH_HTTP_ADDR}:${RETH_HTTP_PORT}" '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}')
+if [[ -n "${chain_id}" ]]; then
   # Convert hex to decimal
   chain_id_dec=$((chain_id))
   echo "${chain_id} (${chain_id_dec})"
 else
   echo "unavailable"
 fi

Apply the same helper pattern to the other JSON-RPC probes, and use curl ... || true for the Tendermint /status and /net_info assignments.

Also applies to: 38-41, 51-54, 64-67, 80, 96
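For the Tendermint /status and /net_info probes, the same absorb-the-failure pattern can be sketched as follows (the endpoint URL and variable names here are hypothetical, not taken from status.sh):

```shell
#!/usr/bin/env bash
# Sketch only: probe an HTTP endpoint without tripping `set -euo pipefail`.
# The URL below is a placeholder Tendermint RPC address, not from status.sh.
set -euo pipefail

probe_url() {
  # Print the response body on success, nothing on failure; always exit 0.
  curl -fsS "$1" 2>/dev/null || true
}

tm_status=$(probe_url "http://127.0.0.1:26657/status")
if [[ -n "${tm_status}" ]]; then
  echo "Tendermint: up"
else
  echo "Tendermint: unavailable"
fi
```

Because probe_url never returns non-zero, the assignment cannot abort the script; the branch on the captured output decides what to print.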

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/status.sh` around lines 24 - 27, The assignments like
chain_id in status.sh abort the script under set -euo pipefail when curl or jq
fail; create a helper (e.g., probe_json_rpc or safe_probe) that runs the curl/jq
pipeline but absorbs failures (use curl ... || true and capture exit-safe
output), return a default token like "unavailable" or "error" on failure, and
replace the direct assignments (chain_id and the other JSON-RPC probe variables
and the Tendermint /status and /net_info probes) to call this helper so the
script can branch on the captured output instead of exiting immediately.

Contributor Author


Dev-only script. The set -e + curl pattern works fine here — if curl fails, the script exits which is the desired behavior (no point continuing status checks if RPC is unreachable).



@panos-xyz, understood — makes sense for a dev-only status script. If the RPC is unreachable there's nothing meaningful left to display, so aborting early is a reasonable design choice.


✏️ Learnings added
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/status.sh:24-27
Timestamp: 2026-03-10T01:38:27.215Z
Learning: In `local-test-hoodi/status.sh`, the `set -euo pipefail` + bare `curl | jq` assignment pattern (without `|| true`) is intentional. This is a dev-only status script where aborting on RPC/curl failure is the desired behavior — there is no value in continuing status checks if the endpoint is unreachable.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: panos-xyz
Repo: morph-l2/morph-reth PR: 40
File: local-test-hoodi/reset.sh:38-41
Timestamp: 2026-03-10T01:33:59.039Z
Learning: In `local-test-hoodi/reset.sh`, the `|| true` after `stop-all.sh` and `pm2_stop "morph-geth"` is intentional. This is a dev-only reset script where stop commands are expected to fail when processes aren't running (e.g., after a crash or manual stop), and the cleanup should proceed unconditionally regardless of process state.

Learnt from: anylots
Repo: morph-l2/morph-reth PR: 8
File: crates/revm/src/handler.rs:305-362
Timestamp: 2026-01-20T02:14:54.857Z
Learning: In Morph's token fee implementation (crates/revm/src/handler.rs), the refund calculation using `calculate_token_amount` (which rounds up) for both charging and refunding is consistent with geth's implementation and is intentional behavior.

Comment on lines +275 to +277
grep "Block Sync Rate" "${NODE_LOG_FILE}" 2>/dev/null | tail -5 | while read -r line; do
echo " ${line}"
done

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🟡 Minor

Same grep pipeline issue as in Phase 1.

Apply the same fix here to prevent script termination if no log matches are found.

🛡️ Proposed fix
-  grep "Block Sync Rate" "${NODE_LOG_FILE}" 2>/dev/null | tail -5 | while read -r line; do
+  grep "Block Sync Rate" "${NODE_LOG_FILE}" 2>/dev/null | tail -5 | while read -r line; do
     echo "    ${line}"
-  done
+  done || true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test-hoodi/sync-test.sh` around lines 275 - 277, The grep pipeline that
reads lines matching "Block Sync Rate" from "${NODE_LOG_FILE}" can cause the
script to exit when no matches are found (because grep returns non‑zero), so
modify the pipeline to ignore grep's non‑zero exit (e.g., wrap the grep in a
command group or append a "|| true") before piping into tail and the while loop;
specifically update the line invoking grep ("grep \"Block Sync Rate\"
\"${NODE_LOG_FILE}\" 2>/dev/null") so its failure is neutralized (for example: {
grep ... 2>/dev/null || true; } | tail -5 | while read -r line; do ...) to
prevent script termination when there are no matches.
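A minimal reproduction of the failure mode (the log contents here are synthetic, not the real NODE_LOG_FILE):

```shell
#!/usr/bin/env bash
# Minimal repro: under `set -euo pipefail`, a no-match grep fails the whole
# pipeline, so the guarded form is needed. Log contents are synthetic.
set -euo pipefail

log=$(mktemp)
printf 'boot ok\nsync start\n' > "$log"   # no "Block Sync Rate" lines

# Guarded: grep exits 1 on no match, pipefail propagates that status through
# the pipeline, and `|| true` absorbs it so execution continues.
grep "Block Sync Rate" "$log" | tail -5 | while read -r line; do
  echo "  ${line}"
done || true

reached_end=1
echo "script reached the end"
rm -f "$log"
```

Without the `|| true`, the pipeline's non-zero status would abort the script before the final echo.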

Contributor Author


Same as above — dev-only, early exit on no matches is acceptable.

Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Rate Limit Exceeded

@panos-xyz has exceeded the limit for the number of chat messages per hour. Please wait 0 minutes and 13 seconds before sending another message.

Comment on lines +36 to +40
if [[ -n "${MORPH_NODE_EXTRA_FLAGS}" ]]; then
# shellcheck disable=SC2206
extra_flags=(${MORPH_NODE_EXTRA_FLAGS})
args+=("${extra_flags[@]}")
fi

@coderabbitai coderabbitai bot Mar 9, 2026


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail

tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
cd "$tmp"
touch file_a file_b

MORPH_NODE_EXTRA_FLAGS='--label "alpha beta" --pattern *'

# Current implementation
# shellcheck disable=SC2206
current=(${MORPH_NODE_EXTRA_FLAGS})

printf 'current (%d args):\n' "${#current[@]}"
for arg in "${current[@]}"; do
  printf '<%s>\n' "$arg"
done

printf '\nreference array form:\n'
reference=(--label "alpha beta" --pattern '*')
for arg in "${reference[@]}"; do
  printf '<%s>\n' "$arg"
done

Repository: morph-l2/morph-reth

Length of output: 199


🏁 Script executed:

# Check if the files exist and examine their current content
git ls-files local-test/ | head -20

Repository: morph-l2/morph-reth

Length of output: 386


🏁 Script executed:

# Examine the node-start.sh file around lines 36-40
head -45 local-test/node-start.sh | tail -15

Repository: morph-l2/morph-reth

Length of output: 477


🏁 Script executed:

# Check if common.sh exists and search for MORPH_NODE_EXTRA_FLAGS definition
if [ -f local-test/common.sh ]; then
  grep -n "MORPH_NODE_EXTRA_FLAGS" local-test/common.sh
else
  echo "common.sh not found"
fi

Repository: morph-l2/morph-reth

Length of output: 106


🏁 Script executed:

# Read common.sh to see full context around line 15
head -25 local-test/common.sh | cat -n

Repository: morph-l2/morph-reth

Length of output: 1336


🏁 Script executed:

# Search for all usages of MORPH_NODE_EXTRA_FLAGS in the repository
rg "MORPH_NODE_EXTRA_FLAGS" --type sh

Repository: morph-l2/morph-reth

Length of output: 471


🏁 Script executed:

# Verify the array length check syntax is valid
bash << 'EOF'
# Test the proposed fix syntax
declare -a test_array=("--label" "alpha beta" "--pattern" "*")

# Check if the array length syntax works
if ((${#test_array[@]})); then
  echo "Array check: PASS (array has elements)"
  printf "Array has %d elements\n" "${#test_array[@]}"
fi

# Also test with empty array
declare -a empty_array=()
if ((${#empty_array[@]})); then
  echo "Empty array: FAIL"
else
  echo "Empty array: PASS (correctly detected as empty)"
fi
EOF

Repository: morph-l2/morph-reth

Length of output: 170


Preserve extra flags as an array instead of re-splitting a string.

Line 38 loses information: the unquoted expansion in extra_flags=(${MORPH_NODE_EXTRA_FLAGS}) word-splits without honoring embedded quoting and expands globs, so flags like --label "alpha beta" or --pattern * are corrupted before morph-node sees them. Since common.sh is sourced, define MORPH_NODE_EXTRA_FLAGS as a Bash array there and append it directly here.

Proposed fix

In local-test/node-start.sh (lines 36-40):

-if [[ -n "${MORPH_NODE_EXTRA_FLAGS}" ]]; then
-  # shellcheck disable=SC2206
-  extra_flags=(${MORPH_NODE_EXTRA_FLAGS})
-  args+=("${extra_flags[@]}")
-fi
+if ((${#MORPH_NODE_EXTRA_FLAGS[@]})); then
+  args+=("${MORPH_NODE_EXTRA_FLAGS[@]}")
+fi

In local-test/common.sh (line 15), change:

: "${MORPH_NODE_EXTRA_FLAGS:=--mainnet}"

to:

: "${MORPH_NODE_EXTRA_FLAGS[@]:=(--mainnet)}"

or initialize as an array with multiple values if needed:

MORPH_NODE_EXTRA_FLAGS=(--mainnet)
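To see concretely what the unquoted expansion does to a multi-word flag, a small comparison with illustrative values (not the real MORPH_NODE_EXTRA_FLAGS contents):

```shell
#!/usr/bin/env bash
# Illustrative values only — not the real MORPH_NODE_EXTRA_FLAGS contents.
flags_str='--label "alpha beta"'

# shellcheck disable=SC2206  # intentional word-splitting, as in node-start.sh
str_split=(${flags_str})          # splits on whitespace; quotes stay literal
flags_arr=(--label "alpha beta")  # array form preserves the two-word value

echo "string form: ${#str_split[@]} args"   # 3: --label, "alpha, beta"
echo "array form:  ${#flags_arr[@]} args"   # 2: --label, alpha beta
```

The string form hands morph-node three mangled arguments with literal quote characters, while the array form passes the intended two.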
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@local-test/node-start.sh` around lines 36 - 40, The current code re-splits
MORPH_NODE_EXTRA_FLAGS with extra_flags=(${MORPH_NODE_EXTRA_FLAGS}) which breaks
quoted flags and globbing; instead initialize MORPH_NODE_EXTRA_FLAGS as a Bash
array in common.sh (e.g., MORPH_NODE_EXTRA_FLAGS=(--mainnet) or use :
"${MORPH_NODE_EXTRA_FLAGS[@]:=(--mainnet)}") and in node-start.sh append it
directly with args+=("${MORPH_NODE_EXTRA_FLAGS[@]}") (remove the unquoted array
re-splitting line) so quoted multi-word flags and patterns are preserved when
passed to morph-node.

Contributor Author


Dev-only script. MORPH_NODE_EXTRA_FLAGS is only set by the operator themselves — word splitting is intentional to allow multiple flags in a single env var (common shell script pattern for extra flags).



Rate Limit Exceeded

@panos-xyz has exceeded the limit for the number of chat messages per hour. Please wait 0 minutes and 13 seconds before sending another message.

The validate_l1_messages_in_block header consistency check used strict
equality (header.next_l1_msg_index == last_queue_index + 1), which
rejected blocks where the sequencer skipped L1 messages. Morph allows
L1 messages to be skipped (not included in block body but counted as
processed), as documented by go-eth's NumL1MessagesProcessed():
"This count includes both skipped and included messages."

Block 628697 (mainnet) triggered this with queue_index 2572, 2573 and
header next_l1_msg_index=2575, meaning queue_index 2574 was skipped.

Change the check from != to < so header.next_l1_msg_index must be at
least last_queue_index + 1 but may be greater when messages are skipped.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/consensus/src/validation.rs`:
- Around line 492-494: Update the doc comment for the "Header Consistency" rule
to reflect the relaxed invariant: instead of stating that the last L1 message's
`queue_index + 1` must equal `header.next_l1_msg_index`, document that if L1
messages are present the header's `header.next_l1_msg_index` must be greater
than or equal to the last L1 message's `queue_index + 1` to allow skipped L1
messages; reference the same symbols (`header.next_l1_msg_index` and the last
message's `queue_index`) so readers know the intended non-strict (>=)
relationship used by the implementation and tests.
- Around line 232-243: The current cross-block check in
validate_block_pre_execution (the header check comparing
header.next_l1_msg_index to parent.next_l1_msg_index) allows replayed L1 indices
because it only enforces non-decrease; update the validator to also ensure the
first in-block L1 message index in the block body is >= parent.next_l1_msg_index
(i.e., when inspecting the block's L1 messages verify messages[0].index >=
parent.next_l1_msg_index) or, if that body-level check already exists elsewhere,
add a regression test in the import path that exercises a parent with
next_l1_msg_index=10, a body containing L1(5),L1(6), and a
header.next_l1_msg_index=10 to ensure such replayed indices are rejected;
reference header.next_l1_msg_index, parent.next_l1_msg_index, the in-block L1
message sequence, and validate_block_pre_execution when implementing the fix.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6a3177f1-996b-4764-91fa-e76550c87a1e

📥 Commits

Reviewing files that changed from the base of the PR and between aba9c71 and 96eb175.

📒 Files selected for processing (1)
  • crates/consensus/src/validation.rs

Comment on lines +232 to +243
// Cross-block L1 message index monotonicity: next_l1_msg_index must never
// decrease across blocks. This is the header-only half of L1 message
// validation; the body-level half is in validate_block_pre_execution.
if header.next_l1_msg_index < parent.next_l1_msg_index {
return Err(ConsensusError::Other(
MorphConsensusError::InvalidNextL1MessageIndex {
expected: parent.next_l1_msg_index,
actual: header.next_l1_msg_index,
}
.to_string(),
));
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the full context of the validation function at lines 232-243
sed -n '220,260p' crates/consensus/src/validation.rs

Repository: morph-l2/morph-reth

Length of output: 1715


🏁 Script executed:

# Now let's look at the validate_l1_messages_in_block function at lines 566-599
sed -n '556,610p' crates/consensus/src/validation.rs

Repository: morph-l2/morph-reth

Length of output: 2189


🏁 Script executed:

# Let's search for any validation that checks L1 queue index against parent frontier
rg "next_l1_msg_index" crates/consensus/src/validation.rs -B 2 -A 2

Repository: morph-l2/morph-reth

Length of output: 4766


🏁 Script executed:

# Let's check if there's any import-time validation or execution-path validation
# Search for L1 message validation in broader context
fd . crates/consensus/src -type f -name "*.rs" | head -20

Repository: morph-l2/morph-reth

Length of output: 233


🏁 Script executed:

# Search for files that handle L1 transaction execution or validation
find crates -name "*.rs" -type f | head -30

Repository: morph-l2/morph-reth

Length of output: 947


🏁 Script executed:

# Look for any validation of L1 message queue indices
rg "queue_index|queueIndex" crates/ --type rust -l

Repository: morph-l2/morph-reth

Length of output: 425


🏁 Script executed:

# Search for pre-execution or execution validation
rg "validate_block_pre_execution|pre_execution" crates/ --type rust -B 2 -A 5

Repository: morph-l2/morph-reth

Length of output: 8560


🏁 Script executed:

# Check where L1Msg transactions are actually processed
rg "L1Msg|L1Message" crates/consensus/src/validation.rs -B 1 -A 3

Repository: morph-l2/morph-reth

Length of output: 2822


🏁 Script executed:

# Look at the block execution/import paths
fd "execute\|import" crates --type f -name "*.rs" | head -20

Repository: morph-l2/morph-reth

Length of output: 293


🏁 Script executed:

# Search for L1 message processing/execution in the evm module
rg "queue_index|L1Msg" crates/evm/src/ -B 2 -A 5

Repository: morph-l2/morph-reth

Length of output: 899


🏁 Script executed:

# Check the engine validator for L1 message handling
cat crates/engine-api/src/validator.rs

Repository: morph-l2/morph-reth

Length of output: 3962


🏁 Script executed:

# Search for any state tracking of processed L1 messages
rg "processed.*l1|l1.*processed|frontier" crates/evm/src/ --ignore-case

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Look at block execution flow to see where L1 messages are handled
sed -n '1,100p' crates/evm/src/block/mod.rs

Repository: morph-l2/morph-reth

Length of output: 4001


🏁 Script executed:

# Check payload builder for L1 message validation
rg "l1.*message|queue_index" crates/payload/builder/src/builder.rs -B 3 -A 3

Repository: morph-l2/morph-reth

Length of output: 3332


🏁 Script executed:

# Search for block import or finalization that might validate L1 messages
find crates -name "*.rs" -type f -exec grep -l "import\|finalize\|apply_block" {} \; | head -15

Repository: morph-l2/morph-reth

Length of output: 339


🏁 Script executed:

# Look at what happens with L1 messages during state application
rg "apply.*block|commit.*block|BlockBody" crates/ --type rust -l | head -10

Repository: morph-l2/morph-reth

Length of output: 297


🏁 Script executed:

# Check if there's any validation of L1 message queue indices during execution
rg "queue_index.*parent\|parent.*queue_index" crates/ --type rust

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Search for any state tracking of processed frontier or similar
rg "processed.*frontier\|l1_frontier\|message_index" crates/ --type rust -i

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Get full context of payload builder's execute_l1_messages
sed -n '1,150p' crates/payload/builder/src/builder.rs | head -80

Repository: morph-l2/morph-reth

Length of output: 3530


🏁 Script executed:

# Search for where blocks from the network are validated vs. locally built
rg "validate_header_against_parent|consensus.*validate" crates/ --type rust -l | head -10

Repository: morph-l2/morph-reth

Length of output: 97


🏁 Script executed:

# Check if L1 messages can only come from a single source/queue
rg "L1.*queue\|sequencer.*l1" crates/payload/ --type rust -B 2 -A 2

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Understand the architecture: are blocks always built locally or can they arrive externally?
rg "import.*block\|new_payload\|validate.*external" crates/engine-api/ --type rust -B 2 -A 2

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Check if this is a single sequencer system or multi-validator
rg "sequencer\|validator.*node\|block.*source" crates/consensus/src/validation.rs -B 2 -A 2

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Look at the high-level documentation about how blocks enter the system
sed -n '1,100p' crates/consensus/src/lib.rs

Repository: morph-l2/morph-reth

Length of output: 1146


🏁 Script executed:

# Check if there's a difference between building and validating blocks
rg "new_payload\|engine.*validate\|import.*block" crates/engine-api/src/api.rs -B 3 -A 3

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Understand: does the consensus layer validate external blocks or only locally built?
rg "new_payload\|engine_newPayloadV" crates/ --type rust -A 10 | head -50

Repository: morph-l2/morph-reth

Length of output: 45


🏁 Script executed:

# Check the comment around L1 message validation to understand the design intent
sed -n '450,520p' crates/consensus/src/validation.rs

Repository: morph-l2/morph-reth

Length of output: 2716


Replayed L1 queue indices can bypass the validator if sequencer is faulty.

The consensus validator accepts parent.next=10, txs=[L1(5), L1(6)], header.next=10 because the non-decreasing cross-block check (header.next >= parent.next) and the sequential in-block check do not prevent re-including indices already processed. While the payload builder enforces an exact match against the parent frontier during block construction, the consensus validator should not rely solely on sequencer honesty. For defense-in-depth, either add a validator check that first in-block L1 message index must be ≥ parent frontier, or add a regression test in the import path that validates both constraints together.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/consensus/src/validation.rs` around lines 232 - 243, The current
cross-block check in validate_block_pre_execution (the header check comparing
header.next_l1_msg_index to parent.next_l1_msg_index) allows replayed L1 indices
because it only enforces non-decrease; update the validator to also ensure the
first in-block L1 message index in the block body is >= parent.next_l1_msg_index
(i.e., when inspecting the block's L1 messages verify messages[0].index >=
parent.next_l1_msg_index) or, if that body-level check already exists elsewhere,
add a regression test in the import path that exercises a parent with
next_l1_msg_index=10, a body containing L1(5),L1(6), and a
header.next_l1_msg_index=10 to ensure such replayed indices are rejected;
reference header.next_l1_msg_index, parent.next_l1_msg_index, the in-block L1
message sequence, and validate_block_pre_execution when implementing the fix.

- Fix base fee doc: "after Curie" → "always active" (matches code)
- Fix receipt doc: token_fee_info → morph_tx_fields, add missing fields
- Fix get_morph_tx_fields doc: add missing sender parameter
- Fix transfer ABI: transfer(address,amount) → transfer(address,uint256)
- Fix checked_add comment: clarify difference from go-eth unbounded Add
- Fix reimburse comment: remove misleading "system transactions" mention
- Fix L1 message validation doc: equality → >= (skipped messages)
- Fix evm.rs typo: "Consumed" → "Consumes"
- Remove ~12 redundant "restates the code" comments in handler.rs
The ForkchoiceState sent to reth's engine tree always had
finalized_block_hash = B256::ZERO. Because reth's changeset-cache
eviction, sidechain pruning, and block-buffer cleanup all depend on a
non-zero finalized hash, in-memory state grew without bound during
historical sync, eventually causing OOM on 32 GiB servers.

Morph uses Tendermint instant finality so every committed block is
final. During historical sync — before BlockTagService can provide
L1-based values (all batches already finalized, state roots may be
deleted) — FCU now falls back to head as finalized, enabling memory
cleanup. Once BlockTagService starts providing L1 finalized hashes
(via set_block_tags), FCU forwards those instead, keeping RPC-visible
tags consistent with actual L1 finalization status.

EngineStateTracker is extended with a BlockTagCache so set_block_tags
values survive across FCU calls without race conditions.

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
crates/engine-api/src/builder.rs (1)

852-866: Test helper diverges from production FeeVault logic.

The test-only apply_executable_data_overrides function unconditionally sets beneficiary = data.miner (line 855), whereas the production execution_payload_from_executable_data applies the FeeVault override. This divergence is acceptable for isolated unit tests but worth noting if these tests are later used to validate full header construction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/engine-api/src/builder.rs` around lines 852 - 866, The test helper
currently unconditionally sets header.inner.beneficiary = data.miner, diverging
from production; update apply_executable_data_overrides so it applies the
FeeVault override the same way execution_payload_from_executable_data does
(i.e., compute or call the same FeeVault override logic instead of always using
data.miner) — locate the closure in block.map_header and replace the direct
assignment to beneficiary with a call to the production fee-override codepath or
reuse the helper that determines the effective beneficiary so test headers match
production behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@crates/engine-api/src/builder.rs`:
- Around line 852-866: The test helper currently unconditionally sets
header.inner.beneficiary = data.miner, diverging from production; update
apply_executable_data_overrides so it applies the FeeVault override the same way
execution_payload_from_executable_data does (i.e., compute or call the same
FeeVault override logic instead of always using data.miner) — locate the closure
in block.map_header and replace the direct assignment to beneficiary with a call
to the production fee-override codepath or reuse the helper that determines the
effective beneficiary so test headers match production behavior.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 67b340f0-dd6f-4582-be1c-6b9eb59285e5

📥 Commits

Reviewing files that changed from the base of the PR and between 6c6a33b and b66c890.

📒 Files selected for processing (1)
  • crates/engine-api/src/builder.rs

panos-xyz and others added 9 commits March 11, 2026 17:03
…anup

Validator / derivation nodes do not run BlockTagService, so
set_block_tags is never called externally.  With the new
resolve_fcu_block_tag_hash fallback, once a derived block's timestamp
is within 60 s of wall-clock time the FCU would send
finalized_block_hash = B256::ZERO indefinitely, stopping
changeset-cache eviction and causing a slow memory leak.

Pass block_hash as the finalized hint in set_block_tags so the
EngineStateTracker cache is seeded and the FCU always finds a non-zero
fallback.  When validators adopt BlockTagService the L1-derived
finalized value will naturally supersede this hint.
Replace logs.truncate with checkpoint_revert in evm_call_balance_of and
transfer_erc20_with_evm to properly isolate side effects (storage writes,
logs, account touches) from helper EVM calls, matching go-ethereum's
StaticCall semantics. Remove unused track_forced_cold_slot_from_state.
…geth_disk_root

Remove custom should_compute_state_root to use the default trait behavior.
Make fetch_geth_disk_root public for use by the state-root-check CLI tool.
Standalone binary that opens reth DB read-only and computes the MPT state
root at a given block, with optional geth morph_diskRoot cross-validation
and bisect mode to find the first divergent block.
- Remove RETH_LEGACY_STATE_ROOT env var and --engine.legacy-state-root
  from both mainnet and hoodi reth-start scripts (always use parallel).
- Fix MORPH_GETH_RPC_URL default assignment: use := -> = so that an
  explicitly empty value is preserved instead of being overridden.
bench-sync.sh: automated Geth vs Reth sync speed comparison with BPS
sampling and ETA calculation.
bench-sync-multi.sh: multi-round wrapper with mean/min/max aggregation.
Remove --morph.geth-rpc-url CLI arg and all per-block geth diskRoot
cross-validation during sync. This significantly improves sync speed
by eliminating HTTP round-trips on every block.

The standalone state-root-check tool (bin/state-root-check) is preserved
for offline verification when needed.
Add check-state-root.sh for both mainnet and hoodi that builds
state-root-check and supports three modes: local-only, compare
with geth, and bisect to find first mismatch.