
chore: refactor flows and align others #22

Open
tac0turtle wants to merge 4 commits into main from marko/audit

Conversation

@tac0turtle
Contributor

@tac0turtle tac0turtle commented Mar 6, 2026

Overview

Summary by CodeRabbit

  • Bug Fixes

    • Stronger nonce validation and replay protection; mempool now finalizes only executed transactions and requeues remainder on failure.
  • New Features

    • Effective gas price propagated into receipts and index data.
    • Gas-aware block proposals and executed-vs-proposed accounting.
    • Ethereum-compatible transaction indexing and keccak256 event-topic hashing.
    • External-consensus node runner and configurable gRPC/shutdown settings.
    • Test utilities and ETH EOA code identifier exposed for tests.
  • Tests

    • Added nonce-handling and mempool execution-prefix tests.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

📝 Walkthrough


Adds canonical TxContext authentication with strict nonce equality and atomic increment; replaces mempool select with gas-aware propose/finalize that finalizes only executed txs; extends StoredReceipt and indexing with Ethereum fields and effective_gas_price; introduces an external-consensus ETH runner and related RPC/gRPC configuration and tests.

Changes

Cohort / File(s) Summary
TxContext authentication & tests
bin/testapp/src/eth_eoa.rs, bin/testapp/tests/authentication_tests.rs, crates/app/tx/eth/src/mempool.rs
Adds authenticate_canonical_tx with sender recovery, strict nonce equality check and increment_nonce; routes TxContext through new auth path; adds nonce replay/gap tests and imports ERR_NONCE_TOO_LOW/ERR_NONCE_TOO_HIGH.
Mempool proposal & block finalization
bin/testapp/src/sim_testing.rs, crates/app/server/src/dev.rs, crates/rpc/evnode/src/service.rs, crates/app/tx/eth/src/mempool.rs
Replaces mempool select/take_mempool_batch with gas-aware propose(...) / finalize_proposed_batch; producer records proposed txs, finalizes only executed prefix, and requeues non-executed txs; handles empty proposals.
Chain-index: receipts & Ethereum fields
crates/rpc/chain-index/src/types.rs, crates/rpc/chain-index/src/index.rs, crates/rpc/chain-index/src/integration.rs, crates/rpc/chain-index/src/provider.rs
Adds effective_gas_price: U256 to StoredReceipt, updates DB row deserialization and queries; extracts Ethereum-specific tx fields (gas_price, nonce, v/r/s, tx_type, chain_id) and threads them through indexing; updates generics to Tx: Transaction + Any.
Event topic hashing & RPC log types
crates/rpc/types/src/log.rs
Replaces SHA-256 topic[0] hashing with keccak256, removes sha2 usage, and updates tests to expect keccak-based topic.
External-consensus ETH runner & evnode integration
crates/rpc/evnode/src/runner.rs, crates/rpc/evnode/src/lib.rs, bin/evd/src/main.rs, bin/testapp/src/lib.rs, crates/rpc/evnode/Cargo.toml, bin/evd/Cargo.toml
Introduces run_external_consensus_node_eth runner: storage/bootstrap, commit sink, optional persistent chain index, RPC/gRPC server lifecycle, shutdown orchestration, and public runner API; refactors evd main to use runner and adds storage/genesis helpers.
RPC config & shutdown settings
crates/app/node/src/lib.rs, crates/app/node/src/config.rs
Extends RpcConfig with grpc_enable_gzip, grpc_max_message_size, shutdown_timeout_secs; adds builder methods and helpers to build GrpcServerConfig and derive shutdown Duration; integrates into node wiring.
Service & dev exports
crates/app/server/src/lib.rs, crates/app/server/src/dev.rs
Makes compute_block_hash and state_changes_to_operations public/re-exported; updates block production to report executed vs proposed tx counts and use propose/finalize flow.
Test scaffolding, signing helpers & manifests
crates/app/tx/eth/src/test_utils.rs, crates/app/tx/eth/src/lib.rs, bin/testapp/src/sim_testing.rs, bin/testapp/src/lib.rs, various Cargo.toml changes
Adds sign_hash test util (re-exported behind testing feature), exposes ETH_EOA_CODE_ID, replaces local signing helpers with sign_hash imports, adds DEFAULT_TEST_BLOCK_GAS_LIMIT and genesis/testapp helpers, and updates several Cargo.toml dependency feature flags.
Workspace / manifests
bin/evd/Cargo.toml, crates/rpc/evnode/Cargo.toml, bin/testapp/Cargo.toml, bin/txload/Cargo.toml, crates/app/node/Cargo.toml, crates/app/tx/eth/Cargo.toml
Adjusts workspace members and dev-dependencies; narrows bin/evd workspace entries; enables testing feature for evolve_tx_eth and adds optional k256 for tx/eth testing.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Mempool
    participant BlockProducer
    participant EthEoa

    rect rgba(100,150,255,0.5)
    Note over Client,EthEoa: TxContext Authentication Flow
    Client->>EthEoa: Message(with TxContext)
    activate EthEoa
    EthEoa->>EthEoa: detect TxContext, recover sender
    EthEoa->>EthEoa: verify sender == recovered
    EthEoa->>EthEoa: verify tx.nonce == stored_nonce
    EthEoa->>EthEoa: increment_nonce (saturating)
    EthEoa-->>Client: auth result (ok / ERR_NONCE_TOO_LOW/HIGH)
    deactivate EthEoa
    end

    rect rgba(200,150,100,0.5)
    Note over Client,BlockProducer: Mempool Propose & Finalize Flow
    Client->>Mempool: submit(tx)
    activate Mempool
    Mempool->>Mempool: queue tx
    deactivate Mempool

    BlockProducer->>Mempool: propose(gas_limit,max_txs)
    activate Mempool
    Mempool-->>BlockProducer: (tx_hashes, transactions)
    deactivate Mempool

    BlockProducer->>BlockProducer: execute transactions
    BlockProducer->>BlockProducer: determine executed_tx_count (prefix)
    BlockProducer->>Mempool: finalize_proposed_batch(executed_tx_hashes)
    activate Mempool
    Mempool->>Mempool: remove executed, requeue rest
    deactivate Mempool
    BlockProducer-->>Client: block produced (executed_tx_count)
    end
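The propose/finalize flow in the diagram above can be sketched as a minimal gas-aware mempool. This is an illustrative model, not the PR's implementation: the method names `submit`, `propose`, and `finalize` follow the summary, while `PendingTx`, the gas accounting, and the requeue ordering are assumptions.

```rust
use std::collections::VecDeque;

// Illustrative stand-ins for the real transaction and hash types.
type TxHash = [u8; 32];

#[derive(Clone)]
struct PendingTx {
    hash: TxHash,
    gas_limit: u64,
}

#[derive(Default)]
struct Mempool {
    queue: VecDeque<PendingTx>,
    in_flight: Vec<PendingTx>,
}

impl Mempool {
    fn submit(&mut self, tx: PendingTx) {
        self.queue.push_back(tx);
    }

    /// Select a prefix of the queue that fits under `block_gas_limit`,
    /// capped at `max_txs` entries (0 means "no count limit", matching
    /// the semantics the review comments describe).
    fn propose(&mut self, block_gas_limit: u64, max_txs: usize) -> Vec<TxHash> {
        let mut gas = 0u64;
        let mut picked = Vec::new();
        while let Some(tx) = self.queue.front() {
            let over_count = max_txs != 0 && picked.len() >= max_txs;
            if over_count || gas + tx.gas_limit > block_gas_limit {
                break;
            }
            gas += tx.gas_limit;
            let tx = self.queue.pop_front().unwrap();
            picked.push(tx.hash);
            self.in_flight.push(tx);
        }
        picked
    }

    /// Drop executed txs; requeue the non-executed remainder at the
    /// front of the queue, preserving their original order.
    fn finalize(&mut self, executed: &[TxHash]) {
        for tx in self.in_flight.drain(..).rev() {
            if !executed.contains(&tx.hash) {
                self.queue.push_front(tx);
            }
        }
    }
}
```

Note that `finalize` is called with the executed prefix only; everything else proposed but not executed goes back to the queue rather than being lost.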

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

🐰 I hop the nonce and check the sign,
keccak hums where topics align,
Proposals picked by gas and grace,
Executed prefix keeps its place,
A rabbit cheers the chain's new shine.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

Title check — ❓ Inconclusive: the title 'chore: refactor flows and align others' is vague and generic; 'refactor flows' and 'align others' say little about the actual changes. Resolution: use a more specific title, such as 'chore: consolidate transaction authentication and mempool finalization flows'.

✅ Passed checks (2 passed)

Description check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
Docstring coverage — ✅ Passed: docstring coverage is 95.73%, above the required threshold of 80.00%.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
bin/testapp/src/eth_eoa.rs (1)

69-72: Register the sender-mismatch error instead of inlining 0x51.

Adding another raw ErrorCode::new(0x51) makes this validation path easy to drift from the shared error catalog, and 0x51 also lands in the system range rather than the validation range. Please define a named error constant and reuse it here and in the fallback branches.

As per coding guidelines "Define errors using the define_error! macro for compile-time error registration with error codes in ranges: 0x00-0x3F (validation), 0x40-0x7F (system), 0x80-0xBF (business logic)".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/testapp/src/eth_eoa.rs` around lines 69 - 72, Define a named validation
error using the define_error! macro (in the same module as eth_eoa.rs) for the
sender-mismatch case (use a validation-range code 0x00-0x3F), e.g.
SENDER_MISMATCH, and replace the inlined evolve_core::ErrorCode::new(0x51) in
the comparison block (where sender_address is compared to expected_address in
the function that calls tx.sender_address()) with that named error; also reuse
the same SENDER_MISMATCH constant in the related fallback branches so all
sender-mismatch paths reference the single registered error constant.
crates/app/server/src/dev.rs (1)

379-526: Split produce_block_with_txs before it grows further.

This method still bundles block assembly, execution, persistence, archiving, indexing, metrics, and logging in one path. Since this change adds even more bookkeeping, extracting post-exec persistence/indexing helpers would make the flow much easier to reason about and bring it back toward the repo limit. As per coding guidelines, "Keep functions to less than 70 lines to maintain bounded complexity".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/server/src/dev.rs` around lines 379 - 526, The
produce_block_with_txs function is too large and should be split: extract the
post-execution responsibilities into small helpers (e.g.,
persist_state_and_commit(exec_state, storage) to convert state changes to
operations, call storage.batch and storage.commit and return
state_root/commit_hash; archive_block_async(block, height, block_hash,
state_root, gas_used, on_block_archive) to borsh-encode archived block and spawn
the archive task; index_block_async(block, result, metadata, chain_index,
subscriptions) to build index data and spawn the blocking index task; and
update_state_and_metrics(state, block_hash, timestamp, produced_counts) to
update last_hash/last_timestamp and emit tracing/info. Replace the corresponding
inline blocks in produce_block_with_txs with calls to these helpers (keeping
compute_block_hash, gas/count calculations and STF execution in-proc) to reduce
function length and preserve behavior.
bin/testapp/src/sim_testing.rs (1)

499-507: Extract the finalize-after-execution sequence into one helper.

The same tx_hashes -> executed prefix -> finalize flow is now repeated in four paths. Pulling that into a single helper would keep the simulator and dev-server semantics from drifting the next time proposal/finalization rules change.

Also applies to: 633-640, 662-669, 700-709

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/testapp/src/sim_testing.rs` around lines 499 - 507, Extract the repeated
"take prefix of tx_hashes equal to executed tx count and call
finalize_mempool_batch" into a helper (e.g., fn
finalize_after_execution(tx_hashes: &[TxHash], executed_count: usize) or a
method on the simulator like finalize_executed_tx_hashes(&self, tx_hashes:
&[TxHash], executed_count: usize)) and replace the four duplicated blocks (the
sequences that call propose_mempool_batch, produce_block_internal(...) returning
result, then compute executed via
tx_hashes.iter().copied().take(result.tx_results.len()).collect() and finally
call finalize_mempool_batch(&executed)) with a single call to that helper;
ensure the helper constructs the same Vec of executed hashes and calls
finalize_mempool_batch exactly as before so semantics of propose_mempool_batch,
produce_block_internal, and finalize_mempool_batch are preserved.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bb7192eb-ceb8-4c01-a203-5000cc28936e

📥 Commits

Reviewing files that changed from the base of the PR and between 05639d7 and dce8d84.

📒 Files selected for processing (11)
  • bin/testapp/src/eth_eoa.rs
  • bin/testapp/src/sim_testing.rs
  • bin/testapp/tests/authentication_tests.rs
  • crates/app/node/src/lib.rs
  • crates/app/server/src/dev.rs
  • crates/app/tx/eth/src/mempool.rs
  • crates/rpc/chain-index/src/index.rs
  • crates/rpc/chain-index/src/integration.rs
  • crates/rpc/chain-index/src/provider.rs
  • crates/rpc/chain-index/src/types.rs
  • crates/rpc/types/src/log.rs

Comment on lines +57 to +60
fn increment_nonce(&self, env: &mut dyn Environment) -> SdkResult<()> {
    self.nonce
        .update(|v| Ok(v.unwrap_or_default().saturating_add(1)), env)?;
    Ok(())
}


⚠️ Potential issue | 🟡 Minor

Avoid saturating the nonce counter.

Once Line 59 reaches u64::MAX, the stored nonce stops advancing, so a max-nonce transaction becomes replayable forever. Return an explicit overflow error instead of saturating here.

As per coding guidelines "Fail fast with assertions for invariants and explicit error handling in Rust".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/testapp/src/eth_eoa.rs` around lines 57 - 60, The nonce increment
currently uses unwrap_or_default().saturating_add(1) inside increment_nonce,
which hides overflow by saturating at u64::MAX; change this to use
checked_add(1) and return an explicit overflow error via the SdkResult error
path instead of saturating. Locate increment_nonce (and the closure passed to
self.nonce.update) and replace saturating_add with a checked_add call, map None
(overflow) to a descriptive SdkResult error (e.g., NonceOverflow or a suitable
SdkError variant) so the update returns Err(...) when the nonce would exceed
u64::MAX. Ensure the error propagates from update(...) so callers receive the
explicit overflow failure.
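A minimal sketch of the checked-add approach this comment suggests, in pure-function form. The real `increment_nonce` goes through `self.nonce.update` and `SdkResult`, which are not reproduced here; the function name and error value below are illustrative.

```rust
/// Increment a stored nonce, failing explicitly on overflow instead of
/// saturating at u64::MAX (saturation would leave the stored nonce
/// stuck, making a max-nonce transaction replayable forever).
fn next_nonce(stored: Option<u64>) -> Result<u64, &'static str> {
    stored
        .unwrap_or_default()
        .checked_add(1)
        .ok_or("nonce overflow")
}
```

The only behavioral difference from the saturating version is at the boundary: `u64::MAX` becomes an error rather than a silent no-op.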

Comment on lines +464 to +467
 fn propose_mempool_batch(&mut self, max_txs: usize) -> (Vec<[u8; 32]>, Vec<TxContext>) {
     let selected = {
         let mut pool = self.mempool.blocking_write();
-        pool.select(max_txs)
+        pool.propose(DEFAULT_TEST_BLOCK_GAS_LIMIT, max_txs).0


⚠️ Potential issue | 🟠 Major

Guard max_txs == 0 before calling pool.propose.

Mempool::propose(..., 0) means “no count limit”, so this helper currently turns produce_block_from_mempool(0) into “propose everything” instead of “produce an empty block”. That can silently drain the mempool in tests and diverges from the dev-server fix in crates/app/server/src/dev.rs.

💡 Suggested fix
 fn propose_mempool_batch(&mut self, max_txs: usize) -> (Vec<[u8; 32]>, Vec<TxContext>) {
+    if max_txs == 0 {
+        return (Vec::new(), Vec::new());
+    }
+
     let selected = {
         let mut pool = self.mempool.blocking_write();
         pool.propose(DEFAULT_TEST_BLOCK_GAS_LIMIT, max_txs).0
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/testapp/src/sim_testing.rs` around lines 464 - 467, propose_mempool_batch
currently calls pool.propose(..., max_txs) without checking for max_txs == 0,
but Mempool::propose(..., 0) means “no count limit” and will propose everything;
change propose_mempool_batch to guard for max_txs == 0 and return an empty
selection (empty Vec<[u8;32]> and empty Vec<TxContext>) before acquiring the
mempool lock or calling pool.propose, so produce_block_from_mempool(0) yields an
empty block rather than draining the mempool; locate the logic in the
propose_mempool_batch function and add the early return before calling
Mempool::propose.

Comment on lines +85 to +96
#[derive(Debug, Clone)]
struct EthereumIndexedTransaction {
    value: U256,
    gas_price: U256,
    input: Bytes,
    nonce: u64,
    v: u64,
    r: U256,
    s: U256,
    tx_type: u8,
    chain_id: Option<u64>,
}


⚠️ Potential issue | 🟠 Major

Type-2 transactions still drop their 1559 fee-cap fields.

The new extraction path captures tx_type = 2 but still discards max_fee_per_gas and max_priority_fee_per_gas. crates/rpc/chain-index/src/types.rs serializes those directly from StoredTransaction, so eth_getTransactionByHash and full-block transaction results will keep returning None for both fields on EIP-1559 transactions.

Also applies to: 107-143

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/chain-index/src/integration.rs` around lines 85 - 96,
EthereumIndexedTransaction currently omits the EIP-1559 fee fields so tx_type ==
2 transactions lose max_fee_per_gas and max_priority_fee_per_gas; add fields
max_fee_per_gas: Option<U256> and max_priority_fee_per_gas: Option<U256> to the
EthereumIndexedTransaction struct and populate them in the extraction logic for
type 2 transactions (the same code paths around the other tx fields and the
areas noted 107-143) by reading the transaction's EIP-1559 fields (e.g.,
transaction.max_fee_per_gas / transaction.max_priority_fee_per_gas) and passing
those through into StoredTransaction/serialization so eth_getTransactionByHash
and full-block outputs return the correct values.
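The shape the comment asks for can be sketched as follows. This is an assumption-laden model: `u128` stands in for the workspace's `U256`, `Vec<u8>` for `Bytes`, and the helper `with_1559_fees` is hypothetical; only the field names come from the review.

```rust
// u128 stands in for the real U256 so the sketch is self-contained.
type U256 = u128;

#[derive(Debug, Clone, Default)]
struct EthereumIndexedTransaction {
    value: U256,
    gas_price: U256,
    input: Vec<u8>,
    nonce: u64,
    v: u64,
    r: U256,
    s: U256,
    tx_type: u8,
    chain_id: Option<u64>,
    // The fields the review asks for: populated for tx_type == 2
    // (EIP-1559), left as None for legacy transactions.
    max_fee_per_gas: Option<U256>,
    max_priority_fee_per_gas: Option<U256>,
}

/// Hypothetical extraction step: attach the 1559 fee caps only when the
/// transaction is actually type 2.
fn with_1559_fees(
    mut tx: EthereumIndexedTransaction,
    max_fee: U256,
    max_priority: U256,
) -> EthereumIndexedTransaction {
    if tx.tx_type == 2 {
        tx.max_fee_per_gas = Some(max_fee);
        tx.max_priority_fee_per_gas = Some(max_priority);
    }
    tx
}
```

Threading these two options through `StoredTransaction` would let `eth_getTransactionByHash` return real values instead of `None` for EIP-1559 transactions.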

Comment on lines +157 to 158
let name_hash = keccak256(event.name.as_bytes());



⚠️ Potential issue | 🟠 Major

This changes persisted log topics without a migration/reindex path.

crates/rpc/chain-index/src/index.rs persists topics verbatim, and crates/rpc/chain-index/src/provider.rs compares them byte-for-byte for eth_getLogs. Switching the hash here will leave existing SQLite indexes with the old topic values and newly indexed blocks with keccak values, so historical log queries stop matching cleanly at the upgrade boundary unless the index is rebuilt or migrated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/chain-index/src/integration.rs` around lines 157 - 158, The change
in integration.rs that sets name_hash = keccak256(event.name.as_bytes()) will
change persisted topic values and break historical log lookups because index.rs
stores topics verbatim and provider.rs compares them byte-for-byte for
eth_getLogs; either revert to the previous topic encoding used historically
(restore the original hash/bytes logic in integration.rs where name_hash is
computed) or implement a migration/reindex path: add a migration routine that
converts existing SQLite topic rows to the new keccak format (or store both old
and new topic values) and update provider.rs to accept/compare both legacy and
new topic encodings until migration completes. Reference the name_hash
assignment in integration.rs, the persistence logic in index.rs, and the
comparison logic in provider.rs when making the change.
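One way to make the topic-encoding switch safe, along the lines the comment suggests, is to version the index and force a rebuild when the encoding changes. The names below are hypothetical; the real index lives in SQLite and would persist the version in a metadata table.

```rust
/// Bump this whenever the persisted topic encoding changes
/// (e.g. the SHA-256 -> keccak256 switch for topic[0]).
const TOPIC_ENCODING_VERSION: u32 = 2;

enum IndexAction {
    /// Stored data matches the current encoding; serve queries as-is.
    UseAsIs,
    /// Stored data predates the current encoding; rebuild before serving
    /// eth_getLogs, or historical topic filters stop matching.
    Reindex { from_version: u32 },
}

fn plan_startup(stored_version: Option<u32>) -> IndexAction {
    match stored_version {
        Some(v) if v == TOPIC_ENCODING_VERSION => IndexAction::UseAsIs,
        Some(v) => IndexAction::Reindex { from_version: v },
        // Fresh database: nothing to migrate.
        None => IndexAction::UseAsIs,
    }
}
```

The alternative the comment mentions, storing both legacy and keccak topics and matching either in the provider, avoids a reindex at the cost of wider rows and a dual-comparison query path.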


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/app/node/src/config.rs (1)

70-79: ⚠️ Potential issue | 🟠 Major

Propagate the configured gRPC address through to_rpc_config.

Line 76 still forces grpc_addr: None, so callers that rely on this adapter can never actually start the gRPC sidecar; the new gzip/message-size settings stay inert. If the sidecar is meant to be configurable from NodeConfig, map self.parsed_grpc_addr() here instead of silently dropping it.

💡 Proposed fix
         RpcConfig {
             http_addr: self.parsed_rpc_addr(),
             chain_id: self.chain.chain_id,
             enabled: self.rpc.enabled,
             enable_block_indexing: self.rpc.enable_block_indexing,
-            grpc_addr: None,
+            grpc_addr: Some(self.parsed_grpc_addr()),
             grpc_enable_gzip: self.grpc.enable_gzip,
             grpc_max_message_size: self.grpc_max_message_size_usize(),
             shutdown_timeout_secs: self.operations.shutdown_timeout_secs,
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/node/src/config.rs` around lines 70 - 79, The RpcConfig built in
to_rpc_config currently drops the configured gRPC address by setting grpc_addr:
None; update to propagate the node config's parsed gRPC address instead (use
self.parsed_grpc_addr()) so the gRPC sidecar can actually start and the grpc_*
settings (grpc_enable_gzip, grpc_max_message_size) take effect; ensure the type
matches RpcConfig.grpc_addr (Option/SocketAddr) and remove the hardcoded None in
to_rpc_config.
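The new config surface this comment touches can be sketched as follows. Only the field names (`grpc_enable_gzip`, `grpc_max_message_size`, `shutdown_timeout_secs`) come from the PR summary; the struct shape and the `shutdown_timeout` helper are assumptions about how the Duration derivation might look.

```rust
use std::time::Duration;

// Hypothetical shape of the extended RpcConfig.
struct RpcConfig {
    grpc_enable_gzip: bool,
    grpc_max_message_size: usize,
    shutdown_timeout_secs: u64,
}

impl RpcConfig {
    /// Derive the shutdown deadline from the configured seconds value.
    fn shutdown_timeout(&self) -> Duration {
        Duration::from_secs(self.shutdown_timeout_secs)
    }
}
```

Whatever the real shape, the key point of the review stands: these fields are inert unless `to_rpc_config` also propagates the parsed gRPC address instead of hardcoding `None`.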
🧹 Nitpick comments (2)
crates/rpc/evnode/src/runner.rs (1)

367-543: Break run_external_consensus_node_eth into focused helpers.

This entry point now owns storage bootstrap, genesis recovery, RPC startup, commit-sink wiring, EVNode startup, and shutdown persistence in one function. Pulling those phases apart would make the failure handling much easier to audit and keep the new runner inside the repo’s function-size limit.

As per coding guidelines, "Keep functions to less than 70 lines to maintain bounded complexity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/evnode/src/runner.rs` around lines 367 - 543, The function
run_external_consensus_node_eth is too large and should be split into focused
helper functions: extract storage bootstrap (use build_storage and
storage_config), chain state recovery/genesis logic (use load_chain_state,
build_genesis_stf, run_genesis, state_changes_to_operations), RPC/startup wiring
(use start_external_consensus_rpc_server and return the rpc_handle), commit-sink
and server construction (use ExternalConsensusCommitSink::spawn,
EvnodeServer::with_mempool, and return commit_sink + server), and
shutdown/persistence (use run_server_with_shutdown and save_chain_state to
persist ChainState and stop rpc_handle). Move each phase into a small helper
with clear inputs/outputs so run_external_consensus_node_eth composes them
sequentially and returns early on errors, preserving existing symbols and
behavior.
crates/app/node/src/lib.rs (1)

880-1195: Split run_dev_node_with_rpc_and_mempool_eth_impl into smaller phases.

The new helper is still 300+ lines and now mixes storage bootstrap, genesis replay, archive policy, RPC startup, consensus wiring, and shutdown persistence in one place. Extracting those phases into dedicated helpers would make the sidecar-mode branches much easier to audit.

As per coding guidelines, "Keep functions to less than 70 lines to maintain bounded complexity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/node/src/lib.rs` around lines 880 - 1195, The function
run_dev_node_with_rpc_and_mempool_eth_impl is too large and mixes multiple
phases; split it into smaller helper functions for storage initialization,
genesis/bootstrap, archive setup, RPC/gRPC server startup, consensus
wiring/production, and shutdown persistence. Concretely: extract the storage
creation block (the call to (build_storage) and storage_config) into
init_storage(), move the load_chain_state / commit_genesis logic into
ensure_genesis_or_resume(), move archive_cb creation into
build_block_archive_if_enabled(), encapsulate RPC startup (chain_index creation,
SubscriptionManager, start_server_with_subscriptions, grpc spawn) into
start_rpc_servers() returning (handle, grpc_handle, chain_index, subscriptions),
factor consensus construction (DevConsensus::with_rpc_and_mempool / with_mempool
and .with_block_archive) into build_consensus(), and isolate the tokio::select
run_block_production_with_mempool plus save_chain_state into
run_and_persist_consensus(); wire these helpers back in
run_dev_node_with_rpc_and_mempool_eth_impl to keep that function under ~70 lines
and preserve existing behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/rpc/evnode/src/runner.rs`:
- Around line 316-349: A SubscriptionManager is created
(SubscriptionManager::new()) and passed into start_server_with_subscriptions,
but the external-consensus runner never publishes block/log events to it so
eth_subscribe streams stay silent; wire the same subscriptions instance into the
commit sink or block-commit path used by external consensus (where commits are
applied/observed) so commits and logs are forwarded into SubscriptionManager
(e.g., add a parameter or register the SubscriptionManager with the commit
sink/ChainIndex or the function that handles new commits) ensuring that when
blocks/logs are committed the SubscriptionManager publishes events and
start_server_with_subscriptions receives those updates.
- Around line 95-114: The code is computing a state_root from storage.commit()
(B256::from_slice(commit_hash.as_bytes())) and using that in BlockMetadata::new,
which discards the executor-provided info.state_root and causes a mismatch with
ExecuteTxsResponse.updated_state_root; instead, keep using info.state_root for
the metadata state root (replace the B256::from_slice(commit_hash.as_bytes())
usage with info.state_root) and do not assume storage.commit() value as the
canonical root — you may still perform storage.batch(operations) and commit if
needed, but pass info.state_root into BlockMetadata::new so metadata aligns with
the executor result and handles empty change sets like B256::ZERO correctly.

In `@crates/rpc/evnode/src/service.rs`:
- Around line 484-494: The mempool finalize call is skipped when
block.transactions is empty, which prevents requeuing of in_flight transactions;
modify the logic in the block handling (the code around self.mempool,
executed_hashes, and the finalize call) so that you always obtain
executed_hashes (an empty Vec when executed_tx_count is 0) and always call
pool.finalize(&executed_hashes) inside the mempool write block (i.e., remove the
conditional on block.transactions.is_empty() and ensure finalize is invoked even
with an empty slice).

---

Outside diff comments:
In `@crates/app/node/src/config.rs`:
- Around line 70-79: The RpcConfig built in to_rpc_config currently drops the
configured gRPC address by setting grpc_addr: None; update to propagate the node
config's parsed gRPC address instead (use self.parsed_grpc_addr()) so the gRPC
sidecar can actually start and the grpc_* settings (grpc_enable_gzip,
grpc_max_message_size) take effect; ensure the type matches RpcConfig.grpc_addr
(Option/SocketAddr) and remove the hardcoded None in to_rpc_config.

---

Nitpick comments:
In `@crates/app/node/src/lib.rs`:
- Around line 880-1195: The function run_dev_node_with_rpc_and_mempool_eth_impl
is too large and mixes multiple phases; split it into smaller helper functions
for storage initialization, genesis/bootstrap, archive setup, RPC/gRPC server
startup, consensus wiring/production, and shutdown persistence. Concretely:
extract the storage creation block (the call to (build_storage) and
storage_config) into init_storage(), move the load_chain_state / commit_genesis
logic into ensure_genesis_or_resume(), move archive_cb creation into
build_block_archive_if_enabled(), encapsulate RPC startup (chain_index creation,
SubscriptionManager, start_server_with_subscriptions, grpc spawn) into
start_rpc_servers() returning (handle, grpc_handle, chain_index, subscriptions),
factor consensus construction (DevConsensus::with_rpc_and_mempool / with_mempool
and .with_block_archive) into build_consensus(), and isolate the tokio::select
run_block_production_with_mempool plus save_chain_state into
run_and_persist_consensus(); wire these helpers back in
run_dev_node_with_rpc_and_mempool_eth_impl to keep that function under ~70 lines
and preserve existing behavior.

In `@crates/rpc/evnode/src/runner.rs`:
- Around line 367-543: The function run_external_consensus_node_eth is too large
and should be split into focused helper functions: extract storage bootstrap
(use build_storage and storage_config), chain state recovery/genesis logic (use
load_chain_state, build_genesis_stf, run_genesis, state_changes_to_operations),
RPC/startup wiring (use start_external_consensus_rpc_server and return the
rpc_handle), commit-sink and server construction (use
ExternalConsensusCommitSink::spawn, EvnodeServer::with_mempool, and return
commit_sink + server), and shutdown/persistence (use run_server_with_shutdown
and save_chain_state to persist ChainState and stop rpc_handle). Move each phase
into a small helper with clear inputs/outputs so run_external_consensus_node_eth
composes them sequentially and returns early on errors, preserving existing
symbols and behavior.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: eec273d9-8889-452b-bda1-877431a5db65

📥 Commits

Reviewing files that changed from the base of the PR and between dce8d84 and 25651ac.

📒 Files selected for processing (9)
  • bin/evd/Cargo.toml
  • bin/evd/src/main.rs
  • bin/testapp/src/lib.rs
  • crates/app/node/src/config.rs
  • crates/app/node/src/lib.rs
  • crates/rpc/evnode/Cargo.toml
  • crates/rpc/evnode/src/lib.rs
  • crates/rpc/evnode/src/runner.rs
  • crates/rpc/evnode/src/service.rs

Comment on lines +95 to +114
                let operations = state_changes_to_operations(info.state_changes);
                let commit_hash = runtime.block_on(async {
                    storage
                        .batch(operations)
                        .await
                        .expect("storage batch failed");
                    storage.commit().await.expect("storage commit failed")
                });
                let state_root = B256::from_slice(commit_hash.as_bytes());
                let block_hash =
                    compute_external_consensus_block_hash(info.height, info.timestamp, parent_hash);
                let metadata = BlockMetadata::new(
                    block_hash,
                    parent_hash,
                    state_root,
                    info.timestamp,
                    config.max_gas,
                    Address::ZERO,
                    config.chain_id,
                );

⚠️ Potential issue | 🟠 Major

Use the executor’s state root instead of inventing a second one.

Lines 96-104 discard info.state_root and derive a different value from storage.commit(). That makes the indexed metadata diverge from the ExecuteTxsResponse.updated_state_root already returned for the same block, and the mismatch is especially obvious on empty change sets where the service emits B256::ZERO.

💡 Proposed fix
-                let operations = state_changes_to_operations(info.state_changes);
-                let commit_hash = runtime.block_on(async {
+                let state_root = info.state_root;
+                let operations = state_changes_to_operations(info.state_changes);
+                runtime.block_on(async {
                     storage
                         .batch(operations)
                         .await
                         .expect("storage batch failed");
                     storage.commit().await.expect("storage commit failed")
                 });
-                let state_root = B256::from_slice(commit_hash.as_bytes());
                 let block_hash =
                     compute_external_consensus_block_hash(info.height, info.timestamp, parent_hash);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
                let state_root = info.state_root;
                let operations = state_changes_to_operations(info.state_changes);
                runtime.block_on(async {
                    storage
                        .batch(operations)
                        .await
                        .expect("storage batch failed");
                    storage.commit().await.expect("storage commit failed")
                });
                let block_hash =
                    compute_external_consensus_block_hash(info.height, info.timestamp, parent_hash);
                let metadata = BlockMetadata::new(
                    block_hash,
                    parent_hash,
                    state_root,
                    info.timestamp,
                    config.max_gas,
                    Address::ZERO,
                    config.chain_id,
                );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/evnode/src/runner.rs` around lines 95 - 114, The code is computing
a state_root from storage.commit() (B256::from_slice(commit_hash.as_bytes()))
and using that in BlockMetadata::new, which discards the executor-provided
info.state_root and causes a mismatch with
ExecuteTxsResponse.updated_state_root; instead, keep using info.state_root for
the metadata state root (replace the B256::from_slice(commit_hash.as_bytes())
usage with info.state_root) and do not assume storage.commit() value as the
canonical root — you may still perform storage.batch(operations) and commit if
needed, but pass info.state_root into BlockMetadata::new so metadata aligns with
the executor result and handles empty change sets like B256::ZERO correctly.

Comment on lines +316 to +349
    let chain_index = Arc::clone(chain_index.as_ref().expect("chain index required for RPC"));
    let subscriptions = Arc::new(SubscriptionManager::new());
    let codes_for_rpc = Arc::new(build_codes());
    let state_provider_config = ChainStateProviderConfig {
        chain_id: config.chain.chain_id,
        protocol_version: "0x1".to_string(),
        gas_price: U256::ZERO,
        sync_status: SyncStatus::NotSyncing(false),
    };
    let state_querier: Arc<dyn StateQuerier> =
        Arc::new(StorageStateQuerier::new(storage, token_account_id));
    let state_provider = ChainStateProvider::with_mempool(
        Arc::clone(&chain_index),
        state_provider_config,
        codes_for_rpc,
        mempool,
    )
    .with_state_querier(state_querier);

    let rpc_addr = config.parsed_rpc_addr();
    let server_config = RpcServerConfig {
        http_addr: rpc_addr,
        chain_id: config.chain.chain_id,
    };

    tracing::info!("Starting JSON-RPC server on {}", rpc_addr);
    let handle =
        start_server_with_subscriptions(server_config, state_provider, Arc::clone(&subscriptions))
            .await
            .expect("failed to start RPC server");

    Some(RpcRuntimeHandle::new(move || {
        handle.stop().expect("failed to stop RPC server");
    }))

⚠️ Potential issue | 🟠 Major

eth_subscribe has no producer in the external-consensus path.

Lines 317-349 create a fresh SubscriptionManager, but nothing in this runner ever publishes new blocks/logs into it. Unlike the DevConsensus::with_rpc* paths in crates/app/node/src/lib.rs, the commit sink never sees that handle, so subscription streams will stay silent even while blocks are being committed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/evnode/src/runner.rs` around lines 316 - 349, A
SubscriptionManager is created (SubscriptionManager::new()) and passed into
start_server_with_subscriptions, but the external-consensus runner never
publishes block/log events to it so eth_subscribe streams stay silent; wire the
same subscriptions instance into the commit sink or block-commit path used by
external consensus (where commits are applied/observed) so commits and logs are
forwarded into SubscriptionManager (e.g., add a parameter or register the
SubscriptionManager with the commit sink/ChainIndex or the function that handles
new commits) ensuring that when blocks/logs are committed the
SubscriptionManager publishes events and start_server_with_subscriptions
receives those updates.
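
To illustrate the wiring this comment asks for, here is a minimal self-contained sketch in which the same subscriptions handle is shared between the RPC side and the commit sink, so each committed block is published to subscribers. The `publish_block` method, the `published` field, and `CommitSink::on_commit` are illustrative stand-ins, not the crate's actual API:

```rust
use std::sync::{Arc, Mutex};

// Stand-in for the real SubscriptionManager: records which block heights
// have been published to eth_subscribe streams.
#[derive(Default)]
struct SubscriptionManager {
    published: Mutex<Vec<u64>>,
}

impl SubscriptionManager {
    fn publish_block(&self, height: u64) {
        self.published.lock().unwrap().push(height);
    }
}

// Stand-in for the external-consensus commit sink: holds the SAME Arc
// handed to start_server_with_subscriptions, instead of a fresh manager.
struct CommitSink {
    subscriptions: Arc<SubscriptionManager>,
}

impl CommitSink {
    fn on_commit(&self, height: u64) {
        // Forward each observed commit into the shared subscription manager,
        // so subscription streams are no longer silent.
        self.subscriptions.publish_block(height);
    }
}

fn main() {
    let subs = Arc::new(SubscriptionManager::default());
    // The RPC server would receive Arc::clone(&subs); the sink gets the same handle.
    let sink = CommitSink {
        subscriptions: Arc::clone(&subs),
    };
    sink.on_commit(1);
    sink.on_commit(2);
    assert_eq!(*subs.published.lock().unwrap(), vec![1, 2]);
    println!("published {} blocks", subs.published.lock().unwrap().len());
}
```

The key point is that only one SubscriptionManager instance exists: cloning the Arc into the commit path is what makes events visible to the server started from the same handle.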

Comment on lines 484 to 494
         if let Some(ref mempool) = self.mempool {
-            let tx_hashes: Vec<[u8; 32]> =
-                block.transactions.iter().map(|tx| tx.hash().0).collect();
-            if !tx_hashes.is_empty() {
+            let executed_hashes: Vec<[u8; 32]> = block
+                .transactions
+                .iter()
+                .take(executed_tx_count)
+                .map(|tx| tx.hash().0)
+                .collect();
+            if !block.transactions.is_empty() {
                 let mut pool = mempool.write().await;
-                pool.finalize(&tx_hashes);
+                pool.finalize(&executed_hashes);
             }

⚠️ Potential issue | 🟠 Major

Always finalize the mempool, even for empty execution batches.

Line 491 skips finalize() when block.transactions is empty. That strands any previously proposed in_flight transactions if the consensus layer decides to execute zero txs, because finalize(&[]) is the path that requeues the whole in-flight set.

💡 Proposed fix
         if let Some(ref mempool) = self.mempool {
             let executed_hashes: Vec<[u8; 32]> = block
                 .transactions
                 .iter()
                 .take(executed_tx_count)
                 .map(|tx| tx.hash().0)
                 .collect();
-            if !block.transactions.is_empty() {
-                let mut pool = mempool.write().await;
-                pool.finalize(&executed_hashes);
-            }
+            let mut pool = mempool.write().await;
+            pool.finalize(&executed_hashes);
         }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        if let Some(ref mempool) = self.mempool {
            let executed_hashes: Vec<[u8; 32]> = block
                .transactions
                .iter()
                .take(executed_tx_count)
                .map(|tx| tx.hash().0)
                .collect();
            let mut pool = mempool.write().await;
            pool.finalize(&executed_hashes);
        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/evnode/src/service.rs` around lines 484 - 494, The mempool
finalize call is skipped when block.transactions is empty, which prevents
requeuing of in_flight transactions; modify the logic in the block handling (the
code around self.mempool, executed_hashes, and the finalize call) so that you
always obtain executed_hashes (an empty Vec when executed_tx_count is 0) and
always call pool.finalize(&executed_hashes) inside the mempool write block
(i.e., remove the conditional on block.transactions.is_empty() and ensure
finalize is invoked even with an empty slice).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
crates/app/node/src/lib.rs (1)

881-1196: 🛠️ Refactor suggestion | 🟠 Major

Split run_dev_node_with_rpc_and_mempool_eth_impl before it grows further.

This new function is carrying too many responsibilities at once: genesis/load, runner-mode selection, RPC/gRPC startup, block production, shutdown, and state persistence. Please extract the repeated boot/shutdown paths into smaller helpers; that will also reduce the duplication against the non-ETH runners and make the persistent vs. ephemeral differences easier to audit. As per coding guidelines, "Keep functions to less than 70 lines to maintain bounded complexity".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/node/src/lib.rs` around lines 881 - 1196, The function
run_dev_node_with_rpc_and_mempool_eth_impl is too large and should be split into
focused helpers: extract (1) load_or_run_genesis(&storage, build_genesis_stf,
run_genesis, build_codes) -> (genesis_result, initial_height) to handle loading
or running genesis and committing; (2)
build_state_provider_and_servers(rpc_config, chain_index_db_path, storage,
genesis_result, build_codes, mempool) -> (rpc_handle, grpc_handle,
subscriptions, chain_index, codes_for_rpc, state_provider_config) to encapsulate
chain index, state_querier, RPC/gRPC startup; (3)
start_block_production(dev_consensus, context_for_shutdown, max_txs_per_block)
-> final_height that runs run_block_production_with_mempool + ctrl-c shutdown
logic; and (4) persist_and_stop(final_height, dev, genesis_result, rpc_handle,
grpc_handle) to save chain state and stop servers; refactor
run_dev_node_with_rpc_and_mempool_eth_impl to call these helpers and pass needed
values (stf, storage, codes, dev_config, mempool, rpc_config, archive_cb,
context clones) to reduce duplication and keep each function under ~70 lines.
crates/app/server/src/dev.rs (1)

380-526: 🛠️ Refactor suggestion | 🟠 Major

Split produce_block_with_txs before it grows further.

This method now mixes block assembly, STF execution, persistence, archival, indexing, and logging in one path. It is well past the 70-line limit, which makes the new proposed-vs-executed behavior harder to reason about and test. Please extract at least the execute/commit and archive/index branches into helpers. As per coding guidelines, "Keep functions to less than 70 lines to maintain bounded complexity".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/server/src/dev.rs` around lines 380 - 526, produce_block_with_txs
is too long and mixes block assembly, STF execution, persistence, archival and
indexing; extract logical sections into small helper functions. Create a
synchronous/async helper like execute_and_commit_block(&self, block: Block<Tx>)
-> Result<(ExecutionResult, ExecState, u64 /*gas_used*/, u64
/*executed_tx_count*/), ServerError> to run self.stf.execute_block, convert
changes, call storage.batch and commit, and compute gas/tx counts and
state_root; create archive_block(cb: Arc<...>, height: u64, block: Block<Tx>,
block_hash: B256, state_root: B256, gas_used: u64) to handle borsh encoding and
tokio::spawn archival callback; and create index_block(index: ChainIndex, block:
Block<Tx>, result: ExecutionResult, metadata: BlockMetadata, subscriptions:
Option<...>, height: u64) to encapsulate the spawn_blocking indexing logic and
publishing. Replace the corresponding inline sections in produce_block_with_txs
with calls to these helpers, keep logging and state updates in the main method,
and update any error types/returns so behaviour remains identical.
🧹 Nitpick comments (3)
crates/rpc/evnode/src/runner.rs (2)

410-412: Consider making worker thread count configurable.

The worker thread count is hardcoded to 4. For production deployments with varying hardware, this could be exposed via NodeConfig.

♻️ Suggested change
     let runtime_config = TokioConfig::default()
         .with_storage_directory(data_dir)
-        .with_worker_threads(4);
+        .with_worker_threads(config.operations.worker_threads.unwrap_or(4));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/evnode/src/runner.rs` around lines 410 - 412, The hardcoded worker
thread count in runtime_config should be made configurable: add a worker_threads
(or similar) field on NodeConfig, default it to 4 if absent, and use that value
when building TokioConfig via
TokioConfig::default().with_worker_threads(node_config.worker_threads); update
any constructors/readers of NodeConfig to parse/provide the new field and ensure
runtime_config is created using the NodeConfig value (symbols: NodeConfig,
runtime_config, TokioConfig::with_worker_threads).

68-72: Consider adding a Drop implementation for graceful cleanup.

If ExternalConsensusCommitSink is dropped without calling finish() (e.g., due to a panic before line 518), the worker thread continues until the channel closes but isn't joined. While the thread will eventually exit, any panic within the worker goes undetected, and pending blocks may not be fully committed.

♻️ Proposed Drop implementation
+impl Drop for ExternalConsensusCommitSink {
+    fn drop(&mut self) {
+        // Close channel to signal worker shutdown
+        drop(self.sender.take());
+        // Best-effort join - don't panic in drop
+        if let Some(worker) = self.worker.take() {
+            let _ = worker.join();
+        }
+    }
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/evnode/src/runner.rs` around lines 68 - 72,
ExternalConsensusCommitSink currently may be dropped without joining its worker,
so implement Drop for ExternalConsensusCommitSink to perform graceful cleanup:
take and close the sender (set to None) to signal the worker, then take the
worker JoinHandle and join it (handling and logging panics/errors) to ensure any
panic in the worker is observed and pending commits finish; reuse/coordinate
with the existing finish() logic (call it or inline its steps) and update fields
sender, worker, and current_height access accordingly to avoid double-drop
races.
crates/rpc/chain-index/src/integration.rs (1)

284-284: Consider logging serialization failures instead of silent fallback.

unwrap_or_default() silently discards borsh serialization errors, resulting in empty transaction input data. While acceptable for fallback, logging the error would help diagnose indexing issues.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/chain-index/src/integration.rs` at line 284, The current call
Bytes::from(borsh::to_vec(tx.request()).unwrap_or_default()) silently swallows
Borsh serialization errors; change this to capture the Result from
borsh::to_vec(tx.request()), log the Err (e.g., using tracing::error! or
log::error!) with context including tx or tx.request(), and then fall back to an
empty/default Bytes only after logging; locate the expression using
borsh::to_vec and tx.request() in integration.rs and replace unwrap_or_default()
with a match or map_err that logs the error before using the default.
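
A minimal sketch of the suggested pattern: capture the serialization Result, log the failure, and only then fall back to empty bytes. `serialize_request` stands in for `borsh::to_vec(tx.request())` and `eprintln!` for `tracing::error!`; both are assumptions for illustration:

```rust
// Stand-in for borsh::to_vec(tx.request()): succeeds or fails on demand.
fn serialize_request(ok: bool) -> Result<Vec<u8>, String> {
    if ok {
        Ok(vec![1, 2, 3])
    } else {
        Err("borsh: unexpected variant".into())
    }
}

// Replaces `unwrap_or_default()`: the error is logged before the fallback,
// so indexing issues are diagnosable instead of silently producing empty input.
fn encode_input(ok: bool) -> Vec<u8> {
    match serialize_request(ok) {
        Ok(bytes) => bytes,
        Err(err) => {
            eprintln!("failed to serialize tx request: {err}");
            Vec::new() // fall back to empty input only after logging
        }
    }
}

fn main() {
    assert_eq!(encode_input(true), vec![1, 2, 3]);
    assert!(encode_input(false).is_empty());
    println!("fallback paths behave as expected");
}
```

The same shape works with `map_err` if a `match` feels heavy at the call site.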
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/app/node/src/lib.rs`:
- Around line 131-132: The field shutdown_timeout_secs is documented as a
node-wide graceful shutdown timeout but is only used by context.stop(...) and
not applied to the gRPC sidecar, so either propagate this timeout into the gRPC
shutdown path or restrict the field/docs to only the consensus/runtime stop; to
fix, locate shutdown_timeout_secs in the struct and then (A) wire it into the
gRPC shutdown sequence by passing it to the sidecar/server shutdown call (the
code that currently force-aborts the gRPC sidecar) so the server uses a graceful
shutdown deadline, or (B) if you prefer not to change gRPC behavior, rename and
update the doc comment for shutdown_timeout_secs to indicate it only governs the
consensus/runtime context.stop(...) path (and adjust any related uses such as
the code calling context.stop(...) to match). Ensure references to
shutdown_timeout_secs, context.stop(...), and the gRPC sidecar shutdown/abort
code are updated consistently.

In `@crates/app/server/src/dev.rs`:
- Around line 612-619: The early return that calls
produce_block_with_txs(vec![]) when max_txs == 0 changes semantics
(Mempool::propose treats 0 as “no count limit”); remove that branch so we always
call mempool.propose(self.config.gas_limit, max_txs) and let Mempool::propose
handle the zero case, keeping the rest of the selection logic unchanged (refer
to the max_txs variable, the mempool.write().await block, and
produce_block_with_txs only for actual empty-block cases returned by propose).

In `@crates/rpc/evnode/src/runner.rs`:
- Around line 360-536: The function run_external_consensus_node_eth is too large
and should be decomposed into smaller helpers: extract storage and genesis
bootstrap (storage creation, load_chain_state, run_genesis, genesis DB writes)
into an async helper (suggest name bootstrap_storage_and_genesis returning
(genesis_result, initial_height, Stf)), extract server and runtime setup
(mempool, chain_index, start_external_consensus_rpc_server, commit sink
creation, EvnodeServer construction and run_server_with_shutdown orchestration)
into another async helper (suggest name setup_and_run_servers returning
(rpc_handle, commit_sink)), and keep run_external_consensus_node_eth only as the
orchestrator that builds Arcs, runner, calls these helpers and handles final
save_chain_state and shutdown cleanup; update uses of build_genesis_stf,
build_stf, build_codes, run_genesis, build_storage,
start_external_consensus_rpc_server, ExternalConsensusCommitSink::spawn,
EvnodeServer::with_mempool, and save_chain_state accordingly.

---

Outside diff comments:
In `@crates/app/node/src/lib.rs`:
- Around line 881-1196: The function run_dev_node_with_rpc_and_mempool_eth_impl
is too large and should be split into focused helpers: extract (1)
load_or_run_genesis(&storage, build_genesis_stf, run_genesis, build_codes) ->
(genesis_result, initial_height) to handle loading or running genesis and
committing; (2) build_state_provider_and_servers(rpc_config,
chain_index_db_path, storage, genesis_result, build_codes, mempool) ->
(rpc_handle, grpc_handle, subscriptions, chain_index, codes_for_rpc,
state_provider_config) to encapsulate chain index, state_querier, RPC/gRPC
startup; (3) start_block_production(dev_consensus, context_for_shutdown,
max_txs_per_block) -> final_height that runs run_block_production_with_mempool +
ctrl-c shutdown logic; and (4) persist_and_stop(final_height, dev,
genesis_result, rpc_handle, grpc_handle) to save chain state and stop servers;
refactor run_dev_node_with_rpc_and_mempool_eth_impl to call these helpers and
pass needed values (stf, storage, codes, dev_config, mempool, rpc_config,
archive_cb, context clones) to reduce duplication and keep each function under
~70 lines.

In `@crates/app/server/src/dev.rs`:
- Around line 380-526: produce_block_with_txs is too long and mixes block
assembly, STF execution, persistence, archival and indexing; extract logical
sections into small helper functions. Create a synchronous/async helper like
execute_and_commit_block(&self, block: Block<Tx>) -> Result<(ExecutionResult,
ExecState, u64 /*gas_used*/, u64 /*executed_tx_count*/), ServerError> to run
self.stf.execute_block, convert changes, call storage.batch and commit, and
compute gas/tx counts and state_root; create archive_block(cb: Arc<...>, height:
u64, block: Block<Tx>, block_hash: B256, state_root: B256, gas_used: u64) to
handle borsh encoding and tokio::spawn archival callback; and create
index_block(index: ChainIndex, block: Block<Tx>, result: ExecutionResult,
metadata: BlockMetadata, subscriptions: Option<...>, height: u64) to encapsulate
the spawn_blocking indexing logic and publishing. Replace the corresponding
inline sections in produce_block_with_txs with calls to these helpers, keep
logging and state updates in the main method, and update any error types/returns
so behaviour remains identical.

---

Nitpick comments:
In `@crates/rpc/chain-index/src/integration.rs`:
- Line 284: The current call
Bytes::from(borsh::to_vec(tx.request()).unwrap_or_default()) silently swallows
Borsh serialization errors; change this to capture the Result from
borsh::to_vec(tx.request()), log the Err (e.g., using tracing::error! or
log::error!) with context including tx or tx.request(), and then fall back to an
empty/default Bytes only after logging; locate the expression using
borsh::to_vec and tx.request() in integration.rs and replace unwrap_or_default()
with a match or map_err that logs the error before using the default.

In `@crates/rpc/evnode/src/runner.rs`:
- Around line 410-412: The hardcoded worker thread count in runtime_config
should be made configurable: add a worker_threads (or similar) field on
NodeConfig, default it to 4 if absent, and use that value when building
TokioConfig via
TokioConfig::default().with_worker_threads(node_config.worker_threads); update
any constructors/readers of NodeConfig to parse/provide the new field and ensure
runtime_config is created using the NodeConfig value (symbols: NodeConfig,
runtime_config, TokioConfig::with_worker_threads).
- Around line 68-72: ExternalConsensusCommitSink currently may be dropped
without joining its worker, so implement Drop for ExternalConsensusCommitSink to
perform graceful cleanup: take and close the sender (set to None) to signal the
worker, then take the worker JoinHandle and join it (handling and logging
panics/errors) to ensure any panic in the worker is observed and pending commits
finish; reuse/coordinate with the existing finish() logic (call it or inline its
steps) and update fields sender, worker, and current_height access accordingly
to avoid double-drop races.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cbd17b5d-7f5b-48a8-90f9-150b4ff56291

📥 Commits

Reviewing files that changed from the base of the PR and between 25651ac and 9499e1e.

📒 Files selected for processing (6)
  • bin/testapp/src/sim_testing.rs
  • crates/app/node/src/lib.rs
  • crates/app/server/src/dev.rs
  • crates/app/server/src/lib.rs
  • crates/rpc/chain-index/src/integration.rs
  • crates/rpc/evnode/src/runner.rs

Comment on lines +131 to +132
/// Graceful shutdown timeout in seconds.
pub shutdown_timeout_secs: u64,

⚠️ Potential issue | 🟠 Major

shutdown_timeout_secs is only partially enforced.

This timeout now sounds like it governs graceful shutdown for the node, but only context.stop(...) uses it. The gRPC sidecar is still force-aborted later, so in-flight unary/streaming calls will not drain within this window. Please either wire this setting through the gRPC shutdown path as well, or narrow the field name/docs so it only covers the consensus/runtime stop path.

Also applies to: 190-193

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/node/src/lib.rs` around lines 131 - 132, The field
shutdown_timeout_secs is documented as a node-wide graceful shutdown timeout but
is only used by context.stop(...) and not applied to the gRPC sidecar, so either
propagate this timeout into the gRPC shutdown path or restrict the field/docs to
only the consensus/runtime stop; to fix, locate shutdown_timeout_secs in the
struct and then (A) wire it into the gRPC shutdown sequence by passing it to the
sidecar/server shutdown call (the code that currently force-aborts the gRPC
sidecar) so the server uses a graceful shutdown deadline, or (B) if you prefer
not to change gRPC behavior, rename and update the doc comment for
shutdown_timeout_secs to indicate it only governs the consensus/runtime
context.stop(...) path (and adjust any related uses such as the code calling
context.stop(...) to match). Ensure references to shutdown_timeout_secs,
context.stop(...), and the gRPC sidecar shutdown/abort code are updated
consistently.

Comment on lines +612 to +619
+        if max_txs == 0 {
+            return self.produce_block_with_txs(vec![]).await;
+        }
+
+        // Build a gas-aware proposal so over-budget transactions remain queued.
         let selected = {
             let mut pool = mempool.write().await;
-            pool.select(max_txs)
+            pool.propose(self.config.gas_limit, max_txs).0

⚠️ Potential issue | 🟡 Minor

Keep max_txs == 0 semantics aligned with Mempool::propose.

The underlying mempool API documents max_txs == 0 as “no count limit”, but this wrapper now converts 0 into “produce an empty block”. A caller that forwards 0 expecting unbounded selection will silently stop draining the mempool.

Suggested fix
-        if max_txs == 0 {
-            return self.produce_block_with_txs(vec![]).await;
-        }
-
         // Build a gas-aware proposal so over-budget transactions remain queued.
         let selected = {
             let mut pool = mempool.write().await;
             pool.propose(self.config.gas_limit, max_txs).0
         };
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    if max_txs == 0 {
-        return self.produce_block_with_txs(vec![]).await;
-    }
-
     // Build a gas-aware proposal so over-budget transactions remain queued.
     let selected = {
         let mut pool = mempool.write().await;
-        pool.select(max_txs)
+        pool.propose(self.config.gas_limit, max_txs).0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/server/src/dev.rs` around lines 612 - 619, The early return that
calls produce_block_with_txs(vec![]) when max_txs == 0 changes semantics
(Mempool::propose treats 0 as “no count limit”); remove that branch so we always
call mempool.propose(self.config.gas_limit, max_txs) and let Mempool::propose
handle the zero case, keeping the rest of the selection logic unchanged (refer
to the max_txs variable, the mempool.write().await block, and
produce_block_with_txs only for actual empty-block cases returned by propose).

Comment on lines +360 to +536
pub fn run_external_consensus_node_eth<
    Stf,
    Codes,
    G,
    S,
    BuildGenesisStf,
    BuildStf,
    BuildCodes,
    RunGenesis,
    BuildStorage,
    BuildStorageFut,
>(
    config: NodeConfig,
    build_genesis_stf: BuildGenesisStf,
    build_stf: BuildStf,
    build_codes: BuildCodes,
    run_genesis: RunGenesis,
    build_storage: BuildStorage,
) where
    Codes: AccountsCodeStorage + Send + Sync + 'static,
    S: ReadonlyKV + Storage + Clone + Send + Sync + 'static,
    Stf: StfExecutor<TxContext, S, Codes> + EvnodeStfExecutor<S, Codes> + Send + Sync + 'static,
    G: BorshSerialize
        + BorshDeserialize
        + Clone
        + Debug
        + HasTokenAccountId
        + Send
        + Sync
        + 'static,
    BuildGenesisStf: Fn() -> Stf + Send + Sync + 'static,
    BuildStf: Fn(&G) -> Stf + Send + Sync + 'static,
    BuildCodes: Fn() -> Codes + Clone + Send + Sync + 'static,
    RunGenesis: Fn(&Stf, &Codes, &S) -> Result<GenesisOutput<G>, Box<dyn std::error::Error + Send + Sync>>
        + Send
        + Sync
        + 'static,
    BuildStorage: Fn(TokioContext, StorageConfig) -> BuildStorageFut + Send + Sync + 'static,
    BuildStorageFut:
        Future<Output = Result<S, Box<dyn std::error::Error + Send + Sync>>> + Send + 'static,
{
    tracing::info!("=== Evolve External Consensus Node ===");
    std::fs::create_dir_all(&config.storage.path).expect("failed to create data directory");

    let data_dir = Path::new(&config.storage.path);
    let storage_config = StorageConfig {
        path: data_dir.to_path_buf(),
        ..Default::default()
    };

    let runtime_config = TokioConfig::default()
        .with_storage_directory(data_dir)
        .with_worker_threads(4);
    let runner = Runner::new(runtime_config);

    let build_genesis_stf = Arc::new(build_genesis_stf);
    let build_stf = Arc::new(build_stf);
    let build_codes = Arc::new(build_codes);
    let run_genesis = Arc::new(run_genesis);
    let build_storage = Arc::new(build_storage);

    runner.start(move |context| {
        let build_genesis_stf = Arc::clone(&build_genesis_stf);
        let build_stf = Arc::clone(&build_stf);
        let build_codes = Arc::clone(&build_codes);
        let run_genesis = Arc::clone(&run_genesis);
        let build_storage = Arc::clone(&build_storage);
        let config = config.clone();

        async move {
            let context_for_shutdown = context.clone();
            let storage = (build_storage)(context, storage_config)
                .await
                .expect("failed to create storage");

            let codes = build_codes();
            let (genesis_result, initial_height) = match load_chain_state::<G, _>(&storage) {
                Some(state) => {
                    tracing::info!("Resuming from existing state at height {}", state.height);
                    tracing::info!("Genesis state: {:?}", state.genesis_result);
                    (state.genesis_result, state.height)
                }
                None => {
                    tracing::info!("No existing state found, running genesis...");
                    let bootstrap_stf = (build_genesis_stf)();
                    let output =
                        (run_genesis)(&bootstrap_stf, &codes, &storage).expect("genesis failed");

                    let chain_state = ChainState {
                        height: 1,
                        genesis_result: output.genesis_result.clone(),
                    };
                    let mut operations = state_changes_to_operations(output.changes);
                    operations.push(Operation::Set {
                        key: evolve_server::CHAIN_STATE_KEY.to_vec(),
                        value: borsh::to_vec(&chain_state)
                            .map_err(|e| format!("serialize chain state: {e}"))
                            .expect("serialize chain state"),
                    });

                    storage
                        .batch(operations)
                        .await
                        .expect("genesis batch failed");
                    storage.commit().await.expect("genesis commit failed");

                    tracing::info!("Genesis complete. Result: {:?}", output.genesis_result);
                    (output.genesis_result, 1)
                }
            };

            let stf = (build_stf)(&genesis_result);
            let mempool: SharedMempool<Mempool<TxContext>> = new_shared_mempool();
            let chain_index = init_persistent_chain_index(
                Path::new(&config.storage.path),
                config.rpc.enabled || config.rpc.enable_block_indexing,
            );
            let rpc_handle = start_external_consensus_rpc_server(
                &config,
                storage.clone(),
                mempool.clone(),
                &chain_index,
                genesis_result.token_account_id(),
                build_codes.as_ref(),
            )
            .await;

            let executor_config = ExecutorServiceConfig::default();
            let commit_sink = ExternalConsensusCommitSink::spawn(
                storage.clone(),
                chain_index,
                ExternalConsensusSinkConfig {
                    initial_height,
                    chain_id: config.chain.chain_id,
                    max_gas: executor_config.max_gas,
                    indexing_enabled: config.rpc.enable_block_indexing,
                },
            );
            let current_height = commit_sink.current_height();

            let grpc_config = EvnodeServerConfig {
                addr: config.parsed_grpc_addr(),
                enable_gzip: config.grpc.enable_gzip,
                max_message_size: config.grpc_max_message_size_usize(),
                executor_config,
            };
            let server =
                EvnodeServer::with_mempool(grpc_config, stf, storage.clone(), codes, mempool)
                    .with_on_block_executed(commit_sink.callback());

            tracing::info!("Server ready. Press Ctrl+C to stop.");
            run_server_with_shutdown(
                server.serve(),
                context_for_shutdown,
                config.operations.shutdown_timeout_secs,
            )
            .await;

            commit_sink.finish();

            let chain_state = ChainState {
                height: current_height.load(Ordering::SeqCst),
                genesis_result,
            };
            if let Err(err) = save_chain_state(&storage, &chain_state).await {
                tracing::error!("Failed to save chain state: {}", err);
            }

            if let Some(handle) = rpc_handle {
                tracing::info!("Stopping JSON-RPC server...");
                handle.stop();
            }

            tracing::info!("Shutdown complete.");
        }
    });
}

🛠️ Refactor suggestion | 🟠 Major

Function exceeds 70-line guideline (~176 lines).

This function handles multiple distinct responsibilities: storage bootstrap, runtime configuration, genesis/resume logic, mempool setup, chain index initialization, RPC server startup, commit sink setup, gRPC server configuration, and shutdown handling. Consider decomposing into smaller, focused helper functions.

As per coding guidelines: "Keep functions to less than 70 lines to maintain bounded complexity."

♻️ Suggested decomposition structure
// Extract bootstrap logic
async fn bootstrap_storage_and_genesis<G, S, Stf, Codes, ...>(
    config: &NodeConfig,
    storage: &S,
    build_genesis_stf: &BuildGenesisStf,
    build_stf: &BuildStf,
    run_genesis: &RunGenesis,
    codes: &Codes,
) -> Result<(G, u64, Stf), ...> { ... }

// Extract server setup logic
async fn setup_servers<S, Codes, ...>(
    config: &NodeConfig,
    storage: S,
    stf: Stf,
    genesis_result: &G,
    ...,
) -> (Option<RpcRuntimeHandle>, ExternalConsensusCommitSink, EvnodeServer<...>) { ... }

// Main function becomes orchestration only
pub fn run_external_consensus_node_eth<...>(...) {
    // ~30 lines: create runner, call bootstrap, setup, and shutdown
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/evnode/src/runner.rs` around lines 360 - 536, The function
run_external_consensus_node_eth is too large and should be decomposed into
smaller helpers: extract storage and genesis bootstrap (storage creation,
load_chain_state, run_genesis, genesis DB writes) into an async helper (suggest
name bootstrap_storage_and_genesis returning (genesis_result, initial_height,
Stf)), extract server and runtime setup (mempool, chain_index,
start_external_consensus_rpc_server, commit sink creation, EvnodeServer
construction and run_server_with_shutdown orchestration) into another async
helper (suggest name setup_and_run_servers returning (rpc_handle, commit_sink)),
and keep run_external_consensus_node_eth only as the orchestrator that builds
Arcs, runner, calls these helpers and handles final save_chain_state and
shutdown cleanup; update uses of build_genesis_stf, build_stf, build_codes,
run_genesis, build_storage, start_external_consensus_rpc_server,
ExternalConsensusCommitSink::spawn, EvnodeServer::with_mempool, and
save_chain_state accordingly.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/app/server/src/dev.rs (1)

509-525: ⚠️ Potential issue | 🟡 Minor

tx_count semantic change: now represents executed count, not proposed count.

ProducedBlock.tx_count is now set to executed_tx_count (line 521). Previously it represented all transactions in the block. This is a semantic change that could affect callers expecting the full transaction count. The enhanced logging on lines 509-516 clarifies the distinction with separate executed and proposed counts.

Consider documenting this semantic change in ProducedBlock struct:

 pub struct ProducedBlock {
     /// Block height.
     pub height: u64,
     /// Block hash.
     pub hash: B256,
-    /// Number of transactions in the block.
+    /// Number of transactions actually executed in the block.
+    /// This may be less than the number proposed if some were skipped.
     pub tx_count: usize,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/server/src/dev.rs` around lines 509 - 525, ProducedBlock.tx_count
was changed to hold executed_tx_count (not total/proposed), which is a semantic
break for callers; update the ProducedBlock definition and its docs/comments to
explicitly state that tx_count is the count of executed transactions (or add a
new field like total_tx_count/ proposed_tx_count if you need to preserve the old
semantics), and update any public docstrings or comments near the ProducedBlock
struct and usages (ProducedBlock, tx_count, executed_tx_count,
proposed_tx_count) to reflect the new meaning so callers are aware of the
change.
♻️ Duplicate comments (1)
bin/testapp/src/sim_testing.rs (1)

456-460: ⚠️ Potential issue | 🟠 Major

Guard max_txs == 0 before calling pool.propose.

Mempool::propose(..., 0) means “no count limit”, so this helper still turns produce_block_from_mempool(0) into “propose everything” instead of “produce an empty block”.

Suggested fix
 fn propose_mempool_batch(&mut self, max_txs: usize) -> (Vec<[u8; 32]>, Vec<TxContext>) {
+    if max_txs == 0 {
+        return (Vec::new(), Vec::new());
+    }
+
     let selected = {
         let mut pool = self.mempool.blocking_write();
         pool.propose(DEFAULT_TEST_BLOCK_GAS_LIMIT, max_txs).0
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/testapp/src/sim_testing.rs` around lines 456 - 460, The helper
propose_mempool_batch currently calls Mempool::propose(..., max_txs) which
treats max_txs==0 as "no limit", so calling produce_block_from_mempool(0)
incorrectly proposes everything; fix by guarding max_txs == 0 before taking the
mempool lock and return empty Vecs immediately when max_txs is zero. Update
propose_mempool_batch to check max_txs==0 at the start and return (Vec::new(),
Vec::new()) instead of calling pool.propose, referencing the
propose_mempool_batch function and Mempool::propose call in your change.
🧹 Nitpick comments (5)
crates/app/node/src/lib.rs (1)

644-648: gRPC server still force-aborted without graceful shutdown.

While shutdown_timeout_secs is now used for context.stop(), the gRPC handle is still aborted via handle.abort() (line 646), which does not respect the timeout. In-flight unary/streaming calls will be dropped immediately. Consider using graceful shutdown if the gRPC server supports it, or document that this timeout only applies to the consensus/runtime path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/node/src/lib.rs` around lines 644 - 648, The gRPC server is
force-aborted via grpc_handle.abort(), which ignores shutdown_timeout_secs and
drops in-flight RPCs; instead wire up a graceful shutdown: locate where
grpc_handle (the spawned task/JoinHandle for the gRPC server) is created and
change the shutdown path to signal the server to shut down gracefully (for
example by sending a shutdown oneshot/trigger the Server::graceful_shutdown path
or calling a provided graceful stop method) and then await grpc_handle with a
tokio::time::timeout(Duration::from_secs(shutdown_timeout_secs)) to let
in-flight calls finish; only call grpc_handle.abort() if that timeout elapses.
Ensure references to grpc_handle and shutdown_timeout_secs (and the
context.stop() usage) are updated accordingly so the timeout actually governs
gRPC shutdown.
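The "signal, await with a timeout, force-abort only on expiry" pattern the comment recommends can be sketched without the real server types. This sketch uses std threads and channels purely for illustration; the actual fix would use tokio::time::timeout around the gRPC JoinHandle.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Ask a worker to wind down, wait up to `timeout` for it to finish,
/// and report whether shutdown completed gracefully. A `false` return
/// is the point at which the caller would fall back to a hard abort.
fn shutdown_with_timeout(
    shutdown_tx: mpsc::Sender<()>,
    done_rx: mpsc::Receiver<()>,
    timeout: Duration,
) -> bool {
    let _ = shutdown_tx.send(()); // signal graceful shutdown
    done_rx.recv_timeout(timeout).is_ok() // true => finished in time
}

fn main() {
    let (shutdown_tx, shutdown_rx) = mpsc::channel();
    let (done_tx, done_rx) = mpsc::channel();

    thread::spawn(move || {
        // Worker loops until it sees the shutdown signal,
        // standing in for in-flight RPC handling.
        while shutdown_rx.try_recv().is_err() {
            thread::sleep(Duration::from_millis(10));
        }
        let _ = done_tx.send(()); // drained cleanly
    });

    let graceful = shutdown_with_timeout(shutdown_tx, done_rx, Duration::from_secs(1));
    assert!(graceful);
}
```

In the real code this maps to signaling the server's graceful-shutdown path, awaiting grpc_handle under a timeout of shutdown_timeout_secs, and only then calling grpc_handle.abort().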
crates/rpc/chain-index/src/provider.rs (1)

855-855: Please assert effective_gas_price in a receipt test.

Line 855 updates the fixture, but the current provider tests still only verify transaction_hash. A regression in the new StoredReceipt -> RpcReceipt mapping would still pass unless get_transaction_receipt() also checks the propagated gas price.

Helpful assertion to add
let receipt = found_receipt.expect("receipt should exist");
assert_eq!(receipt.transaction_hash, tx_hash);
assert_eq!(receipt.effective_gas_price, U256::from(1u64));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/chain-index/src/provider.rs` at line 855, Tests for
get_transaction_receipt currently only assert transaction_hash; update the
relevant receipt test in provider.rs to also assert effective_gas_price to catch
regressions in the StoredReceipt -> RpcReceipt mapping: after unwrapping
found_receipt (the result of get_transaction_receipt()), add an assertion
comparing receipt.effective_gas_price to U256::from(1u64) (alongside the
existing assert_eq!(receipt.transaction_hash, tx_hash)); ensure you reference
the same found_receipt/receipt variables used in the test so the gas price check
validates the propagated value.
crates/rpc/chain-index/src/lib.rs (1)

44-46: One hardcoded "0x1" is still left in the gRPC service.

Re-exporting this constant is the right API shape, but crates/rpc/grpc/src/services/execution.rs:618-620 still returns "0x1" directly. Switching that endpoint to this shared constant would finish the dedup and prevent the two RPC surfaces from drifting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rpc/chain-index/src/lib.rs` around lines 44 - 46, A hardcoded "0x1" is
returned from the gRPC execution service; replace that literal with the shared
constant exported as DEFAULT_PROTOCOL_VERSION to avoid divergence. In the gRPC
service handler in execution.rs, import or reference DEFAULT_PROTOCOL_VERSION
(the re-export from provider alongside
ChainStateProvider/ChainStateProviderConfig) and return that constant instead of
the string "0x1", ensuring the constant is in scope where the response is
constructed.
bin/testapp/Cargo.toml (1)

60-60: Consider removing redundant dev-dependency entry.

evolve_tx_eth appears in both [dependencies] (line 25, with features = ["testing"]) and [dev-dependencies] (line 60, without features). Since Cargo merges features additively, the dev-dependency entry is redundant—the testing feature from the main dependency will already be active during tests.

♻️ Proposed removal
 [dev-dependencies]
 tempfile = "3.8"
 criterion = "0.5"
 alloy-primitives = { workspace = true }
 async-trait = { workspace = true }
 evolve_mempool = { workspace = true }
-evolve_tx_eth = { workspace = true }
 hex = "0.4"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/testapp/Cargo.toml` at line 60, Remove the redundant dev-dependency entry
for evolve_tx_eth in Cargo.toml: delete the line "evolve_tx_eth = { workspace =
true }" from [dev-dependencies] because evolve_tx_eth is already listed under
[dependencies] with features = ["testing"] and Cargo merges features for dev
builds; if you intended different features for tests, instead add the needed
features to the main evolve_tx_eth dependency declaration rather than
duplicating the package in [dev-dependencies], then run cargo test to confirm
nothing breaks.
crates/app/tx/eth/tests/proptest_tests.rs (1)

87-87: Consider using EVOLVE_PROPTEST_CASES for configurable test iterations.

The proptest configurations use hardcoded case counts (20 and 15). As per coding guidelines, property tests should use the EVOLVE_PROPTEST_CASES environment variable for controlling iterations, which allows adjusting test thoroughness in CI vs local development.

This is pre-existing code and out of scope for this PR's deduplication focus, but worth tracking for a future improvement.

Also applies to: 245-245

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/tx/eth/tests/proptest_tests.rs` at line 87, Replace the hardcoded
proptest case counts with a configurable value read from the
EVOLVE_PROPTEST_CASES environment variable: modify the proptest_config
attributes that call ProptestConfig::with_cases(20) and
ProptestConfig::with_cases(15) so they instead use a runtime-parsed usize
obtained from std::env::var("EVOLVE_PROPTEST_CASES").unwrap_or_default()
behavior (falling back to the current defaults 20 and 15 if the env var is
missing or invalid); ensure you reference the same ProptestConfig::with_cases
invocation points in proptest_tests.rs and parse errors fall back silently to
the existing hardcoded counts.
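The env-var fallback the nitpick describes is small enough to sketch. The helper names here are illustrative, not the crate's actual API; only the EVOLVE_PROPTEST_CASES variable name comes from the guidelines.

```rust
use std::env;

/// Parse a case count from a raw value, falling back silently to
/// `default` when the value is missing or unparsable.
fn cases_from(raw: Option<&str>, default: u32) -> u32 {
    raw.and_then(|v| v.parse::<u32>().ok()).unwrap_or(default)
}

/// Reads EVOLVE_PROPTEST_CASES, keeping the hardcoded default otherwise.
fn proptest_cases(default: u32) -> u32 {
    cases_from(env::var("EVOLVE_PROPTEST_CASES").ok().as_deref(), default)
}

fn main() {
    assert_eq!(cases_from(None, 20), 20); // unset -> default
    assert_eq!(cases_from(Some("100"), 20), 100); // valid -> override
    assert_eq!(cases_from(Some("nope"), 15), 15); // garbage -> default

    // In the test file this would feed ProptestConfig, e.g.
    // ProptestConfig::with_cases(proptest_cases(20)).
    let _ = proptest_cases(20);
}
```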

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0c1ef7e0-a70b-4289-bf8b-6c64f39350c2

📥 Commits

Reviewing files that changed from the base of the PR and between 9499e1e and a8c2305.

📒 Files selected for processing (24)
  • bin/testapp/Cargo.toml
  • bin/testapp/src/lib.rs
  • bin/testapp/src/main.rs
  • bin/testapp/src/sim_testing.rs
  • bin/testapp/tests/mempool_e2e.rs
  • bin/txload/Cargo.toml
  • bin/txload/src/main.rs
  • crates/app/node/Cargo.toml
  • crates/app/node/src/lib.rs
  • crates/app/server/src/dev.rs
  • crates/app/server/src/lib.rs
  • crates/app/tx/eth/Cargo.toml
  • crates/app/tx/eth/src/eoa_registry.rs
  • crates/app/tx/eth/src/ethereum/recovery.rs
  • crates/app/tx/eth/src/lib.rs
  • crates/app/tx/eth/src/mempool.rs
  • crates/app/tx/eth/src/test_utils.rs
  • crates/app/tx/eth/tests/integration_tests.rs
  • crates/app/tx/eth/tests/proptest_tests.rs
  • crates/rpc/chain-index/src/lib.rs
  • crates/rpc/chain-index/src/provider.rs
  • crates/rpc/evnode/Cargo.toml
  • crates/rpc/evnode/src/runner.rs
  • crates/rpc/evnode/src/service.rs

Comment on lines +471 to +477
let eoa_ids = eoa_account_ids(&state);
assert_eq!(eoa_ids.len(), 2);
assert_eq!(
read_token_balance(&state, output.genesis_result.token, eoa_ids[0])
+ read_token_balance(&state, output.genesis_result.token, eoa_ids[1]),
1234 + 5678,
);

⚠️ Potential issue | 🟡 Minor

Assert the individual ETH-mapped balances, not just the sum.

This still passes if the balances are swapped between Alice/Bob or one mapped account receives both allocations. Since the test is validating address mapping, derive the two expected account IDs from the configured ETH addresses and assert 1234 and 5678 on each one directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/testapp/src/lib.rs` around lines 471 - 477, The test currently only
checks the sum of balances via eoa_account_ids and read_token_balance; instead
derive the two specific EOA account IDs from the configured ETH addresses (the
same addresses used to build the genesis mapping) and assert each account's
token balance individually against 1234 and 5678 using
read_token_balance(output.genesis_result.token, <derived_account_id>) so the
mapping to Alice/Bob is validated (use the ETH address variables/constants you
already configure to compute the expected account IDs rather than relying on
order from eoa_account_ids).
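A tiny illustration of why the sum-only assertion is weak: swapped allocations still satisfy it, while the per-account assertions the comment recommends catch the error. The values are the ones from the test; the variable names are illustrative.

```rust
fn main() {
    let expected_alice = 1234u64;
    let expected_bob = 5678u64;

    // Suppose the genesis mapping accidentally swapped the two balances.
    let (alice, bob) = (5678u64, 1234u64);

    // The sum assertion from the test still passes...
    assert_eq!(alice + bob, expected_alice + expected_bob);

    // ...even though neither account holds its intended allocation,
    // which per-account assertions would immediately flag.
    assert_ne!(alice, expected_alice);
    assert_ne!(bob, expected_bob);
}
```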

Comment on lines +169 to +171
// Pass the canonical transaction context through authentication so EOAs
// can verify both the recovered sender and the transaction nonce.
Message::new(self)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== AuthenticationPayload producers =="
rg -n -C3 'impl\s+AuthenticationPayload|fn authentication_payload' --type rust

echo
echo "== authenticate(...) implementations =="
rg -n -C4 '\bauthenticate\s*\(' --type rust

echo
echo "== likely old 20-byte payload decoders =="
rg -n -C3 'get::<\[u8; 20\]>\(\)|\[u8;\s*20\]' --type rust

Repository: evstack/ev-rs

Length of output: 20692


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Other AuthenticationPayload implementations =="
rg -n 'impl AuthenticationPayload' --type rust -A 5

echo
echo "== DummyTx and other test tx types =="
rg -n 'struct DummyTx|struct.*Tx' --type rust -B 2 | head -100

echo
echo "== References to fallback [u8; 20] authentication path =="
rg -n 'tx\.get::<\[u8; 20\]>' --type rust -B 2 -A 2

Repository: evstack/ev-rs

Length of output: 8021


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Tests that authenticate with [u8; 20] format =="
rg -n 'authenticate.*Message.*from|authenticate.*0x|Message::from' --type rust -B 3 -A 3

echo
echo "== Check context around eth_eoa.rs authenticate method =="
sed -n '119,140p' bin/testapp/src/eth_eoa.rs

echo
echo "== Check if Message::from works with [u8; 20] =="
rg -n 'impl.*From.*\[u8.*20\]|fn from' crates/app/sdk/stf_traits/src/lib.rs -A 5

Repository: evstack/ev-rs

Length of output: 41196


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Full authenticate method in eth_eoa.rs =="
sed -n '119,150p' bin/testapp/src/eth_eoa.rs

echo
echo "== Search for any other places that call authentication_payload =="
rg -n 'authentication_payload' --type rust -B 2 -A 2

Repository: evstack/ev-rs

Length of output: 3141


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Authentication tests =="
find . -name "*authentication*test*" -o -name "*test*authentication*" | head -20

echo
echo "== Search for tests that construct authentication messages =="
rg -n 'authenticate\(' --type rust -B 5 | grep -A 5 'test\|#\[' | head -60

Repository: evstack/ev-rs

Length of output: 2083


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Full authentication tests file =="
cat bin/testapp/tests/authentication_tests.rs

Repository: evstack/ev-rs

Length of output: 4400


🏁 Script executed:

#!/bin/bash
set -euo clipypefail

echo "== Search for direct authenticate() calls (not through AuthenticationPayload) =="
rg -n '\.authenticate\(' --type rust -B 3 | grep -v test | head -80

echo
echo "== Check if DummyTx's payload field could be [u8; 20] =="
sed -n '70,120p' crates/app/sdk/standards/evolve_authentication/src/lib.rs

Repository: evstack/ev-rs

Length of output: 113


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Search for direct authenticate() calls (not through AuthenticationPayload) =="
rg -n '\.authenticate\(' --type rust -B 3 | grep -v test | head -80

echo
echo "== Check DummyTx and its payload field =="
sed -n '70,120p' crates/app/sdk/standards/evolve_authentication/src/lib.rs

Repository: evstack/ev-rs

Length of output: 1825


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Check if tests use the [u8; 20] fallback path =="
rg -n 'authenticate.*\[u8; 20\]|Message::from.*\[u8; 20\]' --type rust

echo
echo "== Verify tests all pass =="
echo "Checking if authentication_tests.rs exercises canonical path only..."
grep -n "submit_transfer\|build.*transfer" bin/testapp/tests/authentication_tests.rs

Repository: evstack/ev-rs

Length of output: 883


Intentional payload contract narrowing—verify all custom tx types handle the new Message format.

With TxContext now returning Message::new(self) (full encoded envelope), the fallback at EthEoaAccount line 130 expecting tx.get::<[u8; 20]>() will no longer decode. This is an intentional tightening of the authentication interface, but any custom AuthenticationPayload implementations sending raw sender bytes will fail at runtime. The canonical ETH path (line 126 with tx.get::<TxContext>()) continues to work; confirm that all other tx types either rely on the canonical path or define their own authentication payload shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/app/tx/eth/src/mempool.rs` around lines 169 - 171, TxContext now
returns Message::new(self) (the full encoded envelope), which breaks custom
AuthenticationPayload implementations that expect raw sender bytes via
tx.get::<[u8; 20]>() (as used in EthEoaAccount); update those implementations to
either parse the new Message envelope or switch to using tx.get::<TxContext>()
so they decode the canonical TxContext. Locate usages of Message::new(self),
EthEoaAccount, and any AuthenticationPayload impls that call tx.get::<[u8;20]>()
and change them to extract the sender from TxContext or implement a
Message-aware parser that extracts the same sender field from the new encoded
format, keeping the canonical ETH path (tx.get::<TxContext>()) intact.
