Conversation
📝 Walkthrough

Replaced multiple deploy-config

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client/Test
    participant ShadowProver as ShadowProver
    participant Executor as Local Executor
    participant L1Provider as L1 Provider
    participant Rollup as Rollup Contract
    Client->>ShadowProver: prove(BatchProveInfo { batch_info, batch_header })
    ShadowProver->>Executor: execute_block(batch_info)
    Executor-->>ShadowProver: BlockState + proof
    ShadowProver->>ShadowProver: generate_aggregation_proof()
    ShadowProver->>L1Provider: eth_call proveCommittedBatchState(batch_header, proof) (dry-run)
    L1Provider-->>ShadowProver: call result (ok/fail)
    alt dry-run ok
        ShadowProver->>Rollup: send proveCommittedBatchState(tx with proof)
        Rollup-->>ShadowProver: tx receipt/status
    else dry-run failed
        ShadowProver-->>Client: log and retry/skip
    end
    ShadowProver-->>Client: final result
```
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Comment |
Update morph-revm/morph-primitives/morph-chainspec to rev 72367096, which adds serde(default) for TxMorph version/feeTokenId/feeLimit fields.
There was a problem hiding this comment.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
prover/crates/executor/client/src/types/blob.rs (1)
66-75: ⚠️ Potential issue | 🟠 Major

Do not treat malformed tx bytes as a partial success.

Unsupported types still `break` and return a truncated list, while truncated buffers or decode failures still panic via unchecked slicing and `unwrap()`. For blob-derived input, this should fail closed.

🛡️ Proposed change
```diff
 let tx_len_size = if first_byte > 0xf7 {
     (first_byte - 0xf7) as usize
 } else {
     // Support transaction types: 0x01, 0x02, 0x04, 0x7f
-    if first_byte != 0x01 && first_byte != 0x02 && first_byte != 0x04 && first_byte != 0x7f
-    {
-        log::info!("not supported tx type: 0x{first_byte:02x}");
-        break;
+    if !matches!(first_byte, 0x01 | 0x02 | 0x04 | 0x7f) {
+        return Err(anyhow!("unsupported tx type: 0x{first_byte:02x} at offset {offset}"));
     }
-    (*bs.get(offset + 1).unwrap() - 0xf7) as usize
+    let Some(&len_prefix) = bs.get(offset + 1) else {
+        return Err(anyhow!("truncated typed tx at offset {offset}"));
+    };
+    if len_prefix <= 0xf7 {
+        return Err(anyhow!("unexpected short typed tx payload at offset {offset}"));
+    }
+    (len_prefix - 0xf7) as usize
 };
+if tx_len_size > 4 {
+    return Err(anyhow!("unsupported transaction length prefix at offset {offset}"));
+}
@@
-let tx_bytes = bs[offset..offset + rlp_tx_len].to_vec();
-let tx_decoded = MorphTxEnvelope::decode_2718(&mut tx_bytes.as_slice())
-    .inspect_err(|e| {
-        log::error!("decode_transaction error: {e:?}");
-    })
-    .unwrap();
+let Some(mut tx_slice) = bs.get(offset..offset + rlp_tx_len) else {
+    return Err(anyhow!("truncated transaction at offset {offset}"));
+};
+let tx_decoded = MorphTxEnvelope::decode_2718(&mut tx_slice)
+    .with_context(|| format!("failed to decode transaction at offset {offset}"))?;
```

This needs a small follow-up outside the selected hunk: change `decode_transactions` to return `Result<Vec<MorphTxEnvelope>, anyhow::Error>` and wrap the final value in `Ok(txs_decoded)`.

Also applies to: 93-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/crates/executor/client/src/types/blob.rs` around lines 66 - 75, The loop in decode_transactions treats malformed or unsupported transaction bytes as partial success (uses break) and panics on unchecked ops (bs.get().unwrap(), unchecked slicing); change decode_transactions to return Result<Vec<MorphTxEnvelope>, anyhow::Error>, replace any breaks for unsupported types with returning Err(...) (include context like first_byte), validate bounds from bs.get(...) and slice lengths and convert unwraps into early Err on failure, and at the end wrap and return Ok(txs_decoded); update the other similar block around lines 93-98 the same way so all decode errors return Err rather than truncating or panicking.
🧹 Nitpick comments (1)
prover/crates/storage/witness-db/src/lib.rs (1)
100-114: Consider consistent error handling in `storage_ref`.

The `basic_ref` method properly returns errors via `Result`, but `storage_ref` uses `.expect()`, which will panic on missing storage tries or MPT errors. While this may be intentional (incomplete witness = bug), the inconsistency could make debugging harder if the prover crashes.

♻️ Optional: Return errors instead of panicking
```diff
 fn storage_ref(&self, address: Address, index: U256) -> Result<U256, Self::Error> {
     let hashed_address = keccak256(address);
     let hashed_address = hashed_address.as_slice();
     let storage_trie = self
         .inner
         .storage_tries
         .get(hashed_address)
-        .expect("A storage trie must be provided for each account");
+        .ok_or_else(|| TrieDBError(format!(
+            "storage trie not found for address: {address:?}"
+        )))?;

-    Ok(storage_trie
+    let value = storage_trie
         .get_rlp::<U256>(keccak256(index.to_be_bytes::<32>()).as_slice())
-        .expect("Can get from MPT")
-        .unwrap_or_default())
+        .map_err(|e| TrieDBError(format!(
+            "MPT error for {address:?} at index {index}: {e:?}"
+        )))?
+        .unwrap_or_default();
+
+    Ok(value)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/crates/storage/witness-db/src/lib.rs` around lines 100 - 114, The storage_ref function currently panics via expect() when a storage trie is missing or when get_rlp fails; make it consistent with basic_ref by returning Err(Self::Error) instead of panicking: locate storage_ref and replace the expect("A storage trie must be provided for each account") on self.inner.storage_tries.get(hashed_address) with an early-return Err mapped to the crate's error type, and replace the .expect("Can get from MPT") and unwrap_or_default() on storage_trie.get_rlp::<U256>(...) with proper mapping of the MPT error into Self::Error and propagating a default or None as defined by the Result contract; use map_err/ok_or_else or ? to convert underlying errors into Self::Error so callers receive a Result instead of a panic (refer to storage_ref, inner.storage_tries, and get_rlp).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@prover/bin/shadow-prove/contracts/README.md`:
- Around line 17-20: The markdown code fences for the deploy commands (the
EvmVerifier and ShadowRollup blocks) are missing a language tag; update the
triple-backtick fences surrounding the forge create commands to use the bash
language tag (e.g., change ``` to ```bash) for both the EvmVerifier block and
the ShadowRollup block so markdownlint stops flagging them.
In `@prover/bin/shadow-prove/src/shadow_prove.rs`:
- Line 229: The code currently calls pending_tx.get_receipt().await.unwrap(),
which will panic on RPC/network failures; replace this unwrap with proper error
handling: await the Result from pending_tx.get_receipt().await and either
propagate the error (return Err) from the surrounding function or handle it
locally (log the error via the existing logger and skip/abort the proving task).
Locate the call to pending_tx.get_receipt() in shadow_prove.rs, bind its Result
(e.g., match or if let), and handle Err(e) by logging/contextualizing the error
and returning or continuing instead of unwrapping so the task doesn't crash.
- Around line 61-73: The constructor new currently takes parameters in the order
(wallet_address, shadow_rollup_address, rollup_address, ...) which causes
l1_rollup and l1_shadow_rollup to be initialized with swapped addresses; change
the parameter order to (wallet_address, rollup_address, shadow_rollup_address,
l1_provider, shadow_provider, wallet) and update the internal initializations so
l1_rollup = Rollup::RollupInstance::new(rollup_address, l1_provider.clone()) and
l1_shadow_rollup = ShadowRollupInstance::new(shadow_rollup_address, wallet)
match the call site; ensure the struct field assignment Self { shadow_provider,
l1_rollup, l1_shadow_rollup, wallet_address } remains correct and update any
other call sites of new accordingly.
In `@prover/contracts/README.md`:
- Line 20: The deploy example's --constructor-args for EvmVerifier contains a
stray "2d" in the programVkey; update the command so the 0x... value matches the
new programVkey used elsewhere in this PR (fix the hex string passed to
--constructor-args for EvmVerifier in the README so it exactly equals the
updated programVkey).
In `@prover/crates/primitives/src/lib.rs`:
- Around line 165-172: The new Morph accessors on the TxTrace trait are required
today but only used when ty() == 0x7f, which breaks existing implementers; add
default implementations on the trait so implementers aren't forced to change. In
the TxTrace trait, provide a default for morph_tx_version(&self) -> u8 (e.g.,
return 0) and defaults for morph_tx_reference(&self) -> Option<B256> and
morph_tx_memo(&self) -> Option<Bytes> (return None), keeping the method
signatures intact so callers can still read them when ty() == 0x7f but existing
implementers don't need to implement these methods.
In `@prover/crates/primitives/src/types/tx.rs`:
- Around line 176-178: The method morph_tx_version currently collapses None into
0, making a missing Morph version indistinguishable from an explicit 0; update
handling so missing versions are not defaulted: either change morph_tx_version
to return Option<u8> (propagate None and let callers/TxMorph construction handle
absence), or if the TxTrace trait must remain u8, remove the unwrap_or(0) and
keep an explicit check where TxMorph/0x7f transactions are constructed (e.g., in
the TxMorph builder/0x7f construction path) to validate and reject None/missing
version instead of treating it as 0; reference morph_tx_version, TxMorph
construction, and the 0x7f path when making the fix.
---
Outside diff comments:
In `@prover/crates/executor/client/src/types/blob.rs`:
- Around line 66-75: The loop in decode_transactions treats malformed or
unsupported transaction bytes as partial success (uses break) and panics on
unchecked ops (bs.get().unwrap(), unchecked slicing); change decode_transactions
to return Result<Vec<MorphTxEnvelope>, anyhow::Error>, replace any breaks for
unsupported types with returning Err(...) (include context like first_byte),
validate bounds from bs.get(...) and slice lengths and convert unwraps into
early Err on failure, and at the end wrap and return Ok(txs_decoded); update the
other similar block around lines 93-98 the same way so all decode errors return
Err rather than truncating or panicking.
---
Nitpick comments:
In `@prover/crates/storage/witness-db/src/lib.rs`:
- Around line 100-114: The storage_ref function currently panics via expect()
when a storage trie is missing or when get_rlp fails; make it consistent with
basic_ref by returning Err(Self::Error) instead of panicking: locate storage_ref
and replace the expect("A storage trie must be provided for each account") on
self.inner.storage_tries.get(hashed_address) with an early-return Err mapped to
the crate's error type, and replace the .expect("Can get from MPT") and
unwrap_or_default() on storage_trie.get_rlp::<U256>(...) with proper mapping of
the MPT error into Self::Error and propagating a default or None as defined by
the Result contract; use map_err/ok_or_else or ? to convert underlying errors
into Self::Error so callers receive a Result instead of a panic (refer to
storage_ref, inner.storage_tries, and get_rlp).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1686d412-9ba6-47bd-bd1d-beeaa7ec3348
⛔ Files ignored due to path filters (1)
`prover/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
- contracts/src/deploy-config/holesky.ts
- contracts/src/deploy-config/hoodi.ts
- contracts/src/deploy-config/l1.ts
- contracts/src/deploy-config/qanetl1.ts
- contracts/src/deploy-config/sepolia.ts
- contracts/src/deploy-config/testnetl1.ts
- prover/.gitignore
- prover/Cargo.toml
- prover/bin/client/elf/verifier-client
- prover/bin/shadow-prove/abi/Rollup.json
- prover/bin/shadow-prove/contracts/README.md
- prover/bin/shadow-prove/src/execute.rs
- prover/bin/shadow-prove/src/main.rs
- prover/bin/shadow-prove/src/shadow_prove.rs
- prover/contracts/README.md
- prover/crates/executor/client/Cargo.toml
- prover/crates/executor/client/src/types/blob.rs
- prover/crates/executor/core/Cargo.toml
- prover/crates/executor/core/src/executor.rs
- prover/crates/executor/host/src/execute.rs
- prover/crates/primitives/src/lib.rs
- prover/crates/primitives/src/types/tx.rs
- prover/crates/storage/rpc-db/src/basic_rpc_db.rs
- prover/crates/storage/witness-db/src/lib.rs
- prover/rust-toolchain
- prover/testdata/state/executor_input_19720290.data
💤 Files with no reviewable changes (1)
- prover/crates/executor/core/Cargo.toml
…ipt error
- Swap rollup_address and shadow_rollup_address parameters in ShadowProver::new() to match the call site in main.rs
- Replace unwrap() on get_receipt() with proper error handling to avoid panics on RPC/network failures
There was a problem hiding this comment.
🧹 Nitpick comments (2)
prover/bin/shadow-prove/src/shadow_prove.rs (2)
134-138: Consider handling `spawn_blocking` join errors.

The `.unwrap()` on `spawn_blocking().await` will panic if the spawned task panics (e.g., if `serde_json::to_string` fails on line 135, though unlikely for this struct). While rare, graceful handling would improve resilience.

♻️ Proposed fix
```diff
 let rt = tokio::task::spawn_blocking(move || {
     util::call_prover(serde_json::to_string(&request).unwrap(), "/prove_batch")
 })
-.await
-.unwrap();
+.await
+.unwrap_or_else(|e| {
+    log::error!("spawn_blocking task failed: {:#?}", e);
+    None
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/bin/shadow-prove/src/shadow_prove.rs` around lines 134 - 138, The code currently uses tokio::task::spawn_blocking(...).await.unwrap(), which will panic on a JoinError; change this to handle the Result from spawn_blocking().await (the JoinError) instead of unwrap—match or use map_err/expect_with_context to log or return a proper error when the spawned task panics or is cancelled; ensure any error from serde_json::to_string and util::call_prover is propagated or converted into a meaningful error value instead of causing a panic so the caller of the function (and variable rt) can handle failures gracefully.
254-258: Same `spawn_blocking` unwrap pattern.

Similar to line 138, this `.unwrap()` on `spawn_blocking().await` will panic on task failure. For consistency, consider handling the join error gracefully here as well.

♻️ Proposed fix
```diff
 let rt = tokio::task::spawn_blocking(move || {
     util::call_prover(batch_index.to_string(), "/query_proof")
 })
-.await
-.unwrap();
+.await
+.unwrap_or_else(|e| {
+    log::error!("spawn_blocking task failed: {:#?}", e);
+    None
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/bin/shadow-prove/src/shadow_prove.rs` around lines 254 - 258, The .unwrap() on the tokio::task::spawn_blocking join (the block that calls util::call_prover(batch_index.to_string(), "/query_proof")) can panic on task failure; replace the unwrap with proper JoinError handling—await the JoinHandle, match on the Result, and propagate or log the error instead of panicking (e.g., return an Err or map the JoinError into your function's error type). Update the block that creates rt (the spawn_blocking call and its .await) to handle Err(join_err) and handle util::call_prover failures gracefully.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@prover/bin/shadow-prove/src/shadow_prove.rs`:
- Around line 134-138: The code currently uses
tokio::task::spawn_blocking(...).await.unwrap(), which will panic on a
JoinError; change this to handle the Result from spawn_blocking().await (the
JoinError) instead of unwrap—match or use map_err/expect_with_context to log or
return a proper error when the spawned task panics or is cancelled; ensure any
error from serde_json::to_string and util::call_prover is propagated or
converted into a meaningful error value instead of causing a panic so the caller
of the function (and variable rt) can handle failures gracefully.
- Around line 254-258: The .unwrap() on the tokio::task::spawn_blocking join
(the block that calls util::call_prover(batch_index.to_string(),
"/query_proof")) can panic on task failure; replace the unwrap with proper
JoinError handling—await the JoinHandle, match on the Result, and propagate or
log the error instead of panicking (e.g., return an Err or map the JoinError
into your function's error type). Update the block that creates rt (the
spawn_blocking call and its .await) to handle Err(join_err) and handle
util::call_prover failures gracefully.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 43cb02f6-ed6f-4832-a6ff-72d4d2893090
📒 Files selected for processing (1)
prover/bin/shadow-prove/src/shadow_prove.rs
- Add comment clarifying proveCommittedBatchState uses eth_call as dry-run validation before sending the actual shadow tx
- Skip shadow tx when dry-run validation fails (continue instead of falling through)
- Align alloy-primitives version (1.5.6 -> 1.5.7) with Cargo.lock
- Add trailing newline to .gitignore
There was a problem hiding this comment.
🧹 Nitpick comments (3)
prover/bin/shadow-prove/src/shadow_prove.rs (3)
162-164: Document or extract the magic number for block prove time.

The value `300` represents seconds per block proving time (5 minutes). Consider extracting this as a named constant or adding a brief inline comment for maintainability.

♻️ Proposed improvement
```diff
+const BLOCK_PROVE_TIME_SECS: usize = 300; // 5 minutes per block
 // Step5. query proof and prove onchain state.
-let mut max_waiting_time: usize = 300 * blocks_len as usize; //block_prove_time = 5min
+let mut max_waiting_time: usize = BLOCK_PROVE_TIME_SECS * blocks_len as usize;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/bin/shadow-prove/src/shadow_prove.rs` around lines 162 - 164, The magic number 300 (seconds per block proof) used to compute max_waiting_time should be extracted to a named constant (e.g., BLOCK_PROVE_SECONDS or BLOCK_PROVE_TIME_SECONDS) or at minimum documented with an inline comment; update the calculation in shadow_prove.rs where max_waiting_time is set (currently: let mut max_waiting_time: usize = 300 * blocks_len as usize) to use the new constant and ensure the name clearly indicates seconds (or include units in the comment) so future maintainers understand the 5-minute-per-block assumption.
134-138: Consider handling `spawn_blocking` join errors gracefully.

The `.unwrap()` on `spawn_blocking(...).await` will panic if the spawned task panics. While `call_prover` returns `Option` and is unlikely to panic, defensive handling would improve resilience.

♻️ Proposed improvement
```diff
 let rt = tokio::task::spawn_blocking(move || {
     util::call_prover(serde_json::to_string(&request).unwrap(), "/prove_batch")
 })
-.await
-.unwrap();
+.await
+.unwrap_or_else(|e| {
+    log::error!("spawn_blocking join error: {:#?}", e);
+    None
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/bin/shadow-prove/src/shadow_prove.rs` around lines 134 - 138, The code currently calls tokio::task::spawn_blocking(...).await.unwrap() which will panic if the spawned task panics; change this to handle the JoinError returned by await on the spawn_blocking JoinHandle. Replace the unwrap with a match or map_err on the await result from tokio::task::spawn_blocking(...) so that if Err(e) you log or return a sensible error instead of panicking; reference the spawn_blocking call and the call_prover invocation (and the rt variable) when adding the error handling so the join failure is caught and handled gracefully.
280-297: Consider relocating this test to a more appropriate module.

This test validates `SP1Verifier` error payload decoding, which is functionality from the `abi` module rather than `shadow_prove`. Moving it to the `abi` module's tests or a dedicated integration test file would improve code organization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/bin/shadow-prove/src/shadow_prove.rs` around lines 280 - 297, The unit test that decodes SP1Verifier errors should be moved out of shadow_prove.rs into the abi module's tests (or a dedicated integration test) because it exercises abi::SP1Verifier behavior; cut the test block and paste it into the abi module test file (or tests/ integration file), update imports to use crate::abi::SP1Verifier and alloy_json_rpc::ErrorPayload there, and ensure the assertions still call payload.as_decoded_error::<SP1Verifier::WrongVerifierSelector>() and as_decoded_error::<SP1Verifier::InvalidProof>() from the new location; finally remove the test from shadow_prove.rs so only abi-related tests live with the abi code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@prover/bin/shadow-prove/src/shadow_prove.rs`:
- Around line 162-164: The magic number 300 (seconds per block proof) used to
compute max_waiting_time should be extracted to a named constant (e.g.,
BLOCK_PROVE_SECONDS or BLOCK_PROVE_TIME_SECONDS) or at minimum documented with
an inline comment; update the calculation in shadow_prove.rs where
max_waiting_time is set (currently: let mut max_waiting_time: usize = 300 *
blocks_len as usize) to use the new constant and ensure the name clearly
indicates seconds (or include units in the comment) so future maintainers
understand the 5-minute-per-block assumption.
- Around line 134-138: The code currently calls
tokio::task::spawn_blocking(...).await.unwrap() which will panic if the spawned
task panics; change this to handle the JoinError returned by await on the
spawn_blocking JoinHandle. Replace the unwrap with a match or map_err on the
await result from tokio::task::spawn_blocking(...) so that if Err(e) you log or
return a sensible error instead of panicking; reference the spawn_blocking call
and the call_prover invocation (and the rt variable) when adding the error
handling so the join failure is caught and handled gracefully.
- Around line 280-297: The unit test that decodes SP1Verifier errors should be
moved out of shadow_prove.rs into the abi module's tests (or a dedicated
integration test) because it exercises abi::SP1Verifier behavior; cut the test
block and paste it into the abi module test file (or tests/ integration file),
update imports to use crate::abi::SP1Verifier and alloy_json_rpc::ErrorPayload
there, and ensure the assertions still call
payload.as_decoded_error::<SP1Verifier::WrongVerifierSelector>() and
as_decoded_error::<SP1Verifier::InvalidProof>() from the new location; finally
remove the test from shadow_prove.rs so only abi-related tests live with the abi
code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 552fc034-faa3-4242-8e13-ad1ff719679e
📒 Files selected for processing (3)
- prover/.gitignore
- prover/Cargo.toml
- prover/bin/shadow-prove/src/shadow_prove.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- prover/.gitignore
Summary
Upgrade prover to support Jade hardfork.
Key Changes
- Upgrade `morph-revm`, `morph-primitives`, `morph-chainspec` to latest main (72367096), which adds `serde(default)` for TxMorph `version`/`feeTokenId`/`feeLimit` fields for backward-compatible deserialization of legacy V0 MorphTx data
- Add `morph_tx_version`, `morph_tx_reference`, `morph_tx_memo` fields to the `TxTrace` trait and related types in prover-primitives
- Add `proveCommittedBatchStateView` in shadow proving, update the Rollup ABI, and refactor batch execution logic
- Set `MORPH_TX_V1_BLOCK` across all network deploy configs
- New state testdata (`executor_input_19720290.data`)

Test Plan
- `test_execute_local_traces` — all 11 MPT testdata blocks pass
- `test_execute_local` — state testdata (`executor_input_19720290.data`) passes
- `cargo build --release -p shadow-proving` compiles successfully

Summary by CodeRabbit
New Features
Chores