📝 Walkthrough

Introduces a new exported constant.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~7 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
node/types/retryable_client.go (1)
160-172: ⚠️ Potential issue | 🔴 Critical

**Don't share one `ExponentialBackOff` across the whole client.**

`backoff.Retry` resets and mutates the backoff state (via `Reset()` and repeated `NextBackOff()` calls), making concurrent RPC calls on `RetryableClient` race when they all use `rc.b`. This causes nondeterministic retry timing and potential data races. Construct a fresh backoff per call instead, bound to the call context using `backoff.WithContext`.

All these methods pass `rc.b` to `backoff.Retry`:

- `AssembleL2Block` (line 313)
- `ValidateL2Block` (line 332)
- `NewL2Block` (line 353)
- `NewSafeL2Block` (line 372)
- `CommitBatch` (line 389)
- `AppendBlsSignature` (line 396)
- And others

Suggested fix

Remove the `b backoff.BackOff` field from `RetryableClient` (line 141) and the backoff initialization from `NewRetryableClient` (lines 161–162, 172, 194). Add a helper function:

```go
func newGethBackoff(ctx context.Context) backoff.BackOff {
	bo := backoff.NewExponentialBackOff()
	bo.MaxElapsedTime = GethRetryMaxElapsedTime
	return backoff.WithContext(bo, ctx)
}
```

Then replace each `backoff.Retry(..., rc.b)` with `backoff.Retry(..., newGethBackoff(ctx))`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/types/retryable_client.go` around lines 160 - 172, The RetryableClient currently stores a single backoff.BackOff (rc.b) created in NewRetryableClient which is mutated by backoff.Retry and causes races when multiple RPC methods (e.g., AssembleL2Block, ValidateL2Block, NewL2Block, NewSafeL2Block, CommitBatch, AppendBlsSignature) call backoff.Retry concurrently; remove the b backoff.BackOff field from RetryableClient and its initialization in NewRetryableClient, add a helper newGethBackoff(ctx context.Context) that creates an ExponentialBackOff with MaxElapsedTime = GethRetryMaxElapsedTime and wraps it via backoff.WithContext, and update every backoff.Retry call that currently passes rc.b to instead call backoff.Retry(..., newGethBackoff(ctx)) so each RPC gets a fresh, context-bound backoff instance.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node/types/retryable_client.go`:
- Around line 34-36: FetchGethConfigWithRetry still uses a fixed loop based on
GethRetryAttempts and GethRetryInterval (60 * 5s ≈ 5 minutes) so startup is
capped at ~5 minutes; change FetchGethConfigWithRetry to honor
GethRetryMaxElapsedTime instead (or reuse the same backoff policy used
elsewhere) by looping until elapsed time >= GethRetryMaxElapsedTime or using the
existing backoff/RetryWithBackoff helper, replacing the fixed attempt/count
check with a time-based check that references GethRetryMaxElapsedTime (and keep
GethRetryInterval as the sleep between attempts).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4bb8c8f3-c86b-46ec-b86f-e2861406ddbc
📒 Files selected for processing (1)
node/types/retryable_client.go
```go
GethRetryAttempts       = 60              // max retry attempts
GethRetryInterval       = 5 * time.Second // interval between retries
GethRetryMaxElapsedTime = 30 * time.Minute
```
This still leaves the startup retry path capped at 5 minutes.
FetchGethConfigWithRetry is still bounded by GethRetryAttempts * GethRetryInterval (60 * 5s), so boot-time geth readiness will continue to fail after ~5 minutes even though this PR introduces a 30-minute retry window. If the goal is to tolerate slower geth bring-up, please derive that loop from GethRetryMaxElapsedTime or move it onto the same backoff policy.
```go
GethRetryAttempts       = 60              // max retry attempts
GethRetryInterval       = 5 * time.Second // interval between retries
GethRetryMaxElapsedTime = 30 * time.Minute
```
The DefaultMaxElapsedTime is 15 * time.Minute. Why do we need to change RetryableClient.b's max elapsed time to 30 minutes via GethRetryMaxElapsedTime?