2 changes: 1 addition & 1 deletion .github/workflows/lint.yml
@@ -29,7 +29,7 @@ jobs:
       - uses: golangci/golangci-lint-action@v9.2.0
         with:
           version: latest
-          args: --timeout 10m
+          args: --timeout 10m ./...
           github-token: ${{ secrets.github_token }}
         if: env.GIT_DIFF
65 changes: 64 additions & 1 deletion .golangci.yml
@@ -1,18 +1,63 @@
version: "2"
run:
  modules-download-mode: readonly
  timeout: 10m
  tests: true
  build-tags:
    - evm
    - e2e
    - docker
linters:
  enable:
    - asciicheck
    - bidichk
    - bodyclose
    - containedctx
    - contextcheck
    - copyloopvar
    - durationcheck
    - errname
    - errcheck
    - errorlint
    - gocritic
    - govet
    - ineffassign
    - makezero
    - gosec
    - misspell
    - nilerr
    - noctx
    - nolintlint
    - prealloc
    - predeclared
    - reassign
    - revive
    - rowserrcheck
    - sqlclosecheck
    - staticcheck
    - testifylint
    - unconvert
    - unparam
    - unused
    - usestdlibvars
    - wastedassign
Comment on lines +12 to +43
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n .golangci.yml

Repository: evstack/ev-node

Length of output: 2986


🏁 Script executed:

#!/bin/bash
# Verify overlap between linters.enable and linters.disable in .golangci.yml
python - <<'PY'
import re
from pathlib import Path

lines = Path(".golangci.yml").read_text().splitlines()
in_linters = False
state = None
enable, disable = set(), set()

for line in lines:
    if re.match(r'^linters:\s*$', line):
        in_linters = True
        state = None
        continue
    if in_linters and re.match(r'^[^\s].*:\s*$', line):  # next top-level block
        in_linters = False
        state = None
    if not in_linters:
        continue

    if re.match(r'^\s{2}enable:\s*$', line):
        state = "enable"
        continue
    if re.match(r'^\s{2}disable:\s*$', line):
        state = "disable"
        continue
    if re.match(r'^\s{2}[a-zA-Z0-9_-]+:\s*$', line) and not re.match(r'^\s{2}(enable|disable):\s*$', line):
        state = None
        continue

    m = re.match(r'^\s{4}-\s+([a-zA-Z0-9_-]+)\s*$', line)
    if not m:
        continue
    if state == "enable":
        enable.add(m.group(1))
    elif state == "disable":
        disable.add(m.group(1))

overlap = sorted(enable & disable)
print("Enabled linters:", sorted(enable))
print("\nDisabled linters:", sorted(disable))
print("\nOverlapping linters:", overlap if overlap else "none")
PY

Repository: evstack/ev-node

Length of output: 735


Conflicting linter configuration disables several newly enabled checks.

Lines 12-43 enable containedctx, errcheck, gocritic, nolintlint, testifylint, and unparam, which are then disabled again on lines 90-97. The contradiction also renders the related settings on lines 45-60 (for errcheck and gocritic) ineffective, undermining the PR's stricter-linting objective.

Suggested cleanup (if stricter mode is intended)
   disable:
-    - containedctx
-    - errcheck
-    - gocritic
-    - nolintlint
-    - testifylint
     - thelper
     - tparallel
-    - unparam
     - wrapcheck
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.golangci.yml around lines 12 - 43, The linter config enables containedctx,
errcheck, gocritic, nolintlint, testifylint and unparam earlier but then
disables them later, neutralizing the stricter rules; remove those linters from
the later disable list (containedctx, errcheck, gocritic, nolintlint,
testifylint, unparam) so the initial enables take effect, and ensure the
associated per-linter settings for errcheck and gocritic remain in their current
config blocks (so their options are honored).

  settings:
    errcheck:
      check-type-assertions: true
      check-blank: true
    govet:
      enable-all: true
      disable:
        - fieldalignment
        - shadow
    gocritic:
      enabled-tags:
        - diagnostic
        - style
        - performance
      disabled-checks:
        - hugeParam
        - rangeValCopy
    gosec:
      excludes:
        - G115
@@ -21,7 +66,7 @@ linters:
         - name: package-comments
           disabled: true
         - name: duplicated-imports
-          severity: warning
+          severity: error
         - name: exported
           arguments:
             - disableStutteringCheck
@@ -32,10 +77,28 @@ linters:
       - common-false-positives
       - legacy
       - std-error-handling
+    rules:
+      - path: _test\.go
+        linters:
+          - prealloc
+          - noctx
Comment on lines +80 to +84
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

golangci-lint v2 linters exclusions rules path suppress enforce

💡 Result:

golangci-lint v2: exclude/suppress/enforce (paths, rules, linters)

1) Enable/disable (enforce which linters run)

version: "2"
linters:
  default: standard   # or: all | none | fast
  enable:  [gosec]
  disable: [dupl]

linters.default/enable/disable control which linters execute. [1]

2) Exclude findings (without skipping analysis) — linters.exclusions.*

Per-path / per-linter / per-text / per-source:

linters:
  exclusions:
    warn-unused: true
    presets: [comments, common-false-positives, legacy, std-error-handling]
    rules:
      - path: '(.+)_test\.go'
        linters: [gocyclo, errcheck]
      - path-except: '(.+)_test\.go'
        linters: [forbidigo]
      - linters: [staticcheck]
        text: "SA9003:"
      - linters: [lll]
        source: "^//go:generate "

Also available:

linters:
  exclusions:
    paths: ["path/to/a/file.go", "path/to/a/dir/"]
    paths-except: ["keep/this/path.go"]
    generated: strict   # or: lax | disable

These options are the v2 way to exclude issues by path/text/source/linter and to control generated-files handling. [1] (examples also shown in the “False Positives” doc) [2]

3) Suppress inline with //nolint

var x int //nolint:wsl,unused
//nolint:all
func legacy() {}

Valid directive form has no spaces in //nolint:.... [2]

4) Enforce “good” nolint usage (require specific linter + explanation, disallow unused)

linters:
  settings:
    nolintlint:
      allow-unused: false
      require-specific: true
      require-explanation: true

This makes //nolint harder to misuse (and catches unused/ill-formed directives). [3]

5) Make path matching predictable (important for exclusions)

run:
  relative-path-mode: gomod   # or: gitroot | cfg | wd

This controls how relative paths in exclusions (and some other features) are evaluated. [1]

Sources: Configuration file options & examples [1], exclusions + nolint syntax [2], nolintlint enforcement settings [3].


🏁 Script executed:

# Find and examine the .golangci.yml file
fd -t f ".golangci.yml" -x cat -n {}

Repository: evstack/ev-node

Length of output: 41


🏁 Script executed:

# Search for the .golangci.yml file in the repository
fd ".golangci" --type f

Repository: evstack/ev-node

Length of output: 41


🏁 Script executed:

# If found, display the content with line numbers, focusing on the relevant section
cat -n .golangci.yml | sed -n '70,95p'

Repository: evstack/ev-node

Length of output: 726


This exclusion rule suppresses prealloc and noctx linter findings in test files; confirm this is intentional.

The linters.exclusions.rules configuration with path: _test\.go and linters: [prealloc, noctx] excludes findings from these linters for all test files. If the project standards expect stricter linting in tests (deterministic tests, table-driven patterns, standard formatting), this rule should be removed or narrowed to specific test files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.golangci.yml around lines 80 - 84, The current golangci-lint exclusion rule
("rules" entry with path: _test\.go and linters: [prealloc, noctx]) broadly
suppresses prealloc and noctx findings for all test files; either remove this
exclusion or narrow it to specific test filenames or directories that truly
require it (modify the "path" pattern or list explicit files) so tests remain
subject to desired lint checks, or document the justification inline; update the
.golangci.yml rules block (the "path" and "linters" entries) accordingly.
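For reference, `prealloc` flags loops that grow a slice through repeated `append` when the final length is knowable up front. A small illustrative sketch (not code from this repository) of the pattern it reports and the preferred form:

```go
package main

import "fmt"

func main() {
	items := []int{1, 2, 3, 4}

	// prealloc would flag this: `squares` grows by repeated append,
	// reallocating as it goes.
	var squares []int
	for _, v := range items {
		squares = append(squares, v*v)
	}

	// Preferred form: size the backing array once with make.
	squares2 := make([]int, 0, len(items))
	for _, v := range items {
		squares2 = append(squares2, v*v)
	}

	fmt.Println(squares, squares2)
}
```

Excluding it in tests is a common pragmatic choice, since test slices are usually tiny and the extra allocation is irrelevant there.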

    paths:
      - third_party$
      - builtin$
      - examples$
  disable:
    - containedctx
    - errcheck
    - gocritic
    - nolintlint
    - testifylint
    - thelper
    - tparallel
    - unparam
    - wrapcheck
issues:
  max-issues-per-linter: 0
  max-same-issues: 0
formatters:
  enable:
    - gci
4 changes: 2 additions & 2 deletions .just/lint.just
@@ -2,7 +2,7 @@
 [group('lint')]
 lint: vet
     @echo "--> Running golangci-lint"
-    @golangci-lint run
+    @golangci-lint run ./...
     @echo "--> Running markdownlint"
     @markdownlint --config .markdownlint.yaml '**/*.md'
     @echo "--> Running hadolint"
@@ -18,7 +18,7 @@ lint: vet
 [group('lint')]
 lint-fix:
     @echo "--> Formatting go"
-    @golangci-lint run --fix
+    @golangci-lint run --fix ./...
     @echo "--> Formatting markdownlint"
     @markdownlint --config .markdownlint.yaml --ignore './changelog.md' '**/*.md' -f
1 change: 1 addition & 0 deletions block/internal/da/async_block_retriever_test.go
@@ -266,5 +266,6 @@ func TestBlockData_SerializationEmpty(t *testing.T) {
 	}

 	assert.Equal(t, uint64(100), decoded.Height)
+	assert.Equal(t, time.Unix(0, 0).UTC(), decoded.Timestamp)
 	assert.Equal(t, 0, len(decoded.Blobs))
 }
8 changes: 4 additions & 4 deletions block/internal/executing/executor.go
@@ -591,7 +591,7 @@ func (e *Executor) ProduceBlock(ctx context.Context) error {
 		LastSubmittedDaHeaderHeight: e.cache.GetLastSubmittedHeaderHeight(),
 		LastSubmittedDaDataHeight:   e.cache.GetLastSubmittedDataHeight(),
 	}
-	if err := e.raftNode.Broadcast(e.ctx, raftState); err != nil {
+	if err := e.raftNode.Broadcast(ctx, raftState); err != nil {
 		return fmt.Errorf("failed to propose block to raft: %w", err)
 	}
 	e.logger.Debug().Uint64("height", newHeight).Msg("proposed block to raft")
@@ -617,12 +617,12 @@ func (e *Executor) ProduceBlock(ctx context.Context) error {
 	// IMPORTANT: Header MUST be broadcast before data — the P2P layer validates
 	// incoming data against the current and previous header, so out-of-order
 	// delivery would cause validation failures on peers.
-	if err := e.headerBroadcaster.WriteToStoreAndBroadcast(e.ctx, &types.P2PSignedHeader{
+	if err := e.headerBroadcaster.WriteToStoreAndBroadcast(ctx, &types.P2PSignedHeader{
 		SignedHeader: header,
 	}); err != nil {
 		e.logger.Error().Err(err).Msg("failed to broadcast header")
 	}
-	if err := e.dataBroadcaster.WriteToStoreAndBroadcast(e.ctx, &types.P2PData{
+	if err := e.dataBroadcaster.WriteToStoreAndBroadcast(ctx, &types.P2PData{
 		Data: data,
 	}); err != nil {
 		e.logger.Error().Err(err).Msg("failed to broadcast data")
@@ -637,7 +637,7 @@ func (e *Executor) ProduceBlock(ctx context.Context) error {
 	// For based sequencer, advance safe/finalized since it comes from DA.
 	if e.config.Node.BasedSequencer {
-		if err := e.exec.SetFinal(e.ctx, newHeight); err != nil {
+		if err := e.exec.SetFinal(ctx, newHeight); err != nil {
 			e.sendCriticalError(fmt.Errorf("failed to set final height in based sequencer mode: %w", err))
 			return fmt.Errorf("failed to set final height in based sequencer mode: %w", err)
 		}
14 changes: 7 additions & 7 deletions block/internal/submitting/da_submitter.go
@@ -90,15 +90,15 @@ func (rs *retryState) Next(reason retryReason, pol retryPolicy) {
 }

 // clamp constrains a duration between min and max bounds
-func clamp(v, min, max time.Duration) time.Duration {
-	if min > max {
-		min, max = max, min
+func clamp(v, minTime, maxTime time.Duration) time.Duration {
+	if minTime > maxTime {
+		minTime, maxTime = maxTime, minTime
 	}
-	if v < min {
-		return min
+	if v < minTime {
+		return minTime
 	}
-	if v > max {
-		return max
+	if v > maxTime {
+		return maxTime
 	}
 	return v
 }
1 change: 0 additions & 1 deletion block/internal/submitting/da_submitter_test.go
@@ -145,7 +145,6 @@ func TestDASubmitter_SubmitHeaders_Success(t *testing.T) {

 	// Create test signer
 	addr, pub, signer := createTestSigner(t)
-	gen.ProposerAddress = addr

 	// Create test headers
 	header1 := &types.SignedHeader{
24 changes: 11 additions & 13 deletions block/internal/syncing/syncer.go
@@ -178,14 +178,14 @@ func (s *Syncer) Start(ctx context.Context) error {

 	s.fiRetriever = da.NewForcedInclusionRetriever(s.daClient, s.logger, s.config, s.genesis.DAStartHeight, s.genesis.DAEpochForcedInclusion)
 	s.p2pHandler = NewP2PHandler(s.headerStore, s.dataStore, s.cache, s.genesis, s.logger)
-	if currentHeight, err := s.store.Height(s.ctx); err != nil {
+	if currentHeight, err := s.store.Height(ctx); err != nil {
 		s.logger.Error().Err(err).Msg("failed to set initial processed height for p2p handler")
 	} else {
 		s.p2pHandler.SetProcessedHeight(currentHeight)
 	}

 	if s.raftRetriever != nil {
-		if err := s.raftRetriever.Start(s.ctx); err != nil {
+		if err := s.raftRetriever.Start(ctx); err != nil {
 			return fmt.Errorf("start raft retriever: %w", err)
 		}
 	}
@@ -198,7 +198,7 @@ func (s *Syncer) Start(ctx context.Context) error {
 	s.wg.Go(s.processLoop)

 	// Start dedicated workers for DA, and pending processing
-	s.startSyncWorkers()
+	s.startSyncWorkers(ctx)

 	s.logger.Info().Msg("syncer started")
 	return nil
@@ -358,11 +358,12 @@ func (s *Syncer) processLoop() {
 	}
 }

-func (s *Syncer) startSyncWorkers() {
+func (s *Syncer) startSyncWorkers(ctx context.Context) {
+	_ = ctx
 	s.wg.Add(3)
 	go s.daWorkerLoop()
 	go s.pendingWorkerLoop()
-	go s.p2pWorkerLoop()
+	go s.p2pWorkerLoop(ctx)
 }

 func (s *Syncer) daWorkerLoop() {
@@ -485,7 +486,7 @@ func (s *Syncer) pendingWorkerLoop() {
 	}
 }

-func (s *Syncer) p2pWorkerLoop() {
+func (s *Syncer) p2pWorkerLoop(ctx context.Context) {
 	defer s.wg.Done()

 	logger := s.logger.With().Str("worker", "p2p").Logger()
@@ -494,12 +495,12 @@ func (s *Syncer) p2pWorkerLoop(ctx context.Context) {
 	for {
 		select {
-		case <-s.ctx.Done():
+		case <-ctx.Done():
 			return
 		default:
 		}

-		currentHeight, err := s.store.Height(s.ctx)
+		currentHeight, err := s.store.Height(ctx)
 		if err != nil {
 			logger.Error().Err(err).Msg("failed to get current height for P2P worker")
 			if !s.sleepOrDone(50 * time.Millisecond) {
@@ -509,7 +510,7 @@ func (s *Syncer) p2pWorkerLoop(ctx context.Context) {
 		}

 		targetHeight := currentHeight + 1
-		waitCtx, cancel := context.WithCancel(s.ctx)
+		waitCtx, cancel := context.WithCancel(ctx)
 		s.setP2PWaitState(targetHeight, cancel)

 		err = s.p2pHandler.ProcessHeight(waitCtx, targetHeight, s.heightInCh)
@@ -879,7 +880,7 @@ func (s *Syncer) ValidateBlock(_ context.Context, currState types.State, data *t
 	// Set custom verifier for aggregator node signature
 	header.SetCustomVerifierForSyncNode(s.options.SyncNodeSignatureBytesProvider)

-	if err := header.ValidateBasicWithData(data); err != nil {
+	if err := header.ValidateBasicWithData(data); err != nil { //nolint:contextcheck // validation API does not accept context
 		return fmt.Errorf("invalid header: %w", err)
 	}

@@ -1226,10 +1227,7 @@ func (s *Syncer) RecoverFromRaft(ctx context.Context, raftState *raft.RaftBlockS
 		s.logger.Debug().Err(err).Msg("no state in store, using genesis defaults for recovery")
 		currentState = types.State{
 			ChainID:         s.genesis.ChainID,
			InitialHeight:   s.genesis.InitialHeight,
			LastBlockHeight: s.genesis.InitialHeight - 1,
			LastBlockTime:   s.genesis.StartTime,
			DAHeight:        s.genesis.DAStartHeight,
 		}
 	}
 }
6 changes: 3 additions & 3 deletions block/internal/syncing/syncer_backoff_test.go
@@ -121,7 +121,7 @@ func TestSyncer_BackoffOnDAError(t *testing.T) {
 	}

 	// Run sync loop
-	syncer.startSyncWorkers()
+	syncer.startSyncWorkers(t.Context())
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find and read the test file
fd -type f -name "syncer_backoff_test.go" -path "*/block/internal/syncing/*"

Repository: evstack/ev-node

Length of output: 229


🏁 Script executed:

# Find the test file
fd "syncer_backoff_test.go"

Repository: evstack/ev-node

Length of output: 104


🏁 Script executed:

# Read the test file to examine context around the mentioned lines
cat -n block/internal/syncing/syncer_backoff_test.go

Repository: evstack/ev-node

Length of output: 14114


Pass the cancelable context to startSyncWorkers, not t.Context().

At lines 124, 226, and 297, cancel() only cancels ctx; the workers receive t.Context(), which is cancelled only at test cleanup, never by cancel(). This decouples the worker lifecycle from the test's cancellation mechanism, making the tests non-deterministic and prone to timeouts or incomplete teardown.

Suggested fix
-				syncer.startSyncWorkers(t.Context())
+				syncer.startSyncWorkers(ctx)
@@
-		syncer.startSyncWorkers(t.Context())
+		syncer.startSyncWorkers(ctx)
@@
-		syncer.startSyncWorkers(t.Context())
+		syncer.startSyncWorkers(ctx)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/syncing/syncer_backoff_test.go` at line 124, The test
currently calls syncer.startSyncWorkers(t.Context()) which uses the immutable
test context; instead pass the cancelable context (ctx) that you created and
cancel with cancel() so worker lifecycles are tied to the test teardown. Update
the three calls to startSyncWorkers to use the cancelable ctx variable (the one
paired with cancel()) rather than t.Context(), ensuring startSyncWorkers(...)
receives ctx so cancel() actually stops the workers.

 	<-ctx.Done()
 	syncer.wg.Wait()

@@ -223,7 +223,7 @@ func TestSyncer_BackoffResetOnSuccess(t *testing.T) {
 	go syncer.processLoop()

 	// Run workers
-	syncer.startSyncWorkers()
+	syncer.startSyncWorkers(t.Context())
 	<-ctx.Done()
 	syncer.wg.Wait()

@@ -294,7 +294,7 @@ func TestSyncer_BackoffBehaviorIntegration(t *testing.T) {
 		Return(nil, datypes.ErrBlobNotFound).Once()

 	go syncer.processLoop()
-	syncer.startSyncWorkers()
+	syncer.startSyncWorkers(t.Context())
 	<-ctx.Done()
 	syncer.wg.Wait()
2 changes: 1 addition & 1 deletion block/internal/syncing/syncer_benchmark_test.go
@@ -44,7 +44,7 @@ func BenchmarkSyncerIO(b *testing.B) {

 	// run both loops
 	go fixt.s.processLoop()
-	fixt.s.startSyncWorkers()
+	fixt.s.startSyncWorkers(b.Context())

 	require.Eventually(b, func() bool {
 		processedHeight, _ := fixt.s.store.Height(b.Context())
4 changes: 2 additions & 2 deletions block/internal/syncing/syncer_test.go
@@ -417,7 +417,7 @@ func TestSyncLoopPersistState(t *testing.T) {
 		Return(nil, datypes.ErrHeightFromFuture)

 	go syncerInst1.processLoop()
-	syncerInst1.startSyncWorkers()
+	syncerInst1.startSyncWorkers(t.Context())
 	syncerInst1.wg.Wait()
 	requireEmptyChan(t, errorCh)

@@ -480,7 +480,7 @@
 	// when it starts, it should fetch from the last height it stopped at
 	t.Log("sync workers on instance2 started")
-	syncerInst2.startSyncWorkers()
+	syncerInst2.startSyncWorkers(t.Context())
 	syncerInst2.wg.Wait()

 	t.Log("sync workers exited")
6 changes: 3 additions & 3 deletions node/failover.go
@@ -182,7 +182,7 @@ func setupFailoverState(
 func (f *failoverState) Run(pCtx context.Context) (multiErr error) {
 	stopService := func(stoppable func(context.Context) error, name string) {
 		// parent context is cancelled already, so we need to create a new one
-		shutdownCtx, done := context.WithTimeout(context.Background(), 3*time.Second)
+		shutdownCtx, done := context.WithTimeout(context.WithoutCancel(pCtx), 3*time.Second)
 		defer done()

 		if err := stoppable(shutdownCtx); err != nil && !errors.Is(err, context.Canceled) {
@@ -192,7 +192,7 @@ func (f *failoverState) Run(pCtx context.Context) (multiErr error) {
 	cCtx, cancel := context.WithCancel(pCtx)
 	defer cancel()
 	wg, ctx := errgroup.WithContext(cCtx)
-	wg.Go(func() (rerr error) {
+	wg.Go(func() (rerr error) { //nolint:contextcheck // block components stop API does not accept context
 		defer func() {
 			if err := f.bc.Stop(); err != nil && !errors.Is(err, context.Canceled) {
 				rerr = errors.Join(rerr, fmt.Errorf("stopping block components: %w", err))
@@ -234,7 +234,7 @@ func (f *failoverState) Run(pCtx context.Context) (multiErr error) {

 	wg.Go(func() error {
 		defer func() {
-			shutdownCtx, done := context.WithTimeout(context.Background(), 3*time.Second)
+			shutdownCtx, done := context.WithTimeout(context.WithoutCancel(ctx), 3*time.Second)
 			defer done()
 			_ = f.rpcServer.Shutdown(shutdownCtx)
 		}()
2 changes: 1 addition & 1 deletion node/full.go
@@ -307,7 +307,7 @@ func (n *FullNode) Run(parentCtx context.Context) error {
 	n.Logger.Info().Msg("halting full node and its sub services...")

 	// Use a timeout context to ensure shutdown doesn't hang
-	shutdownCtx, cancel := context.WithTimeout(context.Background(), 9*time.Second)
+	shutdownCtx, cancel := context.WithTimeout(context.WithoutCancel(ctx), 9*time.Second)
 	defer cancel()

 	var shutdownMultiErr error // Variable to accumulate multiple errors