
chore: v1 branch #1656

Draft

SwenSchaeferjohann wants to merge 504 commits into v1-c8c0ea2e6 from main

Conversation

@SwenSchaeferjohann
Contributor

No description provided.

@github-advanced-security
Contributor

This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.

@coderabbitai
Contributor

coderabbitai bot commented Jun 11, 2025

Important

Review skipped

Auto reviews are limited based on label configuration.

🏷️ Required labels (at least one)
  • ai-review

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 11efecbd-9dbc-46fa-b3aa-862e58302f95

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Comment on lines +32 to +100
    name: system-programs
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-latest
    timeout-minutes: 60

    services:
      redis:
        image: redis:8.0.1
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    env:
      REDIS_URL: redis://localhost:6379

    strategy:
      matrix:
        include:
          - program: sdk-test-program
            sub-tests: '["cargo-test-sbf -p sdk-native-test"]'
          - program: sdk-anchor-test-program
            sub-tests: '["cargo-test-sbf -p sdk-anchor-test", "cargo-test-sbf -p sdk-pinocchio-test"]'
          - program: sdk-libs
            packages: light-macros light-sdk light-program-test light-client light-batched-merkle-tree
            test_cmd: |
              cargo test -p light-macros
              cargo test -p light-sdk
              cargo test -p light-program-test
              cargo test -p light-client
              cargo test -p client-test
              cargo test -p light-sparse-merkle-tree
              cargo test -p light-batched-merkle-tree --features test-only -- --skip test_simulate_transactions --skip test_e2e

    steps:
      - name: Checkout sources
        uses: actions/checkout@v4

      - name: Setup and build
        uses: ./.github/actions/setup-and-build
        with:
          skip-components: "redis"

      - name: build-programs
        run: |
          source ./scripts/devenv.sh
          npx nx build @lightprotocol/programs

      - name: Run sub-tests for ${{ matrix.program }}
        if: matrix.sub-tests != null
        run: |
          source ./scripts/devenv.sh
          npx nx build @lightprotocol/zk-compression-cli

          IFS=',' read -r -a sub_tests <<< "${{ join(fromJSON(matrix.sub-tests), ', ') }}"
          for subtest in "${sub_tests[@]}"
          do
            echo "$subtest"
            eval "RUSTFLAGS=\"-D warnings\" $subtest"
          done

      - name: Run tests for ${{ matrix.program }}
        if: matrix.test_cmd != null
        run: |
          source ./scripts/devenv.sh
          npx nx build @lightprotocol/zk-compression-cli
          ${{ matrix.test_cmd }}

Check warning

Code scanning / CodeQL

Workflow does not contain permissions (Medium)

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {contents: read}

Copilot Autofix

AI 8 days ago

In general, to fix this issue you add an explicit permissions: block either at the workflow root (so it applies to all jobs) or at the individual job level, and set it to the minimum required (often contents: read if the job only needs to check out code and run tests). This constrains the GITHUB_TOKEN so it cannot perform unintended write operations.

For this specific workflow, the job only checks out the repository and runs tests and package-manager operations; none of the steps need to write to GitHub resources. The simplest least-privilege fix is therefore to add a top-level permissions: block just after the name: section, setting contents: read. This will apply to the system-programs job and any other jobs in this workflow (none are shown, but this is safe). No changes to steps, actions versions, or additional imports are needed.

Concretely:

  • Edit .github/workflows/sdk-tests.yml.

  • After the name: examples-tests line, insert:

    permissions:
      contents: read

This explicitly limits GITHUB_TOKEN to read-only repository contents while preserving existing functionality.
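If a narrower scope is preferred, the same restriction can instead be declared at the job level, as the general advice above notes. A minimal sketch of that alternative, reusing the job name from the workflow shown earlier in this thread:

```yaml
jobs:
  system-programs:
    # Job-level alternative to the top-level permissions block suggested above.
    permissions:
      contents: read
    runs-on: ubuntu-latest
```

Either placement constrains GITHUB_TOKEN to read-only repository contents; the top-level form covers any jobs added later, while the job-level form leaves other jobs untouched.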

Suggested changeset 1
.github/workflows/sdk-tests.yml

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/sdk-tests.yml b/.github/workflows/sdk-tests.yml
--- a/.github/workflows/sdk-tests.yml
+++ b/.github/workflows/sdk-tests.yml
@@ -19,6 +19,9 @@
 
 name: examples-tests
 
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
EOF
Copilot is powered by AI and may make mistakes. Always verify output.
authorityTokenAccount,
toPubkey,
amount,
outputStateTreeInfo,

Check failure

Code scanning / CodeQL

Insecure randomness

This uses a cryptographically insecure random number generated at Math.random() in a security context.

Copilot Autofix

AI 3 months ago

In general, to fix insecure randomness you replace Math.random() with a cryptographically secure pseudo-random generator (crypto.randomBytes in Node, crypto.getRandomValues in the browser), and then convert the random bytes to the range you need without introducing bias.

Here, the only insecure use is in selectStateTreeInfo in js/stateless.js/src/utils/get-state-tree-infos.ts, where const index = Math.floor(Math.random() * length); randomly selects an integer index in [0, length). The safest minimal-change fix is:

  • Import Node’s crypto module.
  • Implement a small helper that draws an unbiased random integer in [0, maxExclusive) using crypto.randomBytes (rejection sampling).
  • Use that helper instead of Math.random() to compute index.

This keeps the external API and behavior of selectStateTreeInfo the same (still random among eligible trees, with the same length limiting), but uses a CSPRNG. No changes are required in js/compressed-token/src/actions/approve-and-mint-to.ts or js/compressed-token/src/program.ts, because they simply consume the TreeInfo produced by selectStateTreeInfo.

Concretely:

  • Edit js/stateless.js/src/utils/get-state-tree-infos.ts:
    • Add import crypto from 'crypto'; (or import * as crypto from 'crypto'; depending on the surrounding style).
    • Add a getSecureRandomInt(maxExclusive: number): number helper above selectStateTreeInfo.
    • Replace const index = Math.floor(Math.random() * length); with const index = getSecureRandomInt(length);.
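As an illustration of the rejection-sampling approach described above, here is a self-contained sketch. The helper name mirrors the suggested patch, but this is a standalone illustration rather than the repository's code:

```typescript
import { randomBytes } from 'crypto';

// Unbiased secure random integer in [0, maxExclusive), via rejection sampling.
// Standalone sketch of the helper the autofix proposes.
function getSecureRandomInt(maxExclusive: number): number {
    if (!Number.isInteger(maxExclusive) || maxExclusive <= 0) {
        throw new Error('maxExclusive must be a positive integer');
    }
    const UINT32_MAX = 0xffffffff;
    // Reject draws at or above the largest multiple of maxExclusive that fits
    // in 32 bits, so every residue class is equally likely.
    const limit = UINT32_MAX - (UINT32_MAX % maxExclusive);
    for (;;) {
        const value = randomBytes(4).readUInt32BE(0); // uniform in [0, 2^32)
        if (value < limit) {
            return value % maxExclusive;
        }
    }
}

// Every draw lands in range; maxExclusive === 1 always yields 0.
console.log(getSecureRandomInt(8) < 8); // true
```

The expected number of loop iterations is just over one, since at most `maxExclusive - 1` of the 2^32 possible draws are rejected per round.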

Suggested changeset 1
js/stateless.js/src/utils/get-state-tree-infos.ts
Outside changed files

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/js/stateless.js/src/utils/get-state-tree-infos.ts b/js/stateless.js/src/utils/get-state-tree-infos.ts
--- a/js/stateless.js/src/utils/get-state-tree-infos.ts
+++ b/js/stateless.js/src/utils/get-state-tree-infos.ts
@@ -2,8 +2,38 @@
 import { TreeInfo, TreeType } from '../state/types';
 import { MerkleContext } from '../state/compressed-account';
 import { featureFlags, StateTreeLUTPair } from '../constants';
+import crypto from 'crypto';
 
 /**
+ * Generate a cryptographically secure random integer in the range [0, maxExclusive).
+ *
+ * Uses rejection sampling to avoid modulo bias.
+ */
+function getSecureRandomInt(maxExclusive: number): number {
+    if (maxExclusive <= 0 || !Number.isInteger(maxExclusive)) {
+        throw new Error('maxExclusive must be a positive integer');
+    }
+
+    // Use up to 32-bit unsigned integers for simplicity.
+    const UINT32_MAX = 0xffffffff;
+    const limit = UINT32_MAX - (UINT32_MAX % maxExclusive);
+
+    while (true) {
+        const buf = crypto.randomBytes(4);
+        const rnd =
+            (buf[0] << 24) |
+            (buf[1] << 16) |
+            (buf[2] << 8) |
+            buf[3];
+        const value = rnd >>> 0; // convert to unsigned 32-bit
+
+        if (value < limit) {
+            return value % maxExclusive;
+        }
+    }
+}
+
+/**
  * Get the currently active output queue from a merkle context.
  *
  * @param merkleContext The merkle context to get the output queue from
@@ -124,7 +152,7 @@
     const length = useMaxConcurrency
         ? filteredInfos.length
         : Math.min(MAX_HOTSPOTS, filteredInfos.length);
-    const index = Math.floor(Math.random() * length);
+    const index = getSecureRandomInt(length);
 
     if (!filteredInfos[index].queue) {
         throw new Error('Queue must not be null for state tree');
EOF
tokenAccount,
mint,
remainingAmount,
outputStateTreeInfo,

Check failure

Code scanning / CodeQL

Insecure randomness

This uses a cryptographically insecure random number generated at Math.random() in a security context.

Copilot Autofix

AI 3 months ago

General fix: Replace Math.random() in selectStateTreeInfo with a cryptographically secure source of randomness, using either Node’s crypto.randomBytes or Web Crypto’s crypto.getRandomValues, and then map the secure random bytes to a uniform integer in [0, length).

Best fix here: Implement a small helper inside selectStateTreeInfo that derives a secure random integer modulo length using Web Crypto when available and Node’s crypto otherwise. This avoids changing the public API, keeps behavior (pseudo‑random selection) the same, and uses well‑known libraries only. We don’t need to touch js/compressed-token/src/program.ts; the taint arises entirely from how outputStateTreeInfo is chosen in selectStateTreeInfo. We only edit js/stateless.js/src/utils/get-state-tree-infos.ts, in the region around line 127 where Math.random() is called. We’ll add an import for crypto at the top and change the index calculation to:

  • Generate a 32‑bit unsigned integer from secure random bytes.
  • Reduce it modulo length to get an index in the correct range.

This removes the insecure RNG; the residual modulo bias from reducing a 32-bit value modulo length is negligible for realistic list sizes.

Concretely:

  • In js/stateless.js/src/utils/get-state-tree-infos.ts, add import * as crypto from 'crypto';.
  • Replace const index = Math.floor(Math.random() * length); with a new block that:
    • Uses window.crypto.getRandomValues in browsers (if available).
    • Falls back to crypto.randomBytes in Node to fill a 4‑byte buffer.
    • Interprets the 4 bytes as a 32‑bit unsigned integer and takes mod length.

No other files need changes.


Suggested changeset 1
js/stateless.js/src/utils/get-state-tree-infos.ts
Outside changed files

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/js/stateless.js/src/utils/get-state-tree-infos.ts b/js/stateless.js/src/utils/get-state-tree-infos.ts
--- a/js/stateless.js/src/utils/get-state-tree-infos.ts
+++ b/js/stateless.js/src/utils/get-state-tree-infos.ts
@@ -2,6 +2,7 @@
 import { TreeInfo, TreeType } from '../state/types';
 import { MerkleContext } from '../state/compressed-account';
 import { featureFlags, StateTreeLUTPair } from '../constants';
+import * as crypto from 'crypto';
 
 /**
  * Get the currently active output queue from a merkle context.
@@ -124,8 +125,25 @@
     const length = useMaxConcurrency
         ? filteredInfos.length
         : Math.min(MAX_HOTSPOTS, filteredInfos.length);
-    const index = Math.floor(Math.random() * length);
 
+    // Use a cryptographically secure random index in the range [0, length).
+    let randomUint32: number;
+    if (typeof globalThis !== 'undefined' && (globalThis as any).crypto && typeof (globalThis as any).crypto.getRandomValues === 'function') {
+        // Browser / Web Crypto path
+        const arr = new Uint32Array(1);
+        (globalThis as any).crypto.getRandomValues(arr);
+        randomUint32 = arr[0];
+    } else {
+        // Node.js path using crypto.randomBytes
+        const buf = crypto.randomBytes(4);
+        randomUint32 =
+            ((buf[0] << 24) |
+             (buf[1] << 16) |
+             (buf[2] << 8) |
+             buf[3]) >>> 0;
+    }
+    const index = randomUint32 % length;
+
     if (!filteredInfos[index].queue) {
         throw new Error('Queue must not be null for state tree');
     }
EOF
@@ -42,7 +45,7 @@
} {
const {
inputCompressedTokenAccounts,
outputStateTrees,
outputStateTreeInfo,

Check failure

Code scanning / CodeQL

Insecure randomness

This uses a cryptographically insecure random number generated at Math.random() in a security context.

Copilot Autofix

AI 3 months ago

In general, to fix insecure randomness in Node.js/TypeScript, replace Math.random() with a cryptographically secure source such as crypto.randomInt or crypto.randomBytes, taking care to avoid modulo bias when constraining the range.

In this codebase, the only insecure randomness is in selectStateTreeInfo in js/stateless.js/src/utils/get-state-tree-infos.ts where it computes const index = Math.floor(Math.random() * length);. The best, minimal‑impact fix is to compute index using crypto.randomInt(0, length) when running in Node.js. This keeps the distribution uniform over [0, length) with no bias and preserves all existing behavior except that the randomness becomes cryptographically strong. To do this, we need to add an import of Node’s built‑in crypto module and replace the Math.random() line with a crypto.randomInt call. No changes are required in compress.ts, program.ts, or pack-compressed-token-accounts.ts, since they only propagate the selected TreeInfo.

However, per your constraints I may only edit code inside the shown snippets of:

  • js/compressed-token/src/utils/pack-compressed-token-accounts.ts
  • js/stateless.js/src/utils/get-state-tree-infos.ts
  • js/compressed-token/src/actions/compress.ts
  • js/compressed-token/src/program.ts

The problematic Math.random() is in js/stateless.js/src/utils/get-state-tree-infos.ts, which is one of the allowed files and has been shown, so we will:

  1. Add import { randomInt } from 'crypto'; (Node built‑in, no new dependency).
  2. Replace const index = Math.floor(Math.random() * length); with const index = randomInt(length);.

No changes are needed in the compressed‑token files (compress.ts, program.ts, pack-compressed-token-accounts.ts) because they simply accept and forward the TreeInfo.
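The crypto.randomInt variant described above can be seen in a minimal, self-contained form. The array below is a hypothetical stand-in for filteredInfos, not the repository's data structure:

```typescript
import { randomInt } from 'crypto';

// crypto.randomInt(max) draws a uniform integer in [0, max) from a CSPRNG
// and handles bias internally, so no manual rejection sampling is needed.
const filteredInfos = ['tree-a', 'tree-b', 'tree-c']; // hypothetical stand-in
const index = randomInt(filteredInfos.length);
console.log(index >= 0 && index < filteredInfos.length); // true
```

Note that randomInt requires its bound to be a safe integer below 2^48, which any array length satisfies.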


Suggested changeset 1
js/stateless.js/src/utils/get-state-tree-infos.ts
Outside changed files

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/js/stateless.js/src/utils/get-state-tree-infos.ts b/js/stateless.js/src/utils/get-state-tree-infos.ts
--- a/js/stateless.js/src/utils/get-state-tree-infos.ts
+++ b/js/stateless.js/src/utils/get-state-tree-infos.ts
@@ -2,6 +2,7 @@
 import { TreeInfo, TreeType } from '../state/types';
 import { MerkleContext } from '../state/compressed-account';
 import { featureFlags, StateTreeLUTPair } from '../constants';
+import { randomInt } from 'crypto';
 
 /**
  * Get the currently active output queue from a merkle context.
@@ -124,7 +125,7 @@
     const length = useMaxConcurrency
         ? filteredInfos.length
         : Math.min(MAX_HOTSPOTS, filteredInfos.length);
-    const index = Math.floor(Math.random() * length);
+    const index = randomInt(length);
 
     if (!filteredInfos[index].queue) {
         throw new Error('Queue must not be null for state tree');
EOF
*/
static async createAccount({
payer,
newAddressParams,
newAddress,
recentValidityProof,
outputStateTree,
outputStateTreeInfo,

Check failure

Code scanning / CodeQL

Insecure randomness

This uses a cryptographically insecure random number generated at Math.random() in a security context.

Copilot Autofix

AI 3 months ago

In general, to fix insecure randomness in Node/TypeScript code, replace Math.random() with a cryptographically secure RNG from the standard crypto module, and implement unbiased mapping from random bytes to the desired range (for indices, use rejection sampling to avoid modulo bias).

For this specific issue, the only insecure spot is selectStateTreeInfo in js/stateless.js/src/utils/get-state-tree-infos.ts at line 127. The best fix is:

  • Import Node’s crypto module at the top of the file.
  • Add a small helper function, e.g. getSecureRandomInt(max: number): number, that:
    • Uses crypto.randomBytes(4) to obtain a 32‑bit unsigned integer.
    • Uses rejection sampling so that the final integer is uniformly distributed in [0, max).
  • Replace const index = Math.floor(Math.random() * length); with const index = getSecureRandomInt(length);.

This keeps behavior (pseudo‑randomly choosing a tree among the first length entries) while making the randomness cryptographically secure and avoiding bias. No changes are needed in create-account.ts or program.ts, since they just consume the TreeInfo returned by selectStateTreeInfo.

Suggested changeset 1
js/stateless.js/src/utils/get-state-tree-infos.ts
Outside changed files

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/js/stateless.js/src/utils/get-state-tree-infos.ts b/js/stateless.js/src/utils/get-state-tree-infos.ts
--- a/js/stateless.js/src/utils/get-state-tree-infos.ts
+++ b/js/stateless.js/src/utils/get-state-tree-infos.ts
@@ -2,8 +2,37 @@
 import { TreeInfo, TreeType } from '../state/types';
 import { MerkleContext } from '../state/compressed-account';
 import { featureFlags, StateTreeLUTPair } from '../constants';
+import crypto from 'crypto';
 
 /**
+ * Generate a cryptographically secure random integer in the range [0, max).
+ *
+ * Uses rejection sampling to avoid modulo bias.
+ */
+function getSecureRandomInt(max: number): number {
+    if (max <= 0) {
+        throw new Error('max must be a positive integer');
+    }
+    const maxUint32 = 0xffffffff;
+    const limit = maxUint32 - (maxUint32 % max);
+
+    // Rejection sampling loop; expected iterations is close to 1.
+    while (true) {
+        const randomBytes = crypto.randomBytes(4);
+        const rand =
+            (randomBytes[0] << 24) |
+            (randomBytes[1] << 16) |
+            (randomBytes[2] << 8) |
+            randomBytes[3];
+        const randUint32 = rand >>> 0;
+
+        if (randUint32 < limit) {
+            return randUint32 % max;
+        }
+    }
+}
+
+/**
  * Get the currently active output queue from a merkle context.
  *
  * @param merkleContext The merkle context to get the output queue from
@@ -124,7 +151,7 @@
     const length = useMaxConcurrency
         ? filteredInfos.length
         : Math.min(MAX_HOTSPOTS, filteredInfos.length);
-    const index = Math.floor(Math.random() * length);
+    const index = getSecureRandomInt(length);
 
     if (!filteredInfos[index].queue) {
         throw new Error('Queue must not be null for state tree');
EOF
NewRoot: frontend.Variable(params.NewRoot),
HashchainHash: frontend.Variable(params.HashchainHash),
StartIndex: frontend.Variable(params.StartIndex),
LowElementValues: make([]frontend.Variable, params.BatchSize),

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a user-provided value.
HashchainHash: frontend.Variable(params.HashchainHash),
StartIndex: frontend.Variable(params.StartIndex),
LowElementValues: make([]frontend.Variable, params.BatchSize),
LowElementNextValues: make([]frontend.Variable, params.BatchSize),

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a user-provided value.

Copilot Autofix

AI 6 months ago

The problem should be fixed by enforcing upper bounds (MaxBatchSize and MaxTreeHeight) on the values of params.BatchSize and params.TreeHeight wherever they are used to control slice allocation. These maximums should be chosen according to operational needs and capacity, but even a modestly large hard limit is better than none from a security perspective.

The most robust fix is to add a check in CreateWitness (and preferably also in the handler and/or in ValidateShape() if it is defined) to ensure that params.BatchSize and params.TreeHeight are within a safe (predefined) maximum range. If not, return an error and do not perform any allocations. The check should occur before any allocations.

You only need to change code in prover/server/prover/v2/batch_address_append_circuit.go since that's where the dangerous allocation is performed.

Required changes:

  • Add const MaxBatchSize = ... and const MaxTreeHeight = ... at the top of the file (just below the imports) or in appropriate scope.
  • In CreateWitness(), after checking for zero, check that params.BatchSize and params.TreeHeight do not exceed these maximums.
  • If either exceed the maximum, return an error.
  • Optionally, you may want to ensure non-negative values, but these are already checked by if params.BatchSize == 0 etc.

Suggested changeset 1
prover/server/prover/v2/batch_address_append_circuit.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/prover/server/prover/v2/batch_address_append_circuit.go b/prover/server/prover/v2/batch_address_append_circuit.go
--- a/prover/server/prover/v2/batch_address_append_circuit.go
+++ b/prover/server/prover/v2/batch_address_append_circuit.go
@@ -17,6 +17,10 @@
 	"github.com/reilabs/gnark-lean-extractor/v3/abstractor"
 )
 
+// Set reasonable upper limits for batch size and tree height to prevent excessive allocations
+const MaxBatchSize = 1024    // Adjust as appropriate
+const MaxTreeHeight = 64     // Adjust as appropriate
+
 type BatchAddressTreeAppendCircuit struct {
 	PublicInputHash frontend.Variable `gnark:",public"`
 
@@ -144,6 +148,12 @@
 	if params.TreeHeight == 0 {
 		return nil, fmt.Errorf("tree height cannot be 0")
 	}
+	if params.BatchSize > MaxBatchSize {
+		return nil, fmt.Errorf("batch size exceeds maximum allowed value (%d)", MaxBatchSize)
+	}
+	if params.TreeHeight > MaxTreeHeight {
+		return nil, fmt.Errorf("tree height exceeds maximum allowed value (%d)", MaxTreeHeight)
+	}
 
 	circuit := &BatchAddressTreeAppendCircuit{
 		BatchSize:            params.BatchSize,
EOF
StartIndex: frontend.Variable(params.StartIndex),
LowElementValues: make([]frontend.Variable, params.BatchSize),
LowElementNextValues: make([]frontend.Variable, params.BatchSize),
LowElementIndices: make([]frontend.Variable, params.BatchSize),

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a user-provided value.

Copilot Autofix

AI 6 months ago

To fix this issue, enforce a reasonable upper bound for BatchSize (and ideally also TreeHeight) before performing any slice allocations based on user input. The upper bound(s) should be set as per application requirements and resource constraints (const MaxBatchSize = ..., const MaxTreeHeight = ...). The best location for these checks is at the start of CreateWitness() in batch_address_append_circuit.go, where allocations occur, and possibly also in ProveBatchAddressAppend for belt-and-braces defense. If the inputs exceed these limits, return an error before any allocation. This limits memory use so an attacker cannot force OOM via large slice allocations.

Changes needed:

  • In prover/server/prover/v2/batch_address_append_circuit.go, define constants for MaxBatchSize and MaxTreeHeight.
  • In the CreateWitness method, add checks ensuring BatchSize and TreeHeight are within allowed limits before performing any allocations.
  • Return an error if the limits are exceeded.
  • Optionally, add the same checks at the start of InitBatchAddressTreeAppendCircuit and/or ProveBatchAddressAppend for defense-in-depth, but the primary fix is in CreateWitness().

Suggested changeset 1
prover/server/prover/v2/batch_address_append_circuit.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/prover/server/prover/v2/batch_address_append_circuit.go b/prover/server/prover/v2/batch_address_append_circuit.go
--- a/prover/server/prover/v2/batch_address_append_circuit.go
+++ b/prover/server/prover/v2/batch_address_append_circuit.go
@@ -17,6 +17,10 @@
 	"github.com/reilabs/gnark-lean-extractor/v3/abstractor"
 )
 
+// Defensive upper bounds against untrusted input:
+const MaxBatchSize = 256
+const MaxTreeHeight = 64
+
 type BatchAddressTreeAppendCircuit struct {
 	PublicInputHash frontend.Variable `gnark:",public"`
 
@@ -141,9 +145,15 @@
 	if params.BatchSize == 0 {
 		return nil, fmt.Errorf("batch size cannot be 0")
 	}
+	if params.BatchSize > MaxBatchSize {
+		return nil, fmt.Errorf("batch size too large (max: %d)", MaxBatchSize)
+	}
 	if params.TreeHeight == 0 {
 		return nil, fmt.Errorf("tree height cannot be 0")
 	}
+	if params.TreeHeight > MaxTreeHeight {
+		return nil, fmt.Errorf("tree height too large (max: %d)", MaxTreeHeight)
+	}
 
 	circuit := &BatchAddressTreeAppendCircuit{
 		BatchSize:            params.BatchSize,
EOF
LowElementValues: make([]frontend.Variable, params.BatchSize),
LowElementNextValues: make([]frontend.Variable, params.BatchSize),
LowElementIndices: make([]frontend.Variable, params.BatchSize),
NewElementValues: make([]frontend.Variable, params.BatchSize),

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a [user-provided value](1).

Copilot Autofix

AI 6 months ago

The best way to fix the problem is to enforce maximum limits on both params.BatchSize and params.TreeHeight before any slices are allocated using these values. The limit values should be selected so that reasonable but resource-safe workloads are allowed, and enforced consistently anywhere untrusted input may be used to allocate memory. To implement, add sanity checks to the CreateWitness() method for BatchAddressAppendParameters (and possibly other locations where allocation occurs), rejecting values above a fixed constant such as MaxBatchSize and MaxTreeHeight, and returning an error if outside bounds. These constants should be defined at the top of the file or alongside similar configuration, so they can be changed centrally. Only code in prover/server/prover/v2/batch_address_append_circuit.go needs editing, specifically within the CreateWitness function.

Suggested changeset 1
prover/server/prover/v2/batch_address_append_circuit.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/prover/server/prover/v2/batch_address_append_circuit.go b/prover/server/prover/v2/batch_address_append_circuit.go
--- a/prover/server/prover/v2/batch_address_append_circuit.go
+++ b/prover/server/prover/v2/batch_address_append_circuit.go
@@ -17,6 +17,10 @@
 	"github.com/reilabs/gnark-lean-extractor/v3/abstractor"
 )
 
+// Set sensible upper bounds for batch size and tree height.
+const MaxBatchSize = 1024
+const MaxTreeHeight = 64
+
 type BatchAddressTreeAppendCircuit struct {
 	PublicInputHash frontend.Variable `gnark:",public"`
 
@@ -144,6 +148,12 @@
 	if params.TreeHeight == 0 {
 		return nil, fmt.Errorf("tree height cannot be 0")
 	}
+	if params.BatchSize > MaxBatchSize {
+		return nil, fmt.Errorf("batch size too large (max %d allowed)", MaxBatchSize)
+	}
+	if params.TreeHeight > MaxTreeHeight {
+		return nil, fmt.Errorf("tree height too large (max %d allowed)", MaxTreeHeight)
+	}
 
 	circuit := &BatchAddressTreeAppendCircuit{
 		BatchSize:            params.BatchSize,
EOF
LowElementNextValues: make([]frontend.Variable, params.BatchSize),
LowElementIndices: make([]frontend.Variable, params.BatchSize),
NewElementValues: make([]frontend.Variable, params.BatchSize),
LowElementProofs: make([][]frontend.Variable, params.BatchSize),

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a [user-provided value](1).

Copilot Autofix

AI 6 months ago

To fix this vulnerability, we must impose a reasonable upper bound (maximum allowed value) on the BatchSize and TreeHeight parameters received from untrusted input before using them to allocate memory. This should be enforced in the CreateWitness method in prover/server/prover/v2/batch_address_append_circuit.go, where allocations are made, and ideally also in higher-level logic.

  • Define (or use an existing) reasonable maximum constants for BatchSize and TreeHeight (e.g., MaxBatchSize, MaxTreeHeight).
  • In CreateWitness, add checks:
    • If params.BatchSize > MaxBatchSize, return an error (a lower-bound check is unnecessary since the field is unsigned).
    • Similarly, check params.TreeHeight > MaxTreeHeight.
  • Return an error to the caller if values are out of bounds, preventing allocation.
  • The maximums should be chosen based on practical and business constraints, and declared as constants at the top of the file.

No new imports or external packages are required for this fix.


Suggested changeset 1
prover/server/prover/v2/batch_address_append_circuit.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/prover/server/prover/v2/batch_address_append_circuit.go b/prover/server/prover/v2/batch_address_append_circuit.go
--- a/prover/server/prover/v2/batch_address_append_circuit.go
+++ b/prover/server/prover/v2/batch_address_append_circuit.go
@@ -17,6 +17,11 @@
 	"github.com/reilabs/gnark-lean-extractor/v3/abstractor"
 )
 
+const (
+	MaxBatchSize  = 64      // choose practical values based on use-case and server resources
+	MaxTreeHeight = 32
+)
+
 type BatchAddressTreeAppendCircuit struct {
 	PublicInputHash frontend.Variable `gnark:",public"`
 
@@ -141,9 +146,15 @@
 	if params.BatchSize == 0 {
 		return nil, fmt.Errorf("batch size cannot be 0")
 	}
+	if params.BatchSize > MaxBatchSize {
+		return nil, fmt.Errorf("batch size exceeds maximum allowed (%d)", MaxBatchSize)
+	}
 	if params.TreeHeight == 0 {
 		return nil, fmt.Errorf("tree height cannot be 0")
 	}
+	if params.TreeHeight > MaxTreeHeight {
+		return nil, fmt.Errorf("tree height exceeds maximum allowed (%d)", MaxTreeHeight)
+	}
 
 	circuit := &BatchAddressTreeAppendCircuit{
 		BatchSize:            params.BatchSize,
EOF
LowElementIndices: make([]frontend.Variable, params.BatchSize),
NewElementValues: make([]frontend.Variable, params.BatchSize),
LowElementProofs: make([][]frontend.Variable, params.BatchSize),
NewElementProofs: make([][]frontend.Variable, params.BatchSize),

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a [user-provided value](1).

Copilot Autofix

AI 6 months ago

The best way to address this vulnerability is to validate params.BatchSize (and, for completeness, possibly also params.TreeHeight) in the key entry-point for user-controlled values. This means:

  • Adding a constant upper bound (such as MaxBatchSize) and returning an error if the requested value exceeds it. (For example 128 or 256; the value should be chosen based on expected system capabilities and use cases.)
  • The check should be performed as soon as possible after unmarshalling user input but before any slice allocations.
  • For code clarity and maintainability, the check should be implemented in the CreateWitness method in prover/server/prover/v2/batch_address_append_circuit.go, and in the API handler method in prover/server/server/server.go if relevant.
  • No changes to the logic should be made except to prevent allocation for bad requests; return a descriptive error in case of exceeding limits.

In summary:

  • In prover/server/prover/v2/batch_address_append_circuit.go, add reasonable maximum allowed value checks to BatchSize (and optionally TreeHeight) in CreateWitness.
  • Reject requests exceeding these bounds with a clear error.

Suggested changeset 1
prover/server/prover/v2/batch_address_append_circuit.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/prover/server/prover/v2/batch_address_append_circuit.go b/prover/server/prover/v2/batch_address_append_circuit.go
--- a/prover/server/prover/v2/batch_address_append_circuit.go
+++ b/prover/server/prover/v2/batch_address_append_circuit.go
@@ -17,6 +17,8 @@
 	"github.com/reilabs/gnark-lean-extractor/v3/abstractor"
 )
 
+const MaxBatchSize = 128 // adjust as appropriate for your system capacity
+
 type BatchAddressTreeAppendCircuit struct {
 	PublicInputHash frontend.Variable `gnark:",public"`
 
@@ -144,6 +146,9 @@
 	if params.TreeHeight == 0 {
 		return nil, fmt.Errorf("tree height cannot be 0")
 	}
+	if params.BatchSize > MaxBatchSize {
+		return nil, fmt.Errorf("batch size cannot exceed %d", MaxBatchSize)
+	}
 
 	circuit := &BatchAddressTreeAppendCircuit{
 		BatchSize:            params.BatchSize,
EOF
}

for i := uint32(0); i < params.BatchSize; i++ {
circuit.LowElementProofs[i] = make([]frontend.Variable, params.TreeHeight)

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a [user-provided value](1).

Copilot Autofix

AI 6 months ago

To prevent excessively large memory allocations from user-controlled values, the best way is to enforce upper bounds for BatchSize and TreeHeight before using them to allocate slices. These bounds should be chosen based on system constraints and expected usage patterns (e.g., maximum batch size and tree height that the system can process comfortably).

Implement these checks inside (*BatchAddressAppendParameters).CreateWitness(), rejecting requests with parameters exceeding limits. To ensure all entry points are guarded, also validate before use in all functions that process user input parameters, such as in the server handler.
Required changes:

  • Add constants (e.g., MaxBatchSize, MaxTreeHeight) at the top of the relevant file (near the CreateWitness method).
  • In CreateWitness, check if params.BatchSize and params.TreeHeight are above the maximum; if so, return an error.
  • Optionally, add similar checks in the handler or at the earliest point possible for fail-fast protection.

Suggested changeset 1
prover/server/prover/v2/batch_address_append_circuit.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/prover/server/prover/v2/batch_address_append_circuit.go b/prover/server/prover/v2/batch_address_append_circuit.go
--- a/prover/server/prover/v2/batch_address_append_circuit.go
+++ b/prover/server/prover/v2/batch_address_append_circuit.go
@@ -17,6 +17,12 @@
 	"github.com/reilabs/gnark-lean-extractor/v3/abstractor"
 )
 
+
+const (
+	MaxBatchSize  = 64   // Adjust to appropriate system limits
+	MaxTreeHeight = 64   // Adjust to appropriate system limits
+)
+
 type BatchAddressTreeAppendCircuit struct {
 	PublicInputHash frontend.Variable `gnark:",public"`
 
@@ -144,6 +150,12 @@
 	if params.TreeHeight == 0 {
 		return nil, fmt.Errorf("tree height cannot be 0")
 	}
+	if params.BatchSize > MaxBatchSize {
+		return nil, fmt.Errorf("batch size %d exceeds maximum allowed %d", params.BatchSize, MaxBatchSize)
+	}
+	if params.TreeHeight > MaxTreeHeight {
+		return nil, fmt.Errorf("tree height %d exceeds maximum allowed %d", params.TreeHeight, MaxTreeHeight)
+	}
 
 	circuit := &BatchAddressTreeAppendCircuit{
 		BatchSize:            params.BatchSize,
EOF

for i := uint32(0); i < params.BatchSize; i++ {
circuit.LowElementProofs[i] = make([]frontend.Variable, params.TreeHeight)
circuit.NewElementProofs[i] = make([]frontend.Variable, params.TreeHeight)

Check failure

Code scanning / CodeQL

Slice memory allocation with excessive size value

This memory allocation depends on a [user-provided value](1).

Copilot Autofix

AI 6 months ago

To remediate this issue, you should enforce maximum allowable values on both TreeHeight and BatchSize before any slice allocation that uses them. Choose sensible upper bounds according to expected use (e.g., based on business logic or system capacity). Implement checks at the start of functions where these values are accepted, returning an error if either exceeds the maximum. These checks should be placed at the top of the CreateWitness function in prover/server/prover/v2/batch_address_append_circuit.go, which is the critical path for these allocations. You may also wish to add a helper method or constants for these bounds if appropriate.

Changes needed:

  • In prover/server/prover/v2/batch_address_append_circuit.go:
    • Define constants for maximum allowed TreeHeight and BatchSize (such as MaxTreeHeight and MaxBatchSize).
    • At the top of CreateWitness, add checks that error out if params.TreeHeight or params.BatchSize exceeds their respective maximums.

No changes to imports are needed; this is pure logic.


Suggested changeset 1
prover/server/prover/v2/batch_address_append_circuit.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/prover/server/prover/v2/batch_address_append_circuit.go b/prover/server/prover/v2/batch_address_append_circuit.go
--- a/prover/server/prover/v2/batch_address_append_circuit.go
+++ b/prover/server/prover/v2/batch_address_append_circuit.go
@@ -17,6 +17,11 @@
 	"github.com/reilabs/gnark-lean-extractor/v3/abstractor"
 )
 
+const (
+	MaxTreeHeight = 64  // Set according to system needs
+	MaxBatchSize  = 1024 // Set according to system needs
+)
+
 type BatchAddressTreeAppendCircuit struct {
 	PublicInputHash frontend.Variable `gnark:",public"`
 
@@ -144,6 +149,12 @@
 	if params.TreeHeight == 0 {
 		return nil, fmt.Errorf("tree height cannot be 0")
 	}
+	if params.BatchSize > MaxBatchSize {
+		return nil, fmt.Errorf("batch size too large (max allowed is %d)", MaxBatchSize)
+	}
+	if params.TreeHeight > MaxTreeHeight {
+		return nil, fmt.Errorf("tree height too large (max allowed is %d)", MaxTreeHeight)
+	}
 
 	circuit := &BatchAddressTreeAppendCircuit{
 		BatchSize:            params.BatchSize,
EOF
Comment on lines +22 to +96
name: stateless-js-v1
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest

services:
redis:
image: redis:8.0.1
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5

env:
LIGHT_PROTOCOL_VERSION: V1
REDIS_URL: redis://localhost:6379
CI: true

steps:
- name: Checkout sources
uses: actions/checkout@v4

- name: Setup and build
uses: ./.github/actions/setup-and-build
with:
skip-components: "redis,disk-cleanup"
cache-suffix: "js"

- name: Build stateless.js with V1
run: |
cd js/stateless.js
pnpm build:v1

- name: Build CLI
- name: Build compressed-token with V1
run: |
source ./scripts/devenv.sh
npx nx build @lightprotocol/zk-compression-cli --skip-nx-cache
cd js/compressed-token
pnpm build:v1

# Comment for breaking changes to Photon
- name: Run CLI tests
- name: Build CLI (CI mode - Linux x64 only)
run: |
source ./scripts/devenv.sh
npx nx test @lightprotocol/zk-compression-cli
npx nx build-ci @lightprotocol/zk-compression-cli

- name: Run stateless.js tests
- name: Run stateless.js tests with V1
run: |
source ./scripts/devenv.sh
npx nx test @lightprotocol/stateless.js
echo "Running stateless.js tests with retry logic (max 2 attempts)..."
attempt=1
max_attempts=2
until npx nx test-ci @lightprotocol/stateless.js; do
attempt=$((attempt + 1))
if [ $attempt -gt $max_attempts ]; then
echo "Tests failed after $max_attempts attempts"
exit 1
fi
echo "Attempt $attempt/$max_attempts failed, retrying..."
sleep 5
done
echo "Tests passed on attempt $attempt"

- name: Run compressed-token tests
- name: Run compressed-token tests with V1
run: |
source ./scripts/devenv.sh
npx nx test @lightprotocol/compressed-token
echo "Running compressed-token tests with retry logic (max 2 attempts)..."
attempt=1
max_attempts=2
until npx nx test-ci @lightprotocol/compressed-token; do
attempt=$((attempt + 1))
if [ $attempt -gt $max_attempts ]; then
echo "Tests failed after $max_attempts attempts"
exit 1
fi
echo "Attempt $attempt/$max_attempts failed, retrying..."
sleep 5
done
echo "Tests passed on attempt $attempt"

Check warning

Code scanning / CodeQL

Workflow does not contain permissions Medium

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {contents: read}

Copilot Autofix

AI about 2 months ago

To fix the problem, explicitly define minimal GITHUB_TOKEN permissions for this workflow or for the specific job. Since the workflow only checks out code, builds, and runs tests, it normally only needs read access to repository contents. We can add a permissions block at the root of the workflow so it applies to all jobs (there is only one job in the snippet). This will ensure that even if the repository or org default is read-write, the workflow will only have contents: read.

Concretely, in .github/workflows/js.yml, add:

permissions:
  contents: read

between the name: js-tests-v1 section and the concurrency: block (lines 14–16 in the snippet). This does not change any existing behavior of steps, but constrains the token according to the principle of least privilege. No additional methods, definitions, or imports are needed.

Suggested changeset 1
.github/workflows/js.yml

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/js.yml b/.github/workflows/js.yml
--- a/.github/workflows/js.yml
+++ b/.github/workflows/js.yml
@@ -13,6 +13,9 @@
 
 name: js-tests-v1
 
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
EOF
ananas-block and others added 17 commits October 24, 2025 21:18
* chore: registry program throw on zero network fee

* chore: set batched address tree default fee to 10k

* chore: limit batched tree creations to light security group

* chore: disable program owned trees

* chore: cleanup features, add tests for mainnet tree config and features

* cleanup

* chore: fix nits

* fix test

* fix: impl feedback
* chore: add docker image publishing to prover release workflow

* Add Go build and key download steps to prover release

* Add disk space cleanup step to prover release workflow

* Remove disk cleanup condition from prover release

* Move disk space cleanup step to prover build job

* Fix prover release workflow tag format
* chore: regenerate vkeys

* update prover version tag
* add fetch_accounts xtask

* fmt
Bumps [nx](https://github.com/nrwl/nx/tree/HEAD/packages/nx) from 20.8.1 to 22.0.1.
- [Release notes](https://github.com/nrwl/nx/releases)
- [Commits](https://github.com/nrwl/nx/commits/22.0.1/packages/nx)

---
updated-dependencies:
- dependency-name: nx
  dependency-version: 22.0.1
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 6.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](actions/download-artifact@v4...v6)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* fix: ctoken address Merkle tree check with cpi context

* test: failing write to cpi context
* feat(forester): gRPC-based event-driven processing for V2 trees

* chore: add protobuf-compiler to setup-and-build

* chore: update PHOTON_COMMIT version in versions.sh

* feat: add grpc_port config to cli

* cleanup

* cleanup

* cleanup

* wait for indexer in rpc-interop.test.ts

* wait for indexer in rpc-interop.test.ts

* bump photon version

* cleanup

* cleanup
sergeytimoshin and others added 11 commits March 4, 2026 23:35
* Refactor epoch processing logic in EpochManager

- Simplified the handling of epoch registration and processing by removing redundant checks and consolidating error handling.
- Enhanced logging for better traceability of epoch processing events.
- Updated the logic to always process the current epoch regardless of registration status.
- Improved the registration phase checks to determine if registration is open or closed.
- Adjusted the handling of target epochs to streamline the flow of sending epochs for processing.

Enhance ForesterStatus to include registration state

- Added `registration_is_open` field to `ForesterStatus` to indicate if registration is currently open.
- Updated logic in `get_forester_status_with_options` to determine the current registration state and adjust the next registration epoch accordingly.
- Modified calculations for slots until next registration based on the registration state.

Update queue helpers to track total ZKP batches

- Extended `V2QueueInfo` and related structures to include `input_total_zkp_batches` and `output_total_zkp_batches`.
- Adjusted parsing functions to calculate total ZKP batches based on batch size and ZKP batch size.
- Ensured compatibility with existing queue processing logic.

Deploy script adjustments

- Updated the deploy script to focus on the `light_registry` library, removing references to other libraries.

* refactor metrics

* revert deploy.sh

* feat: Add fallback RPC and indexer configuration options

- Introduced fallback RPC and indexer URLs in the environment configuration and CLI arguments to enhance resilience.
- Updated README.md to document new fallback options and their usage.
- Modified the API server and RPC pool to support automatic failover to fallback URLs on health check failures.
- Enhanced the configuration structs and tests to include fallback settings.
- Refactored related code for improved readability and maintainability.

* feat: Add metrics contract and update metrics registration logic

* Enhance CLI argument validation and improve error handling

- Added value parser with range validation for `rpc_pool_failure_threshold` and `rpc_pool_primary_probe_interval_secs` in CLI arguments to ensure they are greater than 0.
- Updated error handling in `EpochManager` to return an error when the epoch channel is closed instead of continuing the loop.
- Simplified queue capacity calculation in `parse_tree_status` by removing unnecessary checks for `zkp_batch_size`.
- Moved the spawning of the primary recovery probe to after the pool is built in `run_pipeline_with_run_id` for better clarity and structure.
…ol authority (#2325)

* chore: limit v1 state tree, v2 state&address tree creations to protocol authority

Entire-Checkpoint: beb8e7abdeef

* chore: use bigger stack in test

Entire-Checkpoint: ed8d74024c1f
* feat: refinalization for late forester registrations

* fix: improve wait_for_refinalize logic and remove unused epoch_pda parameter

* feat: add registration trackers to manage re-finalization for mid-epoch forester registrations

* cleanup
* chore: add v1 tree deprecation log messages

Log warnings when v1 state trees or address trees are used,
directing users to the v1-to-v2 migration guide.

Entire-Checkpoint: 4a00fc4f40bb

* feat: charge network fee on V1 output appends

Add network fee (5,000 lamports per unique V1 output tree) to match
the existing V2 output fee behavior and V1 input fee behavior.

Entire-Checkpoint: 7423e3c1c43d

* feat: reimburse forester for tx fees on V1 tree operations

Transfer network_fee lamports from queue account to fee_payer when
foresters perform nullify_leaves and update_address_merkle_tree
operations on V1 trees with network_fee > 0. The registry CPI
wrappers pass the forester's authority wallet as fee_payer.

Entire-Checkpoint: 7071fd5951fd

* fix: update JS tests for V1 output network fee

Entire-Checkpoint: ecbda9cc501d

* chore: clarify fee comment

Entire-Checkpoint: 814bd0c10ab4

* stash

* fix lint

* fix: transfer nullify fee from merkle tree, fix borrow conflicts

V1 state tree network fees accumulate in the merkle tree account (not
the nullifier queue), so nullify reimbursement must transfer from the
merkle tree. Also read network_fee before mutable data borrows to
avoid RefCell conflicts in both nullify_leaves and
update_address_merkle_tree.

Entire-Checkpoint: 6941c31c12ec
* fix: ForesterNotEligible deadlock fix

* refactor: add fee-filter to tree discovery

* fix: adjust FORESTER_NOT_ELIGIBLE error code to include ERROR_CODE_OFFSET

* feat: enhance tree discovery with retroactive filtering and add confirmation settings

* cleanup
…Tree (#2335)

Add fee_payer account to BatchAppend and BatchUpdateAddressTree instructions
to reimburse foresters for network fees. BatchAppend transfers 2x network_fee
from output_queue, BatchUpdateAddressTree transfers 1x network_fee from
merkle_tree. Transfers only occur when network_fee >= 5000 lamports.

Registry CPI wrappers pass fee_payer through; SDK builders set fee_payer
to forester. Also adds create-address-test-program to the programs build.
* chore: add poseidon hash input error for non 32 bytes inputs

* fix: pad RegisteredUser.data to 32 bytes for Poseidon hashv

The on-chain Poseidon length check now enforces all inputs to be
exactly 32 bytes. RegisteredUser.data is [u8; 31], so pad it into
a 32-byte array (right-aligned, big-endian) before hashing.

Entire-Checkpoint: 7de49f8e11b2

* fix: pad all Poseidon hash inputs to 32 bytes

light-poseidon 0.4.0 enforces that all inputs are exactly 32 bytes.
Pad smaller inputs (u64/u32/usize indices, [u8; 31] data) to 32-byte
arrays before hashing. Also bump light-poseidon to 0.4.0 in workspace.

Entire-Checkpoint: 140e93a9e15d

* fix: pad Poseidon hash inputs to 32 bytes in compressed account tests

The test_compressed_account_hash test was passing sub-32-byte slices
(leaf_index, lamports, discriminator) directly to Poseidon::hashv.
light-poseidon 0.4 requires all inputs to be exactly 32 bytes.

Entire-Checkpoint: ee45a7a4e626

* fix: pad 31-byte data to 32 bytes in system-cpi-test assertion

The test-side hash assertion also passes data.as_slice() (31 bytes)
to Poseidon::hashv which now requires 32-byte inputs.

Entire-Checkpoint: 1560c14e6263

* fix: pad Poseidon hash inputs to 32 bytes in indexed-merkle-tree tests

The test_append and functional_non_inclusion_test tests were passing
sub-32-byte slices to Poseidon::hashv. light-poseidon 0.4 requires
all inputs to be exactly 32 bytes.

Entire-Checkpoint: 716761bbe363

* fix: rustfmt formatting for indexed-merkle-tree tests

Entire-Checkpoint: f7dc2aedbbc6
* rm decompressinterface

test cov: offcurve, zero-amounts

test cov: dupe hash failure, v1 reject at ixn boundary

more test cov

load, add freeze thaw, extend test cov

add tests

lint

frozen handling

more tests

mark internals

rm _tryfetchctokencoldbyaddress

cleanups

fmt

* unwrap consistent

* remove createLoadAccountsParams

* add uni err

* fix

* remove layout serde, add load-ata instruction

* apply review fixes, simplify delegate and frozen reasoning

* update freeze thaw

* test upd

* fix cold load delegate

* fmt

* fix ci

* add test cov: tx size

* rename ctoken full v3

* renames

* wip

* checked for all interface, pass decimals

* use destination directly in transferinterface

* format

* fix lint

* fix lint

* fix transfer-interface.test.ts

* address last remaining comment

* add changelog: br change decimals

* fix mds

* apply review comments

* format

* lint

* for unwrap/wrap ixns we should always do wrap=false

* fixes

* lint

* fix version mismatch between formatter and linter

* better errors for getAccountInterface and getMintInterface

* format

* granualr typed errs on accountInfoInterface

* lint, changelog, tests

* v2 gate for getaccountinfointerface tests
* prep beta release

* bump versions

* fix comments and changelog
sergeytimoshin and others added 4 commits March 13, 2026 21:23
…tween v1, v2, and compression (#2331)

* feat: add priority fee configuration and handling

* fix: add Signature import to solana_sdk in epoch_manager

* feat: add confirmation configuration for smart transactions and update related functions

* format

* refactor

* fix: improve error handling

* refactor

* cleanup

* - refactored transaction sending logic in `send_transaction.rs` and `tx_sender.rs`
- enhanced error handling in transaction processing to differentiate between send failures and execution failures
- modified `priority_fee.rs` to streamline error handling and improve fallback mechanisms
- adjusted V2 error handling to include custom error codes for better debugging
- improved the handling of transaction execution status
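The send-failure vs. execution-failure distinction above can be sketched with a small error enum; the names here are illustrative stand-ins, not the actual forester types:

```rust
// Sketch of the two failure classes the refactor separates.
#[derive(Debug, PartialEq)]
enum TxFailure {
    /// RPC rejected or dropped the transaction before it landed.
    SendFailed(String),
    /// Transaction landed on-chain but the program returned an error.
    ExecutionFailed { custom_code: Option<u32> },
}

// Classify an outcome from two observations: did the send succeed,
// and did the program report a custom error code after landing?
fn classify(send_ok: bool, program_err: Option<u32>) -> Result<(), TxFailure> {
    if !send_ok {
        return Err(TxFailure::SendFailed("rpc send error".into()));
    }
    match program_err {
        Some(code) => Err(TxFailure::ExecutionFailed { custom_code: Some(code) }),
        None => Ok(()),
    }
}

fn main() {
    assert_eq!(classify(true, None), Ok(()));
    assert!(matches!(classify(false, None), Err(TxFailure::SendFailed(_))));
    assert!(matches!(
        classify(true, Some(6001)),
        Err(TxFailure::ExecutionFailed { custom_code: Some(6001) })
    ));
}
```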

* cleanup

* cleanup

* format

* refactor error handling in transaction processing to include batch not ready state

* add logging to forester tests workflow

* dump photon.log on failure

* add indexer health checks and tracker wait functions in tests

* more logs

* add local transaction dumping functionality and enhance test failure logging

* refactor transaction extraction and block fetching in local transaction dump

* refactor WorkReportError handling to use registry_error_code for improved clarity

* refactor ForesterError handling to use registry_error_code for NotEligible checks

* wip

* refactor local transaction dumping to handle duplicates and improve output structure

* cleanup

* debugging

* unify test validator and photon commitment

* increase timeout for Forester e2e test to 120 minutes

* custom surfpool branch

* refactor: update surfpool version to 1.1.1 and remove unused binary path logic

* fix: remove unnecessary environment variable from spawnBinary call

* refactor: remove unused dependencies and streamline eligibility checks in epoch manager

* format

* format
* fix: handle mixed batch/non-batch inputs in create_nullifier_queue_indices

When a transaction mixes batch (v2) and legacy/concurrent (v1) input
accounts, the nullifier queue index assignment was using the raw position
in input_compressed_accounts as the write index into nullifier_queue_indices.
This caused an out-of-bounds panic when a non-batch account appeared between
batch accounts (e.g. [batchA, legacy, batchB, batchA]).

Fix by walking input_compressed_accounts in order and using a compact
batch_idx counter that only advances for accounts with a matching sequence
number entry. Non-batch accounts have no sequence number entry and are
skipped without consuming a slot.
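The compact-counter walk described above can be sketched as follows; the types and names are simplified stand-ins for the light-event code, with batch membership reduced to a boolean:

```rust
// Minimal sketch of the corrected index assignment. An account is
// "batch" (v2) if it has a sequence-number entry; legacy/concurrent
// accounts are skipped without consuming a slot in
// `nullifier_queue_indices`.
fn assign_queue_indices(
    is_batch: &[bool],               // per input account: has a seq-number entry?
    nullifier_queue_indices: &[u16], // one entry per *batch* account only
) -> Vec<Option<u16>> {
    let mut batch_idx = 0; // compact counter: advances only for batch accounts
    is_batch
        .iter()
        .map(|&batch| {
            if batch {
                let idx = nullifier_queue_indices[batch_idx];
                batch_idx += 1;
                Some(idx)
            } else {
                None // legacy account: no nullifier queue index
            }
        })
        .collect()
}

fn main() {
    // Mirrors the failing shape [batchA, legacy, batchB, batchA]: the
    // buggy code indexed with the raw positions 0..3 into a 3-entry
    // vec and panicked at index 3.
    let out = assign_queue_indices(&[true, false, true, true], &[6, 3, 7]);
    assert_eq!(out, vec![Some(6), None, Some(3), Some(7)]);
}
```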

* feat: add xtask fetch-block-events subcommand

Fetches a configurable number of blocks starting at a given slot, parses
every transaction using event_from_light_transaction, and prints a
structured summary of all Light Protocol events found.

Usage:
  cargo xtask fetch-block-events --start-slot <slot> --network mainnet
  cargo xtask fetch-block-events --start-slot <slot> --network devnet --num-blocks 5

* fix: format

* chore: bump light-event 0.23.0 -> 0.23.1

* fix: extract ParsedInstruction struct to satisfy clippy::type_complexity

* fix: rustfmt fetch_block_events

* test(light-event): add regression tests for mixed batch/legacy nullifier OOB panic

Transaction 3ybts1eFSC7QN6aU4ao6NJCgn7xTbtBVyzeLDZJf9eVN93vHZWupX4TXqHHgV18xf17eit7Uw5T135uabnpToKK4
at slot 407265372 panicked with "index out of bounds: len is 3 but index is 3"
in create_nullifier_queue_indices when inputs mix batch and legacy trees.

Adds two tests:
- src/regression_test.rs: real mainnet instruction bytes decoded via bs58
- tests/parse_test.rs: synthetic test verifying exact nullifier_queue_indices [6, 3, 7]

Also adds light-event to sdk-libs/justfile so it runs in CI.
* fixes

* use slicelast

* upd docs

* format

* upd cu, transferoptions

* add check

* fmt ci
ananas-block and others added 3 commits March 20, 2026 11:47
…rk-ff duplicate compilation (#2356)

* perf: iteration 1 — skip zstd-sys C build on CI via system libzstd

Install libzstd-dev on CI and set ZSTD_SYS_USE_PKG_CONFIG=1 so zstd-sys
uses pkg-config to find the system library instead of compiling from C
source. zstd-sys (~7.9s) is a transitive dep via Solana's reqwest 0.11
and cannot be removed from the dep graph, but the C build can be avoided
with a system library present.

Also includes lld, clang, and libssl-dev from previous optimization work
(already in the working tree from the prior plan).

Entire-Checkpoint: 549cca13d530

* perf: iteration 8 — compile proc-macro crates at opt-level 3 in dev profile

Add [profile.dev.package.*] overrides for syn, proc-macro2, quote,
serde_derive, ark-ff-macros, and ark-ff-asm. These crates execute at
build time; compiling them at full optimization reduces the time they
spend generating code for downstream crates.
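A minimal Cargo.toml sketch of these overrides (the crate list follows the commit message; exact placement in the workspace manifest is assumed):

```toml
# Build-time (proc-macro) crates run during compilation, so optimizing
# them speeds up code generation for downstream crates.
[profile.dev.package.syn]
opt-level = 3
[profile.dev.package.proc-macro2]
opt-level = 3
[profile.dev.package.quote]
opt-level = 3
[profile.dev.package.serde_derive]
opt-level = 3
[profile.dev.package.ark-ff-macros]
opt-level = 3
[profile.dev.package.ark-ff-asm]
opt-level = 3
```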

* perf: iteration 4 — eliminate ark-ff 0.4 duplicate compilation

Patch groth16-solana via [patch.crates-io] with the local version which:
- Uses ark-ff 0.5 (same as workspace) instead of 0.4
- Bumps solana-bn254 from 2.x to 3.x (which uses ark 0.5)
- Sets default-features = false for ark/thiserror/serde deps

Also bump workspace solana-bn254 from "2.2" to "3.2.1" to match.

Fix typo in 104 light-verifier verifying_keys files: vk_gamme_g2 ->
vk_gamma_g2 (the typo fix was shipped in groth16-solana PR #29).

TODO: replace path patch with git dep once groth16-solana changes
are pushed and merged (branch: jorrit/chore-bump-deps, commit: 4e6cacf).

* perf: switch groth16-solana patch to git dep (rev 4e6cacf)

Replace local path patch with git dep pointing to the pushed commit,
making the ark-ff 0.4 elimination work on CI.
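A sketch of what the patch section might look like after this change; the repository URL is an assumption, while the rev and the solana-bn254 bump come from the commit messages above:

```toml
# Replace the crates-io groth16-solana with the git revision that uses
# ark-ff 0.5, eliminating the duplicate ark-ff 0.4 compilation.
[patch.crates-io]
groth16-solana = { git = "https://github.com/Lightprotocol/groth16-solana", rev = "4e6cacf" }

[workspace.dependencies]
solana-bn254 = "3.2.1"  # v3 uses ark 0.5, matching the workspace
```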

* fix: update solana-bn254 v3 API and fix remaining vk_gamme_g2 typos

solana-bn254 v3 deprecated the unversioned compress/decompress functions
in favour of explicit _be variants. Update prover/client/src/proof.rs to
use alt_bn128_g{1,2}_{compress,decompress}_be.

Also fix the vk_gamme_g2 -> vk_gamma_g2 typo in xtask/src/create_vkeyrs_from_gnark_key.rs
which was missed in the bulk rename (this file both constructs a Groth16Verifyingkey
and generates source code using quote!).

* fix: upgrade SOLANA_VERSION 2.2.15 -> 2.3.13 to fix edition 2024 CI failure

Solana 2.2.15 ships platform-tools v1.46 (cargo 1.84.0), which cannot
parse Cargo.toml manifests that use `edition = "2024"`. time-macros
0.2.27 (transitively required via solana-streamer -> x509-parser ->
asn1-rs -> time 0.3.47) uses edition 2024.

Main CI was not failing because its build cache was warm. Our Cargo.lock
changes (solana-bn254 2.2 -> 3.2.1, groth16-solana patch) bust the
cache, causing a fresh compile of time-macros 0.2.27 which then fails.

Solana 2.3.13 ships platform-tools v1.48 (Rust 1.86+) which supports
edition 2024. This also aligns the CLI version with the workspace
library crates that are already pinned to "2.3".

* fix: call ring provider install exactly once via Once

Use std::sync::Once in ensure_ring_provider so that
install_default() is only attempted on the first call per process,
avoiding redundant global-state mutations on subsequent calls.
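The once-only install can be sketched with `std::sync::Once`; the counter here is a stand-in for the real provider install so the at-most-once behavior is observable:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Once;

static INIT: Once = Once::new();
static INSTALL_CALLS: AtomicU32 = AtomicU32::new(0);

// Stand-in for the real install_default(); counts invocations.
fn install_default() {
    INSTALL_CALLS.fetch_add(1, Ordering::SeqCst);
}

// Sketch of ensure_ring_provider: the install runs at most once per
// process, no matter how many callers race through here.
fn ensure_ring_provider() {
    INIT.call_once(install_default);
}

fn main() {
    for _ in 0..5 {
        ensure_ring_provider();
    }
    assert_eq!(INSTALL_CALLS.load(Ordering::SeqCst), 1);
}
```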

* fix: pin time to 0.3.37 to avoid edition2024 in time-macros

time 0.3.38+ depends on time-macros 0.2.20+ which uses edition = "2024"
in its Cargo.toml. The Solana platform-tools ship Cargo 1.84.0 which
cannot parse edition 2024 manifests. Pin time to 0.3.37 (time-macros
0.2.19) so cargo test-sbf can parse the full dep graph.

Affects: e2e-test via solana-client -> solana-streamer -> x509-parser
         -> asn1-rs -> time -> time-macros
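In Cargo.toml terms, the pin is an exact-version requirement, which forbids 0.3.38+ (and thus time-macros 0.2.20+ with its edition-2024 manifest); where the dependency line lives in this workspace is an assumption:

```toml
[workspace.dependencies]
# "=0.3.37" pins exactly: time-macros stays at 0.2.19, which the
# platform-tools Cargo 1.84.0 can still parse.
time = "=0.3.37"
```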
* chore: add forester tps xtask

Entire-Checkpoint: ca2f5cb4ca53

* chore: allow rpc url from env variable

Entire-Checkpoint: eb09f696bf72
* feat(compressed-token): add approve/revoke delegation for light-token ATAs

Add TypeScript SDK functions to call the on-chain CTokenApprove (discriminator 4)
and CTokenRevoke (discriminator 5) instruction handlers for light-token associated
token accounts.

New files:
- instructions/approve-revoke.ts: sync instruction builders matching Rust SDK layout
- actions/approve-interface.ts: async actions with cold loading + tx sending
- tests/e2e/approve-revoke-light-token.test.ts: unit + E2E tests

Also adds getLightTokenDelegate helper and extends FrozenOperation type.

* fix(sdk): make decimals optional in unified approve/revoke wrappers

Avoid unnecessary getMintInterface RPC call when caller provides decimals.

* feat(sdk): add transferDelegated for light-token ATAs

Add transferDelegatedInterface action and unified wrapper, completing
the approve → transfer → revoke delegation flow for light-token ATAs.

* add spl t22 support

* refactor(sdk): align transferDelegated with wallet-recipient API

Update transferDelegatedInterface and
createTransferDelegatedInterfaceInstructions to accept a recipient
wallet address instead of an explicit destination token account,
matching the transferInterface convention from PR #2354.

ATA derivation and idempotent creation now happen internally for
all programId variants (light-token, SPL, Token-2022).

* 1st batch comments

* docs(sdk): document load-all behavior in approve/revoke JSDoc; add owner==feePayer E2E test

Add @remarks to approve/revoke functions documenting that for light-token
mints, all cold (compressed) balances are loaded into the hot ATA regardless
of the delegation amount. Add E2E test covering the owner==feePayer code
path which was previously only tested at the unit level.

* add regression tests

* fixes

* update changelog

* upd changelog

* fix: packedaccounts in js should not turn bool to number

* cherry pick bool fix

* bump versions again

---------

Co-authored-by: tilo-14 <tilo@luminouslabs.com>
Co-authored-by: Swenschaeferjohann <swen@lightprotocol.com>
* wrap options

* wip

* upd changelog
- Align changelogs on 0.23.0 stable (npm V2 default; no app LIGHT_PROTOCOL_VERSION=V2)
- Deprecate featureFlags beta helpers as no-ops; clarify V2_REQUIRED_ERROR
- Bump compressed-token, stateless.js, and zk-compression-cli package versions to stable tags

Made-with: Cursor
sergeytimoshin and others added 3 commits March 26, 2026 21:45
* feat: optimize address batch pipeline

* format

* feat: stabilize address batch pipeline

* chore: update subproject commit for photon

* fix: update deranged and time package versions in Cargo.lock

* feat: add input validation for batch size in get_batch_address_append_circuit_inputs

* cleanup
* feat: optimize address batch pipeline

* format

* feat: stabilize address batch pipeline

* feat: batch cold account loads in light client

* fix: harden load batching and mixed decompression

* Fix prover startup and decompression load flow

* cleanup: harden prover startup polling

* format

* format

* cleanup

* cleanup

* refactor: simplify batch data length validation and remove redundant proof height checks

* refactor: remove unused output_queue_index parameter from into_in_token_data methods
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants