Perf/cache epoch version in ClarityDatabase #6959
Draft
jacinta-stacks wants to merge 17 commits into stacks-network:develop from jacinta-stacks:perf/cache-epoch-per-block
Commits (17):
3451d95 jacinta-stacks: Cache epoch version in ClarityDatabase and benchmark it
1f5d255 jacinta-stacks: Make sure that when we set the block via at-block, we invalidate the …
532e325 jacinta-stacks: Merge branch 'develop' of https://github.com/stacks-network/stacks-co…
a92bda2 jacinta-stacks: CRC: use iter_batched_ref and PerIteration in benching
5b1663b jacinta-stacks: Merge branch 'develop' of https://github.com/stacks-network/stacks-co…
6bfa9fc jacinta-stacks: Add tests for epoch rollback and block_at functionality
6fd5f32 jacinta-stacks: CRC: pass known epoch to ClarityDatabase constructors to avoid a MARF…
8301fab federico-stacks: chore: merge develop with conflicts
33bd8e3 federico-stacks: crc: restore and improve docstring for get_clarity_epoch_version and …
a2593d2 federico-stacks: crc: makes get_clarity_db_epoch_version use read_epoch_from_store and…
2cb8150 federico-stacks: chore: improve docstrings involved with cached_epoch
9de52bb federico-stacks: chore: add as_clarity_db_with_epoch docstring
38e54d7 federico-stacks: test: add coverage for cached_epoch and read_epoch_from_store
efbee17 federico-stacks: chore: add changelog entry
c1315ad federico-stacks: chore: makes chainstate tests fs path cross-os
b2f1373 federico-stacks: fix: improve chainstate test harness to work nicely with as_clarity_d…
10dfc5e federico-stacks: chore: merge develop with conflicts
Changelog entry: Improved performance in ClarityDatabase by caching the Clarity epoch version and eliminating redundant store reads.
// Copyright (C) 2026 Stacks Open Internet Foundation
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

//! Baseline benchmarks for `get_clarity_epoch_version()` overhead.
//!
//! These benchmarks exercise the DB-heavy hot paths that call
//! `get_clarity_epoch_version()` on every operation:
//!
//! 1. **map_set_get** — Repeated `(map-set …)` + `(map-get? …)` in a loop.
//!    Each `map-set` calls `get_clarity_epoch_version()` twice (key + value
//!    `admits` checks). Each `map-get?` calls it once. This is the single
//!    hottest path for epoch lookups.
//!
//! 2. **var_set_get** — Repeated `(var-set …)` + `(var-get …)`.
//!    Each `var-set` calls `get_clarity_epoch_version()` once for the
//!    `admits` check.
//!
//! 3. **contract_call_heavy** — Repeated private function calls, each of
//!    which reads a data-var, exercising the epoch lookup under call
//!    overhead.
//!
//! 4. **map_insert_delete** — Repeated `(map-insert …)` + `(map-delete …)`,
//!    triggering roughly three epoch lookups per round.
//!
//! Run with:
//!     cargo bench --bench epoch_cache -p clarity

use std::hint::black_box;

use clarity::vm::contexts::{ContractContext, GlobalContext};
use clarity::vm::costs::LimitedCostTracker;
use clarity::vm::database::MemoryBackingStore;
use clarity::vm::representations::SymbolicExpression;
use clarity::vm::types::QualifiedContractIdentifier;
use clarity::vm::version::ClarityVersion;
use clarity::vm::{ast, eval_all};
use criterion::{BatchSize, BenchmarkId, Criterion, criterion_group, criterion_main};
use stacks_common::consts::CHAIN_ID_TESTNET;
use stacks_common::types::StacksEpochId;

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

fn parse(source: &str) -> Vec<SymbolicExpression> {
    let contract_id = QualifiedContractIdentifier::transient();
    let mut cost = LimitedCostTracker::new_free();
    ast::build_ast(
        &contract_id,
        source,
        &mut cost,
        ClarityVersion::Clarity2,
        StacksEpochId::Epoch30,
    )
    .expect("failed to parse benchmark program")
    .expressions
}

/// Create a `MemoryBackingStore` with the epoch stored in the KV store,
/// matching production behavior where the epoch is written during epoch
/// initialization.
fn setup_store() -> MemoryBackingStore {
    let mut marf = MemoryBackingStore::new();
    let mut db = marf.as_clarity_db();
    db.begin();
    db.set_clarity_epoch_version(StacksEpochId::Epoch30)
        .expect("failed to set epoch");
    db.commit().unwrap();
    marf
}

/// Execute `parsed` in a fresh environment.
/// `marf` is provided by the caller (created in `iter_batched_ref` setup) so
/// that SQLite initialization and cleanup are excluded from the timing window.
/// `GlobalContext::new` is cheap (no DB calls), so including it in the timing
/// window should be negligible.
fn run(parsed: &[SymbolicExpression], marf: &mut MemoryBackingStore) {
    let contract_id = QualifiedContractIdentifier::transient();
    let db = marf.as_clarity_db();
    let mut global_context = GlobalContext::new(
        false,
        CHAIN_ID_TESTNET,
        db,
        LimitedCostTracker::new_free(),
        StacksEpochId::Epoch30,
    );
    let mut ctx = ContractContext::new(contract_id, ClarityVersion::Clarity2);
    black_box(
        global_context
            .execute(|g| eval_all(parsed, &mut ctx, g, None))
            .unwrap(),
    );
}

// ---------------------------------------------------------------------------
// Program generators
// ---------------------------------------------------------------------------

/// Generates a program that defines a map and does `iters` rounds of
/// `(map-set …)` + `(map-get? …)`.
///
/// Each map-set triggers 2× `get_clarity_epoch_version()` (key admits + value
/// admits) plus serialization. Each map-get triggers 1× epoch lookup.
/// Total epoch lookups ≈ 3 × iters.
fn make_map_program(iters: usize) -> String {
    // Build a sequence of (map-set m {id: <i>} {val: <i>}) (map-get? m {id: <i>})
    let mut body = String::new();
    for i in 0..iters {
        body.push_str(&format!(
            "(map-set m {{id: {i}}} {{val: {i}}})\n\
             (map-get? m {{id: {i}}})\n"
        ));
    }
    format!(
        "(define-map m {{id: int}} {{val: int}})\n\
         {body}\n\
         true"
    )
}

/// Generates a program that defines a data-var and does `iters` rounds of
/// `(var-set …)` + `(var-get …)`.
///
/// Each var-set triggers 1× `get_clarity_epoch_version()` (admits check).
/// Total epoch lookups ≈ iters.
fn make_var_program(iters: usize) -> String {
    let mut body = String::new();
    for i in 0..iters {
        body.push_str(&format!(
            "(var-set counter {i})\n\
             (var-get counter)\n"
        ));
    }
    format!(
        "(define-data-var counter int 0)\n\
         {body}\n\
         true"
    )
}

/// Generates a program that does `iters` intra-contract private function calls,
/// each of which reads a data-var (triggering epoch lookups).
///
/// This exercises the var-get path under call overhead, closer to real workloads.
fn make_call_heavy_program(iters: usize) -> String {
    let calls = (0..iters)
        .map(|_| "(do-read)".to_string())
        .collect::<Vec<_>>()
        .join("\n");
    format!(
        "(define-data-var counter int 0)\n\
         (define-private (do-read) (var-get counter))\n\
         {calls}\n\
         true"
    )
}

/// Generates a program that does `iters` rounds of map-insert (checking
/// existence via full deserialization) + map-delete.
///
/// `map-insert` calls `get_clarity_epoch_version()` 2× for admits, then
/// `data_map_entry_exists` which does a full get+deserialize.
/// `map-delete` calls `get_clarity_epoch_version()` 1× for admits.
/// Total epoch lookups ≈ 3 × iters.
fn make_map_insert_delete_program(iters: usize) -> String {
    let mut body = String::new();
    for i in 0..iters {
        body.push_str(&format!(
            "(map-insert m {{id: {i}}} {{val: {i}}})\n\
             (map-delete m {{id: {i}}})\n"
        ));
    }
    format!(
        "(define-map m {{id: int}} {{val: int}})\n\
         {body}\n\
         true"
    )
}

// ---------------------------------------------------------------------------
// Benchmark groups
// ---------------------------------------------------------------------------

fn bench_map_set_get(c: &mut Criterion) {
    let mut group = c.benchmark_group("epoch_cache/map_set_get");
    for &iters in &[50usize, 200] {
        let program = make_map_program(iters);
        let parsed = parse(&program);

        group.bench_function(BenchmarkId::new("iters", iters), |b| {
            b.iter_batched_ref(
                setup_store,
                |marf| run(&parsed, marf),
                BatchSize::PerIteration,
            );
        });
    }
    group.finish();
}

fn bench_var_set_get(c: &mut Criterion) {
    let mut group = c.benchmark_group("epoch_cache/var_set_get");
    for &iters in &[50usize, 200] {
        let program = make_var_program(iters);
        let parsed = parse(&program);

        group.bench_function(BenchmarkId::new("iters", iters), |b| {
            b.iter_batched_ref(
                setup_store,
                |marf| run(&parsed, marf),
                BatchSize::PerIteration,
            );
        });
    }
    group.finish();
}

fn bench_call_heavy(c: &mut Criterion) {
    let mut group = c.benchmark_group("epoch_cache/call_heavy");
    for &iters in &[50usize, 200] {
        let program = make_call_heavy_program(iters);
        let parsed = parse(&program);

        group.bench_function(BenchmarkId::new("iters", iters), |b| {
            b.iter_batched_ref(
                setup_store,
                |marf| run(&parsed, marf),
                BatchSize::PerIteration,
            );
        });
    }
    group.finish();
}

fn bench_map_insert_delete(c: &mut Criterion) {
    let mut group = c.benchmark_group("epoch_cache/map_insert_delete");
    for &iters in &[50usize, 200] {
        let program = make_map_insert_delete_program(iters);
        let parsed = parse(&program);

        group.bench_function(BenchmarkId::new("iters", iters), |b| {
            b.iter_batched_ref(
                setup_store,
                |marf| run(&parsed, marf),
                BatchSize::PerIteration,
            );
        });
    }
    group.finish();
}

criterion_group!(
    benches,
    bench_map_set_get,
    bench_var_set_get,
    bench_call_heavy,
    bench_map_insert_delete,
);
criterion_main!(benches);