feat: add REDUCE_DAA_TARGET feature activation for 4x faster blocks #1636

msbrogli wants to merge 16 commits into chore/claude-setup from feat/daa-feature-activation
Conversation
```python
state = self.get_state(block=block, feature=feature)
return state.is_active()

def is_feature_active_for_next_block(self, *, parent_block: 'Block', feature: Feature) -> bool:
```
Should the DAA start right at the block where it is activated? Or should it start after the very next block where it is activated?
I would prefer starting it at the very next block, so we don't need this extra and specific method in feature service. The practical effect is the same.
```python
AVG_TIME_BETWEEN_BLOCKS: int = 30  # in seconds

# Average time between blocks after REDUCE_DAA_TARGET feature activation.
REDUCED_AVG_TIME_BETWEEN_BLOCKS: int = 30  # in seconds, networks configure the post-activation target
```
Fix: 7.5. Should we do 6 instead? Or 10?
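Since 7.5 isn't an int, later commits store this value in tenths of a second (`REDUCED_AVG_TIME_BETWEEN_BLOCKS_10X`, default 75), keeping the setting integral. A minimal sketch of the arithmetic, using the names from the commit messages (the `reward_divisor` helper itself is hypothetical):

```python
AVG_TIME_BETWEEN_BLOCKS = 30  # seconds, pre-activation target
REDUCED_AVG_TIME_BETWEEN_BLOCKS_10X = 75  # tenths of a second, i.e. 7.5s

def reward_divisor() -> int:
    """Factor by which the block reward is divided after activation.

    Integer-only math: (30 * 10) // 75 == 4, so 4x faster blocks each
    carry 1/4 of the reward and the inflation rate is unchanged.
    """
    return (AVG_TIME_BETWEEN_BLOCKS * 10) // REDUCED_AVG_TIME_BETWEEN_BLOCKS_10X
```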
| Branch | feat/daa-feature-activation |
| Testbed | ubuntu-22.04 |
| Benchmark | Latency Result (Δ%) | Lower Boundary (Limit %) | Upper Boundary (Limit %) |
|---|---|---|---|
| sync-v2 (up to 20000 blocks) | 1.69 m (−0.91%), baseline: 1.71 m | 1.54 m (90.83%) | 2.05 m (82.58%) |
Force-pushed from c07b6af to 5bf8e39
```diff
 """Validate reward amount."""
 parent_block = block.get_block_parent()
-tokens_issued_per_block = self._daa.get_tokens_issued_per_block(parent_block.get_height() + 1)
+tokens_issued_per_block = self._daa.get_tokens_issued_per_block(parent_block.get_height() + 1, block=block)
```
Should this method receive only the block? It can get the parent block height from the block itself if needed.
Force-pushed from d4ff306 to a9f8b34
Force-pushed from a9f8b34 to 46951f4
Force-pushed from 5a42517 to 6ed83c8
Force-pushed from 46951f4 to 7cf8797
This implementation is pretty specific and ad-hoc for the reduction feature. You could instead use the Features.from_vertex structure and make a BlockTimeVersion analogous to OpcodesVersion, NanoRuntimeVersion, and the unmerged BlueprintVersion.
By doing that, the structure would be ready to make another change if we decide to do it in the future (which is possible after we observe the real network behavior after the update). With the current solution, it would get unnecessarily complex to add another conditional on top of the implemented ones.
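A sketch of the suggested shape, assuming the version enum would be derived from feature state the same way the existing version fields are (the selector function here is illustrative, not the actual API):

```python
from enum import Enum, unique

@unique
class BlockTimeVersion(Enum):
    """Analogous to OpcodesVersion and NanoRuntimeVersion."""
    V1 = 'v1'  # 30s target
    V2 = 'v2'  # reduced target once REDUCE_DAA_TARGET is active

def select_block_time_version(reduce_daa_target_active: bool) -> BlockTimeVersion:
    # Mirrors the feature_is_active[...] pattern used for the other version fields.
    return BlockTimeVersion.V2 if reduce_daa_target_active else BlockTimeVersion.V1
```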
Force-pushed from 7cf8797 to cb40622
```python
fee_tokens=True,
opcodes_version=OpcodesVersion.V2,
nano_runtime_version=NanoRuntimeVersion.V2,
block_time_version=BlockTimeVersion.V2,
```
Force-pushed from 12d0624 to 76cefe7
```python
"""
if block is not None:
    return self._select(block).get_tokens_issued_per_block(height)
return self._v1.get_tokens_issued_per_block(height)
```
I don't like this default. Should we require a block? Or the version to use?
Or, if a block is not provided, use the best block?
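A sketch of that fallback, assuming a hypothetical `best_block_getter` collaborator on the facade (not the actual wiring):

```python
class DaaFacadeSketch:
    """Illustrative facade: dispatch by block, falling back to the best block."""

    def __init__(self, select, best_block_getter):
        self._select = select                        # block -> versioned DAA
        self._best_block_getter = best_block_getter  # () -> current best block

    def get_tokens_issued_per_block(self, height, block=None):
        if block is None:
            # Instead of hardcoding V1, use the chain's current best block.
            block = self._best_block_getter()
        return self._select(block).get_tokens_issued_per_block(height)
```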
```python
def minimum_tx_weight(self, tx: Transaction) -> float:
    """Returns the minimum weight for the param tx. Version-independent."""
    return self._v1.minimum_tx_weight(tx)
```
It should require the block and select as well.
```python
def get_mined_tokens(self, height: int) -> int:
    """Return the number of tokens mined in total at height. Version-independent."""
    return self._v1.get_mined_tokens(height)
```
It should require the block and select as well.
```python
def get_weight_decay_amount(self, distance: int) -> float:
    """Return the amount to be reduced in the weight of the block. Version-independent."""
    return self._v1.get_weight_decay_amount(distance)
```
It should require the block and select as well.
```python
    parent_block_getter: Callable[[Block], Block],
) -> float:
    """Public method for template creation."""
    if self.TEST_MODE & TestMode.TEST_BLOCK_WEIGHT:
```
Should we remove TEST_MODE once and for all?
```python
fee_tokens=False,
opcodes_version=OpcodesVersion.V2,
nano_runtime_version=NanoRuntimeVersion.V2,
block_time_version=DAAVersion.V1,
```
block_time_version? It should be daa_version.
```python
nano_runtime_version = (
    NanoRuntimeVersion.V2 if feature_is_active[Feature.NANO_RUNTIME_V2] else NanoRuntimeVersion.V1
)
block_time_version = (
```

```python
fee_tokens=False,
opcodes_version=OpcodesVersion.V1,
nano_runtime_version=NanoRuntimeVersion.V1,
block_time_version=DAAVersion.V1,
```
- Rename Features.block_time_version field to daa_version
- Make block param required on facade get_tokens_issued_per_block
- Add optional block param to minimum_tx_weight, get_mined_tokens, get_weight_decay_amount for future version dispatch
- Bypass facade in manager.py and default_filler.py (info-only, no block context)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```
The weight must not be less than ``min_block_weight``.

.. _RFC 22: https://gitlab.com/HathorNetwork/rfcs/merge_requests/22
```
```python
blocks_in_this_halving = height - number_of_halvings * settings.BLOCKS_PER_HALVING

tokens_per_block = settings.INITIAL_TOKENS_PER_BLOCK
```
This has to be updated, right?
```python
if self._feature_service is None:
    return self._v1
```
We should add an assert somewhere to make sure this is set in non-test environments.
```diff
 def get_tokens_issued_per_block(self, height: int) -> int:
-    """Return the number of tokens issued (aka reward) per block of a given height."""
-    return self.daa.get_tokens_issued_per_block(height)
+    """Return the number of tokens issued (aka reward) per block of a given height.
+
+    This is an info-only method (used by APIs) with no block context, so it always uses V1.
+    """
+    return self.daa._v1.get_tokens_issued_per_block(height)
```
It should return the correct value based on the height.
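A sketch of height-based selection, with the per-version lookups and the activation height passed in explicitly (both hypothetical; the real code would get them from the DAA facade and the feature service):

```python
from typing import Callable, Optional

def tokens_issued_per_block(
    height: int,
    activation_height: Optional[int],
    v1_lookup: Callable[[int], int],
    v2_lookup: Callable[[int], int],
) -> int:
    """Info-only reward lookup that picks the DAA version by height.

    activation_height is None while REDUCE_DAA_TARGET is not ACTIVE.
    """
    if activation_height is not None and height >= activation_height:
        return v2_lookup(height)
    return v1_lookup(height)
```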
```python
    parent_block_getter: Callable[[Block], Block],
) -> list[VertexId]:
    """Return the ids of the required blocks to call `calculate_block_difficulty`."""
    return self._v1.get_block_dependencies(block, parent_block_getter)
```
Call the standalone _get_block_dependencies function instead, and remove it from v1 and v2.
```python
@cpu.profiler(key=lambda _, block: 'calculate_block_difficulty!{}'.format(block.hash.hex()))
def calculate_block_difficulty(self, block: Block, parent_block_getter: Callable[[Block], Block]) -> float:
    """Calculate block weight according to the ascendants of `block`."""
    if self.TEST_MODE & TestMode.TEST_BLOCK_WEIGHT:
        return 1.0
    if block.is_genesis:
        return self.MIN_BLOCK_WEIGHT
    parent_block = parent_block_getter(block)
    return _calculate_next_weight(
        self._settings, parent_block, block.timestamp, parent_block_getter,
        avg_time=self.avg_time_between_blocks, min_block_weight=self.MIN_BLOCK_WEIGHT,
        test_mode=self.TEST_MODE,
    )
```
This method is repeated in both v1 and v2, with the only difference being what arguments they pass to _calculate_next_weight. The same goes for other methods in this file. It seems to me the common code could just live in an abstract base class, with the versions overriding only the missing methods, without duplication.
Even better, I would invert this abstraction using values and composition instead: a single DAA class that receives a DAASettings object providing the values. Each of v1 and v2 would then correspond to a DAASettings instance with different values.
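A sketch of the values-and-composition idea: one DAA class parameterized by a settings object, with V1 and V2 as plain instances. The field names are assumptions, and the weight formula is a simplified illustration of the log₂(H × T) relation; the real algorithm also averages over a window and applies MIN_BLOCK_WEIGHT.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class DAASettings:
    """Version-specific values; the algorithm itself stays version-agnostic."""
    avg_time_between_blocks: float  # target solvetime in seconds
    reward_divisor: int             # base reward is divided by this

DAA_V1 = DAASettings(avg_time_between_blocks=30.0, reward_divisor=1)
DAA_V2 = DAASettings(avg_time_between_blocks=7.5, reward_divisor=4)

class DifficultyAdjustmentAlgorithm:
    """Single class; version differences live entirely in DAASettings."""

    def __init__(self, settings: DAASettings) -> None:
        self._settings = settings

    def steady_state_weight(self, hashes_per_second: float) -> float:
        # Simplified W = log2(H * T); illustrative only.
        return math.log2(hashes_per_second * self._settings.avg_time_between_blocks)
```

Adding a hypothetical V3 would then mean declaring one more `DAASettings` instance rather than another subclass.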
* chore: update license notices
* chore: fix module settings in hathorlib
* feat: move runner and nanocontract core to hathorlib
Force-pushed from 1765ecf to e204e54
chore(ci): temporarily remove macOS from CI matrix
chore: update checkpoints (and the checkpoint update script)
Implement feature-gated DAA target reduction: when REDUCE_DAA_TARGET activates, block target time drops from 30s to 7.5s (configurable via the REDUCED_AVG_TIME_BETWEEN_BLOCKS_10X setting, stored in tenths of a second to avoid floats). Block reward is divided proportionally to maintain the same inflation rate.

Key changes:
- Add Feature.REDUCE_DAA_TARGET enum entry
- Add REDUCED_AVG_TIME_BETWEEN_BLOCKS_10X setting (default: 75 = 7.5s)
- Add FeatureService.is_feature_active_for_next_block() for template creation at evaluation boundaries (LOCKED_IN -> ACTIVE transition)
- DAA accepts an optional FeatureService; two-context pattern: verification uses is_feature_active(block), template creation uses is_feature_active_for_next_block(parent_block)
- Builder wires FeatureService into the DAA constructor
- Consensus treats REDUCE_DAA_TARGET as a no-op for transaction rules
- Add comprehensive test suite for the feature activation
- Add DAA transition simulator tools (batch simulation, charts, live dashboard) under tools/daa-reduction/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Split the monolithic hathor/daa.py into a proper package:
- hathor/daa/common.py: shared types (DAAVersion, TestMode) and utility functions
- hathor/daa/v1.py: DifficultyAdjustmentAlgorithmV1 (30s target)
- hathor/daa/v2.py: DifficultyAdjustmentAlgorithmV2 (7.5s target)
- hathor/daa/daa.py: feature-aware facade
- hathor/daa/__init__.py: re-exports for backward compatibility

Also renames BlockTimeVersion to DAAVersion and adds hardcoded regression tests for _calculate_next_weight with V1 and V2 across steady-state, fast, and slow block scenarios.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reduces code duplication in V2 by inheriting from V1 and overriding only the differing behavior (block target and reward). Replaces direct _v1 access in the facade and callers with feature-aware selection via the current best block when no block context is provided. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Force-pushed from e204e54 to 13c6271
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```diff
@@           Coverage Diff            @@
##    chore/claude-setup   #1636   +/- ##
======================================================
- Coverage    85.21%    84.89%    -0.33%
======================================================
  Files          464       469        +5
  Lines        30492     29253     -1239
  Branches      4618      4392      -226
======================================================
- Hits         25984     24834     -1150
+ Misses        3615      3562       -53
+ Partials       893       857       -36
```
Motivation
Implements the design in HathorNetwork/rfcs#99
Reduce the DAA target block time from 30s to 7.5s (4x faster blocks) via the feature activation system. This enables higher throughput while maintaining the same inflation rate by proportionally reducing block rewards.
Acceptance Criteria
- Add REDUCE_DAA_TARGET to the Feature enum with consensus handling (no-op for tx verification, only affects DAA parameters)
- The DAA receives a FeatureService via dependency injection and checks feature state internally
- Add the REDUCED_AVG_TIME_BETWEEN_BLOCKS_10X setting (default 75 = 7.5s)
- Divide the block reward by (AVG_TIME * 10) // REDUCED_10X (e.g. 300 // 75 = 4x)
- Use is_feature_active(block) for verification and is_feature_active_for_next_block(parent_block) for template creation (handles the LOCKED_IN → ACTIVE boundary)

Analysis
The simulation runs a mainnet-like scenario with ~8.93 EH/s hashpower (weight ~67.86), where the REDUCE_DAA_TARGET feature activates at height 600. 1600 total blocks are mined with a DAA window of 134 blocks.

Expected weight drop:

The DAA weight formula includes a log₂(T) term, where T is the target block time:

- Before activation: W = log₂(H × 30)
- After activation: W = log₂(H × 7.5)
- Expected drop: log₂(30) − log₂(7.5) = log₂(30/7.5) = log₂(4) = 2.0

This 2.0 drop is independent of hashpower — it's purely the ratio of target times. The instantaneous weight drop at the activation block (h=599→600) is exactly 2.0 (67.79 → 65.79), matching the theoretical prediction. The steady-state average weight drop shown in the dashboard summary is 1.81, slightly less than 2.0 due to sampling variance over finite windows.
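The claimed hashpower-independence is easy to sanity-check numerically (the hashpower value is taken from the simulation setup):

```python
import math

H = 8.93e18  # ~8.93 EH/s, as in the simulated scenario

# log2(H*30) - log2(H*7.5) == log2(30/7.5) == log2(4) == 2, for any H > 0
drop = math.log2(H * 30) - math.log2(H * 7.5)
assert abs(drop - 2.0) < 1e-9
```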
DAA behavior:
The DAA correctly estimated the hashrate and adjusted mining difficulty right at activation. No overshoot, no prolonged oscillation. Solvetimes converge to the new 7.5s target within ~20 blocks, as visible in the solvetime chart (avg solvetime drops from 25.09s to 7.37s, max transition solvetime 50s).
No decay triggered:
The weight decay mechanism (triggered when solvetime ≥ 3600s) was never needed. The DAA handled the transition smoothly on its own.
Raw data:

The per-block simulation data points (height, weight, solvetime, hashpower, feature state) are available in tools/daa-reduction/simulator/daa_runs/hp8928412586733189120_s0.json.

Checklist
- [ ] If targeting master, confirm this code is production-ready and can be included in future releases as soon as it gets merged