backport: Merge bitcoin/bitcoin#28644, 28184 #7091
vijaydasmp wants to merge 2 commits into dashpay:develop
Conversation
✅ No Merge Conflicts Detected: This PR currently has no conflicts with other open PRs.
Force-pushed 2904f22 to 1a61563
Force-pushed 343e254 to 1bf441f
Force-pushed 2aa6e19 to 0ebc185
Force-pushed 0ebc185 to 8ac187e
This pull request has conflicts, please rebase.
Force-pushed 8ac187e to 6b3af01
Force-pushed 6b3af01 to 5d091d3
Walkthrough: The pull request contains changes across the Qt GUI layer, functional tests, and configuration scripts.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
test/functional/interface_usdt_validation.py (1)
Lines 99-137: ⚠️ Potential issue | 🔴 Critical

`events` is never populated, so line 136 will always fail. Line 136 asserts `len(events) == BLOCKS_EXPECTED`, but `handle_blockconnected()` never appends to `events`. This makes the test fail even when tracepoints are correct.

Suggested fix:

```diff
 def handle_blockconnected(_, data, __):
     event = ctypes.cast(data, ctypes.POINTER(Block)).contents
     self.log.info(f"handle_blockconnected(): {event}")
-    block_hash = bytes(event.hash[::-1]).hex()
-    block = expected_blocks[block_hash]
-    assert_equal(block["hash"], block_hash)
-    assert_equal(block["height"], event.height)
-    assert_equal(len(block["tx"]), event.transactions)
-    assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs)
-    assert_equal(0, event.sigops)  # no sigops in coinbase tx
-    # only plausibility checks
-    assert event.duration > 0
-    del expected_blocks[block_hash]
+    events.append(event)
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `test/functional/interface_usdt_validation.py` around lines 99-137: the test fails because the `events` list is never populated; initialize a shared list (e.g., `events = []`) before defining `handle_blockconnected` and append the received event (or a lightweight copy of it) inside `handle_blockconnected` (the callback registered via `bpf["block_connected"].open_perf_buffer(handle_blockconnected)`); ensure you append the same event object shape expected later (the variable named `event` used in the later loop) so the final assertions and `len(events) == BLOCKS_EXPECTED` succeed.

src/qt/bitcoingui.cpp (1)
Lines 555-620: ⚠️ Potential issue | 🔴 Critical

Use `this` as the connection context for wallet actions instead of the null `m_wallet_controller`.

`createActions()` executes during construction (line 157) when `m_wallet_controller` is still `nullptr` (initialized at line 122). Using `nullptr` as the `connect` context object on lines 555, 583, 609, 612, and 618 bypasses Qt's automatic signal-slot lifecycle management. When the context object is null, Qt cannot properly manage connection cleanup, creating a resource management issue.

Replace `m_wallet_controller` with `this` as the context object and add null checks inside each lambda, as the suggested fix shows. This ensures wallet actions are properly lifecycle-managed through `BitcoinGUI`.

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `@src/qt/bitcoingui.cpp` around lines 555 - 620, The connect calls for wallet-related actions (the QMenu aboutToShow lambda using m_wallet_controller->listWalletDir(), and the triggered handlers for m_restore_wallet_action/CreateWalletActivity/RestoreWalletActivity/CloseWalletActivity/closeAllWallets via m_close_wallet_action, m_create_wallet_action, m_close_all_wallets_action) currently use m_wallet_controller as the connection context even though it can be nullptr during construction; change the context argument in these connect(...) calls from m_wallet_controller to this, and inside each lambda add a guard that returns early if m_wallet_controller is nullptr before calling methods like m_wallet_controller->listWalletDir(), creating activities that take m_wallet_controller, or calling m_wallet_controller->closeWallet()/closeAllWallets(); this preserves Qt's automatic connection cleanup while avoiding dereferencing a null m_wallet_controller.
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
src/qt/bitcoingui.cpp
test/functional/interface_usdt_net.py
test/functional/interface_usdt_utxocache.py
test/functional/interface_usdt_validation.py
test/fuzz/test_runner.py
test/lint/lint-python.py
thepastaclaw
left a comment
Code Review
Verified 2 findings: (1) wallet menu action lambdas connected before m_wallet_controller is initialized — may crash or silently no-op, (2) USDT validation test assertion now vacuous after event recording refactor.
Reviewed commit: 5d091d3
🔴 1 blocking | 🟡 1 suggestion
```diff
@@ -568,7 +568,7 @@ void BitcoinGUI::createActions()
                     continue;
                 }

-                connect(action, &QAction::triggered, [this, path] {
+                connect(action, &QAction::triggered, m_wallet_controller, [this, path] {
                     auto activity = new OpenWalletActivity(m_wallet_controller, this);
                     connect(activity, &OpenWalletActivity::opened, this, &BitcoinGUI::setCurrentWallet, Qt::QueuedConnection);
                     connect(activity, &OpenWalletActivity::opened, rpcConsole, &RPCConsole::setCurrentWallet, Qt::QueuedConnection);
@@ -580,7 +580,7 @@ void BitcoinGUI::createActions()
                     action->setEnabled(false);
                 }
             });
-            connect(m_restore_wallet_action, &QAction::triggered, [this] {
+            connect(m_restore_wallet_action, &QAction::triggered, m_wallet_controller, [this] {
                 //: Name of the wallet data file format.
                 QString name_data_file = tr("Wallet Data");

@@ -606,16 +606,16 @@ void BitcoinGUI::createActions()
                 auto backup_file_path = fs::PathFromString(backup_file.toStdString());
                 activity->restore(backup_file_path, wallet_name.toStdString());
             });
-            connect(m_close_wallet_action, &QAction::triggered, [this] {
+            connect(m_close_wallet_action, &QAction::triggered, m_wallet_controller, [this] {
                 m_wallet_controller->closeWallet(walletFrame->currentWalletModel(), this);
             });
-            connect(m_create_wallet_action, &QAction::triggered, [this] {
+            connect(m_create_wallet_action, &QAction::triggered, m_wallet_controller, [this] {
                 auto activity = new CreateWalletActivity(m_wallet_controller, this);
                 connect(activity, &CreateWalletActivity::created, this, &BitcoinGUI::setCurrentWallet);
                 connect(activity, &CreateWalletActivity::created, rpcConsole, &RPCConsole::setCurrentWallet);
                 activity->create();
             });
-            connect(m_close_all_wallets_action, &QAction::triggered, [this] {
+            connect(m_close_all_wallets_action, &QAction::triggered, m_wallet_controller, [this] {
```
🔴 Blocking: Wallet menu action lambdas are connected before m_wallet_controller is initialized
Verified. BitcoinGUI calls createActions() from the constructor at line 157, but m_wallet_controller is only assigned later in setWalletController() at lines 967-972. The backport changed the wallet menu/action connections at lines 555, 571, 583, 609, 612, and 618 to use m_wallet_controller as the context object at connection time, when it is still null. That means the Open Wallet, Restore Wallet, Close Wallet, Create Wallet, and Close All Wallets lambda connections are not established correctly. This matches the upstream revert of commit 7066e8996d in f09bfab4af / merge 565c55119b.
source: ['codex']
```diff
@@ -98,7 +97,6 @@ def __repr__(self):
             usdt_contexts=[ctx], debug=0, cflags=["-Wno-error=implicit-function-declaration"])

         def handle_blockconnected(_, data, __):
-            nonlocal events, blocks_checked
             event = ctypes.cast(data, ctypes.POINTER(Block)).contents
             self.log.info(f"handle_blockconnected(): {event}")
             block_hash = bytes(event.hash[::-1]).hex()
@@ -111,7 +109,6 @@ def handle_blockconnected(_, data, __):
             # only plausibility checks
             assert event.duration > 0
             del expected_blocks[block_hash]
-            blocks_checked += 1

         bpf["block_connected"].open_perf_buffer(
             handle_blockconnected)
@@ -136,7 +133,7 @@ def handle_blockconnected(_, data, __):
             # only plausibility checks
             assert event.duration > 0
             del expected_blocks[block_hash]
-        assert_equal(BLOCKS_EXPECTED, blocks_checked)
+        assert_equal(BLOCKS_EXPECTED, len(events))
```
🟡 Suggestion: Final block-connected assertion was changed to use an event list that is never populated
Verified. The refactor removed blocks_checked from the callback, but the replacement check at line 136 now asserts len(events) == BLOCKS_EXPECTED even though events is only initialized at line 90 and never appended to in handle_blockconnected(). The callback still performs inline validation and deletes from expected_blocks, so the final len(events) assertion is guaranteed to stay at 0 and does not measure traced callbacks correctly.
source: ['codex']
UdjinM6
left a comment
- should drop gui 765
- 27944 should be partial because we don't have `test/functional/interface_usdt_mempool.py` yet
faa190b test: Fuzz merge with -use_value_profile=0 for now (MarcoFalke)

Pull request description: Seems odd that this has to be done, but for now there are (unknown) size limits on the qa-assets repo. Also, a larger size means that cloning and iterating over the files takes a longer time. Not sure how to measure the net impact of this, but with some backups reverting this commit, it can be limited on the downside?

ACKs for top commit:
dergoegge: ACK faa190b

Tree-SHA512: 9f8b3f4526f60e4ff6fca97859a725d145a8339c216bd15c92fad7e53f84308745fee47727527de459c0245ef9d474a9dc836fee599ab2b556b519bd900b9a33
f904777 lint: fix custom mypy cache dir setting (Fabian Jahr)

Pull request description: fixes bitcoin#28183. The custom cache dir for `mypy` can only be set via an environment variable; setting the `MYPY_CACHE_DIR` variable in the program is not sufficient. This error was introduced while translating the shell script to python. See also the mypy documentation: https://mypy.readthedocs.io/en/stable/config_file.html#confval-cache_dir

ACKs for top commit:
MarcoFalke: lgtm ACK f904777

Tree-SHA512: 7e8fb0cd06688129bd46d1afb8647262eb53d0f60b1ef6f288fedaa122d906fb62c9855e8bb0d6c6297d41a87a47d3cec7a00df55a7d033947937dfe23d07ba7
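The mypy commit message above describes an environment-variable-only setting. A minimal standalone sketch of that pattern (not the actual `lint-python.py` code; the demo path and the echo child process are illustrative): a child process only sees `MYPY_CACHE_DIR` if the parent injects it into the child's environment, which is why assigning a same-named Python variable did nothing.

```python
import os
import subprocess
import sys

# Illustrative cache path, not taken from the lint script.
MYPY_CACHE_DIR = "/tmp/.mypy_cache_demo"

# A plain module-level assignment never reaches a child process.
# The fix pattern: copy the environment and inject the variable.
env = dict(os.environ, MYPY_CACHE_DIR=MYPY_CACHE_DIR)

# Instead of actually invoking mypy, echo the variable back from a
# child Python process so the demo has no external dependency.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['MYPY_CACHE_DIR'])"],
    env=env, capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # → /tmp/.mypy_cache_demo
```

The same `env=` argument works for any tool whose configuration is read from the process environment rather than from its command line.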
✅ Review complete (commit 2e439a5)

Re-checked current head

I prepared the minimal fix on my fork here:

Cherry-pick command if helpful: `git fetch https://github.com/thepastaclaw/dash.git tracker-974 && git cherry-pick 62c1a52875`

I also tried pushing directly to

The earlier comment from pastaclaw stating that “one real issue still remains in
thepastaclaw
left a comment
Code Review
This backport is confined to Python test/lint updates. The partial merge of bitcoin#27944 into test/functional/interface_usdt_validation.py is broken: the final count assertion now compares against an empty events list because handle_blockconnected was never updated to append. The root cause is a missing prerequisite from the earlier partial backport of bitcoin#27831 that Dash should have applied (converting the callback to events.append(event)), not an error in bitcoin#27944 itself. Two real secondary concerns remain in interface_usdt_utxocache.py: the deferred ctypes cast views and the order-sensitive zip comparison. CodeRabbit's summary describes a different (newer-tip) changeset with Qt/GUI edits that are not part of this detached SHA; nothing from it survives validation.
Reviewed commit: 1490033
🔴 1 blocking | 🟡 2 suggestions
```diff
         def handle_blockconnected(_, data, __):
-            nonlocal events, blocks_checked
             event = ctypes.cast(data, ctypes.POINTER(Block)).contents
             self.log.info(f"handle_blockconnected(): {event}")
             block_hash = bytes(event.hash[::-1]).hex()
             block = expected_blocks[block_hash]
             assert_equal(block["hash"], block_hash)
             assert_equal(block["height"], event.height)
             assert_equal(len(block["tx"]), event.transactions)
             assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs)
             assert_equal(0, event.sigops)  # no sigops in coinbase tx
             # only plausibility checks
             assert event.duration > 0
             del expected_blocks[block_hash]
-            blocks_checked += 1

         bpf["block_connected"].open_perf_buffer(
             handle_blockconnected)

         self.log.info(f"mine {BLOCKS_EXPECTED} blocks")
         block_hashes = self.generatetoaddress(self.nodes[0],
                                               BLOCKS_EXPECTED, ADDRESS_BCRT1_UNSPENDABLE)
         for block_hash in block_hashes:
             expected_blocks[block_hash] = self.nodes[0].getblock(block_hash, 2)

         bpf.perf_buffer_poll(timeout=200)

         self.log.info(f"check that we correctly traced {BLOCKS_EXPECTED} blocks")
         for event in events:
             block_hash = bytes(event.hash[::-1]).hex()
             block = expected_blocks[block_hash]
             assert_equal(block["hash"], block_hash)
             assert_equal(block["height"], event.height)
             assert_equal(len(block["tx"]), event.transactions)
             assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs)
             assert_equal(0, event.sigops)  # no sigops in coinbase tx
             # only plausibility checks
             assert event.duration > 0
             del expected_blocks[block_hash]
-        assert_equal(BLOCKS_EXPECTED, blocks_checked)
+        assert_equal(BLOCKS_EXPECTED, len(events))
         assert_equal(0, len(expected_blocks))
```
🔴 Blocking: events is never populated — assert_equal(BLOCKS_EXPECTED, len(events)) will always fail
When this test actually runs on a host with bcc/BPF, it fails deterministically. handle_blockconnected (lines 99–111) still performs inline validation and deletes from expected_blocks, but it never appends to events. The cleanup from bitcoin#27944 (commit afc0224) then replaced the old blocks_checked counter with assert_equal(BLOCKS_EXPECTED, len(events)) at line 136, and added a post-poll for event in events: loop (lines 125–135) that is dead code because events is always []. Result: the final assertion compares 2 == 0 and raises, and any real tracepoint regression in the handler would be silently swallowed anyway (ctypes callbacks eat exceptions).
Root cause is a backport prerequisite gap, not an error in bitcoin#27944. Dash's earlier partial backport of bitcoin#27831 (commit c781f1a) introduced the events = [] list and the post-poll validation loop, but did not convert the callback body to events.append(event) as upstream bitcoin#27831 did. That left the callback in an inconsistent state that was latent while blocks_checked still existed; bitcoin#27944's afc0224 cleanup exposes the gap.
Fix: either fully adopt the upstream pattern by reducing the callback to events.append(event) and letting the post-poll loop do all validation (matches what this same PR already does in interface_usdt_utxocache.py), or adapt the backport to Dash's current state by restoring the blocks_checked counter. The first option is preferred because it is what upstream does and it avoids the documented BPF-callback exception-swallowing trap.
💡 Suggested change

```python
        def handle_blockconnected(_, data, __):
            event = ctypes.cast(data, ctypes.POINTER(Block)).contents
            self.log.info(f"handle_blockconnected(): {event}")
            events.append(event)

        bpf["block_connected"].open_perf_buffer(
            handle_blockconnected)

        self.log.info(f"mine {BLOCKS_EXPECTED} blocks")
        block_hashes = self.generatetoaddress(self.nodes[0],
                                              BLOCKS_EXPECTED, ADDRESS_BCRT1_UNSPENDABLE)
        for block_hash in block_hashes:
            expected_blocks[block_hash] = self.nodes[0].getblock(block_hash, 2)

        bpf.perf_buffer_poll(timeout=200)

        self.log.info(f"check that we correctly traced {BLOCKS_EXPECTED} blocks")
        for event in events:
            block_hash = bytes(event.hash[::-1]).hex()
            block = expected_blocks[block_hash]
            assert_equal(block["hash"], block_hash)
            assert_equal(block["height"], event.height)
            assert_equal(len(block["tx"]), event.transactions)
            assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs)
            assert_equal(0, event.sigops)  # no sigops in coinbase tx
            # only plausibility checks
            assert event.duration > 0
            del expected_blocks[block_hash]
        assert_equal(BLOCKS_EXPECTED, len(events))
        assert_equal(0, len(expected_blocks))

        bpf.cleanup()
```
source: ['claude', 'codex']
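The finding above notes in passing that "ctypes callbacks eat exceptions", which is why inline assertions in a BPF perf-buffer callback can silently pass. A minimal standalone demo of that trap (no BPF required; it uses libc's `qsort` via `ctypes` purely as a stand-in for any C code invoking a Python callback): an `AssertionError` raised inside the callback is printed to stderr and discarded, and the C caller proceeds as if the callback returned 0.

```python
import ctypes

# Assumes a POSIX libc providing qsort(); CDLL(None) loads the
# current process's symbols on Linux/macOS.
libc = ctypes.CDLL(None)
libc.qsort.restype = None

CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

calls = []

def comparator(a, b):
    calls.append((a[0], b[0]))
    # This raise is swallowed by ctypes: it is printed, not propagated,
    # and the callback behaves as if it returned 0.
    assert False, "this AssertionError never reaches the caller"

arr = (ctypes.c_int * 4)(3, 1, 4, 2)
libc.qsort(arr, ctypes.c_size_t(4),
           ctypes.c_size_t(ctypes.sizeof(ctypes.c_int)),
           CMPFUNC(comparator))

# qsort() returned normally even though every comparison raised.
print("callback invoked:", len(calls) > 0)
print("qsort completed without raising")
```

This is why both upstream fixes move validation out of the callback into a post-poll loop, where a failing `assert_equal` actually fails the test.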
```diff
@@ -326,19 +313,18 @@ def handle_utxocache_spent(_, data, __):
                 "is_coinbase": block_index == 0,
             })

         assert_equal(EXPECTED_HANDLE_ADD_SUCCESS, len(expected_utxocache_adds))
         assert_equal(EXPECTED_HANDLE_SPENT_SUCCESS,
                      len(expected_utxocache_spents))

         bpf.perf_buffer_poll(timeout=200)
         bpf.cleanup()

+        assert_equal(EXPECTED_HANDLE_ADD_SUCCESS, len(expected_utxocache_adds), len(actual_utxocache_adds))
+        assert_equal(EXPECTED_HANDLE_SPENT_SUCCESS, len(expected_utxocache_spents), len(actual_utxocache_spents))

         self.log.info(
             f"check that we successfully traced {EXPECTED_HANDLE_ADD_SUCCESS} adds and {EXPECTED_HANDLE_SPENT_SUCCESS} spent")
-        assert_equal(0, len(expected_utxocache_adds))
-        assert_equal(0, len(expected_utxocache_spents))
-        assert_equal(EXPECTED_HANDLE_ADD_SUCCESS, handle_add_succeeds)
-        assert_equal(EXPECTED_HANDLE_SPENT_SUCCESS, handle_spent_succeeds)
+        for expected_utxo, actual_event in zip(expected_utxocache_adds + expected_utxocache_spents,
+                                               actual_utxocache_adds + actual_utxocache_spents):
+            compare_utxo_with_event(expected_utxo, actual_event)
```
🟡 Suggestion: Deferred ctypes.cast(...).contents objects may alias reused perf-buffer memory
The refactor moves comparison out of the perf-buffer callback and stores the event objects for later inspection: actual_utxocache_adds.append(ctypes.cast(data, ctypes.POINTER(UTXOCacheChange)).contents). ctypes .contents is a view onto the underlying C memory, not a deep copy. Once the callback returns, BPF is free to reuse the perf-buffer slot for the next event, so the stored objects can reflect overwritten data by the time compare_utxo_with_event runs after perf_buffer_poll. This can produce confusing comparison failures or, worse, spuriously passing comparisons if the overwrite happens to match. Safest fix is to copy the struct at callback time (e.g. actual_utxocache_adds.append(UTXOCacheChange.from_buffer_copy(event)) or construct a plain Python dict inside the callback) so the saved record is independent of the perf-buffer lifetime.
source: ['codex']
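The aliasing hazard described above can be shown without BPF. In this hypothetical sketch (the `Event` struct and `write_event` helper are invented for illustration, standing in for the perf-buffer slot and the next incoming event), a stored `.contents` view silently changes when the underlying buffer is reused, while `from_buffer_copy()` keeps an independent snapshot:

```python
import ctypes

# Invented single-field struct standing in for UTXOCacheChange.
class Event(ctypes.Structure):
    _fields_ = [("height", ctypes.c_uint32)]

# Stands in for the perf-buffer slot that BPF reuses between events.
buf = ctypes.create_string_buffer(ctypes.sizeof(Event))

def write_event(height):
    # Simulates the kernel writing the next event into the same slot.
    ctypes.cast(buf, ctypes.POINTER(Event)).contents.height = height

write_event(100)
view = ctypes.cast(buf, ctypes.POINTER(Event)).contents  # live view onto buf
snapshot = Event.from_buffer_copy(buf)                   # independent deep copy

write_event(200)  # the "next event" overwrites the slot

print(view.height)      # 200: the stored view reflects the overwrite
print(snapshot.height)  # 100: the copy preserved the original event
```

This is why copying the struct at callback time (`from_buffer_copy`, or extracting fields into a plain dict) makes the saved record independent of the perf-buffer lifetime.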
```python
        for expected_utxo, actual_event in zip(expected_utxocache_adds + expected_utxocache_spents,
                                               actual_utxocache_adds + actual_utxocache_spents):
            compare_utxo_with_event(expected_utxo, actual_event)
```
🟡 Suggestion: Positional zip comparison assumes strict tracepoint delivery order
zip(expected_adds + expected_spents, actual_adds + actual_spents) is position-sensitive, so it relies on BPF perf-buffer events arriving in exactly the order this test iterates over block["tx"]/vin/vout. That typically holds for a single reconnect but is not a guarantee the USDT/BPF pipeline makes, which is why Bitcoin upstream later switched this comparison to an unordered set-based check on keys ({cache_event_to_key(e) for e in actual_utxocache_adds} == expected_add_keys). Worth pulling that follow-up in with this backport or leaving a TODO so the next USDT batch picks it up — otherwise this is a likely source of hard-to-reproduce CI failures once the test actually runs on hardware with BCC/USDT.
source: ['claude']
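A minimal sketch of the order-insensitive variant, using an illustrative namedtuple and key helper (the field names are assumptions, not the exact tracepoint struct):

```python
from collections import namedtuple

# Illustrative stand-in for the tracepoint event struct.
Event = namedtuple("Event", ["txid", "index", "height", "value", "is_coinbase"])

def cache_event_to_key(event):
    # Reduce an event to a hashable identity so comparison ignores arrival order.
    return (event.txid, event.index, event.height, event.value, event.is_coinbase)

expected = [Event("aa", 0, 10, 5000, False), Event("bb", 1, 10, 2500, True)]
actual = [expected[1], expected[0]]  # same events, delivered in a different order

# A positional zip would mismatch here; the set-based check passes.
assert {cache_event_to_key(e) for e in actual} == {cache_event_to_key(e) for e in expected}
```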
thepastaclaw
left a comment
Code Review
Test-infrastructure backport with one deterministic runtime bug: the cherry-pick to test/functional/interface_usdt_validation.py adds events.append() to the callback without removing the inline assertions + del expected_blocks[block_hash] that Dash's earlier partial backport of bitcoin#27831 left in place. The post-hoc for-loop now repeats those same assertions and del, which raises KeyError on the first iteration whenever USDT/BPF is available. The fuzz and mypy-lint backports are clean.
Reviewed commit: dcb8af8
🔴 1 blocking
```python
def handle_blockconnected(_, data, __):
    nonlocal events, blocks_checked
    event = ctypes.cast(data, ctypes.POINTER(Block)).contents
    self.log.info(f"handle_blockconnected(): {event}")
    events.append(event)
    block_hash = bytes(event.hash[::-1]).hex()
    block = expected_blocks[block_hash]
    assert_equal(block["hash"], block_hash)
    assert_equal(block["height"], event.height)
    assert_equal(len(block["tx"]), event.transactions)
    assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs)
    assert_equal(0, event.sigops)  # no sigops in coinbase tx
    # only plausibility checks
    assert event.duration > 0
    del expected_blocks[block_hash]
```
🔴 Blocking: Duplicate assertions + del across callback and post-hoc loop will raise KeyError at runtime
This cherry-pick of bitcoin#27944 adds events.append(event) inside handle_blockconnected to match the upstream pattern that moves assertions out of the callback into a post-hoc loop. However, Dash's earlier partial backport of the prerequisite bitcoin#27831 left the inline assertions and del expected_blocks[block_hash] in the callback (lines 104-112). The result: the callback now both appends each event to events and deletes expected_blocks[block_hash]; the new post-hoc loop at lines 126-136 then iterates events and does block = expected_blocks[block_hash] on keys the callback already deleted, raising KeyError on the first iteration. Upstream's final state (bitcoin/bitcoin c5a63ea) has the callback do only events.append(event) with all assertions + del in the post-hoc loop. Fix: remove the duplicated assertions/del from the callback so it matches upstream, or drop the added events.append and revert the final check to the blocks_checked counter.
💡 Suggested change

```python
def handle_blockconnected(_, data, __):
    event = ctypes.cast(data, ctypes.POINTER(Block)).contents
    self.log.info(f"handle_blockconnected(): {event}")
    events.append(event)
```
source: ['claude']
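The failure mode is plain dict semantics: once the callback has deleted the expected entry, the post-hoc loop's lookup raises on the first iteration. A minimal reproduction, with an illustrative callback and loop in place of the real test:

```python
# Illustrative reproduction of the double-cleanup bug: the callback both
# records the event and deletes its expected entry, so the later
# verification loop looks up a key that is already gone.
expected_blocks = {"deadbeef": {"height": 1}}
events = []

def handle_blockconnected(block_hash):
    events.append(block_hash)
    del expected_blocks[block_hash]  # leftover inline cleanup

handle_blockconnected("deadbeef")

raised = False
try:
    for h in events:  # post-hoc loop added by the cherry-pick
        block = expected_blocks[h]  # KeyError: callback already deleted it
except KeyError:
    raised = True

print(raised)  # True
```

Keeping the cleanup in exactly one place, either the callback or the post-hoc loop, avoids this regardless of which side is chosen.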
thepastaclaw
left a comment
Code Review
PR #7091 contains two clean upstream test-tooling backports (bitcoin#28644 fuzz merge -use_value_profile=0, bitcoin#28184 mypy cache dir). The prior blocking finding in test/functional/interface_usdt_validation.py is resolved — that file is no longer part of the PR's diff against develop, since the bitcoin#27944 cherry-pick that previously touched it has been dropped from this PR. No in-scope findings remain.
Reviewed commit: 2e439a5
Bitcoin Backporting work