tests: skip local NVML runtime mismatches while preserving CI failures #1739

cpcloud wants to merge 4 commits into NVIDIA:main
Conversation
Auto-sync is disabled for ready-for-review pull requests in this repository. Workflows must be run manually.
Driver upgrades without a reboot can temporarily leave NVML in a driver/library mismatch state, which is a common local developer scenario. Route NVML-dependent checks through shared fixtures/helpers so local runs skip cleanly while CI still fails fast on real NVML init/load regressions. Made-with: Cursor
Run repository hooks and keep the NVML fixture changes compliant by applying ruff import ordering and formatting adjustments. Made-with: Cursor
Apply hook-driven import ordering/spacing updates introduced by rebasing onto upstream/main so pre-commit passes cleanly. Made-with: Cursor
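The commits above describe routing NVML-dependent checks through a shared fixture that skips locally but fails fast in CI. A minimal sketch of that pattern is below; the fixture name comes from the PR, while the `pynvml` probe, the helper name `handle_nvml_init_error`, and the exact error handling are illustrative assumptions, not the actual cuda.bindings implementation.

```python
import os

import pytest


def handle_nvml_init_error(exc: Exception, *, in_ci: bool) -> None:
    """Re-raise NVML init failures in CI; skip the test locally."""
    if in_ci:
        raise exc  # CI fails fast on real NVML init/load regressions
    pytest.skip(f"NVML runtime unavailable locally: {exc}")


@pytest.fixture
def require_nvml_runtime_or_skip_local():
    try:
        import pynvml  # hypothetical NVML binding used for this sketch

        pynvml.nvmlInit()
        pynvml.nvmlShutdown()
    except ImportError:
        pass  # no NVML binding at all: tests use non-NVML code paths
    except Exception as exc:  # e.g. driver/library version mismatch
        handle_nvml_init_error(exc, in_ci=bool(os.environ.get("CI")))
```

Keying the behavior off the `CI` environment variable is one way to get the "skip locally, fail remotely" split; the trade-off (local and remote runs behaving differently) is exactly what the review discussion below debates.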
Force-pushed from 3ae1ece to 4272269.
/ok to test
mdboom left a comment:
This looks great, in terms of centralizing all of this logic.
However, I'm not sure why some tests unrelated to NVML now have this tagging.
And within system/test_system_system.py, we need to keep most of those tests still running even when NVML is totally missing.
cuda_core/tests/test_memory.py (outdated):

    @pytest.mark.parametrize("change_device", [True, False])
    @pytest.mark.usefixtures("require_nvml_runtime_or_skip_local")
Why do the tests here now require a working NVML? These tests predate NVML in cuda_bindings... What's the root cause?
I'll look into why this ends up being required.
get_num_devices will use NVML if it's available. So, yeah, they're pre-existing tests, but they're hitting other APIs now.
Now that these tests route through NVML when it's available, they're a place where we need to skip if NVML is present but fails in an expected way.
What's the root cause?
The root cause is that I upgraded the driver without rebooting. Since NVML is a driver library, I can no longer use it without a reboot.
I don't want to reboot to keep working in the repo, especially if I'm working on something unrelated to any of this code.
Thanks for the clarification. That makes sense.
However, I'm not sure we want to pave over broken installations like this (though I feel your pain about wanting to continue working with a partially-broken install). The test suite can't know whether things are broken because of local installation issues or changes to the code that have broken things, and this would pave over the latter. I know this would eventually get caught in CI, but I'm not a fan of having local and remote testing behave differently.
@cpcloud there is a way to update the driver without rebooting: just unload and reload the kernel modules with rmmod and/or modprobe; see, e.g.,
https://forums.developer.nvidia.com/t/reset-driver-without-rebooting-on-linux/40625/2
Some of our internal infra uses this, for example, to achieve <1 min driver refreshing at node-allocation time.
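The module-reload approach mentioned above can be sketched roughly as follows. This is a hypothetical sequence, not taken from the linked thread: it assumes nothing is holding the GPU (stop display managers and CUDA processes first), requires root, and the exact module names vary by driver generation.

```shell
# Stop anything using the GPU first (display manager, CUDA apps, etc.).
# Unload the NVIDIA kernel modules in dependency order:
sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
# Load the freshly installed driver's modules:
sudo modprobe nvidia
# NVML should now match the driver version again:
nvidia-smi
```

If `rmmod` reports the modules are in use, a process still has the GPU open; `lsof /dev/nvidia*` is one way to find it.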
cuda_core/tests/system/test_system_system.py:

    from .conftest import skip_if_nvml_unsupported

    pytestmark = skip_if_nvml_unsupported
Most of the tests in this file are expected to run, other than test_gpu_driver_version, even without an NVML available.
I'll look into why this ends up being required.
Remove the module-level pytestmark from test_system_system.py and the per-test require_nvml_runtime_or_skip_local markers from test_memory.py. These tests don't inherently need NVML; the NVML-specific tests already have individual @skip_if_nvml_unsupported decorators. Made-with: Cursor
/ok to test
mdboom left a comment:
I'm kind of wary of this change papering over failures on broken installations. Maybe let's get a second opinion?
Not worth the review effort/delay. By the time anyone else chimes in, everyone will have forgotten why this was done and it'll remain in limbo.
Removed preview folders for the following PRs: PR #1739
Agreed with Mike. The system needs to be in a sane state. Running
Summary

- Route NVML-dependent checks through shared require_nvml_runtime_or_skip_local fixtures in cuda_bindings and cuda_core tests.

Test plan

- pixi run --manifest-path cuda_bindings pytest cuda_bindings/tests --override-ini norecursedirs=examples -k "not test_cufile"
- CI=1 pixi run --manifest-path cuda_bindings pytest cuda_bindings/tests/nvml/test_init.py::test_init_ref_count (expected error on NVML mismatch in CI mode)
- pixi run --manifest-path cuda_core test (currently blocked in this workspace by unrelated import mismatch: cuda.core._resource_handles does not export expected C function create_culink_handle)
- CI=1 pixi run --manifest-path cuda_core pytest cuda_core/tests/system/test_system_system.py::test_num_devices (same unrelated import mismatch blocker)

Made with Cursor