
CUDA: Limit DeviceSegmentedSort to immediate mode #21718

Open
ORippler wants to merge 2 commits into ggml-org:master from ORippler:osimons/cuda_fix_argsort_graph_capture

Conversation

Collaborator

@ORippler ORippler commented Apr 10, 2026

Overview

DeviceSegmentedSort is currently not capturable in a CUDA graph, so in that case we have to fall back to the slower DeviceSegmentedRadixSort.
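
The fallback above amounts to a capture-aware dispatch. The sketch below is a hypothetical illustration, not the actual patch: the real backend would query the CUDA runtime (e.g. via cudaStreamIsCapturing) and then call cub::DeviceSegmentedSort or cub::DeviceSegmentedRadixSort; here the capture check is mocked with an enum so the selection logic stands on its own.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for the CUDA runtime's stream-capture query:
// reports whether the stream we are about to launch on is currently
// recording a CUDA graph.
enum class CaptureStatus { None, Active };

// Pick the CUB algorithm for ARGSORT. DeviceSegmentedSort is faster but
// (per this PR) not capturable, so during graph capture we must fall
// back to DeviceSegmentedRadixSort.
std::string select_segmented_sort(CaptureStatus status) {
    if (status == CaptureStatus::Active) {
        return "DeviceSegmentedRadixSort"; // graph-capture-safe path
    }
    return "DeviceSegmentedSort";          // faster immediate-mode path
}
```

The point of keeping the check per-launch is that the same op can run in either mode depending on whether the backend happens to be capturing at that moment.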

Perf numbers on RTX Pro 6000 Blackwell Max-Q:
DeviceSegmentedRadixSort in graph mode (i.e. CUDA Graphs)

  ARGSORT(type=f32,ne=[2048,512,1,1],order=1):                 12291 runs -   105.94 us/run -     8192 kB/run -   73.75 GB/s
  ARGSORT(type=f32,ne=[4096,512,1,1],order=1):                 10245 runs -   115.08 us/run -    16384 kB/run -  135.77 GB/s
  ARGSORT(type=f32,ne=[8192,512,1,1],order=1):                  5125 runs -   221.22 us/run -    32768 kB/run -  141.26 GB/s
  ARGSORT(type=f32,ne=[16384,512,1,1],order=1):                 2565 runs -   430.98 us/run -    65536 kB/run -  145.02 GB/s
  ARGSORT(type=f32,ne=[32768,512,1,1],order=1):                 1028 runs -  1185.83 us/run -   131072 kB/run -  105.41 GB/s
  ARGSORT(type=f32,ne=[65536,512,1,1],order=1):                  387 runs -  2748.62 us/run -   262144 kB/run -   90.95 GB/s

DeviceSegmentedSort in immediate mode

  ARGSORT(type=f32,ne=[2048,512,1,1],order=1):                 16388 runs -    71.17 us/run -     8192 kB/run -  109.78 GB/s
  ARGSORT(type=f32,ne=[4096,512,1,1],order=1):                 12294 runs -    81.38 us/run -    16384 kB/run -  192.00 GB/s
  ARGSORT(type=f32,ne=[8192,512,1,1],order=1):                  5125 runs -   240.81 us/run -    32768 kB/run -  129.77 GB/s
  ARGSORT(type=f32,ne=[16384,512,1,1],order=1):                 2565 runs -   406.60 us/run -    65536 kB/run -  153.71 GB/s
  ARGSORT(type=f32,ne=[32768,512,1,1],order=1):                 1285 runs -   873.23 us/run -   131072 kB/run -  143.15 GB/s
  ARGSORT(type=f32,ne=[65536,512,1,1],order=1):                  516 runs -  2288.46 us/run -   262144 kB/run -  109.24 GB/s

Closes #21682

Additional information

There is currently no way to force graph mode in the CUDA backend: each graph is executed only once in ggml_backend_compare_graph_backend, and depending on how the host OS allocates the first node of subsequent test cases' ggml_cgraphs, test-backend-ops ends up with a mix of tests running in graph mode and tests running in immediate mode. While immediate mode can already be forced via GGML_CUDA_DISABLE_GRAPHS, there may be a need to force graph mode for testing purposes as well.

I did a local run in which I patched ggml_backend_compare_graph_backend to evaluate each graph twice, which reliably triggers CUDA graph warmup for each argsort test config.
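
The local patch described above boils down to a warmup pattern: run the graph once so the backend captures and instantiates the CUDA graph, then run it again so the measured execution takes the graph-replay path. A minimal, hypothetical illustration, with ggml's real ggml_backend_graph_compute stubbed out as a callable:

```cpp
#include <cassert>
#include <functional>

// Hypothetical stand-in for the test harness change: evaluate a graph
// twice so that the *second* run is guaranteed to take the CUDA
// graph-replay path (the first run captures and instantiates the graph).
// Returns how many evaluations ran.
int evaluate_with_graph_warmup(const std::function<void()> & compute_graph) {
    compute_graph(); // warmup: eager execution + CUDA graph capture
    compute_graph(); // measured run: replays the captured graph
    return 2;
}
```

This is only a sketch of the idea; the actual patch would live inside ggml_backend_compare_graph_backend and compare results after the second evaluation.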

@ORippler ORippler requested a review from a team as a code owner April 10, 2026 10:37
@ORippler ORippler requested a review from ggerganov as a code owner April 10, 2026 10:38
@github-actions github-actions bot added labels: testing (Everything test related), Nvidia GPU (Issues specific to Nvidia GPUs), ggml (changes relating to the ggml tensor library for machine learning) Apr 10, 2026
@fairydreaming
Collaborator

I tried:

  • running test-backend-ops perf for ARGSORT and TOP_K with large tensor shapes (up to {65536, 512, 1, 1}) - it doesn't crash anymore (was crashing before)
  • running llama-perplexity on DeepSeek V3.2 DSA with large ubatch (4096/8192/16384) - it doesn't crash anymore (was crashing before)
  • running llama-bench on DeepSeek V3.2 DSA with 2048 ubatch - it doesn't crash anymore (crashed yesterday on 8x RTX PRO 6000S, so tested this as well)

Looks good to me!

One thing I'm not sure about is wild inconsistency in code formatting that may force some poor OCD sufferers to seek therapy. 😵‍💫 But that's a minor issue.

Contributor

@am17an am17an left a comment


We also have the env variable GGML_CUDA_DISABLE_GRAPHS, which this PR doesn't seem to respect.
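
For context, GGML_CUDA_DISABLE_GRAPHS is the environment variable the CUDA backend uses to force immediate mode. A hypothetical sketch of the kind of check the reviewer is asking the new dispatch to honor (the variable name comes from this thread; the helper function is illustrative, not ggml's actual code):

```cpp
#include <cstdlib>

// Illustrative helper: treat CUDA graphs as disabled whenever the
// GGML_CUDA_DISABLE_GRAPHS environment variable is set (to any value).
// A capture-aware dispatch would consult this in addition to the
// stream-capture status, so the env var keeps forcing immediate mode.
bool cuda_graphs_disabled() {
    return std::getenv("GGML_CUDA_DISABLE_GRAPHS") != nullptr;
}
```
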


Successfully merging this pull request may close these issues.

Misc. bug: NVIDIA CUB implementations of GGML argsort and top-k cause CUDA graph capture failure for some tensor shapes
