vulkan: Support asymmetric FA in coopmat2 path #21753

Open

jeffbolznv wants to merge 1 commit into ggml-org:master from jeffbolznv:fa_cm2_mixed

Conversation

@jeffbolznv (Contributor)

Overview

There has been some recent interest in, and experimentation with, mixed quantization types for FA. I had originally designed the cm2 FA shader with this in mind (because I didn't realize it wasn't supported at the time!); this change adds the missing pieces and enables it.

Also support Q1_0 since people have been trying that out (seems crazy, but who knows).

We should be able to do similar things in the coopmat1/scalar path, but there's another change open against the scalar path and I don't want to conflict.

Requirements

  • I have read and agree with the contributing guidelines
  • AI usage disclosure: YES, I used Cursor (composer-2-fast) for most of this change.

@jeffbolznv jeffbolznv requested review from a team and ggerganov as code owners April 11, 2026 01:33
@github-actions github-actions bot added the testing (Everything test related), Vulkan (Issues specific to the Vulkan backend), and ggml (changes relating to the ggml tensor library for machine learning) labels on Apr 11, 2026
@Green-Sky (Collaborator)

I have not run it, but at first glance it looks like iq4_nl is missing from the code.

Thanks for providing this, I will see what testing I can get out of it. But from the testing I did with q1_0 with nkvo, q1_0 hurts the performance of the model too much. It is more or less equivalent to cutting the model parameter count to a third or a quarter.

@jeffbolznv (Contributor, Author)

I took out iq4_nl because I wanted to avoid allocating and initializing the shared-memory table when it's not used. Is this something people actually use? If so, I can put it back.
