UPSTREAM PR #21554: hexagon: optimization for HMX mat_mul #1346
Open
Conversation
Introduce hmx-worker (dedicated thread for HMX compute) to overlap HMX matmul with HVX dequant/DMA stages in the pipeline path, replacing the previous synchronous HMX calls that blocked the main thread.
Store the boolean in a local variable to avoid loading the atomic twice
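A minimal sketch of the pattern this commit describes, assuming a hypothetical `std::atomic<bool>` flag; the names below are illustrative, not the backend's actual fields:

```cpp
#include <atomic>

std::atomic<bool> hmx_job_pending{false};  // hypothetical flag name

void hmx_worker_step() {
    // Read the atomic once and keep the result in a local, instead of
    // paying for two atomic loads on the same flag in one iteration.
    const bool pending = hmx_job_pending.load(std::memory_order_acquire);
    if (pending) {
        // ... run the queued HMX job ...
        hmx_job_pending.store(false, std::memory_order_release);
    }
}
```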
No meaningful performance changes were detected across 126809 analyzed functions in the following binaries: build.bin.libllama.so, build.bin.libmtmd.so, build.bin.llama-cvector-generator, build.bin.llama-tts, build.bin.llama-bench, build.bin.libggml-base.so, build.bin.libggml-cpu.so, build.bin.libggml.so, build.bin.llama-tokenize, build.bin.llama-quantize, build.bin.llama-qwen2vl-cli, build.bin.llama-gemma3-cli, build.bin.llama-gguf-split, build.bin.llama-llava-cli, build.bin.llama-minicpmv-cli. 💬 Questions? Tag @loci-dev
Force-pushed from 63ab8d1 to 7638ab4
Note
Source pull request: ggml-org/llama.cpp#21554
Overview
This PR introduces two additional optimizations for the Hexagon HMX backend:
1. Enable asynchronous HMX execution

HMX computations are now executed asynchronously, allowing them to overlap with the HVX dequantization and DMA stages of the pipeline. Previously, synchronous HMX calls blocked the main thread and limited parallelism (see the sketch after this list).

2. Automatic shape search for mat_mul_qk_0_d16a32_out_stationary()

The auto-tuning logic is extended to the out-stationary pipeline path; this functionality was previously only available for non-out-stationary paths.
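As a rough illustration of the asynchronous execution in item 1, here is a minimal single-slot worker sketch. The `hmx_worker` name follows the PR description, but the structure, members, and synchronization below are assumptions, not the backend's actual implementation:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// Single-slot worker: the main thread submits one HMX matmul job and keeps
// running HVX dequant/DMA stages while the worker executes on HMX.
struct hmx_worker {
    std::mutex mtx;
    std::condition_variable cv;
    std::function<void()> job;   // queued job; empty means the slot is free
    bool busy = false;           // a job is currently executing
    bool stop = false;
    std::thread th{[this] { run(); }};

    ~hmx_worker() {
        {
            std::lock_guard<std::mutex> lk(mtx);
            stop = true;
        }
        cv.notify_all();
        th.join();
    }

    void submit(std::function<void()> j) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this] { return !job; });          // wait for a free slot
        job = std::move(j);
        cv.notify_all();
    }

    void sync() {                                      // block until fully idle
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this] { return !job && !busy; });
    }

private:
    void run() {
        for (;;) {
            std::function<void()> j;
            {
                std::unique_lock<std::mutex> lk(mtx);
                cv.wait(lk, [this] { return stop || job; });
                if (stop && !job) return;
                j = std::move(job);
                job = nullptr;                          // free the slot early so
                busy = true;                            // the next tile can queue
                cv.notify_all();
            }
            j();                                        // the actual HMX matmul
            {
                std::lock_guard<std::mutex> lk(mtx);
                busy = false;
            }
            cv.notify_all();
        }
    }
};
```

With one job slot plus a busy flag, the main thread can queue the next HMX tile while the current one executes, which is what lets HVX dequant/DMA for tile i+1 overlap HMX compute for tile i.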
Additional Information
Improved auto-tuning strategy
The previous strategy maximized mc * nc, effectively reducing the number of DMA calls. While this works well for FP16 matmul, it does not accurately model the cost of quantized matmul, where weight and activation loads are not equally expensive: profiling on the 8 Elite Gen 5 indicates that loading quantized weights is approximately 1.5× more expensive than loading activations. Although this is a rough estimate, it produces good enough results.
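A hypothetical sketch of what such a weighted cost model could look like when tiling an (M x K) * (K x N) matmul; the function name, tile-size grid, and footprint check are all illustrative assumptions, with only the 1.5× weight factor taken from the profiling estimate above:

```cpp
#include <cstdint>
#include <utility>

// Illustrative shape search for mc x nc output tiles of an (M x K) * (K x N)
// matmul. Each row-tile reloads the weights and each column-tile reloads the
// activations, so total DMA traffic (in elements) is roughly:
//   activations: ceil(N/nc) * M * K
//   weights:     ceil(M/mc) * K * N
// Weight loads are weighted 1.5x per the profiling estimate above.
static std::pair<int, int> pick_tile_shape(int64_t M, int64_t N, int64_t K,
                                           int64_t vtcm_budget_elems) {
    const double w_weights = 1.5;   // profiled weight/activation cost ratio
    double best_cost = 1e300;
    std::pair<int, int> best{32, 32};

    for (int mc = 32; mc <= 256; mc += 32) {
        for (int nc = 32; nc <= 256; nc += 32) {
            // Hypothetical VTCM footprint check: A, B and C tiles must fit.
            const int64_t footprint =
                (int64_t)mc * K + K * (int64_t)nc + (int64_t)mc * nc;
            if (footprint > vtcm_budget_elems) {
                continue;
            }
            const double act_traffic = double((N + nc - 1) / nc) * double(M) * double(K);
            const double wgt_traffic = double((M + mc - 1) / mc) * double(K) * double(N);
            const double cost = act_traffic + w_weights * wgt_traffic;
            if (cost < best_cost) {
                best_cost = cost;
                best = {mc, nc};
            }
        }
    }
    return best;
}
```

Maximizing mc * nc treats both traffic terms symmetrically; weighting weight loads 1.5× biases the search toward larger mc, since a taller row-tile amortizes each weight load over more output rows.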
Benchmark on 8 Elite Gen 5

Configurations compared: Master; commit a521c91 (HMX async); commit ef501f8 (HMX async and auto-tuning).
Requirements