NeuronX port of MosaicML MPT-7B-Chat (6.7B params) with:
- ALiBi slopes stored as a TP-sharded weight parameter
- Position bias computed at runtime, flash attention disabled
- Fused QKV from the HF checkpoint split during weight conversion
- LayerNorm without bias

Validated: 54.84% greedy, 97.50% teacher-forced (job 7769).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use consistent CE/TG column table format across all contrib models. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Description
NeuronX Distributed Inference port of mosaicml/mpt-7b-chat, a 6.7B-parameter decoder-only transformer with ALiBi attention. NXDI has no native ALiBi support, so per-head slopes are stored as a weight parameter that gets TP-sharded. Position bias is computed at runtime and added to attention scores. Flash attention is disabled since NKI kernels cannot accept additive bias tensors.
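For illustration, a minimal PyTorch sketch of this scheme; the function names, head count, and sequence length here are assumptions for the example, not the PR's actual code:

```python
import torch

def build_alibi_slopes(num_heads: int) -> torch.Tensor:
    # Standard ALiBi geometric slopes 2^(-8i/n) for head i = 1..n
    # (exact for power-of-two head counts such as MPT's 32).
    exponents = torch.arange(1, num_heads + 1, dtype=torch.float32)
    return torch.pow(2.0, -8.0 * exponents / num_heads)

def alibi_position_bias(slopes: torch.Tensor, seq_len: int) -> torch.Tensor:
    # Entry [i, j] is the relative offset j - i (non-positive for past keys).
    positions = torch.arange(seq_len)
    distance = positions[None, :] - positions[:, None]           # (seq, seq)
    # One bias matrix per head. Because the slopes live in a TP-sharded
    # weight parameter, each rank computes bias only for its local heads.
    return slopes[:, None, None] * distance[None, :, :].float()  # (heads, seq, seq)

# The bias is added to raw attention scores before softmax; the causal mask
# still removes future positions. An additive bias input like this is what
# the NKI flash-attention kernels cannot accept, hence flash attention is off.
scores = torch.randn(32, 128, 128)  # (heads, q_len, k_len)
scores = scores + alibi_position_bias(build_alibi_slopes(32), seq_len=128)
```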
Model Information
Model Name: MPT-7B-Chat
Model Architecture: Decoder-only transformer with ALiBi attention (no position embeddings), 32 MHA heads, 32 layers, LayerNorm without bias, GELU, fused QKV, tied embeddings
Purpose: Chat/instruction following
Checklist
Required Components
- Integration test added (test/integration/test_model.py)
- Model source code added (src/)

Optional Components
Folder Structure
Testing
Model was compiled and tested with TP=1, batch_size=1, seq_len=128, bfloat16 on trn1.32xlarge.
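For reference, the tested settings expressed as a plain dictionary; the key names are illustrative, not NxDI's actual config API:

```python
# Illustrative summary of the tested configuration; not NxDI's real config object.
test_config = {
    "tp_degree": 1,            # tensor parallelism
    "batch_size": 1,
    "seq_len": 128,
    "dtype": "bfloat16",
    "instance_type": "trn1.32xlarge",
}
```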
Test Results:

| Metric | Match Rate |
| --- | --- |
| Greedy token match | 54.84% |
| Teacher-forced token match | 97.50% |
The lower greedy match rate compared to non-ALiBi models is expected: BF16 precision differences in the additive position bias compound during autoregressive generation. The high teacher-forced rate (97.50%) confirms the weights are correctly ported.
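As a point of reference, a minimal sketch of how the two metrics can be computed against a CPU reference; token_match_rate is a hypothetical helper, not the repo's actual test harness:

```python
import torch

def token_match_rate(reference_ids: torch.Tensor, candidate_ids: torch.Tensor) -> float:
    """Fraction of positions where two token sequences agree."""
    n = min(reference_ids.numel(), candidate_ids.numel())
    return (reference_ids[:n] == candidate_ids[:n]).float().mean().item()

# Greedy: the device model generates autoregressively, so one diverging
# token changes every subsequent input and mismatches compound.
# Teacher-forced: the CPU reference tokens are fed as input at each step
# and only the next-token argmax is compared, so errors cannot compound;
# a high rate here isolates weight-porting correctness from precision drift.
```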
Compatibility
Tested with:
Additional Information
- ALiBi slopes: stored as a weight parameter (alibi_slopes) that gets TP-sharded. Position bias is computed at runtime from the slopes and token positions, then added to attention scores before softmax.
- Fused QKV: the HF checkpoint stores a single fused Wqkv weight, which is split into separate Q, K, and V projections during weight conversion (see the sketch after this list).
- LayerNorm: no_bias=True for all LayerNorm layers.
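A hedged sketch of the fused-QKV split during conversion; the source key follows the MPT HF checkpoint layout, while the target key names are illustrative stand-ins for the real NxDI module names:

```python
import torch

def split_fused_qkv(state_dict: dict, layer_idx: int, hidden_size: int) -> dict:
    # MPT stores the attention projection as one fused (3*hidden, hidden) matrix.
    fused = state_dict[f"transformer.blocks.{layer_idx}.attn.Wqkv.weight"]
    q, k, v = torch.split(fused, hidden_size, dim=0)
    # Illustrative target names; the actual converter maps to NxDI's modules.
    return {
        f"layers.{layer_idx}.attention.q_proj.weight": q,
        f"layers.{layer_idx}.attention.k_proj.weight": k,
        f"layers.{layer_idx}.attention.v_proj.weight": v,
    }
```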
Related Issues
N/A
vLLM Integration
By submitting this PR, I confirm that: