Sarvam-30B is a 30B MoE model (128 experts + 1 shared, top-6 routing) with sigmoid scoring, learned expert bias, and a 2.5x routed scaling factor. The custom SarvamRouterTopK implements the exact HF routing behavior. Layer 0 is dense; layers 1-18 are MoE with a separate shared expert MLP. Validation: 61% greedy match, 98.4% teacher-forced match (TP=8, bf16).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Use a consistent CE/TG column table format across all contrib models.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Description
NeuronX Distributed Inference port of sarvamai/sarvam-30b, a 30B-parameter Mixture-of-Experts model (2.4B parameters active per token). Sarvam uses a hybrid dense+MoE architecture (layer 0 dense, layers 1-18 MoE) with 128 routed experts and top-6 routing, sigmoid routing with a learned expert bias, a 2.5x routed scaling factor, shared experts, and Q/K normalization. Key porting challenges were the custom routing logic, shared-expert handling kept separate from the NXDI MoE module, and ParallelEmbedding fixes for correct XLA tracing.
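For orientation, here is a minimal hand-written sketch of the routing behavior described above (sigmoid scores, learned bias added after the sigmoid but before top-k, top-6 selection, affinities taken from the unbiased scores, 2.5x routed scaling, plus a separate shared expert). This is not the SarvamRouterTopK code from this PR; the weight renormalization and the way the shared-expert output is combined are assumptions and are flagged in the comments.

```python
# Hand-written sketch of sigmoid top-6 routing with a learned expert bias.
# Not the actual SarvamRouterTopK implementation; names/shapes are illustrative.
import torch

def route_top6(hidden, router_weight, expert_bias, top_k=6, routed_scaling_factor=2.5):
    # hidden: [tokens, d_model]; router_weight: [n_experts, d_model]; expert_bias: [n_experts]
    logits = hidden @ router_weight.t()                   # [tokens, n_experts]
    scores = torch.sigmoid(logits)                        # unbiased affinities
    selection = scores + expert_bias                      # bias added post-sigmoid, pre-topk
    _, topk_idx = torch.topk(selection, top_k, dim=-1)    # pick top-6 by the biased score
    topk_w = torch.gather(scores, -1, topk_idx)           # weights use the unbiased scores
    # Assumption: selected weights are renormalized before scaling; the PR does not state this.
    topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)
    return topk_idx, topk_w * routed_scaling_factor

def moe_forward(hidden, experts, shared_expert, router_weight, expert_bias):
    topk_idx, topk_w = route_top6(hidden, router_weight, expert_bias)
    routed = torch.zeros_like(hidden)
    for t in range(hidden.size(0)):                       # naive per-token dispatch, for clarity only
        for k in range(topk_idx.size(1)):
            e = topk_idx[t, k].item()
            routed[t] += topk_w[t, k] * experts[e](hidden[t])
    # Assumption: the separate shared-expert MLP output is added to the routed output,
    # as in other shared-expert MoE designs; the PR only says it is handled separately.
    return routed + shared_expert(hidden)
```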
Model Information
Model Name: sarvam-30b
Model Architecture: Decoder-only hybrid dense+MoE transformer -- 19 layers (1 dense + 18 MoE), 128 routed experts with top-6 sigmoid routing + expert bias + 2.5x scaling, 1 shared expert per MoE layer, 64 Q heads / 4 KV heads (GQA), Q/K RMSNorm, RoPE (theta=8M)
Purpose: Multilingual text generation (Indian languages focus)
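The architecture details listed above can be summarized as a config-style dictionary. This is a sketch for quick reference only: the field names follow common Hugging Face MoE config conventions and are assumptions, not values copied from the actual sarvam-30b config.json; only numbers stated in this PR are filled in.

```python
# Summary of the architecture described above; field names are assumed.
SARVAM_30B_ARCH = {
    "num_hidden_layers": 19,        # 1 dense + 18 MoE decoder layers
    "first_k_dense_replace": 1,     # layer 0 uses a dense MLP
    "n_routed_experts": 128,        # routed experts per MoE layer
    "num_experts_per_tok": 6,       # top-6 routing
    "n_shared_experts": 1,          # one shared expert per MoE layer
    "scoring_func": "sigmoid",      # sigmoid router scores plus learned expert bias
    "routed_scaling_factor": 2.5,   # applied to routed expert outputs
    "num_attention_heads": 64,      # query heads
    "num_key_value_heads": 4,       # GQA
    "rope_theta": 8_000_000,        # RoPE base
    "qk_norm": True,                # RMSNorm applied to Q and K
}
```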
Checklist
Required Components
- Integration test (test/integration/test_model.py)
- Model source code (src/)

Optional Components
Folder Structure
Testing
Model was compiled and tested with TP=8, batch_size=1, seq_len=128, bfloat16 on trn1.32xlarge.
Test Results: 61% greedy match, 98.4% teacher-forced match (TP=8, bf16).
Some greedy divergence is expected for MoE models in BF16: small numerical differences in the sigmoid routing, expert bias, and scaling-factor interactions can flip near-tied expert selections, and in free-running decode those differences compound. The high teacher-forced match confirms the port is functionally correct.
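To make the distinction between the two numbers concrete, here is a rough sketch of how greedy match and teacher-forced match are typically computed. The function names and tensor shapes are illustrative only and are not taken from the NxDI test harness.

```python
import torch

def greedy_match(neuron_tokens: torch.Tensor, reference_tokens: torch.Tensor) -> float:
    # Free-running greedy decode on Neuron vs. a reference decode: one early
    # divergence changes every subsequent input, so mismatches compound.
    return (neuron_tokens == reference_tokens).float().mean().item()

def teacher_forced_match(neuron_logits: torch.Tensor, reference_tokens: torch.Tensor) -> float:
    # The reference tokens are fed as input at every step, so each position is
    # scored independently; this isolates per-step numerical correctness.
    predictions = neuron_logits[:, :-1].argmax(dim=-1)   # predict token t+1 from each prefix
    targets = reference_tokens[:, 1:]
    return (predictions == targets).float().mean().item()
```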
Compatibility
Tested with:
Additional Information
- Layer 0 is dense (first_k_dense_replace=1); layers 1-18 use MoE.
- SarvamRouterTopK applies sigmoid activation and then adds the learned expert_bias (post-sigmoid, pre-topk). Affinities use the unbiased sigmoid scores.
- ParallelEmbedding fix: pass the shard_across_embedding, pad, tensor_model_parallel_group, and use_spmd_rank parameters to avoid rank-0 baked constants in XLA tracing.

Related Issues
N/A
vLLM Integration
By submitting this PR, I confirm that: