
[contrib] Add GPT-J-6B NeuronX port #79

Open
dhwanw wants to merge 3 commits into main from contrib/gpt-j-6b
Conversation


@dhwanw dhwanw commented Mar 17, 2026

Description

NeuronX Distributed Inference port of EleutherAI/gpt-j-6b, a 6B-parameter decoder-only transformer. GPT-J uses parallel residual connections (attn + mlp + residual), partial RoPE (64/256 dims with interleaved rotation pattern), single LayerNorm per block, and GELU-new activation. Weight mapping handles the separate Q/K/V projection conversion.
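The Q/K/V conversion mentioned above can be sketched roughly as follows. This is an illustrative reimplementation, not the PR's actual conversion code; the NXDI-side key names (`embed_tokens`, `layers.{i}.self_attn.qkv_proj`) are taken from the mapping described later in this PR, and the fused-QKV layout is an assumption.

```python
import torch

def convert_gptj_weights(hf_state_dict, num_layers=28):
    """Sketch: map HF GPT-J keys to a fused-QKV NXDI-style layout."""
    out = {}
    out["embed_tokens.weight"] = hf_state_dict["transformer.wte.weight"]
    for i in range(num_layers):
        prefix = f"transformer.h.{i}.attn"
        # GPT-J stores separate q/k/v projections; concatenate them
        # along the output dimension into a single qkv matrix.
        out[f"layers.{i}.self_attn.qkv_proj.weight"] = torch.cat(
            [hf_state_dict[f"{prefix}.{p}_proj.weight"] for p in ("q", "k", "v")],
            dim=0,
        )
    return out
```

A fused projection lets the runtime issue one matmul per layer instead of three, which is the usual motivation for this layout.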

Model Information

Model Name: GPT-J-6B
Model Architecture: Decoder-only transformer with parallel residual connections, partial RoPE (64/256 dims, GPT-J interleaved rotation), 16 MHA heads, 28 layers, LayerNorm, GELU-new
Purpose: General text generation

Checklist

Required Components

  • Accuracy Test (test/integration/test_model.py)
    • Validates model generation and coherence
    • Performance benchmarks (TTFT, throughput)
    • Test can compile and run the model on Neuron
  • README.md with the following sections:
    • Usage Example: Clear code example showing how to use the model
    • Compatibility Matrix: Table showing tested Neuron SDK versions and instance types
    • Example Checkpoints: Links to compatible model checkpoints
    • Testing Instructions: Command to run the test suite for the model
  • Source Code (src/)
    • Modeling code following NxD Inference patterns

Optional Components

  • Unit Tests (CPU or Neuron-based)

Folder Structure

/contrib/models/gpt-j-6b/
  README.md
  /src
    modeling_gptj.py
  /test
    /integration
      test_model.py

Testing

Model was compiled and tested with TP=1, batch_size=1, seq_len=128, bfloat16 on trn1.32xlarge.

Test Results:

Test                    Status   Result
Smoke Test              ✅ PASS  Model loads successfully
Greedy Token Matching   ✅ PASS  72.81% average (466/640 tokens)
Teacher-Forced Match    ✅ PASS  98.91% average
Throughput              ✅ PASS  20.2 tok/s

Teacher-forced accuracy of 98.91% confirms that per-token predictions are nearly identical to the HF reference. The greedy divergences stem from small floating-point differences compounding over autoregressive generation.
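For context, the match percentages above are simple position-wise agreement rates between the Neuron output and the HF reference tokens. A minimal sketch of that metric (illustrative only; the PR's test harness may compute it differently):

```python
def match_rate(ref_tokens, test_tokens):
    """Fraction of positions where two token sequences agree."""
    n = min(len(ref_tokens), len(test_tokens))
    matches = sum(r == t for r, t in zip(ref_tokens[:n], test_tokens[:n]))
    return matches / n

# e.g. 466 matching tokens out of 640 gives ~0.728 (72.8%)
```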

Compatibility

Tested with:

  • Neuron SDK Version(s): 2.22
  • Instance Type(s): trn1.32xlarge
  • PyTorch Version: 2.9
  • Python Version: 3.10
  • Configuration: TP=1, batch_size=1, seq_len=128, bfloat16

Additional Information

  • Parallel residual connections: Attention and MLP are computed on the same normalized input: attn(ln(x)) + mlp(ln(x)) + x.
  • Partial RoPE: Only 64 of 256 head dimensions use rotary embeddings, with GPT-J's interleaved rotation (rotate_every_two, not LLaMA's rotate_half).
  • Weight mapping: HF keys (transformer.wte, transformer.h.{i}.attn.{q,k,v}_proj, etc.) are mapped to NXDI format (embed_tokens, layers.{i}.self_attn.qkv_proj, etc.) during weight conversion.
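The first two points can be sketched in plain PyTorch. This is an illustrative reimplementation, not the PR's modeling code; `sin`/`cos` are assumed to already be interleave-duplicated to `rotary_dim`, and the `attn`/`mlp`/`ln` callables are placeholders.

```python
import torch

def rotate_every_two(x):
    # GPT-J interleaved rotation: (x0, x1, x2, x3, ...) -> (-x1, x0, -x3, x2, ...).
    # This differs from LLaMA's rotate_half, which negates and swaps the two
    # contiguous halves of the dimension.
    x1 = x[..., ::2]
    x2 = x[..., 1::2]
    return torch.stack((-x2, x1), dim=-1).flatten(-2)

def apply_partial_rope(x, sin, cos, rotary_dim=64):
    # Partial RoPE: only the first rotary_dim of the 256-dim head is rotated;
    # the remaining dims pass through unchanged.
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]
    x_rot = x_rot * cos + rotate_every_two(x_rot) * sin
    return torch.cat((x_rot, x_pass), dim=-1)

def gptj_block(x, ln, attn, mlp):
    # Parallel residual: attention and MLP both consume the SAME normalized
    # input, and their outputs are summed with the raw residual.
    h = ln(x)
    return x + attn(h) + mlp(h)
```

Note the contrast with a sequential (LLaMA-style) block, where the MLP would instead consume `ln2(x + attn(ln1(x)))`.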

Related Issues

N/A

vLLM Integration

  • This model/feature is intended for use with vLLM
  • Documentation includes vLLM registration instructions

By submitting this PR, I confirm that:

  • I have read and followed the contributing guidelines
  • This is a community contribution and may have limited testing compared to officially-supported models
  • The code follows best practices and is well-documented
  • All required components listed above are included

dhwanw and others added 3 commits March 4, 2026 22:32
…pport

GPT-J requires two non-standard features: partial rotary embeddings
(64/256 dims with interleaved rotation) and parallel residual connections.
Validated at 98.91% teacher-forced token match against HF reference.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use consistent CE/TG column table format across all contrib models.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@dhwanw dhwanw marked this pull request as ready for review March 19, 2026 19:41