Add SGLang Ray inference example #40
Merged
robertnishihara merged 33 commits into main on Mar 18, 2026
Conversation
Add offline and online inference drivers with Dockerfile and Anyscale job configs for running SGLang on Ray.
…stness

- Dockerfile: use sglang[all]==0.5.8 + sgl-kernel==0.3.21 instead of git fork
- Drivers: add logging, named placement groups, exit codes, better error handling
- Job configs: add NCCL_DEBUG, fix submit path comment
- README: add How It Works, Troubleshooting, local run examples

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Rename sglang_ray_inference -> sglang_inference
- Batch inference (job.yaml + driver_offline.py) fully working with multi-node TP=4, PP=2 using SGLang's use_ray=True mode
- Ray Serve deployment (service.yaml + serve.py) uses the same pattern as the official Ray LLM SGLang integration, with signal monkey-patching
- Add query.py script for testing the service
- Simplify configuration with environment variables

The serving example is still being validated with multi-replica autoscaling. A single replica works; investigating occasional timeouts with multiple replicas.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Robert Nishihara <rkn@anyscale.com>
async_generate() returns an AsyncGenerator by default for streaming. Calling await on a generator without consuming it causes requests to hang indefinitely. Set stream=False to get a single result instead. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
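To make that fix concrete, here is a minimal sketch assuming SGLang's offline Engine API; the model path and prompt are placeholders, and exact parameter defaults may vary by SGLang version.

```python
# Hedged sketch of the stream=False fix, assuming SGLang's offline Engine API.
# The model path and prompt are illustrative placeholders.
import asyncio
import sglang as sgl

async def main():
    engine = sgl.Engine(model_path="Qwen/Qwen3-1.7B")
    # In streaming mode async_generate yields an async generator; awaiting it
    # without iterating is what caused requests to hang. stream=False returns
    # a single completed result instead.
    result = await engine.async_generate(
        "Explain placement groups in one sentence.",
        sampling_params={"max_new_tokens": 64},
        stream=False,
    )
    print(result["text"])

asyncio.run(main())
```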
Each replica spans multiple nodes, so NUM_NODES_PER_REPLICA is more descriptive than NUM_NODES. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
The working_dir is specified in service.yaml and job.yaml, making the Dockerfile WORKDIR redundant. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Ray actor scheduling support has been merged into the main SGLang repository, so we no longer need to install from the experimental fork. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
With min_replicas=1 and NUM_NODES_PER_REPLICA=2, we only need 2 worker nodes at minimum. Previously set to 4, wasting 2 nodes when idle. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Code now runs directly at module level, making it simpler to copy/paste snippets for experimentation. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Replaced instance_type specifications with min_resources/max_resources, allowing Anyscale to automatically select appropriate GPU instances (A10G, L4, L40S, A100, H100) based on availability.

- Service: scales from 1-4 replicas (8-32 GPUs)
- Job: fixed 2 replicas (16 GPUs)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Robert Nishihara <rkn@anyscale.com>
driver_offline.py creates only 1 engine (not 2), so it uses 8 GPUs (2 nodes × 4 GPUs) not 16. Updated compute config accordingly. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Use required_resources instead of min_resources/max_resources. Anyscale will match instance types with 4 GPUs per node (A10G, L4, L40S, A100, H100) based on availability. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Replaced AWS-specific m5.2xlarge with cloud-agnostic required_resources, making the example portable across AWS, GCP, and other cloud providers. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Increased from 64Gi to 128Gi to match available instance types. Anyscale's fuzzy matching searches for instances with memory in the range [requested:2x requested], and 64-128 GiB didn't match g5.12xlarge (192 GiB). With 128 GiB, the range becomes 128-256 GiB which includes g5.12xlarge and similar 4-GPU instances. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
- Upgrade to Ray 2.54.0 for latest features and fixes
- Migrate from deprecated Engine(use_ray=True) to the RayEngine API
- Add A10G GPU specification via required_labels in compute configs
- Fix Ray Serve deployment by removing invalid scheduling_strategy
- Add placement_group_capture_child_tasks for proper multi-node distribution
- Remove unnecessary LD_LIBRARY_PATH from Dockerfile

The RayEngine API automatically creates and distributes SchedulerActor instances across nodes. Ray Serve automatically captures child tasks when placement_group_bundles is specified. Tested with both small (1.7B) and large (30B) models on A10G GPUs.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Robert Nishihara <rkn@anyscale.com>
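To illustrate the capture behavior mentioned above with plain Ray primitives: this is a minimal sketch under an assumed resource shape (two 4-GPU bundles); run_engine and worker are hypothetical stand-ins, not the code in this PR.

```python
# Hedged sketch of placement_group_capture_child_tasks: tasks spawned by the
# parent are scheduled into the same placement group, so a multi-node engine's
# workers land on the reserved nodes. Names and shapes are illustrative.
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init(ignore_reinit_error=True)

# Reserve GPU bundles for a multi-node engine (assumed shape: two 4-GPU nodes).
pg = placement_group([{"GPU": 4, "CPU": 8}] * 2, strategy="PACK")
ray.get(pg.ready())

@ray.remote(num_gpus=1)
def worker():
    # Stand-in for one of the engine's per-GPU workers.
    return ray.get_runtime_context().get_node_id()

@ray.remote(num_cpus=1)
def run_engine():
    # Because the parent was launched with capture_child_tasks=True, these
    # child tasks are scheduled inside the placement group's bundles too.
    return ray.get([worker.remote() for _ in range(8)])

node_ids = ray.get(
    run_engine.options(
        scheduling_strategy=PlacementGroupSchedulingStrategy(
            placement_group=pg,
            placement_group_capture_child_tasks=True,
        )
    ).remote()
)
print(f"workers ran on {len(set(node_ids))} node(s)")
```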
Remove all references to the large 30B model (Qwen3-30B-A3B-Instruct-2507) due to async_generate hanging issues with RayEngine. Keep only the working 1.7B model configuration.

Changes:
- job.yaml: remove large model submit instructions
- service.yaml: remove large model deploy instructions
- README.md: remove all large model deployment examples

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Modify driver_offline.py to run 500 prompts with 256 tokens each, targeting 5-10 minutes of continuous GPU computation for better performance metrics collection.

Changes:
- Expand from 4 to 500 prompts (25x replication of 20 base prompts)
- Increase max_new_tokens from 64 to 256 for longer generation
- Add temperature=0.8 for more diverse outputs
- Print first/last 5 results instead of all responses

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Robert Nishihara <rkn@anyscale.com>
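A minimal sketch of that workload shape, assuming SGLang's offline Engine API; the base prompts here are placeholders, not the ones in the driver.

```python
# Hedged sketch of the expanded workload: 20 base prompts replicated 25x,
# longer generations, and a trimmed log. Prompts are illustrative.
import sglang as sgl

BASE_PROMPTS = [f"Write a short paragraph about topic {i}." for i in range(20)]
prompts = BASE_PROMPTS * 25  # 25x replication of 20 base prompts -> 500 total

sampling_params = {
    "max_new_tokens": 256,  # longer generations for sustained GPU load
    "temperature": 0.8,     # more diverse outputs across repeated prompts
}

engine = sgl.Engine(model_path="Qwen/Qwen3-1.7B")
results = engine.generate(prompts, sampling_params)

# Keep the log readable: print only the first and last 5 results.
for r in results[:5] + results[-5:]:
    print(r["text"][:200])
```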
Update batch inference driver to generate longer responses (512 tokens). Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Modify batch job to process prompts in batches of 10 and use ray.wait to print progress as each batch completes, showing real-time throughput and sample outputs during execution. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
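The progress-reporting pattern described here can be sketched with plain Ray; generate_batch is a hypothetical stand-in for the call into the SGLang engine in driver_offline.py, and only the Ray APIs themselves are assumed.

```python
# Hedged sketch of batching with ray.wait for real-time progress output.
import time
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
def generate_batch(batch):
    time.sleep(1)  # placeholder for the real engine call on this batch
    return [f"response to: {p}" for p in batch]

prompts = [f"prompt {i}" for i in range(100)]
batches = [prompts[i:i + 10] for i in range(0, len(prompts), 10)]

start = time.time()
pending = [generate_batch.remote(b) for b in batches]
done = 0
while pending:
    # ray.wait returns as soon as any batch finishes, so throughput and a
    # sample output can be printed during execution rather than at the end.
    ready, pending = ray.wait(pending, num_returns=1)
    results = ray.get(ready[0])
    done += len(results)
    elapsed = time.time() - start
    print(f"{done} prompts done ({done / elapsed:.1f} prompts/s); sample: {results[0]}")
```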
Reduce verbosity in batch processing with ray.wait. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Remove redundant last 3 samples from output. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Progress messages are sufficient. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Show first prompt of each completed batch in progress output. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
- Increase batch size from 10 to 20
- Reduce total prompts from 500 to 250
- Randomly sample prompts for variety
- Print full prompt and response for first element of each batch

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Batches execute sequentially on a single engine, so ray.wait is not needed. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
Better for Ray scheduling to know all work upfront. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com> Signed-off-by: Robert Nishihara <rkn@anyscale.com>
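A minimal sketch of the resulting pattern from the two commits above: create every future up front so Ray sees all the work, then collect results in submission order. generate_batch is again a hypothetical stand-in for the real engine call.

```python
# Hedged sketch of submitting all batches upfront without ray.wait bookkeeping.
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
def generate_batch(batch):
    return [f"response to: {p}" for p in batch]

prompts = [f"prompt {i}" for i in range(250)]
batches = [prompts[i:i + 20] for i in range(0, len(prompts), 20)]

# A single engine processes the batches sequentially anyway, so fetching
# results in submission order with plain ray.get is sufficient.
refs = [generate_batch.remote(b) for b in batches]
for i, ref in enumerate(refs):
    results = ray.get(ref)
    print(f"batch {i}: {results[0]}")
```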
xyuzh commented on Mar 8, 2026
worker_nodes:
  - instance_type: g5.12xlarge  # 4x A10G
    min_nodes: 4
    max_nodes: 8
Contributor
Author
I see the min_nodes/max_nodes settings are different for the offline and serve configs; is there a reason for this?
Contributor
Your fix is correct: 2 and 8 are right, since each replica spans 2 nodes and the service autoscales from 1 to 4 replicas (1 × 2 = 2 nodes minimum, 4 × 2 = 8 maximum).
xyuzh commented on Mar 8, 2026
instance_type: m5.2xlarge  # CPU-only head
worker_nodes:
  - instance_type: g5.12xlarge  # 4x A10G
    min_nodes: 4
Contributor
Author
shouldn't we only need 2 nodes here?
xyuzh commented on Mar 8, 2026
    max_nodes: 8

env_vars:
  MODEL_PATH: "Qwen/Qwen3-1.7B"
Contributor
Author
Have you succeeded with the 30B model?
This PR fixes the issue where multi-node serving hangs because the rank0 scheduler isn't co-located with the Engine node.
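For context on the co-location requirement, here is a minimal sketch of pinning an actor to a specific bundle of a named placement group so that it shares a node with whatever else is scheduled into that bundle. SchedulerActor and the resource shapes are illustrative assumptions, not the actual SGLang classes or the fix itself.

```python
# Hedged sketch of node co-location via a placement group bundle index.
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init(ignore_reinit_error=True)

# One bundle per node; STRICT_SPREAD forces bundles onto distinct nodes, so
# everything pinned to bundle 0 shares the same node.
pg = placement_group([{"GPU": 4, "CPU": 8}] * 2, strategy="STRICT_SPREAD",
                     name="sglang_engine_pg")
ray.get(pg.ready())

@ray.remote(num_gpus=1)
class SchedulerActor:
    def node_id(self):
        return ray.get_runtime_context().get_node_id()

def pinned(bundle_index: int):
    # Pin an actor to a specific bundle; if the engine-side process is also
    # scheduled into bundle 0, the rank-0 actor ends up on its node.
    return SchedulerActor.options(
        scheduling_strategy=PlacementGroupSchedulingStrategy(
            placement_group=pg, placement_group_bundle_index=bundle_index
        )
    ).remote()

rank0, rank1 = pinned(0), pinned(1)
print(ray.get([rank0.node_id.remote(), rank1.node_id.remote()]))
```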
Force-pushed from 4a226a8 to e7622d6
The defaults are already set in the Python code (driver_offline.py and serve.py). Keep only RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES, which is required for SGLang.