Changes from all commits (33):
- `12a3b91` feat: add 1st docs iteration (Dec 18, 2025)
- `43bc37d` feat: 2nd docs iteration (Jan 6, 2026)
- `0060348` feat: add dependency group + CI pipelines (Jan 9, 2026)
- `3d61e03` feat: add 1st docs iteration (Dec 18, 2025)
- `53e1873` feat: 2nd docs iteration (Jan 6, 2026)
- `d177713` feat: add dependency group + CI pipelines (Jan 9, 2026)
- `cd822ac` fix: gitlab template (Jan 18, 2026)
- `8cdce49` feat: add more detailed pages (Jan 19, 2026)
- `a1cc275` feat: add pages to readme (Jan 19, 2026)
- `83e72e4` feat: add details (Jan 19, 2026)
- `cff36fb` feat: add more details of requesting (Jan 19, 2026)
- `39d4bbf` feat: more detailed logical/physical (Jan 19, 2026)
- `c7a4f64` feat: add OOMK crash (Jan 19, 2026)
- `faa8dfe` fix: broken link (Jan 19, 2026)
- `76a14d0` Merge branch 'docs/model-service' of gitlab.ics.muni.cz:rationai/infr… (Jan 19, 2026)
- `621fb39` fix: spelling (Jan 19, 2026)
- `09d88f4` feat: add code examples (Jan 19, 2026)
- `111fac1` fix: fox the formula (Jan 19, 2026)
- `3317382` fix: outdated gitlab references (Jurgee, Mar 19, 2026)
- `d3a60fd` fix: uv lock (Jurgee, Mar 19, 2026)
- `5c3733a` Merge branch 'main' into docs/model-service (Jurgee, Mar 19, 2026)
- `b589035` fix: uv lock after merge (Jurgee, Mar 19, 2026)
- `5d1c74e` fix: mypy (Jurgee, Mar 19, 2026)
- `2287dc1` Update docs/guides/adding-models.md (Jurgee, Mar 19, 2026)
- `8cffe93` fix: master to main (Jurgee, Mar 19, 2026)
- `0b282eb` feat: Add workflow for building documentation (Jurgee, Mar 28, 2026)
- `859ed1e` fix: useage of RationAI template (Jurgee, Mar 30, 2026)
- `295b1ba` some changes (Jurgee, Apr 12, 2026)
- `b84e8e6` new docs (Jurgee, Apr 18, 2026)
- `e8b109f` fix (Jurgee, Apr 18, 2026)
- `6dfa5b2` Merge branch 'main' into docs/model-service (Jurgee, Apr 18, 2026)
- `ae96419` add testing (Jurgee, Apr 20, 2026)
- `3cb79e8` fixes (Jurgee, Apr 20, 2026)
17 changes: 17 additions & 0 deletions .github/workflows/build-docs.yml
@@ -0,0 +1,17 @@
name: Build Documentation

on:
  push:
    branches: ["main"]
    paths:
      - "docs/**"
      - ".github/workflows/build-docs.yml"
  pull_request:
    branches: ["main"]
    paths:
      - "docs/**"
      - ".github/workflows/build-docs.yml"

jobs:
  run:
    uses: RationAI/.github/.github/workflows/mkdocs-build.yml@main
135 changes: 76 additions & 59 deletions README.md
@@ -1,93 +1,110 @@
# Model Service

Model deployment infrastructure for RationAI using Ray Serve on Kubernetes.

This repository contains:

- A KubeRay `RayService` manifest (`ray-service.yaml`) for deploying Ray Serve on Kubernetes.
- Model implementations under `models/` (reference: `models/binary_classifier.py`).
- Documentation under `docs/` (MkDocs).

## Documentation

- MkDocs content: `docs/`
- Key pages:
  - `docs/get-started/quick-start.md`
  - `docs/guides/deployment-guide.md`
  - `docs/guides/adding-models.md`
  - `docs/guides/configuration-reference.md`
  - `docs/guides/troubleshooting.md`
  - `docs/architecture/overview.md`
  - `docs/architecture/request-lifecycle.md`
  - `docs/architecture/queues-and-backpressure.md`
  - `docs/architecture/batching.md`

## Quick Start (Kubernetes)

Full walkthrough: `docs/get-started/quick-start.md`.

### Prerequisites

- Kubernetes cluster with KubeRay operator installed
- `kubectl` configured for the cluster

### Deploy

```bash
kubectl apply -f ray-service.yaml -n [namespace]
kubectl get rayservice rayservice-models -n [namespace]
```
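
A first-time deploy can take a few minutes while images are pulled and the Ray cluster starts. Two generic checks while you wait (exact pod names vary per cluster):

```bash
# Inspect RayService status and events, then watch the Ray pods come up
kubectl describe rayservice rayservice-models -n [namespace]
kubectl get pods -n [namespace] -w
```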

### Access locally

```bash
kubectl port-forward -n [namespace] svc/rayservice-models-serve-svc 8000:8000
```

### Test the reference model (`BinaryClassifier`)

The reference deployment in `ray-service.yaml` exposes an app at the route prefix:

- `/prostate-classifier-1`

`models/binary_classifier.py` expects a **request body that is LZ4-compressed raw bytes** of a single RGB tile:

- dtype: `uint8`
- shape: `(tile_size, tile_size, 3)`
- byte order: row-major (NumPy default)

Example (Python):

```bash
pip install numpy lz4 requests
```

```python
import lz4.frame
import numpy as np
import requests

tile_size = 512  # must match RayService user_config.tile_size
tile = np.zeros((tile_size, tile_size, 3), dtype=np.uint8)

payload = lz4.frame.compress(tile.tobytes())

resp = requests.post(
    "http://localhost:8000/prostate-classifier-1/",
    data=payload,
    headers={"Content-Type": "application/octet-stream"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json() if resp.headers.get("content-type", "").startswith("application/json") else resp.text)
```

## Repository Structure

```
model-service/
├── models/              # Model implementations
│   └── binary_classifier.py
├── providers/           # Model loading providers
│   └── model_provider.py
├── docs/                # Documentation
├── ray-service.yaml     # Kubernetes RayService configuration
├── pyproject.toml       # Python dependencies
└── README.md
```

## Support

- **Issues:** Report bugs or request features via [GitHub Issues](https://github.com/RationAI/model-service/issues)
- **Contact:** RationAI team at Masaryk University

## License

This project is part of the RationAI infrastructure and is available for use by authorized members of the RationAI group.

## Authors

Developed and maintained by the RationAI team at Masaryk University, Faculty of Informatics.
82 changes: 82 additions & 0 deletions docs/architecture/batching.md
@@ -0,0 +1,82 @@
# Batching (How It Works Under the Hood)

Batching in Ray Serve is a **replica-local request coalescing** mechanism.

It improves throughput when your model can process multiple inputs more efficiently together (common for GPU inference).

## Where batching happens

Batching happens **inside each replica process**.

Requests only become eligible for batching after they:

1. pass through the proxy and any handle-level queueing/backpressure, and
2. are routed to a specific replica.

See also: **[Request lifecycle](request-lifecycle.md)**.

## The API surface (what you configure)

In user code, batching is enabled by decorating an **async** method with `@serve.batch`:

- `max_batch_size`: upper bound for how many requests are grouped into one batch execution
- `batch_wait_timeout_s`: maximum time to wait (since the first queued item) before flushing a smaller batch

Serve expects the batched handler to return **one result per input** (same batch length, same order).
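
A minimal sketch of the decorator in use (toy echo handler for illustration, not this repository's model code):

```python
from ray import serve
from starlette.requests import Request


@serve.deployment
class Echo:
    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def handle_batch(self, bodies: list) -> list:
        # Serve passes a list of pending inputs; return one result per input,
        # in the same order.
        return [len(b) for b in bodies]

    async def __call__(self, request: Request) -> int:
        # Each caller passes a single element; concurrent calls are coalesced.
        return await self.handle_batch(await request.body())


app = Echo.bind()
```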

## What Serve actually does internally

Conceptually, each replica maintains an internal structure like:

- an in-memory buffer of pending calls
- a background “flush” loop that decides when to execute a batch
- per-request futures/promises that get completed when the batch finishes

### 1. Collection phase (buffering)

Incoming requests that hit the batched method are appended to a replica-local buffer.

Each buffered entry stores:

- the request arguments (or decoded payload)
- a future representing that request’s eventual response

### 2. Flush conditions (size or time)

The buffer is flushed when either condition becomes true:

- **Size trigger**: buffer length reaches `max_batch_size`
- **Time trigger**: `batch_wait_timeout_s` elapses since the **first** item currently in the buffer

This is why batching can increase latency at low QPS: a request may wait up to `batch_wait_timeout_s` for more arrivals.

### 3. Execution phase (single call)

Serve invokes your batched handler **once** with a list of inputs.

This is where you typically vectorize:

- stack/concat tensors
- run one forward pass
- split/scatter outputs back
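
A sketch of that vectorization pattern with NumPy (`forward` here is a hypothetical stand-in for a real model's batched forward pass):

```python
import numpy as np


def forward(batch: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a real model's batched forward pass.
    return batch.mean(axis=(1, 2, 3))


async def handle_batch(tiles: list) -> list:
    batch = np.stack(tiles)   # stack: one (N, H, W, 3) array
    scores = forward(batch)   # one forward pass for the whole batch
    return list(scores)       # split back: one result per input
```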

### 4. Scatter phase (complete futures)

When the batched handler returns a list of outputs, Serve resolves the stored futures in order.

Each original HTTP request then completes independently with its corresponding output.
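
Putting the four phases together, a toy replica-local batcher might look like this (an illustrative asyncio sketch; Ray's actual implementation differs in detail):

```python
import asyncio


class ToyBatcher:
    """Illustrative only: buffer + flush loop + per-request futures."""

    def __init__(self, handler, max_batch_size: int, batch_wait_timeout_s: float):
        self.handler = handler            # async fn: list of inputs -> list of outputs
        self.max_batch_size = max_batch_size
        self.batch_wait_timeout_s = batch_wait_timeout_s
        self.buffer = []                  # pending (input, future) pairs
        self.wakeup = asyncio.Event()

    async def submit(self, item):
        # 1. Collection: buffer the input with a future for its eventual result.
        fut = asyncio.get_running_loop().create_future()
        self.buffer.append((item, fut))
        self.wakeup.set()
        return await fut

    async def flush_loop(self):
        while True:
            if not self.buffer:
                self.wakeup.clear()
                await self.wakeup.wait()  # block until a first item arrives
            # 2. Flush conditions: full batch, or timeout since the first item.
            loop = asyncio.get_running_loop()
            deadline = loop.time() + self.batch_wait_timeout_s
            while len(self.buffer) < self.max_batch_size and loop.time() < deadline:
                await asyncio.sleep(0.001)
            batch = self.buffer[: self.max_batch_size]
            self.buffer = self.buffer[self.max_batch_size :]
            # 3. Execution: one call for the whole batch.
            outputs = await self.handler([item for item, _ in batch])
            # 4. Scatter: resolve each stored future with its own output.
            for (_, fut), out in zip(batch, outputs):
                fut.set_result(out)
```

A caller would run `flush_loop()` as a background task and `await batcher.submit(x)` from each request handler.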

## Configuration & Tuning

For a deep dive into how batching interacts with concurrency limits (specifically why `max_ongoing_requests` must be larger than `max_batch_size`), see **[Queues and backpressure](queues-and-backpressure.md)**.

Quick tips:

- Increase `max_batch_size` if the model benefits from larger batches and you have headroom.
- Increase `batch_wait_timeout_s` to favor fuller batches; decrease it to favor latency.
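
For example, a deployment pairing the two batching knobs with `max_ongoing_requests` might look like this (values are placeholders, not recommendations):

```python
from ray import serve


# Sketch: keep max_ongoing_requests comfortably above max_batch_size so a
# replica can keep filling the next batch while one is executing.
@serve.deployment(max_ongoing_requests=32)
class TunedModel:
    @serve.batch(max_batch_size=16, batch_wait_timeout_s=0.05)
    async def predict(self, items: list) -> list:
        return items  # placeholder handler body

    async def __call__(self, request):
        return await self.predict(await request.body())
```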

## Next

- Request flow including queue points: [Request lifecycle](request-lifecycle.md)
- Queueing and rejection controls: [Queues and backpressure](queues-and-backpressure.md)
- “Knobs” reference and meanings: [Configuration reference](../guides/configuration-reference.md)