Merged
18 changes: 18 additions & 0 deletions .editorconfig
@@ -0,0 +1,18 @@
root = true

[*]
charset = utf-8
end_of_line = lf
tab_width = 4
indent_size = 4
indent_style = space
max_line_length = 88
insert_final_newline = true
trim_trailing_whitespace = true

[Makefile]
indent_style = tab

[{*.yml,*.yaml,*.json}]
tab_width = 2
indent_size = 2
104 changes: 104 additions & 0 deletions AGENTS.md
@@ -0,0 +1,104 @@
# AGENTS.md - Chicory Development Guide

Chicory is a lightweight, async-native job queue for Python (3.11–3.14) with Redis, RabbitMQ, and database backends.

---

## Commands

```bash
make init # uv sync --dev --all-extras
make lint # ruff + ty (both)
make ruff # ruff check --fix
make ty # ty check (Astral's type checker, not mypy/pyright)

make test # lint + pytest -vv (needs Docker services)
make test-fast # lint + pytest -m "not slow and not integration"
make test-slow # lint + pytest -n auto -m "slow"
make test-unit # lint + pytest -n auto -m "not integration" (includes slow tests)
make test-integration # lint + pytest -m "integration"

make up # docker compose up -d --build
make down # docker compose down -v ⚠️ -v removes all volumes (destructive)
```

**Every `make test*` target runs lint first.** To skip lint, call pytest directly:

```bash
uv run pytest tests/unit/test_task.py -vv
uv run pytest tests/unit/test_task.py::TestTaskDelay::test_delay_success -vv
```

Coverage runs on every `pytest` invocation (`--cov` is in `addopts`). Add `--no-cov` to suppress it.

---

## Testing Quirks

### asyncio — strict mode
`asyncio_mode = "strict"` is set. **Every `async def test_*` must have `@pytest.mark.asyncio`.**
Omitting it causes the coroutine to be collected but not awaited (silent pass or confusing error).

### Markers
`--strict-markers` is enabled. Only `slow` and `integration` are registered. Any other marker causes a collection failure.
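Taken together, a test that exercises both conventions looks roughly like the following sketch (the test name and body are illustrative, not taken from the repo):

```python
import asyncio

import pytest


@pytest.mark.asyncio  # required under asyncio_mode = "strict"
@pytest.mark.slow     # one of the two registered markers
async def test_enqueue_roundtrip() -> None:
    # Without @pytest.mark.asyncio, strict mode collects this coroutine
    # but never awaits it.
    await asyncio.sleep(0)
```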

### Integration test services (Docker)
Integration tests require all three services running:
- Redis — `redis://localhost:6379` (db 0 = broker, db 1 = backend)
- RabbitMQ — `amqp://guest:guest@localhost:5672//`
- PostgreSQL — `postgresql+asyncpg://chicory:chicory@localhost:5432/chicory`

### Integration fixture parameterization
`chicory_app` (in `tests/conftest.py`) is parameterized across 4 broker/backend combinations:
`redis-redis`, `rabbitmq-postgres`, `redis-postgres`, `rabbitmq-redis`.
Each integration test runs **4 times**. Postgres schema is created by the fixture (`Base.metadata.create_all`), not by a migration script.
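A hypothetical sketch of how such a fixture can be parameterized (the real `chicory_app` in `tests/conftest.py` builds the actual app and creates the Postgres schema; only the four combo names below come from the source):

```python
import pytest

# The four broker/backend pairs every integration test runs against.
COMBOS = ["redis-redis", "rabbitmq-postgres", "redis-postgres", "rabbitmq-redis"]


@pytest.fixture(params=COMBOS, ids=COMBOS)
def chicory_app(request):
    broker_name, backend_name = request.param.split("-")
    # Real fixture: construct the app for this combo, and for postgres
    # backends run Base.metadata.create_all before yielding.
    ...
```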

### `test-unit` label is misleading
`make test-unit` uses `-m "not integration"` — it includes `slow` tests. Only `make test-fast` excludes both slow and integration.

---

## Non-Obvious Architecture

### Optional dependency guards
`src/chicory/__init__.py` wraps broker/backend imports in `try/except ImportError`.
`__init__.py` has `F401` ignored in ruff so these conditional re-exports don't trigger lint errors.
`tests/unit/test_optional_imports.py` explicitly tests these guards — don't break them.

### `TaskMessage` serializes with pickle (not JSON)
`TaskMessage.dumps()` / `TaskMessage.loads()` use pickle. Never consume from untrusted brokers.
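An illustrative stand-in for that round-trip (the real `TaskMessage` lives in chicory; its fields here are assumptions):

```python
import pickle
from dataclasses import dataclass


@dataclass
class FakeTaskMessage:
    task_name: str
    args: tuple

    def dumps(self) -> bytes:
        return pickle.dumps(self)

    @staticmethod
    def loads(raw: bytes) -> "FakeTaskMessage":
        # pickle.loads can execute arbitrary code embedded in `raw`,
        # which is why untrusted brokers must never feed this path.
        return pickle.loads(raw)


msg = FakeTaskMessage("tasks.add", (1, 2))
assert FakeTaskMessage.loads(msg.dumps()) == msg
```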

### Key locations that aren't obvious from the directory tree
- `DEFAULT_QUEUE` and `TaskEnvelope` type → `broker/base.py`
- SQLAlchemy `Base` metadata → `backend/models.py`
- CLI entrypoint → `cli/cli.py` (typer app, invoked as `chicory worker <module:broker>`)
- Worker uses `ThreadPoolExecutor` for sync task functions
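A minimal sketch of that executor pattern (names are illustrative; the worker's actual wiring differs):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def sync_task(x: int) -> int:
    # A user-defined, blocking task function.
    return x * 2


async def run_task() -> int:
    # Offload the sync function to a thread so it doesn't block the loop.
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=1) as pool:
        return await loop.run_in_executor(pool, sync_task, 21)


print(asyncio.run(run_task()))  # 42
```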

### uv workspace
`benchmarks/` is a uv workspace member with its own `pyproject.toml`. Run benchmark commands from within `benchmarks/` or via `benchmarks/Makefile`.

### Build backend
`uv_build>=0.9.13,<0.10.0` — not setuptools or hatch.

---

## Ruff / Style Gotchas

- `target-version = "py314"` — ruff enforces Python 3.14 compatibility
- `TD` rule is enabled: TODOs must use `# TODO(author): description` format
- `NPY` rule is enabled (numpy conventions) even though numpy is not a dependency
- `docstring-code-format = true` with `docstring-code-line-length = "dynamic"`
- `from __future__ import annotations` required at the top of every source file
- `src/chicory/logging.py` is omitted from coverage measurement
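A compliant module header under these rules might look like this sketch (author name and function are made up):

```python
from __future__ import annotations  # required at the top of every source file


def schedule() -> None:
    # TODO(alice): replace polling with pub/sub (TD rule format: author + description)
    ...
```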

---

## CI Summary

| Workflow | Trigger | Services | Command |
|---|---|---|---|
| `pr-unit-tests.yml` | PR | none | `make test-unit` |
| `merge-gate.yml` | merge queue | Redis, Postgres 17, RabbitMQ 3 | `make test` |
| `post-merge-coverage.yml` | push to `main` | Docker via `make up` | `make test` + Codecov |

PRs only run unit tests. Full integration tests run at merge gate. Coverage XML (`coverage.xml`) is uploaded to Codecov on main merges.
1 change: 1 addition & 0 deletions benchmarks/.python-version
@@ -0,0 +1 @@
3.13
48 changes: 48 additions & 0 deletions benchmarks/Makefile
@@ -0,0 +1,48 @@
.PHONY: up down
.PHONY: chicory-worker chicory-run chicory-run-increment chicory-run-cpu chicory-run-io chicory-run-all
.PHONY: taskiq-worker taskiq-run taskiq-run-increment taskiq-run-cpu taskiq-run-io taskiq-run-all

# ── Infrastructure ──────────────────────────────────────────────
up:
	docker compose up -d --build

down:
	docker compose down -v

# ── Chicory ─────────────────────────────────────────────────────
chicory-worker:
	chicory worker --workers 4 --concurrency 32 broker_redis.bench_chicory:app

chicory-run:
	python broker_redis/bench_chicory.py

chicory-run-increment:
	python broker_redis/bench_chicory.py --workload increment

chicory-run-cpu:
	python broker_redis/bench_chicory.py --workload cpu_bound

chicory-run-io:
	python broker_redis/bench_chicory.py --workload io_bound

chicory-run-all:
	python broker_redis/bench_chicory.py --workload all

# ── TaskIQ ──────────────────────────────────────────────────────
taskiq-worker:
	taskiq worker --workers 4 --max-async-tasks 32 broker_redis.bench_taskiq:broker --max-prefetch 32

taskiq-run:
	python broker_redis/bench_taskiq.py

taskiq-run-increment:
	python broker_redis/bench_taskiq.py --workload increment

taskiq-run-cpu:
	python broker_redis/bench_taskiq.py --workload cpu_bound

taskiq-run-io:
	python broker_redis/bench_taskiq.py --workload io_bound

taskiq-run-all:
	python broker_redis/bench_taskiq.py --workload all
144 changes: 144 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,144 @@
# Benchmarks

Apples-to-apples comparison of **Chicory** vs **TaskIQ**, both using Redis as broker and
result backend.

## Prerequisites

- Docker & Docker Compose
- Python 3.11+ with `uv`

## Quick start

```bash
# 1. Install benchmark deps
uv sync --dev --all-extras

# 2. Start Redis, Prometheus, Grafana
make up

# 3. Terminal 1 — start a worker (pick one)
make chicory-worker # 4 processes × 32 concurrency = 128 concurrent
make taskiq-worker # 4 processes × 32 max-async-tasks = 128 concurrent

# 4. Terminal 2 — run benchmark
make chicory-run-all
make taskiq-run-all

# 5. View results
# Console — printed at end of run
# Grafana — http://localhost:3000 (dashboard: "Chicory Benchmarks")
# Prometheus — http://localhost:9100
```

## What's measured

Each benchmark iterates over batch sizes
`[8, 16, 32, 64, 128, 256, 1024, 2048, 4096, 8192, 16384]` and records:

| Metric | Description |
|-----------------------------|-----------------------------------------------------|
| **Enqueue duration** | Wall time to `gather()` all `delay()`/`kiq()` calls |
| **Dequeue duration** | Wall time to `gather()` all result retrievals |
| **Throughput** | `batch_size / (enqueue + dequeue)` |
| **Success/Failure/Invalid** | Per-batch result classification |

Redis is flushed (both db 0 and db 1) between batches to prevent data leakage.
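A toy reproduction of the timing scheme, assuming only what the table above states (the sleeps stand in for broker I/O; the helper names are made up):

```python
import asyncio
import time


async def fake_enqueue() -> None:
    await asyncio.sleep(0)


async def fake_fetch_result() -> int:
    await asyncio.sleep(0)
    return 1


async def run_batch(batch_size: int) -> float:
    # Wall time to gather() all enqueues.
    t0 = time.perf_counter()
    await asyncio.gather(*(fake_enqueue() for _ in range(batch_size)))
    enqueue = time.perf_counter() - t0

    # Wall time to gather() all result retrievals.
    t0 = time.perf_counter()
    await asyncio.gather(*(fake_fetch_result() for _ in range(batch_size)))
    dequeue = time.perf_counter() - t0

    # Throughput as defined in the table above.
    return batch_size / (enqueue + dequeue)


throughput = asyncio.run(run_batch(64))
assert throughput > 0
```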

## Workload types

| Workload | Task body | Purpose |
|-------------|-------------------------------------------|------------------------------------------|
| `increment` | `return value + 1` | Baseline — pure enqueue/dequeue overhead |
| `cpu_bound` | 10k iterations of `(x * 31 + 17) % (10**9 + 7)` | CPU-bound worker load |
| `io_bound` | `asyncio.sleep(0.01)` | I/O-bound / concurrency pressure |

```bash
make chicory-run-increment # or cpu, io, all
make taskiq-run-all
```
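The three task bodies from the table can be sketched as follows (function names are illustrative, not the bench scripts' actual identifiers):

```python
import asyncio

MOD = 10**9 + 7  # the prime modulus used by the cpu_bound workload


def increment(value: int) -> int:
    # Baseline: pure enqueue/dequeue overhead.
    return value + 1


def cpu_bound(x: int) -> int:
    # 10k iterations of the modular recurrence.
    for _ in range(10_000):
        x = (x * 31 + 17) % MOD
    return x


async def io_bound() -> None:
    # I/O-bound / concurrency pressure.
    await asyncio.sleep(0.01)
```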

## Fairness controls

Both frameworks run under equivalent conditions:

| Setting | Chicory | TaskIQ |
|------------------------|-----------------------|-----------------------|
| Workers (processes) | 4 | 4 |
| Concurrency per worker | 32 | 32 |
| Broker | Redis streams (db 0) | Redis streams (db 0) |
| Result backend | Redis (db 1) | Redis (db 1) |
| Validation overhead | `ValidationMode.NONE` | N/A (none by default) |
| Connection pool | Default | Default |

## Monitoring

### Services (via `docker compose`)

| Service | Internal port | External port |
|----------------|---------------|---------------|
| Redis | 6379 | 6379 |
| Redis Exporter | 9121 | — |
| Prometheus | 9090 | **9100** |
| Grafana | 3000 | 3000 |

Prometheus is on **9100** externally to avoid conflict with Chicory's metrics endpoint
on 9090.

### Prometheus targets

| Job | Target | Notes |
|---------------------|-----------------------------|-----------------------|
| `redis_exporter` | `redis-exporter:9121` | Inside docker network |
| `chicory_benchmark` | `host.docker.internal:9090` | Bench script on host |
| `taskiq_benchmark` | `host.docker.internal:9091` | Bench script on host |

### Grafana dashboard

Auto-provisioned at startup. Template variables:

- **target**: `chicory`, `taskiq`, or both
- **workload_type**: `increment`, `cpu_bound`, `io_bound`, or all

9 panels: enqueue/dequeue duration, throughput, success/failure/invalid counts, Redis
memory, Redis ops/sec, Redis connected clients.

## Architecture

```
benchmarks/
├── broker_redis/
│   ├── bench_chicory.py       # Chicory benchmark script + task definitions
│   └── bench_taskiq.py        # TaskIQ benchmark script + task definitions
├── framework/
│   ├── __init__.py
│   ├── config.py              # BenchmarkConfig, WorkloadType enum
│   ├── metrics.py             # MetricsCollector, BenchmarkResult, Prometheus gauges/counters
│   ├── runner.py              # BenchmarkRunner (generic, for future use)
│   └── workloads.py           # Task creator functions (for runner.py)
├── monitor/
│   ├── prometheus.yml
│   └── grafana/
│       ├── datasources.yml
│       ├── dashboards.yml
│       └── dashboards/
│           └── benchmarks.json
├── docker-compose.yml
├── Makefile
└── pyproject.toml             # uv workspace member
```

## Adding new benchmarks

1. Create `broker_redis/bench_<name>.py` following the pattern in existing bench files
2. Define tasks, `_WORKLOAD_TASKS` mapping, `_flush_redis()`, `_run_batch()`
3. Use `MetricsCollector.record_result(result, "<name>")` for Prometheus export
4. Add `make <name>-worker` and `make <name>-run-*` targets to `Makefile`
5. Add a Prometheus scrape target in `monitor/prometheus.yml`
6. Update the Grafana dashboard `target` template variable

## Cleanup

```bash
make down # Stop all containers, remove volumes
```
Empty file.