16 changes: 14 additions & 2 deletions Makefile
@@ -1,10 +1,22 @@
install:
pip install -e .
uv pip install -e ".[test]"

clean:
rm -rf build dist *.egg-info
rm -rf examples/*.ll examples/*.o
rm -rf htmlcov .coverage

test:
pytest tests/ -v --tb=short -m "not verifier"

test-cov:
pytest tests/ -v --tb=short -m "not verifier" \
--cov=pythonbpf --cov-report=term-missing --cov-report=html

test-verifier:
@echo "NOTE: verifier tests require sudo and bpftool. Uses sudo .venv/bin/python3."
pytest tests/test_verifier.py -v --tb=short -m verifier

all: clean install

.PHONY: all clean
.PHONY: all clean install test test-cov test-verifier
23 changes: 23 additions & 0 deletions pyproject.toml
@@ -41,7 +41,30 @@ docs = [
"sphinx-rtd-theme>=2.0",
"sphinx-copybutton",
]
test = [
"pytest>=8.0",
"pytest-cov>=5.0",
"tomli>=2.0; python_version < '3.11'",
]

[tool.setuptools.packages.find]
where = ["."]
include = ["pythonbpf*"]

[tool.pytest.ini_options]
testpaths = ["tests"]
pythonpath = ["."]
python_files = ["test_*.py"]
markers = [
"verifier: requires sudo/root for kernel verifier tests (not run by default)",
"vmlinux: requires vmlinux.py for current kernel",
]
log_cli = false

[tool.coverage.run]
source = ["pythonbpf"]
omit = ["*/vmlinux*", "*/__pycache__/*"]

[tool.coverage.report]
show_missing = true
skip_covered = false
116 changes: 116 additions & 0 deletions tests/README.md
@@ -0,0 +1,116 @@
# PythonBPF Test Suite

## Quick start

```bash
# Activate the venv and install test deps (once)
source .venv/bin/activate
uv pip install -e ".[test]"

# Run the full suite (IR + LLC levels, no sudo required)
make test

# Run with coverage report
make test-cov
```

## Test levels

Tests are split into three levels, each in a separate file:

| Level | File | What it checks | Needs sudo? |
|---|---|---|---|
| 1 — IR generation | `test_ir_generation.py` | `compile_to_ir()` completes without exception or `logging.ERROR` | No |
| 2 — LLC compilation | `test_llc_compilation.py` | Level 1 + `llc` produces a non-empty `.o` file | No |
| 3 — Kernel verifier | `test_verifier.py` | `bpftool prog load -d` exits 0 | Yes |
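Level 1 treats any `logging.ERROR` record emitted by `pythonbpf` as a failure even when no exception is raised. A self-contained sketch of that check, using a plain `logging.Handler` in place of pytest's `caplog` fixture (the `ErrorCollector` name and log messages are illustrative):

```python
import logging

class ErrorCollector(logging.Handler):
    """Collect ERROR-and-above records, mimicking what the caplog check looks for."""

    def __init__(self):
        super().__init__(level=logging.ERROR)
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("pythonbpf")
handler = ErrorCollector()
logger.addHandler(handler)

logger.warning("harmless")            # below ERROR: not collected
logger.error("type mismatch in IR")   # collected → the test would fail

print(len(handler.records))  # → 1
```

The real suite does this via an autouse fixture (see `conftest.py` below) so every test transparently fails on logged errors.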

Levels 1 and 2 run together with `make test`. Level 3 is opt-in:

```bash
make test-verifier # requires bpftool and sudo
```

## Running a single test

Tests are parametrized by file path. Use `-k` to filter:

```bash
# By file name
pytest tests/ -v -k "and.py" -m "not verifier"

# By category
pytest tests/ -v -k "conditionals" -m "not verifier"

# One specific level only
pytest tests/test_ir_generation.py -v -k "hash_map.py"
```

## Coverage report

```bash
make test-cov
```

- **Terminal**: shows per-file coverage with missing lines after the test run.
- **HTML**: written to `htmlcov/index.html` — open in a browser for line-by-line detail.

```bash
xdg-open htmlcov/index.html
```

`htmlcov/` and `.coverage` are excluded from git; add them to `.gitignore` if they are not already listed.

## Expected failures (`test_config.toml`)

Known-broken tests are declared in `tests/test_config.toml`:

```toml
[xfail]
"failing_tests/my_test.py" = {reason = "...", level = "ir"}
```

- `level = "ir"` — fails during IR generation; both IR and LLC tests are marked xfail.
- `level = "llc"` — IR generates fine but `llc` rejects it; only the LLC test is marked xfail.

All xfails use `strict = True`: if a test starts **passing** it shows up as **XPASS** and is treated as a test failure. This is intentional — it means the bug was fixed and the test should be promoted to `passing_tests/`.

## Adding a new test

1. Create a `.py` file in `tests/passing_tests/<category>/` with the usual `@bpf` decorators and a `compile()` call at the bottom.
2. Run `make test` — the file is discovered and tested automatically at all levels.
3. If the test is expected to fail, add it to `tests/test_config.toml` instead of `passing_tests/`.

## Directory structure

```
tests/
├── README.md ← you are here
├── conftest.py ← pytest config: discovery, xfail/skip injection, fixtures
├── test_config.toml ← expected-failure list
├── test_ir_generation.py ← Level 1
├── test_llc_compilation.py ← Level 2
├── test_verifier.py ← Level 3 (opt-in, sudo)
├── framework/
│ ├── bpf_test_case.py ← BpfTestCase dataclass
│ ├── collector.py ← discovers test files, reads test_config.toml
│ ├── compiler.py ← wrappers around compile_to_ir() + _run_llc()
│ └── verifier.py ← bpftool subprocess wrapper
├── passing_tests/ ← programs that should compile and verify cleanly
└── failing_tests/ ← programs with known issues (declared in test_config.toml)
```
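Discovery is essentially an `rglob` over the two top-level test directories. A simplified, self-contained sketch of what `framework/collector.py` does (file names here are made up for the demo):

```python
import tempfile
from pathlib import Path

def discover(tests_dir: Path) -> list[str]:
    """Walk passing_tests/ and failing_tests/, recording each .py file
    relative to the tests/ root, sorted within each subtree."""
    found = []
    for subdir in ("passing_tests", "failing_tests"):
        for py_file in sorted((tests_dir / subdir).rglob("*.py")):
            found.append(py_file.relative_to(tests_dir).as_posix())
    return found

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "passing_tests" / "assign").mkdir(parents=True)
    (root / "passing_tests" / "assign" / "retype.py").write_text("")
    (root / "failing_tests").mkdir()
    (root / "failing_tests" / "if.py").write_text("")
    print(discover(root))
    # → ['passing_tests/assign/retype.py', 'failing_tests/if.py']
```

The relative path doubles as the pytest parametrize ID, which is why `-k "assign"` or `-k "if.py"` selects tests by path fragment.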

## Known regressions (as of compilation-context PR merge)

Three tests in `passing_tests/` currently fail — these are bugs to fix, not xfails:

| Test | Error |
|---|---|
| `passing_tests/assign/comprehensive.py` | `TypeError: cannot store i64* to i64*` |
| `passing_tests/helpers/bpf_probe_read.py` | `ValueError: 'ctx' not in local symbol table` |
| `passing_tests/vmlinux/register_state_dump.py` | `KeyError: 'cs'` |

Nine tests in `failing_tests/` were fixed by the compilation-context PR (they show as XPASS). They can be moved to `passing_tests/` when convenient:

`assign/retype.py`, `conditionals/helper_cond.py`, `conditionals/oneline.py`,
`direct_assign.py`, `globals.py`, `if.py`, `license.py` (IR only), `named_arg.py`,
`xdp/xdp_test_1.py`
Empty file added tests/__init__.py
Empty file.
103 changes: 103 additions & 0 deletions tests/conftest.py
@@ -0,0 +1,103 @@
"""
pytest configuration for the PythonBPF test suite.

Test discovery:
All .py files under tests/passing_tests/ and tests/failing_tests/ are
collected as parametrized BPF test cases.

Markers applied automatically from test_config.toml:
- xfail (strict=True): failing_tests/ entries that are expected to fail
- skip: vmlinux tests when vmlinux.py is not importable

Run the suite:
pytest tests/ -v -m "not verifier" # IR + LLC only (no sudo)
pytest tests/ -v --cov=pythonbpf # with coverage
pytest tests/test_verifier.py -m verifier # kernel verifier (sudo required)
"""

import logging

import pytest

from tests.framework.collector import collect_all_test_files

# ── vmlinux availability ────────────────────────────────────────────────────

try:
import vmlinux # noqa: F401

VMLINUX_AVAILABLE = True
except ImportError:
VMLINUX_AVAILABLE = False


# ── shared fixture: collected test cases ───────────────────────────────────


def _all_cases():
return collect_all_test_files()


# ── pytest_generate_tests: parametrize on bpf_test_file ───────────────────


def pytest_generate_tests(metafunc):
if "bpf_test_file" in metafunc.fixturenames:
cases = _all_cases()
metafunc.parametrize(
"bpf_test_file",
[c.path for c in cases],
ids=[c.rel_path for c in cases],
)


# ── pytest_collection_modifyitems: apply xfail / skip markers ─────────────


def pytest_collection_modifyitems(items):
case_map = {c.rel_path: c for c in _all_cases()}

for item in items:
# Resolve the test case from the parametrize ID embedded in the node id.
# Node id format: tests/test_foo.py::test_bar[passing_tests/helpers/pid.py]
case = None
if hasattr(item, "callspec"):
    case = case_map.get(item.callspec.id)

if case is None:
continue

# vmlinux skip
if case.needs_vmlinux and not VMLINUX_AVAILABLE:
item.add_marker(
pytest.mark.skip(reason="vmlinux.py not available for current kernel")
)
continue

# xfail (strict: XPASS counts as a test failure, alerting us to fixed bugs)
if case.is_expected_fail:
# Level "ir" → fails at IR generation: xfail both IR and LLC tests
# Level "llc" → IR succeeds but LLC fails: only xfail the LLC test
is_llc_test = item.nodeid.startswith("tests/test_llc_compilation.py")

apply_xfail = (case.xfail_level == "ir") or (
case.xfail_level == "llc" and is_llc_test
)
if apply_xfail:
item.add_marker(
pytest.mark.xfail(
reason=case.xfail_reason,
strict=True,
raises=Exception,
)
)


# ── caplog level fixture: capture ERROR+ from pythonbpf ───────────────────


@pytest.fixture(autouse=True)
def set_log_level(caplog):
with caplog.at_level(logging.ERROR, logger="pythonbpf"):
yield
Empty file added tests/framework/__init__.py
Empty file.
17 changes: 17 additions & 0 deletions tests/framework/bpf_test_case.py
@@ -0,0 +1,17 @@
from dataclasses import dataclass
from pathlib import Path


@dataclass
class BpfTestCase:
path: Path
rel_path: str
is_expected_fail: bool = False
xfail_reason: str = ""
xfail_level: str = "ir" # "ir" or "llc"
needs_vmlinux: bool = False
skip_reason: str = ""

@property
def test_id(self) -> str:
return self.rel_path.replace("/", "::")
60 changes: 60 additions & 0 deletions tests/framework/collector.py
@@ -0,0 +1,60 @@
import sys
from pathlib import Path

if sys.version_info >= (3, 11):
import tomllib
else:
import tomli as tomllib

from .bpf_test_case import BpfTestCase

TESTS_DIR = Path(__file__).parent.parent
CONFIG_FILE = TESTS_DIR / "test_config.toml"

VMLINUX_TEST_DIRS = {"passing_tests/vmlinux"}
VMLINUX_TEST_PREFIXES = {
"failing_tests/vmlinux",
"failing_tests/xdp",
}


def _is_vmlinux_test(rel_path: str) -> bool:
for prefix in VMLINUX_TEST_DIRS | VMLINUX_TEST_PREFIXES:
if rel_path.startswith(prefix):
return True
return False


def _load_config() -> dict:
if not CONFIG_FILE.exists():
return {}
with open(CONFIG_FILE, "rb") as f:
return tomllib.load(f)


def collect_all_test_files() -> list[BpfTestCase]:
config = _load_config()
xfail_map: dict = config.get("xfail", {})

cases = []
for subdir in ("passing_tests", "failing_tests"):
for py_file in sorted((TESTS_DIR / subdir).rglob("*.py")):
rel = str(py_file.relative_to(TESTS_DIR))
needs_vmlinux = _is_vmlinux_test(rel)

xfail_entry = xfail_map.get(rel)
is_expected_fail = xfail_entry is not None
xfail_reason = xfail_entry.get("reason", "") if xfail_entry else ""
xfail_level = xfail_entry.get("level", "ir") if xfail_entry else "ir"

cases.append(
BpfTestCase(
path=py_file,
rel_path=rel,
is_expected_fail=is_expected_fail,
xfail_reason=xfail_reason,
xfail_level=xfail_level,
needs_vmlinux=needs_vmlinux,
)
)
return cases
23 changes: 23 additions & 0 deletions tests/framework/compiler.py
@@ -0,0 +1,23 @@
import logging
from pathlib import Path

from pythonbpf.codegen import compile_to_ir, _run_llc


def run_ir_generation(test_path: Path, output_ll: Path):
"""Run compile_to_ir on a BPF test file.

Returns the (output, structs_sym_tab, maps_sym_tab) tuple from compile_to_ir.
Raises on exception. Any logging.ERROR records captured by pytest caplog
indicate a compile failure even when no exception is raised.
"""
return compile_to_ir(str(test_path), str(output_ll), loglevel=logging.WARNING)


def run_llc(ll_path: Path, obj_path: Path) -> bool:
"""Compile a .ll file to a BPF .o using llc.

Raises subprocess.CalledProcessError on failure (llc uses check=True).
Returns True on success.
"""
return _run_llc(str(ll_path), str(obj_path))