A toolkit for the rgs stack of Django + Hasura + Postgres applications (waterworks, urbanworks, drainworks, ...). Its goal is to remove the boilerplate that otherwise has to be re-implemented in every app: model metadata, per-role permissions, Hasura metadata generation, Postgres DDL sync, JWT claims handling and runtime logging to a central database.
The package is organised around a small number of building blocks:
- `dj_extended_models` — drop-in replacement for `django.contrib.gis.db.models` that adds a `Config(...)` per field for documentation, Hasura permissions, presets, calculations and form-builder metadata. Ships the `FPerm`, `TPerm`, `FPresets` and `HasuraSet` classes plus section/calculation helpers.
- `BaseEnum` / `BaseEnumExtended` — abstract bases for enum tables that are compatible with Hasura's enum type and optionally split into a separate "extended" table for non-enum columns.
- `PermissionHelper` + `HasuraPermissions` — walk every installed model and emit the complete `hasura/metadata.json` payload (tables, permissions per role, functions, views) from Django model definitions.
- `install_db_*` helpers — push Django field defaults, `ON DELETE` cascade actions and Postgres functions/triggers into the database so mutations going through Hasura or raw SQL behave the same as the ORM.
- `upsert_multiple_data` / `upsert_from_existing_data` — set-based upserts for bulk imports, with geometry coercion and page-sized batching.
- `Claims`, `JwtUserToken`, `JwtModuleToken` — Django-Ninja auth backends that decode the Hasura-namespaced JWT and expose the claims read-only to views and background workers.
- `SettingsGetter`, `EmailTemplate` — layered settings reader (module → env → default) and a small class hierarchy for transactional emails.
- Runtime logging — `TaskContext`, `RunContext`, `PostgresHandler` and `LogContextFilter` for writing structured, per-run logs to a dedicated `logging` database.
The package is not on PyPI yet. Consumer apps wire it in through two pixi features — editable local checkout for development, pinned git-tag for CI / production builds:
```toml
# <your-app>/pixi.toml

# Dev: edits in ~/Documents/GitHub/rgs-django-utils/ show up immediately.
[feature.dev.pypi-dependencies]
rgs-django-utils = { path = "../rgs-django-utils", editable = true }

# Prod / CI: reproducible pinned build from GitHub.
[feature.prod.pypi-dependencies]
rgs-django-utils = { git = "https://github.com/GetThePointGit/rgs-django-utils.git", tag = "v0.1.0" }

[environments]
default = { features = ["dev"] }
prod = { features = ["prod"] }
```

- `pixi install` → default (dev) environment, editable path.
- `pixi install -e prod` → pinned git-tag checkout, which the Docker build runs.
For private consumer repos the `path = ...` dependency still points at the sibling checkout; the git URL in prod stays `https://`, and pixi picks up your SSH / credential-helper / URL-rewrite setup from git. See documentation/tooling-alignment-plan.md for the detailed auth-per-environment matrix.
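For example, a global git URL rewrite that pixi inherits when it shells out to git (one option among several; a working SSH key for GitHub is assumed):

```
# Resolve the https:// GitHub URL over SSH for private repos; pixi
# delegates cloning to git, so a global rewrite applies to it as well.
git config --global url."git@github.com:".insteadOf "https://github.com/"
```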
```python
# myapp/models.py
import uuid

from rgs_django_utils.database import dj_extended_models as models
from rgs_django_utils.database.base_models import (
    ModificationMetaMixin,
    ValidityPeriodMixin,
)

loc = models.FieldSection("loc", "Locatie", order=1)


class Waterway(ModificationMetaMixin, ValidityPeriodMixin, models.Model):
    id = models.UUIDField(
        primary_key=True,
        default=uuid.uuid4,
        config=models.Config(
            section=loc,
            permissions=models.FPerm("-s-"),
            hasura_set=models.HasuraSet(insert="x-hasura-user-id"),
        ),
    )
    code = models.TextStringField(
        "code",
        config=models.Config(
            section=loc,
            doc_short="Unique waterway code",
            permissions=models.FPerm("-s-", project_edit="isu"),
        ),
    )
    geometry = models.LineStringField(
        srid=28992,
        config=models.Config(
            section=loc,
            permissions=models.FPerm("-s-", project_edit="isu"),
        ),
    )

    class Meta:
        db_table = "waterway"

    class TableDescription:
        section = loc
        modules = "*"

    @classmethod
    def get_permissions(cls):
        return models.TPerm(
            public=None,
            project_read={"select": {}},
            project_edit={
                "insert": {},
                "select": {},
                "update": {},
                "delete": {},
            },
        )
```

The `Config(...)` fields drive the Hasura metadata generator, the datamodel exporters (Excel / JSON-Schema) and the form builder. The `HasuraSet` preset tells Hasura to stamp the creator on every insert. `get_permissions()` returns a `TPerm` describing the per-role row filters at the table level.
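As a reading aid for the permission strings above (the flag semantics here are an assumption inferred from the examples in this README, so verify against the `FPerm` source):

```python
# Assumed decoding of the FPerm flag string (verify against FPerm itself):
# one position per mutation kind, "i" insert / "s" select / "u" update,
# with "-" disabling that operation for the role.
models.FPerm("-s-")                      # default roles: select only
models.FPerm("-s-", project_edit="isu")  # project_edit may also insert/update
```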
The `migrate_and_update` management command chains every post-migration hook in the correct order (see rgs_django_utils/management/commands/migrate_and_update.py):

```
python manage.py migrate_and_update
```

Steps performed:
1. `install_db_before_functions` — Postgres scripts that migrations rely on (default-value helpers).
2. `manage.py migrate` — Django schema migration.
3. `install_db_authorization_functions` — row-level-security helpers.
4. `install_db_defaults_and_relation_cascading` — pushes Django field defaults and FK cascade actions into Postgres DDL so Hasura and raw SQL behave consistently (see the SQL sketch after this list).
5. `add_default_records` — seeds enum tables and any model exposing `default_records()` or `custom_default_records()`.
6. `install_db_functions` + `install_db_last_functions` — remaining Postgres scripts.
7. `sync_db_meta_tables` — mirrors every model's rgs metadata into the `description_*` tables.
8. `export_datamodel_to_excel` + `export_datamodel_to_json_schema` — refreshes the documentation artefacts in `var/`.
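To make step 4 concrete: the statements it emits are ordinary Postgres DDL, roughly along these lines (illustrative SQL for the `Waterway` example; the `project_id` foreign key is hypothetical and the exact default mapping is an assumption):

```sql
-- Push the Django-side default into the table definition so inserts that
-- bypass the ORM (Hasura, raw SQL) get the same value.
ALTER TABLE waterway
    ALTER COLUMN id SET DEFAULT gen_random_uuid();

-- Mirror the ORM's on_delete behaviour in the constraint itself.
ALTER TABLE waterway
    ADD CONSTRAINT waterway_project_id_fkey
        FOREIGN KEY (project_id) REFERENCES project (id) ON DELETE CASCADE;
```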
For fine-grained control:

```
python manage.py migrate_and_update --skip_migration   # only post-hooks
python manage.py migrate_and_update --skip_before      # skip 01_before
python manage.py sync_db_description                   # just step 7
python manage.py write_models_to_json_schema           # just step 8 (JSON)
python manage.py write_models_to_xlsx                  # just step 8 (XLSX)
```

`generate_hasura_metadata` walks every installed model plus anything registered via `HasuraConfig` (functions, views) and writes the `hasura/metadata.json` payload Hasura's `replace_metadata` endpoint expects.
```
# Write hasura_metadata_exported.json:
python manage.py generate_hasura_metadata

# Write and apply directly to Hasura (needs HASURA_GRAPHQL_URL and
# HASURA_GRAPHQL_ADMIN_SECRET in the environment):
python manage.py generate_hasura_metadata --apply

# Skip generation, apply an existing file (useful in CI):
python manage.py generate_hasura_metadata --apply-only
```

Register per-app SQL functions and views by subclassing `HasuraConfig`:
```python
from rgs_django_utils.commands.hasura_permissions import HasuraConfig


class MyAppHasuraConfig(HasuraConfig):
    pass


MyAppHasuraConfig.register_multiple_functions([
    {"function": "search_waterways", "configuration": {...}, "permissions": [...]},
])
MyAppHasuraConfig.register_multiple_views([
    {"view": "v_active_waterway_codes", "configuration": {...}, "permissions": [...]},
])
```
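The registrations have to execute before `generate_hasura_metadata` walks the registry. Any import that runs at startup works; a conventional place is `AppConfig.ready()` (the `hasura_config` module name is illustrative):

```python
# myapp/apps.py
from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        # Importing the module executes the register_multiple_* calls above.
        from . import hasura_config  # noqa: F401
```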
Hasura signs JWTs with a Hasura-namespaced claim block. `JwtUserToken` and `JwtModuleToken` plug those directly into Ninja routes:
```python
from ninja import Router

from rgs_django_utils.utils.authorization import JwtModuleToken, JwtUserToken

router = Router()


@router.get("/me", auth=JwtUserToken())
def me(request):
    # request.auth is a Claims instance — authenticated user guaranteed.
    return {
        "id": request.auth.user.id,
        "email": request.auth.email,
        "fullname": request.auth.fullname,
    }


@router.post("/admin/reload-config", auth=JwtModuleToken("admin"))
def reload_config(request):
    # Only requires the "admin" role in the token — no user lookup.
    return {"ok": True}
```

`decode_jwt` returns `None` on any verification failure (bad signature, expired, malformed), so callers can treat "no token" and "bad token" uniformly. Requires `settings.JWT_PUBLIC_KEY` to be set to the RSA public key Hasura signs with.
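A minimal way to provide that key (the env var name and the newline-escaping convention are assumptions about your deployment, not requirements of the package):

```python
# settings.py: RSA public key used to verify the Hasura-signed JWTs.
import os

# PEM blocks are often stored in env vars with literal "\n" sequences;
# undo that here if your secret store does the same.
JWT_PUBLIC_KEY = os.environ.get("JWT_PUBLIC_KEY", "").replace("\\n", "\n")
```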
```python
from rgs_django_utils.database.db_types import ImportMethod
from rgs_django_utils.database.upsert_multiple_data import upsert_multiple_data

upsert_multiple_data(
    model=Waterway,
    data=[
        {"code": "W-001", "geometry": "LINESTRING(0 0, 1 1)"},
        {"code": "W-002", "geometry": "LINESTRING(1 1, 2 2)"},
    ],
    data_fields=["code", "geometry"],
    update_field_names=["geometry"],
    identification_field_names=["code"],
    method=ImportMethod.OVERWRITE,
    page_size=1000,
)
```

Geometry columns are wrapped in `ST_GeomFromText` (or `ST_Transform(..., 4326)` when the target SRID is 4326). Rows are uploaded in batches of `page_size` to avoid oversized SQL statements. Use `upsert_from_existing_data` when the source is already a staging table in Postgres.
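For intuition, each batch amounts to one set-based statement shaped roughly like this (illustrative SQL; it assumes a unique constraint on `code` and may differ from the generated statement in detail):

```sql
INSERT INTO waterway (code, geometry)
VALUES
    ('W-001', ST_GeomFromText('LINESTRING(0 0, 1 1)', 28992)),
    ('W-002', ST_GeomFromText('LINESTRING(1 1, 2 2)', 28992))
ON CONFLICT (code)                            -- identification_field_names
DO UPDATE SET geometry = EXCLUDED.geometry;   -- update_field_names
```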
Wire the handler and filter in `LOGGING`:

```python
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "filters": {
        "context": {"()": "rgs_django_utils.logging.logging.LogContextFilter"},
    },
    "handlers": {
        "postgres": {
            "level": "DEBUG",
            "class": "rgs_django_utils.logging.logging.PostgresHandler",
            "filters": ["context"],
        },
    },
    "loggers": {
        "": {"handlers": ["postgres"], "level": "INFO"},
    },
}
```

Group a workflow's log lines under a named run with nested tasks:
```python
from rgs_django_utils.logging.logging import (
    RunContext,
    TaskContext,
    get_data_logger,
    log_counter,
)

data_log = get_data_logger("ingest")

with RunContext("nightly-import"):
    with TaskContext("load-waterways"):
        for row in rows:
            try:
                ingest(row)
                log_counter("imported")
            except ValueError:
                data_log.error("invalid row %s", row["code"], extra={"code": 12})
                log_counter("skipped")
```

`TaskContext.__exit__` emits a timing + summary line (duration, max-severity, counters) that surfaces in the run overview, so failing tasks are visible even when the Python call returns successfully.
- Primary-key columns without `Config(...)` default to select-only in the permission helper. That is almost always what you want, but surprised me once or twice — override by giving the pk an explicit `FPerm`.
- `@cache` on `PermissionHelper` is per-instance-per-process. Tests that swap `settings.PERMISSION_TREE` between cases must instantiate a fresh `PermissionHelper()` (see the sketch after this list).
- `install_db_defaults_and_relation_cascading` only handles static defaults, `auto_now[_add]` (rewritten to `NOW()`), empty-list defaults on array columns (`array[]::integer[]`), and `ON DELETE` of `CASCADE` / `SET_NULL` / `SET_DEFAULT`. Callable defaults other than the listed ones, and `DateTimeField` defaults, must be installed as Postgres trigger functions separately.
- `decode_jwt` swallows all exceptions and returns `None`. If you need to distinguish "no token" from "bad token", inspect the claim shape returned by `Claims`, not the exception flow.
- `LogRun` is a consumer-app model. `rgs_django_utils` only provides the context + handler; each consuming app owns the concrete `LogRun` Django model that `set_run()` creates rows in.
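A sketch of the fresh-instance pattern from the second note (the import path for `PermissionHelper` and the shape of `PERMISSION_TREE` are assumptions for illustration):

```python
from django.test import override_settings

# Import path assumed for this sketch; adjust to where PermissionHelper lives.
from rgs_django_utils.commands.hasura_permissions import PermissionHelper


@override_settings(PERMISSION_TREE={"public": {}})  # placeholder tree
def test_permissions_with_custom_tree():
    # New instance per test: the @cache is per-instance, so a fresh helper
    # re-reads settings.PERMISSION_TREE instead of serving stale results.
    helper = PermissionHelper()
    assert helper is not None
```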
The recommended workflow uses pixi — it provisions the Python environment, installs the package in editable mode, and exposes every common action as a named task:
```
git clone https://github.com/GetThePointGit/rgs-django-utils.git
cd rgs-django-utils
pixi install   # provisions .pixi/envs/default
               # and installs the pre-commit git hook
```

Prefer a plain virtualenv? That works too:
```
python -m venv venv
source venv/bin/activate
pip install -e .
pip install pytest pytest-cov pytest-django ruff build twine python-dotenv tox
```

The dev feature installs pixi-pycharm, so after `pixi install` add a Python interpreter of type conda pointing at `.pixi/envs/default/libexec/conda`. If packages are installed with `editable = true`, mark them as "source root" so the IDE resolves cross-package imports.
Single Python/Django combination (fast):
```
pixi run tests           # pytest --cov=rgs_django_utils
pixi run tests-verbose   # adds -v
pixi run -- pytest tests/testapp -v
pixi run -- pytest -k permission
```

The testapp requires a PostGIS database on port 5431 — bring one up with docker compose or point `DATABASES` at an existing local instance.
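For example, with plain Docker (image tag and credentials are placeholders, so match them to your `DATABASES` settings):

```
# PostGIS listening on host port 5431, where the testapp expects it.
docker run -d --name rgs-utils-testdb \
  -p 5431:5432 \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  postgis/postgis:16-3.4
```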
Full matrix (every supported Python × Django combination, via tox):
```
pixi run tox                          # every env in tox.ini
pixi run -- tox -e py3.12-django5.1   # one env
pixi run -- tox -e ruff               # the lint-only env
```

```
pixi run style          # sort-imports + format
pixi run sort-imports   # ruff check --select I --fix
pixi run format         # ruff format
pixi run lint           # ruff check ./rgs_django_utils
```

CI runs the ruff tox env on every push.
A `.pre-commit-config.yaml` runs ruff (check + format), trailing-whitespace, end-of-file, YAML/TOML validity and a large-file guard on every commit. `pixi install` auto-wires the git hook via a postinstall task; to run the full sweep manually:

```
pixi run pre-commit-run       # run every hook on every file
pixi run pre-commit-install   # re-install the git hook
```

The hook runs inside the pre-commit-managed environment (not the pixi env), so the ruff version is pinned via the `rev` in `.pre-commit-config.yaml` — bump it with `pixi run -- pre-commit autoupdate`.
```
pixi run build         # cleans dist/, builds wheel + sdist
pixi run build-check   # twine check dist/*
```

One-time setup:
- Register an account on https://pypi.org and on https://test.pypi.org (separate accounts).
- Enable 2FA on both.
- Create an API token on each site (Account settings → API tokens, scope: "Entire account" for the first upload, later narrow to the project).
Test the release on Test PyPI first:
```
pixi run publish-test
# Username: __token__
# Password: <your Test PyPI token>

# In a clean venv, verify the install works:
pip install --index-url https://test.pypi.org/simple/ \
            --extra-index-url https://pypi.org/simple/ \
            rgs-django-utils
python -c "from rgs_django_utils.database import dj_extended_models; print('ok')"
```

Real release:
```
pixi run publish
# Username: __token__
# Password: <your PyPI token>
```

Confirm at https://pypi.org/project/rgs-django-utils/.
For subsequent releases, Trusted Publishing via GitHub Actions OIDC
is strongly recommended — no more tokens to rotate. See
https://docs.pypi.org/trusted-publishers/ and add a
.github/workflows/publish.yml that runs on a release tag.
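A minimal sketch of such a workflow, assuming the project has been registered as a trusted publisher on PyPI first (job and step names are illustrative):

```yaml
# .github/workflows/publish.yml
name: publish
on:
  release:
    types: [published]

jobs:
  pypi:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # required for the PyPI OIDC token exchange
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```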
Alpha (0.1.0). Used in waterworks and urbanworks, so the core API surface (`Config`, `FPerm`, `TPerm`, `FPresets`, `BaseEnum`, `PermissionHelper`, `install_db_*`, `SettingsGetter`, `Claims`, `JwtUserToken`, `JwtModuleToken`) is load-bearing. The roles list in `dj_extended_models.roles_list` is still evolving — see tests/testapp/models.py for gaps that surfaced during the tooling alignment pass.