[18.0][ADD] auditlog_clickhouse_write: Add module#3528
Audit tables can grow without bounds on production databases, causing bloat and adding latency to audited operations because logs are written synchronously in the same transaction. Keeping audit data in PostgreSQL also leaves an immutability gap for privileged users.

Introduce a dedicated module to buffer audit payloads and export them asynchronously to ClickHouse, allowing audit storage to scale without slowing down business transactions and keeping audit data effectively immutable in a write-only store.

Task: 5246
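The buffer-then-export pattern described above can be sketched in plain Python. This is an illustrative model only, assuming hypothetical names (`AuditBuffer`, `export_batch`): the actual module writes to a PostgreSQL buffer table inside the business transaction and moves rows to ClickHouse from an asynchronous job.

```python
import json
from collections import deque


class AuditBuffer:
    """Stands in for the PostgreSQL buffer table: a cheap, synchronous
    append of serialized audit payloads inside the business transaction."""

    def __init__(self):
        self._rows = deque()

    def append(self, payload):
        # Serialize once at write time; no ClickHouse round-trip here,
        # so the audited operation is not slowed down.
        self._rows.append(json.dumps(payload))

    def drain(self, batch_size):
        """Pop up to batch_size buffered rows, oldest first."""
        batch = []
        while self._rows and len(batch) < batch_size:
            batch.append(json.loads(self._rows.popleft()))
        return batch


def export_batch(buffer, sink, batch_size=500):
    """Asynchronous step (e.g. a scheduled/queue job): move buffered rows
    into the write-only store in bulk. `sink` stands in for a bulk
    ClickHouse INSERT target."""
    batch = buffer.drain(batch_size)
    if batch:
        sink.extend(batch)
    return len(batch)
```

A usage round-trip: buffered rows survive until an export run picks them up, and a second run with an empty buffer is a no-op, which mirrors how the cron/queue job can safely run on any schedule.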
Split ClickHouse existing-row checks into chunks to avoid oversized SELECT ... IN (...) queries during large imports.

Previously, retry-safe deduplication could build a very large IN clause when checking already inserted log or log line ids. This caused ClickHouse to fail with "Max query size exceeded" and the queue job stopped processing buffered auditlog rows.

The fix keeps deduplication logic but executes existence checks in smaller chunks, which preserves idempotent retries and prevents query size errors on bulk imports.

Task: 5246
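The chunking fix can be illustrated with a short sketch. The helper names (`chunked`, `existing_ids`) and the chunk size are assumptions, not the module's actual API; `client.execute` stands in for whatever ClickHouse client the module uses.

```python
CHUNK_SIZE = 1000  # assumed limit, keeping each query well under max_query_size


def chunked(ids, size=CHUNK_SIZE):
    """Yield successive fixed-size slices of the id list."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]


def existing_ids(client, table, candidate_ids):
    """Collect ids already present in ClickHouse, one bounded chunk at a
    time, instead of a single SELECT ... IN (...) over every candidate."""
    found = set()
    for chunk in chunked(candidate_ids):
        placeholders = ", ".join(str(i) for i in chunk)
        rows = client.execute(
            f"SELECT id FROM {table} WHERE id IN ({placeholders})"
        )
        found.update(row[0] for row in rows)
    return found
```

The import then inserts only `candidate_ids - existing_ids(...)`, so a retried job skips rows from a previous partial run (idempotent retries) while no single query ever exceeds the server's query-size limit.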
Store auditlog HTTP session and request data in ClickHouse together with audit logs and log lines.

Previously only auditlog_log and auditlog_log_line were exported. This made ClickHouse-backed logs incomplete for read scenarios that depend on http_session_id and http_request_id, especially after recreating the Odoo database.

Also extend buffer processing and tests to handle HTTP-related rows and keep retry-safe deduplication for the additional tables.

Task: 5246
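Extending the export to the HTTP tables amounts to running the same deduplicated insert over each table in turn. The sketch below is hypothetical (table list and helper names are assumptions drawn from the commit message, not the module's code), but it shows why one loop can keep retries idempotent across all four tables.

```python
# Table names follow the commit message; ordering puts the HTTP parent
# rows before the logs that reference them via http_session_id /
# http_request_id.
TABLES = [
    "auditlog_http_session",
    "auditlog_http_request",
    "auditlog_log",
    "auditlog_log_line",
]


def export_all(buffered_rows_by_table, already_exported, sink):
    """Insert only rows whose ids are not already present, table by
    table, so a retried job re-sends nothing from a previous run."""
    inserted = {}
    for table in TABLES:
        rows = buffered_rows_by_table.get(table, [])
        seen = already_exported.get(table, set())
        new_rows = [r for r in rows if r["id"] not in seen]
        sink.setdefault(table, []).extend(new_rows)  # stands in for bulk INSERT
        inserted[table] = len(new_rows)
    return inserted
```

Because each table's existence check is independent, a job that failed after exporting sessions but before exporting log lines can simply run again: the session rows are filtered out by `already_exported` and only the missing rows are inserted.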
Purpose:
Production deployments with extensive audit rules can experience
unbounded growth of audit tables, increasing PostgreSQL bloat and
slowing down audited operations due to synchronous ORM writes. Audit
records stored in PostgreSQL are also mutable for privileged users.
This PR introduces auditlog_clickhouse_write to buffer audit payloads in
PostgreSQL and export them asynchronously to ClickHouse, enabling
scalable audit storage while reducing transactional overhead.
What’s included:
How to use:
Task: 5246