
pg_buffercache: Add pg_buffercache_relation_stats() function #36

Open
lfittl wants to merge 1 commit into master from pg-bufferusage-relation-stats

Conversation


@lfittl lfittl commented Feb 28, 2026

This function returns an aggregation of buffer contents, grouped on a per-relfilenode basis. This is often useful for understanding which tables or indexes are currently in cache, and, when sampled over time, it can show cache disruptions caused by query activity. The existing pg_buffercache view can be used for this by grouping its result, but because of the number of buffer entries (one per page) that can be prohibitively expensive on large machines. Even with a small shared_buffers setting (128MB), the new function is 10x faster. Like the existing summary functions, this new function does not hold a lock while gathering its statistics.


Author: Lukas Fittl <lukas@fittl.com>
Reviewed by:
Discussion:
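For comparison, the slower approach the description mentions — grouping the existing pg_buffercache view yourself — might look roughly like this (the join via pg_relation_filenode and the exact column choices are illustrative, not taken from this patch):

```sql
-- Approximate a per-relation cache summary with the existing
-- pg_buffercache view: one row per buffer page, grouped by relation.
-- Scanning every per-page row is what makes this expensive on large
-- shared_buffers, which the new C-level aggregation avoids.
SELECT c.relname,
       count(*) AS buffers,
       count(*) FILTER (WHERE b.isdirty) AS dirty_buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC;
```

The new function presumably produces a similar per-relfilenode aggregate in a single pass over the buffer descriptors, without materializing one row per page.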
lfittl force-pushed the pg-bufferusage-relation-stats branch from 6d2615b to 5e5a97d on February 28, 2026 at 23:49
