Storage performance wrappers (read cache + write optimizer)
We keep the `TokenStore` single-op contract as the required API and add optional capabilities that wrappers can exploit.
- Read-cache wrapper (`CachingTokenStore`):
  - Wraps any `TokenStore`; keeps per-process in-memory caches for sessions, states, auth codes, and clients.
  - Read-through with TTL = min(record expiry, cap) minus a small skew; negative-caches misses briefly.
  - Invalidates on store/delete/consume; one `asyncio.Lock` guards the cache maps.
  - Optional metrics: hit/miss/invalidation counters (no tokens/PII).
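The caching behavior above can be sketched as follows. This is a minimal illustration covering the session cache only; the method names (`load_session`, `store_session`, `delete_session`), constructor parameters, and the negative-cache sentinel are assumptions, not the real API.

```python
import asyncio
import time
from typing import Any


class CachingTokenStore:
    """Read-through cache over any TokenStore-like delegate (sketch)."""

    _MISS = object()  # negative-cache sentinel for known-absent keys

    def __init__(self, delegate, ttl_cap: float = 30.0, skew: float = 1.0,
                 negative_ttl: float = 2.0):
        self._delegate = delegate
        self._ttl_cap = ttl_cap
        self._skew = skew
        self._negative_ttl = negative_ttl
        self._sessions: dict[str, tuple[float, Any]] = {}  # key -> (expires_at, value)
        self._lock = asyncio.Lock()
        self.hits = self.misses = self.invalidations = 0   # counters only, no PII

    async def load_session(self, key: str):
        async with self._lock:
            entry = self._sessions.get(key)
            if entry and entry[0] > time.monotonic():
                self.hits += 1
                return None if entry[1] is self._MISS else entry[1]
            self._sessions.pop(key, None)  # evict expired entry
        self.misses += 1
        record = await self._delegate.load_session(key)
        async with self._lock:
            if record is None:
                # negative-cache the miss briefly
                self._sessions[key] = (time.monotonic() + self._negative_ttl, self._MISS)
            else:
                # TTL = min(record expiry, cap) minus a small skew
                remaining = getattr(record, "expires_at", time.time() + self._ttl_cap) - time.time()
                ttl = max(0.0, min(remaining, self._ttl_cap) - self._skew)
                self._sessions[key] = (time.monotonic() + ttl, record)
        return record

    async def store_session(self, key: str, record) -> None:
        await self._delegate.store_session(key, record)
        async with self._lock:
            self._sessions.pop(key, None)  # invalidate on write
            self.invalidations += 1

    async def delete_session(self, key: str) -> None:
        await self._delegate.delete_session(key)
        async with self._lock:
            self._sessions.pop(key, None)  # invalidate on delete
            self.invalidations += 1
```

The same shape repeats for states, auth codes, and clients; consume paths invalidate exactly like deletes.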
- Write-optimizer wrapper (`BufferedTokenStore`):
  - Queues single-op writes; flush triggers: `max_batch_size` or `max_delay`.
  - Coalesces duplicate writes (keep last per key); reads bypass the buffer.
  - On flush: if the delegate exposes bulk hooks, use them; else run single-op calls inside one transaction/single worker to reduce lock/fsync churn.
  - Graceful shutdown: drain the queue on `close`; retry or surface errors (no silent loss).
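The buffering and coalescing logic can be sketched like this. It is a simplified sketch for the session path only; the class shape, method names, and constructor parameters are assumptions, and error retry is omitted for brevity.

```python
import asyncio
from typing import Any, Optional


class BufferedTokenStore:
    """Write-optimizing wrapper (sketch): queues writes, coalesces duplicates
    (last write per key wins), flushes on size or delay."""

    def __init__(self, delegate, max_batch_size: int = 64, max_delay: float = 0.05):
        self._delegate = delegate
        self._max_batch_size = max_batch_size
        self._max_delay = max_delay
        self._pending: dict[str, Any] = {}          # key -> latest record (coalesced)
        self._lock = asyncio.Lock()
        self._flush_task: Optional[asyncio.Task] = None

    async def store_session(self, key: str, record) -> None:
        async with self._lock:
            self._pending[key] = record             # last write wins
            if len(self._pending) >= self._max_batch_size:
                await self._flush_locked()          # size trigger
            elif self._flush_task is None:
                self._flush_task = asyncio.create_task(self._delayed_flush())

    async def load_session(self, key: str):
        # Reads bypass the write buffer and go straight to the delegate;
        # buffered writes become visible after the next flush.
        return await self._delegate.load_session(key)

    async def _delayed_flush(self) -> None:
        await asyncio.sleep(self._max_delay)        # time trigger
        async with self._lock:
            await self._flush_locked()

    async def _flush_locked(self) -> None:
        self._flush_task = None
        if not self._pending:
            return
        batch, self._pending = self._pending, {}
        bulk = getattr(self._delegate, "bulk_store_sessions", None)
        if bulk is not None:                        # feature-detected bulk path
            await bulk(list(batch.values()))
        else:                                       # single-op fallback
            for key, record in batch.items():
                await self._delegate.store_session(key, record)

    async def close(self) -> None:
        async with self._lock:                      # drain queue; no silent loss
            if self._flush_task is not None:
                self._flush_task.cancel()
            await self._flush_locked()
```

A production version would additionally wrap the fallback loop in one delegate transaction and retry or re-raise on flush failure rather than dropping the batch.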
- Optional bulk capability (feature-detected):
  - Define an optional protocol (e.g., `BulkSessionStore`) with:
    - `bulk_store_sessions(records: list[StoredSession]) -> None`
    - `bulk_delete_sessions_by_hash(hashes: list[str]) -> None`
    - optionally `bulk_cleanup_expired() -> dict[str, int]`
  - Backends may implement these; wrappers check `isinstance(delegate, BulkSessionStore)` (or `hasattr`) and fall back when absent.
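One way to express the optional protocol, assuming a `typing.Protocol` with `runtime_checkable` (the `supports_bulk` helper is illustrative; `Any` stands in for the real `StoredSession` type):

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class BulkSessionStore(Protocol):
    """Optional bulk capability; backends may implement it, wrappers
    feature-detect it and fall back to single-op calls when absent."""

    async def bulk_store_sessions(self, records: list[Any]) -> None: ...
    async def bulk_delete_sessions_by_hash(self, hashes: list[str]) -> None: ...


def supports_bulk(delegate: object) -> bool:
    # isinstance() against a runtime_checkable Protocol checks method
    # *presence* only, not signatures -- equivalent in strength to hasattr(),
    # but self-documenting.
    return isinstance(delegate, BulkSessionStore)
```

The `bulk_cleanup_expired` hook would be declared on a separate optional protocol (or checked via `hasattr`) so backends can implement the store/delete pair without it.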
- SQLite backend updates:
  - Implement the bulk hooks using one transaction + `executemany` / `DELETE … WHERE access_token_hash IN (…)`.
  - Keep existing single-op methods as-is; bulk is an optimization only.
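The SQLite bulk hooks might look like the following sketch. The table and column names (`sessions`, `access_token_hash`, `payload`, `expires_at`) are placeholders, not the real schema, and the functions take a bare connection rather than living on the backend class:

```python
import sqlite3


def bulk_store_sessions(conn: sqlite3.Connection,
                        rows: list[tuple[str, str, float]]) -> None:
    # One transaction (the `with conn:` block) -> one fsync for the whole
    # batch instead of one per row; the upsert keeps the last write per key.
    with conn:
        conn.executemany(
            "INSERT INTO sessions (access_token_hash, payload, expires_at) "
            "VALUES (?, ?, ?) "
            "ON CONFLICT(access_token_hash) DO UPDATE SET "
            "payload = excluded.payload, expires_at = excluded.expires_at",
            rows,
        )


def bulk_delete_sessions_by_hash(conn: sqlite3.Connection,
                                 hashes: list[str]) -> None:
    if not hashes:
        return  # avoid an empty, syntactically invalid IN () clause
    placeholders = ",".join("?" * len(hashes))
    with conn:
        conn.execute(
            f"DELETE FROM sessions WHERE access_token_hash IN ({placeholders})",
            hashes,
        )
```

`ON CONFLICT … DO UPDATE` requires SQLite ≥ 3.24; the single-op methods stay untouched, so older deployments simply never hit the bulk path.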
- Tests to add (make them backend-agnostic via parametrized fixtures):
  - Core `TokenStore` (keep existing): states/auth-codes one-time, expiry pruning, session load/store/delete by token/id/refresh, encryption/plaintext opt-in, cleanup.
  - Caching wrapper suite:
    - Hit/miss/negative-cache: first miss hits the delegate, subsequent reads hit the cache; expired entries are evicted and not returned.
    - Invalidation: store/delete/consume paths evict the relevant cache keys; no stale reads after writes/deletes/consumes.
    - TTL skew: cache TTL is slightly shorter than record expiry; ensure expired sessions/auth-codes/states are not served.
  - Buffered write wrapper suite:
    - Flush triggers: size threshold and time threshold both flush to the delegate; after flush, data is persisted.
    - Coalescing: multiple writes to the same key before flush result in a single persisted value (last write wins).
    - Shutdown drain: `close()` drains the queue; no queued writes are lost.
    - Error propagation: delegate errors surface (or are retried once); no silent drops.
  - Bulk capability suite (run only if the backend advertises bulk):
    - `bulk_store_sessions` / `bulk_delete_sessions_by_hash` persist/delete all items; results match looping single-op calls.
    - Fallback parity: when bulk is absent, the wrapper falls back to single-op (ideally in one transaction) with identical outcomes.
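The backend-agnostic shape of these suites can be sketched as a contract check that runs unchanged against any store or wrapper. `MemoryTokenStore` and `check_session_contract` are illustrative names, not part of the codebase:

```python
import asyncio


class MemoryTokenStore:
    """Minimal in-memory stand-in; a real parametrized fixture would also
    yield the SQLite backend and each wrapper around each backend."""

    def __init__(self):
        self._sessions = {}

    async def store_session(self, key, record):
        self._sessions[key] = record

    async def load_session(self, key):
        return self._sessions.get(key)

    async def delete_session(self, key):
        self._sessions.pop(key, None)


async def check_session_contract(store) -> None:
    # The same assertions run against every backend/wrapper combination.
    await store.store_session("k", "v1")
    assert await store.load_session("k") == "v1"
    await store.store_session("k", "v2")          # overwrite: last write wins
    assert await store.load_session("k") == "v2"
    await store.delete_session("k")
    assert await store.load_session("k") is None  # no stale read after delete
```

In pytest this becomes a fixture parametrized over backend factories (memory, SQLite, caching-wrapped, buffered-wrapped), so each suite above is written once and exercised everywhere.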