From dd62927627b9b22b6d795b71791e7bc4d614d432 Mon Sep 17 00:00:00 2001 From: Mykyta Zotov Date: Sun, 1 Mar 2026 21:41:49 +0100 Subject: [PATCH 1/6] feat: implement multi-layer cache support with LayeredWindowCache and WindowCacheDataSourceAdapter; refactor: improve async delay handling and exception messages in data fetching methods --- README.md | 52 ++ docs/architecture.md | 154 ++++++ docs/components/overview.md | 29 + docs/components/public-api.md | 50 +- docs/diagnostics.md | 133 +++++ docs/glossary.md | 23 + docs/scenarios.md | 206 +++++++ docs/storage-strategies.md | 66 ++- .../Public/LayeredWindowCache.cs | 151 ++++++ .../Public/LayeredWindowCacheBuilder.cs | 205 +++++++ .../Public/WindowCacheDataSourceAdapter.cs | 139 +++++ .../LayeredCacheIntegrationTests.cs | 468 ++++++++++++++++ .../DataSources/SimpleTestDataSource.cs | 2 + .../Extensions/IntegerVariableStepDomain.cs | 21 +- .../Storage/CopyOnReadStorageTests.cs | 2 + .../Public/LayeredWindowCacheBuilderTests.cs | 343 ++++++++++++ .../Public/LayeredWindowCacheTests.cs | 510 ++++++++++++++++++ .../WindowCacheDataSourceAdapterTests.cs | 393 ++++++++++++++ 18 files changed, 2909 insertions(+), 38 deletions(-) create mode 100644 src/SlidingWindowCache/Public/LayeredWindowCache.cs create mode 100644 src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs create mode 100644 src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs create mode 100644 tests/SlidingWindowCache.Integration.Tests/LayeredCacheIntegrationTests.cs create mode 100644 tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs create mode 100644 tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs create mode 100644 tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs diff --git a/README.md b/README.md index 56fad91..5370f98 100644 --- a/README.md +++ b/README.md @@ -351,6 +351,58 @@ This is a thin composition of `GetDataAsync` followed by `WaitForIdleAsync`. 
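+
+The composition can be sketched as follows. This is an illustrative shape only, not the
+shipped extension method; the generic parameter and the `Range` and token types are
+assumptions here:
+
+```csharp
+// Sketch: eventual-consistency read first, then wait for background work to settle.
+public static async Task<ReadOnlyMemory<TData>> GetDataAndWaitForIdleAsync<TData>(
+    this IWindowCache<TData> cache, Range range, CancellationToken ct = default)
+{
+    var data = await cache.GetDataAsync(range, ct); // serve immediately (eventual consistency)
+    await cache.WaitForIdleAsync();                 // then wait until background rebalancing is idle
+    return data;
+}
+```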
The `WaitForIdleAsync()` provides race-free synchronization with background operations for tests. Uses "was idle at some point" semantics — does not guarantee still idle after completion. See `docs/invariants.md` (Activity tracking invariants). +## Multi-Layer Cache + +For workloads with high-latency data sources, you can compose multiple `WindowCache` instances into a layered stack. Each layer uses the layer below it as its data source, allowing you to trade memory for reduced data-source I/O. + +```csharp +await using var cache = LayeredWindowCacheBuilder + .Create(realDataSource, domain) + .AddLayer(new WindowCacheOptions( // L2: deep background cache + leftCacheSize: 10.0, + rightCacheSize: 10.0, + readMode: UserCacheReadMode.CopyOnRead, + leftThreshold: 0.3, + rightThreshold: 0.3)) + .AddLayer(new WindowCacheOptions( // L1: user-facing cache + leftCacheSize: 0.5, + rightCacheSize: 0.5, + readMode: UserCacheReadMode.Snapshot)) + .Build(); + +var result = await cache.GetDataAsync(range, ct); +``` + +`LayeredWindowCache` implements `IWindowCache` and is `IAsyncDisposable` — it owns and disposes all layers when you dispose it. + +**Recommended layer configuration pattern:** +- **Inner layers** (closest to the data source): `CopyOnRead`, large buffer sizes (5–10×), handles the heavy prefetching +- **Outer (user-facing) layer**: `Snapshot`, small buffer sizes (0.3–1.0×), zero-allocation reads + +> **Important — buffer ratio requirement:** Inner layer buffers must be **substantially** larger +> than outer layer buffers, not merely slightly larger. When the outer layer rebalances, it +> fetches missing ranges from the inner layer via `GetDataAsync`. Each fetch publishes a +> rebalance intent on the inner layer. 
If the inner layer's `NoRebalanceRange` is not wide +> enough to contain the outer layer's full `DesiredCacheRange`, the inner layer will also +> rebalance — and re-center toward only one side of the outer layer's gap, leaving it poorly +> positioned for the next rebalance. With undersized inner buffers this becomes a continuous +> cycle (cascading rebalance thrashing). Use a 5–10× ratio and `leftThreshold`/`rightThreshold` +> of 0.2–0.3 on inner layers to ensure the inner layer's stability zone absorbs the outer +> layer's rebalance fetches. See `docs/architecture.md` (Cascading Rebalance Behavior) and +> `docs/scenarios.md` (Scenarios L6 and L7) for the full explanation. + +**Three-layer example:** +```csharp +await using var cache = LayeredWindowCacheBuilder + .Create(realDataSource, domain) + .AddLayer(l3Options) // L3: 10× CopyOnRead — network/disk absorber + .AddLayer(l2Options) // L2: 2× CopyOnRead — mid-level buffer + .AddLayer(l1Options) // L1: 0.5× Snapshot — user-facing + .Build(); +``` + +For detailed guidance see `docs/storage-strategies.md`. + ## License MIT diff --git a/docs/architecture.md b/docs/architecture.md index 02d3f6b..523b1bb 100644 --- a/docs/architecture.md +++ b/docs/architecture.md @@ -316,6 +316,160 @@ Disposal respects the single-writer architecture: --- +## Multi-Layer Caches + +### Overview + +Multiple `WindowCache` instances can be stacked into a cache pipeline where each layer's +`IDataSource` is the layer below it. This is built into the library via three public types: + +- **`WindowCacheDataSourceAdapter`** — adapts any `IWindowCache` as an `IDataSource` so it can + serve as a backing store for an outer `WindowCache`. +- **`LayeredWindowCacheBuilder`** — fluent builder that wires the layers together and returns a + `LayeredWindowCache` that owns and disposes all of them. 
+- **`LayeredWindowCache`** — thin `IWindowCache` wrapper that delegates `GetDataAsync` to the
+  outermost layer, awaits all layers sequentially (outermost-to-innermost) on `WaitForIdleAsync`,
+  and disposes all layers outermost-first on disposal.
+
+### Architectural Properties
+
+**Each layer is an independent `WindowCache`.**
+Every layer obeys the full single-writer architecture, decision-driven execution, and smart
+eventual consistency model described in this document. There is no shared state between layers.
+
+**Data flows inward on miss, outward on return.**
+When the outermost layer does not have data in its window, it calls the adapter's `FetchAsync`,
+which calls `GetDataAsync` on the next inner layer. This cascades inward until the real data
+source is reached. Each layer then caches the data it fetched and returns it up the chain.
+
+**Full-stack convergence via `WaitForIdleAsync`.**
+`WaitForIdleAsync` on `LayeredWindowCache` awaits all layers sequentially, outermost to innermost.
+The outermost layer must be awaited first, because its rebalance drives fetch requests (via the
+adapter) into inner layers — only once the outer layer is idle can inner layers be known to have
+received all pending work. This guarantees that calling `GetDataAndWaitForIdleAsync` on a
+`LayeredWindowCache` waits for the entire cache stack to converge, not just the user-facing layer.
+Each inner layer independently manages its own idle state via `AsyncActivityCounter`.
+
+**Eventual consistency, not strong consistency, between layers.**
+The adapter uses `GetDataAsync` (eventual consistency), not `GetDataAndWaitForIdleAsync`. Inner
+layers are not forced to converge before serving the outer layer. Each layer serves correct data
+immediately; prefetch optimization propagates asynchronously at each layer independently.
+
+**No new concurrency model.** A layered cache is not a multi-consumer scenario.
All user +requests flow through the single outermost layer, which remains the sole logical consumer of the +next inner layer (via the adapter). The single-consumer model holds at every layer boundary. + +**Disposal order.** `LayeredWindowCache.DisposeAsync` disposes layers outermost-first: +the user-facing layer is stopped first (no new requests flow into inner layers), then each inner +layer is disposed in turn. This mirrors the single-writer disposal sequence at each layer. + +### Recommended Layer Configuration + +| Layer | `UserCacheReadMode` | Buffer size | Purpose | +|---------------------------------------------|---------------------|-------------|----------------------------------------| +| Innermost (deepest, closest to data source) | `CopyOnRead` | 5–10× | Wide prefetch window; absorbs I/O cost | +| Intermediate (optional) | `CopyOnRead` | 1–3× | Narrows window toward working set | +| Outermost (user-facing) | `Snapshot` | 0.3–1.0× | Zero-allocation reads; minimal memory | + +Inner layers with `CopyOnRead` make cache writes cheap (growable list, no copy on write) while +outer `Snapshot` layers make reads cheap (single contiguous array, zero per-read allocation). + +### Cascading Rebalance Behavior + +This is the most important configuration concern in a layered cache setup. + +#### Mechanism + +When L1 rebalances, its `CacheDataExtensionService` computes missing ranges +(`DesiredCacheRange \ AssembledRangeData`) and calls the batch `FetchAsync(IEnumerable, ct)` +on the `WindowCacheDataSourceAdapter`. Because the adapter only implements the single-range +`FetchAsync` overload, the default `IDataSource` interface implementation dispatches one +parallel call per missing range via `Task.WhenAll`. + +Each call reaches L2's `GetDataAsync`, which: +1. Serves the data immediately (from L2's cache or by fetching from L2's own data source) +2. 
**Publishes a rebalance intent on L2** with that individual range + +When L1's `DesiredCacheRange` extends beyond L2's current window on both sides, L1's rebalance +produces two gap ranges (left and right). Both `GetDataAsync` calls on L2 happen in parallel. +L2's intent loop processes whichever intent it sees last ("latest wins"), and if that range +falls outside L2's `NoRebalanceRange`, L2 schedules its own background rebalance. + +This is a **cascading rebalance**: L1's rebalance triggers L2's rebalance. Under sequential +access with correct configuration this should be rare. Under misconfiguration it becomes a +continuous cycle — every L1 rebalance triggers an L2 rebalance, which re-centers L2 toward +just one gap side, leaving L2 poorly positioned for L1's next rebalance. + +#### Natural Mitigations Already in Place + +The system provides several natural defences against cascading rebalances, even before +configuration is considered: + +- **"Latest wins" semantics**: When two parallel `GetDataAsync` calls publish intents on L2, + the intent loop processes only the surviving (latest) intent. At most one L2 rebalance is + triggered per L1 rebalance burst, regardless of how many gap ranges L1 fetched. +- **Debounce delay**: L2's debounce delay further coalesces rapid sequential intent publications. + Parallel intents from a single L1 rebalance will typically be absorbed into one debounce window. +- **Decision engine work avoidance**: If the surviving intent range falls within L2's + `NoRebalanceRange`, L2's Decision Engine rejects rebalance at Stage 1 (fast path). No L2 + rebalance is triggered at all. This is the **desired steady-state** under correct configuration. + +#### Configuration Requirements + +The natural mitigations are only effective when L2's buffer is substantially larger than L1's. 
+The goal is that L1's full `DesiredCacheRange` fits comfortably within L2's `NoRebalanceRange` +during normal sequential access — making Stage 1 rejection the norm, not the exception. + +**Buffer ratio rule of thumb:** + +| Layer | `leftCacheSize` / `rightCacheSize` | `leftThreshold` / `rightThreshold` | +|----------------|------------------------------------|--------------------------------------------| +| L1 (outermost) | 0.3–1.0× | 0.1–0.2 (can be tight — L2 absorbs misses) | +| L2 (inner) | 5–10× L1's buffer | 0.2–0.3 (wider stability zone) | +| L3+ (deeper) | 3–5× the layer above | 0.2–0.3 | + +With these ratios, L1's `DesiredCacheRange` (which expands L1's buffer around the request) +typically falls well within L2's `NoRebalanceRange` (which is L2's buffer shrunk by its +thresholds). L2's Decision Engine skips rebalance at Stage 1, and no cascading occurs. + +**Why the ratio matters more than the absolute size:** + +Suppose L1 has `leftCacheSize=1.0, rightCacheSize=1.0` and `requestedRange` has length 100. +L1's `DesiredCacheRange` will be approximately `[request - 100, request + 100]` (length 300). +For L2's Stage 1 to reject the rebalance, L2's `NoRebalanceRange` must contain that +`[request - 100, request + 100]` interval. L2's `NoRebalanceRange` is derived from +`CurrentCacheRange` by applying L2's thresholds inward. So L2 needs a `CurrentCacheRange` +substantially larger than L1's `DesiredCacheRange`. + +#### Anti-Pattern: Buffers Too Close in Size + +**What goes wrong when L2's buffer is similar to L1's:** + +1. User scrolls → L1 rebalances, extending to `[50, 300]` +2. L1 fetches left gap `[50, 100)` and right gap `(250, 300]` from L2 in parallel +3. Both ranges fall outside L2's `NoRebalanceRange` (L2's buffer isn't large enough to cover them) +4. L2 re-centers toward the last-processed gap — say, `(250, 300]` +5. L2's `CurrentCacheRange` is now `[200, 380]` +6. User scrolls again → L1 rebalances to `[120, 370]` +7. 
Left gap `[120, 200)` falls outside L2's window — L2 must fetch from its own data source +8. L2 re-centers again → oscillation + +**Symptoms:** `l2.RebalanceExecutionCompleted` count approaches `l1.RebalanceExecutionCompleted`. +The inner layer provides no meaningful buffering benefit. Data source I/O per user request is +not reduced compared to a single-layer cache. + +**Resolution:** Increase L2's `leftCacheSize` and `rightCacheSize` to 5–10× L1's values, and +set L2's `leftThreshold` / `rightThreshold` to 0.2–0.3. + +### See Also + +- `README.md` — Multi-Layer Cache usage examples and configuration warning +- `docs/scenarios.md` — Scenarios L6 (cascading rebalance mechanics) and L7 (anti-pattern) +- `docs/storage-strategies.md` — Storage strategy trade-offs for layered configs +- `docs/components/public-api.md` — API reference for the three new public types + +--- + ## Invariants This document explains the model; the formal guarantees live in `docs/invariants.md`. diff --git a/docs/components/overview.md b/docs/components/overview.md index 0abfc8a..ba1962e 100644 --- a/docs/components/overview.md +++ b/docs/components/overview.md @@ -18,6 +18,7 @@ The system is easier to reason about when components are grouped by: - Public facade: `WindowCache` - Public extensions: `WindowCacheExtensions` — opt-in strong consistency mode (`GetDataAndWaitForIdleAsync`) +- Multi-layer support: `WindowCacheDataSourceAdapter`, `LayeredWindowCacheBuilder`, `LayeredWindowCache` - User Path: assembles requested data and publishes intent - Intent loop: observes latest intent and runs analytical validation - Execution: performs debounced, cancellable rebalance work and mutates cache state @@ -54,6 +55,34 @@ The system is easier to reason about when components are grouped by: ├── 🟦 RebalanceExecutor └── 🟦 CacheDataExtensionService └── uses → 🟧 IDataSource (user-provided) + +──────────────────────────── Multi-Layer Support ──────────────────────────── + +🟦 LayeredWindowCacheBuilder 
[Fluent Builder] +│ Static Create(dataSource, domain) → builder +│ AddLayer(options, diagnostics?) → builder (fluent chain) +│ Build() → LayeredWindowCache +│ +│ internally wires: +│ IDataSource → WindowCache → WindowCacheDataSourceAdapter +│ │ +│ ▼ +│ WindowCache → WindowCacheDataSourceAdapter → ... +│ │ +│ ▼ (outermost) +└─────────────────────────────────► WindowCache + (user-facing layer, index = LayerCount-1) + +🟦 LayeredWindowCache [IWindowCache wrapper] +│ LayerCount: int +│ GetDataAsync() → delegates to outermost WindowCache +│ WaitForIdleAsync() → awaits all layers sequentially, outermost to innermost +│ DisposeAsync() → disposes all layers outermost-first + +🟦 WindowCacheDataSourceAdapter [IDataSource adapter] +│ Wraps IWindowCache as IDataSource +│ FetchAsync() → calls inner cache's GetDataAsync() +│ converts ReadOnlyMemory → array for RangeChunk ``` **Component Type Legend:** diff --git a/docs/components/public-api.md b/docs/components/public-api.md index 1f29e86..2dd213b 100644 --- a/docs/components/public-api.md +++ b/docs/components/public-api.md @@ -145,7 +145,55 @@ Composes `GetDataAsync` + `WaitForIdleAsync` into a single call. Returns the sam **See**: `README.md` (Strong Consistency Mode section) and `docs/architecture.md` for broader context. -## See Also +## Multi-Layer Cache + +Three classes support building layered cache stacks where each layer's data source is the layer below it: + +### WindowCacheDataSourceAdapter\ + +**File**: `src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs` + +**Type**: `sealed class` implementing `IDataSource` + +Wraps an `IWindowCache` as an `IDataSource`, allowing any `WindowCache` to act as the data source for an outer `WindowCache`. Data is retrieved using eventual consistency (`GetDataAsync`). + +- Converts `ReadOnlyMemory` (returned by `IWindowCache.GetDataAsync`) to `IEnumerable` (required by `IDataSource.FetchAsync`) via `.ToArray()`. 
+- Does **not** own the wrapped cache; the caller is responsible for disposing it. + +### LayeredWindowCache\ + +**File**: `src/SlidingWindowCache/Public/LayeredWindowCache.cs` + +**Type**: `sealed class` implementing `IWindowCache` and `IAsyncDisposable` + +A thin wrapper that: +- Delegates `GetDataAsync` to the outermost layer. +- **`WaitForIdleAsync` awaits all layers sequentially, outermost to innermost.** The outer layer is awaited first because its rebalance drives fetch requests into inner layers. This ensures `GetDataAndWaitForIdleAsync` correctly waits for the entire cache stack to converge. +- **Owns** all layer `WindowCache` instances and disposes them in reverse order (outermost first) when disposed. +- Exposes `LayerCount` for inspection. + +Typically created via `LayeredWindowCacheBuilder.Build()` rather than directly. + +### LayeredWindowCacheBuilder\ + +**File**: `src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs` + +**Type**: `sealed class` — fluent builder + +```csharp +await using var cache = LayeredWindowCacheBuilder + .Create(realDataSource, domain) + .AddLayer(deepOptions) // L2: inner layer (CopyOnRead, large buffers) + .AddLayer(userOptions) // L1: outer layer (Snapshot, small buffers) + .Build(); +``` + +- `Create(dataSource, domain)` — factory entry point; validates `dataSource` is not null. +- `AddLayer(options, diagnostics?)` — adds a layer on top; first call = innermost layer, last call = outermost (user-facing). +- `Build()` — constructs all `WindowCache` instances, wires them via `WindowCacheDataSourceAdapter`, and wraps them in `LayeredWindowCache`. +- Throws `InvalidOperationException` from `Build()` if no layers were added. + +**See**: `README.md` (Multi-Layer Cache section) and `docs/storage-strategies.md` for recommended layer configuration patterns. 
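+
+A short end-to-end sketch tying the three types together, using only the members documented
+above (`realDataSource`, `domain`, `range`, `ct`, and the option objects are placeholders from
+the earlier examples):
+
+```csharp
+await using var cache = LayeredWindowCacheBuilder
+    .Create(realDataSource, domain)
+    .AddLayer(deepOptions)      // innermost layer
+    .AddLayer(userOptions)      // outermost, user-facing layer
+    .Build();
+
+Console.WriteLine(cache.LayerCount);            // 2: one layer per AddLayer call
+
+var data = await cache.GetDataAsync(range, ct); // delegates to the outermost layer
+await cache.WaitForIdleAsync();                 // awaits every layer, outermost to innermost
+// leaving the await-using scope disposes all layers, outermost first
+```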
+## See Also
+
 - `docs/boundary-handling.md`
 - `docs/diagnostics.md`
diff --git a/docs/diagnostics.md b/docs/diagnostics.md
index 936a13e..d145b66 100644
--- a/docs/diagnostics.md
+++ b/docs/diagnostics.md
@@ -762,6 +762,139 @@ public class PrometheusMetricsDiagnostics : ICacheDiagnostics
 
 ---
 
+## Per-Layer Diagnostics in Layered Caches
+
+When using `LayeredWindowCacheBuilder`, each cache layer can be given its own independent
+`ICacheDiagnostics` instance. This lets you observe the behavior of each layer in isolation,
+which is the primary tool for tuning buffer sizes and thresholds in a multi-layer setup.
+
+### Attaching Diagnostics to Individual Layers
+
+Pass a diagnostics instance as the second argument to `AddLayer`:
+
+```csharp
+var l2Diagnostics = new EventCounterCacheDiagnostics();
+var l1Diagnostics = new EventCounterCacheDiagnostics();
+
+await using var cache = LayeredWindowCacheBuilder
+    .Create(realDataSource, domain)
+    .AddLayer(deepOptions, l2Diagnostics) // L2: inner / deep layer
+    .AddLayer(userOptions, l1Diagnostics) // L1: outermost / user-facing layer
+    .Build();
+```
+
+Omit the second argument (or pass `null`) to use the default `NoOpDiagnostics` for that layer.
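+
+The same pattern extends to deeper stacks: pass one diagnostics instance per `AddLayer` call.
+A sketch for the three-layer configuration shown in the README (the data source, domain, and
+option objects are placeholders):
+
+```csharp
+var l3Diagnostics = new EventCounterCacheDiagnostics(); // innermost layer
+var l2Diagnostics = new EventCounterCacheDiagnostics();
+var l1Diagnostics = new EventCounterCacheDiagnostics(); // outermost, user-facing layer
+
+await using var cache = LayeredWindowCacheBuilder
+    .Create(realDataSource, domain)
+    .AddLayer(l3Options, l3Diagnostics) // watch DataSourceFetch* here: real data-source I/O
+    .AddLayer(l2Options, l2Diagnostics)
+    .AddLayer(l1Options, l1Diagnostics) // watch UserRequestFullCacheHit here: user-perceived hit rate
+    .Build();
+```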
+ +### What Each Layer's Diagnostics Report + +Because each layer is a fully independent `WindowCache`, every `ICacheDiagnostics` event has +the same meaning as documented in the single-cache sections above — but scoped to that layer: + +| Event | Meaning in a layered context | +|-------------------------------------------|------------------------------------------------------------------------------------| +| `UserRequestServed` | A request was served by **this layer** (whether from cache or via adapter) | +| `UserRequestFullCacheHit` | The request was served entirely from **this layer's** window | +| `UserRequestPartialCacheHit` | This layer partially served the request; the rest was fetched from the layer below | +| `UserRequestFullCacheMiss` | This layer had no data; the full request was delegated to the layer below | +| `DataSourceFetchSingleRange` | This layer called the layer below (via the adapter) for a single range | +| `DataSourceFetchMissingSegments` | This layer called the layer below for gap-filling segments only | +| `RebalanceExecutionCompleted` | This layer completed a background rebalance (window expansion/shrink) | +| `RebalanceSkippedCurrentNoRebalanceRange` | This layer's rebalance was skipped — still within its stability zone | + +### Detecting Cascading Rebalances + +A **cascading rebalance** occurs when the outer layer's rebalance fetches ranges from the +inner layer that fall outside the inner layer's `NoRebalanceRange`, causing the inner layer +to also rebalance. Under correct configuration this should be rare. Under misconfiguration +it becomes continuous and defeats the purpose of layering. 
+ +**Primary indicator — compare rebalance completion counts:** + +```csharp +// After a sustained sequential access session: +var l1Rate = l1Diagnostics.RebalanceExecutionCompleted; +var l2Rate = l2Diagnostics.RebalanceExecutionCompleted; + +// Healthy: L2 rebalances much less often than L1 +// l2Rate should be << l1Rate for normal sequential access + +// Unhealthy: L2 rebalances nearly as often as L1 +// l2Rate ≈ l1Rate → cascading rebalance thrashing +``` + +**Secondary confirmation — check skip counts on the inner layer:** + +```csharp +// Under correct configuration, the inner layer's Decision Engine +// should reject most L1-driven intents at Stage 1 (NoRebalanceRange containment). +// This counter should be much higher than l2.RebalanceExecutionCompleted. +var l2SkippedStage1 = l2Diagnostics.RebalanceSkippedCurrentNoRebalanceRange; + +// Healthy ratio: l2SkippedStage1 >> l2Rate +// Unhealthy ratio: l2SkippedStage1 ≈ 0 while l2Rate is high +``` + +**Confirming the data source is being hit too frequently:** + +```csharp +// If the inner layer is rebalancing on every L1 rebalance, +// it will also be fetching from the real data source frequently. +// This counter on the innermost layer should grow slowly under correct config. +var dataSourceFetches = lInnerDiagnostics.DataSourceFetchMissingSegments + + lInnerDiagnostics.DataSourceFetchSingleRange; +``` + +**Resolution checklist when cascading is detected:** + +1. Increase inner layer `leftCacheSize` and `rightCacheSize` to 5–10× the outer layer's values +2. Set inner layer `leftThreshold` and `rightThreshold` to 0.2–0.3 +3. Re-run the access pattern and verify `l2.RebalanceSkippedCurrentNoRebalanceRange` dominates +4. 
See `docs/architecture.md` (Cascading Rebalance Behavior) and `docs/scenarios.md` (L6, L7)
+   for a full explanation of the mechanics and the anti-pattern.
+
+### Monitoring Layer Hit Rates
+
+**Inner layer hit rate:**
+```
+l2Diagnostics.UserRequestFullCacheHit / l2Diagnostics.UserRequestServed
+```
+A low hit rate on the inner layer means L1 is frequently delegating to L2 — consider
+increasing L2's buffer sizes (`leftCacheSize` / `rightCacheSize`).
+
+**Outer layer hit rate:**
+```
+l1Diagnostics.UserRequestFullCacheHit / l1Diagnostics.UserRequestServed
+```
+The outer layer hit rate is what users directly experience. If it is low, consider increasing
+L1's buffer size or tightening the `leftThreshold` / `rightThreshold` to reduce rebalancing.
+
+**Real data source access rate (bypassing all layers):**
+
+Monitor `l_innermost_diagnostics.DataSourceFetchSingleRange` or
+`DataSourceFetchMissingSegments` on the innermost layer. These represent requests that went
+all the way to the real data source. Reducing this rate (by widening inner layer buffers) is
+the primary goal of a multi-layer setup.
+
+**Rebalance frequency:**
+```
+l1Diagnostics.RebalanceExecutionCompleted // How often L1 is re-centering
+l2Diagnostics.RebalanceExecutionCompleted // How often L2 is re-centering
+```
+If L1's absolute rebalance rate is high, L1 is either configured too narrowly or the access
+pattern has high variability. Consider widening L1's buffers or adjusting its thresholds; if
+variability is the cause, also widen L2 so the extra misses stay cheap.
+
+### Production Guidance for Layered Caches
+
+- **Always handle `RebalanceExecutionFailed` on each layer.** Background rebalance failures
+  on any layer are silent without a proper implementation. See the production requirements
+  section above — they apply to every layer independently.
+
+- **Use separate `EventCounterCacheDiagnostics` instances per layer** during development
+  and staging to establish baseline metrics. In production, replace with custom
+  implementations that export to your monitoring infrastructure.
+ +- **Layer diagnostics are completely independent.** There is no aggregate or combined + diagnostics object; you observe each layer separately and interpret the metrics in + relation to each other. + +--- + ## See Also - **[Invariants](invariants.md)** - System invariants tracked by diagnostics diff --git a/docs/glossary.md b/docs/glossary.md index f2cab7c..0e37e7a 100644 --- a/docs/glossary.md +++ b/docs/glossary.md @@ -115,6 +115,29 @@ Strong Consistency Mode - Not recommended for hot paths: adds latency equal to the rebalance execution time (debounce delay + I/O). - See `README.md` and `docs/components/public-api.md`. +## Multi-Layer Caches + +Layered Cache +- A pipeline of two or more `WindowCache` instances where each layer's `IDataSource` is the layer below it. Created via `LayeredWindowCacheBuilder`. The user interacts with the outermost layer; inner layers serve as warm prefetch buffers. See `docs/architecture.md` and `README.md`. + +Cascading Rebalance +- When an outer layer's rebalance fetches missing ranges from the inner layer via `GetDataAsync`, each fetch publishes a rebalance intent on the inner layer. If those ranges fall outside the inner layer's `NoRebalanceRange`, the inner layer also schedules a rebalance. Under correct configuration (inner buffers 5–10× larger than outer buffers) this is rare — the inner layer's Decision Engine rejects the intent at Stage 1. Under misconfiguration it becomes continuous (see "Cascading Rebalance Thrashing"). See `docs/architecture.md` (Cascading Rebalance Behavior) and `docs/scenarios.md` (Scenarios L6, L7). + +Cascading Rebalance Thrashing +- The failure mode of a misconfigured layered cache where every outer layer rebalance triggers an inner layer rebalance, which re-centers the inner layer toward only one side of the outer layer's gap, leaving it poorly positioned for the next rebalance. 
Symptoms: `l2.RebalanceExecutionCompleted ≈ l1.RebalanceExecutionCompleted`; the inner layer provides no buffering benefit. Resolution: increase inner layer buffer sizes to 5–10× the outer layer's and use thresholds of 0.2–0.3. See `docs/scenarios.md` (Scenario L7). + +Layer +- A single `WindowCache` instance in a layered cache stack. Layers are ordered by proximity to the user: L1 = outermost (user-facing), L2 = next inner, Lₙ = innermost (closest to the real data source). + +WindowCacheDataSourceAdapter +- Adapts an `IWindowCache` to the `IDataSource` interface, enabling it to act as the backing store for an outer `WindowCache`. This is the composition point for building layered caches. The adapter does not own the inner cache; ownership is managed by `LayeredWindowCache`. See `src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs`. + +LayeredWindowCacheBuilder +- Fluent builder that wires `WindowCache` layers into a `LayeredWindowCache`. Layers are added bottom-up (deepest/innermost first, user-facing last). Each `AddLayer` call adds one `WindowCache` on top of the current stack. `Build()` returns a `LayeredWindowCache` that owns all layers. See `src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs`. + +LayeredWindowCache +- A thin `IWindowCache` wrapper that owns a stack of `WindowCache` layers. Delegates `GetDataAsync` to the outermost layer. `WaitForIdleAsync` awaits all layers sequentially, outermost to innermost, ensuring full-stack convergence (required for correct behavior of `GetDataAndWaitForIdleAsync`). Disposes all layers outermost-first on `DisposeAsync`. Exposes `LayerCount`. See `src/SlidingWindowCache/Public/LayeredWindowCache.cs`. 
+ ## Storage And Materialization UserCacheReadMode diff --git a/docs/scenarios.md b/docs/scenarios.md index 800f0bf..dd1b3e9 100644 --- a/docs/scenarios.md +++ b/docs/scenarios.md @@ -337,6 +337,212 @@ For concurrency correctness, the following guarantees hold: Temporary non-optimal cache geometry is acceptable. Permanent inconsistency is not. +--- + +## V. Multi-Layer Cache Scenarios + +These scenarios describe the temporal behavior when `LayeredWindowCacheBuilder` is used to +create a cache stack of two or more `WindowCache` layers. + +**Notation:** L1 = outermost (user-facing) layer; L2 = next inner layer; Lₙ = innermost layer +(directly above the real `IDataSource`). Data requests flow L1 → L2 → ... → Lₙ → data source; +data returns in reverse order. + +--- + +### L1 — Cold Start (All Layers Uninitialized) + +**Preconditions:** +- All layers uninitialized (`IsInitialized == false` at every layer) + +**Action Sequence:** +1. User calls `GetDataAsync(range)` on `LayeredWindowCache` → delegates to L1 +2. L1 (cold): calls `FetchAsync(range)` on the adapter → calls L2's `GetDataAsync(range)` +3. L2 (cold): calls `FetchAsync(range)` on the adapter → continues inward until Lₙ +4. Lₙ (cold): fetches `range` from the real `IDataSource`; returns data; publishes intent +5. Each inner layer returns data upward, each publishes its own rebalance intent (fire-and-forget) +6. L1 receives data from L2 adapter; publishes its own intent; returns data to user +7. In the background, each layer independently rebalances to its configured `DesiredCacheRange` + +**Key insight:** The first user request traverses the full stack. Subsequent requests will be +served from whichever layer has the data in its window (L1 first, then L2, etc.). + +--- + +### L2 — L1 Cache Hit (Outermost Layer Serves Request) + +**Preconditions:** +- All layers initialized +- L1 `CurrentCacheRange.Contains(requestedRange) == true` + +**Action Sequence:** +1. 
User calls `GetDataAsync(requestedRange)` → L1 has the data +2. L1 serves the request from its cache without contacting L2 +3. L1 publishes an intent (fire-and-forget); Decision Engine evaluates whether L1 needs rebalancing +4. L2 and deeper layers are NOT contacted; they continue their own background rebalancing independently + +**Key insight:** The outermost layer absorbs requests that fall within its window, providing the +lowest latency. Inner layers are only contacted on L1 misses. + +--- + +### L3 — L1 Miss, L2 Hit (Outer Miss Delegates to Next Layer) + +**Preconditions:** +- All layers initialized +- L1 does NOT have `requestedRange` in its window +- L2 `CurrentCacheRange.Contains(requestedRange) == true` + +**Action Sequence:** +1. User calls `GetDataAsync(requestedRange)` → L1 misses +2. L1 calls `FetchAsync(requestedRange)` on the L2 adapter +3. L2 serves the request from its own cache; publishes its own rebalance intent +4. L2 adapter returns a `RangeChunk` to L1 +5. L1 assembles and returns data to the user; publishes its rebalance intent +6. L1's background rebalance subsequently fetches the wider range from L2 (via adapter), + expanding L1's window to cover similar future requests without contacting L2 + +**Key insight:** L2 acts as a warm prefetch buffer. L1 pays one adapter call on miss, then +rebalances to prevent the same miss on the next request. + +--- + +### L4 — Full Stack Miss (Request Falls Outside All Layer Windows) + +**Preconditions:** +- All layers initialized +- `requestedRange` falls outside every layer's current window (e.g., a large jump) + +**Action Sequence:** +1. User calls `GetDataAsync(requestedRange)` → L1 misses +2. L1 adapter → L2 misses → ... → Lₙ misses → real `IDataSource` fetches data +3. Data flows back up the chain; each layer publishes its own rebalance intent +4. 
User receives data immediately; all layers' background rebalances cascade independently + +**Note:** In a large jump, each layer's rebalance independently re-centers around the new region. +The stack converges from the inside out: Lₙ expands first (driving real I/O), then L(n-1) expands +from Lₙ's new window, and finally L1 expands from L2. + +--- + +### L5 — Per-Layer Diagnostics Observation + +**Setup:** +```csharp +var l2Diagnostics = new EventCounterCacheDiagnostics(); +var l1Diagnostics = new EventCounterCacheDiagnostics(); + +await using var cache = LayeredWindowCacheBuilder + .Create(dataSource, domain) + .AddLayer(deepOptions, l2Diagnostics) // L2 + .AddLayer(userOptions, l1Diagnostics) // L1 + .Build(); +``` + +**Observation pattern:** +- `l1Diagnostics.UserRequestFullCacheHit` — requests served entirely from L1 +- `l2Diagnostics.UserRequestFullCacheHit` — requests L1 delegated to L2 that L2 served from cache +- `l2Diagnostics.DataSourceFetchSingleRange` — requests that reached the real data source +- `l1Diagnostics.RebalanceExecutionCompleted` — how often L1's window was re-centered + +**Key insight:** Each layer has fully independent diagnostics. By comparing hit rates across +layers you can tune buffer sizes and thresholds for the access pattern in production. + +--- + +### L6 — Cascading Rebalance (L1 Rebalance Triggers L2 Rebalance) + +This scenario describes the internal mechanics of a cascading rebalance. Understanding it +is essential for correct layer configuration. See also `docs/architecture.md` (Cascading +Rebalance Behavior) and Scenario L7 for the anti-pattern case. + +**Preconditions:** +- Both layers initialized +- User has scrolled forward enough that L1's `DesiredCacheRange` now extends **beyond** L2's + `NoRebalanceRange` on at least one side (e.g., L2's buffers are too small relative to L1's) + +**Action Sequence:** +1. User calls `GetDataAsync(range)` → L1 serves from cache; publishes rebalance intent +2. 
L1's Decision Engine confirms rebalance needed (range outside L1's `NoRebalanceRange`) +3. L1's rebalance computes: `AssembledRangeData = [100, 250]`, `DesiredCacheRange = [50, 300]` +4. Missing ranges: left gap `[50, 100)` and right gap `(250, 300]` +5. L1 calls `dataSource.FetchAsync({[50,100), (250,300]}, ct)` on the adapter +6. The adapter's default batch implementation dispatches **two parallel** `GetDataAsync` calls to L2 +7. L2 serves both ranges from its cache (or its own data source); returns data to L1 +8. **L2 publishes two rebalance intents** — one per `GetDataAsync` call (fire-and-forget) +9. L2's intent loop applies "latest wins" — one intent supersedes the other +10. L2's Decision Engine evaluates the surviving intent against L2's `NoRebalanceRange` + +**Branch A — Cascading rebalance avoided (correct configuration):** +- The surviving range falls inside L2's `NoRebalanceRange` (Stage 1 rejection) +- L2 skips rebalance entirely — no I/O, no cache mutation +- This is the **desired steady-state**: L2's large buffer absorbed L1's fetch without reacting + +**Branch B — Cascading rebalance occurs (buffer too small):** +- The surviving range falls outside L2's `NoRebalanceRange` +- L2 schedules its own background rebalance +- L2 re-centers toward the surviving intent range (one gap side, not the midpoint of L1's desired range) +- L2's `CurrentCacheRange` shifts — potentially leaving it poorly positioned for L1's next rebalance + +**Key insight:** Whether Branch A or Branch B occurs is determined entirely by configuration. +Making L2's `leftCacheSize`/`rightCacheSize` 5–10× larger than L1's, and using +`leftThreshold`/`rightThreshold` of 0.2–0.3, makes Branch A the norm. + +--- + +### L7 — Anti-Pattern: Cascading Rebalance Thrashing + +This scenario describes the failure mode when inner layer buffers are too close in size to outer +layer buffers. Do not configure a layered cache this way. 
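+Whether a fetch lands in Branch A or Branch B (Scenario L6) is plain interval arithmetic. The
+sketch below is a hypothetical model — NOT the library's actual Decision Engine — assuming that
+thresholds shrink the layer's window inward by a fraction of its length on each side to form the
+`NoRebalanceRange`. With the misconfigured values from this scenario, L2's window `[0, 260]`
+yields a no-rebalance zone of roughly `[26, 234]`, so L1's right-gap fetch `(220, 310]` escapes
+it (Branch B), while an 8×-buffer window comfortably contains the same fetch (Branch A):

```csharp
// Hypothetical geometry model (not SlidingWindowCache's real Decision Engine):
// thresholds shrink the window inward by a fraction of its length per side.
(double Start, double End) NoRebalanceRange(
    double windowStart, double windowEnd, double leftThreshold, double rightThreshold)
{
    var length = windowEnd - windowStart;
    return (windowStart + leftThreshold * length, windowEnd - rightThreshold * length);
}

// Branch A (skip) when the fetched range lies entirely inside the zone;
// Branch B (cascading rebalance) otherwise.
bool SkipsRebalance((double Start, double End) zone, double fetchStart, double fetchEnd)
    => fetchStart >= zone.Start && fetchEnd <= zone.End;

// Misconfigured L2 (window [0, 260], 10% thresholds): fetch (220, 310] escapes the zone.
var zoneSmall = NoRebalanceRange(0, 260, 0.1, 0.1);
Console.WriteLine(SkipsRebalance(zoneSmall, 220, 310)); // False → Branch B, cascading rebalance

// Resolved L2 (8x buffers → window ~[-700, 900], 25% thresholds): fetch stays inside.
var zoneLarge = NoRebalanceRange(-700, 900, 0.25, 0.25);
Console.WriteLine(SkipsRebalance(zoneLarge, 220, 310)); // True → Branch A, rebalance skipped
```

+The same arithmetic explains the resolution below: growing the inner window widens the
+no-rebalance zone far faster than the outer layer's fetches can escape it.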
+ +**Configuration (wrong):** +``` +L1: leftCacheSize=1.0, rightCacheSize=1.0, leftThreshold=0.1, rightThreshold=0.1 +L2: leftCacheSize=1.5, rightCacheSize=1.5, leftThreshold=0.1, rightThreshold=0.1 +``` +L2's buffers are only 1.5× L1's — not nearly enough. + +**Access pattern:** User scrolls sequentially, one step per second. + +**What happens (step by step):** + +1. **Step 1** — User requests `[100, 110]` + - Cold start: both layers fetch from data source; L2 rebalances to `[0, 260]`; L1 rebalances to `[0, 220]` + - Both layers converge around `[100, 110]` + +2. **Step 2** — User requests `[200, 210]` + - L1: `[200, 210]` is within L1's window → cache hit; L1 publishes intent; L1 rebalances to `[100, 310]` + - L1's rebalance fetches right gap `(220, 310]` from L2 via adapter + - L2: `(220, 310]` extends slightly beyond L2's `NoRebalanceRange` (L2 only has window to ~260) + - L2 re-centers to `[110, 410]` — **L2 rebalanced unnecessarily** + +3. **Step 3** — User requests `[300, 310]` + - L1 rebalances to `[200, 410]`; fetches right gap `(310, 410]` from L2 + - L2: right gap at `(310, 410]` is near L2's new boundary → L2 rebalances again + - L2 re-centers to `[210, 510]` — **L2 rebalanced again** + +4. **Pattern repeats every scroll step** — L2's rebalance count tracks L1's rebalance count + +**Observed symptoms:** +- `l2.RebalanceExecutionCompleted ≈ l1.RebalanceExecutionCompleted` (L2 rebalances as often as L1) +- `l2.DataSourceFetchMissingSegments` is high (L2 repeatedly fetches from the real data source) +- L2 provides no meaningful prefetch advantage over a single-layer cache +- Data source I/O is not reduced compared to using L1 alone + +**Resolution:** +``` +L2: leftCacheSize=8.0, rightCacheSize=8.0, leftThreshold=0.25, rightThreshold=0.25 +``` +With 8× buffers, L2's `DesiredCacheRange` spans `[100 - 800, 100 + 800]` after the first +rebalance. 
L1's subsequent `DesiredCacheRange` values (length ~300) remain well within L2's +`NoRebalanceRange` (L2's window shrunk by 25% thresholds on each side). L2's Decision Engine +rejects rebalance at Stage 1 for every normal sequential scroll step. + +**Diagnostic check:** After resolving misconfiguration, `l2.RebalanceSkippedCurrentNoRebalanceRange` +should be much higher than `l2.RebalanceExecutionCompleted` during normal sequential access. + +--- + ## Invariants Scenarios must be consistent with: diff --git a/docs/storage-strategies.md b/docs/storage-strategies.md index 4c924bd..63cde81 100644 --- a/docs/storage-strategies.md +++ b/docs/storage-strategies.md @@ -201,49 +201,44 @@ lock (_lock) ### Example Scenario: Multi-Level Cache Composition +The library provides built-in support for layered cache composition via `LayeredWindowCacheBuilder` and `WindowCacheDataSourceAdapter`. + ```csharp -// BACKGROUND LAYER: Large distant cache with CopyOnRead -var backgroundOptions = new WindowCacheOptions( - leftCacheSize: 10.0, // Cache 10x requested range - rightCacheSize: 10.0, - leftThreshold: 0.3, - rightThreshold: 0.3, - readMode: UserCacheReadMode.CopyOnRead // ← Cheap rematerialization -); +// Two-layer cache: L2 (CopyOnRead, large) → L1 (Snapshot, small) +await using var cache = LayeredWindowCacheBuilder + .Create(slowDataSource, domain) // real (bottom-most) data source + .AddLayer(new WindowCacheOptions( // L2: deep background cache + leftCacheSize: 10.0, + rightCacheSize: 10.0, + leftThreshold: 0.3, + rightThreshold: 0.3, + readMode: UserCacheReadMode.CopyOnRead)) // ← cheap rematerialization + .AddLayer(new WindowCacheOptions( // L1: user-facing cache + leftCacheSize: 0.5, + rightCacheSize: 0.5, + readMode: UserCacheReadMode.Snapshot)) // ← zero-allocation reads + .Build(); + +// User scrolls: +// - L1 cache: many reads (zero-alloc), rare rebalancing +// - L2 cache: infrequent reads (copy), frequent rebalancing against slowDataSource +var result = await 
cache.GetDataAsync(range, ct); +``` -var backgroundCache = new WindowCache( - slowDataSource, // Network/disk - domain, - backgroundOptions -); +If you need lower-level control, you can compose layers manually using `WindowCacheDataSourceAdapter`: -// USER-FACING LAYER: Small nearby cache with Snapshot -var userOptions = new WindowCacheOptions( - leftCacheSize: 0.5, - rightCacheSize: 0.5, - readMode: UserCacheReadMode.Snapshot // ← Zero-allocation reads -); +```csharp +var backgroundCache = new WindowCache( + slowDataSource, domain, backgroundOptions); // Wrap background cache as IDataSource for user cache -// (Implement IDataSource wrapping the background cache — not provided by the library) -IDataSource cachedDataSource = new BackgroundCacheAdapter(backgroundCache); +IDataSource cachedDataSource = + new WindowCacheDataSourceAdapter(backgroundCache); var userCache = new WindowCache( - cachedDataSource, // Reads from background cache - domain, - userOptions -); - -// User scrolls: -// - userCache: many reads (zero-alloc), rare rebalancing -// - backgroundCache: infrequent reads (copy), frequent rebalancing + cachedDataSource, domain, userOptions); ``` -This composition leverages the strengths of both strategies: - -- **Background layer**: Handles large distant window, absorbs rebalancing cost -- **User layer**: Handles small nearby window, serves reads with zero allocation - --- ## Decision Matrix @@ -473,7 +468,8 @@ public async Task CopyOnReadMode_CorrectDuringExpansion() - **Snapshot**: Fast reads (zero-allocation), expensive rematerialization, best for read-heavy workloads - **CopyOnRead with Staging Buffer**: Fast rematerialization, all reads copy under lock (`Read()` and `ToRangeData()`), best for rematerialization-heavy workloads -- **Composition**: Combine both strategies in multi-level caches for optimal performance +- **Composition**: Combine both strategies in multi-level caches using `LayeredWindowCacheBuilder` for + optimal performance; or wire layers 
manually via `WindowCacheDataSourceAdapter` - **Staging Buffer**: Critical correctness pattern preventing enumeration corruption during cache expansion - **`ToRangeData()` safety**: `CopyOnReadStorage.ToRangeData()` copies `_activeStorage` to an immutable array snapshot under the lock. This is required because `ToRangeData()` is called from the user thread diff --git a/src/SlidingWindowCache/Public/LayeredWindowCache.cs b/src/SlidingWindowCache/Public/LayeredWindowCache.cs new file mode 100644 index 0000000..5631d86 --- /dev/null +++ b/src/SlidingWindowCache/Public/LayeredWindowCache.cs @@ -0,0 +1,151 @@ +using Intervals.NET; +using Intervals.NET.Domain.Abstractions; +using SlidingWindowCache.Public.Dto; + +namespace SlidingWindowCache.Public; + +/// +/// A thin wrapper around a stack of instances +/// that form a multi-layer cache pipeline. Implements +/// by delegating to the outermost (user-facing) layer, and disposes all layers in the correct +/// order when itself is disposed. +/// +/// +/// The type representing range boundaries. Must implement . +/// +/// +/// The type of data being cached. +/// +/// +/// The type representing the domain of the ranges. Must implement . +/// +/// +/// Construction: +/// +/// Instances are created exclusively by . +/// Do not construct directly; use the builder to ensure correct wiring of layers. +/// +/// Layer Order: +/// +/// Layers are ordered from deepest (index 0, closest to the real data source) to outermost +/// (index - 1, user-facing). All public cache operations +/// delegate to the outermost layer. Inner layers operate independently and are driven +/// by the outer layer's data source requests (via ). +/// +/// Disposal: +/// +/// Disposing this instance disposes all managed layers in order from outermost to innermost. +/// The outermost layer is disposed first to stop new user requests from reaching inner layers. +/// Each layer's background loops are stopped gracefully before the next layer is disposed. 
+
+/// 
+/// WaitForIdleAsync Semantics:
+/// 
+///  awaits all layers sequentially, from outermost to innermost.
+/// This guarantees that the entire cache stack has converged: the outermost layer finishes its
+/// rebalance first (which drives fetch requests into inner layers), then each inner layer is
+/// awaited in turn until the deepest layer is idle.
+/// 
+/// 
+/// This full-stack idle guarantee is required for correct behavior of the
+/// GetDataAndWaitForIdleAsync strong consistency extension method when used with a
+/// : a caller waiting for strong
+/// consistency needs all layers to have converged, not just the outermost one.
+/// 
+/// 
+public sealed class LayeredWindowCache
+    : IWindowCache
+    where TRange : IComparable
+    where TDomain : IRangeDomain
+{
+    private readonly IReadOnlyList> _layers;
+    private readonly IWindowCache _userFacingLayer;
+
+    /// 
+    /// Initializes a new instance of .
+    /// 
+    /// 
+    /// The ordered list of cache layers, from deepest (index 0) to outermost (last index).
+    /// Must contain at least one layer.
+    /// 
+    /// 
+    /// Thrown when  is null.
+    /// 
+    /// 
+    /// Thrown when  is empty.
+    /// 
+    internal LayeredWindowCache(IReadOnlyList> layers)
+    {
+        if (layers == null)
+        {
+            throw new ArgumentNullException(nameof(layers));
+        }
+
+        if (layers.Count == 0)
+        {
+            throw new ArgumentException("At least one layer is required.", nameof(layers));
+        }
+
+        _layers = layers;
+        _userFacingLayer = layers[^1];
+    }
+
+    /// 
+    /// Gets the total number of layers in the cache stack.
+    /// 
+    /// 
+    /// Layers are ordered from deepest (index 0, closest to the real data source) to
+    /// outermost (last index, closest to the user).
+    /// 
+    public int LayerCount => _layers.Count;
+
+    /// 
+    /// 
+    /// Delegates to the outermost (user-facing) layer. Data is served from that layer's
+    /// cache window, which is backed by the next inner layer via
+    /// . 
+ /// + public ValueTask> GetDataAsync( + Range requestedRange, + CancellationToken cancellationToken) + => _userFacingLayer.GetDataAsync(requestedRange, cancellationToken); + + /// + /// + /// Awaits all layers sequentially from outermost to innermost. The outermost layer is awaited + /// first because its rebalance drives fetch requests into inner layers; only after it is idle + /// can inner layers be known to have received all pending work. Each subsequent inner layer is + /// then awaited in order, ensuring the full cache stack has converged before this task completes. + /// + public async Task WaitForIdleAsync(CancellationToken cancellationToken = default) + { + // Outermost to innermost: outer rebalance drives inner fetches, so outer must finish first. + for (var i = _layers.Count - 1; i >= 0; i--) + { + await _layers[i].WaitForIdleAsync(cancellationToken).ConfigureAwait(false); + } + } + + /// + /// Disposes all layers from outermost to innermost, releasing all background resources. + /// + /// + /// + /// Disposal order is outermost-first: the user-facing layer is stopped before inner layers, + /// ensuring no new requests flow into inner layers during their disposal. + /// + /// + /// Each layer's gracefully stops background + /// rebalance loops and releases all associated resources (channels, cancellation tokens, + /// semaphores) before proceeding to the next inner layer. + /// + /// + public async ValueTask DisposeAsync() + { + // Dispose outermost to innermost: stop user-facing layer first, + // then work inward so inner layers are not disposing while outer still runs. 
+ for (var i = _layers.Count - 1; i >= 0; i--) + { + await _layers[i].DisposeAsync().ConfigureAwait(false); + } + } +} diff --git a/src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs b/src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs new file mode 100644 index 0000000..84f7513 --- /dev/null +++ b/src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs @@ -0,0 +1,205 @@ +using Intervals.NET.Domain.Abstractions; +using SlidingWindowCache.Public.Configuration; +using SlidingWindowCache.Public.Instrumentation; + +namespace SlidingWindowCache.Public; + +/// +/// Fluent builder for constructing a multi-layer (L1/L2/L3/...) cache stack, where each +/// layer is a backed by the layer below it +/// via a . +/// +/// +/// The type representing range boundaries. Must implement . +/// +/// +/// The type of data being cached. +/// +/// +/// The type representing the domain of the ranges. Must implement . +/// +/// +/// Layer Ordering: +/// +/// Layers are added from deepest (first call to ) to outermost (last call). +/// The first layer reads from the real passed to +/// . Each subsequent layer reads from the previous layer via an adapter. +/// +/// Recommended Configuration Patterns: +/// +/// +/// +/// Innermost (deepest) layer: Use +/// with large leftCacheSize/rightCacheSize multipliers (e.g., 5–10x). +/// This layer absorbs rebalancing cost and provides a wide prefetch window. +/// +/// +/// +/// +/// Intermediate layers (optional): Use +/// with moderate buffer sizes (e.g., 1–3x). These layers narrow the window toward +/// the user's typical working set. +/// +/// +/// +/// +/// Outermost (user-facing) layer: Use +/// with small buffer sizes (e.g., 0.3–1.0x). This layer provides zero-allocation reads +/// with minimal memory footprint. 
+/// +/// +/// +/// Example — Two-Layer Cache: +/// +/// await using var cache = LayeredWindowCacheBuilder<int, byte[], IntegerFixedStepDomain> +/// .Create(realDataSource, domain) +/// .AddLayer(new WindowCacheOptions( // L2: deep background cache +/// leftCacheSize: 10.0, +/// rightCacheSize: 10.0, +/// readMode: UserCacheReadMode.CopyOnRead, +/// leftThreshold: 0.3, +/// rightThreshold: 0.3)) +/// .AddLayer(new WindowCacheOptions( // L1: user-facing cache +/// leftCacheSize: 0.5, +/// rightCacheSize: 0.5, +/// readMode: UserCacheReadMode.Snapshot)) +/// .Build(); +/// +/// var result = await cache.GetDataAsync(range, ct); +/// +/// Example — Three-Layer Cache: +/// +/// await using var cache = LayeredWindowCacheBuilder<int, byte[], IntegerFixedStepDomain> +/// .Create(realDataSource, domain) +/// .AddLayer(backgroundOptions) // L3: large distant cache (CopyOnRead, 10x) +/// .AddLayer(midOptions) // L2: medium intermediate cache (CopyOnRead, 2x) +/// .AddLayer(userOptions) // L1: small user-facing cache (Snapshot, 0.5x) +/// .Build(); +/// +/// Disposal: +/// +/// The returned by +/// owns all created cache layers and disposes them in reverse order (outermost first) when +/// is called. +/// +/// +public sealed class LayeredWindowCacheBuilder + where TRange : IComparable + where TDomain : IRangeDomain +{ + private readonly IDataSource _rootDataSource; + private readonly TDomain _domain; + private readonly List _layers = new(); + + /// + /// Private constructor — use to instantiate. + /// + private LayeredWindowCacheBuilder(IDataSource rootDataSource, TDomain domain) + { + _rootDataSource = rootDataSource; + _domain = domain; + } + + /// + /// Creates a new rooted at + /// the specified real data source. + /// + /// + /// The real (bottom-most) data source from which raw data is fetched. + /// All cache layers sit above this source. + /// + /// + /// The range domain shared by all layers. + /// + /// A new builder instance. + /// + /// Thrown when is null. 
+ /// + public static LayeredWindowCacheBuilder Create( + IDataSource dataSource, + TDomain domain) + { + if (dataSource == null) + { + throw new ArgumentNullException(nameof(dataSource)); + } + + return new LayeredWindowCacheBuilder(dataSource, domain); + } + + /// + /// Adds a cache layer on top of all previously added layers. + /// + /// + /// Configuration options for this layer. + /// The first call adds the deepest layer (closest to the real data source); + /// each subsequent call adds a layer closer to the user. + /// + /// + /// Optional per-layer diagnostics. Pass an instance + /// to observe this layer's rebalance and data-source events independently from other layers. + /// When , diagnostics are disabled for this layer. + /// + /// This builder instance, for fluent chaining. + /// + /// Thrown when is null. + /// + public LayeredWindowCacheBuilder AddLayer( + WindowCacheOptions options, + ICacheDiagnostics? diagnostics = null) + { + if (options == null) + { + throw new ArgumentNullException(nameof(options)); + } + + _layers.Add(new LayerDefinition(options, diagnostics)); + return this; + } + + /// + /// Builds the layered cache stack and returns a + /// that owns all created layers. + /// + /// + /// A whose + /// delegates to the outermost layer. + /// Dispose the returned instance to release all layer resources. + /// + /// + /// Thrown when no layers have been added via . + /// + public LayeredWindowCache Build() + { + if (_layers.Count == 0) + { + throw new InvalidOperationException( + "At least one layer must be added before calling Build(). 
" + + "Use AddLayer() to configure one or more cache layers."); + } + + var caches = new List>(_layers.Count); + IDataSource currentSource = _rootDataSource; + + foreach (var layer in _layers) + { + var cache = new WindowCache( + currentSource, + _domain, + layer.Options, + layer.Diagnostics); + + caches.Add(cache); + + // Wrap this cache as the data source for the next (outer) layer + currentSource = new WindowCacheDataSourceAdapter(cache); + } + + return new LayeredWindowCache(caches); + } + + /// + /// Captures the configuration for a single cache layer. + /// + private sealed record LayerDefinition(WindowCacheOptions Options, ICacheDiagnostics? Diagnostics); +} diff --git a/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs b/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs new file mode 100644 index 0000000..439715b --- /dev/null +++ b/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs @@ -0,0 +1,139 @@ +using Intervals.NET; +using Intervals.NET.Domain.Abstractions; +using SlidingWindowCache.Public.Dto; + +namespace SlidingWindowCache.Public; + +/// +/// Adapts an instance to the +/// interface, enabling it to serve as the +/// data source for another . +/// +/// +/// The type representing range boundaries. Must implement . +/// +/// +/// The type of data being cached. +/// +/// +/// The type representing the domain of the ranges. Must implement . +/// +/// +/// Purpose: +/// +/// This adapter is the composition point for building multi-layer (L1/L2/L3/...) caches. +/// It bridges the gap between (the consumer API) +/// and (the producer API), allowing any cache instance +/// to act as a backing store for a higher (closer-to-user) cache layer. +/// +/// Data Flow: +/// +/// When the outer (higher) cache needs to fetch data, it calls this adapter's +/// method. 
The adapter +/// delegates to the inner (deeper) cache's , +/// which returns data from the inner cache's window (possibly triggering a background rebalance +/// in the inner cache). The from +/// is converted to an array for the contract. +/// +/// Consistency Model: +/// +/// The adapter uses GetDataAsync (eventual consistency), not GetDataAndWaitForIdleAsync. +/// Each layer manages its own rebalance lifecycle independently. The inner cache converges to its +/// optimal window in the background; the outer cache does not block waiting for it. +/// This is the correct model for layered caches: the user always gets correct data immediately, +/// and prefetch optimization happens asynchronously at each layer. +/// +/// Boundary Semantics: +/// +/// Boundary signals from the inner cache are correctly propagated. When +/// is (no data available), +/// the adapter returns a with a Range, +/// following the contract for bounded data sources. +/// +/// Lifecycle: +/// +/// The adapter does NOT own the inner cache. It holds a reference but does not dispose it. +/// Lifecycle management is the responsibility of the caller. When using +/// , the resulting +/// owns and disposes all layers. 
+/// +/// Typical Usage (via Builder): +/// +/// await using var cache = LayeredWindowCacheBuilder<int, byte[], IntegerFixedStepDomain> +/// .Create(realDataSource, domain) +/// .AddLayer(new WindowCacheOptions(10.0, 10.0, UserCacheReadMode.CopyOnRead, 0.3, 0.3)) +/// .AddLayer(new WindowCacheOptions(0.5, 0.5, UserCacheReadMode.Snapshot)) +/// .Build(); +/// +/// var data = await cache.GetDataAsync(range, ct); +/// +/// Manual Usage: +/// +/// // Innermost layer — reads from real data source +/// var innerCache = new WindowCache<int, byte[], IntegerFixedStepDomain>( +/// realDataSource, domain, +/// new WindowCacheOptions(10.0, 10.0, UserCacheReadMode.CopyOnRead)); +/// +/// // Adapt inner cache as a data source for the outer layer +/// var adapter = new WindowCacheDataSourceAdapter<int, byte[], IntegerFixedStepDomain>(innerCache); +/// +/// // Outermost layer — reads from the inner cache via adapter +/// var outerCache = new WindowCache<int, byte[], IntegerFixedStepDomain>( +/// adapter, domain, +/// new WindowCacheOptions(0.5, 0.5, UserCacheReadMode.Snapshot)); +/// +/// +public sealed class WindowCacheDataSourceAdapter + : IDataSource + where TRange : IComparable + where TDomain : IRangeDomain +{ + private readonly IWindowCache _innerCache; + + /// + /// Initializes a new instance of . + /// + /// + /// The cache instance to adapt as a data source. Must not be null. + /// The adapter does not take ownership; the caller is responsible for disposal. + /// + /// + /// Thrown when is null. + /// + public WindowCacheDataSourceAdapter(IWindowCache innerCache) + { + _innerCache = innerCache ?? throw new ArgumentNullException(nameof(innerCache)); + } + + /// + /// Fetches data for the specified range from the inner cache. + /// + /// The range for which to fetch data. + /// A cancellation token to cancel the operation. + /// + /// A containing the data available in the inner cache + /// for the requested range. 
The chunk's Range may be a subset of or equal to + /// (following inner cache boundary semantics), or + /// if no data is available. + /// + /// + /// + /// Delegates to , which may + /// also trigger a background rebalance in the inner cache (eventual consistency). + /// + /// + /// The returned by the inner cache is converted to an array + /// to satisfy the contract. This allocation is + /// intentional: for Snapshot inner caches, a copy is required to avoid capturing + /// a reference into the inner cache's internal array (which may be replaced by a rebalance); + /// for CopyOnRead inner caches, the allocation is already made by the read itself. + /// + /// + public async Task> FetchAsync( + Range range, + CancellationToken cancellationToken) + { + var result = await _innerCache.GetDataAsync(range, cancellationToken).ConfigureAwait(false); + return new RangeChunk(result.Range, result.Data.ToArray()); + } +} diff --git a/tests/SlidingWindowCache.Integration.Tests/LayeredCacheIntegrationTests.cs b/tests/SlidingWindowCache.Integration.Tests/LayeredCacheIntegrationTests.cs new file mode 100644 index 0000000..b600fc1 --- /dev/null +++ b/tests/SlidingWindowCache.Integration.Tests/LayeredCacheIntegrationTests.cs @@ -0,0 +1,468 @@ +using Intervals.NET.Domain.Default.Numeric; +using SlidingWindowCache.Public; +using SlidingWindowCache.Public.Configuration; +using SlidingWindowCache.Public.Instrumentation; +using SlidingWindowCache.Tests.Infrastructure.DataSources; + +namespace SlidingWindowCache.Integration.Tests; + +/// +/// Integration tests for the layered cache feature: +/// , +/// , and +/// . 
+/// +/// Goal: Verify that a multi-layer cache stack correctly: +/// - Propagates data from the real data source up through all layers +/// - Returns correct data values from the outermost layer +/// - Converges to a steady state (WaitForIdleAsync) +/// - Disposes all layers cleanly without errors +/// - Supports 2-layer and 3-layer configurations +/// - Handles per-layer diagnostics independently +/// +public sealed class LayeredCacheIntegrationTests +{ + private static readonly IntegerFixedStepDomain Domain = new(); + + private static IDataSource CreateRealDataSource() + => new SimpleTestDataSource(i => i); + + private static WindowCacheOptions DeepLayerOptions() => new( + leftCacheSize: 5.0, + rightCacheSize: 5.0, + readMode: UserCacheReadMode.CopyOnRead, + leftThreshold: 0.3, + rightThreshold: 0.3, + debounceDelay: TimeSpan.FromMilliseconds(20)); + + private static WindowCacheOptions MidLayerOptions() => new( + leftCacheSize: 2.0, + rightCacheSize: 2.0, + readMode: UserCacheReadMode.CopyOnRead, + leftThreshold: 0.3, + rightThreshold: 0.3, + debounceDelay: TimeSpan.FromMilliseconds(20)); + + private static WindowCacheOptions UserLayerOptions() => new( + leftCacheSize: 0.5, + rightCacheSize: 0.5, + readMode: UserCacheReadMode.Snapshot, + leftThreshold: 0.2, + rightThreshold: 0.2, + debounceDelay: TimeSpan.FromMilliseconds(20)); + + #region Data Correctness Tests + + [Fact] + public async Task TwoLayerCache_GetData_ReturnsCorrectValues() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(100, 110); + + // ACT + var result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + var array = result.Data.ToArray(); + Assert.Equal(11, array.Length); + for (var i = 0; i < array.Length; i++) + Assert.Equal(100 + i, array[i]); + } + + [Fact] + public async Task 
ThreeLayerCache_GetData_ReturnsCorrectValues() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(MidLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(200, 215); + + // ACT + var result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + var array = result.Data.ToArray(); + Assert.Equal(16, array.Length); + for (var i = 0; i < array.Length; i++) + Assert.Equal(200 + i, array[i]); + } + + [Fact] + public async Task TwoLayerCache_SubsequentRequests_ReturnCorrectValues() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + // ACT & ASSERT — three sequential non-overlapping requests + var ranges = new[] + { + Intervals.NET.Factories.Range.Closed(0, 10), + Intervals.NET.Factories.Range.Closed(100, 110), + Intervals.NET.Factories.Range.Closed(500, 510), + }; + + foreach (var range in ranges) + { + var result = await cache.GetDataAsync(range, CancellationToken.None); + var array = result.Data.ToArray(); + Assert.Equal(11, array.Length); + var start = (int)range.Start; + for (var i = 0; i < array.Length; i++) + Assert.Equal(start + i, array[i]); + } + } + + [Fact] + public async Task TwoLayerCache_SingleElementRange_ReturnsCorrectValue() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + // ACT + var range = Intervals.NET.Factories.Range.Closed(42, 42); + var result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + var array = result.Data.ToArray(); + Assert.Single(array); + Assert.Equal(42, array[0]); + } + + #endregion + + #region LayerCount Tests + + [Fact] + public async Task 
TwoLayerCache_LayerCount_IsTwo() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + // ASSERT + Assert.Equal(2, cache.LayerCount); + } + + [Fact] + public async Task ThreeLayerCache_LayerCount_IsThree() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(MidLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + // ASSERT + Assert.Equal(3, cache.LayerCount); + } + + #endregion + + #region Convergence / WaitForIdleAsync Tests + + [Fact] + public async Task TwoLayerCache_WaitForIdleAsync_ConvergesWithoutException() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(100, 110); + await cache.GetDataAsync(range, CancellationToken.None); + + // ACT — should complete without throwing + var exception = await Record.ExceptionAsync(() => cache.WaitForIdleAsync()); + + // ASSERT + Assert.Null(exception); + } + + [Fact] + public async Task TwoLayerCache_AfterConvergence_DataStillCorrect() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(50, 60); + + // Prime the cache and wait for background rebalance to settle + await cache.GetDataAsync(range, CancellationToken.None); + await cache.WaitForIdleAsync(); + + // ACT — re-read same range after convergence + var result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + var array = result.Data.ToArray(); + Assert.Equal(11, array.Length); + for (var i = 0; i < array.Length; i++) + 
Assert.Equal(50 + i, array[i]); + } + + [Fact] + public async Task TwoLayerCache_WaitForIdleAsync_AllLayersHaveConverged() + { + // ARRANGE — use per-layer diagnostics to verify both layers rebalanced + var deepDiagnostics = new EventCounterCacheDiagnostics(); + var userDiagnostics = new EventCounterCacheDiagnostics(); + + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions(), deepDiagnostics) + .AddLayer(UserLayerOptions(), userDiagnostics) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(200, 210); + + // Trigger activity on both layers + await cache.GetDataAsync(range, CancellationToken.None); + + // ACT — wait for the full stack to converge + await cache.WaitForIdleAsync(); + + // ASSERT — both layers must have processed at least one rebalance intent + // (userDiagnostics from outer layer triggered by user request; + // deepDiagnostics from inner layer triggered by outer layer's fetch) + Assert.True(userDiagnostics.RebalanceIntentPublished >= 1, + "Outer (user-facing) layer should have published at least one rebalance intent."); + Assert.True(deepDiagnostics.RebalanceIntentPublished >= 1, + "Inner (deep) layer should have published at least one rebalance intent driven by the outer layer."); + } + + [Fact] + public async Task TwoLayerCache_GetDataAndWaitForIdleAsync_ReturnsCorrectData() + { + // ARRANGE — verify that the strong consistency extension method works on a LayeredWindowCache + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(300, 315); + + // ACT — extension method should work correctly because WaitForIdleAsync now covers all layers + var result = await cache.GetDataAndWaitForIdleAsync(range); + + // ASSERT + var array = result.Data.ToArray(); + Assert.Equal(16, array.Length); + for (var i = 0; i < 
array.Length; i++) + Assert.Equal(300 + i, array[i]); + } + + [Fact] + public async Task TwoLayerCache_GetDataAndWaitForIdleAsync_SubsequentRequestIsFullHit() + { + // ARRANGE + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(400, 410); + + // ACT — prime with strong consistency (waits for full stack to converge) + await cache.GetDataAndWaitForIdleAsync(range); + + // Re-request a subset — the outer layer cache window should fully cover it + var subRange = Intervals.NET.Factories.Range.Closed(402, 408); + var result = await cache.GetDataAsync(subRange, CancellationToken.None); + + // ASSERT — data is correct + var array = result.Data.ToArray(); + Assert.Equal(7, array.Length); + for (var i = 0; i < array.Length; i++) + Assert.Equal(402 + i, array[i]); + } + + #endregion + + #region Disposal Tests + + [Fact] + public async Task TwoLayerCache_DisposeAsync_CompletesWithoutException() + { + // ARRANGE + var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + await cache.GetDataAsync(Intervals.NET.Factories.Range.Closed(1, 10), CancellationToken.None); + + // ACT + var exception = await Record.ExceptionAsync(() => cache.DisposeAsync().AsTask()); + + // ASSERT + Assert.Null(exception); + } + + [Fact] + public async Task TwoLayerCache_DisposeWithoutAnyRequests_CompletesWithoutException() + { + // ARRANGE — build but never use + var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + // ACT + var exception = await Record.ExceptionAsync(() => cache.DisposeAsync().AsTask()); + + // ASSERT + Assert.Null(exception); + } + + [Fact] + public async Task 
ThreeLayerCache_DisposeAsync_CompletesWithoutException() + { + // ARRANGE + var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions()) + .AddLayer(MidLayerOptions()) + .AddLayer(UserLayerOptions()) + .Build(); + + await cache.GetDataAsync(Intervals.NET.Factories.Range.Closed(10, 20), CancellationToken.None); + + // ACT + var exception = await Record.ExceptionAsync(() => cache.DisposeAsync().AsTask()); + + // ASSERT + Assert.Null(exception); + } + + #endregion + + #region Adapter Integration Tests + + [Fact] + public async Task WindowCacheDataSourceAdapter_UsedAsDataSource_PropagatesDataCorrectly() + { + // ARRANGE — manually compose two layers without the builder, to test the adapter directly + var realSource = CreateRealDataSource(); + var deepCache = new WindowCache( + realSource, Domain, DeepLayerOptions()); + + await using var _ = deepCache; + + var adapter = new WindowCacheDataSourceAdapter(deepCache); + var userCache = new WindowCache( + adapter, Domain, UserLayerOptions()); + + await using var __ = userCache; + + var range = Intervals.NET.Factories.Range.Closed(300, 310); + + // ACT + var result = await userCache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + var array = result.Data.ToArray(); + Assert.Equal(11, array.Length); + for (var i = 0; i < array.Length; i++) + Assert.Equal(300 + i, array[i]); + } + + #endregion + + #region Per-Layer Diagnostics Tests + + [Fact] + public async Task TwoLayerCache_WithPerLayerDiagnostics_EachLayerTracksIndependently() + { + // ARRANGE + var deepDiagnostics = new EventCounterCacheDiagnostics(); + var userDiagnostics = new EventCounterCacheDiagnostics(); + + await using var cache = LayeredWindowCacheBuilder + .Create(CreateRealDataSource(), Domain) + .AddLayer(DeepLayerOptions(), deepDiagnostics) + .AddLayer(UserLayerOptions(), userDiagnostics) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(100, 110); + + // ACT + await 
cache.GetDataAsync(range, CancellationToken.None);
+        await cache.WaitForIdleAsync();
+
+        // ASSERT — user-facing layer saw user requests and published at least one rebalance intent
+        Assert.True(userDiagnostics.RebalanceIntentPublished >= 1,
+            "User layer diagnostics should be connected and should have recorded at least one rebalance intent.");
+
+        // ASSERT — data is still correct
+        var result = await cache.GetDataAsync(range, CancellationToken.None);
+        var array = result.Data.ToArray();
+        Assert.Equal(11, array.Length);
+        Assert.Equal(100, array[0]);
+        Assert.Equal(110, array[^1]);
+    }
+
+    #endregion
+
+    #region Large Range Tests
+
+    [Fact]
+    public async Task TwoLayerCache_LargeRange_ReturnsCorrectData()
+    {
+        // ARRANGE
+        await using var cache = LayeredWindowCacheBuilder
+            .Create(CreateRealDataSource(), Domain)
+            .AddLayer(DeepLayerOptions())
+            .AddLayer(UserLayerOptions())
+            .Build();
+
+        // ACT
+        var range = Intervals.NET.Factories.Range.Closed(0, 999);
+        var result = await cache.GetDataAsync(range, CancellationToken.None);
+
+        // ASSERT
+        var array = result.Data.ToArray();
+        Assert.Equal(1000, array.Length);
+        Assert.Equal(0, array[0]);
+        Assert.Equal(999, array[^1]);
+
+        // Verify every value in the range
+        for (var i = 0; i < array.Length; i++)
+            Assert.Equal(i, array[i]);
+    }
+
+    #endregion
+}
diff --git a/tests/SlidingWindowCache.Tests.Infrastructure/DataSources/SimpleTestDataSource.cs b/tests/SlidingWindowCache.Tests.Infrastructure/DataSources/SimpleTestDataSource.cs
index bde7927..fef1a4b 100644
--- a/tests/SlidingWindowCache.Tests.Infrastructure/DataSources/SimpleTestDataSource.cs
+++ b/tests/SlidingWindowCache.Tests.Infrastructure/DataSources/SimpleTestDataSource.cs
@@ -43,7 +43,9 @@ public async Task> FetchAsync(
         CancellationToken cancellationToken)
     {
         if (_simulateAsyncDelay)
+        {
             await Task.Delay(1, cancellationToken);
+        }
 
         return new RangeChunk(requestedRange, GenerateData(requestedRange));
     }
diff --git a/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Extensions/IntegerVariableStepDomain.cs
b/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Extensions/IntegerVariableStepDomain.cs index d029045..3f2f60f 100644 --- a/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Extensions/IntegerVariableStepDomain.cs +++ b/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Extensions/IntegerVariableStepDomain.cs @@ -13,7 +13,9 @@ public class IntegerVariableStepDomain : IVariableStepDomain public IntegerVariableStepDomain(int[] steps) { if (steps == null || steps.Length == 0) + { throw new ArgumentException("Steps array cannot be null or empty.", nameof(steps)); + } // Ensure steps are sorted _steps = steps.OrderBy(s => s).ToArray(); @@ -48,7 +50,10 @@ public IntegerVariableStepDomain(int[] steps) // IRangeDomain base interface methods public int Add(int value, long steps) { - if (steps == 0) return value; + if (steps == 0) + { + return value; + } var current = value; if (steps > 0) @@ -57,7 +62,10 @@ public int Add(int value, long steps) { var next = GetNextStep(current); if (next == null) + { throw new InvalidOperationException($"Cannot add {steps} steps from {value}: no more steps available"); + } + current = next.Value; } } @@ -67,7 +75,10 @@ public int Add(int value, long steps) { var prev = GetPreviousStep(current); if (prev == null) + { throw new InvalidOperationException($"Cannot subtract {-steps} steps from {value}: no more steps available"); + } + current = prev.Value; } } @@ -110,7 +121,10 @@ public int Ceiling(int value) public long Distance(int from, int to) { var comparison = Comparer.Compare(from, to); - if (comparison == 0) return 0; + if (comparison == 0) + { + return 0; + } var start = comparison < 0 ? from : to; var end = comparison < 0 ? 
to : from; @@ -122,7 +136,10 @@ public long Distance(int from, int to) { var next = GetNextStep(current); if (next == null) + { break; + } + current = next.Value; count++; } diff --git a/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Storage/CopyOnReadStorageTests.cs b/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Storage/CopyOnReadStorageTests.cs index b4b4581..c2c223f 100644 --- a/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Storage/CopyOnReadStorageTests.cs +++ b/tests/SlidingWindowCache.Unit.Tests/Infrastructure/Storage/CopyOnReadStorageTests.cs @@ -133,8 +133,10 @@ public async Task ThreadSafety_ConcurrentReadAndRematerialize_NeverCorruptsData( for (var j = 0; j < data.Length; j++) { if (data.Span[j] != expectedStart + j) + { throw new InvalidOperationException( $"Data corruption at index {j}: expected {expectedStart + j}, got {data.Span[j]}. Range={currentRange}"); + } } } } diff --git a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs new file mode 100644 index 0000000..17ba5a1 --- /dev/null +++ b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs @@ -0,0 +1,343 @@ +using Intervals.NET.Domain.Default.Numeric; +using SlidingWindowCache.Public; +using SlidingWindowCache.Public.Configuration; +using SlidingWindowCache.Public.Instrumentation; +using SlidingWindowCache.Tests.Infrastructure.DataSources; +using SlidingWindowCache.Tests.Infrastructure.Helpers; + +namespace SlidingWindowCache.Unit.Tests.Public; + +/// +/// Unit tests for . +/// Validates the builder API: construction, layer addition, build validation, +/// layer ordering, and the resulting . +/// Uses as a lightweight real data source to avoid +/// mocking the complex interface for these tests. 
+/// +public sealed class LayeredWindowCacheBuilderTests +{ + #region Test Infrastructure + + private static IntegerFixedStepDomain Domain => new(); + + private static IDataSource CreateDataSource() + => new SimpleTestDataSource(i => i); + + private static WindowCacheOptions DefaultOptions( + UserCacheReadMode mode = UserCacheReadMode.Snapshot) + => TestHelpers.CreateDefaultOptions(readMode: mode); + + #endregion + + #region Create() Tests + + [Fact] + public void Create_WithNullDataSource_ThrowsArgumentNullException() + { + // ACT + var exception = Record.Exception(() => + LayeredWindowCacheBuilder + .Create(null!, Domain)); + + // ASSERT + Assert.NotNull(exception); + Assert.IsType(exception); + Assert.Contains("dataSource", ((ArgumentNullException)exception).ParamName); + } + + [Fact] + public void Create_WithValidArguments_ReturnsBuilder() + { + // ACT + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ASSERT + Assert.NotNull(builder); + } + + #endregion + + #region AddLayer() Tests + + [Fact] + public void AddLayer_WithNullOptions_ThrowsArgumentNullException() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + var exception = Record.Exception(() => builder.AddLayer(null!)); + + // ASSERT + Assert.NotNull(exception); + Assert.IsType(exception); + Assert.Contains("options", ((ArgumentNullException)exception).ParamName); + } + + [Fact] + public void AddLayer_ReturnsBuilderForFluentChaining() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + var returned = builder.AddLayer(DefaultOptions()); + + // ASSERT — same instance for fluent chaining + Assert.Same(builder, returned); + } + + [Fact] + public void AddLayer_MultipleCallsReturnSameBuilder() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + var b1 = builder.AddLayer(DefaultOptions()); + var b2 = 
b1.AddLayer(DefaultOptions()); + var b3 = b2.AddLayer(DefaultOptions()); + + // ASSERT + Assert.Same(builder, b1); + Assert.Same(builder, b2); + Assert.Same(builder, b3); + } + + [Fact] + public void AddLayer_AcceptsDiagnosticsParameter() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + var diagnostics = new EventCounterCacheDiagnostics(); + + // ACT + var exception = Record.Exception(() => + builder.AddLayer(DefaultOptions(), diagnostics)); + + // ASSERT + Assert.Null(exception); + } + + [Fact] + public void AddLayer_WithNullDiagnostics_DoesNotThrow() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + var exception = Record.Exception(() => + builder.AddLayer(DefaultOptions(), null)); + + // ASSERT + Assert.Null(exception); + } + + #endregion + + #region Build() Tests + + [Fact] + public void Build_WithNoLayers_ThrowsInvalidOperationException() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + var exception = Record.Exception(() => builder.Build()); + + // ASSERT + Assert.NotNull(exception); + Assert.IsType(exception); + } + + [Fact] + public async Task Build_WithSingleLayer_ReturnsLayeredCacheWithOneLayer() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + await using var cache = builder + .AddLayer(DefaultOptions()) + .Build(); + + // ASSERT + Assert.Equal(1, cache.LayerCount); + } + + [Fact] + public async Task Build_WithTwoLayers_ReturnsLayeredCacheWithTwoLayers() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + await using var cache = builder + .AddLayer(new WindowCacheOptions(2.0, 2.0, UserCacheReadMode.CopyOnRead)) + .AddLayer(new WindowCacheOptions(0.5, 0.5, UserCacheReadMode.Snapshot)) + .Build(); + + // ASSERT + Assert.Equal(2, cache.LayerCount); + } + + [Fact] + 
public async Task Build_WithThreeLayers_ReturnsLayeredCacheWithThreeLayers() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain); + + // ACT + await using var cache = builder + .AddLayer(new WindowCacheOptions(5.0, 5.0, UserCacheReadMode.CopyOnRead)) + .AddLayer(new WindowCacheOptions(2.0, 2.0, UserCacheReadMode.CopyOnRead)) + .AddLayer(new WindowCacheOptions(0.5, 0.5, UserCacheReadMode.Snapshot)) + .Build(); + + // ASSERT + Assert.Equal(3, cache.LayerCount); + } + + [Fact] + public async Task Build_ReturnsLayeredWindowCacheType() + { + // ARRANGE & ACT + await using var cache = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain) + .AddLayer(DefaultOptions()) + .Build(); + + // ASSERT + Assert.IsType>(cache); + } + + [Fact] + public async Task Build_ReturnedCacheImplementsIWindowCache() + { + // ARRANGE & ACT + await using var cache = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain) + .AddLayer(DefaultOptions()) + .Build(); + + // ASSERT + Assert.IsAssignableFrom>(cache); + } + + [Fact] + public async Task Build_CanBeCalledMultipleTimes_ReturnsDifferentInstances() + { + // ARRANGE + var builder = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain) + .AddLayer(DefaultOptions()); + + // ACT + await using var cache1 = builder.Build(); + await using var cache2 = builder.Build(); + + // ASSERT — each build creates a new set of independent cache instances + Assert.NotSame(cache1, cache2); + } + + #endregion + + #region Layer Wiring Tests + + [Fact] + public async Task Build_SingleLayer_CanFetchData() + { + // ARRANGE + var options = new WindowCacheOptions( + leftCacheSize: 1.0, + rightCacheSize: 1.0, + readMode: UserCacheReadMode.Snapshot, + debounceDelay: TimeSpan.FromMilliseconds(50)); + + await using var cache = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain) + .AddLayer(options) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(1, 10); + + // ACT + var 
result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + Assert.NotNull(result); + Assert.True(result.Range.HasValue); + Assert.Equal(10, result.Data.Length); + } + + [Fact] + public async Task Build_TwoLayers_CanFetchData() + { + // ARRANGE + var deepOptions = new WindowCacheOptions( + leftCacheSize: 2.0, + rightCacheSize: 2.0, + readMode: UserCacheReadMode.CopyOnRead, + debounceDelay: TimeSpan.FromMilliseconds(50)); + + var userOptions = new WindowCacheOptions( + leftCacheSize: 0.5, + rightCacheSize: 0.5, + readMode: UserCacheReadMode.Snapshot, + debounceDelay: TimeSpan.FromMilliseconds(50)); + + await using var cache = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain) + .AddLayer(deepOptions) + .AddLayer(userOptions) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(100, 110); + + // ACT + var result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + Assert.NotNull(result); + Assert.True(result.Range.HasValue); + Assert.Equal(11, result.Data.Length); + } + + [Fact] + public async Task Build_WithPerLayerDiagnostics_DoesNotThrowOnFetch() + { + // ARRANGE + var deepDiagnostics = new EventCounterCacheDiagnostics(); + var userDiagnostics = new EventCounterCacheDiagnostics(); + + await using var cache = LayeredWindowCacheBuilder + .Create(CreateDataSource(), Domain) + .AddLayer(new WindowCacheOptions(2.0, 2.0, UserCacheReadMode.CopyOnRead, + debounceDelay: TimeSpan.FromMilliseconds(50)), deepDiagnostics) + .AddLayer(new WindowCacheOptions(0.5, 0.5, UserCacheReadMode.Snapshot, + debounceDelay: TimeSpan.FromMilliseconds(50)), userDiagnostics) + .Build(); + + var range = Intervals.NET.Factories.Range.Closed(1, 5); + + // ACT + var exception = await Record.ExceptionAsync( + async () => await cache.GetDataAsync(range, CancellationToken.None)); + + // ASSERT + Assert.Null(exception); + } + + #endregion +} diff --git a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs 
b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
new file mode 100644
index 0000000..e007c54
--- /dev/null
+++ b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
@@ -0,0 +1,510 @@
+using Intervals.NET.Domain.Default.Numeric;
+using Moq;
+using SlidingWindowCache.Public;
+using SlidingWindowCache.Public.Dto;
+
+namespace SlidingWindowCache.Unit.Tests.Public;
+
+/// 
+/// Unit tests for .
+/// Validates delegation to the outermost layer for data operations, correct layer count,
+/// and disposal ordering. Uses mocked  instances
+/// to isolate the wrapper from real cache behavior.
+/// 
+public sealed class LayeredWindowCacheTests
+{
+    #region Test Infrastructure
+
+    private static Mock> CreateLayerMock()
+        => new Mock>(MockBehavior.Strict);
+
+    private static LayeredWindowCache CreateLayeredCache(
+        params IWindowCache[] layers)
+    {
+        // The internal constructor takes an IReadOnlyList of layers. Because it is internal,
+        // CreateLayeredCacheFromList below instantiates the wrapper via reflection
+        // (Activator.CreateInstance with BindingFlags.NonPublic | BindingFlags.Instance).
+        // The builder with real caches is exercised in the integration tests; here we
+        // isolate the wrapper itself with mocked layers.
+        return CreateLayeredCacheFromList(layers.ToList());
+    }
+
+    private static LayeredWindowCache CreateLayeredCacheFromList(
+        IReadOnlyList> layers)
+    {
+        // Instantiate via the internal constructor through reflection;
+        // BindingFlags.NonPublic | BindingFlags.Instance bypasses the accessibility check.
+        return (LayeredWindowCache)
+            Activator.CreateInstance(
+                typeof(LayeredWindowCache),
+                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance,
+                null,
+                [layers],
+                null)!;
+    }
+
+    private static Intervals.NET.Range MakeRange(int start, int end)
+        => Intervals.NET.Factories.Range.Closed(start, end);
+
+    private static RangeResult MakeResult(int start, int end)
+    {
+        var range = MakeRange(start, end);
+        var data = new ReadOnlyMemory(Enumerable.Range(start, end - start + 1).ToArray());
+        return new RangeResult(range, data);
+    }
+
+    #endregion
+
+    #region LayerCount Tests
+
+    [Fact]
+    public void LayerCount_SingleLayer_ReturnsOne()
+    {
+        // ARRANGE
+        var layer = CreateLayerMock();
+        var cache = CreateLayeredCache(layer.Object);
+
+        // ASSERT
+        Assert.Equal(1, cache.LayerCount);
+    }
+
+    [Fact]
+    public void LayerCount_TwoLayers_ReturnsTwo()
+    {
+        // ARRANGE
+        var layer1 = CreateLayerMock();
+        var layer2 = CreateLayerMock();
+        var cache = CreateLayeredCache(layer1.Object, layer2.Object);
+
+        // ASSERT
+        Assert.Equal(2, cache.LayerCount);
+    }
+
+    [Fact]
+    public void LayerCount_ThreeLayers_ReturnsThree()
+    {
+        // ARRANGE
+        var layer1 = CreateLayerMock();
+        var layer2 = CreateLayerMock();
+        var layer3 = CreateLayerMock();
+        var cache = CreateLayeredCache(layer1.Object, layer2.Object, layer3.Object);
+
+        // ASSERT
+        Assert.Equal(3, cache.LayerCount);
+    }
+
+    #endregion
+
+    #region GetDataAsync Delegation Tests
+
+    [Fact]
+    public async Task GetDataAsync_DelegatesToOutermostLayer()
+    {
+        // ARRANGE
+        var innerLayer = CreateLayerMock();
+        var outerLayer = CreateLayerMock();
+        var range = MakeRange(100, 110);
+        var expectedResult = MakeResult(100, 110);
+
+        outerLayer.Setup(c => c.GetDataAsync(range,
It.IsAny())) + .ReturnsAsync(expectedResult); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + var result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + Assert.Equal(expectedResult.Range, result.Range); + outerLayer.Verify(c => c.GetDataAsync(range, It.IsAny()), Times.Once); + // Inner layer must NOT be called — outer layer is user-facing + innerLayer.VerifyNoOtherCalls(); + } + + [Fact] + public async Task GetDataAsync_SingleLayer_DelegatesToThatLayer() + { + // ARRANGE + var onlyLayer = CreateLayerMock(); + var range = MakeRange(1, 10); + var expectedResult = MakeResult(1, 10); + + onlyLayer.Setup(c => c.GetDataAsync(range, It.IsAny())) + .ReturnsAsync(expectedResult); + + var cache = CreateLayeredCache(onlyLayer.Object); + + // ACT + var result = await cache.GetDataAsync(range, CancellationToken.None); + + // ASSERT + Assert.Equal(expectedResult.Range, result.Range); + onlyLayer.Verify(c => c.GetDataAsync(range, It.IsAny()), Times.Once); + } + + [Fact] + public async Task GetDataAsync_PropagatesCancellationToken() + { + // ARRANGE + var outerLayer = CreateLayerMock(); + var range = MakeRange(10, 20); + var cts = new CancellationTokenSource(); + CancellationToken capturedToken = CancellationToken.None; + var expectedResult = MakeResult(10, 20); + + outerLayer.Setup(c => c.GetDataAsync(range, It.IsAny())) + .Returns, CancellationToken>((_, ct) => + { + capturedToken = ct; + return ValueTask.FromResult(expectedResult); + }); + + var cache = CreateLayeredCache(outerLayer.Object); + + // ACT + await cache.GetDataAsync(range, cts.Token); + + // ASSERT + Assert.Equal(cts.Token, capturedToken); + } + + [Fact] + public async Task GetDataAsync_WhenOutermostLayerThrows_PropagatesException() + { + // ARRANGE + var outerLayer = CreateLayerMock(); + var range = MakeRange(10, 20); + var expectedException = new InvalidOperationException("Cache failed"); + + outerLayer.Setup(c => c.GetDataAsync(range, It.IsAny())) 
+ .ThrowsAsync(expectedException); + + var cache = CreateLayeredCache(outerLayer.Object); + + // ACT + var exception = await Record.ExceptionAsync( + async () => await cache.GetDataAsync(range, CancellationToken.None)); + + // ASSERT + Assert.Same(expectedException, exception); + } + + #endregion + + #region WaitForIdleAsync Tests + + [Fact] + public async Task WaitForIdleAsync_TwoLayers_AwaitsOuterLayer() + { + // ARRANGE + var innerLayer = CreateLayerMock(); + var outerLayer = CreateLayerMock(); + var outerLayerWasCalled = false; + + innerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(Task.CompletedTask); + outerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => + { + outerLayerWasCalled = true; + return Task.CompletedTask; + }); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + await cache.WaitForIdleAsync(); + + // ASSERT + Assert.True(outerLayerWasCalled); + outerLayer.Verify(c => c.WaitForIdleAsync(It.IsAny()), Times.Once); + } + + [Fact] + public async Task WaitForIdleAsync_TwoLayers_AwaitsInnerLayer() + { + // ARRANGE + var innerLayer = CreateLayerMock(); + var outerLayer = CreateLayerMock(); + var innerLayerWasCalled = false; + + innerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => + { + innerLayerWasCalled = true; + return Task.CompletedTask; + }); + outerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(Task.CompletedTask); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + await cache.WaitForIdleAsync(); + + // ASSERT + Assert.True(innerLayerWasCalled); + innerLayer.Verify(c => c.WaitForIdleAsync(It.IsAny()), Times.Once); + } + + [Fact] + public async Task WaitForIdleAsync_TwoLayers_AwaitsOuterBeforeInner() + { + // ARRANGE + var innerLayer = CreateLayerMock(); + var outerLayer = CreateLayerMock(); + var callOrder = new List(); + + innerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => + { + 
callOrder.Add("inner"); + return Task.CompletedTask; + }); + outerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => + { + callOrder.Add("outer"); + return Task.CompletedTask; + }); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + await cache.WaitForIdleAsync(); + + // ASSERT — outer must be awaited before inner + Assert.Equal(2, callOrder.Count); + Assert.Equal("outer", callOrder[0]); + Assert.Equal("inner", callOrder[1]); + } + + [Fact] + public async Task WaitForIdleAsync_ThreeLayers_AwaitsAllInOuterToInnerOrder() + { + // ARRANGE + var layer1 = CreateLayerMock(); // deepest (index 0) + var layer2 = CreateLayerMock(); // middle (index 1) + var layer3 = CreateLayerMock(); // outer (index 2) + var callOrder = new List(); + + layer1.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => { callOrder.Add("L1"); return Task.CompletedTask; }); + layer2.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => { callOrder.Add("L2"); return Task.CompletedTask; }); + layer3.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => { callOrder.Add("L3"); return Task.CompletedTask; }); + + var cache = CreateLayeredCache(layer1.Object, layer2.Object, layer3.Object); + + // ACT + await cache.WaitForIdleAsync(); + + // ASSERT — outermost (L3) first, then L2, then deepest (L1) + Assert.Equal(new[] { "L3", "L2", "L1" }, callOrder); + } + + [Fact] + public async Task WaitForIdleAsync_SingleLayer_AwaitsIt() + { + // ARRANGE + var onlyLayer = CreateLayerMock(); + var wasCalled = false; + + onlyLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(() => + { + wasCalled = true; + return Task.CompletedTask; + }); + + var cache = CreateLayeredCache(onlyLayer.Object); + + // ACT + await cache.WaitForIdleAsync(); + + // ASSERT + Assert.True(wasCalled); + onlyLayer.Verify(c => c.WaitForIdleAsync(It.IsAny()), Times.Once); + } + + [Fact] + public async Task WaitForIdleAsync_PropagatesCancellationTokenToAllLayers() + { + // 
ARRANGE + var innerLayer = CreateLayerMock(); + var outerLayer = CreateLayerMock(); + var cts = new CancellationTokenSource(); + var capturedTokens = new List(); + + innerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(ct => { capturedTokens.Add(ct); return Task.CompletedTask; }); + outerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(ct => { capturedTokens.Add(ct); return Task.CompletedTask; }); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + await cache.WaitForIdleAsync(cts.Token); + + // ASSERT — same token forwarded to both layers + Assert.Equal(2, capturedTokens.Count); + Assert.All(capturedTokens, t => Assert.Equal(cts.Token, t)); + } + + [Fact] + public async Task WaitForIdleAsync_DefaultToken_IsNoneForAllLayers() + { + // ARRANGE + var innerLayer = CreateLayerMock(); + var outerLayer = CreateLayerMock(); + var capturedTokens = new List(); + + innerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(ct => { capturedTokens.Add(ct); return Task.CompletedTask; }); + outerLayer.Setup(c => c.WaitForIdleAsync(It.IsAny())) + .Returns(ct => { capturedTokens.Add(ct); return Task.CompletedTask; }); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + await cache.WaitForIdleAsync(); // default token + + // ASSERT + Assert.Equal(2, capturedTokens.Count); + Assert.All(capturedTokens, t => Assert.Equal(CancellationToken.None, t)); + } + + #endregion + + #region DisposeAsync Tests + + [Fact] + public async Task DisposeAsync_SingleLayer_DisposesIt() + { + // ARRANGE + var layer = CreateLayerMock(); + layer.Setup(c => c.DisposeAsync()).Returns(ValueTask.CompletedTask); + + var cache = CreateLayeredCache(layer.Object); + + // ACT + await cache.DisposeAsync(); + + // ASSERT + layer.Verify(c => c.DisposeAsync(), Times.Once); + } + + [Fact] + public async Task DisposeAsync_TwoLayers_DisposesAllLayers() + { + // ARRANGE + var innerLayer = CreateLayerMock(); + var outerLayer = 
CreateLayerMock(); + innerLayer.Setup(c => c.DisposeAsync()).Returns(ValueTask.CompletedTask); + outerLayer.Setup(c => c.DisposeAsync()).Returns(ValueTask.CompletedTask); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + await cache.DisposeAsync(); + + // ASSERT + innerLayer.Verify(c => c.DisposeAsync(), Times.Once); + outerLayer.Verify(c => c.DisposeAsync(), Times.Once); + } + + [Fact] + public async Task DisposeAsync_TwoLayers_DisposesOutermostFirst() + { + // ARRANGE — outermost should be disposed before innermost + var innerLayer = CreateLayerMock(); + var outerLayer = CreateLayerMock(); + var disposalOrder = new List(); + + innerLayer.Setup(c => c.DisposeAsync()).Returns(() => + { + disposalOrder.Add("inner"); + return ValueTask.CompletedTask; + }); + outerLayer.Setup(c => c.DisposeAsync()).Returns(() => + { + disposalOrder.Add("outer"); + return ValueTask.CompletedTask; + }); + + var cache = CreateLayeredCache(innerLayer.Object, outerLayer.Object); + + // ACT + await cache.DisposeAsync(); + + // ASSERT + Assert.Equal(2, disposalOrder.Count); + Assert.Equal("outer", disposalOrder[0]); + Assert.Equal("inner", disposalOrder[1]); + } + + [Fact] + public async Task DisposeAsync_ThreeLayers_DisposesOuterToInner() + { + // ARRANGE + var layer1 = CreateLayerMock(); // deepest + var layer2 = CreateLayerMock(); // middle + var layer3 = CreateLayerMock(); // outermost + var disposalOrder = new List(); + + layer1.Setup(c => c.DisposeAsync()).Returns(() => + { + disposalOrder.Add("L1"); + return ValueTask.CompletedTask; + }); + layer2.Setup(c => c.DisposeAsync()).Returns(() => + { + disposalOrder.Add("L2"); + return ValueTask.CompletedTask; + }); + layer3.Setup(c => c.DisposeAsync()).Returns(() => + { + disposalOrder.Add("L3"); + return ValueTask.CompletedTask; + }); + + var cache = CreateLayeredCache(layer1.Object, layer2.Object, layer3.Object); + + // ACT + await cache.DisposeAsync(); + + // ASSERT — outermost first, innermost last + 
Assert.Equal(new[] { "L3", "L2", "L1" }, disposalOrder);
+    }
+
+    #endregion
+
+    #region IWindowCache Interface Tests
+
+    [Fact]
+    public void LayeredWindowCache_ImplementsIWindowCache()
+    {
+        // ARRANGE
+        var layer = CreateLayerMock();
+        layer.Setup(c => c.DisposeAsync()).Returns(ValueTask.CompletedTask);
+
+        // ACT
+        var cache = CreateLayeredCache(layer.Object);
+
+        // ASSERT
+        Assert.IsAssignableFrom<IWindowCache<int, int>>(cache);
+    }
+
+    [Fact]
+    public void LayeredWindowCache_ImplementsIAsyncDisposable()
+    {
+        // ARRANGE
+        var layer = CreateLayerMock();
+        layer.Setup(c => c.DisposeAsync()).Returns(ValueTask.CompletedTask);
+        var cache = CreateLayeredCache(layer.Object);
+
+        // ASSERT
+        Assert.IsAssignableFrom<IAsyncDisposable>(cache);
+    }
+
+    #endregion
+}
diff --git a/tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs b/tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs
new file mode 100644
index 0000000..5b95d62
--- /dev/null
+++ b/tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs
@@ -0,0 +1,393 @@
+using Intervals.NET.Domain.Default.Numeric;
+using Moq;
+using SlidingWindowCache.Public;
+using SlidingWindowCache.Public.Dto;
+
+namespace SlidingWindowCache.Unit.Tests.Public;
+
+/// <summary>
+/// Unit tests for <see cref="WindowCacheDataSourceAdapter{TRange, TData}"/>.
+/// Validates the adapter's contract: correct conversion of
+/// <see cref="RangeResult{TRange, TData}"/> to <see cref="RangeChunk{TRange, TData}"/>, boundary semantics, cancellation propagation,
+/// and exception forwarding. Uses a mocked <see cref="IWindowCache{TRange, TData}"/> to
+/// isolate the adapter from any real cache implementation.
+/// </summary>
+public sealed class WindowCacheDataSourceAdapterTests
+{
+    #region Test Infrastructure
+
+    private static Mock<IWindowCache<int, int>> CreateCacheMock()
+        => new Mock<IWindowCache<int, int>>(MockBehavior.Strict);
+
+    private static WindowCacheDataSourceAdapter<int, int> CreateAdapter(
+        IWindowCache<int, int> cache)
+        => new(cache);
+
+    private static Intervals.NET.Range<int> MakeRange(int start, int end)
+        => Intervals.NET.Factories.Range.Closed(start, end);
+
+    private static RangeResult<int, int> MakeResult(int start, int end)
+    {
+        var range = MakeRange(start, end);
+        var data = new ReadOnlyMemory<int>(Enumerable.Range(start, end - start + 1).ToArray());
+        return new RangeResult<int, int>(range, data);
+    }
+
+    #endregion
+
+    #region Constructor Tests
+
+    [Fact]
+    public void Constructor_WithNullCache_ThrowsArgumentNullException()
+    {
+        // ACT
+        var exception = Record.Exception(() =>
+            new WindowCacheDataSourceAdapter<int, int>(null!));
+
+        // ASSERT
+        Assert.NotNull(exception);
+        Assert.IsType<ArgumentNullException>(exception);
+        Assert.Contains("innerCache", ((ArgumentNullException)exception).ParamName);
+    }
+
+    [Fact]
+    public void Constructor_WithValidCache_DoesNotThrow()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+
+        // ACT
+        var exception = Record.Exception(() => CreateAdapter(mock.Object));
+
+        // ASSERT
+        Assert.Null(exception);
+    }
+
+    #endregion
+
+    #region FetchAsync — Data Conversion Tests
+
+    [Fact]
+    public async Task FetchAsync_WithFullResult_ReturnsChunkWithSameRange()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(100, 110);
+        var result = MakeResult(100, 110);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(result);
+
+        var adapter = CreateAdapter(mock.Object);
+
+        // ACT
+        var chunk = await adapter.FetchAsync(range, CancellationToken.None);
+
+        // ASSERT
+        Assert.NotNull(chunk);
+        Assert.Equal(result.Range, chunk.Range);
+    }
+
+    [Fact]
+    public async Task FetchAsync_WithFullResult_ReturnsChunkWithCorrectData()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(100, 105);
+        var result =
MakeResult(100, 105);
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(result);
+
+        // ACT
+        var chunk = await adapter.FetchAsync(range, CancellationToken.None);
+
+        // ASSERT
+        var chunkData = chunk.Data.ToArray();
+        var expectedData = result.Data.ToArray();
+        Assert.Equal(expectedData.Length, chunkData.Length);
+        Assert.Equal(expectedData, chunkData);
+    }
+
+    [Fact]
+    public async Task FetchAsync_DataIsAnArray_NotSameReferenceAsInnerMemory()
+    {
+        // ARRANGE — ensure the adapter creates a copy, not a reference to inner cache internals
+        var mock = CreateCacheMock();
+        var range = MakeRange(1, 5);
+        var innerArray = new[] { 1, 2, 3, 4, 5 };
+        var result = new RangeResult<int, int>(range, new ReadOnlyMemory<int>(innerArray));
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(result);
+
+        // ACT
+        var chunk = await adapter.FetchAsync(range, CancellationToken.None);
+        var returnedArray = chunk.Data.ToArray();
+
+        // Mutate inner array after fetch
+        innerArray[0] = 999;
+
+        // ASSERT — chunk data was already copied; mutation of source has no effect
+        Assert.Equal(1, returnedArray[0]);
+    }
+
+    [Fact]
+    public async Task FetchAsync_CallsGetDataAsyncOnce()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(10, 20);
+        var result = MakeResult(10, 20);
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(result);
+
+        // ACT
+        await adapter.FetchAsync(range, CancellationToken.None);
+
+        // ASSERT
+        mock.Verify(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()), Times.Once);
+    }
+
+    [Fact]
+    public async Task FetchAsync_PassesCorrectRangeToGetDataAsync()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var requestedRange = MakeRange(200, 300);
+        var result = MakeResult(200, 300);
+        Intervals.NET.Range<int>?
capturedRange = null;
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(It.IsAny<Intervals.NET.Range<int>>(), It.IsAny<CancellationToken>()))
+            .Returns<Intervals.NET.Range<int>, CancellationToken>((r, _) =>
+            {
+                capturedRange = r;
+                return ValueTask.FromResult(result);
+            });
+
+        // ACT
+        await adapter.FetchAsync(requestedRange, CancellationToken.None);
+
+        // ASSERT
+        Assert.Equal(requestedRange, capturedRange);
+    }
+
+    #endregion
+
+    #region FetchAsync — Boundary Semantics Tests
+
+    [Fact]
+    public async Task FetchAsync_WithNullRangeResult_ReturnsChunkWithNullRange()
+    {
+        // ARRANGE — inner cache returns null range (out-of-bounds boundary miss)
+        var mock = CreateCacheMock();
+        var range = MakeRange(9000, 9999);
+        var boundaryResult = new RangeResult<int, int>(null, ReadOnlyMemory<int>.Empty);
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(boundaryResult);
+
+        // ACT
+        var chunk = await adapter.FetchAsync(range, CancellationToken.None);
+
+        // ASSERT
+        Assert.Null(chunk.Range);
+    }
+
+    [Fact]
+    public async Task FetchAsync_WithNullRangeResult_ReturnsEmptyData()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(9000, 9999);
+        var boundaryResult = new RangeResult<int, int>(null, ReadOnlyMemory<int>.Empty);
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(boundaryResult);
+
+        // ACT
+        var chunk = await adapter.FetchAsync(range, CancellationToken.None);
+
+        // ASSERT
+        Assert.Empty(chunk.Data);
+    }
+
+    [Fact]
+    public async Task FetchAsync_WithTruncatedRangeResult_ReturnsChunkWithTruncatedRange()
+    {
+        // ARRANGE — inner cache returns a truncated range (partial boundary)
+        var mock = CreateCacheMock();
+        var requestedRange = MakeRange(900, 1100);
+        var truncatedRange = MakeRange(900, 999); // truncated at upper bound
+        var truncatedData = new ReadOnlyMemory<int>(Enumerable.Range(900, 100).ToArray());
+        var truncatedResult = new RangeResult<int, int>(truncatedRange, truncatedData);
+        var adapter
= CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(requestedRange, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(truncatedResult);
+
+        // ACT
+        var chunk = await adapter.FetchAsync(requestedRange, CancellationToken.None);
+
+        // ASSERT
+        Assert.Equal(truncatedRange, chunk.Range);
+        Assert.Equal(100, chunk.Data.Count());
+    }
+
+    #endregion
+
+    #region FetchAsync — Cancellation Propagation Tests
+
+    [Fact]
+    public async Task FetchAsync_PropagatesCancellationTokenToGetDataAsync()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(10, 20);
+        var result = MakeResult(10, 20);
+        var cts = new CancellationTokenSource();
+        CancellationToken capturedToken = CancellationToken.None;
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .Returns<Intervals.NET.Range<int>, CancellationToken>((_, ct) =>
+            {
+                capturedToken = ct;
+                return ValueTask.FromResult(result);
+            });
+
+        // ACT
+        await adapter.FetchAsync(range, cts.Token);
+
+        // ASSERT
+        Assert.Equal(cts.Token, capturedToken);
+    }
+
+    [Fact]
+    public async Task FetchAsync_WhenCancelled_PropagatesOperationCanceledException()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(10, 20);
+        var cts = new CancellationTokenSource();
+        cts.Cancel();
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ThrowsAsync(new OperationCanceledException(cts.Token));
+
+        // ACT
+        var exception = await Record.ExceptionAsync(
+            async () => await adapter.FetchAsync(range, cts.Token));
+
+        // ASSERT
+        Assert.NotNull(exception);
+        Assert.IsType<OperationCanceledException>(exception);
+    }
+
+    #endregion
+
+    #region FetchAsync — Exception Propagation Tests
+
+    [Fact]
+    public async Task FetchAsync_WhenGetDataAsyncThrows_PropagatesException()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(10, 20);
+        var expectedException = new InvalidOperationException("Inner cache failed");
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c =>
c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ThrowsAsync(expectedException);
+
+        // ACT
+        var exception = await Record.ExceptionAsync(
+            async () => await adapter.FetchAsync(range, CancellationToken.None));
+
+        // ASSERT
+        Assert.NotNull(exception);
+        Assert.Same(expectedException, exception);
+    }
+
+    [Fact]
+    public async Task FetchAsync_WhenGetDataAsyncThrowsObjectDisposedException_Propagates()
+    {
+        // ARRANGE
+        var mock = CreateCacheMock();
+        var range = MakeRange(10, 20);
+        var adapter = CreateAdapter(mock.Object);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ThrowsAsync(new ObjectDisposedException("inner-cache"));
+
+        // ACT
+        var exception = await Record.ExceptionAsync(
+            async () => await adapter.FetchAsync(range, CancellationToken.None));
+
+        // ASSERT
+        Assert.NotNull(exception);
+        Assert.IsType<ObjectDisposedException>(exception);
+    }
+
+    #endregion
+
+    #region IDataSource Contract Tests
+
+    [Fact]
+    public async Task FetchAsync_ImplementsIDataSourceInterface()
+    {
+        // ARRANGE — verify via interface reference
+        var mock = CreateCacheMock();
+        var range = MakeRange(10, 20);
+        var result = MakeResult(10, 20);
+
+        mock.Setup(c => c.GetDataAsync(range, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(result);
+
+        IDataSource<int, int> dataSource = CreateAdapter(mock.Object);
+
+        // ACT
+        var chunk = await dataSource.FetchAsync(range, CancellationToken.None);
+
+        // ASSERT
+        Assert.NotNull(chunk);
+        Assert.Equal(result.Range, chunk.Range);
+    }
+
+    [Fact]
+    public async Task BatchFetchAsync_UsesDefaultParallelImplementation()
+    {
+        // ARRANGE — the default batch FetchAsync calls single-range FetchAsync in parallel
+        var mock = CreateCacheMock();
+        var range1 = MakeRange(1, 5);
+        var range2 = MakeRange(100, 105);
+        var result1 = MakeResult(1, 5);
+        var result2 = MakeResult(100, 105);
+
+        mock.Setup(c => c.GetDataAsync(range1, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(result1);
+        mock.Setup(c => c.GetDataAsync(range2, It.IsAny<CancellationToken>()))
+            .ReturnsAsync(result2);
+
+        IDataSource<int, int> dataSource = CreateAdapter(mock.Object);
+        var
ranges = new[] { range1, range2 };
+
+        // ACT
+        var chunks = (await dataSource.FetchAsync(ranges, CancellationToken.None)).ToArray();
+
+        // ASSERT
+        Assert.Equal(2, chunks.Length);
+        mock.Verify(c => c.GetDataAsync(range1, It.IsAny<CancellationToken>()), Times.Once);
+        mock.Verify(c => c.GetDataAsync(range2, It.IsAny<CancellationToken>()), Times.Once);
+    }
+
+    #endregion
+}

From 9a44d6fa122c3df0df3d6fcbfbf382d54bc4dff8 Mon Sep 17 00:00:00 2001
From: Mykyta Zotov
Date: Sun, 1 Mar 2026 21:47:09 +0100
Subject: [PATCH 2/6] docs: README file has been updated to include validation
 for layered cache types; feat: validation method for layered cache
 compilation on WASM has been added

---
 .../README.md                   | 18 +++++
 .../WasmCompilationValidator.cs | 72 +++++++++++++++++++
 2 files changed, 90 insertions(+)

diff --git a/src/SlidingWindowCache.WasmValidation/README.md b/src/SlidingWindowCache.WasmValidation/README.md
index 91e54e2..ac22846 100644
--- a/src/SlidingWindowCache.WasmValidation/README.md
+++ b/src/SlidingWindowCache.WasmValidation/README.md
@@ -21,6 +21,7 @@ The sole purpose of this project is to ensure that the SlidingWindowCache librar
 - ✅ **CI/CD compatibility check** - Ensures library can target browser environments
 - ✅ **Strategy coverage validation** - Validates all internal storage and serialization strategies
 - ✅ **Minimal API usage** - Instantiates core types to validate no platform-incompatible APIs are used
+- ✅ **Layered cache coverage** - Validates `LayeredWindowCacheBuilder`, `WindowCacheDataSourceAdapter`, and `LayeredWindowCache` compile for WASM
 
 ## Implementation
 
@@ -55,6 +56,7 @@ Each configuration has a dedicated validation method:
 2. `ValidateConfiguration2_CopyOnReadMode_UnboundedQueue()`
 3. `ValidateConfiguration3_SnapshotMode_BoundedQueue()`
 4. `ValidateConfiguration4_CopyOnReadMode_BoundedQueue()`
+5. `ValidateLayeredCache_TwoLayer_RecommendedConfig()`
 
 All methods perform identical operations:
 1.
Implement a simple `IDataSource`
@@ -65,6 +67,21 @@ All methods perform identical operations:
 
 All code uses deterministic, synchronous-friendly patterns suitable for compile-time validation.
 
+### Layered Cache Validation
+
+Method 5 (`ValidateLayeredCache_TwoLayer_RecommendedConfig`) validates that the three new public
+layered cache types compile for `net8.0-browser`:
+
+- `LayeredWindowCacheBuilder` — fluent builder wiring layers via the adapter
+- `WindowCacheDataSourceAdapter` — bridges `IWindowCache` to `IDataSource`
+- `LayeredWindowCache` — wrapper owning all layers; `WaitForIdleAsync`
+  awaits all layers sequentially (outermost to innermost)
+
+Uses the recommended configuration: `CopyOnRead` inner layer (large buffers) + `Snapshot` outer
+layer (small buffers). A single method is sufficient because the layered cache types introduce no
+new strategy axes — they delegate to underlying `WindowCache` instances whose internal strategies
+are already covered by methods 1–4.
+
 ## Build Validation
 
 To validate WebAssembly compatibility:
@@ -79,6 +96,7 @@ A successful build confirms that:
 - Intervals.NET dependencies are WebAssembly-compatible
 - **All internal storage strategies** (SnapshotReadStorage, CopyOnReadStorage) are WASM-compatible
 - **All serialization strategies** (task-based, channel-based) are WASM-compatible
+- **All layered cache types** (LayeredWindowCacheBuilder, WindowCacheDataSourceAdapter, LayeredWindowCache) are WASM-compatible
 
 ## Target Framework
 
diff --git a/src/SlidingWindowCache.WasmValidation/WasmCompilationValidator.cs b/src/SlidingWindowCache.WasmValidation/WasmCompilationValidator.cs
index 176530d..78a8090 100644
--- a/src/SlidingWindowCache.WasmValidation/WasmCompilationValidator.cs
+++ b/src/SlidingWindowCache.WasmValidation/WasmCompilationValidator.cs
@@ -221,4 +221,76 @@ public static async Task ValidateConfiguration4_CopyOnReadMode_BoundedQueue()
         await cache.WaitForIdleAsync();
         _ = result.Data.Length;
     }
+
+    /// <summary>
+    /// Validates
layered cache: <see cref="LayeredWindowCacheBuilder{TRange, TData, TDomain}"/>,
+    /// <see cref="WindowCacheDataSourceAdapter{TRange, TData}"/>, and
+    /// <see cref="LayeredWindowCache{TRange, TData}"/> compile for net8.0-browser.
+    /// Uses the recommended configuration: CopyOnRead inner layer (large buffers) +
+    /// Snapshot outer layer (small buffers).
+    /// </summary>
+    /// <remarks>
+    /// <para><b>Types Validated:</b></para>
+    /// <list type="bullet">
+    /// <item>
+    /// <see cref="LayeredWindowCacheBuilder{TRange, TData, TDomain}"/> — fluent builder
+    /// wiring layers together via <see cref="WindowCacheDataSourceAdapter{TRange, TData}"/>
+    /// </item>
+    /// <item>
+    /// <see cref="WindowCacheDataSourceAdapter{TRange, TData}"/> — adapter bridging
+    /// <see cref="IWindowCache{TRange, TData}"/> to <see cref="IDataSource{TRange, TData}"/>
+    /// </item>
+    /// <item>
+    /// <see cref="LayeredWindowCache{TRange, TData}"/> — wrapper that delegates
+    /// <see cref="IWindowCache{TRange, TData}.GetDataAsync"/> to the outermost layer and
+    /// awaits all layers sequentially on <see cref="IWindowCache{TRange, TData}.WaitForIdleAsync"/>
+    /// </item>
+    /// </list>
+    /// <para><b>Why One Method Is Sufficient:</b></para>
+    /// <para>
+    /// The layered cache types introduce no new strategy axes: they delegate to underlying
+    /// <see cref="WindowCache{TRange, TData, TDomain}"/> instances whose internal strategies
+    /// are already covered by Configurations 1–4. A single method proving all three new
+    /// public types compile on WASM is therefore sufficient.
+    /// </para>
+    /// </remarks>
+    public static async Task ValidateLayeredCache_TwoLayer_RecommendedConfig()
+    {
+        var domain = new IntegerFixedStepDomain();
+
+        // Inner layer: CopyOnRead + large buffers (recommended for deep/backing layers)
+        var innerOptions = new WindowCacheOptions(
+            leftCacheSize: 5.0,
+            rightCacheSize: 5.0,
+            readMode: UserCacheReadMode.CopyOnRead,
+            leftThreshold: 0.3,
+            rightThreshold: 0.3
+        );
+
+        // Outer (user-facing) layer: Snapshot + small buffers (recommended for user-facing layer)
+        var outerOptions = new WindowCacheOptions(
+            leftCacheSize: 0.5,
+            rightCacheSize: 0.5,
+            readMode: UserCacheReadMode.Snapshot,
+            leftThreshold: 0.2,
+            rightThreshold: 0.2
+        );
+
+        // Build the layered cache — exercises LayeredWindowCacheBuilder,
+        // WindowCacheDataSourceAdapter, and LayeredWindowCache
+        await using var cache = LayeredWindowCacheBuilder
+            .Create(new SimpleDataSource(), domain)
+            .AddLayer(innerOptions)
+            .AddLayer(outerOptions)
+            .Build();
+
+        var range = Intervals.NET.Factories.Range.Closed(0, 10);
+        var result = await cache.GetDataAsync(range, CancellationToken.None);
+
+        // WaitForIdleAsync on LayeredWindowCache awaits all layers (outermost to innermost)
+        await
cache.WaitForIdleAsync();
+
+        _ = result.Data.Length;
+        _ = cache.LayerCount;
+    }
 }
\ No newline at end of file

From 3abd1f925caae22173632b6b147f533ee4337aee Mon Sep 17 00:00:00 2001
From: Mykyta Zotov
Date: Sun, 1 Mar 2026 21:48:18 +0100
Subject: [PATCH 3/6] docs: update comments for clarity and consistency in
 IDataSource and LayeredWindowCache; style: improve formatting in
 StrongConsistencyModeTests for better readability

---
 src/SlidingWindowCache/Public/IDataSource.cs | 12 ++++-----
 .../Public/LayeredWindowCache.cs             | 26 +++++++++----------
 .../StrongConsistencyModeTests.cs            | 16 +++++++-----
 3 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/src/SlidingWindowCache/Public/IDataSource.cs b/src/SlidingWindowCache/Public/IDataSource.cs
index b67ba10..b0b7a5e 100644
--- a/src/SlidingWindowCache/Public/IDataSource.cs
+++ b/src/SlidingWindowCache/Public/IDataSource.cs
@@ -74,12 +74,12 @@ public interface IDataSource<TRange, TData> where TRange : IComparable<TRange>
     /// For data sources with physical boundaries (e.g., databases with min/max IDs,
     /// time-series with temporal limits, paginated APIs with maximum pages), implementations MUST:
-    ///     <list type="number">
-    ///         <item>Return RangeChunk with Range = null when no data is available for the requested range</item>
-    ///         <item>Return truncated range when partial data is available (intersection of requested and available)</item>
-    ///         <item>NEVER throw exceptions for out-of-bounds requests - use null Range instead</item>
-    ///         <item>Ensure Data contains exactly Range.Span elements when Range is non-null</item>
-    ///     </list>
+    /// <list type="number">
+    ///     <item>Return RangeChunk with Range = null when no data is available for the requested range</item>
+    ///     <item>Return truncated range when partial data is available (intersection of requested and available)</item>
+    ///     <item>NEVER throw exceptions for out-of-bounds requests - use null Range instead</item>
+    ///     <item>Ensure Data contains exactly Range.Span elements when Range is non-null</item>
+    /// </list>
     /// Boundary Handling Examples:
     /// <code>
     /// // Database with records ID 100-500
diff --git
a/src/SlidingWindowCache/Public/LayeredWindowCache.cs b/src/SlidingWindowCache/Public/LayeredWindowCache.cs
index 5631d86..9ef9ecd 100644
--- a/src/SlidingWindowCache/Public/LayeredWindowCache.cs
+++ b/src/SlidingWindowCache/Public/LayeredWindowCache.cs
@@ -38,19 +38,19 @@ namespace SlidingWindowCache.Public;
 /// The outermost layer is disposed first to stop new user requests from reaching inner layers.
 /// Each layer's background loops are stopped gracefully before the next layer is disposed.
 /// </para>
-    /// <para><b>WaitForIdleAsync Semantics:</b></para>
-    /// <para>
-    /// <see cref="WaitForIdleAsync"/> awaits all layers sequentially, from outermost to innermost.
-    /// This guarantees that the entire cache stack has converged: the outermost layer finishes its
-    /// rebalance first (which drives fetch requests into inner layers), then each inner layer is
-    /// awaited in turn until the deepest layer is idle.
-    /// </para>
-    /// <para>
-    /// This full-stack idle guarantee is required for correct behavior of the
-    /// GetDataAndWaitForIdleAsync strong consistency extension method when used with a
-    /// <see cref="LayeredWindowCache{TRange, TData}"/>: a caller waiting for strong
-    /// consistency needs all layers to have converged, not just the outermost one.
-    /// </para>
+/// <para><b>WaitForIdleAsync Semantics:</b></para>
+/// <para>
+/// <see cref="WaitForIdleAsync"/> awaits all layers sequentially, from outermost to innermost.
+/// This guarantees that the entire cache stack has converged: the outermost layer finishes its
+/// rebalance first (which drives fetch requests into inner layers), then each inner layer is
+/// awaited in turn until the deepest layer is idle.
+/// </para>
+/// <para>
+/// This full-stack idle guarantee is required for correct behavior of the
+/// GetDataAndWaitForIdleAsync strong consistency extension method when used with a
+/// <see cref="LayeredWindowCache{TRange, TData}"/>: a caller waiting for strong
+/// consistency needs all layers to have converged, not just the outermost one.
+/// </para>
 /// </remarks>
 public sealed class LayeredWindowCache<TRange, TData> : IWindowCache<TRange, TData>
diff --git a/tests/SlidingWindowCache.Integration.Tests/StrongConsistencyModeTests.cs b/tests/SlidingWindowCache.Integration.Tests/StrongConsistencyModeTests.cs
index 621ba1a..004ab8f 100644
--- a/tests/SlidingWindowCache.Integration.Tests/StrongConsistencyModeTests.cs
+++ b/tests/SlidingWindowCache.Integration.Tests/StrongConsistencyModeTests.cs
@@ -90,14 +90,16 @@ public static IEnumerable<object[]> AllStrategiesTestData
         get
         {
             foreach (var storage in StorageStrategyTestData)
-            foreach (var execution in ExecutionStrategyTestData)
             {
-                yield return
-                [
-                    $"{execution[0]}_{storage[0]}",
-                    storage[1],
-                    execution[1]
-                ];
+                foreach (var execution in ExecutionStrategyTestData)
+                {
+                    yield return
+                    [
+                        $"{execution[0]}_{storage[0]}",
+                        storage[1],
+                        execution[1]
+                    ];
+                }
             }
         }
     }

From 6518d0ab815b25fd5aed2acd9da10f4d0609a1cf Mon Sep 17 00:00:00 2001
From: Mykyta Zotov
Date: Sun, 1 Mar 2026 21:59:10 +0100
Subject: [PATCH 4/6] Update
 tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---
 .../Public/LayeredWindowCacheTests.cs | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
index e007c54..33f05bd 100644
--- a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
+++ b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
@@ -32,13 +32,7 @@ private static LayeredWindowCache<int, int> CreateLayere
         IReadOnlyList<IWindowCache<int, int>> layers)
     {
         // Instantiate via the internal constructor using the test project's InternalsVisibleTo access
-        return (LayeredWindowCache<int, int>)
-            Activator.CreateInstance(
-                typeof(LayeredWindowCache<int, int>),
-                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance,
-                null,
-                [layers],
-                null)!;
+        return new LayeredWindowCache<int, int>(layers);
}
 
     private static Intervals.NET.Range<int> MakeRange(int start, int end)

From 5b9aecc5fd05297c796212d31da4bb58af00a668 Mon Sep 17 00:00:00 2001
From: Mykyta Zotov
Date: Sun, 1 Mar 2026 22:40:45 +0100
Subject: [PATCH 5/6] feat: implement ReadOnlyMemoryEnumerable for
 zero-allocation enumeration of ReadOnlyMemory data; refactor: update
 WindowCacheDataSourceAdapter to utilize ReadOnlyMemoryEnumerable for lazy
 data access; fix: add null check for domain parameter in
 LayeredWindowCacheBuilder; test: add unit tests for LayeredWindowCacheBuilder
 and WindowCacheDataSourceAdapter

---
 docs/storage-strategies.md                   | 14 +++-
 .../Collections/ReadOnlyMemoryEnumerable.cs  | 72 +++++++++++++++++++
 .../Public/LayeredWindowCacheBuilder.cs      |  8 +++
 .../Public/WindowCacheDataSourceAdapter.cs   | 17 +++--
 .../Public/LayeredWindowCacheBuilderTests.cs | 19 +++++
 .../WindowCacheDataSourceAdapterTests.cs     | 38 ++++++++--
 6 files changed, 153 insertions(+), 15 deletions(-)
 create mode 100644 src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs

diff --git a/docs/storage-strategies.md b/docs/storage-strategies.md
index 63cde81..0521d7b 100644
--- a/docs/storage-strategies.md
+++ b/docs/storage-strategies.md
@@ -229,14 +229,24 @@ If you need lower-level control, you can compose layers manually using `WindowCa
 
 ```csharp
 var backgroundCache = new WindowCache(
-    slowDataSource, domain, backgroundOptions);
+    slowDataSource, domain,
+    new WindowCacheOptions(
+        leftCacheSize: 10.0,
+        rightCacheSize: 10.0,
+        readMode: UserCacheReadMode.CopyOnRead,
+        leftThreshold: 0.3,
+        rightThreshold: 0.3));
 
 // Wrap background cache as IDataSource for user cache
 IDataSource cachedDataSource =
     new WindowCacheDataSourceAdapter(backgroundCache);
 
 var userCache = new WindowCache(
-    cachedDataSource, domain, userOptions);
+    cachedDataSource, domain,
+    new WindowCacheOptions(
+        leftCacheSize: 0.5,
+        rightCacheSize: 0.5,
+        readMode: UserCacheReadMode.Snapshot));
 ```
 
 ---
 
diff --git
a/src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs b/src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs
new file mode 100644
index 0000000..2a17fbd
--- /dev/null
+++ b/src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs
@@ -0,0 +1,72 @@
+using System.Collections;
+
+namespace SlidingWindowCache.Infrastructure.Collections;
+
+/// <summary>
+/// A zero-allocation wrapper over a <see cref="ReadOnlyMemory{T}"/>.
+/// Enables lazy, single-pass enumeration of memory-backed data without copying the underlying array.
+/// </summary>
+/// <typeparam name="T">The element type.</typeparam>
+/// <remarks>
+/// <para>
+/// The <see cref="ReadOnlyMemory{T}"/> captured at construction keeps a reference to the
+/// backing array, ensuring the data remains reachable for the lifetime of this enumerable.
+/// </para>
+/// <para>
+/// Enumeration accesses elements via <c>ReadOnlyMemory&lt;T&gt;.Span</c> inside
+/// <see cref="Enumerator.Current"/>, which is valid because the property is not an iterator
+/// method and holds no state across yield boundaries.
+/// </para>
+/// </remarks>
+internal readonly struct ReadOnlyMemoryEnumerable<T> : IEnumerable<T>
+{
+    private readonly ReadOnlyMemory<T> _memory;
+
+    /// <summary>
+    /// Initializes a new <see cref="ReadOnlyMemoryEnumerable{T}"/> wrapping the given memory.
+    /// </summary>
+    /// <param name="memory">The memory region to enumerate.</param>
+    public ReadOnlyMemoryEnumerable(ReadOnlyMemory<T> memory)
+    {
+        _memory = memory;
+    }
+
+    /// <summary>
+    /// Returns an enumerator that iterates through the memory region.
+    /// </summary>
+    public Enumerator GetEnumerator() => new(_memory);
+
+    IEnumerator<T> IEnumerable<T>.GetEnumerator() => new Enumerator(_memory);
+
+    IEnumerator IEnumerable.GetEnumerator() => new Enumerator(_memory);
+
+    /// <summary>
+    /// Enumerator for <see cref="ReadOnlyMemoryEnumerable{T}"/>.
+    /// Accesses each element via index into <see cref="_memory"/>.
+    /// </summary>
+    internal struct Enumerator : IEnumerator<T>
+    {
+        private readonly ReadOnlyMemory<T> _memory;
+        private int _index;
+
+        internal Enumerator(ReadOnlyMemory<T> memory)
+        {
+            _memory = memory;
+            _index = -1;
+        }
+
+        /// <inheritdoc />
+        public T Current => _memory.Span[_index];
+
+        object?
IEnumerator.Current => Current;
+
+        /// <inheritdoc />
+        public bool MoveNext() => ++_index < _memory.Length;
+
+        /// <inheritdoc />
+        public void Reset() => _index = -1;
+
+        /// <inheritdoc />
+        public void Dispose() { }
+    }
+}
diff --git a/src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs b/src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs
index 84f7513..de96caa 100644
--- a/src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs
+++ b/src/SlidingWindowCache/Public/LayeredWindowCacheBuilder.cs
@@ -115,6 +115,9 @@ private LayeredWindowCacheBuilder(IDataSource<TRange, TData> rootDataSource, TDo
     /// <exception cref="ArgumentNullException">
     /// Thrown when <paramref name="dataSource"/> is null.
     /// </exception>
+    /// <exception cref="ArgumentNullException">
+    /// Thrown when <paramref name="domain"/> is null.
+    /// </exception>
     public static LayeredWindowCacheBuilder<TRange, TData, TDomain> Create(
         IDataSource<TRange, TData> dataSource,
         TDomain domain)
@@ -124,6 +127,11 @@ public static LayeredWindowCacheBuilder<TRange, TData, TDomain> Create(
         {
             throw new ArgumentNullException(nameof(dataSource));
         }
 
+        if (domain is null)
+        {
+            throw new ArgumentNullException(nameof(domain));
+        }
+
         return new LayeredWindowCacheBuilder<TRange, TData, TDomain>(dataSource, domain);
     }
 
diff --git a/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs b/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs
index 439715b..d06b2db 100644
--- a/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs
+++ b/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs
@@ -1,5 +1,6 @@
 using Intervals.NET;
 using Intervals.NET.Domain.Abstractions;
+using SlidingWindowCache.Infrastructure.Collections;
 using SlidingWindowCache.Public.Dto;
 
 namespace SlidingWindowCache.Public;
@@ -33,7 +34,8 @@ namespace SlidingWindowCache.Public;
 /// delegates to the inner (deeper) cache's <see cref="IWindowCache{TRange, TData}.GetDataAsync"/>,
 /// which returns data from the inner cache's window (possibly triggering a background rebalance
 /// in the inner cache). The <see cref="ReadOnlyMemory{T}"/> from <see cref="RangeResult{TRange, TData}"/>
-/// is converted to an array for the <see cref="RangeChunk{TRange, TData}"/> contract.
+/// is wrapped in a <see cref="ReadOnlyMemoryEnumerable{T}"/> and passed directly as
+/// <see cref="RangeChunk{TRange, TData}.Data"/>, avoiding an intermediate array allocation.
/// </para>
 /// <para><b>Consistency Model:</b></para>
 /// <para>
@@ -122,11 +124,12 @@ public WindowCacheDataSourceAdapter(IWindowCache<TRange, TData> innerCa
     /// also trigger a background rebalance in the inner cache (eventual consistency).
     /// </para>
     /// <para>
-    /// The <see cref="ReadOnlyMemory{T}"/> returned by the inner cache is converted to an array
-    /// to satisfy the <see cref="IDataSource{TRange, TData}"/> contract. This allocation is
-    /// intentional: for Snapshot inner caches, a copy is required to avoid capturing
-    /// a reference into the inner cache's internal array (which may be replaced by a rebalance);
-    /// for CopyOnRead inner caches, the allocation is already made by the read itself.
+    /// The <see cref="ReadOnlyMemory{T}"/> returned by the inner cache is wrapped in a
+    /// <see cref="ReadOnlyMemoryEnumerable{T}"/> without copying the underlying data.
+    /// The <see cref="ReadOnlyMemoryEnumerable{T}"/> captures a reference to the backing array,
+    /// keeping it reachable for the lifetime of the enumerable. Enumeration is deferred:
+    /// the data is read lazily when the outer cache's rebalance path materializes the
+    /// sequence (a single pass).
     /// </para>
     /// </remarks>
     public async Task<RangeChunk<TRange, TData>> FetchAsync(
@@ -134,6 +137,6 @@ public async Task<RangeChunk<TRange, TData>> FetchAsync(
         CancellationToken cancellationToken)
     {
         var result = await _innerCache.GetDataAsync(range, cancellationToken).ConfigureAwait(false);
-        return new RangeChunk<TRange, TData>(result.Range, result.Data.ToArray());
+        return new RangeChunk<TRange, TData>(result.Range, new ReadOnlyMemoryEnumerable<TData>(result.Data));
     }
 }
diff --git a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs
index 17ba5a1..1b6db37 100644
--- a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs
+++ b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheBuilderTests.cs
@@ -1,3 +1,4 @@
+using Intervals.NET.Domain.Abstractions;
 using Intervals.NET.Domain.Default.Numeric;
 using SlidingWindowCache.Public;
 using SlidingWindowCache.Public.Configuration;
@@ -45,6 +46,24 @@ public void Create_WithNullDataSource_ThrowsArgumentNullException()
         Assert.Contains("dataSource",
((ArgumentNullException)exception).ParamName);
     }
 
+    [Fact]
+    public void Create_WithNullDomain_ThrowsArgumentNullException()
+    {
+        // ARRANGE — TDomain must be a reference type to accept null;
+        // use IRangeDomain<int> as the type parameter (interface = reference type)
+        var dataSource = CreateDataSource();
+
+        // ACT
+        var exception = Record.Exception(() =>
+            LayeredWindowCacheBuilder<int, int, IRangeDomain<int>>
+                .Create(dataSource, null!));
+
+        // ASSERT
+        Assert.NotNull(exception);
+        Assert.IsType<ArgumentNullException>(exception);
+        Assert.Contains("domain", ((ArgumentNullException)exception).ParamName);
+    }
+
     [Fact]
     public void Create_WithValidArguments_ReturnsBuilder()
     {
diff --git a/tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs b/tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs
index 5b95d62..c4f5536 100644
--- a/tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs
+++ b/tests/SlidingWindowCache.Unit.Tests/Public/WindowCacheDataSourceAdapterTests.cs
@@ -1,5 +1,6 @@
 using Intervals.NET.Domain.Default.Numeric;
 using Moq;
+using SlidingWindowCache.Infrastructure.Collections;
 using SlidingWindowCache.Public;
 using SlidingWindowCache.Public.Dto;
 
@@ -111,9 +112,9 @@ public async Task FetchAsync_WithFullResult_ReturnsChunkWithCorrectData()
     }
 
     [Fact]
-    public async Task FetchAsync_DataIsAnArray_NotSameReferenceAsInnerMemory()
+    public async Task FetchAsync_DataIsLazyEnumerable_NotEagerCopy()
     {
-        // ARRANGE — ensure the adapter creates a copy, not a reference to inner cache internals
+        // ARRANGE — adapter wraps ReadOnlyMemory lazily; no intermediate array is allocated
         var mock = CreateCacheMock();
         var range = MakeRange(1, 5);
         var innerArray = new[] { 1, 2, 3, 4, 5 };
         var result = new RangeResult<int, int>(range, new ReadOnlyMemory<int>(innerArray));
         var adapter = CreateAdapter(mock.Object);
@@ -125,13 +126,38 @@ public async Task FetchAsync_DataIsAnArray_NotSameReferenceAsInnerMemory()
 
         // ACT
         var chunk = await adapter.FetchAsync(range, CancellationToken.None);
-        var returnedArray = chunk.Data.ToArray();
 
-        // Mutate inner array after fetch
+        // ASSERT —
Data is a lazy ReadOnlyMemoryEnumerable, not a materialized copy + Assert.IsType>(chunk.Data); + Assert.Equal(innerArray, chunk.Data.ToArray()); + } + + [Fact] + public async Task FetchAsync_DataEnumeratesFromMemory_ReflectsContentAtEnumerationTime() + { + // ARRANGE — lazy enumeration reads from the captured ReadOnlyMemory backing array; + // mutations to the source array before enumeration are visible (lazy semantics) + var mock = CreateCacheMock(); + var range = MakeRange(1, 5); + var innerArray = new[] { 1, 2, 3, 4, 5 }; + var result = new RangeResult(range, new ReadOnlyMemory(innerArray)); + var adapter = CreateAdapter(mock.Object); + + mock.Setup(c => c.GetDataAsync(range, It.IsAny())) + .ReturnsAsync(result); + + // ACT — fetch the chunk but do NOT enumerate yet + var chunk = await adapter.FetchAsync(range, CancellationToken.None); + + // Mutate the source array before enumeration innerArray[0] = 999; - // ASSERT — chunk data was already copied; mutation of source has no effect - Assert.Equal(1, returnedArray[0]); + // Enumerate now — lazy read picks up the mutation (expected: 999, not 1) + var enumeratedData = chunk.Data.ToArray(); + + // ASSERT + Assert.Equal(999, enumeratedData[0]); + Assert.Equal(2, enumeratedData[1]); } [Fact] From 49766ab859827dbf74e2370f853bdf166de9cf7b Mon Sep 17 00:00:00 2001 From: Mykyta Zotov Date: Sun, 1 Mar 2026 23:14:26 +0100 Subject: [PATCH 6/6] refactor: update data source adapter to use ReadOnlyMemoryEnumerable for improved memory efficiency; docs: enhance documentation for ReadOnlyMemoryEnumerable and WindowCacheDataSourceAdapter --- docs/components/overview.md | 2 +- docs/components/public-api.md | 4 ++-- .../Collections/ReadOnlyMemoryEnumerable.cs | 6 +++--- .../Public/WindowCacheDataSourceAdapter.cs | 14 ++++++++------ .../Public/LayeredWindowCacheTests.cs | 8 +++----- 5 files changed, 17 insertions(+), 17 deletions(-) diff --git a/docs/components/overview.md b/docs/components/overview.md index ba1962e..28af71f 100644 
--- a/docs/components/overview.md
+++ b/docs/components/overview.md
@@ -82,7 +82,7 @@ The system is easier to reason about when components are grouped by:
 🟦 WindowCacheDataSourceAdapter [IDataSource adapter]
 │  Wraps IWindowCache as IDataSource
 │  FetchAsync() → calls inner cache's GetDataAsync()
-│  converts ReadOnlyMemory<TData> → array for RangeChunk
+│  wraps ReadOnlyMemory<TData> in ReadOnlyMemoryEnumerable for RangeChunk (avoids temp TData[] alloc)
 ```
 
 **Component Type Legend:**

diff --git a/docs/components/public-api.md b/docs/components/public-api.md
index 2dd213b..daa11e4 100644
--- a/docs/components/public-api.md
+++ b/docs/components/public-api.md
@@ -157,7 +157,7 @@ Three classes support building layered cache stacks where each layer's data source
 
 Wraps an `IWindowCache` as an `IDataSource`, allowing any `WindowCache` to act as the data source
 for an outer `WindowCache`. Data is retrieved using eventual consistency (`GetDataAsync`).
 
-- Converts `ReadOnlyMemory<TData>` (returned by `IWindowCache.GetDataAsync`) to `IEnumerable<TData>` (required by `IDataSource.FetchAsync`) via `.ToArray()`.
+- Wraps `ReadOnlyMemory<TData>` (returned by `IWindowCache.GetDataAsync`) in a `ReadOnlyMemoryEnumerable<TData>` to satisfy the `IEnumerable<TData>` contract of `IDataSource.FetchAsync`. This avoids allocating a temporary `TData[]` copy — the wrapper holds only a reference to the existing backing array via `ReadOnlyMemory<TData>`, and the data is enumerated lazily in a single pass during the outer cache's rematerialization.
 - Does **not** own the wrapped cache; the caller is responsible for disposing it.
 
 ### LayeredWindowCache\<TData\>
@@ -188,7 +188,7 @@ await using var cache = LayeredWindowCacheBuilder

diff --git a/src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs b/src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs
--- a/src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs
+++ b/src/SlidingWindowCache/Infrastructure/Collections/ReadOnlyMemoryEnumerable.cs
-/// A zero-allocation wrapper over a <see cref="ReadOnlyMemory{T}"/>.
-/// Enables lazy, single-pass enumeration of memory-backed data without copying the underlying array.
+/// A lightweight wrapper over a <see cref="ReadOnlyMemory{T}"/>
+/// that avoids allocating a temp T[] and copying the underlying data.
 /// </summary>
 /// <typeparam name="T">The element type.</typeparam>
 ///
@@ -18,7 +18,7 @@ namespace SlidingWindowCache.Infrastructure.Collections;
 /// method and holds no state across yield boundaries.
 /// </para>
 /// </remarks>
-internal readonly struct ReadOnlyMemoryEnumerable<T> : IEnumerable<T>
+internal sealed class ReadOnlyMemoryEnumerable<T> : IEnumerable<T>
 {
     private readonly ReadOnlyMemory<T> _memory;

diff --git a/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs b/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs
index d06b2db..81dbd3f 100644
--- a/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs
+++ b/src/SlidingWindowCache/Public/WindowCacheDataSourceAdapter.cs
@@ -35,7 +35,8 @@ namespace SlidingWindowCache.Public;
 /// which returns data from the inner cache's window (possibly triggering a background rebalance
 /// in the inner cache). The <see cref="ReadOnlyMemory{TData}"/> from <see cref="RangeResult{TData}"/>
 /// is wrapped in a <see cref="ReadOnlyMemoryEnumerable{TData}"/> and passed directly as
-/// <see cref="RangeChunk{TData}.Data"/>, avoiding an intermediate array allocation.
+/// <see cref="RangeChunk{TData}.Data"/>, avoiding a temporary <typeparamref name="TData"/>[]
+/// allocation proportional to the data range.
 /// </para>
 /// <para><b>Consistency Model:</b></para>
 /// <para>
@@ -125,11 +126,12 @@ public WindowCacheDataSourceAdapter(IWindowCache<TData> innerCache)
     /// </summary>
     /// <remarks>
     /// <para>
     /// The <see cref="ReadOnlyMemory{TData}"/> returned by the inner cache is wrapped in a
-    /// <see cref="ReadOnlyMemoryEnumerable{TData}"/> without copying the underlying data.
-    /// The <see cref="ReadOnlyMemoryEnumerable{TData}"/> captures a reference to the backing array,
-    /// keeping it reachable for the lifetime of the enumerable. Enumeration is deferred:
-    /// the data is read lazily when the outer cache's rebalance path materializes the
-    /// sequence (a single pass).
+    /// <see cref="ReadOnlyMemoryEnumerable{TData}"/>, avoiding a temporary <typeparamref name="TData"/>[]
+    /// allocation proportional to the data range. The wrapper holds only a reference to the
+    /// existing backing array via <see cref="ReadOnlyMemory{TData}"/>, keeping it reachable for the
+    /// lifetime of the enumerable. Enumeration is deferred: the data is read lazily when the
+    /// outer cache's rebalance path materializes the <see cref="RangeChunk{TData}.Data"/>
+    /// sequence (a single pass).
     /// </para>
     /// </remarks>
     public async Task<RangeChunk<TData>> FetchAsync(

diff --git a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
index 33f05bd..f8ba3cd 100644
--- a/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
+++ b/tests/SlidingWindowCache.Unit.Tests/Public/LayeredWindowCacheTests.cs
@@ -21,17 +21,15 @@ private static Mock<IWindowCache<int>> CreateLayerMock(
     private static LayeredWindowCache<int> CreateLayeredCache(
         params IWindowCache<int>[] layers)
     {
-        // Use reflection-free approach: the internal constructor takes IReadOnlyList<IWindowCache<int>>
-        // We use the builder with real caches in integration tests; here we test the wrapper
-        // by constructing it directly via internal constructor using a subclass trick.
-        // Since the constructor is internal, we leverage InternalsVisibleTo in the test project.
+        // The internal constructor is accessible via InternalsVisibleTo.
+        // Integration tests use the builder with real caches; here we test the wrapper directly.
         return CreateLayeredCacheFromList(layers.ToList());
     }
 
     private static LayeredWindowCache<int> CreateLayeredCacheFromList(
         IReadOnlyList<IWindowCache<int>> layers)
     {
-        // Instantiate via the internal constructor using the test project's InternalsVisibleTo access
+        // Instantiate via the internal constructor using the test project's InternalsVisibleTo access.
         return new LayeredWindowCache<int>(layers);
     }
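
---

Reviewer note: the eager-copy vs. lazy-wrapper distinction that the updated adapter tests assert can be reproduced with BCL types alone. A minimal sketch (plain .NET only, no SlidingWindowCache types; `MemoryMarshal.ToEnumerable` stands in here for the library's internal `ReadOnlyMemoryEnumerable<T>`):

```csharp
using System;
using System.Linq;
using System.Runtime.InteropServices;

class LazyVsEagerDemo
{
    static void Main()
    {
        var backing = new[] { 1, 2, 3, 4, 5 };
        var memory = new ReadOnlyMemory<int>(backing);

        // Eager: ToArray() copies the data immediately (the old adapter behavior).
        int[] eager = memory.ToArray();

        // Lazy: the enumerable holds only a reference to the backing array and
        // reads it at enumeration time (the new adapter behavior).
        var lazy = MemoryMarshal.ToEnumerable(memory);

        // Mutate the backing array after wrapping but before enumeration.
        backing[0] = 999;

        Console.WriteLine(eager[0]);     // 1   (copy was taken before the mutation)
        Console.WriteLine(lazy.First()); // 999 (lazy read observes the mutation)
    }
}
```

This is exactly why `FetchAsync_DataEnumeratesFromMemory_ReflectsContentAtEnumerationTime` expects `999`: the adapter no longer snapshots the inner cache's buffer, so `RangeChunk.Data` reflects the backing memory as of the single enumeration pass during the outer cache's rematerialization.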