diff --git a/caching-old.md b/caching-old.md
new file mode 100644
index 00000000..27e6f4e2
--- /dev/null
+++ b/caching-old.md
@@ -0,0 +1,325 @@
+---
+title: Caching
+---
+
+# Caching
+
+Harper has integrated support for caching data from external sources. With built-in caching capabilities and distributed, high-performance, low-latency responsiveness, Harper makes an ideal data caching server. Harper stores cached data in standard tables, as queryable structured data, so data can easily be consumed in one format (for example, JSON or CSV) and provided to end users in different formats with different selected properties (for example, MessagePack with a subset of selected properties), or even with customized querying capabilities. Harper also manages and provides timestamps/tags for proper caching control, facilitating further downstream caching. With these combined capabilities, Harper is an extremely fast, interoperable, flexible, and customizable caching server.
+
+## Configuring Caching
+
+To set up caching, first you will need to define a table that you will use as your cache (to store the cached data). You can review the [introduction to building applications](./) for more information on setting up the application (and the [defining schemas documentation](defining-schemas)), but once you have defined an application folder with a schema, you can add a table for caching to your `schema.graphql`:
+
+```graphql
+type MyCache @table(expiration: 3600) @export {
+ id: ID @primaryKey
+}
+```
+
+You may also note that we can define a time-to-live (TTL) expiration on the table, indicating when table records/entries should expire and be evicted from this table. This is generally necessary for "passive" caches where there is no active notification of when entries expire. However, this is not needed if you provide a means of notifying when data is invalidated and changed. The units for expiration, and other duration-based properties, are in seconds.
+
+While you can provide a single expiration time, there are actually several expiration timings that are potentially relevant, and they can be independently configured. These settings are available as directive properties on the table configuration (like `expiration` above):
+
+- Stale expiration: the point when a request for a record should trigger a request to origin (but might possibly return the current stale record, depending on policy).
+- Must-revalidate expiration: the point when a request for a record must make a request to origin first and return the latest value from origin.
+- Eviction expiration: the point when a record is actually removed from the caching table.
+
+You can provide a single expiration and it defines the behavior for all three. You can also provide three settings for expiration, through table directives:
+
+- `expiration` - The amount of time until a record goes stale.
+- `eviction` - The amount of time after expiration before a record can be evicted (defaults to zero).
+- `scanInterval` - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction).
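
For example, a cache table whose entries are fresh for 30 minutes, are retained for another 30 minutes before eviction, and are scanned every 10 minutes could be declared like this (a sketch using the directives above; the values are illustrative):

```graphql
type MyCache @table(expiration: 1800, eviction: 1800, scanInterval: 600) @export {
	id: ID @primaryKey
}
```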
+
+#### How `scanInterval` Determines the Eviction Cycle
+
+`scanInterval` determines fixed, clock-aligned times when eviction runs; these times are the same regardless of when the server started. Harper uses the `scanInterval` to divide the clock into evenly spaced “anchor times,” calculated in the local timezone of the server. This allows Harper to “snap” the eviction schedule to predictable points on the clock, such as every 15 minutes or every 6 hours, based on the interval length. As a result:
+
+- The server’s startup time does not affect when eviction runs.
+- Eviction timings are deterministic and timezone-aware.
+- For any given configuration, the eviction schedule is the same across restarts and across servers in the same local timezone.
+
+#### Example: 1-Hour Expiration
+
+`expiration` = 1 hour with default `scanInterval` (15 minutes, one quarter of `expiration`). This creates the following fixed eviction schedule:
+
+> 00:00
+> 00:15
+> 00:30
+> 00:45
+> 01:00
+> ... continuing every 15 minutes ...
+
+If the server starts at 12:05, it does not run eviction at 12:20 (“15 minutes after startup”). Instead, the next scheduled anchor is 12:15, then 12:30, 12:45, 13:00, and so on. The schedule is clock-aligned, not startup-aligned.
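
The anchor calculation can be sketched as a small function (illustrative only, not Harper's internal implementation): snap the current time forward to the next multiple of `scanInterval` past local midnight.

```javascript
// Illustrative sketch of clock-aligned anchor times (not Harper's actual code)
function nextAnchor(nowMs, scanIntervalMs) {
	const midnight = new Date(nowMs);
	midnight.setHours(0, 0, 0, 0); // local midnight, so anchors are timezone-aware
	const elapsed = nowMs - midnight.getTime();
	// snap forward to the next multiple of the interval
	return midnight.getTime() + Math.ceil(elapsed / scanIntervalMs) * scanIntervalMs;
}

// A server starting at 12:05 with a 15-minute interval next runs eviction at 12:15
const start = new Date(2024, 0, 1, 12, 5).getTime();
const next = new Date(nextAnchor(start, 15 * 60 * 1000)); // 12:15 local time
```

Because the anchors are computed from local midnight rather than from startup time, every server in the same timezone lands on the same schedule.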
+
+#### Example: 1-Day Expiration
+
+`expiration` = 1 day with default `scanInterval` (6 hours, one quarter of `expiration`). This creates the following fixed eviction schedule:
+
+> 00:00
+> 06:00
+> 12:00
+> 18:00
+> ... continuing every 6 hours ...
+
+If the server starts at 12:05, the next matching eviction time is 18:00 the same day, then 00:00, 06:00, 12:00, 18:00, and so on. If the server starts at 19:30, the schedule does not shift; the next anchor time is 00:00, and the regular 6-hour cycle continues.
+
+## Define External Data Source
+
+Next, you need to define the source for your cache. External data sources could be HTTP APIs, other databases, microservices, or any other source of data. This can be defined as a resource class in your application's `resources.js` module. You can extend the `Resource` class (which is available as a global variable in the Harper environment) as your base class. The first method to implement is a `get()` method to define how to retrieve the source data. For example, if we were caching an external HTTP API, we might define it as such:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async get() {
+ return (await fetch(`https://some-api.com/${this.getId()}`)).json();
+ }
+}
+```
+
+Next, we define this external data resource as the "source" for the caching table we defined above:
+
+```javascript
+const { MyCache } = tables;
+MyCache.sourcedFrom(ThirdPartyAPI);
+```
+
+Now we have a fully configured and connected caching table. If you access data from `MyCache` (for example, through the REST API, like `/MyCache/some-id`), Harper will check to see if the requested entry is in the table and return it if it is available (and hasn't expired). If there is no entry, or it has expired (it is older than one hour in this case), it will go to the source, calling the `get()` method, which will then retrieve the requested entry. Once the entry is retrieved, it will be saved/cached in the caching table (for one hour based on our expiration time).
+
+```mermaid
+flowchart TD
+ Client1(Client 1)-->Cache(Caching Table)
+ Client2(Client 2)-->Cache
+ Cache-->Resource(Data Source Connector)
+ Resource-->API(Remote Data Source API)
+```
+
+Harper handles waiting for an existing cache resolution to finish and uses its result. This prevents a "cache stampede" when entries expire, ensuring that multiple requests to a cache entry will all wait on a single request to the data source.
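
The stampede protection described above amounts to coalescing concurrent requests for the same key onto a single in-flight promise. A minimal sketch of the idea (Harper's real implementation is internal and more involved):

```javascript
// Minimal request-coalescing sketch (illustrative, not Harper's internal code)
const inFlight = new Map();
function coalesce(id, fetchFromSource) {
	if (!inFlight.has(id)) {
		// first caller triggers the source request; later callers share the promise
		const promise = fetchFromSource(id).finally(() => inFlight.delete(id));
		inFlight.set(id, promise);
	}
	return inFlight.get(id);
}
```

Every concurrent caller for the same `id` receives the same promise, so the data source sees a single request per expired entry.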
+
+Cache tables with an expiration are periodically pruned for expired entries. Because this is done periodically, there is usually some amount of time between when a record has expired and when the record is actually evicted (the cached data is removed). But when a record is checked for availability, the expiration time is used to determine if the record is fresh (and the cache entry can be used).
+
+### Eviction with Indexing
+
+Eviction is the removal of a locally cached copy of data; it does not imply deletion of the actual data from the canonical or origin data source. Because evicted records still exist (just not in the local cache), if a caching table uses expiration (and eviction) and has indexing on certain attributes, the data is not removed from the indexes. The indexes that reference the evicted record are preserved, along with the attribute data necessary to maintain them. Eviction therefore removes only the non-indexed data (evicted records are stored as "partial" records), that is, the data that can be safely removed from a cache without affecting the integrity or behavior of the indexes. If a search query matches an evicted record, the record will be requested on demand to fulfill the query.
+
+### Specifying a Timestamp
+
+In the example above, we simply retrieved data to fulfill a cache request. We may want to supply the timestamp of the record we are fulfilling as well. This can be set on the context for the request:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async get() {
+ let response = await fetch(`https://some-api.com/${this.getId()}`);
+ this.getContext().lastModified = response.headers.get('Last-Modified');
+ return response.json();
+ }
+}
+```
+
+#### Specifying an Expiration
+
+In addition, we can also specify when a cached record "expires". When a cached record expires, this means that a request for that record will trigger a request to the data source again. This does not necessarily mean that the cached record has been evicted (removed), although expired records will be periodically evicted. If the cached record still exists, the data source can revalidate it and return it. For example:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async get() {
+ const context = this.getContext();
+ let headers = new Headers();
+ if (context.replacingVersion) // this is the existing cached record
+ headers.set('If-Modified-Since', new Date(context.replacingVersion).toUTCString());
+ let response = await fetch(`https://some-api.com/${this.getId()}`, { headers });
+ let cacheInfo = response.headers.get('Cache-Control');
+		let maxAge = cacheInfo?.match(/max-age=(\d+)/)?.[1];
+ if (maxAge) // we can set a specific expiration time by setting context.expiresAt
+ context.expiresAt = Date.now() + maxAge * 1000; // convert from seconds to milliseconds and add to current time
+ // we can just revalidate and return the record if the origin has confirmed that it has the same version:
+ if (response.status === 304) return context.replacingRecord;
+ ...
+```
+
+## Active Caching and Invalidation
+
+The cache we have created above is a "passive" cache; it only pulls data from the data source as needed, and has no knowledge of whether or when data from the data source has actually changed, so it must rely on timer-based expiration to periodically retrieve possibly updated data. This means the cache may hold stale data for a while (if the underlying data has changed but the cached data hasn't expired), and it may refresh more often than necessary (if the source data hasn't changed). Consequently, it can be significantly more effective to implement an "active" cache, in which the data source is monitored and notifies the cache when any data changes. This ensures that when data changes, the cache can immediately load the updated data, and unchanged data can remain cached much longer (or indefinitely).
+
+### Invalidate
+
+One way to provide more active caching is to specifically invalidate individual records. Invalidation is useful when you know the source data has changed, and the cache needs to re-retrieve data from the source the next time that record is accessed. This can be done by executing the `invalidate()` method on a resource. For example, you could extend a table (in your resources.js) and provide a custom POST handler that does invalidation:
+
+```javascript
+const { MyTable } = tables;
+export class MyTableEndpoint extends MyTable {
+	async post(data) {
+		if (data.invalidate) // use this flag as a marker
+			this.invalidate();
+	}
+}
+```
+
+(Note that if you are now exporting this endpoint through resources.js, you don't necessarily need to directly export the table separately in your schema.graphql).
+
+### Subscriptions
+
+We can provide more control of an active cache with subscriptions. If there is a way to receive notifications from the external data source of data changes, we can implement this data source as an "active" data source for our cache by implementing a `subscribe` method. A `subscribe` method should return an asynchronous iterable that iterates and returns events indicating the updates. One straightforward way of creating an asynchronous iterable is by defining the `subscribe` method as an asynchronous generator. If we had an endpoint that we could poll for changes every second, we could implement this like:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	async *subscribe() {
+		while (true) {
+			// get the next data change event from the source (polling once per second)
+			// (assuming the endpoint returns an object with id, value, and timestamp)
+			let update = await (await fetch(`https://some-api.com/latest-update`)).json();
+			yield { // the change event (which will update the cache)
+				type: 'put', // indicates that the event includes the new data value
+				id: update.id, // the primary key of the record that updated
+				value: update.value, // the new value of the record that updated
+				timestamp: update.timestamp, // when the data change occurred
+			};
+			await new Promise((resolve) => setTimeout(resolve, 1000)); // wait a second before polling again
+		}
+	}
+ async get() {
+...
+```
+
+Notification events should always include an `id` property to indicate the primary key of the updated record. The event should have a `value` property for `put` and `message` event types. The `timestamp` is optional and can be used to indicate the exact timestamp of the change. The following event `type`s are supported:
+
+- `put` - This indicates that the record has been updated and provides the new value of the record.
+- `invalidate` - Alternately, you can notify with an event type of `invalidate` to indicate that the data has changed, but without the overhead of actually sending the data (the `value` property is not needed), so the data only needs to be sent if and when the data is requested through the cache. An `invalidate` will evict the entry and update the timestamp to indicate that there is new data that should be requested (if needed).
+- `delete` - This indicates that the record has been deleted.
+- `message` - This indicates a message is being passed through the record. The record value has not changed, but this is used for [publish/subscribe messaging](../real-time).
+- `transaction` - This indicates that there are multiple writes that should be treated as a single atomic transaction. These writes should be included as an array of data notification events in the `writes` property.
+
+And the following properties can be defined on event objects:
+
+- `type`: The event type as described above.
+- `id`: The primary key of the record that updated
+- `value`: The new value of the record that updated (for put and message)
+- `writes`: An array of event properties that are part of a transaction (used in conjunction with the transaction event type).
+- `table`: The name of the table with the record that was updated. This can be used with events within a transaction to specify events across multiple tables.
+- `timestamp`: The timestamp of when the data change occurred
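
As an example of these properties used together, a `transaction` event grouping writes across two tables might look like this (all identifiers and values here are hypothetical):

```javascript
// A hypothetical transaction event spanning two tables
const event = {
	type: 'transaction',
	timestamp: 1700000000000, // when the source transaction occurred
	writes: [
		{ type: 'put', table: 'Post', id: 'post-1', value: { title: 'Hello', likes: 2 } },
		{ type: 'delete', table: 'Comment', id: 'comment-9' },
	],
};
```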
+
+With an active external data source with a `subscribe` method, the data source will proactively notify the cache, ensuring a fresh and efficient active cache. Note that with an active data source, we still use the `sourcedFrom` method to register the source for a caching table, and the table will automatically detect and call the subscribe method on the data source.
+
+By default, Harper will only run the subscribe method on one thread. Harper is multi-threaded and normally runs many concurrent worker threads, but running a subscription on multiple threads can introduce overlapping notifications and race conditions, so running a subscription on a single thread is usually preferable. However, if you want to enable subscribe on multiple threads, you can define a `static subscribeOnThisThread` method to specify if the subscription should run on the current thread:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ static subscribeOnThisThread(threadIndex) {
+ return threadIndex < 2; // run on two threads (the first two threads)
+ }
+ async *subscribe() {
+		...
+```
+
+An alternative to using asynchronous generators is to use a subscription stream and send events to it. A default subscription stream (that doesn't generate its own events) is available from the Resource's default subscribe method:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ subscribe() {
+ const subscription = super.subscribe();
+ setupListeningToRemoteService().on('update', (event) => {
+ subscription.send(event);
+ });
+ return subscription;
+ }
+}
+```
+
+## Downstream Caching
+
+It is highly recommended that you utilize the [REST interface](../rest) for accessing caching tables, as it facilitates downstream caching for clients. Timestamps are recorded with all cached entries, and are used for incoming [REST requests to specify the `ETag` in the response](../rest#cachingconditional-requests). Clients can cache data themselves and send requests using the `If-None-Match` header to conditionally get a 304 and preserve their cached data based on the timestamp/`ETag` of the entries that are cached in Harper. Caching tables also have [subscription capabilities](caching#subscribing-to-caching-tables), which means that downstream caches can be fully "layered" on top of Harper, as either passive or active caches.
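
For example, a downstream client that previously received an `ETag` can revalidate its copy with a conditional request (the tag value here is hypothetical):

```http
GET /MyCache/some-id
If-None-Match: "1692374400000"
```

If the entry is unchanged, Harper responds with a 304 and the client keeps its cached copy.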
+
+## Write-Through Caching
+
+The cache we have defined so far only has data flowing from the data source to the cache. However, you may wish to support write methods, so that writes to the cache table can flow through to the underlying canonical data source, as well as populate the cache. This can be accomplished by implementing the standard write methods, like `put` and `delete`. If you are using an API with standard RESTful methods, you can pass writes through to the data source like this:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async put(data) {
+ await fetch(`https://some-api.com/${this.getId()}`, {
+ method: 'PUT',
+ body: JSON.stringify(data)
+ });
+ }
+ async delete() {
+ await fetch(`https://some-api.com/${this.getId()}`, {
+ method: 'DELETE',
+ });
+ }
+ ...
+```
+
+When doing an insert or update to the MyCache table, the data will be sent to the underlying data source through the `put` method and the new record value will be stored in the cache as well.
+
+### Loading from Source in Methods
+
+When you are using a caching table, it is important to remember that resource methods other than `get()` will not automatically load data from the source. If you have defined a `put()`, `post()`, or `delete()` method and you need the source data, you can ensure it is loaded by calling the `ensureLoaded()` method. For example, if you want to modify the existing record from the source, adding a property to it:
+
+```javascript
+class MyCache extends tables.MyCache {
+ async post(data) {
+ // if the data is not cached locally, retrieves from source:
+		await this.ensureLoaded();
+ // now we can be sure that the data is loaded, and can access properties
+ this.quantity = this.quantity - data.purchases;
+ }
+}
+```
+
+### Subscribing to Caching Tables
+
+You can subscribe to a caching table just like any other table. The one difference is that normal tables do not usually have `invalidate` events, but an active caching table may have `invalidate` events. Again, this event type gives listeners an opportunity to choose whether or not to actually retrieve the value that changed.
+
+### Passive-Active Updates
+
+With our passive update examples, we have provided a data source handler with a `get()` method that returns the specific requested record as the response. However, we can also actively update other records in our response handler (if our data source provides data that should be propagated to other related records). This can be done transactionally, to ensure that all updates occur atomically. The context that is provided to the data source holds the transaction information, so we can simply pass the context to any update/write methods that we call. For example, let's say we are loading a blog post, which also includes comment records:
+
+```javascript
+const { Post, Comment } = tables;
+class BlogSource extends Resource {
+	async get() {
+		const post = await (await fetch(`https://my-blog-server/${this.getId()}`)).json();
+		for (let comment of post.comments) {
+			await Comment.put(comment, this); // save this comment as part of our current context and transaction
+		}
+		return post;
+	}
+}
+Post.sourcedFrom(BlogSource);
+```
+
+Here both the update to the post and the update to the comments will be atomically/transactionally committed together with the same timestamp.
+
+## Cache-Control header
+
+When interacting with cached data, you can also use the `Cache-Control` request header to specify certain caching behaviors. When performing a PUT (or POST) method, you can use the `max-age` directive to indicate how long the resource should be cached (until stale):
+
+```http
+PUT /my-resource/id
+Cache-Control: max-age=86400
+```
+
+You can use the `only-if-cached` directive on GET requests to only return a resource if it is cached (otherwise a 504 is returned). Note that if the entry is not cached, this will still trigger a request for the source data from the data source; if you do not want source data retrieved, you can add the `no-store` directive. You can also use the `no-cache` directive if you do not want to use the cached resource. For example, to check if there is a cached resource without triggering a request to the data source:
+
+```http
+GET /my-resource/id
+Cache-Control: only-if-cached, no-store
+```
+
+You may also use the `stale-if-error` directive to indicate that it is acceptable to return a stale cached resource when the data source returns an error (network connection error, 500, 502, 503, or 504). The `must-revalidate` directive indicates that a stale cached resource cannot be returned, even when the data source has an error (by default, a stale cached resource is returned when there is a network connection error).
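
To make the directive syntax concrete, here is a small sketch of how such a header can be parsed into directives (a generic helper for illustration, not part of Harper's API):

```javascript
// Generic Cache-Control parser sketch (not part of Harper's API)
function parseCacheControl(header) {
	const directives = {};
	for (const part of header.split(',')) {
		const [name, value] = part.trim().split('=');
		// valueless directives (like no-store) become boolean flags
		if (name) directives[name.toLowerCase()] = value ?? true;
	}
	return directives;
}
```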
+
+## Caching Flow
+
+It may be helpful to understand the flow of a cache request. When a request is made to a caching table:
+
+- Harper will first create a resource instance to handle the process, and ensure that the data is loaded for the resource instance. To do this, it will first check if the record is in the table/cache.
+ - If the record is not in the cache, Harper will first check if there is a current request to get the record from the source. If there is, Harper will wait for the request to complete and return the record from the cache.
+ - If not, Harper will call the `get()` method on the source to retrieve the record. The record will then be stored in the cache.
+ - If the record is in the cache, Harper will check if the record is stale. If the record is not stale, Harper will immediately return the record from the cache. If the record is stale, Harper will call the `get()` method on the source to retrieve the record.
+ - The record will then be stored in the cache. This will write the record to the cache in a separate asynchronous/background write-behind transaction, so it does not block the current request, then return the data immediately once it has it.
+- The `get()` method will be called on the resource instance to return the record to the client (or perform any querying on the record). If this method is overridden, it will be called at this time.
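
The read path above can be condensed into a small decision function (a simplification for illustration; the field names are hypothetical):

```javascript
// Simplified cache-read decision (illustrative only)
function resolveCacheRead(record, now) {
	if (!record) return 'fetch-from-source'; // cache miss (or wait on an in-flight source request)
	if (record.expiresAt !== undefined && record.expiresAt <= now) {
		return 'revalidate-with-source'; // stale: call get() on the source
	}
	return 'serve-from-cache'; // fresh: return immediately
}
```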
+
+### Caching Flow with Write-Through
+
+When writes are performed on a caching table (in a `put()` or `post()` method, for example), the flow is slightly different:
+
+- Harper will have first created a resource instance to handle the process, and this resource instance will be the current `this` for a call to `put()` or `post()`.
+- If a `put()` or `update()` is called, for example, this action will be recorded in the current transaction.
+- Once the transaction is committed (which is done automatically as the request handler completes), the transaction write will be sent to the source to update the data.
+ - The local writes will wait for the source to confirm the writes have completed (note that this effectively allows you to perform a two-phase transactional write to the source, and the source can confirm the writes have completed before the transaction is committed locally).
+  - The transaction writes will then be written to the local caching table.
+- The transaction handler will wait for the local commit to be written, then the transaction will be resolved and a response will be sent to the client.
diff --git a/learn/_guide-template.mdx b/learn/_guide-template.mdx
new file mode 100644
index 00000000..36c43019
--- /dev/null
+++ b/learn/_guide-template.mdx
@@ -0,0 +1,35 @@
+---
+title:
+---
+
+Introduction paragraph - short, high-level description of what this guide will cover
+
+## What You Will Learn
+
+Include a bulleted list of details of learning outcomes for this guide
+
+## Prerequisites
+
+List of required prior knowledge and any necessary tools.
+
+Should include links to other guides for the prior knowledge.
+
+Not every page needs to list things like "Node.js", but the basic getting-started ones can
+
+Most should have some form of Harper installation requirements, such as "Harper CLI" or "Local Harper Installation" or "Harper Fabric Instance"
+
+It could include things like "Clone this repo template/example and run these setup steps"
+
+##
+
+Include as many sub sections as necessary for the guide itself.
+
+## Additional Resources
+
+Should be the last section and include any additional resources that are relevant to the guide.
+
+This could be like reference docs links, other related guides, and even external links to things.
+
+The guide itself should leverage links as much as possible but this section can be useful for including links that didn't really fit in the content itself.
+
+This could duplicate some links from the Prerequisites section particularly other guides related to this one.
diff --git a/learn/developers/active-caching-subscriptions.mdx b/learn/developers/active-caching-subscriptions.mdx
new file mode 100644
index 00000000..94c1c8ca
--- /dev/null
+++ b/learn/developers/active-caching-subscriptions.mdx
@@ -0,0 +1,352 @@
+---
+title: Active Caching with Subscriptions
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+The passive caching pattern — fetching from the source on demand, expiring on a timer — works well for data that changes infrequently and where brief staleness is acceptable. But for data that changes often, a TTL is a blunt instrument: too short and you're making unnecessary upstream calls; too long and you're serving stale data.
+
+Active caching solves this by inverting the flow. Instead of polling the source, your cache _subscribes_ to it. When the source changes, it pushes the update directly into the Harper cache — instantly, without waiting for a TTL to expire. Records stay fresh until they actually change, and there's no background polling overhead.
+
+In this guide you will implement an active cache for a live sports scoreboard feed. The source streams score updates as server-sent events; the Harper cache receives each update immediately and serves the current score to any number of downstream clients.
+
+## What You Will Learn
+
+- How passive and active caching differ in architecture and trade-offs
+- How to implement a `subscribe` method on a source Resource
+- How to yield events from an async generator
+- How to push events from a callback-based source using the subscription stream
+- When to use `put` vs. `invalidate` events
+- How to control which threads run the subscription
+
+## Prerequisites
+
+- Completed [Caching with Harper](./caching-with-harper)
+- Familiarity with async generators in JavaScript
+
+## Passive vs. Active Caching
+
+In the passive pattern, Harper drives the flow:
+
+```
+Client → Harper (cache miss or stale) → Source → Harper stores result → Client
+```
+
+The cache only knows data is stale when a client asks for it and the TTL has elapsed. Between TTL resets, the source can change any number of times and the cache has no idea.
+
+In the active pattern, the source drives the flow:
+
+```
+Source changes → Source pushes event → Harper updates cache proactively
+Client → Harper (always fresh) → Client
+```
+
+Harper receives every change the moment it happens. No TTL is needed — records stay cached indefinitely and are only replaced when the source says they changed.
+
+| Aspect | Passive | Active |
+| ------------------ | ------------------------------- | ----------------------------- |
+| TTL required | Yes | No (optional as a fallback) |
+| Staleness window | Up to TTL duration | Near-zero |
+| Upstream calls | One per record per TTL interval | Only on actual changes |
+| Source requirement | Simple `get` endpoint | Streaming or push-capable API |
+| Complexity | Low | Moderate |
+
+## Setting Up the Application
+
+Clone the example repository and open it in your editor.
+
+```bash
+git clone https://github.com/HarperFast/active-caching-example.git harper-active-caching
+```
+
+The repository has the following structure:
+
+```
+harper-active-caching/
+├── config.yaml
+├── schema.graphql
+└── resources.js
+```
+
+Start Harper in dev mode from inside the directory:
+
+```bash
+harper dev .
+```
+
+## Defining the Cache Table
+
+Open `schema.graphql`. The scoreboard cache table has no `expiration` — it stays valid until the source pushes an update:
+
+```graphql
+type GameScore @table @export {
+ id: ID @primaryKey # game ID, e.g. "game-001"
+ homeTeam: String @indexed
+ awayTeam: String @indexed
+ homeScore: Int
+ awayScore: Int
+ status: String @indexed # "live", "final", "upcoming"
+ lastUpdated: Long
+}
+```
+
+Without `expiration`, records never go stale passively. The only way they update is when the source pushes a `put` or `invalidate` event — or when Harper calls `get()` on a cache miss for a record that hasn't been loaded yet.
+
+## Implementing the Active Source
+
+Open `resources.js`. The `ScoreboardFeed` class connects to an imaginary streaming API and yields score updates as Harper cache events.
+
+```javascript
+// resources.js
+
+const SCORES_API_BASE = process.env.SCORES_API_BASE ?? 'https://scores.example.com';
+
+class ScoreboardFeed extends Resource {
+	async get() {
+		// Called on cache miss — fetch the initial state for a specific game
+		const response = await fetch(`${SCORES_API_BASE}/games/${this.getId()}`);
+		if (!response.ok) {
+			const error = new Error('Game not found');
+			error.statusCode = 404;
+			throw error;
+		}
+		return response.json();
+	}
+
+	async *subscribe() {
+		// Called once to stream all ongoing updates into the cache
+		const response = await fetch(`${SCORES_API_BASE}/stream`, {
+			headers: { Accept: 'text/event-stream' },
+		});
+
+		for await (const chunk of response.body) {
+			const lines = chunk.toString().split('\n');
+			for (const line of lines) {
+				if (!line.startsWith('data: ')) continue;
+				const event = JSON.parse(line.slice(6));
+				yield {
+					type: 'put',
+					id: event.gameId,
+					value: event.score,
+					timestamp: event.ts,
+				};
+			}
+		}
+	}
+}
+
+tables.GameScore.sourcedFrom(ScoreboardFeed);
+```
+
+`get()` and `subscribe()` have distinct roles:
+
+- **`get()`** — handles cache misses. If a client asks for `game-001` before the subscription has delivered it, Harper calls `get()` to fetch the initial state.
+- **`subscribe()`** — streams all future updates. Harper calls this once at startup and propagates every yielded event into the cache automatically.
+
+### How Harper calls `subscribe`
+
+Harper calls `subscribe()` once per process immediately after `sourcedFrom` is registered. The method should return (or be) an async iterable that yields events indefinitely. Harper does not call `subscribe()` per record — a single subscription covers the entire table.
+
+## Event Types
+
+The `type` field on each yielded event controls how Harper applies the update:
+
+```javascript
+// Replace the entire cached record with the new value
+yield { type: 'put', id: 'game-001', value: { homeScore: 3, awayScore: 1, ... } };
+
+// Tell Harper the record changed without sending the new value.
+// Harper will evict the record; the next client request triggers a get() call.
+yield { type: 'invalidate', id: 'game-001' };
+
+// Remove the record from the cache
+yield { type: 'delete', id: 'game-001' };
+```
+
+Use `put` when the event stream includes full record values — this is the most efficient path because Harper stores the value immediately without a follow-up `get()` call. Use `invalidate` when the stream only signals that something changed, and you want Harper to lazy-load the new value on demand.
+
+## Using a Callback-Based Source
+
+Not all sources use async iterables. If your upstream uses a callback or event-emitter API, extend `Resource` and push events into the default subscription stream instead of writing an async generator:
+
+```javascript
+// Extending Resource (a Harper global) gives access to super.subscribe(),
+// which returns the default subscription stream that events are pushed into
+class ScoreboardFeed extends Resource {
+	subscribe() {
+		const subscription = super.subscribe();
+
+		// Assumes the standard WebSocket API available in modern Node.js runtimes
+		const socket = new WebSocket('wss://scores.example.com/ws');
+		socket.addEventListener('message', (msg) => {
+			const event = JSON.parse(msg.data);
+			subscription.send({
+				type: 'put',
+				id: event.gameId,
+				value: event.score,
+				timestamp: event.ts,
+			});
+		});
+
+		socket.addEventListener('error', (err) => {
+			subscription.error(err); // surfaces to Harper's error handling
+		});
+
+		return subscription;
+	}
+}
+```
+
+## Configuring the Application
+
+Open `config.yaml`:
+
+```yaml
+graphqlSchema:
+ files: 'schema.graphql'
+rest: true
+jsResource:
+ files: 'resources.js'
+```
+
+- `graphqlSchema` loads `schema.graphql` and creates the `GameScore` table.
+- `rest` exposes `GameScore` as an HTTP endpoint.
+- `jsResource` loads `resources.js`, registers the `scoreboardFeed` source, and starts the subscription on startup.
+
+## Observing Active Updates
+
+With Harper running, open two terminals. In the first, poll a game score every second:
+
+
+
+
+```bash
+watch -n1 'curl -s http://localhost:9926/GameScore/game-001 | jq .'
+```
+
+
+
+
+```typescript
+setInterval(async () => {
+ const data = await fetch('http://localhost:9926/GameScore/game-001').then((r) => r.json());
+ console.log(data.homeScore, data.awayScore, data.status);
+}, 1000);
+```
+
+
+
+
+In the second terminal, simulate a score update being pushed by the source (bypassing the stream for testing):
+
+
+
+
+```bash
+curl -X PUT 'http://localhost:9926/GameScore/game-001' \
+ -H 'Content-Type: application/json' \
+ -d '{"homeTeam":"Rangers","awayTeam":"Hawks","homeScore":3,"awayScore":2,"status":"live","lastUpdated":1712500000000}'
+```
+
+
+
+
+```typescript
+await fetch('http://localhost:9926/GameScore/game-001', {
+ method: 'PUT',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({
+ homeTeam: 'Rangers',
+ awayTeam: 'Hawks',
+ homeScore: 3,
+ awayScore: 2,
+ status: 'live',
+ lastUpdated: 1712500000000,
+ }),
+});
+```
+
+
+
+
+The first terminal will reflect the new score immediately — no TTL expiry needed, no cache miss, no upstream call. The cache was updated in-place by the `put` event.
+
+## Controlling Subscription Threads
+
+Harper runs multiple worker threads. By default, `subscribe()` runs on exactly one thread to prevent duplicate events and race conditions — if every thread opened its own connection to the source, every event would be processed multiple times.
+
+In rare cases you may want subscriptions on multiple threads — for example, if your source shards data and each thread should subscribe to a different shard. Use `subscribeOnThisThread` to control this:
+
+```javascript
+const scoreboardFeed = {
+	subscribeOnThisThread(threadIndex) {
+		return threadIndex === 0; // default behavior: subscribe only on thread 0
+	},
+	async *subscribe() {
+		/* ... */
+	},
+};
+```
+
+## Adding a TTL Fallback
+
+Even with an active subscription, network interruptions can cause the connection to drop. You can add `expiration` to the table as a safety net — if the subscription fails and a record becomes stale, Harper will fall back to calling `get()`:
+
+```graphql
+type GameScore @table(expiration: 60) @export {
+ id: ID @primaryKey
+ ...
+}
+```
+
+With this in place, records are guaranteed to be at most 60 seconds stale even if the subscription connection drops.
+
+## Putting It All Together
+
+Here is the complete `resources.js`:
+
+```javascript
+// resources.js
+
+const SCORES_API_BASE = process.env.SCORES_API_BASE ?? 'https://scores.example.com';
+
+const scoreboardFeed = {
+ async get(id) {
+ const response = await fetch(`${SCORES_API_BASE}/games/${id}`);
+ if (!response.ok) {
+ const error = new Error('Game not found');
+ error.statusCode = 404;
+ throw error;
+ }
+ return response.json();
+ },
+
+ async *subscribe() {
+ const response = await fetch(`${SCORES_API_BASE}/stream`, {
+ headers: { Accept: 'text/event-stream' },
+ });
+
+		const decoder = new TextDecoder();
+		for await (const chunk of response.body) {
+			const lines = decoder.decode(chunk, { stream: true }).split('\n');
+ for (const line of lines) {
+ if (!line.startsWith('data: ')) continue;
+ const event = JSON.parse(line.slice(6));
+ yield {
+ type: 'put',
+ id: event.gameId,
+ value: event.score,
+ timestamp: event.ts,
+ };
+ }
+ }
+ },
+};
+
+tables.GameScore.sourcedFrom(scoreboardFeed);
+```
+
+## What Comes Next
+
+This guide covered active caching with a push-based subscription. The [Semantic Caching with Vector Indexing](./semantic-caching-vector-indexing) guide applies caching to AI-powered search — instead of keying the cache by exact ID, Harper finds semantically similar cached answers using vector similarity, so equivalent questions never hit the LLM twice.
+
+## Additional Resources
+
+- [Caching with Harper](./caching-with-harper) — foundational passive caching guide
+- [Resource API](/reference/v5/resources/resource-api) — `sourcedFrom`, `subscribe`, event types
+- [Database Schema](/reference/v5/database/schema) — `@table(expiration:)` and eviction configuration
diff --git a/learn/developers/caching-ai-generations.mdx b/learn/developers/caching-ai-generations.mdx
new file mode 100644
index 00000000..ba238a2b
--- /dev/null
+++ b/learn/developers/caching-ai-generations.mdx
@@ -0,0 +1,500 @@
+---
+title: Caching AI Generations with Harper
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+AI API calls are expensive. Generating a product description, summarizing an article, or personalizing a recommendation with a large language model can cost fractions of a cent per call — but those fractions add up fast at scale. And in most applications, the same content gets generated over and over: the same product page viewed by thousands of users, the same document summarized dozens of times.
+
+Harper's caching system is a natural fit for this problem. You can generate AI content once, cache it close to your users, and serve it instantly on every subsequent request. When the underlying data changes you invalidate the cached generation and let it be regenerated on the next access.
+
+In this guide you will build a product description endpoint backed by an LLM, wrap it in a Harper cache table, and implement an invalidation strategy so descriptions stay fresh when product data changes.
+
+## What You Will Learn
+
+- How to build a Resource class that calls an AI API and returns generated content
+- How to cache AI generations in a Harper table with an appropriate TTL
+- How to expose an invalidation endpoint so descriptions can be refreshed on demand
+- How to use ETags and conditional requests to avoid redundant content delivery downstream
+
+## Prerequisites
+
+- Completed [Caching with Harper](./caching-with-harper) (recommended — this guide builds directly on those concepts)
+- Working Harper installation (local or Fabric)
+- An OpenAI API key (or another compatible LLM provider)
+- A command-line HTTP client (`curl` recommended) or familiarity with `fetch`
+
+## The Problem This Solves
+
+Consider a product catalog with hundreds of items. Each product needs a compelling description tailored to its attributes. Generating those descriptions with an LLM produces higher-quality copy than static text, and you can re-generate them easily as your brand voice evolves.
+
+The challenge: you cannot call the LLM on every product page load. A single generation might take 1–3 seconds and cost money. Instead, you generate on first access, cache the result, and regenerate only when needed.
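+That generate-once, reuse-until-stale pattern is ordinary TTL memoization, which Harper implements for you at the database layer. As a plain-JavaScript illustration only (none of these names are Harper APIs):
+
+```javascript
+// Illustrative TTL memoization: generate on first access, reuse the value
+// until the entry is older than ttlMs, then regenerate.
+function ttlMemo(generate, ttlMs) {
+	const entries = new Map(); // key -> { value, expiresAt }
+	return async (key, now = Date.now()) => {
+		const entry = entries.get(key);
+		if (entry && entry.expiresAt > now) return entry.value; // cache hit
+		const value = await generate(key); // cache miss: the expensive call
+		entries.set(key, { value, expiresAt: now + ttlMs });
+		return value;
+	};
+}
+```
+
+The sections below set up exactly this behavior declaratively, with a table as the entry map, `expiration` as the TTL, and the LLM call as the generator.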
+
+Harper makes this straightforward:
+
+1. A `ProductDescription` cache table stores generated descriptions with a long TTL (24 hours).
+2. A `DescriptionGenerator` Resource class calls the LLM and returns the generated text.
+3. `sourcedFrom` connects the generator to the cache — Harper handles all the caching logic.
+4. The exported `ProductDescription` table provides the REST endpoint, and Harper invalidates cached descriptions automatically when the underlying product changes.
+
+## Setting Up the Application
+
+Clone the example repository:
+
+```bash
+git clone https://github.com/HarperFast/ai-cache-example.git harper-ai-cache
+```
+
+Create a `.env` file with your API key:
+
+```bash
+OPENAI_API_KEY=sk-...
+```
+
+Then start Harper:
+
+```bash
+harper dev .
+```
+
+## Defining the Schema
+
+Open `schema.graphql`. There are two tables: `Product` holds the product catalog, and `ProductDescription` is the cache table for AI-generated descriptions.
+
+```graphql
+type Product @table @export {
+ id: ID @primaryKey
+ name: String @indexed
+ category: String @indexed
+ price: Float
+ features: [String]
+}
+
+type ProductDescription @table(expiration: 86400) @export {
+ id: ID @primaryKey
+ description: String
+ generatedAt: Long
+ product: Product @relationship(from: id)
+}
+```
+
+The `product` field on `ProductDescription` uses `@relationship(from: id)` to declare that its own primary key (`id`) is a foreign key pointing to `Product`. Since `ProductDescription` records share the same ID as the product they describe, this gives consumers a way to fetch the full product details alongside the description in a single request.
+
+Both tables use `@export`; Harper's auto-generated REST endpoints are sufficient for reading and writing. Invalidation is handled automatically because the description generator extends the `Product` table, so no separate invalidation endpoint is needed.
+
+`ProductDescription` has a 24-hour TTL (`expiration: 86400`). A one-day window is reasonable for marketing copy — it's long enough to avoid redundant generation, and short enough that descriptions stay reasonably current even without explicit invalidation.
+
+## Configuring the Application
+
+Open `config.yaml` and enable the required plugins:
+
+```yaml
+graphqlSchema:
+ files: 'schema.graphql'
+rest: true
+jsResource:
+ files: 'resources.js'
+```
+
+- `graphqlSchema` loads `schema.graphql` and creates both tables.
+- `rest` enables Harper's REST API on port `9926`.
+- `jsResource` loads `resources.js`, registering the generator source, the `sourcedFrom` connection, and the exported `ProductDescription` endpoint.
+
+## Seeding the Product Catalog
+
+Rather than seeding data from `resources.js`, this application uses Harper's built-in [Data Loader](/reference/v5/database/data-loader) to populate the `Product` table from a JSON file on startup. Add `dataLoader` to `config.yaml`:
+
+```yaml
+graphqlSchema:
+ files: 'schema.graphql'
+rest: true
+jsResource:
+ files: 'resources.js'
+dataLoader:
+ files: 'data/products.json'
+```
+
+Then create `data/products.json`:
+
+```json
+{
+ "table": "Product",
+ "records": [
+ {
+ "id": "prod-001",
+ "name": "Titanium Water Bottle",
+ "category": "Outdoors",
+ "price": 39.99,
+ "features": ["BPA-free", "Keeps cold 24h", "Lightweight 200g"]
+ },
+ {
+ "id": "prod-002",
+ "name": "Noise-Cancelling Headphones",
+ "category": "Electronics",
+ "price": 199.99,
+ "features": ["40h battery", "Foldable", "USB-C charging"]
+ }
+ ]
+}
+```
+
+The Data Loader runs on every startup and deployment. It uses content hashing to skip records that haven't changed, so it's safe to redeploy without duplicating data or overwriting any manual edits.
+
+## Building the Description Generator
+
+The `DescriptionGenerator` class is the upstream source for the `ProductDescription` cache. Its `get` method fetches the product from the `Product` table (which it extends), builds a prompt, and calls the OpenAI API.
+
+```javascript
+// resources.js
+
+const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
+
+class DescriptionGenerator extends tables.Product {
+ static async get(productId) {
+ // Load the product data from our base class, the Product table
+ const product = await super.get(productId);
+ if (!product) {
+ const error = new Error('Product not found');
+ error.statusCode = 404;
+ throw error;
+ }
+
+ // Build a prompt from the product's attributes
+ const prompt = `Write a compelling, two-sentence product description for the following item.
+Product name: ${product.name}
+Category: ${product.category}
+Price: $${product.price}
+Features: ${product.features.join(', ')}
+Keep the tone enthusiastic but professional.`;
+
+ // Call the OpenAI chat completions API
+ const response = await fetch('https://api.openai.com/v1/chat/completions', {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'Authorization': `Bearer ${OPENAI_API_KEY}`,
+ },
+ body: JSON.stringify({
+ model: 'gpt-4o-mini',
+ messages: [{ role: 'user', content: prompt }],
+ max_tokens: 120,
+ }),
+ });
+
+		if (!response.ok) {
+			throw new Error(`OpenAI request failed with status ${response.status}`);
+		}
+		const result = await response.json();
+		const description = result.choices[0].message.content.trim();
+
+ return {
+ id: productId,
+ description,
+ generatedAt: Date.now(),
+ };
+ }
+}
+
+tables.ProductDescription.sourcedFrom(DescriptionGenerator);
+```
+
+With `sourcedFrom` in place, Harper now handles all caching behavior automatically:
+
+- **Cache miss**: the first request for `/ProductDescription/prod-001` calls `DescriptionGenerator.get()`, stores the result, and returns it.
+- **Cache hit**: subsequent requests within 24 hours return the stored description instantly — no LLM call.
+- **Cache expiry**: after 24 hours, the next request regenerates the description.
+- **Cache invalidation**: Because we are extending a table, Harper automatically invalidates the cache when the source table is updated.
+
+## Requesting a Generated Description
+
+With Harper running, request a description for the first product:
+
+
+
+
+```bash
+curl -i 'http://localhost:9926/ProductDescription/prod-001'
+```
+
+
+
+
+```typescript
+const response = await fetch('http://localhost:9926/ProductDescription/prod-001');
+const etag = response.headers.get('etag');
+const data = await response.json();
+console.log(data.description);
+console.log('ETag:', etag);
+```
+
+
+
+
+The first request will take a moment — Harper is calling the OpenAI API. You will get back something like:
+
+
+
+
+```
+HTTP/1.1 200 OK
+content-type: application/json
+etag: "abCDefGHij"
+...
+
+{
+ "id": "prod-001",
+ "description": "Stay hydrated on every adventure with this ultralight titanium bottle that keeps your drinks cold for a full 24 hours. BPA-free and weighing just 200g, it's built for those who refuse to compromise between performance and sustainability.",
+ "generatedAt": 1712500000000
+}
+```
+
+
+
+
+```
+Stay hydrated on every adventure with this ultralight titanium bottle...
+ETag: "abCDefGHij"
+```
+
+
+
+
+Make the same request again immediately:
+
+
+
+
+```bash
+curl -i 'http://localhost:9926/ProductDescription/prod-001'
+```
+
+
+
+
+```typescript
+const second = await fetch('http://localhost:9926/ProductDescription/prod-001');
+console.log(second.status);
+```
+
+
+
+
+
+
+
+```
+HTTP/1.1 200 OK
+content-type: application/json
+etag: "abCDefGHij"
+server-timing: db;dur=1.2
+...
+
+{ ... same description ... }
+```
+
+
+
+
+```
+200
+```
+
+
+
+
+The response is instant. Check the `server-timing` header — the database read time will be in single-digit milliseconds rather than the seconds the LLM call took.
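+To read that timing programmatically, note that the header value is a comma-separated list of metrics. A small hand-rolled parser (not part of Harper) makes the comparison easy:
+
+```javascript
+// Parse a Server-Timing header value such as "db;dur=1.2" into metric objects.
+function parseServerTiming(header) {
+	return header.split(',').map((entry) => {
+		const [name, ...params] = entry.trim().split(';');
+		const durParam = params.map((p) => p.trim()).find((p) => p.startsWith('dur='));
+		return { name, dur: durParam ? Number(durParam.slice(4)) : undefined };
+	});
+}
+
+// e.g. parseServerTiming(response.headers.get('server-timing'))
+```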
+
+## Using ETags to Avoid Redundant Transfers
+
+Harper automatically includes an `ETag` header with every record response. You can use `If-None-Match` to avoid re-transferring a description your client already has:
+
+
+
+
+```bash
+# Use the etag value from the previous response, double quotes included
+curl -i 'http://localhost:9926/ProductDescription/prod-001' \
+ -H 'If-None-Match: "abCDefGHij"'
+```
+
+
+
+
+```typescript
+const first = await fetch('http://localhost:9926/ProductDescription/prod-001');
+const etag = first.headers.get('etag'); // e.g. "abCDefGHij"
+
+const second = await fetch('http://localhost:9926/ProductDescription/prod-001', {
+ headers: { 'If-None-Match': etag },
+});
+console.log(second.status);
+```
+
+
+
+
+
+
+
+```
+HTTP/1.1 304 Not Modified
+etag: "abCDefGHij"
+```
+
+
+
+
+```
+304
+```
+
+
+
+
+A `304 Not Modified` response means the cached description your client holds is still current. No data is serialized or transmitted. This layering — Harper's internal cache plus HTTP conditional requests for downstream caches — means a regeneration event only propagates to clients when their cached copy actually becomes stale.
+
+## Querying a Description with Its Product
+
+Because `ProductDescription` declares a `@relationship` to `Product`, you can fetch the description and the full product record together using the `select` query parameter:
+
+
+
+
+```bash
+curl -s 'http://localhost:9926/ProductDescription/prod-001?select(description,generatedAt,product(name,price,features))'
+```
+
+
+
+
+```typescript
+const response = await fetch(
+ 'http://localhost:9926/ProductDescription/prod-001?select(description,generatedAt,product(name,price,features))'
+);
+const data = await response.json();
+console.log(data);
+```
+
+
+
+
+
+
+
+```json
+{
+ "description": "Stay hydrated on every adventure...",
+ "generatedAt": 1712500000000,
+ "product": {
+ "name": "Titanium Water Bottle",
+ "price": 39.99,
+ "features": ["BPA-free", "Keeps cold 24h", "Lightweight 200g"]
+ }
+}
+```
+
+
+
+
+```
+{
+ description: 'Stay hydrated on every adventure...',
+ generatedAt: 1712500000000,
+ product: { name: 'Titanium Water Bottle', price: 39.99, features: [ ... ] }
+}
+```
+
+
+
+
+Harper resolves the relationship and joins the `Product` record in a single database read — no extra round-trips.
+
+## Invalidating Descriptions When Products Change
+
+Because `DescriptionGenerator` extends the `Product` table, a single `PATCH` to the product is all it takes. Harper saves the update and invalidates the cached description in the same operation, so no second request is needed.
+
+
+
+
+```bash
+curl -X PATCH 'http://localhost:9926/Product/prod-001' \
+ -H 'Content-Type: application/json' \
+ -d '{"features": ["BPA-free", "Keeps cold 48h", "Lightweight 180g"]}'
+```
+
+
+
+
+```typescript
+await fetch('http://localhost:9926/Product/prod-001', {
+ method: 'PATCH',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ features: ['BPA-free', 'Keeps cold 48h', 'Lightweight 180g'] }),
+});
+```
+
+
+
+
+
+
+
+```
+HTTP/1.1 204 No Content
+```
+
+
+
+
+```
+204
+```
+
+
+
+
+The next `GET /ProductDescription/prod-001` triggers a new LLM call with the updated features. Every subsequent request serves the new cached description instantly.
+
+## Handling Errors Gracefully
+
+LLM APIs can fail — rate limits, network errors, service outages. By default, Harper will surface a 500 error to the client if `DescriptionGenerator.get()` throws. For a production cache you may want to serve the stale description if the LLM is temporarily unavailable.
+
+Add the `stale-if-error` Cache-Control directive to your request to accept a stale cached response when the source returns an error:
+
+
+
+
+```bash
+curl -i 'http://localhost:9926/ProductDescription/prod-001' \
+ -H 'Cache-Control: stale-if-error'
+```
+
+
+
+
+```typescript
+const response = await fetch('http://localhost:9926/ProductDescription/prod-001', {
+ headers: { 'Cache-Control': 'stale-if-error' },
+});
+```
+
+
+
+
+With `stale-if-error`, Harper returns the most recently cached description rather than propagating the upstream error — a sensible default for AI-generated marketing copy where slightly stale content is better than a broken page.
+
+## Going Further
+
+This guide used the _passive_ caching pattern: Harper fetches from the source on demand. For high-traffic applications, you may want to _proactively_ populate the cache — for example, pre-generating descriptions for all products at startup or on a schedule. This is covered in the [Active Caching and Subscriptions](#) guide (coming soon).
+
+You might also consider:
+
+- **Category-wide invalidation**: add a `patch` override on a custom `Product` class that iterates the affected `ProductDescription` records and calls `invalidate` on each when a category-level change touches many products at once.
+- **Version-aware ETags**: include a version field in `DescriptionGenerator.get()` so clients can detect stale descriptions proactively rather than waiting for a server-side invalidation.
+- **Cost tracking**: log `generatedAt` changes to measure how often you are actually hitting the LLM versus serving from cache.
+
+## Additional Resources
+
+- [Caching with Harper](./caching-with-harper) — foundational guide covering `sourcedFrom`, ETags, and TTL expiration
+- [Resource API](/reference/v5/resources/resource-api) — `sourcedFrom`, `invalidate`, `getContext`, static methods
+- [Database Schema](/reference/v5/database/schema) — `@table(expiration:)` directive reference
+- [REST Headers](/reference/v5/rest/headers) — ETag, `If-None-Match`, `Cache-Control` directives
+- [harper-ecommerce-template](https://github.com/HarperFast/harper-ecommerce-template) — Full ecommerce application using Harper with AI-generated content
diff --git a/learn/developers/caching-with-harper.mdx b/learn/developers/caching-with-harper.mdx
new file mode 100644
index 00000000..567ce76e
--- /dev/null
+++ b/learn/developers/caching-with-harper.mdx
@@ -0,0 +1,378 @@
+---
+title: Caching with Harper
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Every production application hits the same wall eventually: an external API you depend on is slow, rate-limited, or expensive to call. Harper's caching system lets you wrap any external data source — a REST API, a microservice, a database — and serve responses from a fast local cache while transparently fetching fresh data only when needed.
+
+In this guide you will build a Harper application that caches responses from a public API, observe caching behavior using ETags and HTTP status codes, and learn how to invalidate entries on demand.
+
+## What You Will Learn
+
+- How to define a cache table using the `@table(expiration:)` schema directive
+- How to wrap an external data source with a custom Resource class
+- How to connect a data source to a cache table using `sourcedFrom`
+- How to observe caching behavior through ETag and `304 Not Modified` responses
+- How to manually invalidate a cached entry
+
+## Prerequisites
+
+- Completed [Install and Connect Harper](../getting-started/install-and-connect-harper)
+- Completed [Create Your First Application](../getting-started/create-your-first-application)
+- Working Harper installation (local or Fabric)
+- A command-line HTTP client (`curl` recommended) or familiarity with `fetch`
+
+## Setting Up the Application
+
+Clone the example repository and open it in your editor. If you are using a container install, clone into the mounted `dev/` directory.
+
+```bash
+git clone https://github.com/HarperFast/caching-guide-example.git harper-caching
+```
+
+The repository has the following structure:
+
+```
+harper-caching/
+├── config.yaml
+├── schema.graphql
+└── resources.js
+```
+
+Start Harper in dev mode from inside the directory:
+
+```bash
+harper dev .
+```
+
+## Defining a Cache Table
+
+Open `schema.graphql`. The cache table is defined with a single addition to the familiar `@table` directive: an `expiration` argument.
+
+```graphql
+type JokeCache @table(expiration: 60) @export {
+ id: ID @primaryKey
+ setup: String
+ punchline: String
+}
+```
+
+The `expiration: 60` argument tells Harper that any record in this table is considered _stale_ after 60 seconds. When a stale record is requested, Harper fetches a fresh copy from the source resource and stores it before returning the response.
+
+:::info
+A table's `expiration` is measured in seconds. Harper also supports separate `eviction` and `scanInterval` arguments if you need fine-grained control over when records are physically removed from the table. See the [Schema reference](/reference/v5/database/schema) for details.
+:::
+
+## Wrapping an External Data Source
+
+The source for the cache is a simple object in `resources.js`. The public [Official Joke API](https://official-joke-api.appspot.com/) returns a joke by ID as a JSON object, a perfect stand-in for any real external API.
+
+```javascript
+// resources.js
+
+const jokeAPI = {
+ async get(id) {
+ const response = await fetch(`https://official-joke-api.appspot.com/jokes/${id}`);
+ return response.json();
+ },
+};
+
+tables.JokeCache.sourcedFrom(jokeAPI);
+```
+
+`sourcedFrom` registers `jokeAPI` as the upstream source for `JokeCache`. Harper's caching behavior now works as follows:
+
+1. A request arrives for `/JokeCache/1`.
+2. Harper checks if the record with id `1` exists in `JokeCache` and is not stale.
+3. If it is fresh, Harper returns it immediately.
+4. If it is missing or stale, Harper calls `jokeAPI.get()` to fetch the data, stores it in `JokeCache`, and returns the result.
+
+Multiple simultaneous requests for the same missing or stale record will all wait on a single upstream call — Harper prevents cache stampedes automatically.
+
+```mermaid
+flowchart LR
+ Client --> JokeCache
+ JokeCache -->|cache miss or stale| jokeAPI
+ jokeAPI -->|fetch + store| JokeCache
+ JokeCache -->|cached response| Client
+```
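+The stampede protection can be pictured as a "single-flight" map. The sketch below is purely illustrative; Harper applies this coalescing internally, so you never have to write it yourself:
+
+```javascript
+// Illustrative request coalescing: concurrent lookups for the same key share
+// one in-flight upstream call instead of stampeding the source.
+const inFlight = new Map();
+
+function singleFlight(key, fetcher) {
+	if (!inFlight.has(key)) {
+		const promise = fetcher(key).finally(() => inFlight.delete(key));
+		inFlight.set(key, promise);
+	}
+	return inFlight.get(key);
+}
+```
+
+Ten concurrent `singleFlight('1', fetchJoke)` calls would run `fetchJoke` once and hand every caller the same promise.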
+
+## Configuring the Application
+
+With the schema and resource in place, open `config.yaml` and enable the two plugins this application needs:
+
+```yaml
+graphqlSchema:
+ files: 'schema.graphql'
+rest: true
+jsResource:
+ files: 'resources.js'
+```
+
+- `graphqlSchema` loads `schema.graphql` and creates the `JokeCache` table.
+- `rest` enables Harper's REST API on port `9926`, exposing any `@export`-ed tables and resources as HTTP endpoints.
+- `jsResource` loads `resources.js`, registering the `jokeAPI` source and the `sourcedFrom` connection — as well as any exported Resource classes as endpoints.
+
+Restart Harper (or let `harper dev` pick up the change automatically), then continue.
+
+:::note
+If you need to check your work, check out the [`01-cached-api`](https://github.com/HarperFast/caching-guide-example/tree/01-cached-api) branch.
+:::
+
+## Making Your First Cached Request
+
+With Harper running, fetch a joke:
+
+
+
+
+```bash
+curl -i 'http://localhost:9926/JokeCache/1'
+```
+
+
+
+
+```typescript
+const response = await fetch('http://localhost:9926/JokeCache/1');
+console.log(response.status); // 200
+console.log(response.headers.get('etag'));
+const joke = await response.json();
+console.log(joke);
+```
+
+
+
+
+You should see a `200` response:
+
+
+
+
+```
+HTTP/1.1 200 OK
+content-type: application/json
+etag: "abCDefGHij"
+...
+
+{
+ "id": 1,
+ "type": "general",
+ "setup": "What did the ocean say to the beach?",
+ "punchline": "Nothing, it just waved."
+}
+```
+
+
+
+
+```
+200
+"abCDefGHij"
+{ id: 1, type: 'general', setup: 'What did the ocean say to the beach?', punchline: 'Nothing, it just waved.' }
+```
+
+
+
+
+Note the double quotes on the ETag — they are part of the value itself, not just string delimiters. You will need to include them when passing the ETag back in a request header.
+
+Harper automatically computes an ETag from the record's last-modified timestamp. This is the key to downstream caching.
+
+## Observing Caching Behavior with ETags
+
+Make the same request again, this time passing the ETag back in the `If-None-Match` header:
+
+
+
+
+```bash
+# Use the etag value from the previous response, double quotes included
+curl -i 'http://localhost:9926/JokeCache/1' \
+ -H 'If-None-Match: "abCDefGHij"'
+```
+
+
+
+
+```typescript
+// Store the etag from the first request
+const first = await fetch('http://localhost:9926/JokeCache/1');
+const etag = first.headers.get('etag'); // e.g. "abCDefGHij"
+
+// Second request using the etag
+const second = await fetch('http://localhost:9926/JokeCache/1', {
+ headers: { 'If-None-Match': etag },
+});
+console.log(second.status); // 304
+```
+
+
+
+
+
+
+
+```
+HTTP/1.1 304 Not Modified
+etag: "abCDefGHij"
+```
+
+
+
+
+```
+304
+```
+
+
+
+
+The response status will be `304 Not Modified` with an empty body. Harper compared the record's current ETag to the one you sent and found them identical — the data hasn't changed, so there's nothing to transfer.
+
+This is standard HTTP conditional request behavior. Any HTTP cache layer between your client and Harper — a CDN, a service worker, or a browser cache — can use this same mechanism to avoid redundant data transfers.
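+A downstream client can implement this revalidation loop in a few lines. The helper below is a hypothetical sketch, not a Harper API; the fetch implementation is injectable so the logic can be exercised without a running server:
+
+```javascript
+// Minimal downstream cache keyed by URL: remember { etag, body }, revalidate
+// with If-None-Match, and reuse the local copy on 304 Not Modified.
+async function cachedGet(url, cache, fetchImpl = fetch) {
+	const entry = cache.get(url);
+	const headers = entry ? { 'If-None-Match': entry.etag } : {};
+	const response = await fetchImpl(url, { headers });
+	if (response.status === 304) {
+		return entry.body; // nothing changed and no body was transferred
+	}
+	const body = await response.json();
+	cache.set(url, { etag: response.headers.get('etag'), body });
+	return body;
+}
+
+// const jokes = new Map();
+// const joke = await cachedGet('http://localhost:9926/JokeCache/1', jokes);
+```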
+
+:::info
+The `ETag` / `If-None-Match` pattern is documented in detail in the [REST Headers reference](/reference/v5/rest/headers).
+:::
+
+## Watching Cache Expiration
+
+The `JokeCache` table has a 60-second expiration. After 60 seconds, the cached record becomes stale and the next request will fetch a fresh copy from `jokeAPI`.
+
+You can force this behavior immediately by passing the `no-cache` directive in the `Cache-Control` request header, which tells Harper to bypass the local cache and always go to the source:
+
+
+
+
+```bash
+curl -i 'http://localhost:9926/JokeCache/1' \
+ -H 'Cache-Control: no-cache'
+```
+
+
+
+
+```typescript
+const response = await fetch('http://localhost:9926/JokeCache/1', {
+ headers: { 'Cache-Control': 'no-cache' },
+});
+```
+
+
+
+
+You will see a `200` response, and if you check the Harper logs you will see an outbound request to `jokeAPI`.
+
+## Invalidating a Cache Entry
+
+Sometimes you know the source data has changed and you do not want to wait for the TTL to expire. Harper's Resource API exposes an `invalidate` method that marks a cached record as stale immediately, so it will be reloaded from the source on the next access.
+
+First, remove the `@export` directive from the `JokeCache` schema:
+
+```graphql
+type JokeCache @table(expiration: 60) {
+ id: ID @primaryKey
+ setup: String
+ punchline: String
+}
+```
+
+Then, create an exported class of the same name in `resources.js` with a custom `POST` handler:
+
+```javascript
+export class JokeCache extends tables.JokeCache {
+ static async post(target, data) {
+ const body = await data;
+ if (body?.action === 'invalidate') {
+ this.invalidate(target);
+ return { status: 200, data: { message: 'invalidated' } };
+ }
+ }
+}
+```
+
+Exporting this class registers it with Harper as the endpoint for `/JokeCache`. The `@export` directive in the schema is no longer needed because this class now provides the export.
+
+Now you can trigger invalidation with a `POST` request:
+
+
+
+
+```bash
+curl -X POST 'http://localhost:9926/JokeCache/1' \
+ -H 'Content-Type: application/json' \
+ -d '{"action": "invalidate"}'
+```
+
+
+
+
+```typescript
+await fetch('http://localhost:9926/JokeCache/1', {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ action: 'invalidate' }),
+});
+```
+
+
+
+
+The next `GET /JokeCache/1` will trigger a fresh fetch from `jokeAPI` regardless of whether the TTL has expired.
+
+:::note
+If you need to check your work, check out the [`02-invalidate-example`](https://github.com/HarperFast/caching-guide-example/tree/02-invalidate-example) branch.
+:::
+
+## Putting It All Together
+
+Here is the complete `resources.js` for this guide:
+
+```javascript
+// resources.js
+
+const jokeAPI = {
+	async get(id) {
+		const response = await fetch(`https://official-joke-api.appspot.com/jokes/${id}`);
+ return response.json();
+ },
+};
+
+tables.JokeCache.sourcedFrom(jokeAPI);
+
+export class JokeCache extends tables.JokeCache {
+ static async post(target, data) {
+ const body = await data;
+ if (body?.action === 'invalidate') {
+ this.invalidate(target);
+ return { status: 200, data: { message: 'invalidated' } };
+ }
+ }
+}
+```
+
+And the complete `schema.graphql`:
+
+```graphql
+type JokeCache @table(expiration: 60) {
+ id: ID @primaryKey
+ setup: String
+ punchline: String
+}
+```
+
+## What Comes Next
+
+This guide covered the passive caching pattern: Harper fetches from the source on demand and serves the cached copy until the TTL expires. The next guide, [Caching AI Generations with Harper](./caching-ai-generations), applies these same techniques to a real-world problem — caching expensive AI-generated content so that you don't pay for the same generation twice.
+
+## Additional Resources
+
+- [Database Schema](/reference/v5/database/schema) — `@table` directive and `expiration` argument
+- [Resource API](/reference/v5/resources/resource-api) — `sourcedFrom`, `invalidate`, static and instance methods
+- [REST Headers](/reference/v5/rest/headers) — ETag and `If-None-Match` conditional requests
+- [REST Overview](/reference/v5/rest/overview) — HTTP methods and URL structure
+- [react-ssr-example](https://github.com/HarperFast/react-ssr-example) — A full example using `sourcedFrom` to cache server-rendered HTML pages
diff --git a/learn/developers/semantic-caching-vector-indexing.mdx b/learn/developers/semantic-caching-vector-indexing.mdx
new file mode 100644
index 00000000..ab896a26
--- /dev/null
+++ b/learn/developers/semantic-caching-vector-indexing.mdx
@@ -0,0 +1,364 @@
+---
+title: Semantic Caching with Vector Indexing
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Caching LLM responses by exact prompt text catches almost nothing useful. "What shoes are good for hiking?" and "Which shoes work best for hiking?" are semantically identical but would be cache misses for each other with a string key. The result is that you pay for every generation even when a good answer already exists in the cache.
+
+Semantic caching solves this by storing a vector embedding alongside each cached response. When a new question arrives, Harper computes its embedding, searches the cache for a response that is _close enough_ in meaning, and returns it if the similarity distance is below a threshold. Only questions with no sufficiently similar answer in the cache hit the LLM.
+
+In this guide you will build a product assistant that answers customer questions using OpenAI. Semantically equivalent questions share a single cached answer — you only pay for a generation once, regardless of how the question is phrased.
+
+## What You Will Learn
+
+- How to store vector embeddings alongside text in a Harper table
+- How to define a vector index using `@indexed(type: "HNSW")`
+- How to query by cosine similarity using `table.search({ sort: { attribute, target } })`
+- How to implement a semantic cache lookup before calling the LLM
+- How to set a similarity/distance threshold to control cache hit quality
+
+## Prerequisites
+
+- Completed [Caching with Harper](./caching-with-harper)
+- An OpenAI API key (`OPENAI_API_KEY` environment variable)
+- Familiarity with the concept of embeddings (vectors of floats representing meaning)
+
+## The Architecture
+
+This guide uses a deliberate architecture that separates concerns cleanly:
+
+```
+Client → /QuestionAnswer/
+ ↓
+ Harper checks semantic cache
+  ↓ similar answer found?   ↓ no similar cached answer?
+ Return cached answer Call LLM → store answer + embedding
+```
+
+The cache is keyed by a content-addressed ID (hash of the normalized question text). On each request, Harper:
+
+1. Embeds the incoming question
+2. Searches the HNSW index for any cached answer with cosine similarity above the threshold
+3. If found, returns the cached answer immediately
+4. If not, generates a new answer, stores it with its embedding, and returns it
+
+Subsequent questions that are phrased differently but mean the same thing will land within the similarity threshold and return the cached answer — no LLM call needed.
+
+## Defining the Schema
+
+Open `schema.graphql`:
+
+```graphql
+type QuestionAnswer @table(expiration: 604800) @export {
+ id: ID @primaryKey # SHA-256 of normalized question text
+ question: String
+ answer: String
+ embedding: [Float] @indexed(type: "HNSW", distance: "cosine")
+ generatedAt: Long
+}
+```
+
+The key field is `embedding: [Float] @indexed(type: "HNSW", distance: "cosine")`. This creates an HNSW vector index on the embedding vectors, enabling approximate nearest-neighbor search by cosine similarity.
+
+`expiration: 604800` sets a 7-day TTL. LLM answers are not infinitely fresh — product details change, pricing shifts — so a week is a reasonable window. After 7 days the record is evicted and the next sufficiently similar question triggers a fresh generation.
+
+## Configuring the Application
+
+Open `config.yaml`:
+
+```yaml
+graphqlSchema:
+ files: 'schema.graphql'
+rest: true
+jsResource:
+ files: 'resources.js'
+```
+
+## Building the Semantic Cache Resource
+
+The core logic lives in `resources.js`. The `ProductAssistant` class overrides `get` to implement the semantic cache lookup and generation pipeline.
+
+```javascript
+// resources.js
+
+const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
+const SIMILARITY_THRESHOLD = 0.92; // cosine similarity; tune for your use case
+
+// --- Embedding helper ---
+
+async function embed(text) {
+ const response = await fetch('https://api.openai.com/v1/embeddings', {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'Authorization': `Bearer ${OPENAI_API_KEY}`,
+ },
+ body: JSON.stringify({
+ model: 'text-embedding-3-small',
+ input: text,
+ }),
+ });
+ const result = await response.json();
+ return result.data[0].embedding; // [Float] array
+}
+
+// --- Answer generation helper ---
+
+async function generateAnswer(question) {
+ const response = await fetch('https://api.openai.com/v1/chat/completions', {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'Authorization': `Bearer ${OPENAI_API_KEY}`,
+ },
+ body: JSON.stringify({
+ model: 'gpt-4o-mini',
+ messages: [
+ {
+ role: 'system',
+ content: 'You are a helpful product assistant. Answer customer questions clearly and concisely.',
+ },
+ { role: 'user', content: question },
+ ],
+ max_tokens: 200,
+ }),
+ });
+ const result = await response.json();
+ return result.choices[0].message.content.trim();
+}
+
+// --- Content-addressed ID ---
+
+async function questionId(text) {
+ const normalized = text.trim().toLowerCase();
+ const buf = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(normalized));
+ return Array.from(new Uint8Array(buf))
+ .map((b) => b.toString(16).padStart(2, '0'))
+ .join('')
+ .slice(0, 16);
+}
+
+// --- Semantic cache resource ---
+
+export class QuestionAnswer extends Resource {
+ static async get(target) {
+ const rawQuestion = target.get('q');
+ if (!rawQuestion) {
+ const error = new Error('Missing required query parameter: q');
+ error.statusCode = 400;
+ throw error;
+ }
+
+ const question = rawQuestion.trim();
+
+ // 1. Embed the incoming question
+ const queryEmbedding = await embed(question);
+
+ // 2. Search the HNSW index for the nearest cached answer
+ const results = tables.QuestionAnswer.search({
+ sort: { attribute: 'embedding', target: queryEmbedding },
+ limit: 1,
+ select: ['id', 'question', 'answer', 'generatedAt', 'embedding', '$distance'],
+ });
+
+ for await (const cached of results) {
+ const similarity = 1 - cached.$distance;
+ if (similarity >= SIMILARITY_THRESHOLD) {
+ // Cache hit — return the stored answer
+ return {
+ answer: cached.answer,
+ cachedQuestion: cached.question,
+ generatedAt: cached.generatedAt,
+ cacheHit: true,
+ similarity: Math.round(similarity * 1000) / 1000,
+ };
+ }
+ }
+
+ // 3. Cache miss — generate a new answer
+ const answer = await generateAnswer(question);
+ const id = await questionId(question);
+
+ await tables.QuestionAnswer.put({
+ id,
+ question,
+ answer,
+ embedding: queryEmbedding,
+ generatedAt: Date.now(),
+ });
+
+ return {
+ answer,
+ question,
+ generatedAt: Date.now(),
+ cacheHit: false,
+ };
+ }
+}
+```
+
+### The semantic cache flow
+
+The `get` handler implements the full pipeline in sequence:
+
+1. **Embed** the incoming question using OpenAI's `text-embedding-3-small` model.
+2. **Search** the HNSW index for the nearest stored embedding. HNSW returns approximate nearest neighbors extremely quickly — even with thousands of cached answers, the search takes microseconds.
+3. **Check similarity** against `SIMILARITY_THRESHOLD` (similarity is 1 - distance). A score of `1.0` is a perfect semantic match; `0.92` is a reasonable default for product Q&A (questions that mean the same thing typically score above `0.95`; genuinely different questions typically score below `0.85`).
+4. **Return the cached answer** if above threshold — no LLM call needed.
+5. **Generate and cache** a new answer if below threshold, storing the embedding for future lookups.
+
+:::note
+The similarity threshold is the most important tuning knob. Set it too high and you miss cache hits for slight rephrasing. Set it too low and you return irrelevant cached answers. Start at `0.92` and adjust based on your domain.
+:::
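To make the threshold concrete, here is a small standalone sketch of cosine similarity — the quantity that a cosine-distance index reports as `distance = 1 - similarity`. The HNSW index computes this for you; the sketch only shows what the `SIMILARITY_THRESHOLD` comparison is measuring:

```javascript
// Plain cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
	let dot = 0;
	let normA = 0;
	let normB = 0;
	for (let i = 0; i < a.length; i++) {
		dot += a[i] * b[i];
		normA += a[i] * a[i];
		normB += b[i] * b[i];
	}
	return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score ~1; unrelated (orthogonal) vectors score 0.
console.log(cosineSimilarity([1, 2, 3], [2, 4, 6]).toFixed(3)); // "1.000"
console.log(cosineSimilarity([1, 0], [0, 1]).toFixed(3)); // "0.000"
```

A cached answer is returned only when `1 - $distance` (this similarity) clears the threshold.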
+
+## Querying the Assistant
+
+With Harper running, ask a question:
+
+
+
+
+```bash
+curl -s 'http://localhost:9926/QuestionAnswer?q=What+shoes+are+best+for+hiking'
+```
+
+
+
+
+```typescript
+const response = await fetch(
+ 'http://localhost:9926/QuestionAnswer?' + new URLSearchParams({ q: 'What shoes are best for hiking' })
+);
+const data = await response.json();
+console.log(data);
+```
+
+
+
+
+First request — cache miss, LLM called:
+
+
+
+
+```json
+{
+	"answer": "For hiking, look for boots with ankle support, a grippy rubber sole...",
+	"question": "What shoes are best for hiking",
+	"generatedAt": 1712500000000,
+	"cacheHit": false
+}
+```
+
+
+
+
+```
+{
+	answer: 'For hiking, look for boots with ankle support, a grippy rubber sole...',
+	question: 'What shoes are best for hiking',
+	generatedAt: 1712500000000,
+	cacheHit: false
+}
+```
+
+
+
+
+Now ask a semantically equivalent question phrased differently:
+
+
+
+
+```bash
+curl -s 'http://localhost:9926/QuestionAnswer?q=Which+footwear+is+recommended+for+trail+hiking'
+```
+
+
+
+
+```typescript
+const response = await fetch(
+ 'http://localhost:9926/QuestionAnswer?' + new URLSearchParams({ q: 'Which footwear is recommended for trail hiking' })
+);
+const data = await response.json();
+console.log(data);
+```
+
+
+
+
+Cache hit — same answer returned instantly:
+
+
+
+
+```json
+{
+	"answer": "For hiking, look for boots with ankle support, a grippy rubber sole...",
+	"cachedQuestion": "What shoes are best for hiking",
+	"generatedAt": 1712500000000,
+	"cacheHit": true,
+	"similarity": 0.961
+}
+```
+
+
+
+
+```
+{
+	answer: 'For hiking, look for boots with ankle support, a grippy rubber sole...',
+	cachedQuestion: 'What shoes are best for hiking',
+	generatedAt: 1712500000000,
+	cacheHit: true,
+	similarity: 0.961
+}
+```
+
+
+
+
+The second question scored `0.961` cosine similarity — above the `0.92` threshold — so it returned the cached answer without calling the LLM.
+
+## Cache Expiration and Freshness
+
+The `QuestionAnswer` table has a 7-day TTL (`expiration: 604800`). After 7 days, a record is evicted and the next request for a similar question generates a fresh answer.
+
+You can bypass the TTL and force a fresh generation by passing `Cache-Control: no-cache`:
+
+
+
+
+```bash
+curl -s 'http://localhost:9926/QuestionAnswer?q=What+shoes+are+best+for+hiking' \
+ -H 'Cache-Control: no-cache'
+```
+
+
+
+
+```typescript
+const response = await fetch(
+ 'http://localhost:9926/QuestionAnswer?' + new URLSearchParams({ q: 'What shoes are best for hiking' }),
+ { headers: { 'Cache-Control': 'no-cache' } }
+);
+```
+
+
+
+
+## Going Further
+
+- **Domain-specific system prompts**: pass product catalog context in the system prompt so answers are grounded in your actual inventory.
+- **Fine-tuning the threshold**: log `similarity` values for hits and misses to find the ideal threshold for your query distribution.
+- **Multi-table semantic caches**: maintain separate caches for different question domains (support, sales, returns) with different system prompts and TTLs.
+- **Embedding model selection**: `text-embedding-3-small` is fast and cheap; `text-embedding-3-large` offers higher accuracy for ambiguous queries.
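The threshold-tuning bullet above can be sketched as a small helper. The in-memory `lookupLog` is a hypothetical stand-in for whatever logging or metrics pipeline you actually use:

```javascript
const lookupLog = [];

// Classify a lookup against the threshold and record its similarity so the
// hit/miss score distributions can be inspected later when tuning.
function classifyLookup(similarity, threshold = 0.92) {
	const outcome = similarity >= threshold ? 'hit' : 'miss';
	lookupLog.push({ similarity, outcome });
	return outcome;
}

console.log(classifyLookup(0.961)); // "hit"
console.log(classifyLookup(0.873)); // "miss"
```

If hits cluster just above the threshold and misses just below it, the threshold is in the right neighborhood; a wide empty gap between the two distributions suggests room to move it.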
+
+## Additional Resources
+
+- [Caching with Harper](./caching-with-harper) — foundational passive caching guide
+- [Database Schema](/reference/v5/database/schema) — `@indexed(type: "HNSW")` vector index configuration and parameters
+- [Resource API](/reference/v5/resources/resource-api) — `search`, `sort`, Query object
diff --git a/learn/developers/write-through-caching.mdx b/learn/developers/write-through-caching.mdx
new file mode 100644
index 00000000..9820849f
--- /dev/null
+++ b/learn/developers/write-through-caching.mdx
@@ -0,0 +1,374 @@
+---
+title: Write-Through Caching
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Read-through caches — where Harper fetches from the source on cache misses — only cover half the story. When clients write data, those writes still go directly to the origin. The cache only learns about the change the next time the TTL expires or someone calls `invalidate`.
+
+Write-through caching closes this loop. Writes to a Harper cache table are forwarded to the origin source before being committed locally. The cache stays consistent with the origin at all times: reads come from Harper (fast), writes go through Harper to the origin (transactional), and Harper stores the result.
+
+In this guide you will build a write-through cache for a product inventory API. Reads are served from Harper. Writes go through Harper to the upstream REST API, and the updated value is immediately available in the local cache — no invalidation step needed.
+
+## What You Will Learn
+
+- How to implement `put` and `delete` on a source Resource to enable write-through
+- How write-through works transactionally (two-phase commit)
+- How to load existing record data inside a write method using `ensureLoaded`
+- When write-through caching is the right pattern
+
+## Prerequisites
+
+- Completed [Caching with Harper](./caching-with-harper)
+
+## When to Use Write-Through Caching
+
+Write-through is a good fit when:
+
+- Clients write data that must be durably stored in an external system
+- You want reads to always come from Harper (fast) without a separate invalidation step
+- The origin supports idempotent `PUT`/`DELETE` operations
+- Consistency between the cache and origin is more important than write latency
+
+Write-through adds latency to writes because Harper waits for the origin to confirm before committing. If write throughput is a priority and brief inconsistency is acceptable, consider using invalidation-based caching instead.
+
+## Defining the Schema
+
+```graphql
+type InventoryItem @table(expiration: 300) @export {
+ id: ID @primaryKey
+ sku: String @indexed
+ name: String
+ quantity: Int
+ warehouseId: String @indexed
+}
+```
+
+A 5-minute TTL (`expiration: 300`) ensures items don't stay stale forever if something goes wrong with the write-through path. In normal operation, writes keep the cache current and the TTL is rarely relevant.
+
+## Implementing the Source Resource
+
+The `inventoryAPI` implements `get`, `put`, and `delete` methods. When write-through is active, Harper calls these methods in the appropriate order.
+
+```javascript
+// resources.js
+
+const INVENTORY_API = process.env.INVENTORY_API ?? 'https://inventory.example.com';
+const API_KEY = process.env.INVENTORY_API_KEY;
+
+const inventoryAPI = {
+ get headers() {
+ return {
+ 'Content-Type': 'application/json',
+ 'Authorization': `Bearer ${API_KEY}`,
+ };
+ },
+
+ async get(id) {
+ const response = await fetch(`${INVENTORY_API}/items/${id}`, {
+ headers: this.headers,
+ });
+ if (response.status === 404) {
+ const error = new Error('Item not found');
+ error.statusCode = 404;
+ throw error;
+ }
+ return response.json();
+ },
+
+ async put(id, data) {
+ const body = await data;
+ const response = await fetch(`${INVENTORY_API}/items/${id}`, {
+ method: 'PUT',
+ headers: this.headers,
+ body: JSON.stringify(body),
+ });
+ if (!response.ok) {
+ const error = new Error(`Upstream PUT failed: ${response.status}`);
+ error.statusCode = response.status;
+ throw error;
+ }
+ // Return the confirmed record from the origin (may include server-generated fields)
+ return response.json();
+ },
+
+ async delete(id) {
+ const response = await fetch(`${INVENTORY_API}/items/${id}`, {
+ method: 'DELETE',
+ headers: this.headers,
+ });
+ if (!response.ok) {
+ const error = new Error(`Upstream DELETE failed: ${response.status}`);
+ error.statusCode = response.status;
+ throw error;
+ }
+ },
+};
+
+tables.InventoryItem.sourcedFrom(inventoryAPI);
+```
+
+With `put` and `delete` implemented on the source, Harper's write-through behavior activates automatically:
+
+- A `PUT /InventoryItem/item-001` request calls `inventoryAPI.put()` first
+- If the upstream call succeeds, Harper commits the result to the local cache
+- If the upstream call throws, Harper does not write to the cache — the local and remote data remain consistent
+- The updated record is immediately available for reads from Harper, no invalidation needed
+
+## How the Two-Phase Write Works
+
+Write-through uses a two-phase commit to keep the cache and origin in sync:
+
+```
+Client → PUT /InventoryItem/item-001
+ ↓
+ Harper calls inventoryAPI.put()
+ ↓ success ↓ error
+ Harper commits to local Harper does not write;
+ cache; 200 returned error propagated to client
+```
+
+Harper waits for the upstream `put()` to resolve before writing locally. This means write latency includes the round-trip to the origin, but reads from the local cache are always authoritative.
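The sequencing can be sketched in plain JavaScript. The `writeThrough` helper and its arguments are illustrative names, not Harper internals — Harper performs this ordering for you when the source defines `put`:

```javascript
// Minimal sketch of the two-phase ordering: origin first, local commit second.
async function writeThrough(source, cacheTable, id, data) {
	// Phase 1: confirm the write with the origin. A throw here aborts the
	// request and nothing is committed locally, so cache and origin stay in sync.
	const confirmed = await source.put(id, data);
	// Phase 2: only after origin success, commit the confirmed record locally.
	await cacheTable.put({ id, ...confirmed });
	return confirmed;
}
```

The key property is that the local `put` is unreachable when the origin call throws: the client sees the origin's error and the cache keeps its previous value.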
+
+## Loading Existing Data in Write Methods
+
+By default, `put` and `delete` methods do not automatically load the existing cached record. If you need the current record value inside a write method — for example, to validate a field transition or merge partial data — call `get()` first:
+
+```javascript
+const inventoryAPI = {
+	async put(id, data) {
+		// Load the existing record from cache or origin before writing
+		const existing = await this.get(id);
+
+		const incoming = await data;
+
+		// Example: prevent setting quantity below zero
+		if (incoming.quantity !== undefined && incoming.quantity < 0) {
+			const error = new Error('Quantity cannot be negative');
+			error.statusCode = 400;
+			throw error;
+		}
+
+		// Example: merge a partial update over the existing record
+		const merged = { ...existing, ...incoming };
+
+		const response = await fetch(`${INVENTORY_API}/items/${id}`, {
+			method: 'PUT',
+			headers: this.headers,
+			body: JSON.stringify(merged),
+		});
+		return response.json();
+	},
+	// get, delete, and the headers getter as defined above
+};
+```
+
+## Configuring the Application
+
+Open `config.yaml`:
+
+```yaml
+graphqlSchema:
+ files: 'schema.graphql'
+rest: true
+jsResource:
+ files: 'resources.js'
+```
+
+## Making Writes
+
+With Harper running, write an inventory item:
+
+
+
+
+```bash
+curl -i -X PUT 'http://localhost:9926/InventoryItem/item-001' \
+ -H 'Content-Type: application/json' \
+ -d '{"sku":"WB-100","name":"Titanium Water Bottle","quantity":150,"warehouseId":"WH-A"}'
+```
+
+
+
+
+```typescript
+const response = await fetch('http://localhost:9926/InventoryItem/item-001', {
+ method: 'PUT',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({
+ sku: 'WB-100',
+ name: 'Titanium Water Bottle',
+ quantity: 150,
+ warehouseId: 'WH-A',
+ }),
+});
+console.log(response.status);
+```
+
+
+
+
+
+
+
+```
+HTTP/1.1 200 OK
+```
+
+
+
+
+```
+200
+```
+
+
+
+
+The write went to `inventoryAPI.put()` first, was confirmed by the origin, and is now cached in Harper. A subsequent read comes directly from Harper without an upstream call:
+
+
+
+
+```bash
+curl -s 'http://localhost:9926/InventoryItem/item-001'
+```
+
+
+
+
+```typescript
+const item = await fetch('http://localhost:9926/InventoryItem/item-001').then((r) => r.json());
+console.log(item.quantity); // 150
+```
+
+
+
+
+
+
+
+```json
+{
+ "id": "item-001",
+ "sku": "WB-100",
+ "name": "Titanium Water Bottle",
+ "quantity": 150,
+ "warehouseId": "WH-A"
+}
+```
+
+
+
+
+```
+150
+```
+
+
+
+
+## Deleting a Record
+
+Deletes also flow through the source:
+
+
+
+
+```bash
+curl -i -X DELETE 'http://localhost:9926/InventoryItem/item-001'
+```
+
+
+
+
+```typescript
+await fetch('http://localhost:9926/InventoryItem/item-001', { method: 'DELETE' });
+```
+
+
+
+
+
+
+
+```
+HTTP/1.1 204 No Content
+```
+
+
+
+
+```
+204
+```
+
+
+
+
+Harper calls `inventoryAPI.delete()`, waits for confirmation, then removes the record from the local cache. A subsequent read triggers a `get()` call to the origin (or returns 404 if it's gone).
+
+## Putting It All Together
+
+Complete `resources.js`:
+
+```javascript
+// resources.js
+
+const INVENTORY_API = process.env.INVENTORY_API ?? 'https://inventory.example.com';
+const API_KEY = process.env.INVENTORY_API_KEY;
+
+const inventoryAPI = {
+	get headers() {
+		return {
+			'Content-Type': 'application/json',
+			'Authorization': `Bearer ${API_KEY}`,
+		};
+	},
+
+	async get(id) {
+		const response = await fetch(`${INVENTORY_API}/items/${id}`, {
+			headers: this.headers,
+		});
+		if (response.status === 404) {
+			const error = new Error('Item not found');
+			error.statusCode = 404;
+			throw error;
+		}
+		return response.json();
+	},
+
+	async put(id, data) {
+		const response = await fetch(`${INVENTORY_API}/items/${id}`, {
+			method: 'PUT',
+			headers: this.headers,
+			body: JSON.stringify(await data),
+		});
+		if (!response.ok) {
+			const error = new Error(`Upstream PUT failed: ${response.status}`);
+			error.statusCode = response.status;
+			throw error;
+		}
+		return response.json();
+	},
+
+	async delete(id) {
+		const response = await fetch(`${INVENTORY_API}/items/${id}`, {
+			method: 'DELETE',
+			headers: this.headers,
+		});
+		if (!response.ok) {
+			const error = new Error(`Upstream DELETE failed: ${response.status}`);
+			error.statusCode = response.status;
+			throw error;
+		}
+	},
+};
+
+tables.InventoryItem.sourcedFrom(inventoryAPI);
+```
+
+## Additional Resources
+
+- [Caching with Harper](./caching-with-harper) — foundational passive caching guide
+- [Resource API](/reference/v5/resources/resource-api) — `sourcedFrom`, `put`, `delete`, `ensureLoaded`
+- [Database Schema](/reference/v5/database/schema) — `@table(expiration:)` directive reference
diff --git a/package-lock.json b/package-lock.json
index 4efc239d..7872df73 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -253,6 +253,7 @@
"integrity": "sha512-/IYpF10BpthGZEJQZMhMqV4AqWr5avcWfZm/SIKK1RvUDmzGqLoW/+xeJVX9C8ZnNkIC8hivbIQFaNaRw0BFZQ==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@algolia/client-common": "5.39.0",
"@algolia/requester-browser-xhr": "5.39.0",
@@ -412,6 +413,7 @@
"integrity": "sha512-H3mcG6ZDLTlYfaSNi0iOKkigqMFvkTKlGUYlD8GW7nNOYRrevuA46iTypPyv+06V3fEmvvazfntkBU34L0azAw==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@babel/code-frame": "^7.28.6",
"@babel/generator": "^7.28.6",
@@ -2364,6 +2366,7 @@
}
],
"license": "MIT",
+ "peer": true,
"engines": {
"node": ">=18"
},
@@ -2387,6 +2390,7 @@
}
],
"license": "MIT",
+ "peer": true,
"engines": {
"node": ">=18"
}
@@ -2501,6 +2505,7 @@
"integrity": "sha512-orRsuYpJVw8LdAwqqLykBj9ecS5/cRHlI5+nvTo8LcCKmzDmqVORXtOIYEEQuL9D4BxtA1lm5isAqzQZCoQ6Eg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"cssesc": "^3.0.0",
"util-deprecate": "^1.0.2"
@@ -2938,6 +2943,7 @@
"integrity": "sha512-orRsuYpJVw8LdAwqqLykBj9ecS5/cRHlI5+nvTo8LcCKmzDmqVORXtOIYEEQuL9D4BxtA1lm5isAqzQZCoQ6Eg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"cssesc": "^3.0.0",
"util-deprecate": "^1.0.2"
@@ -3963,6 +3969,7 @@
"integrity": "sha512-C5wZsGuKTY8jEYsqdxhhFOe1ZDjH0uIYJ9T/jebHwkyxqnr4wW0jTkB72OMqNjsoQRcb0JN3PcSeTwFlVgzCZg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@docusaurus/core": "3.9.2",
"@docusaurus/logger": "3.9.2",
@@ -4259,6 +4266,7 @@
"integrity": "sha512-6c4DAbR6n6nPbnZhY2V3tzpnKnGL+6aOsLvFL26VRqhlczli9eWG0VDUNoCQEPnGwDMhPS42UhSAnz5pThm5Ag==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@docusaurus/mdx-loader": "3.9.2",
"@docusaurus/module-type-aliases": "3.9.2",
@@ -4670,7 +4678,6 @@
"integrity": "sha512-ENIdc4iLu0d93HeYirvKmrzshzofPw6VkZRKQGe9Nv46ZnWUzcF1xV01dcvEg/1wXUR61OmmlSfyeyO7EvjLxQ==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"dependencies": {
"@eslint/object-schema": "^2.1.6",
"debug": "^4.3.1",
@@ -4686,7 +4693,6 @@
"integrity": "sha512-xR93k9WhrDYpXHORXpxVL5oHj3Era7wo6k/Wd8/IsQNnZUTzkGS29lyn3nAT05v6ltUuTFVCCYDEGfy2Or/sPA==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"engines": {
"node": "^18.18.0 || ^20.9.0 || >=21.1.0"
}
@@ -4697,7 +4703,6 @@
"integrity": "sha512-78Md3/Rrxh83gCxoUc0EiciuOHsIITzLy53m3d9UyiW8y9Dj2D29FeETqyKA+BRK76tnTp6RXWb3pCay8Oyomg==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"dependencies": {
"@types/json-schema": "^7.0.15"
},
@@ -4711,7 +4716,6 @@
"integrity": "sha512-gtF186CXhIl1p4pJNGZw8Yc6RlshoePRvE0X91oPGb3vZ8pM3qOS9W9NGPat9LziaBV7XrJWGylNQXkGcnM3IQ==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"ajv": "^6.12.4",
"debug": "^4.3.2",
@@ -4736,7 +4740,6 @@
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
@@ -4754,7 +4757,6 @@
"integrity": "sha512-oahGvuMGQlPw/ivIYBjVSrWAfWLBeku5tpPE2fOPLi+WHffIWbuh2tCjhyQhTBPMf5E9jDEH4FOmTYgYwbKwtQ==",
"dev": true,
"license": "MIT",
- "peer": true,
"engines": {
"node": ">=18"
},
@@ -4767,8 +4769,7 @@
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
"integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==",
"dev": true,
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/@eslint/js": {
"version": "9.36.0",
@@ -4776,7 +4777,6 @@
"integrity": "sha512-uhCbYtYynH30iZErszX78U+nR3pJU3RHGQ57NXy5QupD4SBVwDeU8TNBy+MjMngc1UyIW9noKqsRqfjQTBU2dw==",
"dev": true,
"license": "MIT",
- "peer": true,
"engines": {
"node": "^18.18.0 || ^20.9.0 || >=21.1.0"
},
@@ -4790,7 +4790,6 @@
"integrity": "sha512-RBMg5FRL0I0gs51M/guSAj5/e14VQ4tpZnQNWwuDT66P14I43ItmPfIZRhO9fUVIPOAQXU47atlywZ/czoqFPA==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"engines": {
"node": "^18.18.0 || ^20.9.0 || >=21.1.0"
}
@@ -4801,7 +4800,6 @@
"integrity": "sha512-Z5kJ+wU3oA7MMIqVR9tyZRtjYPr4OC004Q4Rw7pgOKUOKkJfZ3O24nz3WYfGRpMDNmcOi3TwQOmgm7B7Tpii0w==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"dependencies": {
"@eslint/core": "^0.15.2",
"levn": "^0.4.1"
@@ -4853,7 +4851,6 @@
"integrity": "sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"engines": {
"node": ">=18.18.0"
}
@@ -4864,7 +4861,6 @@
"integrity": "sha512-/zUx+yOsIrG4Y43Eh2peDeKCxlRt/gET6aHfaKpuq267qXdYDFViVHfMaLyygZOnl0kGWxFIgsBy8QFuTLUXEQ==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"dependencies": {
"@humanfs/core": "^0.19.1",
"@humanwhocodes/retry": "^0.4.0"
@@ -4879,7 +4875,6 @@
"integrity": "sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"engines": {
"node": ">=12.22"
},
@@ -4894,7 +4889,6 @@
"integrity": "sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==",
"dev": true,
"license": "Apache-2.0",
- "peer": true,
"engines": {
"node": ">=18.18"
},
@@ -5188,6 +5182,7 @@
"integrity": "sha512-f++rKLQgUVYDAtECQ6fn/is15GkEH9+nZPM3MS0RcxVqoTfawHvDlSCH7JbMhAM6uJ32v3eXLvLmLvjGu7PTQw==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@types/mdx": "^2.0.0"
},
@@ -5836,6 +5831,7 @@
"integrity": "sha512-8QqtOQT5ACVlmsvKOJNEaWmRPmcojMOzCz4Hs2BGG/toAp/K38LcsMRyLp349glq5AzJbCEeimEoxaX6v/fLrA==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@babel/core": "^7.21.3",
"@svgr/babel-preset": "8.1.0",
@@ -6520,6 +6516,7 @@
"integrity": "sha512-DRh5K+ka5eJic8CjH7td8QpYEV6Zo10gfRkjHCO3weqZHWDtAaSTFtl4+VMqOJ4N5jcuhZ9/l+yy8rVgw7BQeQ==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"undici-types": "~7.10.0"
}
@@ -6561,6 +6558,7 @@
"integrity": "sha512-EhBeSYX0Y6ye8pNebpKrwFJq7BoQ8J5SO6NlvNwwHjSj6adXJViPQrKlsyPw7hLBLvckEMO1yxeGdR82YBBlDg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"csstype": "^3.0.2"
}
@@ -6748,6 +6746,7 @@
"integrity": "sha512-VGMpFQGUQWYT9LfnPcX8ouFojyrZ/2w3K5BucvxL/spdNehccKhB4jUyB1yBCXpr2XFm0jkECxgrpXBW2ipoAw==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@typescript-eslint/scope-manager": "8.44.0",
"@typescript-eslint/types": "8.44.0",
@@ -7195,6 +7194,7 @@
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -7287,6 +7287,7 @@
"integrity": "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"fast-deep-equal": "^3.1.3",
"fast-uri": "^3.0.1",
@@ -7335,6 +7336,7 @@
"integrity": "sha512-DzTfhUxzg9QBNGzU/0kZkxEV72TeA4MmPJ7RVfLnQwHNhhliPo7ynglEWJS791rNlLFoTyrKvkapwr/P3EXV9A==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@algolia/abtesting": "1.5.0",
"@algolia/client-abtesting": "5.39.0",
@@ -7848,6 +7850,7 @@
}
],
"license": "MIT",
+ "peer": true,
"dependencies": {
"baseline-browser-mapping": "^2.9.0",
"caniuse-lite": "^1.0.30001759",
@@ -8170,6 +8173,7 @@
"integrity": "sha512-ci2iJH6LeIkvP9eJW6gpueU8cnZhv85ELY8w8WiFtNjMHA5ad6pQLaJo9mEly/9qUyCpvqX8/POVUTf18/HFdw==",
"dev": true,
"license": "Apache-2.0",
+ "peer": true,
"dependencies": {
"@chevrotain/cst-dts-gen": "11.0.3",
"@chevrotain/gast": "11.0.3",
@@ -8934,6 +8938,7 @@
"integrity": "sha512-orRsuYpJVw8LdAwqqLykBj9ecS5/cRHlI5+nvTo8LcCKmzDmqVORXtOIYEEQuL9D4BxtA1lm5isAqzQZCoQ6Eg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"cssesc": "^3.0.0",
"util-deprecate": "^1.0.2"
@@ -9270,6 +9275,7 @@
"integrity": "sha512-iJc4TwyANnOGR1OmWhsS9ayRS3s+XQ185FmuHObThD+5AeJCakAAbWv8KimMTt08xCCLNgneQwFp+JRJOr9qGQ==",
"dev": true,
"license": "MIT",
+ "peer": true,
"engines": {
"node": ">=0.10"
}
@@ -9714,6 +9720,7 @@
"integrity": "sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ==",
"dev": true,
"license": "ISC",
+ "peer": true,
"engines": {
"node": ">=12"
}
@@ -9905,8 +9912,7 @@
"resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz",
"integrity": "sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==",
"dev": true,
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/deepmerge": {
"version": "4.3.1",
@@ -10550,6 +10556,7 @@
"integrity": "sha512-82GZUjRS0p/jganf6q1rEO25VSoHH0hKPCTrgillPjdI/3bgBhAE1QzHrHTizjpRvy6pGAvKjDJtk2pF9NDq8w==",
"dev": true,
"license": "MIT",
+ "peer": true,
"bin": {
"eslint-config-prettier": "bin/cli.js"
},
@@ -10624,7 +10631,6 @@
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
@@ -10642,7 +10648,6 @@
"integrity": "sha512-sNXOfKCn74rt8RICKMvJS7XKV/Xk9kA7DyJr8mJik3S7Cwgy3qlkkmyS2uQB3jiJg6VNdZd/pDBJu0nvG2NlTg==",
"dev": true,
"license": "BSD-2-Clause",
- "peer": true,
"dependencies": {
"esrecurse": "^4.3.0",
"estraverse": "^5.2.0"
@@ -10660,7 +10665,6 @@
"integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==",
"dev": true,
"license": "BSD-2-Clause",
- "peer": true,
"engines": {
"node": ">=4.0"
}
@@ -10671,7 +10675,6 @@
"integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==",
"dev": true,
"license": "ISC",
- "peer": true,
"dependencies": {
"is-glob": "^4.0.3"
},
@@ -10684,8 +10687,7 @@
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
"integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==",
"dev": true,
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/espree": {
"version": "10.4.0",
@@ -10693,7 +10695,6 @@
"integrity": "sha512-j6PAQ2uUr79PZhBjP5C5fhl8e39FmRnOjsD5lGnWrFU8i2G776tBK7+nP8KuQUTTyAZUwfQqXAgrVH5MbH9CYQ==",
"dev": true,
"license": "BSD-2-Clause",
- "peer": true,
"dependencies": {
"acorn": "^8.15.0",
"acorn-jsx": "^5.3.2",
@@ -10726,7 +10727,6 @@
"integrity": "sha512-ca9pw9fomFcKPvFLXhBKUK90ZvGibiGOvRJNbjljY7s7uq/5YO4BOzcYtJqExdx99rF6aAcnRxHmcUHcz6sQsg==",
"dev": true,
"license": "BSD-3-Clause",
- "peer": true,
"dependencies": {
"estraverse": "^5.1.0"
},
@@ -10740,7 +10740,6 @@
"integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==",
"dev": true,
"license": "BSD-2-Clause",
- "peer": true,
"engines": {
"node": ">=4.0"
}
@@ -11150,8 +11149,7 @@
"resolved": "https://registry.npmjs.org/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz",
"integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==",
"dev": true,
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/fast-uri": {
"version": "3.0.6",
@@ -11252,7 +11250,6 @@
"integrity": "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"flat-cache": "^4.0.0"
},
@@ -11287,6 +11284,7 @@
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
@@ -11406,7 +11404,6 @@
"integrity": "sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"locate-path": "^6.0.0",
"path-exists": "^4.0.0"
@@ -11434,7 +11431,6 @@
"integrity": "sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"flatted": "^3.2.9",
"keyv": "^4.5.4"
@@ -11448,8 +11444,7 @@
"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz",
"integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==",
"dev": true,
- "license": "ISC",
- "peer": true
+ "license": "ISC"
},
"node_modules/follow-redirects": {
"version": "1.15.11",
@@ -13142,8 +13137,7 @@
"resolved": "https://registry.npmjs.org/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz",
"integrity": "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==",
"dev": true,
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/json5": {
"version": "2.2.3",
@@ -13318,7 +13312,6 @@
"integrity": "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"prelude-ls": "^1.2.1",
"type-check": "~0.4.0"
@@ -13396,7 +13389,6 @@
"integrity": "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"p-locate": "^5.0.0"
},
@@ -13440,8 +13432,7 @@
"resolved": "https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz",
"integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==",
"dev": true,
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/lodash.uniq": {
"version": "4.5.0",
@@ -16350,6 +16341,7 @@
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
@@ -16531,7 +16523,6 @@
"integrity": "sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"deep-is": "^0.1.3",
"fast-levenshtein": "^2.0.6",
@@ -16570,7 +16561,6 @@
"integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"yocto-queue": "^0.1.0"
},
@@ -16587,7 +16577,6 @@
"integrity": "sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"p-limit": "^3.0.2"
},
@@ -16852,7 +16841,6 @@
"integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==",
"dev": true,
"license": "MIT",
- "peer": true,
"engines": {
"node": ">=8"
}
@@ -17082,6 +17070,7 @@
}
],
"license": "MIT",
+ "peer": true,
"dependencies": {
"nanoid": "^3.3.11",
"picocolors": "^1.1.1",
@@ -18033,6 +18022,7 @@
"integrity": "sha512-orRsuYpJVw8LdAwqqLykBj9ecS5/cRHlI5+nvTo8LcCKmzDmqVORXtOIYEEQuL9D4BxtA1lm5isAqzQZCoQ6Eg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"cssesc": "^3.0.0",
"util-deprecate": "^1.0.2"
@@ -18609,7 +18599,6 @@
"integrity": "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==",
"dev": true,
"license": "MIT",
- "peer": true,
"engines": {
"node": ">= 0.8.0"
}
@@ -18620,6 +18609,7 @@
"integrity": "sha512-UOnG6LftzbdaHZcKoPFtOcCKztrQ57WkHDeRD9t/PTQtmT0NHSeWWepj6pS0z/N7+08BHFDQVUrfmfMRcZwbMg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"bin": {
"prettier": "bin/prettier.cjs"
},
@@ -18954,6 +18944,7 @@
"integrity": "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ==",
"dev": true,
"license": "MIT",
+ "peer": true,
"engines": {
"node": ">=0.10.0"
}
@@ -18964,6 +18955,7 @@
"integrity": "sha512-AXJdLo8kgMbimY95O2aKQqsz2iWi9jMgKJhRBAxECE4IFxfcazB2LmzloIoibJI3C12IlY20+KFaLv+71bUJeQ==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"scheduler": "^0.27.0"
},
@@ -19024,6 +19016,7 @@
"integrity": "sha512-YMMxTUQV/QFSnbgrP3tjDzLHRg7vsbMn8e9HAa8o/1iXoiomo48b7sk/kkmWEuWNDPJVlKSJRB6Y2fHqdJk+SQ==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@types/react": "*"
},
@@ -19054,6 +19047,7 @@
"integrity": "sha512-Ys9K+ppnJah3QuaRiLxk+jDWOR1MekYQrlytiXxC1RyfbdsZkS5pvKAzCCr031xHixZwpnsYNT5xysdFHQaYsA==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@babel/runtime": "^7.12.13",
"history": "^4.9.0",
@@ -21061,7 +21055,8 @@
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"dev": true,
- "license": "0BSD"
+ "license": "0BSD",
+ "peer": true
},
"node_modules/type-check": {
"version": "0.4.0",
@@ -21069,7 +21064,6 @@
"integrity": "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==",
"dev": true,
"license": "MIT",
- "peer": true,
"dependencies": {
"prelude-ls": "^1.2.1"
},
@@ -21143,6 +21137,7 @@
"integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
"dev": true,
"license": "Apache-2.0",
+ "peer": true,
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"
@@ -21549,6 +21544,7 @@
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
@@ -21835,6 +21831,7 @@
"integrity": "sha512-rHY3vHXRbkSfhG6fH8zYQdth/BtDgXXuR2pHF++1f/EBkI8zkgM5XWfsC3BvOoW9pr1CvZ1qQCxhCEsbNgT50g==",
"dev": true,
"license": "MIT",
+ "peer": true,
"dependencies": {
"@types/eslint-scope": "^3.7.7",
"@types/estree": "^1.0.8",
@@ -22308,7 +22305,6 @@
"integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==",
"dev": true,
"license": "MIT",
- "peer": true,
"engines": {
"node": ">=0.10.0"
}
@@ -22479,7 +22475,6 @@
"integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==",
"dev": true,
"license": "MIT",
- "peer": true,
"engines": {
"node": ">=10"
},
@@ -22493,6 +22488,7 @@
"integrity": "sha512-WPsqwxITS2tzx1bzhIKsEs19ABD5vmCVa4xBo2tq/SrV4RNZtfws1EnCWQXM6yh8bD08a1idvkB5MZSBiZsjwg==",
"dev": true,
"license": "MIT",
+ "peer": true,
"funding": {
"url": "https://github.com/sponsors/colinhacks"
}
diff --git a/reference/database/schema.md b/reference/database/schema.md
index 08921f95..d1c75333 100644
--- a/reference/database/schema.md
+++ b/reference/database/schema.md
@@ -73,12 +73,58 @@ type MyTable @table {
Optional arguments:
-| Argument | Type | Default | Description |
-| ------------ | --------- | -------------- | ----------------------------------------------------------------------- |
-| `table` | `String` | type name | Override the table name |
-| `database` | `String` | `"data"` | Database to place the table in |
-| `expiration` | `Int` | — | Auto-expire records after this many seconds (useful for caching tables) |
-| `audit` | `Boolean` | config default | Enable audit log for this table |
+| Argument | Type | Default | Description |
+| -------------- | --------- | ----------------------------- | --------------------------------------------------------------------------- |
+| `table` | `String` | type name | Override the table name |
+| `database` | `String` | `"data"` | Database to place the table in |
+| `expiration` | `Int` | — | Seconds until a record goes stale (useful for caching tables) |
+| `eviction` | `Int` | `0` | Additional seconds after `expiration` before a record is physically removed |
+| `scanInterval` | `Int` | `(expiration + eviction) / 4` | Seconds between eviction scans |
+| `audit` | `Boolean` | config default | Enable audit log for this table |
+
+**`expiration`, `eviction`, and `scanInterval`**
+
+These three arguments work together to control the full lifecycle of a cached record:
+
+- **`expiration`** — When elapsed, a record is considered _stale_. The next request for a stale record triggers a fetch from the source. The record may still be served while revalidation is in progress.
+- **`eviction`** — Additional time after `expiration` before the record is physically removed from the table. Setting `eviction > 0` lets you serve the stale record while revalidation happens and controls how long after expiration the data is kept on disk.
+- **`scanInterval`** — How often Harper scans the table for records to evict. Defaults to one quarter of `expiration + eviction`.
+
+If you provide only `expiration`, the other two timings follow from it: eviction happens at expiration (`eviction: 0`) and scans run every quarter of that window. To tune them independently:
+
+```graphql
+# Expire after 5 minutes, evict after 1 hour, scan every 10 minutes
+type WeatherCache @table(expiration: 300, eviction: 3300, scanInterval: 600) {
+ id: ID @primaryKey
+ temperature: Float
+}
+```
+
+#### How `scanInterval` Determines the Eviction Cycle
+
+`scanInterval` determines fixed clock-aligned times when eviction runs. Harper divides the clock into evenly spaced anchors based on the interval, calculated in the server's local timezone. As a result:
+
+- The server's startup time does not affect when eviction runs.
+- Eviction timings are deterministic and timezone-aware.
+- For any given configuration, the eviction schedule is the same across restarts and across servers in the same local timezone.
+
+**Example: 1-hour expiration** — default `scanInterval` = 15 minutes (one quarter of `expiration`). Eviction schedule:
+
+```
+00:00, 00:15, 00:30, 00:45, 01:00, ...
+```
+
+If the server starts at 12:05, the first eviction runs at 12:15 — not 12:20. The schedule is clock-aligned, not startup-aligned.
+
+**Example: 1-day expiration** — default `scanInterval` = 6 hours. Eviction schedule:
+
+```
+00:00, 06:00, 12:00, 18:00, ...
+```
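+
+The anchor calculation can be sketched in a few lines. This is an illustration of the scheduling rule above, not Harper's actual implementation; it assumes anchors are measured from local midnight:
+
+```javascript
+// Sketch: next clock-aligned eviction time for a scanInterval in seconds,
+// anchored to local midnight (assumption made for illustration)
+function nextEvictionTime(now, scanIntervalSeconds) {
+	const midnight = new Date(now);
+	midnight.setHours(0, 0, 0, 0);
+	const intervalMs = scanIntervalSeconds * 1000;
+	const elapsedMs = now.getTime() - midnight.getTime();
+	// Round up to the next anchor; a start at 12:05 with a 15-minute
+	// interval lands on 12:15, not 12:20
+	const nextMs = Math.ceil(elapsedMs / intervalMs) * intervalMs;
+	return new Date(midnight.getTime() + nextMs);
+}
+```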
+
+#### Eviction with Indexing
+
+Eviction removes non-indexed record data, but it does _not_ remove a record from its secondary indexes. If an evicted record matches a search query, Harper fetches the full record from the source on demand to satisfy the query. This means indexes remain fully functional even when most of the data has been evicted.
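+
+For example, a caching table with an indexed field (the schema below is hypothetical) stays searchable by that field even after eviction has removed the record bodies:
+
+```graphql
+# category remains queryable after eviction; matching records are
+# re-fetched from the source on demand
+type ProductCache @table(expiration: 3600, eviction: 82800) {
+	id: ID @primaryKey
+	category: String @indexed
+	name: String
+}
+```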
**Examples:**
diff --git a/reference/resources/resource-api.md b/reference/resources/resource-api.md
index 03c442ea..5990d8d3 100644
--- a/reference/resources/resource-api.md
+++ b/reference/resources/resource-api.md
@@ -320,13 +320,184 @@ Returns the number of records in the table. By default returns an approximate (f
### `sourcedFrom(Resource, options?)`
-Configure a table to use another resource as its data source (caching behavior). When a record is not found locally, it is fetched from the source and cached. Writes are delegated to the source.
+Configure a table to use another resource as its data source. When a record is not found locally or has expired, it is fetched from the source and cached. Writes to the table are delegated to the source if it implements `put`, `patch`, or `delete`.
-Options:
+```javascript
+tables.MyCache.sourcedFrom(MyDataSource);
+```
+
+Options (all optional; these are usually set as `@table` directive arguments in the schema instead):
+
+| Option | Description |
+| -------------- | ------------------------------------------------ |
+| `expiration` | Seconds until a record goes stale |
+| `eviction` | Seconds after expiration before physical removal |
+| `scanInterval` | Seconds between eviction scans |
+
+Harper automatically serializes concurrent requests for the same missing or stale record — all waiting requests share a single upstream fetch, preventing cache stampedes.
+
+#### Source `get` — controlling timestamp and expiration
+
+Inside a source `get()` method, the context (`this.getContext()`) exposes caching-specific properties:
+
+```javascript
+class MySource extends Resource {
+ async get() {
+ const context = this.getContext();
+
+ // Pass If-Modified-Since to origin using the existing cached version
+ const headers = new Headers();
+ if (context.replacingVersion) {
+ headers.set('If-Modified-Since', new Date(context.replacingVersion).toUTCString());
+ }
+
+ const response = await fetch(`https://api.example.com/${this.getId()}`, { headers });
+
+ // Propagate the origin's Last-Modified timestamp to Harper's ETag
+ context.lastModified = response.headers.get('Last-Modified');
+
+ // Honor origin's Cache-Control max-age for per-record TTL
+ const maxAge = response.headers.get('Cache-Control')?.match(/max-age=(\d+)/)?.[1];
+ if (maxAge) {
+ context.expiresAt = Date.now() + Number(maxAge) * 1000;
+ }
+
+ // Return origin's 304 as a cache revalidation (no re-download)
+ if (response.status === 304) return context.replacingRecord;
+
+ return response.json();
+ }
+}
+```
+
+Context properties available inside a source `get()`:
+
+| Property | Description |
+| ------------------ | --------------------------------------------------------------------------------------------- |
+| `replacingVersion` | Timestamp of the currently cached record being replaced (useful for `If-Modified-Since`) |
+| `replacingRecord` | The currently cached record value (return this on a 304 to skip a re-download) |
+| `lastModified` | Set this to propagate the origin's timestamp as Harper's `ETag` / `Last-Modified` |
+| `expiresAt` | Set this (milliseconds epoch) to give the record a per-entry TTL overriding the table default |
+
+#### Source `subscribe` — active caching
+
+For data sources that can push change notifications, implement a `subscribe` method that returns an async iterable of events. Harper calls `subscribe` once at startup and propagates the events to the cache automatically, so no polling is needed.
+
+```javascript
+class MySource extends Resource {
+ async *subscribe() {
+ // Option A: async generator
+ const stream = connectToExternalEventStream();
+ for await (const event of stream) {
+ yield {
+ type: 'put', // 'put' | 'invalidate' | 'delete' | 'message' | 'transaction'
+ id: event.id,
+ value: event.data,
+ timestamp: event.ts,
+ };
+ }
+ }
+}
+```
+
+Alternatively, use the default subscription stream to push events from a callback-based source:
+
+```javascript
+class MySource extends Resource {
+ subscribe() {
+ const subscription = super.subscribe();
+ remoteClient.on('update', (event) => {
+ subscription.send({ type: 'put', id: event.id, value: event.data });
+ });
+ return subscription;
+ }
+}
+```
+
+**Supported event types:**
+
+| Type | Description |
+| ------------- | ---------------------------------------------------------------------------------- |
+| `put` | Record updated — `value` contains the new record |
+| `invalidate` | Record changed but value not provided — cache evicts and re-fetches on next access |
+| `delete` | Record deleted |
+| `message` | Pub/sub message passing through the record; record data is not changed |
+| `transaction` | Atomic group of writes; include an array of events in the `writes` property |
+
+**Event properties:**
+
+| Property | Description |
+| ----------- | ----------------------------------------------------------------- |
+| `type` | Event type (see above) |
+| `id` | Primary key of the affected record |
+| `value` | New record value (for `put` and `message`) |
+| `writes` | Array of events for `transaction` events |
+| `table` | Target table name (for cross-table writes inside a `transaction`) |
+| `timestamp` | Timestamp of the change |
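+
+For instance, a `transaction` event groups several writes so they commit atomically. Continuing the callback-based example above (the table names and values here are hypothetical):
+
+```javascript
+subscription.send({
+	type: 'transaction',
+	writes: [
+		{ type: 'put', table: 'Post', id: 42, value: { id: 42, title: 'Hello' } },
+		{ type: 'delete', table: 'Comment', id: 7 },
+	],
+	timestamp: Date.now(),
+});
+```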
+
+By default, `subscribe` runs on a single thread to avoid duplicate notifications and race conditions. To run on multiple threads:
+
+```javascript
+class MySource extends Resource {
+ static subscribeOnThisThread(threadIndex) {
+ return threadIndex < 2; // run on first two threads only
+ }
+ async *subscribe() { ... }
+}
+```
+
+#### Write-through caching
+
+If the source implements `put`, `patch`, or `delete`, writes to the caching table are forwarded to the source before being committed locally:
+
+```javascript
+class MySource extends Resource {
+ async get() { ... }
+
+ async put(data) {
+ await fetch(`https://api.example.com/${this.getId()}`, {
+ method: 'PUT',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify(await data),
+ });
+ }
+
+ async delete() {
+ await fetch(`https://api.example.com/${this.getId()}`, { method: 'DELETE' });
+ }
+}
+```
+
+Harper waits for the source to confirm the write before committing to the local cache (two-phase write).
+
+**Loading from source in write methods:** Methods other than `get()` do not automatically load data from the source. Call `ensureLoaded()` first if you need the existing record:
-- `expiration` — Default TTL in seconds
-- `eviction` — Eviction time in seconds
-- `scanInterval` — Period for scanning expired records
+```javascript
+class MyCache extends tables.MyCache {
+ async post(data) {
+ await this.ensureLoaded(); // loads from source if not cached
+ this.quantity = this.quantity - (await data).purchases;
+ }
+}
+```
+
+#### Passive-active updates
+
+A source `get()` can proactively populate _other_ tables as a side effect, atomically with the main record. Pass `this` (the current context) to any write call to include it in the same transaction:
+
+```javascript
+const { Post, Comment } = tables;
+class BlogSource extends Resource {
+ async get() {
+ const post = await (await fetch(`https://my-blog/${this.getId()}`)).json();
+ for (const comment of post.comments) {
+ await Comment.put(comment, this); // atomic with the Post write
+ }
+ return post;
+ }
+}
+Post.sourcedFrom(BlogSource);
+```
### `primaryKey`
diff --git a/reference/security/certificate-verification.md b/reference/security/certificate-verification.md
index 2b00542f..9eb6f2f3 100644
--- a/reference/security/certificate-verification.md
+++ b/reference/security/certificate-verification.md
@@ -242,8 +242,7 @@ certificateVerification:
Increase CRL cache TTL for stable environments:
```yaml
-
-...
+---
crl:
cacheTtl: 172800000 # 48 hours
```
@@ -251,8 +250,7 @@ crl:
Increase OCSP cache TTL for long-lived connections:
```yaml
-
-...
+---
ocsp:
cacheTtl: 7200000 # 2 hours
```
@@ -260,8 +258,7 @@ ocsp:
Reduce grace period for tighter revocation enforcement:
```yaml
-
-...
+---
crl:
gracePeriod: 0 # No grace period
```
diff --git a/reference_versioned_docs/version-v4/security/certificate-verification.md b/reference_versioned_docs/version-v4/security/certificate-verification.md
index 2b00542f..9eb6f2f3 100644
--- a/reference_versioned_docs/version-v4/security/certificate-verification.md
+++ b/reference_versioned_docs/version-v4/security/certificate-verification.md
@@ -242,8 +242,7 @@ certificateVerification:
Increase CRL cache TTL for stable environments:
```yaml
-
-...
+---
crl:
cacheTtl: 172800000 # 48 hours
```
@@ -251,8 +250,7 @@ crl:
Increase OCSP cache TTL for long-lived connections:
```yaml
-
-...
+---
ocsp:
cacheTtl: 7200000 # 2 hours
```
@@ -260,8 +258,7 @@ ocsp:
Reduce grace period for tighter revocation enforcement:
```yaml
-
-...
+---
crl:
gracePeriod: 0 # No grace period
```
diff --git a/sidebarsLearn.ts b/sidebarsLearn.ts
index 104dd40b..466847a4 100644
--- a/sidebarsLearn.ts
+++ b/sidebarsLearn.ts
@@ -30,7 +30,33 @@ const sidebarsLearn: SidebarsConfig = {
label: 'Developers',
collapsible: false,
className: 'learn-category-header',
- items: [{ type: 'autogenerated', dirName: 'developers' }],
+ items: [
+ {
+ type: 'doc',
+ id: 'developers/harper-applications-in-depth',
+ label: 'Harper Applications in Depth',
+ },
+ {
+ type: 'doc',
+ id: 'developers/caching-with-harper',
+ label: 'Caching with Harper',
+ },
+ {
+ type: 'doc',
+ id: 'developers/active-caching-subscriptions',
+ label: 'Active Caching with Subscriptions',
+ },
+ {
+ type: 'doc',
+ id: 'developers/semantic-caching-vector-indexing',
+ label: 'Semantic Caching with Vector Indexing',
+ },
+ {
+ type: 'doc',
+ id: 'developers/write-through-caching',
+ label: 'Write-Through Caching',
+ },
+ ],
},
{
type: 'category',