Decaf Core provides the foundational building blocks for the Decaf TypeScript ecosystem: strongly-typed models, repository pattern, pluggable persistence adapters, a composable query DSL, and pagination/observer utilities. With decorators and an injectable registry, it wires models to repositories and adapters so you can build data access that is framework-agnostic yet fully typed.
Release docs refreshed on 2025-11-26. See workdocs/reports/RELEASE_NOTES.md for ticket summaries.
- Repository: A class that implements the repository pattern, providing a consistent API for CRUD operations and querying.
- Adapter: An abstract class that defines the interface for connecting to different database backends.
- Statement: A query builder for creating complex database queries in a fluent, type-safe manner.
- TaskEngine: A system for managing background jobs and asynchronous operations.
- `ModelService` and `PersistenceService`: Base classes for creating services that encapsulate business logic and data access.
- Migrations: A system for managing database schema changes over time.
- RAM Adapter: An in-memory adapter for testing and development.
Minimal size: 46.2 KB gzipped
The Decaf Core package provides a cohesive set of primitives for building strongly-typed data-access layers and managing background tasks in TypeScript. It centers around:
- Models (from @decaf-ts/decorator-validation) enhanced with identity and persistence metadata.
- A Repository abstraction that encapsulates CRUD, querying, and observation.
- A powerful Task Engine for defining, scheduling, and executing background jobs with support for worker threads.
- Adapters that bridge repositories to underlying storage (in-memory, HTTP, TypeORM, etc.).
- A fluent Query DSL (Statement/Condition) with pagination.
- Lightweight dependency injection utilities to auto-resolve repositories.
Below is an overview of the main modules and their public APIs exposed by core.
- `Repository<M>`
  - Constructor: `new Repository(adapter: Adapter, clazz: Constructor<M>, ...)`
  - CRUD: `create`, `read`, `update`, `delete`
  - Bulk ops: `createAll`, `readAll`, `updateAll`, `deleteAll`
  - Querying:
    - `select(...selectors?)`: Start a fluent query chain.
    - `query(condition?, orderBy?, order?, limit?, skip?)`: Execute a simple query.
  - New high-level queries: A set of methods, often used with the `@prepared` decorator, for common query patterns:
    - `find(value, order?)`: Searches default attributes of a model for partial matches (starts-with).
    - `findBy(key, value)`: Finds records by a specific attribute-value pair.
    - `findOneBy(key, value)`: Finds a single record or throws a `NotFoundError`.
    - `listBy(key, order)`: Lists all records ordered by a specific key.
    - `countOf(key?)`: Counts records, optionally for a specific attribute.
    - `maxOf(key)`, `minOf(key)`, `avgOf(key)`, `sumOf(key)`: Perform aggregate calculations.
    - `distinctOf(key)`: Retrieves distinct values for an attribute.
    - `groupOf(key)`: Groups records by a given attribute.
    - `page(value, direction?, ref?)`: Paginates through records matching a default partial-match query.
    - `paginateBy(key, order, ref?)`: Paginates records ordered by a specific key.
  - Observation: `observe(observer, filter?)`, `unObserve(observer)`, `updateObservers(...)`, `refresh(...)`
  - Statement execution: `statement(name, ...args)`: Executes a custom method on the repository decorated with `@prepared`.
  - Repository registry helpers:
    - `static for(config, ...args)`: Proxy factory for building repositories with a specific adapter config.
    - `static forModel(model, alias?, ...args)`: Returns a Repository instance for a model.
    - `static register(model, repoCtor, alias?)`: Registers a repository for a model.
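The registry helpers above boil down to a model-to-repository lookup. The sketch below shows that pattern in plain TypeScript; `RepoRegistry` and its key scheme are illustrative stand-ins, not the actual @decaf-ts/core internals.

```typescript
// Minimal sketch of the model -> repository registry idea behind
// Repository.register / Repository.forModel (names here are assumptions).
type Constructor<T> = new (...args: any[]) => T;

class RepoRegistry {
  private repos = new Map<string, Constructor<any>>();

  // Mirrors Repository.register(model, repoCtor, alias?)
  register<M>(model: Constructor<M>, repoCtor: Constructor<any>, alias = "default"): void {
    this.repos.set(`${model.name}:${alias}`, repoCtor);
  }

  // Mirrors Repository.forModel(model, alias?): resolve and instantiate
  forModel<M>(model: Constructor<M>, alias = "default"): any {
    const ctor = this.repos.get(`${model.name}:${alias}`);
    if (!ctor) throw new Error(`No repository registered for ${model.name}`);
    return new ctor();
  }
}

class User {}
class UserRepository {}

const registry = new RepoRegistry();
registry.register(User, UserRepository);
const repo = registry.forModel(User);
console.log(repo instanceof UserRepository); // true
```

The alias dimension is what lets the same model resolve to different repositories per adapter flavour.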
- Decorators (`repository/decorators`):
  - `@repository(modelCtor, flavour?)`: Injects a repository instance or registers a repository class.
  - `@prepared()`: Marks a repository method as an executable "prepared statement", allowing it to be called via `repository.statement()`.
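The contract behind `@prepared` and `repository.statement()` is name-based dispatch restricted to marked methods. The sketch below illustrates that idea only; it is not the actual @decaf-ts/core implementation, and the decorator is applied manually so it runs without any compiler flags.

```typescript
// A marker records which methods are callable by name; statement() refuses
// anything unmarked. Purely illustrative of the @prepared contract.
const preparedMethods = new Set<string>();

// Legacy-style method decorator; invoked manually below for portability.
function prepared() {
  return (_target: object, propertyKey: string) => {
    preparedMethods.add(propertyKey);
  };
}

class UserQueries {
  findAdults(minAge: number): string {
    return `adults >= ${minAge}`;
  }

  internalHelper(): string {
    return "not exposed"; // never reachable through statement()
  }

  // Mirrors repository.statement(name, ...args)
  statement(name: string, ...args: unknown[]): unknown {
    if (!preparedMethods.has(name)) {
      throw new Error(`"${name}" is not a prepared statement`);
    }
    return (this as any)[name](...args);
  }
}

// Equivalent to decorating findAdults with @prepared()
prepared()(UserQueries.prototype, "findAdults");

const queries = new UserQueries();
console.log(queries.statement("findAdults", 18)); // adults >= 18
```

Only explicitly marked methods are reachable, which is what keeps prepared statements a safe, enumerable surface.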
A robust system for managing background jobs.
- `TaskEngine<A>`: The core engine that polls for and executes tasks. Manages the task lifecycle, concurrency, and worker threads.
- `TaskService<A>`: A high-level service providing a clean API for interacting with the `TaskEngine`. It is the recommended entry point for managing tasks.
  - `push(task, track?)`: Submits a new task for execution.
  - `schedule(task, track?).for(date)`: Schedules a task to run at a specific time.
  - `track(id)`: Returns a `TaskTracker` to monitor an existing task.
- Models:
  - `TaskModel`: Represents a task, its status (`PENDING`, `RUNNING`, `SUCCEEDED`, `FAILED`), input, and configuration (e.g., `maxAttempts`, `backoff`). Can be `ATOMIC` or `COMPOSITE`.
  - `TaskEventModel`: Logs status changes and progress for a task.
- Builders:
  - `TaskBuilder`: A fluent API for constructing `TaskModel` instances.
  - `CompositeTaskBuilder`: A builder for creating multi-step (`COMPOSITE`) tasks.
- Handlers & tracking:
  - `ITaskHandler`: The interface to implement for defining the logic of a task. Handlers are registered with the `TaskHandlerRegistry`.
  - `TaskTracker`: An object returned when tracking a task, allowing you to await its completion and receive progress updates.
- Worker threads: The engine can be configured to run tasks in Node.js `worker_threads`, providing true parallelism and non-blocking execution for CPU-intensive jobs. Configuration is done via the `workerPool` and `workerAdapter` properties in the `TaskEngineConfig`.
- `Adapter<N, Q, R, Ctx>`: The bridge between a repository and the back-end storage.
  - Handles CRUD operations, raw queries, and model/record transformation (`prepare`/`revert`).
  - Manages different storage "flavours" (e.g., `ram`, `fs`, `typeorm`).
- `Sequence`: Provides identity/sequence generation.
- `ObserverHandler`: Manages observer notifications.
- `Statement<M>`: A fluent DSL for building and executing queries.
  - Methods: `select`, `from`, `where`, `orderBy`, `groupBy`, `limit`, `offset`, `execute`, `paginate`.
  - Now includes enhanced logic to "squash" simple queries into efficient prepared statements.
- `Condition<M>`: A composable condition tree for building `where` clauses.
- `Paginator<M>`: An abstract pagination helper.
  - Now includes `serialize()` and `deserialize()` methods to easily pass pagination state.
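The value of `serialize()`/`deserialize()` is that pagination state becomes a plain payload that can cross requests or processes. The sketch below shows that round-trip; the `PageState` shape is an assumption for illustration, not the core Paginator's actual wire format.

```typescript
// Sketch of the serialize/deserialize idea: pagination state as a JSON
// token that a later request can restore. Field names are assumptions.
interface PageState {
  table: string;
  limit: number;
  offset: number;
  orderBy?: string;
}

function serializePage(state: PageState): string {
  return JSON.stringify(state);
}

function deserializePage(payload: string): PageState {
  const parsed = JSON.parse(payload);
  // Reject payloads that do not carry the minimum pagination state
  if (typeof parsed.table !== "string" || typeof parsed.limit !== "number") {
    throw new Error("invalid pagination payload");
  }
  return parsed as PageState;
}

const token = serializePage({ table: "users", limit: 10, offset: 20, orderBy: "createdAt" });
const restored = deserializePage(token);
console.log(restored.offset); // 20
```

A token like this can be handed to a client and passed back on the next request to resume exactly where the previous page ended.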
- `BaseModel`: The base class all models extend from.
- Decorators like `@table`, `@pk`, `@column`, `@index`, and relation decorators (`@oneToOne`, `@oneToMany`, `@manyToOne`) are used to define persistence metadata.
- Includes updated logic for handling complex relations, including `oneToManyOnCreateUpdate` and initial support for `manyToMany`.
- `RamAdapter`: An in-memory adapter, perfect for tests and quick prototyping.
- `FilesystemAdapter`: A `RamAdapter`-compatible adapter that persists data to the local filesystem, enabling data to survive process restarts. Ideal for local development and testing.
This guide provides detailed, real-life examples of how to use the main features of the @decaf-ts/core library.
The Repository and Adapter are the core of the persistence layer. The Repository provides a high-level API for your application to interact with, while the Adapter handles the specific implementation details of your chosen database.
This loop is the foundation of the persistence process. It ensures data is correctly transformed, validated, and persisted.
sequenceDiagram
participant C as Client Code
participant R as Repository
participant V as Validators/Decorators
participant A as Adapter
participant DB as Database
C->>+R: create(model)
R->>R: 1. createPrefix(model)
R->>+V: 2. Enforce DB Decorators (ON)
V-->>-R:
R->>+A: 3. prepare(model)
A-->>-R: { record, id, transient }
R->>+A: 4. create(table, id, record)
A->>+DB: 5. Database Insert
DB-->>-A: Result
A-->>-R: record
R->>+A: 6. revert(record)
A-->>-R: model instance
R->>R: 7. createSuffix(model)
R->>+V: 8. Enforce DB Decorators (AFTER)
V-->>-R:
R-->>-C: created model
1. `createPrefix`: The `Repository`'s `createPrefix` method is called. This is where you can add logic to be executed before the main `create` operation.
2. Decorators (ON): Any decorators configured to run `ON` the `CREATE` operation are executed. This is a good place for validation or data transformation.
3. `prepare`: The `Adapter`'s `prepare` method is called to convert the model into a format suitable for the database. This includes separating transient properties.
4. `create`: The `Adapter`'s `create` method is called to persist the data to the database.
5. Database insert: The `Adapter` communicates with the database to perform the insert operation.
6. `revert`: The `Adapter`'s `revert` method is called to convert the database record back into a model instance.
7. `createSuffix`: The `Repository`'s `createSuffix` method is called. This is where you can add logic to be executed after the main `create` operation.
8. Decorators (AFTER): Any decorators configured to run `AFTER` the `CREATE` operation are executed.
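The prepare/revert steps in the loop can be pictured as a pure transformation. The sketch below illustrates them with hand-rolled functions; the property names and the transient marker are assumptions, not the adapter's real implementation.

```typescript
// Minimal sketch of prepare() and revert(): prepare splits transient
// properties out of the record before persistence, revert reassembles the
// model afterwards. Shapes and names here are illustrative only.
interface Prepared {
  id: string;
  record: Record<string, unknown>;
  transient: Record<string, unknown>;
}

const TRANSIENT_KEYS = new Set(["sessionToken"]); // pretend this was marked transient

function prepare(model: Record<string, unknown> & { id: string }): Prepared {
  const record: Record<string, unknown> = {};
  const transient: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(model)) {
    if (key === "id") continue; // the primary key travels separately
    (TRANSIENT_KEYS.has(key) ? transient : record)[key] = value;
  }
  return { id: model.id, record, transient };
}

function revert(prepared: Prepared): Record<string, unknown> {
  return { id: prepared.id, ...prepared.record, ...prepared.transient };
}

const user = { id: "u1", name: "Ada", sessionToken: "secret" };
const prepared = prepare(user);
console.log("sessionToken" in prepared.record); // false: never hits the database
console.log(revert(prepared)); // the full model, transients restored
```

The key property is that the round-trip is lossless for the caller while the persisted record stays free of transient data.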
FilesystemAdapter (found under core/src/fs) extends RamAdapter but writes every dataset to disk so repositories survive restarts. You can swap it anywhere you would use RamAdapter.
Configuration highlights
- `rootDir`: Base directory where databases live. Each adapter alias becomes its own sub-folder.
- `jsonSpacing`: Optional pretty-print spacing for the JSON payloads (handy while debugging).
- `fs`: Custom `fs/promises` implementation; forward your own for tests or sandboxes.
- `onHydrated(info)`: Callback executed after a table is read from disk; great for metrics or warm-up logs.
Directory layout
- Records -> `{rootDir}/{alias}/{table}/{encodedPk}.json`, storing `{ id, record }`.
- Indexes -> `{rootDir}/{alias}/{table}/indexes/{indexName}.json`, mirroring `@index` metadata so range/aggregate queries stay fast.
import path from "node:path";
import { FilesystemAdapter, Repository } from "@decaf-ts/core";
import { User } from "./models/User";
const adapter = new FilesystemAdapter(
{
rootDir: path.join(process.cwd(), ".decaf-data"),
jsonSpacing: 2,
onHydrated: ({ table, records }) => {
console.info(`Hydrated ${records} ${table} records from disk`);
},
},
"local-fs"
);
const repo = new Repository(adapter, User);
await repo.create(new User({ id: "user-1", name: "Persistent" }));
const reloaded = await repo.read("user-1"); // survives process restarts
await adapter.shutdown(); // closes open file handles when the app exits

For tests, point rootDir at a temporary folder (see tests/fs/__helpers__/tempFs.ts) and clean it up after each suite.
The library provides a set of powerful decorators for defining models and their behavior.
- `@table(name)`: Specifies the database table name for a model.
- `@pk()`: Marks a property as the primary key.
- `@column(name)`: Maps a property to a database column with a different name.
- `@createdAt()`: Automatically sets the property to the current timestamp when a model is created.
- `@updatedAt()`: Automatically sets the property to the current timestamp when a model is created or updated.
- `@index()`: Creates a database index on a property.
import { table, pk, column, createdAt, updatedAt, index } from '@decaf-ts/core';
import { model, Model } from '@decaf-ts/decorator-validation';
@table('users')
@model()
export class User extends Model {
@pk()
id: string;
@column('user_name')
@index()
name: string;
@createdAt()
createdAt: Date;
@updatedAt()
updatedAt: Date;
}

You can model complex relationships between your classes using `@oneToOne`, `@oneToMany`, and `@manyToOne`.
import { table, pk, oneToOne, oneToMany, manyToOne } from '@decaf-ts/core';
import { model, Model } from '@decaf-ts/decorator-validation';
import { User } from './User';
@table('profiles')
@model()
export class Profile extends Model {
@pk()
id: string;
bio: string;
}
@table('posts')
@model()
export class Post extends Model {
@pk()
id: string;
title: string;
@manyToOne(() => User)
author: User;
}
@table('users')
@model()
export class User extends Model {
@pk()
id: string;
@oneToOne(() => Profile)
profile: Profile;
@oneToMany(() => Post)
posts: Post[];
}

You can create your own persistence layer by extending the `Adapter` class.
import { Adapter, Model, Constructor, PrimaryKeyType } from '@decaf-ts/core';
class MyCustomAdapter extends Adapter<any, any, any, any> {
constructor() {
super({}, 'my-custom-adapter');
}
async create<M extends Model>(
clazz: Constructor<M>,
id: PrimaryKeyType,
model: Record<string, any>
): Promise<Record<string, any>> {
console.log(`Creating in ${Model.tableName(clazz)} with id ${id}`);
// Your database insert logic here
return model;
}
// Implement other abstract methods: read, update, delete, raw
}

The `ModelService` provides a convenient way to interact with your repositories.
import { ModelService, Repository } from '@decaf-ts/core';
import { User } from './models';
class UserService extends ModelService<User, Repository<User, any>> {
constructor() {
super(User);
}
async findActiveUsers(): Promise<User[]> {
return this.repository.select().where({ status: 'active' }).execute();
}
}
const userService = new UserService();
const activeUsers = await userService.findActiveUsers();

The `TaskEngine` is a powerful tool for managing background jobs.
A TaskHandler defines the logic for a specific task.
import { TaskHandler, TaskContext } from '@decaf-ts/core';
class MyTaskHandler implements TaskHandler<any, any> {
async run(input: any, context: TaskContext): Promise<any> {
console.log('Running my task with input:', input);
await context.progress({ message: 'Step 1 complete' });
// ... task logic
return { result: 'success' };
}
}

import { TaskEngine, TaskModel, TaskHandlerRegistry } from '@decaf-ts/core';
import { MyTaskHandler } from './MyTaskHandler';
// 1. Register the handler
const registry = new TaskHandlerRegistry();
registry.register('my-task', new MyTaskHandler());
// 2. Create the task engine
const taskEngine = new TaskEngine({ adapter, registry });
// 3. Push a task
const task = new TaskModel({
classification: 'my-task',
input: { some: 'data' },
});
const { tracker } = await taskEngine.push(task, true);
// 4. Track the task's progress and result
tracker.on('progress', (payload) => {
console.log('Task progress:', payload);
});
const result = await tracker.resolve();
console.log('Task result:', result);
// 5. Schedule a task
taskEngine.schedule(task).for(new Date(Date.now() + 5000)); // 5 seconds from now

The Task Engine can be configured to execute tasks in separate worker threads, enabling true parallelism.
import { TaskEngine, TaskHandlerRegistry } from '@decaf-ts/core';
import path from 'path';
const taskEngine = new TaskEngine({
adapter,
registry,
workerPool: {
entry: path.resolve(__dirname, './worker-entry.ts'), // Path to your worker entry file
size: 4, // Number of worker threads
},
workerAdapter: {
adapterModule: '@decaf-ts/core/fs', // Module to load the adapter from
adapterClass: 'FilesystemAdapter', // Adapter class name
adapterArgs: [{ rootDir: './data' }, 'fs-worker'], // Arguments for the adapter constructor
}
});
await taskEngine.start();

`TaskEngineConfig` exposes every knob used by the engine to claim, lease, and log tasks. The full set of options is:
| Option | Description |
|---|---|
| `adapter` | The persistence adapter where `TaskModel` rows live. When migrations run via the CLI this is a dedicated `RamAdapter`; never reuse an alias that is also a migration target. |
| `overrides` | Passed to `adapter.for(...)` when a task needs custom flags (for example to seed identity metadata). |
| `registry` | `TaskHandlerRegistry` wiring classification strings to handler instances. Only registered handlers can run. |
| `bus` | Optional `TaskEventBus` that receives progress/log/status events. |
| `workerId` | Uniquely identifies the worker claiming leases. Each engine (including CLI migrations) must use a different `workerId` so leases do not clash. |
| `concurrency` | Number of work units to execute in parallel (set to 1 when migration steps must stay sequential). |
| `leaseMs` | How long a running task can go without a heartbeat before it is re-queued. |
| `pollMsIdle` | Poll interval when the queue is empty. |
| `pollMsBusy` | Poll interval while tasks are running (shorter than `pollMsIdle`). |
| `logTailMax` | Maximum log entries kept in memory before flushing to the bus. |
| `streamBufferSize` | Byte size of the stream buffer used for large log payloads. |
| `maxLoggingBuffer` | Upper limit (in bytes) for buffered logs before older entries are pruned. |
| `loggingBufferTruncation` | Percentage of the buffer kept when `maxLoggingBuffer` is reached; the rest gets truncated. |
| `gracefulShutdownMsTimeout` | Time (ms) `TaskEngine.shutdown()` waits for in-flight workers before forcing a stop. |
| `autoShutdown` | Optional backoff configuration (`enabled`, `backoffStepMs`, `maxIdleDelayMs`) that gradually raises `pollMsIdle` until the engine stops once the queue drains. |
`TaskContext` enriches every handler callback with helpers such as:

- `progress(payload)`: emit structured progress updates (`TaskEventType.PROGRESS`).
- `pipe(...log)` and `flush()`: buffer logs that eventually feed into `TaskEventType.LOG`.
- `heartbeat()`: extend the lease before it expires (used in long-running handlers).
- `scheduleCompositeSteps(...)`: dynamically insert extra steps when building migration tasks.
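A handler typically interleaves these helpers as it works through its input. The sketch below exercises them against a hand-rolled stand-in context; the real `TaskContext` comes from the engine, so the interface shape here is an assumption for illustration only.

```typescript
// A handler body using the context helpers, driven by a fake context that
// just records calls. The CtxLike shape is illustrative, not the real API.
interface CtxLike {
  progress(payload: unknown): Promise<void>;
  heartbeat(): Promise<void>;
  pipe(...log: string[]): void;
  flush(): Promise<void>;
}

async function importRows(rows: string[], ctx: CtxLike): Promise<number> {
  let done = 0;
  for (const row of rows) {
    ctx.pipe(`importing ${row}`); // buffered, eventually surfaces as a LOG event
    done++;
    await ctx.progress({ done, total: rows.length }); // structured PROGRESS update
    if (done % 100 === 0) await ctx.heartbeat(); // keep the lease alive on long runs
  }
  await ctx.flush(); // drain any buffered log lines
  return done;
}

// Stand-in context so the flow is visible without a running engine
const events: unknown[] = [];
const ctx: CtxLike = {
  progress: async (p) => { events.push(p); },
  heartbeat: async () => {},
  pipe: (...log) => { events.push(log.join(" ")); },
  flush: async () => {},
};

importRows(["a", "b"], ctx).then((n) => console.log(`imported ${n} rows`));
```

The heartbeat call is the important habit: without it, a handler that outlives `leaseMs` gets re-claimed by another worker.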
When migrations run through a TaskService-backed engine the adapter alias must be dedicated to the migration queue (e.g., decaf-cli-task-engine). MigrationService.migrateAdapters enforces this by comparing every adapter alias/flavour and rejecting any run that would reuse the task engine alias as a migration target. Keeping the task queue isolated prevents lease metadata from colliding with schema updates.
Tune the knobs above with migrations in mind:
- Keep `concurrency` at `1` so versions apply sequentially.
- Increase `leaseMs` slightly above your longest expected step so long-running migrations do not get re-claimed prematurely.
- Use `pollMsIdle`/`pollMsBusy` to control how aggressively the engine polls when the queue is empty or busy; CLI runners typically lower `pollMsBusy`.
- `logTailMax`, `streamBufferSize`, `maxLoggingBuffer`, and `loggingBufferTruncation` keep migration logs bounded; the CLI attaches a `TaskEventBus` so progress/state logs flush before shutdown.
- `autoShutdown` gradually raises `pollMsIdle` so CLI runners stop after every tracked task completes.
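Pulled together, a migration-friendly configuration might look like the fragment below. Every numeric value is an example, and `taskEngineAdapter`/`registry` are assumed to exist already in scope.

```typescript
// Illustrative TaskEngineConfig tuned for migrations (values are examples)
const migrationEngineConfig = {
  adapter: taskEngineAdapter, // dedicated RamAdapter, never a migration target
  registry,                   // migration-only TaskHandlerRegistry
  workerId: "migrator-1",     // unique per engine so leases do not clash
  concurrency: 1,             // apply versions strictly in sequence
  leaseMs: 10 * 60_000,       // comfortably above the longest expected step
  pollMsIdle: 2_000,
  pollMsBusy: 250,            // poll aggressively while work is in flight
  logTailMax: 500,            // bound in-memory migration logs
  gracefulShutdownMsTimeout: 30_000,
  autoShutdown: { enabled: true, backoffStepMs: 1_000, maxIdleDelayMs: 30_000 },
};
```

The `autoShutdown` block is what lets a CLI runner exit on its own once the queue drains instead of polling forever.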
Migration orchestration often runs inside TaskService. Typical setup:
- Create a dedicated `Adapter`, e.g. `new RamAdapter({}, "decaf-cli-task-engine")`, and boot it before starting the `TaskService`.
- `await new TaskService().boot({ adapter: taskEngineAdapter })` to power the `TaskHandlerRegistry` and `TaskTracker`.
- Pass the `TaskService` instance into `MigrationService.migrateAdapters(..., { taskMode: true, taskService })`.
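The steps above can be wired together as follows. `appAdapter` is a placeholder for your real persistence adapter; only calls already documented in this guide are used.

```typescript
// Wiring sketch for a TaskService-backed migration run.
import { RamAdapter, TaskService, MigrationService } from "@decaf-ts/core";

// 1. Dedicated adapter alias for the task queue (never a migration target)
const taskEngineAdapter = new RamAdapter({}, "decaf-cli-task-engine");

// 2. Boot the TaskService against it
const taskService = new TaskService();
await taskService.boot({ adapter: taskEngineAdapter });

// 3. Hand the service to the migration run; `appAdapter` is assumed to be
// your already-booted persistence adapter
await MigrationService.migrateAdapters([appAdapter], {
  taskMode: true,
  taskService,
});
```

Keeping step 1 on its own alias is what satisfies the isolation check described below in this guide's migration sections.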
TaskService.boot mirrors TaskEngineConfig: you can also supply registry, bus, or custom overrides, and the service builds the engine, event bus, and tracker registry. The CLI attaches a migration-only TaskHandlerRegistry so the worker never executes unrelated handlers.
The CLI already follows this pattern and explicitly prevents the task engine adapter alias from appearing inside the migrating aliases, which keeps persistence targets isolated. When taskMode is true, every migration version produces a CompositeTask; use migration.track() or taskService.track(id) to attach listeners so progress/status events flow through the command logger.
TaskService.track(id) wires the CLI logger to the matching TaskTracker so status/progress logs stream through your console before TaskTracker.wait() resolves. If a migration task fails, call MigrationService.retry(taskId)—it uses repository overrides to reset status to PENDING, clear error/lease metadata, and re-queue the work—then taskService.track(id) again so the TaskEngine reclaims it.
Composite tasks are ordered by the sequence you pass to CompositeTaskBuilder or by using the dependsOn/dependencies array. Each step has a classification (matching a handler), an optional name, and lock/dependsOn metadata (TaskStepSpecModel). Locks avoid concurrent execution, and dependencies support either <taskId> or <taskId>:<stepRef> shorthand so you can mix tasks and steps as prerequisites.
Task attempts are bounded by maxAttempts and backoff (configured via builders). The engine records each attempt and automatically escalates to WAITING_RETRY/RUNNING states; if a task exhausts retries, the service surfaces the final error via TaskTracker.wait() so your migration command can decide between retrying or aborting.
MigrationService is the canonical upgrade runner. Use MigrationService.migrateAdapters(adapters, config) or DecafCoreModule.migrate(config) once your persistence layer is booted, but remember that live verification expects each migration to add a required column/property and backfill existing records before moving to the next version.
MigrationService speaks the MigrationConfig / PersistenceMigrationConfig language:
- `persistMigrationSteps`: keep track of every migration run (defaults to `true`).
- `persistenceFlavour`: restricts the execution plan to a single adapter flavour alias.
- `targetVersion`: semver/string goal for this run (CLI defaults to `package.json.version`).
- `taskMode`: when `true`, migrations are executed through the TaskService as `CompositeTask`s built per version. When `false`, `executeMigration` runs each migration inline.
- `includeGenericInTaskMode`: when `false` (the default for multi-adapter runs), only flavour-scoped migrations execute inside tasks so generic migrations stay in relational mode.
- `retrieveLastVersion`/`setCurrentVersion`: asynchronous handlers so each adapter can persist its own migration head. `retrieveLastVersion` is called prior to building the execution plan; `setCurrentVersion` runs after every successfully completed version (per task in task mode, once at the end in normal mode).
- `taskService`: required when `taskMode` is enabled; the CLI boots a `TaskService` backed by a dedicated `RamAdapter` (`decaf-cli-task-engine`).
- `versioning`: override the default npm-semver comparator (`MigrationVersioning`) if you deploy a non-semver scheme.
- `handlers`: per-flavour overrides (typically wired via the CLI defaults) for `retrieveLastVersion`/`setCurrentVersion` if you need special persistence beyond the default adapter cache.
- `dryRun`: compatibility flag that is parsed but no longer alters runtime behaviour; the migrations still execute against your database.
Example handlers:
handlers: {
nano: {
async retrieveLastVersion(adapter) {
return (await new VersionRepo(adapter).read("nano"))?.version;
},
async setCurrentVersion(version, adapter) {
await new VersionRepo(adapter).upsert("nano", { version });
},
},
}

`MigrationService` consults `retrieveLastVersion` before building the execution plan so it always knows the persisted `currentVersion`. Only migrations whose normalized versions fall strictly greater than that value and less than or equal to the `targetVersion` (CLI `--to`) are scheduled, ensuring each run advances the system lifecycle. After every version completes successfully, `setCurrentVersion` records the new head so subsequent boots skip already applied hops; when the stored version already matches the target, the filtering logic yields an empty plan and the migration run is a no-op.
Use MigrationService.migrateAdapters([nanoAdapter, typeormAdapter], config) with taskMode: true and the appropriate handlers to queue each version with the TaskService, then call migration.track() to wait on each version.
Each migration class must be decorated with @migration(...). The decorator accepts multiple overloads, but all forms populate the metadata that MigrationService.sort() uses to build a deterministic plan:
@migration("1.1.0-add-isActive", {
precedence: "1.1.0",
flavour: "nano",
rules: [
async (_, adapter) => Boolean(await adapter.exists("user")),
],
})
export class AddIsActiveMigration implements Migration<any, NanoAdapter> { ... }

- `reference`: required string used for logging, dependency hints, and version normalization (typically the semver value).
- `precedence`: optional hint that can be a `Migration` constructor, string token, or object referencing another migration. `MigrationService.extractPrecedenceTokens` reads it to break ties when migrations share the same version and flavour; use it to force ordering between otherwise identical migrations.
- `flavour`: optional adapter flavour alias (e.g., `"nano"`, `"type-orm"`). Migrations are only considered when `targetFlavour` matches or (when `includeGeneric` is `true`) when a generic migration declares `DefaultFlavour`.
- `rules`: optional array of async predicates `(qr, adapter, ctx)` that gate whether the migration should run. If any rule returns `false`, the migration is skipped.
MigrationService.sort() first compares normalized versions (normalize() via MigrationVersioning), then uses compareByPrecedence, and finally falls back to flavour/reference lexicographic ordering. If two migrations share version/flavour and have conflicting precedence, an explicit InternalError is thrown so you can clarify the ordering.
MigrationService starts by calling retrieveLastVersion (when provided) to determine the persisted currentVersion. It builds an execution plan by filtering all decorated migrations whose normalized versions fall strictly greater than currentVersion and less than or equal to targetVersion.
In normal mode, migrateNormally executes each migration with executeMigration. After the last migration succeeds, setCurrentVersion is invoked once with the last version so the next boot knows where to resume.
In task mode, migrateViaTasks uses MigrationTaskBuilder (a CompositeTaskBuilder wrapper) to queue one TaskModel per version. Each queued task depends on the previous one (the CLI attaches the dependency chain automatically), and MigrationService.track() waits for the TaskTracker of each version to finish. Immediately after each task resolves, track() calls setCurrentVersion for that version (using this.queuedTaskChain to map task IDs to versions). This per-version update ensures that, after a crash, re-running the CLI will call retrieveLastVersion and resume at the correct position.
By design setCurrentVersion executes only after a version completely finishes: inline (taskMode: false) runs update at the end of the migration batch, and task mode updates after every CompositeTask. That means the recorded currentVersion always equals the last fully successful hop, so retrieveLastVersion can skip already applied versions and start at the next semantic bump. If a version fails mid-task, the version does not advance, and rerunning MigrationService.retry() or re-launching the CLI will re-queue the failed version before moving on.
If a task fails or is canceled, call MigrationService.retry(taskId):
- `retry` checks for explicit IDs, pending context IDs (`Context.pending(PersistenceKeys.MIGRATION)`), or the queued chain.
- It queries the TaskRepository (with `ignoreHandlers: true`) and rewrites the `TaskModel` to `status = PENDING`, clearing `error`, `leaseOwner`, and timestamps so the TaskEngine can reclaim it.
If you want to rerun an entire migration from scratch, omit taskIds and let retry() call migrateViaTasks again.
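The recovery flow above can be sketched as code. `failedTaskId` and `taskService` are assumed to already be in scope from the migration run; only the calls named in this guide are used.

```typescript
// Recovery sketch: reset a failed migration task and wait for its rerun.
// `failedTaskId` would typically come from a rejected TaskTracker.wait().
await MigrationService.retry(failedTaskId); // status -> PENDING, lease/error metadata cleared

const tracker = taskService.track(failedTaskId); // re-attach progress/status listeners
await tracker.wait(); // resolves once the engine reclaims and finishes the task
```

Omitting the ID entirely, as noted above, reruns the whole plan via `migrateViaTasks` instead of a single task.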
MigrationService rejects any configuration where the task engine adapter alias is also part of the migrating adapters; keeping the TaskService on a separate RamAdapter ensures migrations can persist their schema changes without racing the tasks that perform them.
The Repository class now includes several high-level methods for common query patterns, simplifying data access.
// Find records by a specific attribute
const users = await userRepo.findBy('email', 'test@example.com');
// Find a single record (throws NotFoundError if not found)
const user = await userRepo.findOneBy('username', 'jdoe');
// List records ordered by a key
const sortedUsers = await userRepo.listBy('createdAt', OrderDirection.DESC);

The find and page methods support partial matching (starts-with) on default query attributes.
// Assuming 'name' and 'email' are default query attributes for User
// This will find users where name OR email starts with "john"
const users = await userRepo.find('john');
// You can also specify the sort order
const sortedUsers = await userRepo.find('john', OrderDirection.DESC);

Perform calculations directly on your data:
const totalUsers = await userRepo.countOf();
const activeUsersCount = await userRepo.countOf('isActive'); // Counts where isActive is truthy
const maxAge = await userRepo.maxOf('age');
const minAge = await userRepo.minOf('age');
const avgAge = await userRepo.avgOf('age');
const totalAge = await userRepo.sumOf('age');
const distinctCities = await userRepo.distinctOf('city');

Easily paginate through your data, including partial match searches:
// Paginate based on a default query (e.g., all users)
// This searches for users matching "search term" (partial match) and paginates the results
const page1 = await userRepo.page('search term', OrderDirection.ASC, { limit: 10, offset: 1 });
// Paginate ordered by a specific key without filtering
const page2 = await userRepo.paginateBy('createdAt', OrderDirection.DESC, { limit: 20, offset: 2 });
console.log(`Page ${page1.current} of ${page1.total}`);

If you have bug reports, questions or suggestions please create a new issue.
I am grateful for any contributions made to this project. Please read this to get started.
The first and easiest way you can support it is by Contributing. Even just finding a typo in the documentation is important.
Financial support is always welcome and helps keep both me and the project alive and healthy.
So, if this project helped you in any way, whether by teaching you something or simply by saving you precious time, please consider donating.
This project is released under the Mozilla Public License 2.0.
By developers, for developers...