Merged
3 changes: 2 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
@@ -27,4 +27,5 @@ profile.cov
# .idea/
# .vscode/

db-data
db-data
.claude
11 changes: 9 additions & 2 deletions README.md
@@ -11,7 +11,9 @@ No vendor lock-in. No per-seat pricing. One `docker-compose up` and you're loggi
Logwolf is a lightweight alternative to Sentry and Datadog for developers who want to own their data. Applications instrument themselves via the JS SDK, which ships structured events to a Go backend pipeline. Events are stored in MongoDB and surfaced through a React dashboard.

- **Capture events** with severity levels, tags, and arbitrary key/value payloads
- **Track durations** automatically — `LogwolfEvent` acts as a stopwatch
- **Batched delivery** — events are buffered in memory and flushed in configurable batches, keeping your application's hot path free
- **Retry with backoff** — failed sends are retried automatically; auth errors are not
- **Track durations** automatically — `LogwolfEvent` acts as a stopwatch, frozen at enqueue time
- **Sample intelligently** — configurable rates for info, warning, and error events; critical always sends
- **See everything** in a dashboard with metrics, event rate, error rate, and tag breakdowns

@@ -42,6 +44,11 @@ const logwolf = new Logwolf({
apiKey: process.env.LOGWOLF_API_KEY,
sampleRate: 0.5,
errorSampleRate: 1,
flushIntervalMs: 5000,
maxBatchSize: 20,
maxQueueSize: 500,
retryDelaysMs: [1000, 3000, 10000],
requestTimeoutMs: 10000,
});

const event = new LogwolfEvent({
@@ -53,7 +60,7 @@ const event = new LogwolfEvent({
event.set('userId', '123');
event.set('amount', 9900);

await logwolf.capture(event);
logwolf.capture(event); // enqueues and returns immediately
```

Full SDK reference at [logwolf-docs.vercel.app/sdk/js](https://logwolf-docs.vercel.app/sdk/js.html).
19 changes: 11 additions & 8 deletions docs/architecture.md
@@ -38,7 +38,7 @@ Logger ────────────────────────

**Caddy** is the only service that faces the internet. It terminates TLS and routes traffic: `/api/*` goes to the Broker, everything else goes to the Frontend. Nothing else is exposed on the host.

**Broker** is the HTTP API gateway, written in Go using the `chi` router. All SDK traffic enters here. It validates API keys, pushes log events to RabbitMQ asynchronously, and proxies read requests to the Logger via RPC. The Broker responds `202 Accepted` to write requests immediately — before the event hits the database.
**Broker** is the HTTP API gateway, written in Go using the `chi` router. All SDK traffic enters here. It validates API keys, pushes log events to RabbitMQ asynchronously, and proxies read requests to the Logger via RPC. The Broker responds `202 Accepted` to write requests immediately — before the event hits the database. It exposes two write endpoints: `POST /logs` for single events and `POST /logs/batch` for batched delivery (max 1000 events per request).

**RabbitMQ** decouples ingestion from persistence. The Broker publishes events to a topic exchange (`logs_topic`). The Listener consumes from a durable named queue (`logwolf_logs`). If the Listener restarts, in-flight messages are not lost.

@@ -54,13 +54,16 @@ Logger ────────────────────────

When your application calls `logwolf.capture(event)`:

1. The SDK sends a `POST /api/logs` request with a `Authorization: Bearer lw_...` header.
2. Caddy forwards the request to the Broker.
3. The Broker validates the API key — checking an in-memory 60-second cache first, then MongoDB on a miss.
4. The Broker publishes the event to RabbitMQ and returns `202 Accepted`.
5. The Listener picks up the message from the durable queue.
6. The Listener calls `RPCServer.LogInfo` on the Logger over TCP.
7. The Logger writes the event to MongoDB.
1. The SDK enqueues the event in memory and returns immediately.
2. When the queue reaches `maxBatchSize` or `flushIntervalMs` elapses, the SDK sends a `POST /api/logs/batch` request with an `Authorization: Bearer lw_...` header. Failed sends are retried according to `retryDelaysMs`.
3. Caddy forwards the request to the Broker.
4. The Broker validates the API key — checking an in-memory 60-second cache first, then MongoDB on a miss.
5. The Broker publishes each event in the batch to RabbitMQ and returns `202 Accepted`.
6. The Listener picks up the messages from the durable queue.
7. The Listener calls `RPCServer.LogInfo` on the Logger over TCP.
8. The Logger writes each event to MongoDB.

When `logwolf.create(event)` is used instead, steps 1–2 are skipped — the event is sent immediately via `POST /api/logs` and the call awaits the server response.

The HTTP response comes back before the database write completes. This keeps ingestion latency low and protects your application from any slowness in the persistence layer.

7 changes: 6 additions & 1 deletion docs/getting-started.md
@@ -97,6 +97,11 @@ const logwolf = new Logwolf({
apiKey: process.env.LOGWOLF_API_KEY,
sampleRate: 1,
errorSampleRate: 1,
flushIntervalMs: 5000,
maxBatchSize: 20,
maxQueueSize: 500,
retryDelaysMs: [1000, 3000, 10000],
requestTimeoutMs: 10000,
});

const event = new LogwolfEvent({
@@ -106,7 +111,7 @@ const event = new LogwolfEvent({
});

event.set('userId', '123');
await logwolf.capture(event);
logwolf.capture(event);
```

Head to the **Dashboard** — your event should appear within a few seconds.
75 changes: 61 additions & 14 deletions docs/sdk/js.md
@@ -18,19 +18,30 @@ const logwolf = new Logwolf({
apiKey: process.env.LOGWOLF_API_KEY,
sampleRate: 0.5,
errorSampleRate: 1,
flushIntervalMs: 5000,
maxBatchSize: 20,
maxQueueSize: 500,
retryDelaysMs: [1000, 3000, 10000],
requestTimeoutMs: 10000,
});
```

Configuration is validated at construction time using Zod. If a required field is missing or malformed, the constructor throws immediately.

### Configuration options

| Option | Type | Required | Description |
| ----------------- | -------- | -------- | -------------------------------------------------------------------------------------------------------------- |
| `url` | `string` | ✅ | Base URL of your Logwolf instance, including `/api/`. Must be a valid URL. |
| `apiKey` | `string` | ✅ | API key generated from the dashboard. Must start with `lw_`. |
| `sampleRate` | `number` | — | Fraction of `info` and `warning` events to send. `1` = all, `0.5` = half, `0` = none. Defaults to sending all. |
| `errorSampleRate` | `number` | — | Fraction of `error` events to send. `critical` events always bypass sampling. Defaults to sending all. |
| Option | Type | Required | Description |
| ------------------ | ---------- | -------- | --------------------------------------------------------------------------------------------------------------- |
| `url` | `string` | ✅ | Base URL of your Logwolf instance, including `/api/`. Must be a valid URL. |
| `apiKey` | `string` | ✅ | API key generated from the dashboard. Must start with `lw_` and be at least 10 characters. |
| `flushIntervalMs` | `number` | ✅ | How often (ms) the queue is flushed automatically. |
| `maxBatchSize` | `number` | ✅ | Flush immediately when the queue reaches this many events, without waiting for the interval. |
| `maxQueueSize` | `number` | ✅ | Maximum number of events to hold in memory. When exceeded, the oldest event is dropped. |
| `retryDelaysMs` | `number[]` | ✅ | Delays between retry attempts on failed sends. Length determines retry count. Use `[]` to disable retries. |
| `requestTimeoutMs` | `number` | ✅ | Abort a fetch after this many milliseconds. |
| `sampleRate` | `number` | — | Fraction of `info` and `warning` events to send. `1` = all, `0.5` = half, `0` = none. Defaults to sending all. |
| `errorSampleRate` | `number` | — | Fraction of `error` events to send. `critical` events always bypass sampling. Defaults to sending all. |
| `onDropped` | `function` | — | Called when events are dropped. Signature: `(events: LogwolfEvent[], reason: string) => void`. |
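
The `retryDelaysMs` contract can be modelled like this: one initial attempt, then one retry per array entry, sleeping the given delay before each retry. This is a sketch of the behaviour described in the table above, not the SDK's actual source; `attempt` is a hypothetical stand-in for sending one batch.

```typescript
// Sketch of the retryDelaysMs contract: initial attempt plus one retry per entry.
// Assumed behaviour based on the documented option, not the SDK internals.
async function sendWithRetries(
  attempt: () => Promise<boolean>, // resolves true when the server accepted the batch
  retryDelaysMs: number[],
): Promise<boolean> {
  if (await attempt()) return true;
  for (const delayMs of retryDelaysMs) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    if (await attempt()) return true;
  }
  return false; // exhausted: the SDK would hand the batch to onDropped
}
```

With the default `[1000, 3000, 10000]` this gives four attempts in total; `[]` means a single attempt and no retries.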

## Creating events

@@ -86,29 +97,60 @@ event.setSeverity('warning');

### `capture(event)`

Sends an event subject to sampling. This is the method you'll use in most cases.
Enqueues an event for batched delivery. This is the method you'll use in most cases.

```ts
await logwolf.capture(event);
logwolf.capture(event);
```

`capture()` is **synchronous** — it enqueues the event and returns immediately. Delivery happens in the background. It returns `true` if the event was accepted into the queue, `false` if it was dropped by sampling or because the queue was full.

Sampling behaviour:

- `info` and `warning` events are sampled at `sampleRate`
- `error` events are sampled at `errorSampleRate`
- `critical` events always bypass sampling and are always sent

Events are delivered to the server in batches, either when `maxBatchSize` is reached or when the flush interval fires.
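
The sampling rules above boil down to a single decision. This is a sketch of the documented behaviour, not the SDK's internal code; the `roll` parameter is injectable here for determinism and is not part of the SDK API.

```typescript
type Severity = 'info' | 'warning' | 'error' | 'critical';

// Decide whether an event survives sampling, per the rules above.
// Sketch of the documented behaviour, not the SDK internals.
function shouldSend(
  severity: Severity,
  sampleRate: number,           // applies to info and warning
  errorSampleRate: number,      // applies to error
  roll: number = Math.random(), // injectable for testing
): boolean {
  if (severity === 'critical') return true; // critical always bypasses sampling
  const rate = severity === 'error' ? errorSampleRate : sampleRate;
  return roll < rate;
}
```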

### `create(event)`

Sends an event unconditionally, bypassing `sampleRate` and `errorSampleRate`. Use this when you explicitly want every event to be recorded regardless of sampling configuration.
Sends an event immediately, bypassing both sampling and the queue. Awaitable — resolves when the server has acknowledged the event.

```ts
await logwolf.create(event);
```

Use this when you need immediate, awaited delivery — for example, in a process exit handler.

### `flush()`

Drains the queue immediately. If a background flush is already in progress, waits for it to complete before sending any remaining events.

```ts
await logwolf.flush();
```

Call this before process exit or page unload to avoid losing buffered events.

### `destroy()`

Stops the background flush interval. Call this for clean Node.js shutdown or in test teardown. Any events still in the queue will be lost — call `flush()` first if you need to drain them.

```ts
await logwolf.flush();
logwolf.destroy();
```

## Duration tracking

`LogwolfEvent` acts as a stopwatch. The duration is measured from object instantiation to the moment `capture()` or `create()` is called.
`LogwolfEvent` acts as a stopwatch. The duration is measured from object instantiation to the moment the event is stopped.

- `capture()` stops the clock at **enqueue time** — before the event is sent to the server.
- `create()` stops the clock immediately before sending.
- You can also stop the clock manually by calling `event.stop()` before either method.

`stop()` is idempotent — calling it more than once is a no-op; the first call wins.

```ts
const event = new LogwolfEvent({
@@ -120,7 +162,7 @@ const event = new LogwolfEvent({
const result = await db.query('SELECT ...');

event.set('rows', result.length);
await logwolf.capture(event); // duration = time since event was instantiated
logwolf.capture(event); // duration = time since event was instantiated
```

The `duration` field appears in milliseconds in the dashboard and in the event payload.
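
The stopwatch semantics, including the idempotent `stop()`, can be modelled as follows. This is illustrative only, not the actual `LogwolfEvent` internals.

```typescript
// Sketch of the stopwatch behaviour described above
// (illustrative, not the actual LogwolfEvent implementation).
class Stopwatch {
  private readonly startedAt = Date.now();
  private stoppedAt: number | null = null;

  stop(): void {
    // Idempotent: only the first call records the stop time.
    if (this.stoppedAt === null) this.stoppedAt = Date.now();
  }

  get durationMs(): number {
    // Until stop() is called, the clock is still running.
    return (this.stoppedAt ?? Date.now()) - this.startedAt;
  }
}
```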
@@ -182,11 +224,11 @@ async function processOrder(orderId: string) {
try {
const result = await doWork(orderId);
event.set('result', result.status);
await logwolf.capture(event);
logwolf.capture(event);
} catch (err) {
event.setSeverity('error');
event.set('error', err instanceof Error ? err.message : String(err));
await logwolf.capture(event);
logwolf.capture(event);
}
}
```
@@ -218,6 +260,11 @@ const logwolf = new Logwolf({
apiKey: process.env.LOGWOLF_API_KEY!,
sampleRate: 0.1,
errorSampleRate: 1,
flushIntervalMs: 5000,
maxBatchSize: 20,
maxQueueSize: 500,
retryDelaysMs: [1000, 3000, 10000],
requestTimeoutMs: 10000,
});

export function middleware(request: NextRequest) {
@@ -231,7 +278,7 @@ export function middleware(request: NextRequest) {
},
});

logwolf.capture(event).catch(() => {});
logwolf.capture(event);

return NextResponse.next();
}