# Add batched event delivery with retry, timeout, and queue management #1
## Summary
This PR overhauls the JS client's event delivery model from fire-and-forget individual HTTP calls to a buffered, batched pipeline. A matching `/logs/batch` endpoint is added on the broker server.
### Client (`logwolf-client/js`)

- `capture()` is now synchronous: it enqueues the event and returns immediately. The old `capture()` behavior (respects sampling, sends immediately) is now the behavior of `create()`, which still awaits server acknowledgement and bypasses the queue.
- Queued events are flushed on a timer (`flushIntervalMs`) or eagerly when the batch reaches `maxBatchSize`.
- When `maxQueueSize` is exceeded, the oldest event is dropped and `onDropped` is called.
- Failed batches are retried according to `retryDelaysMs`. Auth errors (401/403) are not retried.
- All `fetch` calls now go through `fetchWithTimeout()` using `AbortController`, controlled by `requestTimeoutMs`.
- New: `LogwolfEvent.stop()` freezes the event's duration at call time. Called automatically by `capture()` and `create()`; idempotent.
- New: `flush()`, a public method to drain the queue before process exit / page unload.
- New: `destroy()` stops the background flush interval (useful for clean Node.js shutdown and test teardown).
- `toJson()` is renamed to `toObject()` and returns a plain object; `JSON.stringify` is now done at the send site.
- New config fields: `flushIntervalMs`, `maxBatchSize`, `maxQueueSize`, `retryDelaysMs`, `requestTimeoutMs`. `onDropped` is optional.
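To make the queue semantics above concrete, here is a minimal sketch of the client-side pipeline: drop-oldest overflow with an `onDropped` callback, batching by `maxBatchSize`, and retries per `retryDelaysMs`. `BatchQueue` and the injected `SendFn` are illustrative names, not the PR's actual API, and the 401/403 no-retry rule and request timeout are omitted here for brevity.

```typescript
// Hypothetical sketch of the client's batching pipeline; not the real classes.
type SendFn = (batch: object[]) => Promise<void>;

interface BatchQueueOptions {
  maxQueueSize: number;      // cap on buffered events
  maxBatchSize: number;      // events per HTTP request
  retryDelaysMs: number[];   // backoff schedule; length = max retries
  onDropped?: (event: object) => void;
}

class BatchQueue {
  private queue: object[] = [];

  constructor(private opts: BatchQueueOptions, private send: SendFn) {}

  // Mirrors the new capture(): synchronous, returns whether the event
  // was accepted. With a drop-oldest policy the new event always fits.
  enqueue(event: object): boolean {
    if (this.queue.length >= this.opts.maxQueueSize) {
      const oldest = this.queue.shift();
      if (oldest) this.opts.onDropped?.(oldest);
    }
    this.queue.push(event);
    return true;
  }

  // Drains the queue in batches of maxBatchSize.
  async flush(): Promise<void> {
    while (this.queue.length > 0) {
      const batch = this.queue.splice(0, this.opts.maxBatchSize);
      await this.sendWithRetry(batch);
    }
  }

  private async sendWithRetry(batch: object[]): Promise<void> {
    for (let attempt = 0; ; attempt++) {
      try {
        await this.send(batch);
        return;
      } catch (err) {
        // The real client also skips retries on 401/403; not modeled here.
        if (attempt >= this.opts.retryDelaysMs.length) throw err;
        await new Promise((r) => setTimeout(r, this.opts.retryDelaysMs[attempt]));
      }
    }
  }
}
```

Injecting the sender keeps the network out of the queue logic, which also makes the drop and retry behavior straightforward to unit-test.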
### Server (`logwolf-server/broker`)

- New `POST /logs/batch` endpoint behind the existing API key middleware.
- Each event in the batch emits a `log.INFO` RabbitMQ event, reusing the same emitter pattern as `CreateLog`.
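The server-side handler's core can be sketched as below. `handleLogsBatch` and the injected `emit` callback are assumptions standing in for the broker's routing layer and RabbitMQ emitter; the API key check happens in middleware before this code runs, so it is not shown.

```typescript
// Hypothetical sketch of the POST /logs/batch handler core.
interface BatchResult {
  accepted: number; // how many events were emitted
  status: number;   // HTTP status to return
}

function handleLogsBatch(
  body: unknown,
  emit: (routingKey: string, event: object) => void,
): BatchResult {
  // Reject empty or non-array payloads before touching the message broker.
  if (!Array.isArray(body) || body.length === 0) {
    return { accepted: 0, status: 400 };
  }
  for (const event of body) {
    // One log.INFO message per event, mirroring the single-event CreateLog path.
    emit("log.INFO", event as object);
  }
  return { accepted: body.length, status: 202 };
}
```

Returning 202 reflects that delivery is asynchronous: the broker has accepted the batch, not yet persisted it.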
## Breaking changes

- `capture()` no longer returns a `Promise<void>`; it now returns `boolean` (whether the event was accepted into the queue).
- `LogwolfConfig` now requires the new batching fields.
- `LogwolfEvent.toJson()` is replaced by `toObject()`.
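A brief before/after migration sketch for these changes. The `LogwolfEvent` here is a minimal stand-in for illustration, not the package's real class, and the commented `client` lines assume a hypothetical client instance.

```typescript
// Minimal stand-in for LogwolfEvent to illustrate the renamed method.
class LogwolfEvent {
  private stoppedAt: number | null = null;
  constructor(public name: string, public startedAt = Date.now()) {}

  // stop() freezes the duration; idempotent.
  stop(): void {
    if (this.stoppedAt === null) this.stoppedAt = Date.now();
  }

  // toJson() is gone; toObject() returns a plain object and the caller
  // stringifies at the send site.
  toObject(): object {
    return { name: this.name, startedAt: this.startedAt, stoppedAt: this.stoppedAt };
  }
}

const evt = new LogwolfEvent("checkout");
evt.stop();

// Before: await client.capture(evt);              // Promise<void>
// After:  const accepted: boolean = client.capture(evt);

// Before: const body = evt.toJson();
// After: serialize the plain object where it is sent.
const body = JSON.stringify(evt.toObject());
```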