
Update packages and dependencies #2

Open
feruzm wants to merge 3 commits into main from upgrade

Conversation

@feruzm
Member

@feruzm feruzm commented Mar 7, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Redis-based rate limiting for upload requests
    • S3-compatible blob storage support
  • Infrastructure & Maintenance

    • Upgraded Node.js to version 20 (minimum requirement now ≥18)
    • Modernized core dependencies: Koa, Redis, Sharp, and AWS SDK v3
    • Migrated code quality tooling from TSLint to ESLint
    • Improved multi-process worker lifecycle management
  • Tests

    • Added comprehensive test suites for avatar, cover, image proxy, file uploads, and blob storage functionality

@coderabbitai

coderabbitai bot commented Mar 7, 2026

📝 Walkthrough

Walkthrough

This PR modernizes a Node.js image hosting application by upgrading dependencies (Node 18→20), refactoring worker process management with cluster API changes, introducing Redis-based rate limiting and S3 blob store abstraction, standardizing import syntax to defaults, and adding comprehensive test coverage across multiple endpoints.

Changes

Cohort / File(s) Summary
Infrastructure & Configuration
Dockerfile, Makefile, tsconfig.json, test/tsconfig.json
Updated Node.js from 18 to 20 with adjusted system libraries; transitioned build tooling from TSLint to ESLint; added Mocha --exit flag and updated TypeScript target to ES2022 with esModuleInterop enabled.
Dependency Management
package.json
Comprehensive modernization: Node engine bumped to >=18; core packages upgraded (koa family ^2.16.0, redis ^4.7.0, sharp ^0.33.5); AWS SDK v2→v3 (S3Client); removed cloudflare/s3-blob-store; dev tools expanded (ESLint ecosystem, TypeScript ~5.7.0, mocha ^10.8.0).
Configuration Cleanup
tslint.json (removed)
Removed obsolete TSLint configuration file as part of ESLint migration.
Core Application Logic
src/app.ts, src/common.ts, src/error.ts
Cluster API migration (isMaster→isPrimary); primary process now forks/manages workers with restart throttling via rate limiter; introduced Redis-based rate limiting with sliding-window algorithm; enhanced error emission with ctx parameter; refactored server lifecycle with explicit Promise wrappers; worker initialization gated by process role.
S3 Blob Store Abstraction
src/s3-store.ts
New file: S3BlobStore class implementing blob store interface with AWS SDK v3 S3Client; supports createReadStream, putBuffer, createWriteStream, exists, and remove operations.
Upload & Rate Limiting
src/upload.ts, src/utils.ts
Refactored multipart parsing with info object destructuring; replaced internal rate limiter with centralized getRatelimit function using Redis; made storeWrite async with putBuffer optimization path; replaced cloudflare SDK with fetch-based HTTP purge request.
Import Standardization
src/avatar.ts, src/blacklist-service.ts, src/cache.ts, src/cover.ts, src/constants.ts, src/fallback.ts, src/image-resizer.ts, src/legacy-proxy.ts, src/logger.ts, src/proxy.ts, src/routes.ts, src/serve.ts
Unified import style: replaced namespace imports with default imports for config, etag, Sharp, and router modules; removed unused imports; minor formatting adjustments.
Import Normalization in Tests
test/app.ts, test/common.ts, test/index.ts, test/proxy.ts, test/upload.ts, test/utils.ts
Standardized test imports to default style (assert, needle); updated mock API identifier from 'database_api-get_accounts' to 'condenser_api-get_accounts'.
New Test Suites
test/avatar.ts, test/cover.ts, test/legacy-proxy.ts, test/proxy-formats.ts, test/serve.ts, test/store.ts
Comprehensive integration tests: avatar endpoint with size/format negotiation, cover image serving, legacy proxy redirects, image format conversions, file serving with caching, and blob store read/write operations across uploadStore/proxyStore; all use before/after hooks for server lifecycle management.
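The Redis-based sliding-window rate limiting summarized above can be sketched as follows. This is an illustrative stand-in, not the PR's actual `getRatelimit`: a plain in-memory array of timestamps takes the place of the per-account Redis sorted set (conceptually `ZREMRANGEBYSCORE` to expire old entries, `ZADD` to record the request, `ZCARD` to count), and the class and method names are assumptions.

```typescript
// Illustrative sliding-window rate limiter. In the real code this state
// would live in a Redis sorted set keyed per account; here an array of
// request timestamps stands in so the algorithm is self-contained.
class SlidingWindowLimiter {
    private hits = new Map<string, number[]>()

    constructor(private max: number, private durationMs: number) {}

    // Returns the shape the walkthrough mentions: {remaining, reset, total}.
    check(account: string, now = Date.now()) {
        const windowStart = now - this.durationMs
        // Equivalent of ZREMRANGEBYSCORE: drop entries older than the window.
        const recent = (this.hits.get(account) ?? []).filter((t) => t > windowStart)
        recent.push(now) // Equivalent of ZADD: record this request.
        this.hits.set(account, recent)
        return {
            total: this.max,
            remaining: Math.max(0, this.max - recent.length),
            reset: recent[0] + this.durationMs, // when the oldest entry leaves the window
        }
    }
}
```

A request handler would respond with 429 once `remaining` reaches 0, matching the "Rate limit exceeded" branch in the sequence diagram below.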

Sequence Diagram

sequenceDiagram
    participant Primary as Primary Process
    participant Worker as Worker Process
    participant Redis as Redis
    participant S3 as AWS S3

    Primary->>Primary: Fork worker
    Worker->>Redis: Connect & subscribe
    Redis-->>Worker: Ready
    Worker->>S3: Initialize S3Client
    Worker->>Worker: Listen on port
    Worker-->>Primary: Worker ready

    Primary->>Primary: Handle incoming request
    Primary-->>Worker: Pass to worker
    Worker->>Redis: getRatelimit(account, max, duration)
    Redis->>Redis: Check sorted set (sliding window)
    Redis-->>Worker: Return {remaining, reset, total}
    
    alt Rate limit exceeded
        Worker-->>Primary: 429 error
    else Within limit
        Worker->>S3: GET/PUT object
        S3-->>Worker: Data/confirmation
        Worker-->>Primary: 200 response
    end

    Worker-->>Primary: Error event
    Primary->>Primary: Throttle restart with backoff
    Primary->>Primary: Fork replacement worker

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • Performance and security fixes #1: Shares identical changes to worker-restart throttling logic in src/app.ts and error event emission pattern in src/error.ts with ctx parameter addition.

Poem

🐰 From Node eighteen to twenty, dependencies refined,
Workers dance through Redis gates, rate-limited and kind,
S3 blobs now stream with grace, imports unified and bright,
Tests bloom across the garden—coverage burning strong and right! 🌟

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage — ⚠️ Warning — Docstring coverage is 7.14%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
Description Check — ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
Title Check — ✅ Passed — The title 'Update packages and dependencies' clearly describes the primary change in the changeset: a comprehensive modernization of project dependencies across package.json, the Node.js version in the Dockerfile, build tooling in the Makefile, and numerous source-file imports.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (6)
Dockerfile (1)

46-51: Consider adding a non-root user for improved security posture.

The container runs as root by default. Adding a non-root user reduces the attack surface if the container is compromised.

🔒 Proposed fix to add non-root user
 EXPOSE 8800
 ENV PORT=8800
 ENV NODE_ENV=production
+
+RUN groupadd --gid 1000 node \
+    || true \
+  && useradd --uid 1000 --gid 1000 -m node \
+    || true
+
+USER node

 HEALTHCHECK --interval=20s --timeout=10s --start-period=5s \
   CMD wget -nv -t1 --spider "http://localhost:8800/healthcheck" || exit 1

 CMD ["node", "lib/app.js"]

Note: The node:20-slim image already includes a node user (uid 1000), so you may simplify to just USER node after verifying file ownership in the COPY steps.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Dockerfile` around lines 46 - 51, Add a non-root runtime user and ensure
application files are owned by that user: when building the image, change
ownership of the copied app files (e.g., lib, package.json, node_modules or the
app folder created during COPY) to a non-root user (use the existing node user
from the base image or create a user/group) and then switch to that user before
the final CMD; keep the HEALTHCHECK as-is but ensure the non-root user has
read/execute permissions on the app and healthcheck endpoints. Target symbols:
Dockerfile ENV NODE_ENV, HEALTHCHECK, CMD ["node", "lib/app.js"], and the COPY
steps that place lib/ and other app files so you can add chown and a final USER
node (or USER <created-user>) instruction.
src/app.ts (1)

67-68: Consider adding error handling to the listen Promise wrapper.

The current implementation doesn't handle listen errors. Errors such as EADDRINUSE are delivered asynchronously via the server's 'error' event, so without a handler the Promise would never settle. Consider adding error event handling for robustness:

♻️ Optional: Add error handling
-    const listen = (port: any) => new Promise<void>((resolve) => server.listen(port, resolve))
+    const listen = (port: any) => new Promise<void>((resolve, reject) => {
+        server.once('error', reject)
+        server.listen(port, () => {
+            server.removeListener('error', reject)
+            resolve()
+        })
+    })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app.ts` around lines 67 - 68, The listen Promise wrapper lacks runtime
error handling — update the listen function to attach an 'error' listener to
server when calling server.listen so that port-in-use and async listen errors
reject the Promise; also listen for the 'listening' (or use the listen callback)
to resolve and ensure you remove the error listener after success or failure to
avoid leaks. Locate the listen function and replace the simple new Promise(() =>
server.listen(...)) with a Promise that registers server.once('listening',
resolve) and server.once('error', err => reject(err)) (and cleans up the
counterpart listener on either path). Keep the existing close Promise as-is.
src/utils.ts (1)

259-272: Fire-and-forget cache purge is intentional but consider logging success.

The Cloudflare cache purge correctly uses a fire-and-forget pattern since it's a non-critical operation. Error handling logs failures appropriately. The native fetch API is available in Node 18+, matching the engine requirement.

Consider adding a debug log on success for observability, though this is optional given the non-critical nature of cache purging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils.ts` around lines 259 - 272, Add a success/debug log for the
Cloudflare purge fetch call so successful responses are observable; inside the
fetch .then handler where it currently checks if (!res.ok) and calls
logger.error, add an else branch that calls logger.debug or logger.info with
context (e.g., { status: res.status, url: value }) and a message like
'Cloudflare cache purge success' so successes are recorded while keeping the
fire-and-forget behavior and existing logger.error/logger.error(...) usage for
failures and .catch intact.
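The suggestion above can be sketched as below. This is a hypothetical shape, not the PR's actual code: `purgeCache`, the injected `fetchFn` (passed in here so the sketch is testable without network access), the logger interface, and the `<zone-id>` placeholder in the URL are all illustrative. The real Cloudflare purge endpoint additionally requires authentication headers, which are omitted.

```typescript
type MinimalLogger = {
    error: (...args: unknown[]) => void
    debug: (...args: unknown[]) => void
}

// Fire-and-forget cache purge with a success log, per the nitpick above.
// The returned promise exists only for testability; callers may ignore it.
function purgeCache(
    urls: string[],
    fetchFn: (url: string, init?: any) => Promise<{ ok: boolean; status: number }>,
    logger: MinimalLogger,
): Promise<void> {
    return fetchFn('https://api.cloudflare.com/client/v4/zones/<zone-id>/purge_cache', {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ files: urls }),
    }).then((res) => {
        if (!res.ok) {
            logger.error('cloudflare cache purge failed', { status: res.status, files: urls })
        } else {
            // The added success log: cheap observability for a non-critical path.
            logger.debug('cloudflare cache purge success', { status: res.status, files: urls })
        }
    }).catch((err) => {
        logger.error('cloudflare cache purge request error', err)
    })
}
```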
src/common.ts (1)

191-193: Type assertion masks undefined stores in primary process.

The pattern undefined as any as AbstractBlobStore hides the fact that stores are undefined in the primary process. While the primary only manages workers and shouldn't handle HTTP requests, this could cause confusing runtime errors during testing or future refactoring.

Consider documenting this behavior or using a more explicit pattern:

♻️ Alternative: Export with explicit undefined type
-export const uploadStore = isPrimaryOnly ? (undefined as any as AbstractBlobStore) : loadStore('upload_store')
-export const proxyStore = isPrimaryOnly ? (undefined as any as AbstractBlobStore) : loadStore('proxy_store')
+export const uploadStore: AbstractBlobStore | undefined = isPrimaryOnly ? undefined : loadStore('upload_store')
+export const proxyStore: AbstractBlobStore | undefined = isPrimaryOnly ? undefined : loadStore('proxy_store')

Note: This would require updating call sites to handle the undefined case, which adds safety but more code changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/common.ts` around lines 191 - 193, The current exports uploadStore and
proxyStore use "undefined as any as AbstractBlobStore" which masks that stores
are actually undefined in the primary process; change these exports to have an
explicit nullable type (e.g., AbstractBlobStore | undefined) instead of forcing
a cast, update the initialization to export undefined when isPrimaryOnly is true
(keeping loadStore('upload_store') / loadStore('proxy_store') for non-primary),
and then update all call sites that use uploadStore or proxyStore to handle the
undefined case (guard or throw with clear message) so runtime errors are
explicit; reference symbols: uploadStore, proxyStore, AbstractBlobStore,
loadStore, isPrimaryOnly.
src/s3-store.ts (1)

24-27: Consider defensive handling for res.Body type.

In AWS SDK v3, GetObjectCommand returns Body as Readable | ReadableStream | Blob | undefined. While in Node.js it's typically a Readable, adding a runtime check would be safer.

🛡️ Optional defensive check
         })).then((res) => {
-            const body = res.Body as Readable
-            body.on('error', (err) => passthrough.destroy(err))
-            body.pipe(passthrough)
+            if (!res.Body) {
+                passthrough.destroy(new Error('Empty response body from S3'))
+                return
+            }
+            const body = res.Body as Readable
+            body.on('error', (err) => passthrough.destroy(err))
+            body.pipe(passthrough)
         }).catch((err) => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/s3-store.ts` around lines 24 - 27, The code assumes res.Body is a Node
Readable when piping to passthrough; add a runtime defensive check on res.Body
(the result from GetObjectCommand) before calling body.on/body.pipe: if it's
undefined throw or callback an error, if it's a Node.js Readable use
body.on('error', ...) and body.pipe(passthrough), and for other possible types
(ReadableStream/Blob) convert them to a Node Readable (or stream the chunks)
before piping; update the logic around res.Body and the passthrough usage so you
never call body.on or body.pipe on an invalid type.
test/store.ts (1)

10-10: Isolate test artifacts more strongly.

These tests write into the configured stores with Date.now()-only keys and never remove the created blobs. On shared or persistent backends, that makes parallel runs harder to isolate and leaves stale test data behind. A per-run prefix plus teardown cleanup would make this suite much more reliable.

Also applies to: 23-23, 28-28, 36-36, 48-48, 57-57

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/store.ts` at line 10, Replace the Date.now()-only blob keys with a
per-run unique prefix and ensure created blobs are removed in teardown:
introduce a single TEST_RUN_ID (e.g., from crypto.randomUUID() or a
timestamp+random suffix) used to build keys instead of the inline `const key =
\`test-write-\${Date.now()}\`` (and update the other occurrences at the noted
lines), and add cleanup logic in the test suite teardown (afterAll/afterEach)
that iterates over and deletes any keys created with that TEST_RUN_ID from the
configured store so parallel runs are isolated and no stale blobs remain.
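The per-run prefix idea from the comment above might look like this minimal sketch; `TEST_RUN_ID`, `testKey`, and `createdKeys` are illustrative names, not code from the PR.

```typescript
import { randomUUID } from 'crypto'

// One random ID per test run: keys from parallel runs can never collide,
// and everything created under this prefix can be deleted in teardown.
const TEST_RUN_ID = randomUUID()
const createdKeys: string[] = []

function testKey(name: string): string {
    const key = `test-${TEST_RUN_ID}-${name}`
    createdKeys.push(key) // remembered so an after() hook can remove each blob
    return key
}
```

An `after()` hook would then iterate `createdKeys` and call the store's `remove` for each one.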

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 228fa6d9-7cd1-4878-ab58-50a014c9414b

📥 Commits

Reviewing files that changed from the base of the PR and between 1f802d4 and 564275f.

⛔ Files ignored due to path filters (2)
  • test/test.gif is excluded by !**/*.gif
  • test/test.png is excluded by !**/*.png
📒 Files selected for processing (36)
  • Dockerfile
  • Makefile
  • package.json
  • src/app.ts
  • src/avatar.ts
  • src/blacklist-service.ts
  • src/cache.ts
  • src/common.ts
  • src/constants.ts
  • src/cover.ts
  • src/error.ts
  • src/fallback.ts
  • src/image-resizer.ts
  • src/legacy-proxy.ts
  • src/logger.ts
  • src/proxy.ts
  • src/routes.ts
  • src/s3-store.ts
  • src/serve.ts
  • src/upload.ts
  • src/utils.ts
  • test/app.ts
  • test/avatar.ts
  • test/common.ts
  • test/cover.ts
  • test/index.ts
  • test/legacy-proxy.ts
  • test/proxy-formats.ts
  • test/proxy.ts
  • test/serve.ts
  • test/store.ts
  • test/tsconfig.json
  • test/upload.ts
  • test/utils.ts
  • tsconfig.json
  • tslint.json
💤 Files with no reviewable changes (2)
  • tslint.json
  • src/legacy-proxy.ts

Comment on lines +14 to +18
const port = 63205
const server = http.createServer(app.callback())

before((done) => { server.listen(port, 'localhost', done) })
after((done) => { server.close(done) })

⚠️ Potential issue | 🟠 Major

Avoid hard-coding the test port.

Binding to a fixed port (line 17) makes this suite flaky when another process already owns 63205 or when tests run in parallel. Bind to port 0 and capture the assigned port in before.

Possible fix
 describe('serve', function() {
-    const port = 63205
+    let baseUrl: string
     const server = http.createServer(app.callback())

-    before((done) => { server.listen(port, 'localhost', done) })
+    before((done) => {
+        server.listen(0, '127.0.0.1', () => {
+            const address = server.address()
+            if (!address || typeof address === 'string') {
+                done(new Error('failed to bind test server'))
+                return
+            }
+            baseUrl = `http://127.0.0.1:${address.port}`
+            done()
+        })
+    })
     after((done) => { server.close(done) })

Then replace the request URLs to use baseUrl.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/serve.ts` around lines 14 - 18, The test currently hard-codes port =
63205 and calls server.listen(port, 'localhost', ...) which can conflict; change
to listen on port 0 in the before hook, capture the assigned port from
server.address().port and build a baseUrl (e.g. `baseUrl =
'http://localhost:'+assignedPort`) for the suite, and update tests to use
baseUrl for request URLs; keep the server.close(done) in after as-is and ensure
the `port`, `server`, `before`, and `after` symbols are the ones updated.

Comment on lines +74 to +79
it('should reject non-GET methods', async function() {
// POST to /:hash/:filename matches /:username/:signature (upload route)
// so it returns an upload error, not 405
const res = await needle('post', `http://localhost:${port}/DQmSomeHash/test.jpg`, {})
assert(res.statusCode >= 400, 'should return error status')
})

⚠️ Potential issue | 🟠 Major

This does not actually verify the serve route's GET-only guard.

Line 75 already notes that the POST hits the upload route, so this test can still pass even if src/serve.ts:35-39 stops rejecting non-GET methods. Please target serveHandler directly or use a route/method combination that reaches the serve path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/serve.ts` around lines 74 - 79, The test "should reject non-GET methods"
currently POSTs to a path that matches the upload route, so it doesn't verify
the serve route's GET-only guard; update the test to target the serve handler
directly (import and call serveHandler) or change the request to a path/method
combination that can only match the serve route (so a POST to serveHandler
triggers the 405). Concretely, in test/serve.ts modify the case named "should
reject non-GET methods" to either invoke serveHandler with a mock request object
whose method is 'POST' and url '/DQmSomeHash/test.jpg' and assert a 405
response, or send the HTTP request to a URL that cannot match the upload route
so the request reaches serveHandler and you can assert the GET-only rejection.
