## 📝 Walkthrough

This PR modernizes a Node.js image hosting application by upgrading dependencies (Node 18 → 20), refactoring worker process management with cluster API changes, introducing Redis-based rate limiting and an S3 blob store abstraction, standardizing import syntax to default imports, and adding comprehensive test coverage across multiple endpoints.
### Sequence Diagram

```mermaid
sequenceDiagram
  participant Primary as Primary Process
  participant Worker as Worker Process
  participant Redis as Redis
  participant S3 as AWS S3
  Primary->>Primary: Fork worker
  Worker->>Redis: Connect & subscribe
  Redis-->>Worker: Ready
  Worker->>S3: Initialize S3Client
  Worker->>Worker: Listen on port
  Worker-->>Primary: Worker ready
  Primary->>Primary: Handle incoming request
  Primary-->>Worker: Pass to worker
  Worker->>Redis: getRatelimit(account, max, duration)
  Redis->>Redis: Check sorted set (sliding window)
  Redis-->>Worker: Return {remaining, reset, total}
  alt Rate limit exceeded
    Worker-->>Primary: 429 error
  else Within limit
    Worker->>S3: GET/PUT object
    S3-->>Worker: Data/confirmation
    Worker-->>Primary: 200 response
  end
  Worker-->>Primary: Error event
  Primary->>Primary: Throttle restart with backoff
  Primary->>Primary: Fork replacement worker
```
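The `getRatelimit` step in the diagram (a Redis sorted set used as a sliding window) can be sketched with an in-memory stand-in. This is a hypothetical illustration of the algorithm only: the `Map` replaces the Redis `ZADD`/`ZREMRANGEBYSCORE` calls, and the `{remaining, reset, total}` shape comes from the diagram, not from the PR's actual implementation.

```typescript
// In-memory sketch of the sliding-window rate limit from the diagram.
// A sorted array of timestamps stands in for the Redis sorted set
// (member = request, score = timestamp).
interface RatelimitResult {
  remaining: number
  reset: number   // ms until the oldest counted request leaves the window
  total: number
}

const windows = new Map<string, number[]>()  // account -> request timestamps

function getRatelimit(account: string, max: number, durationMs: number, now: number = Date.now()): RatelimitResult {
  const previous = windows.get(account) ?? []
  // Drop entries older than the window (the ZREMRANGEBYSCORE step).
  const live = previous.filter((t) => t > now - durationMs)
  if (live.length < max) {
    live.push(now)  // the ZADD step: count this request
  }
  windows.set(account, live)
  const reset = live.length > 0 ? live[0] + durationMs - now : 0
  return { remaining: Math.max(0, max - live.length), reset, total: max }
}
```

In the real service the same steps would run as a single Redis transaction (or Lua script) so that concurrent workers observe a consistent window.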
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (6)
Dockerfile (1)
**46-51:** Consider adding a non-root user for improved security posture.

The container runs as root by default. Adding a non-root user reduces the attack surface if the container is compromised.
🔒 Proposed fix to add non-root user
```diff
 EXPOSE 8800
 ENV PORT=8800
 ENV NODE_ENV=production
+
+RUN groupadd --gid 1000 node \
+      || true \
+    && useradd --uid 1000 --gid 1000 -m node \
+      || true
+
+USER node

 HEALTHCHECK --interval=20s --timeout=10s --start-period=5s \
   CMD wget -nv -t1 --spider "http://localhost:8800/healthcheck" || exit 1

 CMD ["node", "lib/app.js"]
```

Note: The `node:20-slim` image already includes a `node` user (uid 1000), so you may simplify to just `USER node` after verifying file ownership in the COPY steps.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 46 - 51, Add a non-root runtime user and ensure application files are owned by that user: when building the image, change ownership of the copied app files (e.g., lib, package.json, node_modules or the app folder created during COPY) to a non-root user (use the existing node user from the base image or create a user/group) and then switch to that user before the final CMD; keep the HEALTHCHECK as-is but ensure the non-root user has read/execute permissions on the app and healthcheck endpoints. Target symbols: Dockerfile ENV NODE_ENV, HEALTHCHECK, CMD ["node", "lib/app.js"], and the COPY steps that place lib/ and other app files so you can add chown and a final USER node (or USER <created-user>) instruction.

src/app.ts (1)
**67-68:** Consider adding error handling to the `listen` Promise wrapper.

The current implementation doesn't handle `listen` errors explicitly. While most failures throw synchronously, some (like port-in-use on certain platforms) may not. Consider adding error event handling for robustness:

♻️ Optional: Add error handling
```diff
-  const listen = (port: any) => new Promise<void>((resolve) => server.listen(port, resolve))
+  const listen = (port: any) => new Promise<void>((resolve, reject) => {
+    server.once('error', reject)
+    server.listen(port, () => {
+      server.removeListener('error', reject)
+      resolve()
+    })
+  })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app.ts` around lines 67 - 68, The listen Promise wrapper lacks runtime error handling — update the listen function to attach an 'error' listener to server when calling server.listen so that port-in-use and async listen errors reject the Promise; also listen for the 'listening' (or use the listen callback) to resolve and ensure you remove the error listener after success or failure to avoid leaks. Locate the listen function and replace the simple new Promise(() => server.listen(...)) with a Promise that registers server.once('listening', resolve) and server.once('error', err => reject(err)) (and cleans up the counterpart listener on either path). Keep the existing close Promise as-is.
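To illustrate the failure mode this comment describes, here is a self-contained sketch of the rejecting `listen` wrapper: binding a second server to an already-taken port surfaces `EADDRINUSE` through the Promise instead of an unhandled `'error'` event. The `demo` helper is illustrative, not code from the PR.

```typescript
import { createServer, Server } from 'node:http'

// Promise wrapper that surfaces asynchronous listen errors (e.g. EADDRINUSE)
// instead of leaving them as unhandled 'error' events.
function listen(server: Server, port: number): Promise<void> {
  return new Promise((resolve, reject) => {
    server.once('error', reject)
    server.listen(port, () => {
      server.removeListener('error', reject)
      resolve()
    })
  })
}

async function demo(): Promise<string> {
  const a = createServer()
  const b = createServer()
  await listen(a, 0)  // ephemeral port
  const port = (a.address() as { port: number }).port
  try {
    await listen(b, port)  // same port: the Promise should reject
    return 'no-error'
  } catch (err) {
    return (err as NodeJS.ErrnoException).code ?? 'unknown'
  } finally {
    a.close()
  }
}
```

The cleanup of the `'error'` listener on success matters: without it, every successful `listen` would leave a dangling rejection handler on the server.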
src/utils.ts (1)

**259-272:** Fire-and-forget cache purge is intentional, but consider logging success.

The Cloudflare cache purge correctly uses a fire-and-forget pattern since it's a non-critical operation. Error handling logs failures appropriately. The native `fetch` API is available in Node 18+, matching the engine requirement.

Consider adding a debug log on success for observability, though this is optional given the non-critical nature of cache purging.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils.ts` around lines 259 - 272, Add a success/debug log for the Cloudflare purge fetch call so successful responses are observable; inside the fetch .then handler where it currently checks if (!res.ok) and calls logger.error, add an else branch that calls logger.debug or logger.info with context (e.g., { status: res.status, url: value }) and a message like 'Cloudflare cache purge success' so successes are recorded while keeping the fire-and-forget behavior and existing logger.error/logger.error(...) usage for failures and .catch intact.
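A minimal sketch of the suggested success log, with the fetch function injected so the fire-and-forget pattern can be exercised offline. The `purgeCache` name, logger shape, and message strings are assumptions for illustration; the project's actual purge call and logger API may differ.

```typescript
// Fire-and-forget cache purge with a success log added, per the review.
// `fetchFn` is injected so the pattern can be tested without the network.
type FetchLike = (url: string, init?: { method?: string }) => Promise<{ ok: boolean; status: number }>

interface Logger {
  debug(msg: string, ctx?: object): void
  error(msg: string, ctx?: object): void
}

function purgeCache(url: string, fetchFn: FetchLike, logger: Logger): void {
  // Intentionally not awaited by callers: purging is non-critical.
  fetchFn(url, { method: 'POST' }).then((res) => {
    if (!res.ok) {
      logger.error('Cloudflare cache purge failed', { status: res.status, url })
    } else {
      logger.debug('Cloudflare cache purge success', { status: res.status, url })
    }
  }).catch((err) => {
    logger.error('Cloudflare cache purge error', { err, url })
  })
}
```

Injecting `fetchFn` also makes the success branch straightforward to assert in unit tests, which is otherwise awkward for fire-and-forget code.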
src/common.ts (1)

**191-193:** Type assertion masks undefined stores in the primary process.

The pattern `undefined as any as AbstractBlobStore` hides the fact that stores are undefined in the primary process. While the primary only manages workers and shouldn't handle HTTP requests, this could cause confusing runtime errors during testing or future refactoring.

Consider documenting this behavior or using a more explicit pattern:
♻️ Alternative: Export with explicit undefined type
```diff
-export const uploadStore = isPrimaryOnly ? (undefined as any as AbstractBlobStore) : loadStore('upload_store')
-export const proxyStore = isPrimaryOnly ? (undefined as any as AbstractBlobStore) : loadStore('proxy_store')
+export const uploadStore: AbstractBlobStore | undefined = isPrimaryOnly ? undefined : loadStore('upload_store')
+export const proxyStore: AbstractBlobStore | undefined = isPrimaryOnly ? undefined : loadStore('proxy_store')
```

Note: This would require updating call sites to handle the undefined case, which adds safety but more code changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/common.ts` around lines 191 - 193, The current exports uploadStore and proxyStore use "undefined as any as AbstractBlobStore" which masks that stores are actually undefined in the primary process; change these exports to have an explicit nullable type (e.g., AbstractBlobStore | undefined) instead of forcing a cast, update the initialization to export undefined when isPrimaryOnly is true (keeping loadStore('upload_store') / loadStore('proxy_store') for non-primary), and then update all call sites that use uploadStore or proxyStore to handle the undefined case (guard or throw with clear message) so runtime errors are explicit; reference symbols: uploadStore, proxyStore, AbstractBlobStore, loadStore, isPrimaryOnly.
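One way to make the undefined case explicit at call sites is a small guard that fails with a clear message in the primary process. This is a sketch under the review's suggested `AbstractBlobStore | undefined` typing; the interface here is a minimal stand-in, not the project's real one.

```typescript
// Minimal stand-in for the project's blob store interface.
interface AbstractBlobStore {
  exists(key: string): Promise<boolean>
}

// Guard for call sites: returns the store or throws with a clear message,
// instead of letting an `undefined` cast produce confusing runtime errors.
function requireStore(store: AbstractBlobStore | undefined, name: string): AbstractBlobStore {
  if (store === undefined) {
    throw new Error(`${name} is not available in the primary process`)
  }
  return store
}
```

A call site would then read `requireStore(uploadStore, 'uploadStore').exists(key)`, so a misrouted request in the primary process fails with a named error rather than a `TypeError` on `undefined`.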
src/s3-store.ts (1)

**24-27:** Consider defensive handling for the `res.Body` type.

In AWS SDK v3, `GetObjectCommand` returns `Body` as `Readable | ReadableStream | Blob | undefined`. While in Node.js it's typically a `Readable`, adding a runtime check would be safer.

🛡️ Optional defensive check
```diff
 })).then((res) => {
-  const body = res.Body as Readable
-  body.on('error', (err) => passthrough.destroy(err))
-  body.pipe(passthrough)
+  if (!res.Body) {
+    passthrough.destroy(new Error('Empty response body from S3'))
+    return
+  }
+  const body = res.Body as Readable
+  body.on('error', (err) => passthrough.destroy(err))
+  body.pipe(passthrough)
 }).catch((err) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/s3-store.ts` around lines 24 - 27, The code assumes res.Body is a Node Readable when piping to passthrough; add a runtime defensive check on res.Body (the result from GetObjectCommand) before calling body.on/body.pipe: if it's undefined throw or callback an error, if it's a Node.js Readable use body.on('error', ...) and body.pipe(passthrough), and for other possible types (ReadableStream/Blob) convert them to a Node Readable (or stream the chunks) before piping; update the logic around res.Body and the passthrough usage so you never call body.on or body.pipe on an invalid type.
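The defensive check can be taken one step further by normalizing whatever the SDK hands back into a Node `Readable`. This sketch assumes Node 17+ (for `Readable.fromWeb`); the type probes are illustrative rather than the PR's code.

```typescript
import { Readable } from 'node:stream'

// Normalize an S3 GetObject Body into a Node Readable:
// - a Node Readable passes through,
// - a web ReadableStream is converted with Readable.fromWeb,
// - a Blob-like value is converted via its .stream(),
// - undefined/null becomes an explicit error instead of a TypeError.
function toNodeReadable(body: unknown): Readable {
  if (body === undefined || body === null) {
    throw new Error('Empty response body from S3')
  }
  if (body instanceof Readable) {
    return body
  }
  if (typeof (body as any).getReader === 'function') {
    // Web ReadableStream (Readable.fromWeb exists since Node 17)
    return Readable.fromWeb(body as any)
  }
  if (typeof (body as any).stream === 'function') {
    // Blob-like: convert its underlying web stream
    return Readable.fromWeb((body as any).stream())
  }
  throw new Error('Unsupported S3 body type')
}
```

With this helper, the `.then` handler reduces to `toNodeReadable(res.Body).on('error', ...).pipe(passthrough)` with no unchecked cast.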
test/store.ts (1)

**10-10:** Isolate test artifacts more strongly.

These tests write into the configured stores with `Date.now()`-only keys and never remove the created blobs. On shared or persistent backends, that makes parallel runs harder to isolate and leaves stale test data behind. A per-run prefix plus teardown cleanup would make this suite much more reliable.

Also applies to: 23-23, 28-28, 36-36, 48-48, 57-57
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/store.ts` at line 10, Replace the Date.now()-only blob keys with a per-run unique prefix and ensure created blobs are removed in teardown: introduce a single TEST_RUN_ID (e.g., from crypto.randomUUID() or a timestamp+random suffix) used to build keys instead of the inline `const key = \`test-write-\${Date.now()}\`` (and update the other occurrences at the noted lines), and add cleanup logic in the test suite teardown (afterAll/afterEach) that iterates over and deletes any keys created with that TEST_RUN_ID from the configured store so parallel runs are isolated and no stale blobs remain.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 228fa6d9-7cd1-4878-ab58-50a014c9414b
⛔ Files ignored due to path filters (2)
- `test/test.gif` is excluded by `!**/*.gif`
- `test/test.png` is excluded by `!**/*.png`
📒 Files selected for processing (36)
`Dockerfile`, `Makefile`, `package.json`, `src/app.ts`, `src/avatar.ts`, `src/blacklist-service.ts`, `src/cache.ts`, `src/common.ts`, `src/constants.ts`, `src/cover.ts`, `src/error.ts`, `src/fallback.ts`, `src/image-resizer.ts`, `src/legacy-proxy.ts`, `src/logger.ts`, `src/proxy.ts`, `src/routes.ts`, `src/s3-store.ts`, `src/serve.ts`, `src/upload.ts`, `src/utils.ts`, `test/app.ts`, `test/avatar.ts`, `test/common.ts`, `test/cover.ts`, `test/index.ts`, `test/legacy-proxy.ts`, `test/proxy-formats.ts`, `test/proxy.ts`, `test/serve.ts`, `test/store.ts`, `test/tsconfig.json`, `test/upload.ts`, `test/utils.ts`, `tsconfig.json`, `tslint.json`
💤 Files with no reviewable changes (2)
- tslint.json
- src/legacy-proxy.ts
```ts
const port = 63205
const server = http.createServer(app.callback())

before((done) => { server.listen(port, 'localhost', done) })
after((done) => { server.close(done) })
```
**Avoid hard-coding the test port.**

Binding to a fixed port (line 17) makes this suite flaky when another process already owns 63205 or when tests run in parallel. Bind to port 0 and capture the assigned port in `before`.
Possible fix
```diff
 describe('serve', function() {
-  const port = 63205
+  let baseUrl: string
   const server = http.createServer(app.callback())
-  before((done) => { server.listen(port, 'localhost', done) })
+  before((done) => {
+    server.listen(0, '127.0.0.1', () => {
+      const address = server.address()
+      if (!address || typeof address === 'string') {
+        done(new Error('failed to bind test server'))
+        return
+      }
+      baseUrl = `http://127.0.0.1:${address.port}`
+      done()
+    })
+  })
   after((done) => { server.close(done) })
```

Then replace the request URLs to use `baseUrl`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/serve.ts` around lines 14 - 18, The test currently hard-codes port =
63205 and calls server.listen(port, 'localhost', ...) which can conflict; change
to listen on port 0 in the before hook, capture the assigned port from
server.address().port and build a baseUrl (e.g. `baseUrl =
'http://localhost:'+assignedPort`) for the suite, and update tests to use
baseUrl for request URLs; keep the server.close(done) in after as-is and ensure
the `port`, `server`, `before`, and `after` symbols are the ones updated.
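As a runnable illustration of the suggested fix, this sketch binds to port 0, derives `baseUrl` from the assigned port, and rejects if the address is unavailable. The stub request handler stands in for `app.callback()`.

```typescript
import { createServer } from 'node:http'

// Stub handler standing in for the real Koa app.callback().
const server = createServer((_req, res) => { res.end('ok') })

// Bind to an ephemeral port and return the baseUrl tests should use.
function startOnEphemeralPort(): Promise<string> {
  return new Promise((resolve, reject) => {
    server.listen(0, '127.0.0.1', () => {
      const address = server.address()
      if (!address || typeof address === 'string') {
        reject(new Error('failed to bind test server'))
        return
      }
      resolve(`http://127.0.0.1:${address.port}`)
    })
  })
}
```

In a Mocha suite, the `before` hook would await this and stash `baseUrl` for the tests, with `server.close(done)` in `after` unchanged.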
```ts
it('should reject non-GET methods', async function() {
  // POST to /:hash/:filename matches /:username/:signature (upload route)
  // so it returns an upload error, not 405
  const res = await needle('post', `http://localhost:${port}/DQmSomeHash/test.jpg`, {})
  assert(res.statusCode >= 400, 'should return error status')
})
```
**This does not actually verify the serve route's GET-only guard.**

The comment on line 75 already notes that the POST hits the upload route, so this test can still pass even if `src/serve.ts:35-39` stops rejecting non-GET methods. Please target `serveHandler` directly or use a route/method combination that reaches the serve path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/serve.ts` around lines 74 - 79, The test "should reject non-GET methods"
currently POSTs to a path that matches the upload route, so it doesn't verify
the serve route's GET-only guard; update the test to target the serve handler
directly (import and call serveHandler) or change the request to a path/method
combination that can only match the serve route (so a POST to serveHandler
triggers the 405). Concretely, in test/serve.ts modify the case named "should
reject non-GET methods" to either invoke serveHandler with a mock request object
whose method is 'POST' and url '/DQmSomeHash/test.jpg' and assert a 405
response, or send the HTTP request to a URL that cannot match the upload route
so the request reaches serveHandler and you can assert the GET-only rejection.
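Testing the guard in isolation could look like the sketch below, where a mock context is passed straight to a handler. The `serveHandler` here is a stand-in that reproduces only the GET-or-405 behavior the review describes; it is not the project's actual handler.

```typescript
// Minimal mock of a Koa-like context: just enough to exercise a method guard.
interface MockCtx {
  method: string
  url: string
  status: number
  body?: string
}

// Stand-in handler with the GET-only guard under test.
async function serveHandler(ctx: MockCtx): Promise<void> {
  if (ctx.method !== 'GET') {
    ctx.status = 405
    ctx.body = 'Method Not Allowed'
    return
  }
  ctx.status = 200
  ctx.body = 'served'
}
```

Calling the handler directly means the assertion fails the moment the guard is removed, instead of passing by accident because some other route rejected the request first.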
## Summary by CodeRabbit

**Release Notes**

- New Features
- Infrastructure & Maintenance
- Tests