feat: restructure monorepo, add shared package, and background remover #17
gaureshpai wants to merge 60 commits into main from
Conversation
- Introduce 'package/' directory for shared CLI and core utilities.
- Centralize dependency management using pnpm workspace catalogs.
- Add ImageBackgroundRemover component to the frontend.
- Update backend and frontend to align with the new workspace structure.
- Replace markdownlint with Biome and refine Turbo tasks.
- Perform a global refactor of routes, middleware, and UI components.
📝 Walkthrough
Unconditionally bump package/website/frontend/backend versions in CI; add module-aliasing and stricter startup checks; replace sharp with Jimp and sanitize filenames; tighten DNS/IP, redirect, and favicon flows; gate Supabase cleanup with cron secret and admin JWT; add TotalUsage migration and many frontend alias/refactor and validation changes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant CI as "CI Workflow"
  participant Caller as "Cron / External Caller"
  participant API as "Backend API"
  participant Secret as "CronSecretMiddleware"
  participant Auth as "JWTAuthMiddleware"
  participant Cleaner as "cleanSupabaseStorage"
  participant Storage as "Supabase Storage"
  CI->>API: POST /api/clean-supabase/trigger (x-cron-secret)
  API->>Secret: validate x-cron-secret
  Secret-->>API: allowed / rejected
  Caller->>API: POST /api/clean-supabase (x-auth-token)
  API->>Auth: verify JWT & admin role
  Auth-->>API: authorized / unauthorized
  API->>Cleaner: invoke cleanSupabaseStorage()
  Cleaner->>Storage: list objects
  Storage-->>Cleaner: objects list
  Cleaner->>Storage: remove old files
  Storage-->>Cleaner: deletion results
  Cleaner-->>API: report status
  API-->>Caller: 200 / 401 / 403 / 500
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
- Security:
  - Implemented robust filename sanitization for all file uploads.
  - Added UI warnings for insecure MD5 hash generation.
  - Added validation/length limits for analytics tracking inputs.
  - Switched API tracking to use req.path to prevent query string leakage.
- Reliability:
  - Added 5s timeouts and 1MB size limits to outbound redirect checks.
  - Improved frontend state management to clear file selections on validation failure.
  - Added try/finally blocks to ensure loading states are always reset.
  - Improved error handling for image resizing and base64 conversion.
- Media Processing:
  - Integrated @jimp/wasm-webp and @jimp/wasm-avif for improved format support.
  - Fixed background removal configuration for node environments.
  - Updated Jimp usage to use modern buffer reading patterns.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Add SSRF protection to redirect checker with DNS validation
- Add auth middleware to clean-supabase endpoint
- Fix TotalUsage model key field
- Update Jimp library usage with defaultFormats/plugins
- Fix PDF compression options
- Various frontend component fixes (error handling, quality clamping, image loading check, diff display, timeout cleanup)
Docstrings generation was requested by @gaureshpai. The following files were modified:
- `backend/routes/redirectChecker.js`

These files were kept as they were:
- `frontend/src/App.jsx`

These file types are not supported:
- `.github/workflows/publish.yml`
- `.github/workflows/trigger-supabase-cleanup.yml`
- `README.md`
- `backend/docs/api-documentation.json`
- `backend/package.json`
- `biome.json`
- `frontend/package.json`
- `frontend/public/manifest.json`
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
- Backend: Enhanced API documentation, added route-level authentication, and improved error handling for PDF and office conversion.
- Frontend: Added request cancellation and debouncing for password strength checks, improved password generation logic, and added file size limit validation based on auth state.
- CI/CD: Updated publish workflow to ensure version consistency across packages.
- Backend: Standardized middleware, updated analytics and auth routes, and refined utility functions.
- Frontend: Major component overhaul including UI improvements, tool logic updates, and context refactoring.
- Package (CLI): Updated core logic, interactive prompts, and utility functions for branding and file handling.
- Tests: Updated distribution tests to reflect structural changes.
Docstrings generation was requested by @gaureshpai. The following files were modified:
- `backend/routes/redirectChecker.js`
- `frontend/src/App.jsx`

These file types are not supported:
- `.github/workflows/publish.yml`
- `.github/workflows/trigger-supabase-cleanup.yml`
- `README.md`
- `backend/docs/README.md`
- `backend/docs/api-documentation.json`
- `backend/package.json`
- `biome.json`
- `frontend/package.json`
- `frontend/public/manifest.json`
Greptile Summary

This is a large monorepo restructuring PR that introduces a shared package. Key changes and findings:

Confidence Score: 2/5 — Not safe to merge as-is; the migration script bug prevents the server from starting. The variable-scope bug in `backend/scripts/migrateTotalUsageKey.js` throws a `ReferenceError` on every run, so startup aborts.
| Filename | Overview |
|---|---|
| backend/scripts/migrateTotalUsageKey.js | New migration script with a critical scoping bug: matchingDocs and totalCount are declared with const inside the try block but referenced in the return statement outside it, causing a ReferenceError on every execution and preventing the server from starting. |
| backend/routes/redirectChecker.js | Significantly hardened against SSRF and DNS-rebinding: URL/IP validation, custom pinned DNS agents for HEAD/GET requests, redirect-chain validation, and multicast/loopback/link-local checks all added correctly. |
| backend/routes/favicon.js | Main URL is validated before fetch, but downloadFile still makes the actual axios.get without DNS-pinned agents, leaving a DNS-rebinding window that redirectChecker.js fixed in this same PR. |
| backend/routes/cleanSupabase.js | Added dual-auth protection (admin JWT and cron secret) for cleanup endpoints; cron secret comparison uses a non-timing-safe !== operator instead of crypto.timingSafeEqual. |
| backend/utils/ipValidation.js | Shared SSRF-prevention utility; covers private, loopback, link-local, CGNAT, documentation, and multicast ranges for both IPv4 and IPv6, with robust IPv4-mapped IPv6 normalization. |
| backend/utils/filenameSanitizer.js | New utility that safely strips path traversal, reserved characters, and over-long names from uploaded filenames before they are stored in Supabase. |
| backend/routes/seoTools.js | Uses DNS validation with pinned lookups in fetchContent and disables redirects to prevent SSRF chains; the validateDomain function checks all resolved IPs against private ranges. |
| backend/routes/shortener.js | URL format validation is present, but private/internal IP ranges are not blocked; a stored URL that resolves to a private address can be used to redirect users to internal network pages. |
| frontend/src/components/ImageBackgroundRemover.jsx | New client-side background-removal component using @imgly/background-removal; includes file-type validation, loading state, download link cleanup, and analytics tracking. |
| backend/middleware/uploadLimiter.js | Correctly sets per-request file-size limits based on auth status (10 MB anonymous, 50 MB authenticated) and restricts accepted MIME types. |
| backend/index.js | Server startup now validates required env vars at boot and runs migrateTotalUsageKey before accepting connections — but because of the scoping bug in that migration, the server currently cannot start. |
Reviews (1): Last reviewed commit: "📝 Add docstrings to `feat/monorepo-rest..."
```js
const indexes = await TotalUsage.collection.indexes();
const keyIndex = indexes.find((index) => index.name === "key_1");
if (keyIndex && !keyIndex.unique) {
  await TotalUsage.collection.dropIndex("key_1");
}
await TotalUsage.collection.createIndex({ key: 1 }, { unique: true, name: "key_1" });

// …

return {
  migrated: matchingDocs.length > 0,
  processed: matchingDocs.length,
  totalCount,
};
```
Variables out of scope at `return` statement — server will not start
matchingDocs and totalCount are declared with const inside the try { } block (lines 22 and 29), but they are referenced in the return statement here, which is outside that block. In JavaScript, const/let declarations are block-scoped. Accessing them here will always throw ReferenceError: matchingDocs is not defined once the try block completes successfully.
Because migrateTotalUsageKey is awaited inside startServer() (in index.js line 160), this error propagates up and causes process.exit(1) every time — the server cannot start.
Fix: declare matchingDocs and totalCount before the try block with let, then assign them inside it, so they remain in scope for the return statement.
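A minimal sketch of that fix, with the real query and migration work reduced to labeled placeholders:

```javascript
// Declare with `let` before the try block so the values remain in scope
// for the final return statement. The bodies below are placeholders for
// the script's actual query and count.
async function migrateTotalUsageKey() {
  let matchingDocs = [];
  let totalCount = 0;
  try {
    // Placeholder for: matchingDocs = await TotalUsage.find(...)
    matchingDocs = [{ _id: "legacy-key" }];
    // Placeholder for: totalCount = await TotalUsage.countDocuments()
    totalCount = 1;
  } catch (error) {
    console.error("TotalUsage key migration failed:", error.message);
    throw error; // startServer() decides whether to exit
  }
  // Both variables are still in scope here, unlike the const-inside-try version.
  return {
    migrated: matchingDocs.length > 0,
    processed: matchingDocs.length,
    totalCount,
  };
}
```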
```js
const downloadFile = async (fileUrl) => {
  try {
    // Validate the URL before making the request
    await validateUrl(fileUrl);

    const response = await axios.get(fileUrl, {
      responseType: "arraybuffer",
      maxRedirects: 0, // Disable redirects to prevent SSRF chains
      timeout: 5000,
    });
    return {
      buffer: Buffer.from(response.data),
      contentType: response.headers["content-type"],
    };
  } catch (error) {
    console.error(`Failed to download file from ${fileUrl}:`, error.message);
    return null;
  }
```
DNS rebinding risk — HTTP request made without DNS pinning
downloadFile calls validateUrl(fileUrl) to check that the hostname does not resolve to a private IP, but the subsequent axios.get() call has no custom HTTP agent and therefore performs its own DNS resolution. Between the two lookups an attacker who controls the DNS record (short TTL) can change it to point to an internal address, bypassing the SSRF guard — a classic TOCTOU / DNS-rebinding pattern.
redirectChecker.js in this same PR correctly addresses this by using createPinnedAgents, which pins the HTTP/HTTPS agent to the addresses validated upfront. favicon.js should do the same.
validateUrl in favicon.js currently returns true rather than { hostname, safeAddresses }, so it would also need to be updated to match the richer return value used in redirectChecker.js.
```js
const secret = req.header("x-cron-secret");
const expectedSecret = process.env.SUPABASE_CLEANUP_CRON_SECRET;

if (!secret || !expectedSecret || secret !== expectedSecret) {
```
Timing-unsafe secret comparison
secret !== expectedSecret uses JavaScript's built-in string equality, which short-circuits on the first mismatched character. This makes the comparison vulnerable to timing side-channel attacks. Use crypto.timingSafeEqual(Buffer.from(secret), Buffer.from(expectedSecret)) instead, after checking both buffers are the same length.
```js
router.get("/l/:code", async (req, res) => {
  try {
    const url = await Url.findOne({ urlCode: req.params.code });

    if (url) {
      return res.redirect(url.originalUrl);
    }
    return res.status(404).json("No url found");
  } catch (err) {
    console.error(err);
    return res
      .status(500)
      .json({ msg: "Server error during URL redirection." });
  }
```
Shortened URLs may redirect to private or internal addresses
The URL format-only regex on line 21 does not block hostnames that resolve to private/reserved IP ranges. Once stored, the redirect handler issues res.redirect(url.originalUrl) without further validation, so a user can create a short link that sends a victim's browser to internal network pages.
Consider resolving the submitted hostname via DNS and running each resolved address through isPrivateIP (already available in @backend/utils/ipValidation) before persisting the record.
Actionable comments posted: 11
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
♻️ Duplicate comments (6)
frontend/src/components/PasswordGenerator.jsx (1)
50-52: ⚠️ Potential issue | 🟠 Major — Avoid `%` when mapping Web Crypto output to an index.
Lines 52, 60, 68, 76, 95, and 100 still turn `crypto.getRandomValues()` output into an index with `%`. That makes both character selection and the Fisher–Yates shuffle slightly biased, which is not great for a password generator. Please route all index selection through one rejection-sampling helper instead.
🔐 Proposed fix
```diff
+const getUniformIndex = (max) => {
+  const limit = 0x100000000 - (0x100000000 % max);
+  const buffer = new Uint32Array(1);
+  do {
+    crypto.getRandomValues(buffer);
+  } while (buffer[0] >= limit);
+  return buffer[0] % max;
+};
+
 if (includeUppercase) {
   charset += charsets.uppercase;
   requiredChars.push(
-    charsets.uppercase.charAt(
-      crypto.getRandomValues(new Uint32Array(1))[0] % charsets.uppercase.length,
-    ),
+    charsets.uppercase.charAt(getUniformIndex(charsets.uppercase.length)),
   );
 }
 if (includeLowercase) {
   charset += charsets.lowercase;
   requiredChars.push(
-    charsets.lowercase.charAt(
-      crypto.getRandomValues(new Uint32Array(1))[0] % charsets.lowercase.length,
-    ),
+    charsets.lowercase.charAt(getUniformIndex(charsets.lowercase.length)),
   );
 }
 if (includeNumbers) {
   charset += charsets.numbers;
   requiredChars.push(
-    charsets.numbers.charAt(
-      crypto.getRandomValues(new Uint32Array(1))[0] % charsets.numbers.length,
-    ),
+    charsets.numbers.charAt(getUniformIndex(charsets.numbers.length)),
   );
 }
 if (includeSymbols) {
   charset += charsets.symbols;
   requiredChars.push(
-    charsets.symbols.charAt(
-      crypto.getRandomValues(new Uint32Array(1))[0] % charsets.symbols.length,
-    ),
+    charsets.symbols.charAt(getUniformIndex(charsets.symbols.length)),
   );
 }
-const randomValues = new Uint32Array(validLength - requiredChars.length);
-crypto.getRandomValues(randomValues);
 const newPassword = [...requiredChars];
 for (let i = 0; i < validLength - requiredChars.length; i++) {
-  newPassword.push(charset.charAt(randomValues[i] % charset.length));
+  newPassword.push(charset.charAt(getUniformIndex(charset.length)));
 }
 for (let i = newPassword.length - 1; i > 0; i--) {
-  const j = crypto.getRandomValues(new Uint32Array(1))[0] % (i + 1);
+  const j = getUniformIndex(i + 1);
   [newPassword[i], newPassword[j]] = [newPassword[j], newPassword[i]];
 }
```

Also applies to: 58-60, 66-68, 74-76, 94-100
backend/routes/analytics.js (1)
35-36: ⚠️ Potential issue | 🟠 Major — Keep public analytics on an explicit tool whitelist.
`HtmlToMarkdownConverter:copy` still survives this filter, so `/stats` and `/popular` can expose an internal event as if it were a public tool. Deriving the public set by exclusion is too brittle here; this should be an explicit public-tool map/metadata flag instead.
Also applies to: 81-86
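A minimal sketch of the inclusion approach (the event names and shape here are illustrative, not the app's actual tool list):

```javascript
// Explicit allow-list: only events named here are ever exposed publicly.
// A new internal event is private by default, instead of public by accident.
const PUBLIC_TOOL_EVENTS = new Set([
  "ImageFormatConverter",
  "PasswordGenerator",
]);

function filterPublicEvents(events) {
  return events.filter((event) => PUBLIC_TOOL_EVENTS.has(event.tool));
}
```

The design point is fail-closed defaults: an exclusion list silently leaks anything nobody remembered to exclude, while an inclusion list leaks nothing that wasn't deliberately added.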
backend/docs/README.md (1)
437-445: ⚠️ Potential issue | 🟡 Minor — The 401 example is still tripping MD046.
Line 441 is still indented such that markdownlint reads this as an indented code block instead of a fenced one, so the docs stay lint-noisy.
📝 Suggested change
````diff
-  - **401** - Unauthorized
-  - Content-Type: `application/json`
-  - Schema:
-    ```json
-    {
-      "msg": "Invalid or missing cron secret"
-    }
-    ```
+  ```json
+  {
+    "msg": "Invalid or missing cron secret"
+  }
+  ```
````
Verify each finding against the current code and only fix it if needed. In `@backend/docs/README.md` around lines 437 - 445, The 401 response example in README.md is formatted as an indented code block triggering MD046; convert the indented JSON to a proper fenced block by moving the opening fence to its own line with the language tag (```json) and outdenting the subsequent JSON lines so the closing fence (```) is also on its own line; update the "401 - Unauthorized" example (the JSON snippet showing {"msg":"Invalid or missing cron secret"}) to use the fenced ```json block style instead of the indented block.backend/routes/favicon.js (2)
16-45: ⚠️ Potential issue | 🔴 Critical — Validation still isn't bound to the actual connection target.
`validateUrl()` proves that one DNS lookup returned public addresses, but the later `axios.get(...)` calls re-resolve the hostname when connecting. A rebinding host can still pass this preflight and then send the real request to a private IP. Return the vetted address data from here and pin both outbound requests to it via a custom `lookup` or an IP URL plus `Host`/SNI configuration.
Verify each finding against the current code and only fix it if needed. In `@backend/routes/favicon.js` around lines 16 - 45, The current validateUrl(url) only checks DNS results but doesn't bind the later HTTP/TLS connection to those vetted IPs; change validateUrl to return the resolved IP(s) and the original hostname (e.g., return {hostname, addresses}) and update callers (the axios requests) to use those addresses: either create a custom lookup function that returns the vetted address for the hostname (pass to axios via http(s).Agent lookup option) or construct an IP-based URL and set the original hostname in the Host header and SNI (servername) option so the outbound connection is pinned to the vetted IPs; ensure validateUrl still enforces protocol and private-IP checks before returning the vetted data.
59-63: ⚠️ Potential issue | 🟠 Major — Don't treat ordinary 3xx responses as hard failures.
Both the site fetch and the favicon downloads disable redirects and only accept direct 2xx responses. That breaks common flows like apex→www, http→https, and CDN-hosted favicon redirects, so valid sites now fail extraction. Follow redirects manually with the same validation/pinning you apply to the initial URL.
Also applies to: 118-122
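The manual-follow loop can be sketched as below; `fetchOnce` and `validateUrl` are injected, hypothetical helpers (not the PR's actual functions) so the policy is visible and testable without network access:

```javascript
// Follow redirects by hand, re-validating the target at every hop so a
// redirect chain cannot escape the SSRF checks applied to the first URL.
async function fetchWithValidatedRedirects(url, fetchOnce, validateUrl, maxHops = 5) {
  let current = url;
  for (let hop = 0; hop <= maxHops; hop++) {
    await validateUrl(current); // validate every hop, not just the first
    const response = await fetchOnce(current);
    if (response.status >= 300 && response.status < 400 && response.headers.location) {
      // Resolve relative Location headers against the current URL.
      current = new URL(response.headers.location, current).toString();
      continue;
    }
    return response;
  }
  throw new Error("Too many redirects");
}
```

In the real route, `fetchOnce` would be the pinned-agent axios call with `maxRedirects: 0`, and `validateUrl` the DNS/IP check already in the file.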
backend/routes/pdfConverter.js (1)
288-298: ⚠️ Potential issue | 🟠 Major — `high` still behaves the same as `medium`.
`objectsPerTick` only changes serialization scheduling in pdf-lib; it does not produce smaller bytes. With both cases setting `useObjectStreams = true`, the API advertises a compression distinction it can't actually deliver.
🔧 Minimal fix
```diff
 switch (compressionLevel) {
   case "low":
     compressionOptions.useObjectStreams = false;
     break;
   case "medium":
-    compressionOptions.useObjectStreams = true;
-    break;
   case "high":
     compressionOptions.useObjectStreams = true;
-    compressionOptions.objectsPerTick = 50;
     break;
 }
```
Verify each finding against the current code and only fix it if needed. In `@backend/routes/pdfConverter.js` around lines 288 - 298, The "high" branch currently only sets objectsPerTick which doesn't change output size, so it falsely advertises stronger compression; update the switch handling for compressionLevel (the compressionOptions object configured in the switch) so "high" does not rely on objectsPerTick for size reduction — either collapse "high" into the same settings as "medium" (set compressionOptions.useObjectStreams = true and remove objectsPerTick) or implement a real size-reducing step (e.g., invoke a PDF optimization/minification routine or a true compression flag) so that the behavior matches the API promise; adjust the case for "high" in the switch that manipulates compressionOptions accordingly and remove or document objectsPerTick usage.
🟡 Minor comments (13)
frontend/src/components/ImageFormatConverter.jsx-101-110 (1)
101-110: ⚠️ Potential issue | 🟡 Minor
Prevent duplicate in-flight conversions in submit handler.
A rapid double-submit can still trigger two requests before the disabled state is reflected. Add an early guard when `loading` is already true.

🔧 Suggested patch
```diff
 const onSubmit = async (e) => {
   e.preventDefault();
+  if (loading) return;
   if (selectedFiles.length === 0) {
     toast.error("Please select at least one image file.");
     return;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageFormatConverter.jsx` around lines 101 - 110, The onSubmit handler can be invoked twice before the UI disables, so add an early guard at the top of the onSubmit function to immediately return (and optionally show a toast) when the component-level loading state is already true; in practice check the loading variable before doing e.preventDefault() or any work, and only proceed to setLoading(true) and build FormData if loading is false (references: onSubmit, loading, setLoading).

backend/routes/textConverter.js-4-7 (1)
4-7: ⚠️ Potential issue | 🟡 Minor
Reject effectively empty decode input (`" "`) after normalization.

For `type === "decode"`, the `text.length` check on Line 18 allows whitespace-only payloads, and the helper currently trims during comparison. Treat normalized empty input as invalid before decoding.

💡 Suggested fix
```diff
 const isValidBase64 = (str) => {
+  const normalized = str.trim();
+  if (normalized.length === 0) {
+    return false;
+  }
   try {
-    return Buffer.from(str, "base64").toString("base64") === str.trim();
+    const decoded = Buffer.from(normalized, "base64");
+    return decoded.toString("base64").replace(/=+$/, "") === normalized.replace(/=+$/, "");
   } catch {
     return false;
   }
 };
```

Also applies to: 18-20, 27-30
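As a standalone illustration of the tightened check: Node's `Buffer.from(str, "base64")` does not throw on junk input, it silently skips invalid characters, so the round-trip comparison is what does the real work.

```javascript
// Standalone version of the tightened validator, for illustration only.
// Buffer.from(str, "base64") never throws on malformed base64 in Node; it
// drops invalid characters, so the re-encode comparison catches junk input.
const isValidBase64 = (str) => {
  const normalized = str.trim();
  if (normalized.length === 0) return false;
  const decoded = Buffer.from(normalized, "base64");
  return (
    decoded.toString("base64").replace(/=+$/, "") ===
    normalized.replace(/=+$/, "")
  );
};

console.log(isValidBase64("aGVsbG8=")); // true  (decodes to "hello")
console.log(isValidBase64("   "));      // false (whitespace-only rejected)
console.log(isValidBase64("not base64")); // false (round-trip mismatch)
```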
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/textConverter.js` around lines 4 - 7, Normalize the incoming text by trimming whitespace before validation and reject empty strings for decode operations: when handling type === "decode" compute const normalized = text.trim(), return a validation error if normalized.length === 0, and use normalized for isValidBase64 checks and decoding; also update isValidBase64 to perform its base64 round-trip comparison against a trimmed value (e.g., use the trimmed input inside Buffer.from(...).toString("base64") === trimmed) so whitespace-only inputs are treated as invalid.

frontend/src/components/CsvToJsonConverter.jsx-117-127 (1)
117-127: ⚠️ Potential issue | 🟡 Minor
Link CSV helper text to the textarea for screen readers.

`csv-input-help` exists but the textarea does not reference it with `aria-describedby`.

Proposed fix
```diff
 <textarea
   id="csv-input"
   className="w-full px-3 py-2 bg-background placeholder:text-muted-foreground border border-input rounded-md focus:outline-none focus:ring-ring focus:border-primary sm:text-sm"
   rows="10"
   placeholder="Enter CSV here..."
   value={csvInput}
   onChange={handleCsvChange}
+  aria-describedby="csv-input-help"
 />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/CsvToJsonConverter.jsx` around lines 117 - 127, The helper text div with id "csv-input-help" is not associated with the textarea, so add an aria-describedby="csv-input-help" attribute to the textarea in the CsvToJsonConverter component (the element currently using id="csv-input" and value={csvInput}, onChange={handleCsvChange}) so screen readers announce the helper text; ensure the id string exactly matches the helper div id.

frontend/src/components/CsvToJsonConverter.jsx-171-176 (1)
171-176: ⚠️ Potential issue | 🟡 Minor
Use a format-neutral copy button label.
Line 175 says “Copy JSON to clipboard”, but output can also be CSV.
Proposed fix
```diff
-  aria-label="Copy JSON to clipboard"
+  aria-label="Copy converted output to clipboard"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/CsvToJsonConverter.jsx` around lines 171 - 176, The button's aria-label is specific to JSON but the component output can be CSV or JSON; update the button in CsvToJsonConverter (the element using onClick={copyToClipboard}) to use a format-neutral label such as "Copy output to clipboard" (and if the visible button text also says "Copy JSON to clipboard", change that as well) so both the aria-label and any displayed text refer generically to the output rather than "JSON".

backend/routes/auth.js-175-177 (1)
175-177: ⚠️ Potential issue | 🟡 Minor
Forgot-password still returns raw validation payload.

`/forgot-password` returns `errors.array()` directly, which is now inconsistent with the sanitized handling used in `/register` and `/login`. Reuse the same sanitization path for consistency and safer responses.

🔧 Suggested change
```diff
+const sanitizeValidationErrors = (errors) =>
+  errors.array().map((error) => {
+    const { value, ...sanitizedError } = error;
+    if (error.path === "password") {
+      return { ...sanitizedError, value: "[REDACTED]" };
+    }
+    return sanitizedError;
+  });
+
 // @route   POST /api/auth/forgot-password
 // @desc    Send password reset email
 // @access  Public
 router.post(
   "/forgot-password",
   [check("email", "Please include a valid email").isEmail()],
   async (req, res) => {
     const errors = validationResult(req);
     if (!errors.isEmpty()) {
-      return res.status(400).json({ errors: errors.array() });
+      return res.status(400).json({ errors: sanitizeValidationErrors(errors) });
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/auth.js` around lines 175 - 177, The /forgot-password handler currently returns the raw validation payload by calling validationResult(req) and errors.array(); replace that with the same sanitized formatting used by /register and /login: obtain validationResult(req), pass it through the shared sanitization/formatting function used elsewhere (the same helper invoked by the register/login handlers) and then return res.status(400).json(...) with that sanitized errors object instead of errors.array(). Locate the validationResult usage in the forgot-password route and swap the direct errors.array() response for the common sanitizer/formatter to ensure consistent, safe error output.

frontend/src/components/ImageCropper.jsx-119-121 (1)
119-121: ⚠️ Potential issue | 🟡 Minor
Track usage only after a successful crop.

`trackToolUsage` currently runs before the ref/readiness checks and before `toBlob()` succeeds, so failed or aborted attempts still get counted. Move it into the success path after `blob` is created.

Proposed fix
```diff
   try {
-    trackToolUsage("ImageCropper", "image");
     const currentOpId = ++cropOpIdRef.current;
     const image = imageRef.current;
     const canvas = canvasRef.current;
@@
     const objectUrl = URL.createObjectURL(blob);
     setCroppedImageSrc(objectUrl);
+    trackToolUsage("ImageCropper", "image");
     const extension = (blob.type || "image/png").split("/")[1] || "png";
     handleDownload(objectUrl, `dkutils-cropped-image-${Date.now()}.${extension}`);
     setLoading(false);
```

Also applies to: 177-182
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCropper.jsx` around lines 119 - 121, The call to trackToolUsage("ImageCropper","image") is being executed too early (before readiness checks and toBlob() success) so aborted/failed crops are counted; move the trackToolUsage invocation into the successful crop completion path—i.e., after toBlob() resolves and after verifying currentOpId matches cropOpIdRef.current and any other readiness checks—so it only runs on a successful blob creation; update both occurrences (the one near cropOpIdRef usage and the similar call around lines 177-182) to be invoked after the blob is created and before emitting/sending the blob.

frontend/src/components/PasswordGenerator.jsx-34-35 (1)
34-35: ⚠️ Potential issue | 🟡 Minor
Freeze the form while generation is pending.

Lines 34-35 delay the work by 500ms, but the length and checkbox controls stay editable. If the user changes options during that window, the generated password can reflect the old snapshot while the UI shows the new one. Disabling those inputs during `loading` would keep the result consistent.

💡 Proposed fix
```diff
 <input
   id="lengthInput"
   type="number"
   className="w-full px-3 py-2 bg-background placeholder:text-muted-foreground border border-input rounded-md focus:outline-none focus:ring-ring focus:border-primary sm:text-sm"
   value={length}
+  disabled={loading}
   onChange={(e) => {
     const value = Number.parseInt(e.target.value, 10);
     setLength(Number.isNaN(value) || value < 4 ? 4 : Math.min(value, 32));
   }}
@@
 <input
   type="checkbox"
   className="form-checkbox"
   checked={includeUppercase}
+  disabled={loading}
   onChange={() => setIncludeUppercase(!includeUppercase)}
 />
@@
 <input
   type="checkbox"
   className="form-checkbox"
   checked={includeLowercase}
+  disabled={loading}
   onChange={() => setIncludeLowercase(!includeLowercase)}
 />
@@
 <input
   type="checkbox"
   className="form-checkbox"
   checked={includeNumbers}
+  disabled={loading}
   onChange={() => setIncludeNumbers(!includeNumbers)}
 />
@@
 <input
   type="checkbox"
   className="form-checkbox"
   checked={includeSymbols}
+  disabled={loading}
   onChange={() => setIncludeSymbols(!includeSymbols)}
 />
```

Also applies to: 138-149, 153-190
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PasswordGenerator.jsx` around lines 34 - 35, The form inputs must be disabled while generation is pending: update the PasswordGenerator.jsx component to use the existing loading state (setLoading) to set the disabled prop on the length input, all character-type checkboxes, and any related option controls and the generate button so users cannot change options during the setTimeout window; locate the setLoading(true) call and ensure the inputs referenced in the render (length input, checkbox elements around lines 138-149 and 153-190) read loading and return disabled={loading} (or equivalent) until setLoading(false) is called.

frontend/src/components/PasswordGenerator.jsx-81-84 (1)
81-84: ⚠️ Potential issue | 🟡 Minor
Announce generation errors to assistive tech.

The messages set on Lines 83 and 109 are rendered at Line 203 in a plain `<p>`, so screen reader users may never hear them when they appear. Add `role="alert"` or an `aria-live` region to the error container.

♿ Proposed fix
```diff
-{error && <p className="text-destructive text-sm mt-2">{error}</p>}
+{error && (
+  <p role="alert" aria-live="assertive" className="text-destructive text-sm mt-2">
+    {error}
+  </p>
+)}
```

Also applies to: 107-109, 203-203
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PasswordGenerator.jsx` around lines 81 - 84, The rendered error messages from PasswordGenerator.jsx (setError) are currently displayed in a plain <p> and aren't announced to screen readers; update the JSX that outputs the error message (the <p> that renders the error state in the PasswordGenerator component) to include an accessibility live region such as role="alert" or aria-live="assertive" (or both) so assistive tech will announce generation errors; apply the same change to the other places where generation errors/messages are rendered (the other <p> that displays messages tied to setError/setPassword) to ensure consistent announcement.

frontend/src/components/ImageResizer.jsx-65-86 (1)
65-86: ⚠️ Potential issue | 🟡 Minor
Handle metadata-read failures before keeping the new file selected.

`originalImage` is committed before the `FileReader`/`Image` path succeeds. A corrupt-but-`image/*` file can leave the UI showing the new file with stale dimensions/inputs and only fail later during resize. Mirror the reset/error path here too.

Possible fix
```diff
       setOriginalImage(file);
+      setOriginalDimensions({ width: 0, height: 0 });
+      setNewWidth("");
+      setNewHeight("");
+      setResizedImageSrc(null);
       const reader = new FileReader();
+      reader.onerror = () => {
+        toast.error("Failed to read the image file.");
+        setOriginalImage(null);
+        setOriginalDimensions({ width: 0, height: 0 });
+        setNewWidth("");
+        setNewHeight("");
+        setResizedImageSrc(null);
+        e.target.value = "";
+      };
       /**
        * Handles the onload event triggered by the FileReader after reading the original image file.
@@
       reader.onload = (event) => {
         const img = new Image();
+        img.onerror = () => {
+          toast.error("Failed to decode the uploaded image.");
+          setOriginalImage(null);
+          setOriginalDimensions({ width: 0, height: 0 });
+          setNewWidth("");
+          setNewHeight("");
+          setResizedImageSrc(null);
+          e.target.value = "";
+        };
         /**
          * Handles the onload event triggered by the Image object after loading the original image file.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageResizer.jsx` around lines 65 - 86, You currently call setOriginalImage(file) before the FileReader/Image metadata load succeeds; move the success path so setOriginalImage(file) is only called inside img.onload after successful setOriginalDimensions and setNewWidth/setNewHeight, and add reader.onerror and img.onerror handlers that clear/reset original image state (call setOriginalImage(null) or equivalent) and clear dimensions/inputs (setOriginalDimensions(null) and setNewWidth/ setNewHeight to empty) and optionally set an error flag/message; keep reader.readAsDataURL(file) but ensure all error handlers mirror the reset/error path so a corrupt image cannot remain selected with stale dimensions.

frontend/src/components/ImageResizer.jsx-56-59 (1)
56-59: ⚠️ Potential issue | 🟡 Minor
Make the oversize error message conditional on auth state.
Authenticated users who hit the 50 MB cap still see “Login for a higher limit (50MB)”, which is misleading because they already have the higher limit.
Possible fix
```diff
 if (file.size > maxSize) {
+  const limitMb = maxSize / (1024 * 1024);
   toast.error(
-    `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
+    isAuthenticated
+      ? `File too large: ${file.name}. Maximum size is ${limitMb}MB.`
+      : `File too large: ${file.name}. Maximum size is ${limitMb}MB. Login for a higher limit (50MB).`,
   );
   setOriginalImage(null);
   setResizedImageSrc(null);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageResizer.jsx` around lines 56 - 59, In ImageResizer.jsx update the oversize error to be conditional on auth state: when checking file.size > maxSize (the existing block that calls toast.error), use the component's auth indicator (e.g. isAuthenticated or currentUser) to choose the message; if unauthenticated show the current "Login for a higher limit (50MB)" hint, if authenticated show a plain size limit message (e.g. "File too large: X. Maximum size is 50MB.") and still display maxSize-derived size; modify the toast.error call in that branch accordingly so authenticated users are not prompted to log in.

backend/routes/cleanSupabase.js-5-10 (1)
5-10: ⚠️ Potential issue | 🟡 Minor
Fix the documented header names.

These docblocks say the middleware reads the `Authorization` header, but the implementation actually expects `x-auth-token` and `x-cron-secret`. That mismatch will mislead future callers and any docs generated from these comments.

Also applies to: 31-33
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/cleanSupabase.js` around lines 5 - 10, The JSDoc comments incorrectly state the middleware reads the Authorization header; update the docblocks in backend/routes/cleanSupabase.js (including the other comment block at lines referenced 31-33) to accurately describe that the middleware expects the x-auth-token and x-cron-secret headers, and explain the behavior for missing/invalid x-auth-token (401) and for unauthorized cron secret (403) so the documentation matches the implementation.

frontend/src/components/PdfPageDeleter.jsx-83-113 (1)
83-113: ⚠️ Potential issue | 🟡 Minor
Reject separator-only input before saving an unchanged PDF.

Inputs like `","` or `" , , "` pass the empty-string check, produce an empty `pagesToDeleteArray`, and then fall through to a success download with no pages removed. Add a post-parse guard before building `pagesToKeep`.

Proposed fix
```diff
 const pagesToDeleteArray = pagesToDelete
   .split(",")
   .map((s) => s.trim())
   .filter(Boolean)
   .flatMap((token) => {
     const parts = token.split("-").map((p) => p.trim());
     if (parts.length === 1) {
       const page = Number(parts[0]);
       if (!Number.isInteger(page) || page < 1 || page > numPages) {
         throw new Error(`Invalid page number: ${token}`);
       }
       return [page];
     }
     if (parts.length === 2) {
       const [start, end] = parts.map(Number);
       if (
         !Number.isInteger(start) ||
         !Number.isInteger(end) ||
         start < 1 ||
         end > numPages ||
         start > end
       ) {
         throw new Error(`Invalid page range: ${token}`);
       }
       return Array.from({ length: end - start + 1 }, (_, i) => start + i);
     }
     throw new Error(`Invalid page selector: ${token}`);
   });
+
+if (pagesToDeleteArray.length === 0) {
+  throw new Error("Please specify at least one valid page to delete.");
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfPageDeleter.jsx` around lines 83 - 113, The parsed pagesToDeleteArray in PdfPageDeleter.jsx can be empty for inputs like "," which passes the earlier .filter(Boolean) and then leads to saving an unchanged PDF; after the flatMap parsing block, add a guard that checks if pagesToDeleteArray.length === 0 (or all entries are out-of-range) and reject the input (throw an error or set validation state) before computing pagesToKeep so the user cannot submit separator-only input that results in no deletions.

backend/routes/imageConverter.js-282-290 (1)
282-290: ⚠️ Potential issue | 🟡 Minor
Validate integer format before parsing.

`Number.parseInt()` accepts and parses partial values ("300px" returns 300, "120.5" returns 120), allowing malformed input to pass the NaN and ≤ 0 checks. Validate that the raw string matches integer format before conversion.

Also applies to: 412-418
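A strict-format helper along the lines this finding suggests could look like the sketch below; the name `parseStrictPositiveInt` is illustrative and not from the codebase, which uses `parsedWidth`/`parsedHeight` inline.

```javascript
// Illustrative helper (hypothetical name): accept only ASCII-digit strings,
// then parse. This rejects inputs that Number.parseInt would silently truncate.
const parseStrictPositiveInt = (raw) => {
  const s = String(raw).trim();
  if (!/^\d+$/.test(s)) return null; // rejects "300px", "120.5", "-3", ""
  const value = Number.parseInt(s, 10);
  return value > 0 ? value : null; // rejects "0"
};

console.log(parseStrictPositiveInt("300"));   // 300
console.log(parseStrictPositiveInt("300px")); // null
console.log(parseStrictPositiveInt("120.5")); // null
```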
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/imageConverter.js` around lines 282 - 290, The current width/height parsing in the request handler allows partial/malformed strings (e.g., "300px", "120.5") because Number.parseInt silently accepts them; update the validation so you first verify req.body.width and req.body.height are strict integer strings (e.g., using a /^\d+$/ test or Number.isInteger after coercion) before converting to numbers, then parse to integers and check >0 using the parsedWidth/parsedHeight variables in the same handler (refer to parsedWidth, parsedHeight, width, height) and apply the same fix to the second occurrence around lines referenced (the block at 412-418) to reject non-integer formats and return the appropriate error response.
🧹 Nitpick comments (3)
frontend/src/components/ImageFlipper.jsx (1)
84-86: Track usage only after successful flip completion.

Line 85 currently records usage before outcome is known, so failed requests are counted as successful usage events.
Proposed change
```diff
     setLoading(true);
-    trackToolUsage("ImageFlipper", "image");
     const formData = new FormData();
     formData.append("image", selectedFile);
     formData.append("direction", flipDirection);
@@
       link.click();
       document.body.removeChild(link);
+      trackToolUsage("ImageFlipper", "image");
     } catch (err) {
```

Also applies to: 101-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageFlipper.jsx` around lines 84 - 86, trackToolUsage("ImageFlipper", "image") is being called before the flip result is known; move the call into the successful completion path so failures aren't counted. In the ImageFlipper component, relocate the trackToolUsage invocation from its current pre-request position (near setLoading(true)) into the success branch of the flip function (after the awaited request/response is validated or inside the .then that handles a 200/ok response), and remove any duplicate pre-request calls (including the similar call in the block around lines 101-110) so usage is only recorded when the flip completes successfully.

frontend/src/components/CsvToJsonConverter.jsx (1)
60-63: Track usage after successful conversion to keep analytics clean.
`trackToolUsage` currently runs before parse/convert success, so failed conversions are counted as usage. Move tracking into the success path in both conversion flows.

Proposed refactor
```diff
 const convertCsvToJson = () => {
   setLoading(true);
-  trackToolUsage("CsvToJsonConverter", "web");
   Papa.parse(csvInput, {
     header: true,
     complete: (results) => {
       if (results.errors?.length) {
         setConvertedOutput(`Error parsing CSV: ${results.errors[0].message}`);
         setLoading(false);
         return;
       }
+      trackToolUsage("CsvToJsonConverter", "web");
       setConvertedOutput(JSON.stringify(results.data, null, 2));
       setLoading(false);
     },
```

```diff
 const convertJsonToCsv = () => {
   setLoading(true);
-  trackToolUsage("CsvToJsonConverter", "web");
   try {
     const jsonData = JSON.parse(jsonInput);
     const csv = Papa.unparse(jsonData);
+    trackToolUsage("CsvToJsonConverter", "web");
     setConvertedOutput(csv);
   } catch (e) {
     setConvertedOutput(`Error parsing JSON: ${e.message}`);
   }
   setLoading(false);
 };
```

Also applies to: 96-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/CsvToJsonConverter.jsx` around lines 60 - 63, The tracking call is being invoked before conversion succeeds; move trackToolUsage("CsvToJsonConverter", "web") into the parse success path so only successful conversions are counted—specifically, in convertCsvToJson move the trackToolUsage call from before Papa.parse into the Papa.parse complete/success callback (after setJsonResult / parsing success) and do the same for the other conversion flow (the reverse conversion handling around lines 96-99), ensuring tracking runs only after the successful result is produced and error paths do not call it.

frontend/src/components/ExcelToPdfConverter.jsx (1)
75-79: Track usage after the conversion succeeds.

This fires before the POST resolves, so failed conversions still get counted as usage. Move the detached analytics call into the success path, after the download is prepared.
💡 Suggested change
```diff
   try {
-    void Promise.resolve()
-      .then(() => trackToolUsage("ExcelToPdfConverter", "pdf"))
-      .catch((analyticsError) => {
-        console.error("Failed to track tool usage", analyticsError);
-      });
     const res = await axios.post(
       `${import.meta.env.VITE_API_BASE_URL}/api/convert/excel-to-pdf`,
       formData,
@@
     document.body.removeChild(link);
     window.URL.revokeObjectURL(url);
     toast.success("File converted to PDF successfully!");
+    void Promise.resolve()
+      .then(() => trackToolUsage("ExcelToPdfConverter", "pdf"))
+      .catch((analyticsError) => {
+        console.error("Failed to track tool usage", analyticsError);
+      });
   } catch (err) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ExcelToPdfConverter.jsx` around lines 75 - 79, The analytics call is currently invoked detached and runs before the POST completes; move the trackToolUsage("ExcelToPdfConverter", "pdf") invocation into the success branch after the conversion response/download is prepared so failed conversions are not counted. Locate the code in the ExcelToPdfConverter component where the POST response is handled (the success handler that prepares or triggers the download) and call trackToolUsage there inside a try/catch (or chained Promise .then) so it only runs after a successful response; remove the standalone void Promise.resolve() block so tracking only occurs on success.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/middleware/apiActivityTracker.js`:
- Around line 17-41: The middleware currently awaits ApiActivity.save and the
two upserts (TotalUsage.findOneAndUpdate and ServiceUsage.findOneAndUpdate)
before calling next(), blocking the request on Mongo; change it to run telemetry
after the response finishes by moving the save/upsert logic into a
res.on("finish") callback (or enqueue the job) so you do not await those DB
calls on the hot path; specifically, wrap the ApiActivity creation and the two
findOneAndUpdate calls inside res.on("finish", async () => { try { ... await
ApiActivity.save(); await Promise.all([TotalUsage.findOneAndUpdate(...),
ServiceUsage.findOneAndUpdate(...)]) } catch (err) { console.error(...) } });
and ensure the middleware calls next() immediately (no awaits) so tracking
becomes best-effort and non-blocking.
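The non-blocking pattern that prompt describes can be sketched as a middleware factory; `recordUsage` below is a hypothetical stand-in for the ApiActivity/TotalUsage/ServiceUsage writes, not the project's real helper.

```javascript
// Hedged sketch: call next() immediately and run telemetry in a res "finish"
// listener, so the request hot path never waits on Mongo. `recordUsage`
// stands in for the actual save/upsert calls.
function apiActivityTracker(recordUsage) {
  return (req, res, next) => {
    res.on("finish", () => {
      // Best-effort: telemetry failures must never affect the response.
      Promise.resolve()
        .then(() => recordUsage(req, res))
        .catch((err) => console.error("usage tracking failed", err));
    });
    next(); // no awaits before handing off
  };
}
```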
In `@backend/routes/auth.js`:
- Around line 193-194: The current password-reset route logs maskedEmail
unconditionally; avoid emitting email identifiers in production by guarding the
log with the same non-production check used elsewhere (e.g., wrap the
console.log("Password reset requested for:", maskedEmail) behind the NODE_ENV
!== 'production' or equivalent environment check or use the existing
processLogger.debug-only path). Locate the maskedEmail variable and the
console.log in backend/routes/auth.js and restrict or remove the log in
production environments so only non-production runs output this debug info.
In `@backend/routes/cleanSupabase.js`:
- Around line 62-64: Remove returning error.message in the response: instead of
sending error.message in res.status(500).json({...}), log the full error (e.g.,
console.error(error) or call your existing logger) and return only a generic
failure payload such as { msg: "Failed to trigger Supabase cleanup." } or
include a non-sensitive errorCode; update the handler around
res.status(500).json to stop exposing error.message while preserving the
detailed error in server logs.
In `@backend/routes/imageConverter.js`:
- Around line 313-315: The resize call is passing wrong option keys to Jimp v1;
replace the object keys { width: parsedWidth, height: parsedHeight } with { w:
parsedWidth, h: parsedHeight } wherever image.resize(...) is invoked (e.g., the
call that precedes creation of resizedBuffer and the similar call later around
the second resize), so the parsedWidth/parsedHeight values are actually applied.
In `@backend/routes/passwordStrength.js`:
- Around line 19-25: The current logic adds feedback when password.length < 8
but continues computing other score increments, so short passwords can still get
a high score; modify the function (the block using variables password, score,
feedback) to short-circuit when password.length < 8 — e.g., set score = 0 and
immediately return the result (or set a boolean isTooShort and skip all
subsequent scoring blocks) so no further increments (the branches that add to
score for length, character classes, etc.) run for short passwords; ensure the
returned object still includes the feedback explaining the length requirement.
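The short-circuit described above can be sketched as follows; the specific score increments here are illustrative assumptions, not the route's actual scoring rules.

```javascript
// Hedged sketch of the short-circuit: a password under 8 characters returns
// immediately with score 0, so no later branch can inflate its score.
// The increment rules below are illustrative, not the route's real ones.
function scorePassword(password) {
  const feedback = [];
  if (password.length < 8) {
    feedback.push("Password must be at least 8 characters long.");
    return { score: 0, feedback }; // short-circuit: skip all other scoring
  }
  let score = 1;
  if (password.length >= 12) score++;
  if (/[a-z]/.test(password) && /[A-Z]/.test(password)) score++;
  if (/\d/.test(password)) score++;
  if (/[^a-zA-Z0-9]/.test(password)) score++;
  return { score, feedback };
}
```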
In `@backend/routes/seoTools.js`:
- Around line 121-157: The route change to router.get("/robots-txt") broke the
frontend contract (it used POST { domain } and expected res.data.exists); update
the endpoint(s) so they accept the old input and return the old shape: support
POST payload { domain } in addition to GET ?url= (or detect both in the same
handler using req.query.url || req.body.domain) and include an exists boolean in
the JSON response along with content and url (i.e., preserve result.exists in
the returned object). Apply the same compatibility fix to the similar sitemap
handler around the 164-200 region that also changed contract; keep using
validateDomain and fetchContent to determine exists/content/url but ensure the
response fields match the previous frontend expectations.
- Around line 89-117: fetchContent currently disables redirects (maxRedirects:
0) and treats any 3xx as a missing file; instead allow a small, bounded number
of redirects so canonical redirects (e.g., to www) are followed while still
using the pinned DNS resolver to prevent SSRF. Change the axios option
maxRedirects from 0 to a small value (e.g., 5) in fetchContent and remove the
explicit 3xx error branch; keep the httpAgent/httpsAgent using the pinnedLookup
so any redirected hostnames are resolved via the pinned resolver, and preserve
handling for ERR_FR_TOO_MANY_REDIRECTS (return too_many_redirects) and 404 as
before. Ensure the returned success case still uses response.data and callers no
longer treat simple canonical redirects as missing.
In `@backend/routes/textToPdf.js`:
- Around line 51-54: The catch block currently returns res.status(500).json({
error: ... }) which is inconsistent with earlier responses that use the msg key
and breaks the frontend expecting err.response?.data?.msg; update the catch
handler (the catch (err) block that calls res.status(500).json) to return
res.status(500).json({ msg: err.message || "Server Error" }) so the error shape
matches prior responses and the frontend (TextToPdfGenerator.jsx) can read the
message; keep the console.error(err) logging but change the response key from
error to msg.
In `@backend/utils/pdfErrorHandler.js`:
- Around line 76-98: The code decodes the buffer to UTF-8 and uses
string.indexOf to find "%PDF-" which can desynchronize byte offsets; instead use
Buffer.indexOf on file.buffer to locate the header so offsets remain
byte-accurate: replace the header discovery logic that sets headerSearchLimit,
headerSearchBuffer, and headerIndex with a raw search using
file.buffer.indexOf(Buffer.from("%PDF-"), 0, "ascii") (or
file.buffer.indexOf("%PDF-") directly) and then base subsequent magic-byte
checks (pdfSignature, fileSignature) and version slicing (pdfVersionStart,
pdfVersionEnd) off that byte index; throw the same errors if indexOf returns -1
or if pdfSignature.equals(fileSignature) fails.
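A byte-accurate version of that header search might look like the sketch below; the function name and the version-slice offsets are illustrative, assuming the usual "%PDF-1.x" layout.

```javascript
// Illustrative sketch (hypothetical helper name): locate "%PDF-" with
// Buffer.indexOf so offsets stay byte-accurate even when multi-byte junk
// precedes the header, unlike a UTF-8 decode + String.indexOf.
function findPdfHeader(buffer) {
  const headerIndex = buffer.indexOf(Buffer.from("%PDF-"));
  if (headerIndex === -1) {
    throw new Error("Missing %PDF- signature");
  }
  // e.g. "%PDF-1.7" -> "1.7"
  const version = buffer
    .subarray(headerIndex + 5, headerIndex + 8)
    .toString("ascii");
  return { headerIndex, version };
}
```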
In `@frontend/src/components/ImageToPdfConverter.jsx`:
- Around line 34-37: The conversion invalidation increments
conversionIdRef.current in onFileChange (and elsewhere) but doesn't clear the
loading state, so a stale in-flight promise's guarded finally won't reset
loading; fix by, in onFileChange (and the other invalidation sites), immediately
call setLoading(false) (and keep setConvertedFile(null)), and in the async
conversion handler capture const myId = conversionIdRef.current at start and
only clear loading/assign convertedFile in finally when myId ===
conversionIdRef.current to avoid races; reference onFileChange, conversionIdRef,
setLoading, and the conversion async function/finally guard.
In `@frontend/src/components/PdfCompressor.jsx`:
- Around line 30-46: The oversize branch in onFileChange references an undefined
isAuthenticated causing a ReferenceError; fix by reusing the derived boolean
used for maxFileSize (e.g., create a local const like isAuth =
state?.isAuthenticated ?? false or use state?.isAuthenticated directly) and
replace the undefined isAuthenticated in the toast messages with that local
boolean so the validation toast displays correctly when file.size > maxFileSize
in PdfCompressor.jsx.
---
Minor comments:
In `@backend/routes/auth.js`:
- Around line 175-177: The /forgot-password handler currently returns the raw
validation payload by calling validationResult(req) and errors.array(); replace
that with the same sanitized formatting used by /register and /login: obtain
validationResult(req), pass it through the shared sanitization/formatting
function used elsewhere (the same helper invoked by the register/login handlers)
and then return res.status(400).json(...) with that sanitized errors object
instead of errors.array(). Locate the validationResult usage in the
forgot-password route and swap the direct errors.array() response for the common
sanitizer/formatter to ensure consistent, safe error output.
In `@backend/routes/cleanSupabase.js`:
- Around line 5-10: The JSDoc comments incorrectly state the middleware reads
the Authorization header; update the docblocks in
backend/routes/cleanSupabase.js (including the other comment block at lines
referenced 31-33) to accurately describe that the middleware expects the
x-auth-token and x-cron-secret headers, and explain the behavior for
missing/invalid x-auth-token (401) and for unauthorized cron secret (403) so the
documentation matches the implementation.
In `@backend/routes/imageConverter.js`:
- Around line 282-290: The current width/height parsing in the request handler
allows partial/malformed strings (e.g., "300px", "120.5") because
Number.parseInt silently accepts them; update the validation so you first verify
req.body.width and req.body.height are strict integer strings (e.g., using a
/^\d+$/ test or Number.isInteger after coercion) before converting to numbers,
then parse to integers and check >0 using the parsedWidth/parsedHeight variables
in the same handler (refer to parsedWidth, parsedHeight, width, height) and
apply the same fix to the second occurrence around lines referenced (the block
at 412-418) to reject non-integer formats and return the appropriate error
response.
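A minimal sketch of the stricter parsing (the helper name is hypothetical):

```javascript
// Reject anything that is not a pure positive-integer string,
// e.g. "300px", "120.5", "", "0" all return null.
function parseDimension(value) {
  if (typeof value !== "string" || !/^\d+$/.test(value)) return null;
  const parsed = Number.parseInt(value, 10);
  return parsed > 0 ? parsed : null;
}
```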
In `@backend/routes/textConverter.js`:
- Around line 4-7: Normalize the incoming text by trimming whitespace before
validation and reject empty strings for decode operations: when handling type
=== "decode" compute const normalized = text.trim(), return a validation error
if normalized.length === 0, and use normalized for isValidBase64 checks and
decoding; also update isValidBase64 to perform its base64 round-trip comparison
against a trimmed value (e.g., use the trimmed input inside
Buffer.from(...).toString("base64") === trimmed) so whitespace-only inputs are
treated as invalid.
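A sketch of the trimmed round-trip check described above:

```javascript
// Whitespace-only input trims to "" and is rejected before the round-trip;
// the re-encode comparison runs against the trimmed value.
function isValidBase64(text) {
  const trimmed = text.trim();
  if (trimmed.length === 0) return false;
  try {
    return Buffer.from(trimmed, "base64").toString("base64") === trimmed;
  } catch {
    return false;
  }
}
```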
In `@frontend/src/components/CsvToJsonConverter.jsx`:
- Around line 117-127: The helper text div with id "csv-input-help" is not
associated with the textarea, so add an aria-describedby="csv-input-help"
attribute to the textarea in the CsvToJsonConverter component (the element
currently using id="csv-input" and value={csvInput}, onChange={handleCsvChange})
so screen readers announce the helper text; ensure the id string exactly matches
the helper div id.
- Around line 171-176: The button's aria-label is specific to JSON but the
component output can be CSV or JSON; update the button in CsvToJsonConverter
(the element using onClick={copyToClipboard}) to use a format-neutral label such
as "Copy output to clipboard" (and if the visible button text also says "Copy
JSON to clipboard", change that as well) so both the aria-label and any
displayed text refer generically to the output rather than "JSON".
In `@frontend/src/components/ImageCropper.jsx`:
- Around line 119-121: The call to trackToolUsage("ImageCropper","image") is
being executed too early (before readiness checks and toBlob() success) so
aborted/failed crops are counted; move the trackToolUsage invocation into the
successful crop completion path—i.e., after toBlob() resolves and after
verifying currentOpId matches cropOpIdRef.current and any other readiness
checks—so it only runs on a successful blob creation; update both occurrences
(the one near cropOpIdRef usage and the similar call around lines 177-182) to be
invoked after the blob is created and before emitting/sending the blob.
In `@frontend/src/components/ImageFormatConverter.jsx`:
- Around line 101-110: The onSubmit handler can be invoked twice before the UI
disables, so add an early guard at the top of the onSubmit function to
immediately return (and optionally show a toast) when the component-level
loading state is already true; in practice check the loading variable before
doing e.preventDefault() or any work, and only proceed to setLoading(true) and
build FormData if loading is false (references: onSubmit, loading, setLoading).
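The early guard can be sketched as follows (callback wiring is illustrative; the real component reads React state directly):

```javascript
// Double-submit guard sketch: the loading check runs synchronously before
// any work, so a second click during the in-flight request is a no-op.
function makeSubmitHandler({ getLoading, setLoading, doWork }) {
  return async function onSubmit(e) {
    if (e && e.preventDefault) e.preventDefault();
    if (getLoading()) return; // already converting; ignore the second submit
    setLoading(true);
    try {
      await doWork();
    } finally {
      setLoading(false);
    }
  };
}
```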
In `@frontend/src/components/ImageResizer.jsx`:
- Around line 65-86: You currently call setOriginalImage(file) before the
FileReader/Image metadata load succeeds; move the success path so
setOriginalImage(file) is only called inside img.onload after successful
setOriginalDimensions and setNewWidth/setNewHeight, and add reader.onerror and
img.onerror handlers that clear/reset original image state (call
setOriginalImage(null) or equivalent) and clear dimensions/inputs
(setOriginalDimensions(null) and setNewWidth/ setNewHeight to empty) and
optionally set an error flag/message; keep reader.readAsDataURL(file) but ensure
all error handlers mirror the reset/error path so a corrupt image cannot remain
selected with stale dimensions.
- Around line 56-59: In ImageResizer.jsx update the oversize error to be
conditional on auth state: when checking file.size > maxSize (the existing block
that calls toast.error), use the component's auth indicator (e.g.
isAuthenticated or currentUser) to choose the message; if unauthenticated show
the current "Login for a higher limit (50MB)" hint, if authenticated show a
plain size limit message (e.g. "File too large: X. Maximum size is 50MB.") and
still display maxSize-derived size; modify the toast.error call in that branch
accordingly so authenticated users are not prompted to log in.
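A small sketch of the auth-aware message selection (function name and message wording are illustrative):

```javascript
// Only unauthenticated users get the login hint; authenticated users
// see a plain size-limit message.
function oversizeMessage(fileSizeMB, maxSizeMB, isAuthenticated) {
  const base = `File too large: ${fileSizeMB}MB. Maximum size is ${maxSizeMB}MB.`;
  return isAuthenticated ? base : `${base} Login for a higher limit (50MB).`;
}
```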
In `@frontend/src/components/PasswordGenerator.jsx`:
- Around line 34-35: The form inputs must be disabled while generation is
pending: update the PasswordGenerator.jsx component to use the existing loading
state (setLoading) to set the disabled prop on the length input, all
character-type checkboxes, and any related option controls and the generate
button so users cannot change options during the setTimeout window; locate the
setLoading(true) call and ensure the inputs referenced in the render (length
input, checkbox elements around lines 138-149 and 153-190) read loading and
return disabled={loading} (or equivalent) until setLoading(false) is called.
- Around line 81-84: The rendered error messages from PasswordGenerator.jsx
(setError) are currently displayed in a plain <p> and aren't announced to screen
readers; update the JSX that outputs the error message (the <p> that renders the
error state in the PasswordGenerator component) to include an accessibility live
region such as role="alert" or aria-live="assertive" (or both) so assistive tech
will announce generation errors; apply the same change to the other places where
generation errors/messages are rendered (the other <p> that displays messages
tied to setError/setPassword) to ensure consistent announcement.
In `@frontend/src/components/PdfPageDeleter.jsx`:
- Around line 83-113: The parsed pagesToDeleteArray in PdfPageDeleter.jsx can be
empty for inputs like "," which passes the earlier .filter(Boolean) and then
leads to saving an unchanged PDF; after the flatMap parsing block, add a guard
that checks if pagesToDeleteArray.length === 0 (or all entries are out-of-range)
and reject the input (throw an error or set validation state) before computing
pagesToKeep so the user cannot submit separator-only input that results in no
deletions.
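The guard might be sketched as follows (the real component parses ranges with flatMap; this simplified version only splits on commas):

```javascript
// Separator-only input like "," survives .filter(Boolean) upstream; after
// parsing, an empty or fully out-of-range result must be rejected.
function parsePagesToDelete(input, totalPages) {
  const pages = input
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean)
    .map(Number)
    .filter((n) => Number.isInteger(n) && n >= 1 && n <= totalPages);
  if (pages.length === 0) {
    throw new Error("No valid page numbers to delete.");
  }
  return [...new Set(pages)];
}
```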
---
Duplicate comments:
In `@backend/docs/README.md`:
- Around line 437-445: The 401 response example in README.md is formatted as an
indented code block triggering MD046; convert the indented JSON to a proper
fenced block by moving the opening fence to its own line with the language tag
(```json) and outdenting the subsequent JSON lines so the closing fence (```) is
also on its own line; update the "401 - Unauthorized" example (the JSON snippet
showing {"msg":"Invalid or missing cron secret"}) to use the fenced ```json
block style instead of the indented block.
In `@backend/routes/analytics.js`:
- Around line 35-36: The current public-tool filtering derives visibility by
exclusion and mistakenly lets "HtmlToMarkdownConverter:copy" through; change the
logic that builds the public analytics set to use an explicit inclusion
whitelist (e.g., a Map or object keyed by tool event names) or add a boolean
metadata flag like isPublic on each tool descriptor, then update the code paths
that compute the public set (the same code that currently references
"HtmlToMarkdownConverter:copy" and the arrays used for /stats and /popular) to
consult that whitelist/flag (only include events present in the whitelist)
instead of excluding known-internal names; ensure both places mentioned (the
block around the "HtmlToMarkdownConverter:..." entries and the code used for
lines ~81-86) are updated to use the explicit whitelist/flag.
In `@backend/routes/favicon.js`:
- Around line 16-45: The current validateUrl(url) only checks DNS results but
doesn't bind the later HTTP/TLS connection to those vetted IPs; change
validateUrl to return the resolved IP(s) and the original hostname (e.g., return
{hostname, addresses}) and update callers (the axios requests) to use those
addresses: either create a custom lookup function that returns the vetted
address for the hostname (pass to axios via http(s).Agent lookup option) or
construct an IP-based URL and set the original hostname in the Host header and
SNI (servername) option so the outbound connection is pinned to the vetted IPs;
ensure validateUrl still enforces protocol and private-IP checks before
returning the vetted data.
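One way to pin the connection is a custom `lookup` function handed to the HTTP agent; the factory below is a hypothetical sketch (IPv4 only, first vetted address):

```javascript
// Hypothetical factory: builds a `lookup` for http(s).Agent that only ever
// resolves the vetted hostname to the addresses validateUrl approved.
function makePinnedLookup(hostname, addresses) {
  return (host, options, callback) => {
    if (host !== hostname) {
      callback(new Error(`Unexpected host: ${host}`));
      return;
    }
    // Hand back the first vetted address; IPv4 is assumed for simplicity.
    callback(null, addresses[0], 4);
  };
}
```

It could then be wired in as, e.g., `new https.Agent({ lookup: makePinnedLookup(hostname, addresses) })` on the axios request, with the exact option shape confirmed against the Node version in use.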
- Around line 59-63: The axios call that fetches fileUrl currently sets
maxRedirects: 0 and treats non-2xx (3xx) responses as failures; change this to
follow redirects manually instead of hard-disabling them: after each
axios.get(fileUrl, { responseType: "arraybuffer", maxRedirects: 0, ... }) check
for 3xx status and a Location header, validate/pin the redirected URL with the
same host/pinning rules you use for the initial URL, then set fileUrl =
resolvedLocation and repeat up to a safe redirect limit (e.g., 5) before
failing; apply the same manual-follow logic to the other axios.get block
referenced (lines 118-122) so CDN/http→https/apex→www redirects are accepted but
still validated at each hop.
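A sketch of the manual walker with an injectable client and validator (both parameters are assumptions for illustration; the real code would pass axios and the existing pinning validator):

```javascript
// Manual redirect walker: every hop is re-validated before it is followed,
// and relative Location headers are resolved against the current URL.
async function fetchWithValidatedRedirects(url, { get, validate, maxHops = 5 }) {
  let current = url;
  for (let hop = 0; hop <= maxHops; hop++) {
    await validate(current); // same host/private-IP pinning as the initial URL
    const res = await get(current); // caller uses maxRedirects: 0 semantics
    if (res.status >= 300 && res.status < 400 && res.headers.location) {
      current = new URL(res.headers.location, current).href;
      continue;
    }
    return res;
  }
  throw new Error("Too many redirects");
}
```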
In `@backend/routes/pdfConverter.js`:
- Around line 288-298: The "high" branch currently only sets objectsPerTick
which doesn't change output size, so it falsely advertises stronger compression;
update the switch handling for compressionLevel (the compressionOptions object
configured in the switch) so "high" does not rely on objectsPerTick for size
reduction — either collapse "high" into the same settings as "medium" (set
compressionOptions.useObjectStreams = true and remove objectsPerTick) or
implement a real size-reducing step (e.g., invoke a PDF
optimization/minification routine or a true compression flag) so that the
behavior matches the API promise; adjust the case for "high" in the switch that
manipulates compressionOptions accordingly and remove or document objectsPerTick
usage.
In `@frontend/src/components/PasswordGenerator.jsx`:
- Around line 50-52: Replace all uses of modulo (%) on crypto.getRandomValues()
outputs (used when selecting characters and during the Fisher–Yates shuffle)
with a rejection-sampling helper that returns an unbiased random integer in [0,
n). Implement a helper function (e.g., getRandomInt(max) or sampleUniform(max))
that repeatedly calls crypto.getRandomValues(new Uint32Array(1)) and rejects
values >= floor(2^32 / max) * max, then use that helper wherever the code
currently computes indices for requiredChars pushes (e.g., charsets.uppercase,
lowercase, digits, symbols) and the Fisher–Yates swap index to ensure unbiased
selection and shuffling.
---
Nitpick comments:
In `@frontend/src/components/CsvToJsonConverter.jsx`:
- Around line 60-63: The tracking call is being invoked before conversion
succeeds; move trackToolUsage("CsvToJsonConverter", "web") into the parse
success path so only successful conversions are counted—specifically, in
convertCsvToJson move the trackToolUsage call from before Papa.parse into the
Papa.parse complete/success callback (after setJsonResult / parsing success) and
do the same for the other conversion flow (the reverse conversion handling
around lines 96-99), ensuring tracking runs only after the successful result is
produced and error paths do not call it.
In `@frontend/src/components/ExcelToPdfConverter.jsx`:
- Around line 75-79: The analytics call is currently invoked detached and runs
before the POST completes; move the trackToolUsage("ExcelToPdfConverter", "pdf")
invocation into the success branch after the conversion response/download is
prepared so failed conversions are not counted. Locate the code in the
ExcelToPdfConverter component where the POST response is handled (the success
handler that prepares or triggers the download) and call trackToolUsage there
inside a try/catch (or chained Promise .then) so it only runs after a successful
response; remove the standalone void Promise.resolve() block so tracking only
occurs on success.
In `@frontend/src/components/ImageFlipper.jsx`:
- Around line 84-86: trackToolUsage("ImageFlipper", "image") is being called
before the flip result is known; move the call into the successful completion
path so failures aren't counted. In the ImageFlipper component, relocate the
trackToolUsage invocation from its current pre-request position (near
setLoading(true)) into the success branch of the flip function (after the
awaited request/response is validated or inside the .then that handles a 200/ok
response), and remove any duplicate pre-request calls (including the similar
call in the block around lines 101-110) so usage is only recorded when the flip
completes successfully.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
📜 Review details
🧰 Additional context used
🪛 markdownlint-cli2 (0.22.0)
backend/docs/README.md
[warning] 441-441: Code block style
Expected: fenced; Actual: indented
(MD046, code-block-style)
```diff
 try {
   const apiActivity = new ApiActivity({
     endpoint: req.path,
     method: req.method,
     userId: req.user ? req.user.id : null,
     ipAddress: req.ip,
   });
   await apiActivity.save();

-  await TotalUsage.findOneAndUpdate(
-    {},
-    { $inc: { totalCount: 1 } },
-    { upsert: true, new: true },
-  );
-
-  await ServiceUsage.findOneAndUpdate(
-    { endpoint: req.originalUrl },
-    { $inc: { count: 1 } },
-    { upsert: true, new: true },
-  );
-} catch (err) {
-  console.error("Error saving API activity:", err.message);
-}
-next();
+  await Promise.all([
+    TotalUsage.findOneAndUpdate(
+      { key: "global" },
+      { $inc: { totalCount: 1 }, $setOnInsert: { key: "global" } },
+      { upsert: true, new: true },
+    ),
+    ServiceUsage.findOneAndUpdate(
+      { endpoint: req.path },
+      { $inc: { count: 1 } },
+      { upsert: true, new: true },
+    ),
+  ]);
+} catch (err) {
+  console.error("Error saving API activity:", err.message);
+}
+next();
```
Move telemetry off the hot request path.
This middleware still waits for one insert and two upserts before next(). If Mongo stalls, every API request stalls with it. Prefer res.on("finish") or a queue and make the tracking best-effort.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/middleware/apiActivityTracker.js` around lines 17 - 41, The
middleware currently awaits ApiActivity.save and the two upserts
(TotalUsage.findOneAndUpdate and ServiceUsage.findOneAndUpdate) before calling
next(), blocking the request on Mongo; change it to run telemetry after the
response finishes by moving the save/upsert logic into a res.on("finish")
callback (or enqueue the job) so you do not await those DB calls on the hot
path; specifically, wrap the ApiActivity creation and the two findOneAndUpdate
calls inside res.on("finish", async () => { try { ... await ApiActivity.save();
await Promise.all([TotalUsage.findOneAndUpdate(...),
ServiceUsage.findOneAndUpdate(...)]) } catch (err) { console.error(...) } });
and ensure the middleware calls next() immediately (no awaits) so tracking
becomes best-effort and non-blocking.
```js
const maskedEmail = email ? email.replace(/^(.{2}).*(@.*)$/, "$1***$2") : undefined;
console.log("Password reset requested for:", maskedEmail);
```
Avoid logging email identifiers in production.
This path logs maskedEmail unconditionally. Even masked identifiers can be sensitive in production logs; align with the existing non-production guard pattern used above.
🔧 Suggested change
```diff
 const maskedEmail = email ? email.replace(/^(.{2}).*(@.*)$/, "$1***$2") : undefined;
-console.log("Password reset requested for:", maskedEmail);
+if (process.env.NODE_ENV !== "production") {
+  console.log("Password reset requested for:", maskedEmail);
+}
 return res.json({
   msg: "If an account with that email exists, a password reset link has been sent.",
 });
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
const maskedEmail = email ? email.replace(/^(.{2}).*(@.*)$/, "$1***$2") : undefined;
if (process.env.NODE_ENV !== "production") {
  console.log("Password reset requested for:", maskedEmail);
}
return res.json({
  msg: "If an account with that email exists, a password reset link has been sent.",
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/auth.js` around lines 193 - 194, The current password-reset
route logs maskedEmail unconditionally; avoid emitting email identifiers in
production by guarding the log with the same non-production check used elsewhere
(e.g., wrap the console.log("Password reset requested for:", maskedEmail) behind
the NODE_ENV !== 'production' or equivalent environment check or use the
existing processLogger.debug-only path). Locate the maskedEmail variable and the
console.log in backend/routes/auth.js and restrict or remove the log in
production environments so only non-production runs output this debug info.
```js
res.status(500).json({
  msg: "Failed to trigger Supabase cleanup.",
  error: error.message,
```
Don't return raw backend error messages to clients.
error.message can expose internal Supabase or configuration details from the cleanup path. Keep the detailed error in logs and return only the generic failure message here.
🔒 Suggested change
```diff
 res.status(500).json({
   msg: "Failed to trigger Supabase cleanup.",
-  error: error.message,
 });
```
📝 Committable suggestion
```js
res.status(500).json({
  msg: "Failed to trigger Supabase cleanup.",
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/cleanSupabase.js` around lines 62 - 64, Remove returning
error.message in the response: instead of sending error.message in
res.status(500).json({...}), log the full error (e.g., console.error(error) or
call your existing logger) and return only a generic failure payload such as {
msg: "Failed to trigger Supabase cleanup." } or include a non-sensitive
errorCode; update the handler around res.status(500).json to stop exposing
error.message while preserving the detailed error in server logs.
```js
image.resize({ width: parsedWidth, height: parsedHeight });
const mime = `image/${outputFormat}`;
const resizedBuffer = await image.getBuffer(mime);
```
🧩 Analysis chain
🌐 Web query:
In Jimp v1, what option shape does `image.resize()` accept: `{ width, height }` or `{ w, h }`?
💡 Result:
In Jimp v1, the image.resize method accepts an options object with the shape { w, h }.
Citations:
- 1: https://jimp-dev.github.io/jimp/guides/migrate-to-v1/
- 2: Can't find 'resize/ method anymore jimp-dev/jimp#1304
- 3: https://www.jsdocs.io/package/jimp
- 4: https://jimp-dev.github.io/jimp/api/jimp/type-aliases/resizeoptions/
🏁 Script executed:
```shell
# First, check the actual code at the mentioned locations
echo "=== Lines 310-320 (around 313-315) ==="
sed -n '310,320p' backend/routes/imageConverter.js
echo -e "\n=== Lines 355-365 (around 360-362) ==="
sed -n '355,365p' backend/routes/imageConverter.js
echo -e "\n=== Checking Jimp version in package.json ==="
grep -A2 -B2 "jimp" package.json || echo "Jimp not found in package.json"
echo -e "\n=== All .resize( calls in the file ==="
grep -n "\.resize(" backend/routes/imageConverter.js
```
Repository: gaureshpai/dkutils
Length of output: 1210
🏁 Script executed:
```shell
# Check package-lock.json for Jimp version to confirm v1
grep -A5 '"jimp"' package-lock.json | head -20 || echo "Checking if Jimp is installed..."
grep '"jimp"' package.json package-lock.json 2>/dev/null | head -5
```
Repository: gaureshpai/dkutils
Length of output: 112
🏁 Script executed:
```shell
# Check what dependency files exist
echo "=== Dependency files ==="
ls -la | grep -E "package|yarn|pnpm"
# Check if there's a node_modules directory and Jimp version
echo -e "\n=== Checking node_modules for Jimp ==="
find . -name "jimp" -type d 2>/dev/null | head -5
# Check if there are any imports/requires of Jimp to see version hints
echo -e "\n=== Jimp imports/requires in the file ==="
grep -n "Jimp\|require.*jimp\|import.*jimp" backend/routes/imageConverter.js | head -10
```
Repository: gaureshpai/dkutils
Length of output: 933
🏁 Script executed:
```shell
# Check pnpm-lock.yaml for Jimp version
echo "=== Jimp version in pnpm-lock.yaml ==="
grep -A10 "jimp:" pnpm-lock.yaml | head -20
# Also check if there's a package.json in the root
echo -e "\n=== package.json content ==="
cat package.json
```
Repository: gaureshpai/dkutils
Length of output: 2127
Fix Jimp v1 resize option keys from width/height to w/h.
The code at lines 313 and 360 passes { width: parsedWidth, height: parsedHeight } to resize(), but Jimp v1 expects { w, h }. The current parameter names will not be recognized, preventing the resize dimensions from being applied.
🔧 Minimal fix
```diff
- image.resize({ width: parsedWidth, height: parsedHeight });
+ image.resize({ w: parsedWidth, h: parsedHeight });
```
Also applies to line 360.
📝 Committable suggestion
```js
image.resize({ w: parsedWidth, h: parsedHeight });
const mime = `image/${outputFormat}`;
const resizedBuffer = await image.getBuffer(mime);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/imageConverter.js` around lines 313 - 315, The resize call is
passing wrong option keys to Jimp v1; replace the object keys { width:
parsedWidth, height: parsedHeight } with { w: parsedWidth, h: parsedHeight }
wherever image.resize(...) is invoked (e.g., the call that precedes creation of
resizedBuffer and the similar call later around the second resize), so the
parsedWidth/parsedHeight values are actually applied.
```js
if (password.length < 8) {
  feedback.push("Password is too short (min 8 characters).");
} else if (password.length < 12) {
  score += 1;
} else {
  score += 2;
}
```
Short passwords can still get a high score.
At Line 19 the “too short” case adds feedback, but scoring continues at Line 27-47, so a 7-char password with mixed character types can still return a high score. That conflicts with the documented rule that short passwords are weak and can mislead clients that rely on score.
Proposed fix
```diff
 const checkPasswordStrength = (password) => {
   let score = 0;
   const feedback = [];
   if (password.length < 8) {
     feedback.push("Password is too short (min 8 characters).");
+    return { score: 0, feedback };
   } else if (password.length < 12) {
     score += 1;
   } else {
     score += 2;
   }
```
Also applies to: 27-53
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/passwordStrength.js` around lines 19 - 25, The current logic
adds feedback when password.length < 8 but continues computing other score
increments, so short passwords can still get a high score; modify the function
(the block using variables password, score, feedback) to short-circuit when
password.length < 8 — e.g., set score = 0 and immediately return the result (or
set a boolean isTooShort and skip all subsequent scoring blocks) so no further
increments (the branches that add to score for length, character classes, etc.)
run for short passwords; ensure the returned object still includes the feedback
explaining the length requirement.
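A short-circuiting sketch (the scoring weights after the length gate are illustrative, not the route's exact rules):

```javascript
// The length gate returns immediately, so a 7-character password can
// never accumulate points from later character-class checks.
function checkPasswordStrength(password) {
  let score = 0;
  const feedback = [];
  if (password.length < 8) {
    feedback.push("Password is too short (min 8 characters).");
    return { score: 0, feedback }; // skip all further scoring
  }
  score += password.length < 12 ? 1 : 2;
  if (/[a-z]/.test(password) && /[A-Z]/.test(password)) score += 1;
  if (/\d/.test(password)) score += 1;
  if (/[^a-zA-Z0-9]/.test(password)) score += 1;
  return { score, feedback };
}
```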
```diff
 // @route   GET /api/seo/robots-txt
 // @desc    Fetch and validate robots.txt from a domain
 // @access  Public
-router.post("/robots-txt", async (req, res) => {
-  const { domain } = req.body;
-
-  if (!domain) {
-    return res.status(400).json({ msg: "Domain is required." });
-  }
-
-  try {
-    await validateDomain(domain);
-  } catch (error) {
-    return res.status(400).json({ msg: error.message });
-  }
-
-  const url = `http://${domain}/robots.txt`;
-  const httpsUrl = `https://${domain}/robots.txt`;
-
-  let result = await fetchContent(httpsUrl);
-  // Fallback to HTTP if HTTPS fails with 404 or redirect-related errors
-  if (
-    !result.exists &&
-    (result.error === "File not found (404)" ||
-      result.error?.includes("redirect") ||
-      result.error?.includes("302") ||
-      result.error?.includes("301"))
-  ) {
-    result = await fetchContent(url);
-  }
-
-  return res.status(200).json(result);
+router.get("/robots-txt", async (req, res) => {
+  const { url } = req.query;
+
+  if (!url) {
+    return res.status(400).json({ msg: "URL is required" });
+  }
+
+  try {
+    const robotsUrl = new URL("/robots.txt", url);
+
+    let validatedAddresses;
+    try {
+      const validation = await validateDomain(robotsUrl.hostname);
+      validatedAddresses = validation;
+    } catch (validationError) {
+      return res.status(400).json({ msg: validationError.message });
+    }
+
+    let result = await fetchContent(robotsUrl.href, validatedAddresses);
+
+    if (!result.exists) {
+      // Try HTTP if HTTPS fails
+      const httpUrl = robotsUrl.href.replace(/^https:/, "http:");
+      result = await fetchContent(httpUrl, validatedAddresses);
+    }
+
+    if (result.exists) {
+      return res.json({
+        content: result.content,
+        url: robotsUrl.href,
+      });
+    }
+
+    return res.status(404).json({ msg: result.error || "robots.txt not found" });
```
This API contract change breaks the current frontend caller.
frontend/src/components/SeoTools.jsx:58-90 and frontend/src/components/SeoTools.jsx:96-128 still POST { domain } and branch on res.data.exists. These handlers now require GET ?url= and never return exists, so successful responses will be treated as "not found" until the client is updated in the same PR or the old contract is preserved temporarily.
Also applies to: 164-200
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/seoTools.js` around lines 121 - 157, The route change to
router.get("/robots-txt") broke the frontend contract (it used POST { domain }
and expected res.data.exists); update the endpoint(s) so they accept the old
input and return the old shape: support POST payload { domain } in addition to
GET ?url= (or detect both in the same handler using req.query.url ||
req.body.domain) and include an exists boolean in the JSON response along with
content and url (i.e., preserve result.exists in the returned object). Apply the
same compatibility fix to the similar sitemap handler around the 164-200 region
that also changed contract; keep using validateDomain and fetchContent to
determine exists/content/url but ensure the response fields match the previous
frontend expectations.
```js
} catch (err) {
  console.error(err.message);
  res.status(500).json({ error: err.message || "Server Error" });
}
```
Unify error response key to avoid frontend contract break.
Line 53 returns { error: ... }, but this route already uses { msg: ... } (Lines 16 and 23), and frontend/src/components/TextToPdfGenerator.jsx (Lines 59-61) reads err.response?.data?.msg. This causes the UI to fall back to the generic toast instead of showing the backend error.
💡 Suggested fix
```diff
 } catch (err) {
   console.error(err.message);
-  res.status(500).json({ error: err.message || "Server Error" });
+  res.status(500).json({ msg: err.message || "Server Error" });
 }
```
📝 Committable suggestion
```js
} catch (err) {
  console.error(err.message);
  res.status(500).json({ msg: err.message || "Server Error" });
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/textToPdf.js` around lines 51 - 54, The catch block currently
returns res.status(500).json({ error: ... }) which is inconsistent with earlier
responses that use the msg key and breaks the frontend expecting
err.response?.data?.msg; update the catch handler (the catch (err) block that
calls res.status(500).json) to return res.status(500).json({ msg: err.message ||
"Server Error" }) so the error shape matches prior responses and the frontend
(TextToPdfGenerator.jsx) can read the message; keep the console.error(err)
logging but change the response key from error to msg.
```js
// Find PDF header within the first 1KB
const headerSearchLimit = Math.min(file.buffer.length, 1024);
const headerSearchBuffer = file.buffer
  .slice(0, headerSearchLimit)
  .toString("utf8", 0, headerSearchLimit);
const headerIndex = headerSearchBuffer.indexOf("%PDF-");

if (headerIndex === -1) {
  throw new Error("Invalid PDF file signature. The file is not a valid PDF.");
}

// Magic bytes validation for PDF files (relative to found header)
const pdfSignature = Buffer.from([0x25, 0x50, 0x44, 0x46, 0x2d]); // %PDF-
const fileSignature = file.buffer.slice(headerIndex, headerIndex + 5);

if (!pdfSignature.equals(fileSignature)) {
  throw new Error("Invalid PDF file signature. The file is not a valid PDF.");
}

// Additional validation: check for PDF version (relative to found header)
const pdfVersionStart = headerIndex + 5;
const pdfVersionEnd = headerIndex + 8;
const pdfVersion = file.buffer.slice(pdfVersionStart, pdfVersionEnd).toString();
```
🧩 Analysis chain
🌐 Web query:
In Node.js, if a Buffer is converted to UTF-8 text before searching for an ASCII marker, can the resulting string index differ from the original byte offset for arbitrary binary data?
💡 Result:
Yes. Buffers store raw bytes, while UTF-8 strings store Unicode code points. When converting via `buf.toString('utf8')`, invalid UTF-8 byte sequences are replaced with the Unicode replacement character U+FFFD (�), which consumes one character position in the string but can correspond to multiple bytes in the original Buffer. ASCII bytes (0x00-0x7F) are single bytes and valid UTF-8, mapping 1:1.

For an ASCII marker:
- If no invalid sequences precede it, the string index equals the byte offset (both advance one position per ASCII byte).
- If invalid sequences before it collapse into � characters, the string index will be smaller than the byte offset, because several bytes can map to a single �.

Example: a Buffer with bytes [0xE2, 0x82, 0x41] (a truncated multi-byte sequence followed by 'A') decodes to the string "�A" — 'A' sits at string index 1 but at byte offset 2. The Node.js docs confirm that invalid byte sequences in the input are replaced with U+FFFD during `'utf8'` decoding. For arbitrary binary data containing non-UTF-8 bytes before the marker, the positions can therefore differ; use Buffer methods like `indexOf(value, byteOffset)` directly on the binary data to avoid mismatches.
Citations:
- 1: https://nodejs.org/api/buffer.html
- 2: https://stackoverflow.com/questions/68756320/unexpected-result-when-calling-tostring-on-a-buffer-in-node
- 3: Buffer.toString('utf8') appears to use wtf-8 nodejs/node#23280
- 4: https://stackoverflow.com/questions/34048889/why-are-buffer-and-new-bufferbuffer-tostring-not-always-byte-for-byte-eq
🏁 Script executed:
```shell
cat -n backend/utils/pdfErrorHandler.js | sed -n '76,98p'
```

Repository: gaureshpai/dkutils
Length of output: 1193
Use Buffer.indexOf() to search for the PDF header instead of decoding to UTF-8 first.
The code decodes the buffer to UTF-8 text before searching for %PDF-, then reuses the string index as a byte offset into the original buffer. Invalid UTF-8 sequences before the header will be replaced with the Unicode replacement character, causing the string index to diverge from the byte offset. This can result in reading the wrong bytes for signature validation, rejecting valid PDFs or bypassing validation for malformed ones. Search the raw buffer directly to keep byte offsets accurate.
🔧 Minimal fix
```diff
  // Find PDF header within the first 1KB
  const headerSearchLimit = Math.min(file.buffer.length, 1024);
- const headerSearchBuffer = file.buffer
-   .slice(0, headerSearchLimit)
-   .toString("utf8", 0, headerSearchLimit);
- const headerIndex = headerSearchBuffer.indexOf("%PDF-");
+ const pdfSignature = Buffer.from("%PDF-");
+ const headerIndex = file.buffer.subarray(0, headerSearchLimit).indexOf(pdfSignature);
  if (headerIndex === -1) {
    throw new Error("Invalid PDF file signature. The file is not a valid PDF.");
  }
  // Magic bytes validation for PDF files (relative to found header)
- const pdfSignature = Buffer.from([0x25, 0x50, 0x44, 0x46, 0x2d]); // %PDF-
  const fileSignature = file.buffer.slice(headerIndex, headerIndex + 5);
```

📝 Committable suggestion
```js
// Find PDF header within the first 1KB
const headerSearchLimit = Math.min(file.buffer.length, 1024);
const pdfSignature = Buffer.from("%PDF-");
const headerIndex = file.buffer.subarray(0, headerSearchLimit).indexOf(pdfSignature);

if (headerIndex === -1) {
  throw new Error("Invalid PDF file signature. The file is not a valid PDF.");
}

// Magic bytes validation for PDF files (relative to found header)
const fileSignature = file.buffer.slice(headerIndex, headerIndex + 5);

if (!pdfSignature.equals(fileSignature)) {
  throw new Error("Invalid PDF file signature. The file is not a valid PDF.");
}

// Additional validation: check for PDF version (relative to found header)
const pdfVersionStart = headerIndex + 5;
const pdfVersionEnd = headerIndex + 8;
const pdfVersion = file.buffer.slice(pdfVersionStart, pdfVersionEnd).toString();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/utils/pdfErrorHandler.js` around lines 76 - 98, The code decodes the
buffer to UTF-8 and uses string.indexOf to find "%PDF-" which can desynchronize
byte offsets; instead use Buffer.indexOf on file.buffer to locate the header so
offsets remain byte-accurate: replace the header discovery logic that sets
headerSearchLimit, headerSearchBuffer, and headerIndex with a raw search using
file.buffer.indexOf(Buffer.from("%PDF-"), 0, "ascii") (or
file.buffer.indexOf("%PDF-") directly) and then base subsequent magic-byte
checks (pdfSignature, fileSignature) and version slicing (pdfVersionStart,
pdfVersionEnd) off that byte index; throw the same errors if indexOf returns -1
or if pdfSignature.equals(fileSignature) fails.
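The offset divergence is easy to reproduce. The sketch below contrasts the string-based search with a raw `Buffer` search; the three-byte prefix is an arbitrary truncated UTF-8 sequence chosen purely for illustration:

```javascript
// A truncated 4-byte UTF-8 sequence (3 bytes) placed before the marker.
// Node's decoder collapses such invalid sequences into replacement
// characters, so string indices can lag behind byte offsets.
const prefix = Buffer.from([0xf0, 0x9f, 0x92]); // incomplete emoji sequence
const buf = Buffer.concat([prefix, Buffer.from("%PDF-1.7 ...")]);

// String-based search: the index is measured in characters, not bytes.
const strIndex = buf.toString("utf8").indexOf("%PDF-");

// Buffer-based search: the index is a true byte offset.
const byteIndex = buf.indexOf(Buffer.from("%PDF-"));

// byteIndex is 3; strIndex can be smaller after replacement-char collapsing,
// so slicing the buffer at strIndex would read the wrong bytes.
console.log({ strIndex, byteIndex });
console.log(buf.subarray(byteIndex, byteIndex + 5).toString()); // "%PDF-"
```

Slicing at `byteIndex` always recovers the exact `%PDF-` bytes, which is why the review recommends `Buffer.indexOf` for the header search.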
```js
const onFileChange = (e) => {
  setConvertedFile(null);
  // Increment conversion ID to invalidate any pending requests when files change
  conversionIdRef.current += 1;
```
Invalidating a conversion on file change can leave loading stuck.
Line 37 makes the in-flight request stale, so the guarded finally at Line 122 never clears loading. If the user picks different files mid-conversion, the button can stay disabled indefinitely and the next conversion is blocked.
Suggested fix
const onFileChange = (e) => {
setConvertedFile(null);
+ setLoading(false);
// Increment conversion ID to invalidate any pending requests when files change
conversionIdRef.current += 1;Also applies to: 120-124
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ImageToPdfConverter.jsx` around lines 34 - 37, The
conversion invalidation increments conversionIdRef.current in onFileChange (and
elsewhere) but doesn't clear the loading state, so a stale in-flight promise's
guarded finally won't reset loading; fix by, in onFileChange (and the other
invalidation sites), immediately call setLoading(false) (and keep
setConvertedFile(null)), and in the async conversion handler capture const myId
= conversionIdRef.current at start and only clear loading/assign convertedFile
in finally when myId === conversionIdRef.current to avoid races; reference
onFileChange, conversionIdRef, setLoading, and the conversion async
function/finally guard.
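The id-guard pattern the prompt describes can be exercised without React. A minimal sketch, where plain objects stand in for `useRef` and the component's loading state, and `work` is an illustrative stand-in for the axios call:

```javascript
// React-free simulation of the conversionId race guard.
// `ref` mimics useRef(0); `state.loading` mimics the loading state.
function makeConverter() {
  const ref = { current: 0 };
  const state = { loading: false };

  async function convert(work) {
    const myId = ++ref.current; // capture the id for this attempt
    state.loading = true;
    try {
      return await work();
    } finally {
      // Only the latest attempt may clear the flag; stale attempts no-op.
      if (myId === ref.current) state.loading = false;
    }
  }

  function onFileChange() {
    ref.current += 1;      // invalidate any in-flight conversion
    state.loading = false; // reset immediately so the button is never stuck
  }

  return { convert, onFileChange, state };
}
```

Picking new files mid-conversion now both invalidates the pending request and clears the loading flag, and the stale promise's guarded `finally` can no longer re-disable the UI.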
Previous version:

```js
const onFileChange = (e) => {
  const file = e.target.files[0];
  const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;

  if (file && file.type === "application/pdf") {
    if (file.size > maxFileSize) {
      toast.error(
        `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
      );
      setSelectedFile(null);
      e.target.value = null;
    } else {
      setSelectedFile(file);
    }
  } else {
    toast.error("Please select a PDF file.");
    setSelectedFile(null);
    e.target.value = null;
  }
};
```

Current version (under review):

```js
const onFileChange = (e) => {
  const file = e.target.files[0];
  const maxFileSize = (state?.isAuthenticated ?? false) ? 50 * 1024 * 1024 : 10 * 1024 * 1024;
  if (!file || e.target.files.length === 0) {
    setSelectedFile(null);
    e.target.value = null;
    return;
  }
  if (file.type === "application/pdf") {
    if (file.size > maxFileSize) {
      toast.error(
        isAuthenticated
          ? `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB.`
          : `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
      );
```
Fix the undefined auth flag in the oversize branch.
isAuthenticated is never declared in this component. When a file exceeds the limit, this path throws a ReferenceError instead of showing the validation toast. Reuse the same derived boolean you already use for maxFileSize.
🐛 Proposed fix
```diff
 const onFileChange = (e) => {
   const file = e.target.files[0];
-  const maxFileSize = (state?.isAuthenticated ?? false) ? 50 * 1024 * 1024 : 10 * 1024 * 1024;
+  const isAuthenticated = state?.isAuthenticated ?? false;
+  const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/PdfCompressor.jsx` around lines 30 - 46, The oversize
branch in onFileChange references an undefined isAuthenticated causing a
ReferenceError; fix by reusing the derived boolean used for maxFileSize (e.g.,
create a local const like isAuth = state?.isAuthenticated ?? false or use
state?.isAuthenticated directly) and replace the undefined isAuthenticated in
the toast messages with that local boolean so the validation toast displays
correctly when file.size > maxFileSize in PdfCompressor.jsx.
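One way to keep the auth flag and the size check from drifting apart again is to pull the whole validation into a pure function that derives everything from a single boolean. A sketch with illustrative names (`checkPdfSelection` is not part of the component):

```javascript
const MB = 1024 * 1024;

// Pure validation helper: the limit and the message are both derived from
// the same auth flag, so the toast can never reference an undeclared variable.
function checkPdfSelection(file, isAuthenticated) {
  const maxFileSize = isAuthenticated ? 50 * MB : 10 * MB;
  if (!file) return { ok: false, msg: null };
  if (file.type !== "application/pdf") {
    return { ok: false, msg: "Please select a PDF file." };
  }
  if (file.size > maxFileSize) {
    const limitMb = maxFileSize / MB;
    return {
      ok: false,
      msg: isAuthenticated
        ? `File too large: ${file.name}. Maximum size is ${limitMb}MB.`
        : `File too large: ${file.name}. Maximum size is ${limitMb}MB. Login for a higher limit (50MB).`,
    };
  }
  return { ok: true, msg: null };
}
```

The component's `onFileChange` would then call this helper, show `msg` via `toast.error` when `ok` is false, and reset the input, so the three branches cannot disagree about which auth flag they read.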
Summary by CodeRabbit
New Features
Improvements
Documentation
Chores