
feat(dev-portal-api): INFRA-1582 enable auto-publishing #7450

Open
SavelevMatthew wants to merge 5 commits into main from feat/INFRA-1585/crons

Conversation

@SavelevMatthew (Member) commented Apr 10, 2026

TODO

  • Cron sync (republish if changed)
  • conflictPolicy import tests
  • Web (change alerts and remove info publish)

Summary by CodeRabbit

  • New Features

    • Automatic publishing of B2C apps when key fields change (dev/prod)
    • Import conflict policies for app imports: panic, setNull, delete
    • Concurrency-safe publishing with distributed locks and a recover cron to re-schedule missed publishes
    • Background publish task exposed for scheduled/delayed publishes
  • Tests

    • Expanded tests for auto-publishing, publish-request approval flows, and import conflict policies

@coderabbitai bot (Contributor) commented Apr 10, 2026

📝 Walkthrough

Adds auto-publish hooks, import conflict-resolution, task-based publishing with environment-aware prechecks, KV-backed distributed locking for publish operations, a cron recoverer for publishing state, and associated tests and task exports.

Changes

Cohort / File(s) — Summary

  • Schema: Auto-publish hooks — apps/dev-portal-api/domains/miniapp/schema/B2CApp.js
    Added change-detection helpers, a composed afterChange hook (composeNonResolveInputHook(logoMetaAfterChange, autoPublishAfterChange)), and autoPublishAfterChange, which schedules publish tasks for dev/prod when specific fields differ.
  • Schema: Import service — apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.js
    Added an ImportAppConflictPolicy enum and options.conflictPolicy; implemented a resolveConflicts flow to handle matching condo apps with the policies panic, setNull, or delete.
  • Schema: Publish service (locking) — apps/dev-portal-api/domains/miniapp/schema/PublishB2CAppService.js
    Introduced KVLock usage and wrapWithLock to serialize the publish mutation by app+environment; applied the wrapper to the publishB2CApp resolver.
  • Tasks: publish & recover — apps/dev-portal-api/domains/miniapp/tasks/publishB2CApp.js, apps/dev-portal-api/domains/miniapp/tasks/recoverB2CPublishingState.js, apps/dev-portal-api/domains/miniapp/tasks/index.js
    Added a publishB2CApp task (environment pre-check for PROD, executes the publish mutation) and a recoverB2CPublishingState cron task (scans apps, compares env-specific publishedAt, schedules publish when needed); re-exported the tasks.
  • KV Lock implementation — packages/keystone/locks.js
    New KVLock implementation with retry/backoff and a UUID token; acquire() returns a Lock whose release() uses an atomic eval script; throws on acquisition failure.
  • Tests & test utils — apps/dev-portal-api/domains/miniapp/schema/B2CApp.test.js, ImportB2CAppService.test.js, PublishB2CAppService.test.js, apps/dev-portal-api/domains/miniapp/utils/testSchema/index.js
    Updated/added tests for auto-publish behavior and publish tasks; updated import tests to exercise conflictPolicy; adjusted test utilities to wait/poll for auto-publish and to include conflictPolicy in test imports.
  • App integration — apps/dev-portal-api/index.js
    Registered the miniapp tasks module in the application tasks() array to enable task execution at runtime.
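The auto-publish hook summarized above reduces to field-level change detection plus conditional task scheduling. A minimal sketch, where the field lists and function names are illustrative assumptions rather than the repository's actual constants:

```javascript
// Hypothetical field lists — the real schema defines its own changeable fields
const DEV_CHANGEABLE_FIELDS = ['name', 'shortDescription', 'logo']
const PROD_CHANGEABLE_FIELDS = ['name', 'logo']

// Returns the subset of `fields` whose values differ between the two versions
function changedFields (existingItem, updatedItem, fields) {
    return fields.filter((field) => (existingItem || {})[field] !== updatedItem[field])
}

// Decides which environments need a republish after a save
function environmentsToPublish (existingItem, updatedItem) {
    const environments = []
    if (changedFields(existingItem, updatedItem, DEV_CHANGEABLE_FIELDS).length > 0) environments.push('development')
    if (changedFields(existingItem, updatedItem, PROD_CHANGEABLE_FIELDS).length > 0) environments.push('production')
    return environments
}
```

In the real hook, a function like `environmentsToPublish` would run inside `autoPublishAfterChange`, and each returned environment would be handed to `publishB2CAppTask.delay(appId, environment)`.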

Sequence Diagram

sequenceDiagram
    actor User
    participant Schema as B2CApp Schema
    participant Hook as autoPublishAfterChange
    participant TaskQueue as publishB2CApp Task
    participant TaskSvc as PublishB2CAppService
    participant KV as KVLock
    participant DB as Keystone DB

    User->>Schema: Save/Update B2CApp
    Schema->>Hook: afterChange hook runs
    Hook->>Hook: Compare updated vs existing fields
    alt Changes detected
        Hook->>TaskQueue: schedule publishB2CApp.delay(appId, environment)
    end
    TaskQueue->>TaskSvc: Task runner invokes publishB2CApp(appId, env)
    alt env == PROD
        TaskSvc->>DB: query B2CAppPublishRequest for approved request
        alt approved found
            TaskSvc->>KV: acquire(lockKey=publish:appId:env)
            KV->>TaskSvc: lock acquired
            TaskSvc->>DB: execute publish mutation
            TaskSvc->>KV: release()
        else none
            TaskSvc->>TaskSvc: exit early
        end
    else env == DEV
        TaskSvc->>KV: acquire(lockKey=publish:appId:env)
        KV->>TaskSvc: lock acquired
        TaskSvc->>DB: execute publish mutation
        TaskSvc->>KV: release()
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 I watched the hooks and tasks align,
Locks in place so jobs don't collide,
Conflicts sorted, cron keeps time,
Dev and Prod now publish side by side. 🥕✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed (check skipped because CodeRabbit’s high-level summary is enabled)
  • Title check — ✅ Passed: the title clearly summarizes the main feature, enabling auto-publishing for dev-portal-api, and references the related ticket (INFRA-1582).



@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: bdc7faaead

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
packages/keystone/locks.js (2)

71-73: Consider exporting LockAcquisitionError for specific error handling.

Callers may want to catch and handle lock acquisition failures differently from other errors. Currently, they would need to match on error.name === 'LockAcquisitionError' or check the message.

♻️ Export the error class
 module.exports = {
     KVLock,
+    LockAcquisitionError,
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/keystone/locks.js` around lines 71 - 73, The module currently only
exports KVLock but not LockAcquisitionError, making callers unable to import the
specific error type; update the exports in locks.js to also export the
LockAcquisitionError class so callers can import and catch it directly (e.g.,
add LockAcquisitionError to module.exports alongside KVLock) and ensure any
tests/imports referencing LockAcquisitionError use the named export.

18-21: Consider verifying lock ownership before release.

The release() method deletes the lock key without verifying that this process still owns it. If the lock expired (after lockDuration) and another process acquired it, calling release() would delete the other process's lock.

For the current use case with a 30-second lock duration, this may be acceptable. However, for robustness, consider storing the UUID value and using a conditional delete (e.g., Redis Lua script or compare-and-delete).

♻️ Suggested approach using conditional delete
 class Lock {
-    constructor (key, client) {
+    constructor (key, value, client) {
         this._key = key
+        this._value = value
         this._client = client
     }

     async release () {
-        const result = await this._client.del(this._key)
-        return Boolean(result)
+        // Only delete if we still own the lock
+        const currentValue = await this._client.get(this._key)
+        if (currentValue === this._value) {
+            const result = await this._client.del(this._key)
+            return Boolean(result)
+        }
+        return false
     }
 }

And update the Lock instantiation in acquire():

-                return new Lock(kvKey, this._client)
+                return new Lock(kvKey, value, this._client)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/keystone/locks.js` around lines 18 - 21, The release() method
currently calls this._client.del(this._key) without checking ownership; modify
acquire() to store the lock token/UUID on the Lock instance (e.g., this._token)
when a lock is granted, and change release() to perform a conditional delete
that only removes the key if its value matches this._token (use a Redis EVAL Lua
script or a compare-and-delete pattern supported by the client) instead of
unconditionally calling this._client.del(this._key), so you never delete another
process's lock after expiration.
apps/dev-portal-api/domains/miniapp/schema/B2CApp.test.js (1)

44-46: Avoid fixed sleeps in the production publish test.

The sleep(3_000) makes this timing-dependent but still does not prove the pre-approval PROD task was skipped. If the worker is slow, that first queued job can run only after the request becomes approved, and this test will still pass for the wrong reason. Prefer a deterministic signal here instead of a wall-clock delay.

Also applies to: 352-353

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dev-portal-api/domains/miniapp/schema/B2CApp.test.js` around lines 44 -
46, The test uses a fixed sleep function (sleep) to wait for background work
which is timing-dependent; replace the wall-clock sleep(3_000) uses with a
deterministic wait that observes the actual job/approval state (for example:
poll the job queue or DB record the worker reads, or await a worker-provided
event/Promise) so the test asserts that the pre-approval PROD task was not
enqueued/run rather than relying on timing. Locate references to sleep and the
test that publishes to PROD (B2CApp.test.js) and change the logic to wait for a
specific signal (e.g., check the job status via getJob/getQueuedTasks, or
resolve a Promise from a test-hook/event emitted by the worker) before asserting
the PROD task was skipped; remove the fixed sleep helper entirely or keep it but
stop calling it in these tests.
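A deterministic wait along the lines suggested here is typically a small polling helper. A sketch, where `checkFn` is any predicate over the job queue or DB state the worker writes (the option names and defaults are assumptions):

```javascript
// Polls `checkFn` until it returns truthy or the timeout elapses.
// Replaces fixed sleeps like sleep(3_000) with a bounded, condition-based wait.
async function waitFor (checkFn, { timeout = 10_000, interval = 100 } = {}) {
    const deadline = Date.now() + timeout
    while (Date.now() < deadline) {
        if (await checkFn()) return true
        await new Promise((resolve) => setTimeout(resolve, interval))
    }
    throw new Error(`Condition not met within ${timeout}ms`)
}
```

In the test, `await sleep(3_000)` would become something like `await waitFor(() => isTaskScheduled(app.id, 'production'))`, where `isTaskScheduled` is whatever hypothetical accessor the test harness exposes over the queue.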
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dev-portal-api/domains/miniapp/schema/B2CApp.js`:
- Around line 34-40: autoPublishAfterChange currently enqueues a PROD publish
unconditionally when PROD_CHANGEABLE_FIELDS differ; change it to only enqueue
PROD publish when the updatedItem carries an approval flag/state (e.g.,
updatedItem.approved or updatedItem.approvalStatus) indicating the save is
approved, or alternatively include version/timestamp metadata (e.g.,
updatedItem.updatedAt or updatedItem.version) with publishB2CAppTask.delay so
the worker can reject stale jobs; update the logic around
publishB2CAppTask.delay, referencing autoPublishAfterChange, updatedItem,
existingItem, PROD_CHANGEABLE_FIELDS, PROD_ENVIRONMENT, and ensure the worker
side checks the same approval/version when processing.

In `@apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.js`:
- Around line 393-395: The conditional is comparing an array to a number so it
never passes; change the check on Object.keys(apiUpdateInput) to test its length
(e.g., Object.keys(apiUpdateInput).length > 2) so the B2CApp.update(context,
app.id, apiUpdateInput) call runs when developmentExportId or productionExportId
are being nullified; update the conditional that currently references
Object.keys(apiUpdateInput) and leave the B2CApp.update call and apiUpdateInput
construction unchanged.

In `@apps/dev-portal-api/domains/miniapp/schema/PublishB2CAppService.js`:
- Around line 469-476: The catch block is releasing the lock redundantly because
the finally block already releases it; in the resolver wrapper around
originalResolver remove the await lock.release() from the catch clause and
simply rethrow the caught error (throw e) so the finally block alone calls await
lock.release(); keep the try/catch/finally structure but ensure only the finally
invokes lock.release() and errors are propagated unchanged from the catch via
rethrow.

In `@apps/dev-portal-api/domains/miniapp/tasks/publishB2CApp.js`:
- Around line 23-33: The catch block in the try around publishB2CAppMutation
swallows errors; update it to log the failure via the repository Pino logger
(use repoLogger.error with an object containing msg and standard fields like
task:'publishB2CApp', appId, environment, and the error) and then rethrow the
error so the task runner can retry/monitor; locate the try/catch around
publishB2CAppMutation(context, { ... }) and replace the console.error(e) with
structured repoLogger.error({ msg: 'publishB2CApp failed', task:
'publishB2CApp', appId, environment, err: e }) followed by throw e.

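The `Object.keys(apiUpdateInput) > 2` comparison called out for ImportB2CAppService.js fails because of implicit coercion: the key array is converted to a string, then to NaN, and every comparison with NaN is false. A short demonstration (this `apiUpdateInput` shape is invented for illustration):

```javascript
const apiUpdateInput = { dv: 1, sender: 'x', developmentExportId: null }

// Array > number coerces the array to a string, then to NaN → always false
const brokenCheck = Object.keys(apiUpdateInput) > 2

// Comparing the length behaves as intended: 3 > 2 → true
const fixedCheck = Object.keys(apiUpdateInput).length > 2
```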

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: eff5d3e6-ee67-41d0-91b1-4256a0dbe4fb

📥 Commits

Reviewing files that changed from the base of the PR and between af4016f and bdc7faa.

⛔ Files ignored due to path filters (1)
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (11)
  • apps/dev-portal-api/domains/miniapp/schema/B2CApp.js
  • apps/dev-portal-api/domains/miniapp/schema/B2CApp.test.js
  • apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.js
  • apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.test.js
  • apps/dev-portal-api/domains/miniapp/schema/PublishB2CAppService.js
  • apps/dev-portal-api/domains/miniapp/schema/PublishB2CAppService.test.js
  • apps/dev-portal-api/domains/miniapp/tasks/index.js
  • apps/dev-portal-api/domains/miniapp/tasks/publishB2CApp.js
  • apps/dev-portal-api/domains/miniapp/utils/testSchema/index.js
  • apps/dev-portal-api/index.js
  • packages/keystone/locks.js

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dev-portal-api/domains/miniapp/tasks/recoverB2CPublishingState.js`:
- Around line 25-26: The loop currently awaits publishB2CApp.delay(b2cApp.id,
environment) directly which lets any rejection abort the whole recovery pass;
wrap each per-item enqueue call to publishB2CApp.delay(b2cApp.id, environment)
in a try/catch inside the loop (the code handling b2cApp/environment) so a
failure logs an error containing the b2cApp.id and environment (and the caught
error) and then continues to the next app/environment instead of throwing.
- Around line 12-14: The current implementation in recoverB2CPublishingState.js
fetches all non-deleted B2CApp records into memory via find('B2CApp', {
deletedAt: null }), which can blow up; change it to paginated/batched
processing: replace the single find call that assigns allB2CApps with a loop
that repeatedly queries a limited page (e.g., using limit/offset or a stable
cursor/createdAt id) and processes each page, continuing until no more rows, so
each batch is handled and garbage-collected before fetching the next; keep the
existing processing logic but operate on the page variable (instead of
allB2CApps) and ensure the pagination key (offset, lastId, or createdAt) and a
reasonable pageSize are configurable.

In `@packages/keystone/locks.js`:
- Around line 47-53: Validate lock key and retryCount early and ensure acquire
always returns a boolean instead of undefined: in the constructor (constructor)
throw if settings.key is missing or not a non-empty string (to avoid keys like
"lock:undefined") and coerce/validate other settings against DEFAULT_SETTINGS;
in acquire() validate this._settings.retryCount is either -1 or a non-negative
integer (throw on other invalid values) and change the loop to guarantee at
least one attempt (e.g., use a do/while style condition or adjust the loop
check) so retryCount === 0 performs a single try instead of causing an undefined
return; update any related logic in the acquire/release methods that rely on the
return value to expect true/false.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cf4ea02c-0f3c-49e7-a333-42469e70eb1a

📥 Commits

Reviewing files that changed from the base of the PR and between bdc7faa and 5344b87.

⛔ Files ignored due to path filters (1)
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (7)
  • apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.js
  • apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.test.js
  • apps/dev-portal-api/domains/miniapp/schema/PublishB2CAppService.js
  • apps/dev-portal-api/domains/miniapp/tasks/index.js
  • apps/dev-portal-api/domains/miniapp/tasks/publishB2CApp.js
  • apps/dev-portal-api/domains/miniapp/tasks/recoverB2CPublishingState.js
  • packages/keystone/locks.js
✅ Files skipped from review due to trivial changes (1)
  • apps/dev-portal-api/domains/miniapp/tasks/publishB2CApp.js
🚧 Files skipped from review as they are similar to previous changes (4)
  • apps/dev-portal-api/domains/miniapp/schema/PublishB2CAppService.js
  • apps/dev-portal-api/domains/miniapp/tasks/index.js
  • apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.js
  • apps/dev-portal-api/domains/miniapp/schema/ImportB2CAppService.test.js

Comment on lines +12 to +14
const allB2CApps = await find('B2CApp', {
    deletedAt: null,
})

⚠️ Potential issue | 🟠 Major

Avoid full-table loading in a 15-minute cron.

Line 12 currently pulls all non-deleted B2CApp records into memory each run. This will degrade as data grows and can make the cron runtime unpredictable. Please switch to batched/paginated processing.

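The batched scan recommended here can be sketched with a stable id cursor. The `find` function below is an in-memory stub mimicking a `where`/`first` query shape, not the actual Keystone utility, and the `id_gt` filter name is an assumption:

```javascript
// In-memory stand-in for the data set and query utility
const ALL_ROWS = Array.from({ length: 250 }, (_, i) => ({ id: String(i).padStart(4, '0'), deletedAt: null }))

async function find (where, { first }) {
    return ALL_ROWS
        .filter((row) => row.deletedAt === null && (!where.id_gt || row.id > where.id_gt))
        .sort((a, b) => a.id.localeCompare(b.id))
        .slice(0, first)
}

// Processes apps one bounded page at a time instead of loading the whole table
async function forEachB2CApp (processFn, pageSize = 100) {
    let lastId = null
    for (;;) {
        const page = await find(
            { deletedAt: null, ...(lastId ? { id_gt: lastId } : {}) },
            { first: pageSize }
        )
        if (page.length === 0) break
        for (const app of page) await processFn(app)
        lastId = page[page.length - 1].id // advance the cursor past this page
    }
}
```

Each batch becomes garbage-collectable before the next is fetched, which keeps the 15-minute cron's memory footprint flat as the table grows.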

Comment on lines +25 to +26
await publishB2CApp.delay(b2cApp.id, environment)
}

⚠️ Potential issue | 🟠 Major

Isolate enqueue failures so one app does not abort the whole recovery pass.

At Line 25, any publishB2CApp.delay(...) rejection exits the function and skips remaining apps/environments. Wrap per-item enqueue in try/catch and continue processing, with an error log for failed items.

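Per-item isolation for the enqueue loop might look like the following sketch, where `enqueue` stands in for `publishB2CApp.delay` and `logger` for the repository's Pino logger:

```javascript
// Schedules a publish for every app/environment pair; a single rejected
// enqueue is logged and skipped instead of aborting the whole recovery pass.
async function scheduleAll (apps, environments, enqueue, logger) {
    let scheduled = 0
    for (const app of apps) {
        for (const environment of environments) {
            try {
                await enqueue(app.id, environment)
                scheduled++
            } catch (err) {
                logger.error({ msg: 'failed to schedule publish', appId: app.id, environment, err })
            }
        }
    }
    return scheduled
}
```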

Comment on lines +47 to +53
constructor (settings) {
    this._client = getKVClient()
    this._settings = {
        ...DEFAULT_SETTINGS,
        ...settings,
    }
}

⚠️ Potential issue | 🟠 Major

Fail fast on invalid settings/key to prevent silent acquisition failures.

acquire() can return undefined when retryCount is 0 (or invalid negative values except -1) because the loop never runs. Also, missing key validation allows accidental lock key collisions like lock:undefined.

Proposed fix
 class KVLock {
     constructor (settings) {
         this._client = getKVClient()
-        this._settings = {
+        const mergedSettings = {
             ...DEFAULT_SETTINGS,
             ...settings,
         }
+
+        if (!Number.isInteger(mergedSettings.retryCount) || (mergedSettings.retryCount < 1 && mergedSettings.retryCount !== -1)) {
+            throw new TypeError('retryCount must be a positive integer or -1')
+        }
+        if (typeof mergedSettings.retryDelay !== 'number' || Number.isNaN(mergedSettings.retryDelay) || mergedSettings.retryDelay < 0) {
+            throw new TypeError('retryDelay must be a non-negative number')
+        }
+        if (typeof mergedSettings.retryJitter !== 'number' || Number.isNaN(mergedSettings.retryJitter) || mergedSettings.retryJitter < 0) {
+            throw new TypeError('retryJitter must be a non-negative number')
+        }
+        if (typeof mergedSettings.lockDuration !== 'number' || Number.isNaN(mergedSettings.lockDuration) || mergedSettings.lockDuration === 0) {
+            throw new TypeError('lockDuration must be a non-zero number')
+        }
+
+        this._settings = mergedSettings
     }
@@
     async acquire (key) {
+        if (typeof key !== 'string' || key.length === 0) {
+            throw new TypeError('key must be a non-empty string')
+        }
+
         const maxAttempts = this._settings.retryCount === -1 ? Infinity : this._settings.retryCount
         const value = generateUUIDv4()

Also applies to: 63-69, 70-85

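The fail-fast validation and guaranteed single attempt can be sketched as below. The setting names follow the proposed fix; `tryOnce` abstracts the KV SET-NX attempt, and everything else is illustrative:

```javascript
// Rejects invalid retryCount values up front instead of silently never looping
function validateSettings (settings) {
    const { retryCount } = settings
    if (!Number.isInteger(retryCount) || (retryCount < 1 && retryCount !== -1)) {
        throw new TypeError('retryCount must be a positive integer or -1')
    }
    return settings
}

// do/while guarantees at least one attempt; retryCount === -1 retries forever
async function acquireWithRetry (tryOnce, retryCount, retryDelay = 0) {
    const maxAttempts = retryCount === -1 ? Infinity : retryCount
    let attempt = 0
    do {
        if (await tryOnce()) return true
        attempt++
        if (attempt < maxAttempts && retryDelay > 0) {
            await new Promise((resolve) => setTimeout(resolve, retryDelay))
        }
    } while (attempt < maxAttempts)
    return false // always a boolean, never undefined
}
```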

@sonarqubecloud

Quality Gate failed

Failed conditions
1 Security Hotspot

See analysis details on SonarQube Cloud
