
Commit ad90fe3

feat(webapp): MOLLIFIER_DRAINER_ENABLED for per-service drainer control
The drainer's polling loop has been gated on WORKER_ENABLED, which couples it to the legacy ZodWorker role. To split the drainer onto a dedicated worker service in cloud (and keep all other replicas as producer-only), introduce its own switch.

Semantics:

- Unset → inherits MOLLIFIER_ENABLED. Single-container self-hosters with MOLLIFIER_ENABLED=1 get the drainer for free; no second flag to remember.
- Explicit MOLLIFIER_DRAINER_ENABLED=0 → drainer off on this replica. Cloud sets this everywhere except the dedicated drainer service.
- Explicit MOLLIFIER_DRAINER_ENABLED=1 → drainer on, subject to MOLLIFIER_ENABLED still being the master kill switch (a drainer can't construct without the gate-side buffer singleton).

The bootstrap in mollifierDrainerWorker.server.ts now gates on the new flag instead of WORKER_ENABLED, so the drainer's lifecycle is no longer coupled to the legacy worker role.
1 parent 02c0b71 commit ad90fe3

2 files changed

Lines changed: 20 additions & 6 deletions


apps/webapp/app/env.server.ts

Lines changed: 11 additions & 0 deletions
@@ -1055,6 +1055,17 @@ const EnvironmentSchema = z
     COMMON_WORKER_REDIS_CLUSTER_MODE_ENABLED: z.string().default("0"),
 
     MOLLIFIER_ENABLED: z.string().default("0"),
+    // Separate switch for the drainer (consumer side) so it can be split
+    // off onto a dedicated worker service. Unset → inherits
+    // MOLLIFIER_ENABLED, so single-container self-hosters don't have to
+    // flip two switches. In multi-replica deployments, set this to "0"
+    // explicitly on every replica except the one dedicated drainer
+    // service — otherwise every replica's polling loop races for the
+    // same buffer entries. `MOLLIFIER_ENABLED` is still the master kill
+    // switch; setting this to "1" while `MOLLIFIER_ENABLED` is "0" is a
+    // no-op because the gate-side singleton refuses to construct a
+    // buffer when the system is off.
+    MOLLIFIER_DRAINER_ENABLED: z.string().default(process.env.MOLLIFIER_ENABLED ?? "0"),
     MOLLIFIER_SHADOW_MODE: z.string().default("0"),
     MOLLIFIER_REDIS_HOST: z
       .string()
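Under these semantics, the per-service environment in a multi-replica deployment would look roughly as follows. This is an illustrative sketch, not configuration from this commit:

```shell
# Dedicated drainer service: the only replica whose polling loop drains the buffer.
MOLLIFIER_ENABLED=1
MOLLIFIER_DRAINER_ENABLED=1

# Every other cloud replica: produces into the buffer, never drains.
MOLLIFIER_ENABLED=1
MOLLIFIER_DRAINER_ENABLED=0

# Single-container self-host: drainer enabled implicitly via inheritance.
MOLLIFIER_ENABLED=1
```

One wrinkle worth noting: the Zod `.default(process.env.MOLLIFIER_ENABLED ?? "0")` expression reads the environment when the schema module is loaded, so the inherited default is fixed at startup rather than re-evaluated per parse.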

apps/webapp/app/v3/mollifierDrainerWorker.server.ts

Lines changed: 9 additions & 6 deletions
@@ -29,15 +29,18 @@ declare global {
  *   `batchTriggerWorker`).
  *
  * Gating order:
- * - `WORKER_ENABLED !== "true"` → early return (API-only replicas
- *   still produce into the buffer via the trigger hot path; only worker
- *   replicas drain it, otherwise every replica races for the same
- *   entries).
+ * - `MOLLIFIER_DRAINER_ENABLED !== "1"` → early return. Unset defaults
+ *   to `MOLLIFIER_ENABLED`, so single-container self-hosters still get
+ *   the drainer for free with one flag. In multi-replica deployments,
+ *   set this to "0" explicitly on every replica except the dedicated
+ *   drainer service so the polling loop doesn't race across replicas.
  * - `MOLLIFIER_ENABLED !== "1"` → `getMollifierDrainer()` returns null
- *   and the bootstrap is a no-op.
+ *   and the bootstrap is a no-op. `MOLLIFIER_ENABLED` remains the
+ *   master kill switch; the new flag only controls WHICH replicas
+ *   run the drainer when the system is on.
  */
 export function initMollifierDrainerWorker(): void {
-  if (env.WORKER_ENABLED !== "true") {
+  if (env.MOLLIFIER_DRAINER_ENABLED !== "1") {
     return;
   }
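The combined effect of the two gates described in the doc comment can be condensed into a truth table. `drainerShouldRun` is a hypothetical helper written for illustration, not code from this commit:

```typescript
// Hypothetical condensation of the two gates: MOLLIFIER_ENABLED is the
// master kill switch, and MOLLIFIER_DRAINER_ENABLED (inheriting it when
// unset) selects which replicas actually run the drainer.
function drainerShouldRun(env: Record<string, string | undefined>): boolean {
  const systemOn = env.MOLLIFIER_ENABLED === "1";
  const drainerFlag = env.MOLLIFIER_DRAINER_ENABLED ?? env.MOLLIFIER_ENABLED ?? "0";
  return systemOn && drainerFlag === "1";
}

// Single-container self-host: one flag, drainer inherited.
console.log(drainerShouldRun({ MOLLIFIER_ENABLED: "1" })); // true
// Producer-only cloud replica: drainer explicitly off.
console.log(drainerShouldRun({ MOLLIFIER_ENABLED: "1", MOLLIFIER_DRAINER_ENABLED: "0" })); // false
// Drainer flag on while the system is off: no-op, the buffer never constructs.
console.log(drainerShouldRun({ MOLLIFIER_DRAINER_ENABLED: "1" })); // false
```

Note the asymmetry the commit message calls out: the drainer flag can narrow where the drainer runs, but it can never widen past the master switch.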
