feat: update package and fix breaking changes #163
Conversation
📝 Walkthrough

Adds DB indexes, replaces CodeService with a new TotpService (otpauth) for MFA, namespaces several Redis keys and adds Redis caching for user info, migrates tracing/logging to OTLP/JSON, updates tooling/dependencies and TypeScript ESM settings, and aligns many tests to the new services, Redis, and Jest ESM imports.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
makefile (1)
30-32: ⚠️ Potential issue | 🔴 Critical
`test/load` target calls a non-existent `start:` rule.

Line 32 invokes `make start`, but no `start:` rule is defined in this makefile. This will fail with `No rule to make target 'start'` before k6 runs. Replace with the appropriate bootstrap target (likely `make infra/up` or another initialization step).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@makefile` around lines 30 - 32, the test/load Make target currently calls a non-existent target "start" which causes make to fail; edit the test/load recipe (the "test/load" target) to invoke the correct bootstrap target (e.g., replace the "make start" line with "make infra/up" or the actual initialization target used in this repo) so that the needed infra is brought up before running the tests; ensure the updated command references the correct target name (infra/up or your project's initialization rule) and remove any reference to the nonexistent "start" target.

src/core/providers/user.repository.ts (2)
230-252: ⚠️ Potential issue | 🟠 Major

Remove `limit(1)` before caching `user_info`.

`user_info` is reduced into `{ phone, deviceId, googleAuth }`, but `limit(1)` means only one row can ever participate. The new cache then persists that partial snapshot under `user:${userId}`, so whichever row the database returns first wins and the other fields disappear on every later lookup.

🧩 Minimal fix

```diff
 const userInfolist = await this.database<UserInfoRow>('user_info')
   .where('user_id', userId)
-  .limit(1)
 const info = userInfolist.reduce(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/providers/user.repository.ts` around lines 230 - 252, the query building userInfolist wrongly uses .limit(1) so only one UserInfoRow is fetched and the reduce in userInfolist -> info (producing phone, deviceId, googleAuth) can never see all rows; remove the .limit(1) from the database<UserInfoRow>('user_info').where('user_id', userId) call so the reduction aggregates all rows before calling this.cache.set(`user:${userId}`, JSON.stringify(info)).
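A minimal sketch of the intended aggregation, with the row shape and helper name assumed from the comment rather than taken from the repository:

```typescript
// Hypothetical row shape: one user_info row per attribute type. Without the
// limit(1), the reduce sees every row and the snapshot keeps all three fields.
type InfoField = 'phone' | 'deviceId' | 'googleAuth'

interface UserInfoRow {
  type: InfoField
  value: string
}

type UserInfo = Partial<Record<InfoField, string>>

function reduceUserInfo(rows: UserInfoRow[]): UserInfo {
  return rows.reduce<UserInfo>((acc, row) => {
    acc[row.type] = row.value
    return acc
  }, {})
}
```

With `limit(1)` in place, `rows` would hold at most one entry, so two of the three fields would always be missing from the cached snapshot.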
148-209: ⚠️ Potential issue | 🟠 Major

Invalidate `user:${userId}` whenever `user_info` changes.

`updatePhone`, `updateDevice`, and `updateGA` all mutate `user_info`, but `getUserInfoById()` now memoizes the old blob in Redis. After the first read, these write paths can leave stale phone/deviceId/GA data in cache indefinitely unless they delete or refresh that key.

Also applies to: 225-253
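A sketch of the invalidation pattern, using in-memory Maps as stand-ins for the Redis client and the `user_info` table; method names mirror the comment, everything else is an illustrative assumption (the real client calls would be awaited, e.g. `this.cache.del(`user:${userId}`)`):

```typescript
class UserRepositorySketch {
  private cache = new Map<string, string>() // stand-in for Redis
  private db = new Map<string, { phone?: string }>() // stand-in for user_info

  getUserInfoById(userId: string): { phone?: string } {
    const key = `user:${userId}`
    const cached = this.cache.get(key)
    if (cached !== undefined) return JSON.parse(cached)
    const info = this.db.get(userId) ?? {}
    this.cache.set(key, JSON.stringify(info)) // memoize on first read
    return info
  }

  updatePhone(userId: string, phone: string): void {
    this.db.set(userId, { ...(this.db.get(userId) ?? {}), phone })
    // Invalidate after the successful write so the next read repopulates.
    this.cache.delete(`user:${userId}`)
  }
}
```

Without the `cache.delete` line, the second read after an update would keep returning the memoized pre-update blob.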
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/providers/user.repository.ts` around lines 148 - 209, The updatePhone, updateDevice, and updateGA methods mutate the user_info rows but don't invalidate the cached blob that getUserInfoById memoizes, causing stale data; after successful insert or update in updatePhone, updateDevice, updateGA (and the similar methods around lines 225-253), call the Redis cache invalidation for that user key (e.g. delete/expire the key "user:{userId}" via your cache client — e.g. this.redis.del(`user:${userId}`) or this.cache.delete(...) depending on your cache wrapper) so the next getUserInfoById reloads fresh data.
🧹 Nitpick comments (2)
src/core/config/database.ts (1)
21-26: Prefer env-driven pool tuning instead of hardcoded values.

Lines 21-26 should be configurable per environment to avoid timeout/throughput regressions during load or CI variance.
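A minimal sketch of env-driven tuning with validated fallbacks; the variable names (`DB_POOL_MIN`, etc.) and defaults are assumptions, not the project's actual values:

```typescript
// Parse an integer env var, falling back when unset or malformed.
function envInt(name: string, fallback: number): number {
  const parsed = Number(process.env[name])
  return Number.isFinite(parsed) && parsed >= 0 ? parsed : fallback
}

const pool = {
  min: envInt('DB_POOL_MIN', 2),
  max: envInt('DB_POOL_MAX', 10),
  idleTimeoutMillis: envInt('DB_POOL_IDLE_TIMEOUT_MS', 30_000),
  acquireTimeoutMillis: envInt('DB_POOL_ACQUIRE_TIMEOUT_MS', 60_000),
}
```

The guard against `NaN` and negative values keeps a typo in CI config from silently producing `pool.min = NaN`.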
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/config/database.ts` around lines 21 - 26, the pool configuration currently hardcodes min, max, idleTimeoutMillis, and acquireTimeoutMillis; make these env-driven by reading process.env (or a config util) with sensible defaults and parsing as numbers, e.g., replace literals for pool.min, pool.max, pool.idleTimeoutMillis, and pool.acquireTimeoutMillis with values like Number(process.env.DB_POOL_MIN ?? <default>) and similar for the others, and validate/normalize (fallback to the current defaults) so CI and production can tune timeouts and sizes via environment variables without changing code.

test/unit/usecases/reset_password.usecase.test.ts (1)
99-101: Assert the concrete recovery arguments.

These `anything()` verifications only prove that some calls happened. A regression like passing the hash into `findByEmail()` or the wrong value into `updatePassword()` would still pass this test.

🎯 Tighten the assertions

```diff
- verify(mockFindingResetPassword.findByHash(anything())).once()
- verify(mockFindingUser.findByEmail(anything())).once()
- verify(mockUpdatingUser.updatePassword(anything(), anything())).once()
+ verify(mockFindingResetPassword.findByHash(hash)).once()
+ verify(mockFindingUser.findByEmail(user.email)).once()
+ verify(mockUpdatingUser.updatePassword(user, newPassword)).once()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/usecases/reset_password.usecase.test.ts` around lines 99 - 101, Replace the broad anything() verifications with concrete argument checks so the test asserts the correct values are passed: verify mockFindingResetPassword.findByHash(...) with the expected reset hash value used in the test, verify mockFindingUser.findByEmail(...) with the expected user's email string, and verify mockUpdatingUser.updatePassword(...) with the expected user id/email and the expected new password/hash (or capture the actual password argument and assert it equals the expected). Use the same variables/constants from the test setup (e.g., expectedHash, expectedEmail, expectedPassword/expectedHashedPassword) or argument captors/matchers to assert exact values instead of anything().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@db/migrations/20260323201209_index.sql`:
- Around line 3-4: The migration creates two indexes on "user_info" that are
redundant: idx_user_info_user_type covers ("user_id","type") and thus already
provides leading-column coverage for "user_id", so remove the redundant
idx_user_info_user_id index to avoid write amplification; update the migration
by deleting the CREATE INDEX idx_user_info_user_id ON "user_info"("user_id")
statement (and do the same for the repeated pair mentioned at lines 11-12),
leaving only CREATE INDEX idx_user_info_user_type ON
"user_info"("user_id","type").
In `@makefile`:
- Around line 42-44: The clean/docker make target currently runs docker volume
prune -f --all which removes unused volumes across the host; change the command
in the clean/docker recipe to use docker compose down -v so only this project's
containers and volumes are removed (replace the docker volume prune -f --all
line with a docker compose down -v invocation in the clean/docker rule).
In `@src/core/providers/mfa_code.repository.ts`:
- Around line 67-69: The validateGA method must accept and pass the per-user GA
secret to the TotpService instead of discarding it: change validateGA(inputCode:
string) to validateGA(inputCode: string, secret: string) and call
this.totpService.validate(inputCode, secret); likewise update
TotpService.validate(token: string) to validate(token: string, secret: string)
and implement validation using the provided secret (e.g., create/verify a TOTP
with the secret rather than using shared instance/state). Update any call sites
(e.g., where user.info.googleAuth is passed) to match the new signature.
In `@src/core/providers/mfa.repository.ts`:
- Line 3: The mfa.repository.ts file incorrectly imports gaGenerator from test
fixtures; remove that import and stop referencing the test-only symbol
gaGenerator. Replace it by either (A) importing a production-safe generator from
a non-test module (e.g., move the generator to src/core/utils and import it), or
(B) accept a generator function via dependency injection on the MFARepository
constructor or createMfaRecord method and use that injected function instead;
update references to gaGenerator inside MFARepository to use the new injected
parameter or the new utils import.
- Around line 56-58: The GA secret flow is wrong: gaGenerator() currently
returns ephemeral TOTP codes (via totp.generate()) instead of the persistent
secret and validateGA() ignores the per-user secret. Change gaGenerator() to
return the persistent secret from the TOTP object (the base32/secret value) and
ensure you persist that secret via updatingUser.updateGA(user.id, secret); then
update validateGA(strategy/validateGA) to accept and use the stored per-user
secret (look up the user's secret from where you saved it) when calling
totp.check/verify rather than relying on a shared TOTP instance or global secret
so that validation is performed against the saved user-specific secret.
In `@src/core/services/totp.service.ts`:
- Around line 5-20: The constructor-created singleton Secret in totp.service.ts
causes every generateRandomNumber() and validate() to use the same in-memory
TOTP secret; instead make TOTP/Secret usage per-secret by changing validation
and generation to accept a secret string and instantiate a TOTP (or Secret) from
that secret for each call. Specifically, modify generateRandomNumber and
validate (and the service API) to take a user's secret and derive TOTP via new
TOTP({ ..., secret: Secret.fromBase32(userSecret) }) (or equivalent from the
TOTP library) rather than using the constructor's this.totp; update
validateGA(inputCode, secret) implementation in mfa_code.repository.ts to pass
the stored per-user secret into totp.validateGA/validate so validation uses the
enrolled user's secret instead of the singleton.
In `@src/presentation/http/routes/mfa.route.ts`:
- Line 97: The metric call counterTotal.add(1, { body: req.body }) should not
include req.body; remove the body attribute to avoid leaking sensitive values
and high-cardinality keys, and replace it with safe, low-cardinality attributes
(e.g., route, method, userIdPresent: Boolean(req.user), hasHash:
Boolean(req.body?.hash), hasCode: Boolean(req.body?.code), status) inside the
same handler in mfa.route.ts where counterTotal.add is invoked; ensure any flags
are booleans or fixed strings and never surface raw request values or secrets.
- Line 105: Replace the current high-cardinality metric label that uses the full
error message in counterError.add(1, { error: (error as Error).message }) with a
low-cardinality error category or code; map errors in the mfa route handler
(using the thrown Error.name, a custom error.code, or a small switch/mapper
function) to a fixed set of categories (e.g., "validation", "auth",
"external_service", "unknown") and call counterError.add(1, { error_category:
category }) or counterError.add(1, { error_code: code }) instead so metrics use
stable labels; update any tests/logging that expect the previous label name if
necessary.
In `@src/presentation/tracing.ts`:
- Around line 20-23: The service.version is hard-coded to '0.1.0' in the
resourceFromAttributes call; update the ATTR_SERVICE_VERSION value to use the
actual build/runtime version (e.g., pull from getEnv().app.version or from the
package/build metadata instead of the literal string) so
resourceFromAttributes({ [ATTR_SERVICE_VERSION]: ... }) stays in sync with the
release version.
- Around line 37-42: The SIGTERM handler currently calls sdk.shutdown() and then
forces process.exit(0) in finally, which can abort other shutdown listeners;
change the handler to invoke sdk.shutdown() (and log success/error) but remove
the forced process.exit(0) so the application owns termination; specifically
update the code path around process.on('SIGTERM') / sdk.shutdown() to stop
calling process.exit and only log results (optionally add a bounded
timeout/warning if desired) so other listeners (HTTP server, DB/Kafka cleanup)
can complete their shutdown.
In `@test/fixtures/generators.ts`:
- Around line 31-42: The code currently imports gaGenerator from test fixtures
and derives TOTP secrets from env.app.jwtSecret which is invalid; remove any
test-fixture import of gaGenerator and replace with a production GA secret
generator that creates a cryptographically-secure random base32 secret (do not
use env.app.jwtSecret or Secret.fromBase32(jwtSecret)); update the
MFACodeRepository.validateGA implementation to match the
validating_code.driven.ts signature validateGA(inputCode: string, secret:
string) so it accepts the stored secret parameter and uses that secret to verify
codes (invoke TotpService or TOTP with the provided secret when validating, not
a randomly-generated one), and ensure any TotpService/TOTP usage properly
constructs the secret from the generated base32 value instead of the JWT key.
In `@test/unit/providers/mfa_code.repository.test.ts`:
- Around line 39-45: The test and TotpService wiring currently reuse a single
in-memory Secret/TOTP instance so generateRandomNumber() and validateGA() share
the same app-level secret; change TotpService so generateRandomNumber() returns
a cryptographically-random per-request code (use crypto.randomInt or similar)
and validateGA(code, userSecret) constructs a fresh TOTP from the provided
userSecret (do not use any class-level Secret/TOTP); update callers/tests (e.g.,
MFACodeRepository construction in the test, and any uses of
TotpService.validateGA) to mock or pass the user-specific secret and only mock
generateRandomNumber() for cache-backed MFA flows, leaving validateGA to operate
on the supplied secret.
In `@test/unit/services/code.services.test.ts`:
- Around line 12-18: The test titled "should succeed when creating a new code
with length different from 6" is misleading because TotpService hardcodes
digits: 6; update the test for clarity by either removing this duplicate test or
renaming and adjusting its description to something like "should generate a
6-digit numeric code" while keeping assertions against
TotpService.generateRandomNumber (assert length === 6 and numeric), referencing
the hardcoded digits: 6 in TotpService (src/core/services/totp.service.ts) to
justify the change.
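Several of the route-metric prompts above ask for a fixed set of labels instead of raw error messages. A minimal sketch of such a mapper — the category set and the error class names are assumptions, not the project's actual errors:

```typescript
// Map arbitrary errors onto a small, fixed set of metric label values so the
// error counter never gains unbounded label cardinality.
type ErrorCategory = 'validation' | 'auth' | 'external_service' | 'unknown'

function categorizeError(error: unknown): ErrorCategory {
  if (!(error instanceof Error)) return 'unknown'
  switch (error.name) {
    case 'ValidationError':
      return 'validation'
    case 'UnauthorizedError':
      return 'auth'
    case 'FetchError':
    case 'TimeoutError':
      return 'external_service'
    default:
      return 'unknown'
  }
}

// usage sketch: counterError.add(1, { error_category: categorizeError(error) })
```

Because every possible input collapses into one of four strings, the metric backend stores at most four label values per counter regardless of what message text the errors carry.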
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fa86cead-ab9d-4791-a5f7-2de3b9b0f3ab
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (44)
- coverage/lcov.info
- db/migrations/20260323201209_index.sql
- makefile
- package.json
- src/config/enviroment_config.ts
- src/config/logger.ts
- src/core/config/cache.ts
- src/core/config/database.ts
- src/core/index.ts
- src/core/providers/mfa.repository.ts
- src/core/providers/mfa_code.repository.ts
- src/core/providers/user.repository.ts
- src/core/services/code.service.ts
- src/core/services/totp.service.ts
- src/presentation/http/routes/login.route.ts
- src/presentation/http/routes/mfa.route.ts
- src/presentation/tracing.ts
- test/fixtures/generators.ts
- test/integration/http/login.route.test.ts
- test/integration/http/logout.route.test.ts
- test/integration/http/mfa.route.test.ts
- test/integration/http/organization.route.test.ts
- test/integration/http/reset_password.route.test.ts
- test/integration/http/server.route.test.ts
- test/integration/http/user.route.test.ts
- test/unit/providers/mfa.repository.test.ts
- test/unit/providers/mfa_choose.repository.test.ts
- test/unit/providers/mfa_code.repository.test.ts
- test/unit/providers/notification.provider.test.ts
- test/unit/providers/organization.repository.test.ts
- test/unit/providers/user.repository.test.ts
- test/unit/services/code.services.test.ts
- test/unit/services/password.services.test.ts
- test/unit/services/uuid.services.test.ts
- test/unit/usecases/login.usecase.test.ts
- test/unit/usecases/logout.usecase.test.ts
- test/unit/usecases/mfa.usecase.test.ts
- test/unit/usecases/mfa_choose.usecase.test.ts
- test/unit/usecases/mfa_code.usecase.test.ts
- test/unit/usecases/organization.usecase.test.ts
- test/unit/usecases/reset_password.usecase.test.ts
- test/unit/usecases/token.usecase.test.ts
- test/unit/usecases/user.usecase.test.ts
- tsconfig.json
💤 Files with no reviewable changes (1)
- src/core/services/code.service.ts
```diff
  it('should succeed when creating a new code with length different from 6', async () => {
-   const size = 10
-   const codeService = new CodeService()
-   const code = codeService.generateRandomNumber(size)
+   const size = 6
+   const totpService = new TotpService()
+   const code = totpService.generateRandomNumber()
    expect(code.length).toEqual(size)
    expect(typeof Number(code)).toEqual('number')
  })
```
Test description is misleading after refactor.
The test name says "with length different from 6" but TotpService always generates 6-digit codes (hardcoded digits: 6 in src/core/services/totp.service.ts). This test is now effectively a duplicate of the first test.
Consider removing this test or renaming it to reflect what it actually tests.
Suggested fix
Either remove the duplicate test:

```diff
- it('should succeed when creating a new code with length different from 6', async () => {
-   const size = 6
-   const totpService = new TotpService()
-   const code = totpService.generateRandomNumber()
-   expect(code.length).toEqual(size)
-   expect(typeof Number(code)).toEqual('number')
- })
```

Or rename to reflect current behavior:

```diff
- it('should succeed when creating a new code with length different from 6', async () => {
+ it('should succeed when creating multiple codes', async () => {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/services/code.services.test.ts` around lines 12 - 18, The test
titled "should succeed when creating a new code with length different from 6" is
misleading because TotpService hardcodes digits: 6; update the test for clarity
by either removing this duplicate test or renaming and adjusting its description
to something like "should generate a 6-digit numeric code" while keeping
assertions against TotpService.generateRandomNumber (assert length === 6 and
numeric), referencing the hardcoded digits: 6 in TotpService
(src/core/services/totp.service.ts) to justify the change.
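Related to the 6-digit code discussion: a stdlib-only sketch of a fixed-length numeric code generator (function name assumed from the review). Note the exclusive upper bound of Node's `crypto.randomInt` — `randomInt(10)` yields 0-9, while `randomInt(9)` would never emit the digit 9, the off-by-one flagged in the second review pass:

```typescript
import { randomInt } from 'node:crypto'

// Generate a numeric code of `size` digits using a CSPRNG.
function generateRandomNumber(size = 6): string {
  let code = ''
  for (let i = 0; i < size; i++) {
    code += randomInt(10).toString() // upper bound exclusive: digits 0..9
  }
  return code
}
```

Building the code digit by digit (rather than `randomInt(0, 10 ** size)`) also preserves leading zeros, so a code like "042135" keeps its fixed length.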
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
makefile (3)
23-28: ⚠️ Potential issue | 🔴 Critical

`test/mutation` calls removed target `clean/docker`.

The `clean/docker` target was deleted from this makefile, but `test/mutation` still invokes it on line 28. This will cause the target to fail with a "No rule to make target" error.

Proposed fix: use `infra/down` instead

```diff
 .PHONY: test/mutation
 test/mutation:
 	make infra/up
 	docker compose exec -T api npm ci
 	docker compose exec -T api npm run stryker
-	make clean/docker
+	make infra/down
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@makefile` around lines 23 - 28, The make target test/mutation still invokes the now-removed target clean/docker; update the recipe for the test/mutation target to call infra/down instead of clean/docker (locate the test/mutation target and replace the final invocation of clean/docker with infra/down) so the makefile no longer references a missing target.
30-33: ⚠️ Potential issue | 🔴 Critical

`test/load` calls removed target `start`.

The `start` target was deleted, but `test/load` still references it on line 32. This will fail at runtime.

Proposed fix: inline the dev setup or use the `dev` target

```diff
 .PHONY: test/load
 test/load:
-	make start
+	make dev
 	docker run --rm -i grafana/k6 run - <test/loading/k6.js
```

Alternatively, if `dev` doesn't match the intended behavior, re-add a minimal `start` target or inline the necessary commands.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@makefile` around lines 30 - 33, The test/load Makefile target still invokes the deleted start target; update test/load to either call the existing dev target (replace the "make start" invocation with "make dev"), inline the minimal dev/start commands directly into the test/load recipe, or restore a minimal start target that performs the required dev setup so the docker k6 run line can execute; adjust the test/load rule accordingly and ensure you reference the test/load, start, and dev targets when making the change.
1-47: ⚠️ Potential issue | 🟡 Minor

Documentation references removed `clean/docker` target.

Per README.md:76, the documentation still advertises `make clean/docker` as a valid command. Since this target has been removed, users following the README will encounter errors.

Update README.md to remove or replace the reference to `make clean/docker`.
Verify each finding against the current code and only fix it if needed. In `@makefile` around lines 1 - 47, README references a nonexistent Make target `clean/docker`; update the docs to avoid confusing users by removing or replacing that reference. Edit README.md and either delete the `make clean/docker` mention or change it to an existing target (e.g., `make clean/test` or `make clean/node`) and, if appropriate, add a short note explaining the recommended cleanup command(s) to run instead of `clean/docker`.

src/core/providers/mfa.repository.ts (1)
53-59: ⚠️ Potential issue | 🔴 Critical

Make GA MFA creation atomic.

This inserts an enabled MFA row before persisting the user's GA secret. If `updateGA()` fails, the account is left with `is_enable = true` but no matching secret, which corrupts MFA state and can break subsequent auth flows.

Wrap the insert and secret update in one transaction, or keep the MFA row disabled until `updateGA()` succeeds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/providers/mfa.repository.ts` around lines 53 - 59, The current flow inserts an enabled MFA row (this.database(this.tableName).insert(insertLine).returning('id')) before persisting the GA secret (this.totpService.secretGenerate() + this.updatingUser.updateGA), which can leave is_enable=true without a secret if updateGA fails; modify create flow in the method that handles Strategy.GA so the insert and updatingUser.updateGA(user.id, secret) happen inside a single DB transaction (use your DB transaction API around this.database(...) calls) or alternatively insert the MFA row with is_enable=false, call totpService.secretGenerate() and updatingUser.updateGA(...), and only set is_enable=true (update the MFA row by id from resp[0].id) after updateGA succeeds; reference this.tableName, insertLine, resp, totpService.secretGenerate, updatingUser.updateGA, and Strategy.GA when making the change.
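A sketch of the "keep the row disabled until the secret persists" variant, with in-memory Maps standing in for the database; apart from `updateGA` and `is_enable`, the names are illustrative assumptions:

```typescript
// Two-phase enrollment: insert disabled, persist the secret, then enable.
// If the secret write throws, the MFA row never flips to enabled.
class GaEnrollmentSketch {
  mfaRows = new Map<string, { strategy: 'GA'; is_enable: boolean }>()

  constructor(private updateGA: (userId: string, secret: string) => void) {}

  enroll(userId: string, secret: string): void {
    // 1. Insert the MFA row disabled.
    this.mfaRows.set(userId, { strategy: 'GA', is_enable: false })
    // 2. Persist the secret; a failure here leaves the row disabled.
    this.updateGA(userId, secret)
    // 3. Only now flip the row on.
    this.mfaRows.get(userId)!.is_enable = true
  }
}
```

The same invariant can be enforced with a single database transaction instead; the two-phase form is shown only because it needs no transaction API to demonstrate.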
🧹 Nitpick comments (1)
makefile (1)
6-8: Volumes will be deleted on every `infra/down` call.

Adding `-v` means `v-database` and `v-cache` (per docker-compose.yml:76-78) are removed each time. This is a behavioral change from the previous `docker compose down` (no `-v`). Developers who rely on persistent local data between restarts may lose work unexpectedly.

If this is intentional for a clean-slate workflow, consider documenting it or adding a separate `infra/down/clean` target for volume removal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@makefile` around lines 6 - 8, The infra/down Makefile target currently runs "docker compose down -v" which removes named volumes (v-database, v-cache) on every call; change infra/down to run "docker compose down" (no -v) to preserve volumes, and add a new target (e.g., infra/down/clean or infra/down-clean) that explicitly runs "docker compose down -v" for users who want to remove volumes; update the target names referenced (infra/down and the new clean target) and add a short comment above each target explaining the expected behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/index.ts`:
- Around line 71-85: The updateGA, updatePassword, updatePhone, and updateDevice
methods on UserRepository currently mutate the DB but never invalidate the Redis
cache key used by getUserInfoById, causing stale data; modify each of those
methods (updateGA, updatePassword, updatePhone, updateDevice) to call await
this.cache.del(`user:${userId}`) after a successful DB update (or immediately
before returning) so that the cache entry read by findingUser/ getUserInfoById
is invalidated; ensure the calls use the same cache instance as UserRepository
and await the promise to avoid race conditions.
In `@src/core/providers/mfa_choose.repository.ts`:
- Around line 22-26: The multi() cache pipeline is setting the key
`mfa-choose:${hash}` but calling expire(hash, this.TTL), so the namespaced key
never gets a TTL; change the expire call to use the same namespaced key used in
set (i.e., expire(`mfa-choose:${hash}`, this.TTL)) so the entry created by
this.cache.multi().set(...) receives the intended TTL; update the expire
invocation in the method that builds this pipeline (the code around
this.cache.multi().set(...).expire(...).exec()) and keep using the existing
this.TTL, hash and strategyList symbols.
In `@src/core/providers/mfa_code.repository.ts`:
- Around line 41-45: The expire call is using the wrong key so the stored MFA
payload set with set(`strategy:${hash}`, ...) never receives the TTL; update the
pipeline that calls this.cache.multi().set(...).expire(...).exec() to expire the
exact same key string (`strategy:${hash}`) instead of plain hash, keeping the
rest of the logic (hash, content, this.TTL) unchanged so the cached MFA code is
evicted after the intended TTL.
In `@src/core/providers/reset_password.repository.ts`:
- Around line 22-26: The TTL is being set on the bare hash instead of the actual
Redis key; update the expire call used after the pipeline in the
ResetPasswordRepository so it targets the same key string passed to set (i.e.
the prefixed key `reset-password:${hash}`) rather than `hash`, ensuring the
entry created by this.cache.multi().set(...) receives the intended TTL via
.expire(..., this.TTL) before .exec().
In `@src/core/providers/token.repository.ts`:
- Around line 21-22: The code writes the blacklist key as `invalidate:${token}`
but calls expire on `token`; update the expire call to use the same key you set
so the invalidation marker gets a TTL (i.e., call
this.cache.expire(`invalidate:${token}`, this.TTL) right after await
this.cache.set(`invalidate:${token}`, token) in the code that performs
blacklisting); ensure you reference the existing this.cache.set and
this.cache.expire calls and reuse this.TTL.
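The four findings above are the same bug in four files: the write targets a namespaced key while the `expire` targets the bare hash or token, so the TTL lands on a key that does not exist and the real entry lives forever. The corrected pattern can be sketched with the standard library alone; the `FakePipeline` below is an in-memory stand-in for an ioredis-style `multi()` builder, not the project's real cache client, and the key names mirror the review's examples.

```typescript
// Illustrative in-memory stand-in for an ioredis-style multi() pipeline.
type Entry = { value: string; ttl?: number };

class FakePipeline {
  private ops: Array<() => void> = [];
  constructor(private store: Map<string, Entry>) {}

  set(key: string, value: string): this {
    this.ops.push(() => this.store.set(key, { value }));
    return this;
  }

  expire(key: string, ttl: number): this {
    this.ops.push(() => {
      const entry = this.store.get(key);
      if (entry) entry.ttl = ttl; // expiring a missing key is a silent no-op
    });
    return this;
  }

  async exec(): Promise<void> {
    for (const op of this.ops) op();
  }
}

const TTL = 300;
const store = new Map<string, Entry>();
const hash = 'abc123';

// Fixed pattern: set() and expire() target the SAME namespaced key.
// The bug was expire(hash, TTL), which left `mfa-choose:${hash}` with no TTL.
await new FakePipeline(store)
  .set(`mfa-choose:${hash}`, JSON.stringify(['EMAIL', 'GA']))
  .expire(`mfa-choose:${hash}`, TTL)
  .exec();

console.log(store.get(`mfa-choose:${hash}`)?.ttl); // 300
```

The same one-line change applies to `mfa_code.repository.ts`, `reset_password.repository.ts`, and `token.repository.ts`: reuse the exact key string passed to `set`.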
In `@src/core/services/totp.service.ts`:
- Around line 17-23: The codeGenerate method currently calls crypto.randomInt(9)
which returns 0..8 and therefore never emits digit 9; update the call in
codeGenerate (and any other place using crypto.randomInt(9)) to
crypto.randomInt(10) so digits 0..9 are possible, preserving the existing loop
and concatenation logic and keeping the default size parameter
(this.CODE_MASX_SIZE) unchanged.
- Around line 6-15: secretGenerate currently returns the ephemeral 6-digit token
from totp.generate() instead of the long-lived seed; change secretGenerate (the
function creating new TOTP with new Secret()) to return the Secret's serialized
seed (use the Secret instance's .base32 property) so callers persist the
enrollment secret rather than the current token; update references around the
TOTP construction (TOTP, Secret, secretGenerate) to return secret.base32.
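Both `totp.service.ts` findings can be sketched with Node's standard library. `codeGenerate` mirrors the reviewed loop with the corrected `crypto.randomInt(10)` bound; `secretGenerate` illustrates the distinction the second finding draws, returning a long-lived base32 seed (what otpauth's `Secret` exposes as `.base32`) instead of an ephemeral token. Names and sizes here are illustrative, not the repository's exact implementation.

```typescript
import crypto from 'node:crypto';

// Fix 1: crypto.randomInt(n) is exclusive of n, so randomInt(9) yields 0..8
// and the digit 9 can never appear. randomInt(10) yields the full 0..9 range.
function codeGenerate(size = 6): string {
  let code = '';
  for (let i = 0; i < size; i++) {
    code += crypto.randomInt(10).toString();
  }
  return code;
}

// Fix 2: the enrollment secret is the long-lived base32 seed the user scans
// into an authenticator app, NOT totp.generate(), which is a token valid for
// roughly 30 seconds. RFC 4648 base32, the alphabet authenticator apps expect:
const B32 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';

function secretGenerate(bytes = 20): string {
  const buf = crypto.randomBytes(bytes);
  let bits = 0;
  let acc = 0;
  let out = '';
  for (const byte of buf) {
    acc = (acc << 8) | byte;
    bits += 8;
    while (bits >= 5) {
      out += B32[(acc >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  if (bits > 0) out += B32[(acc << (5 - bits)) & 31];
  return out;
}

console.log(codeGenerate());          // e.g. "047391": six digits, 9 now reachable
console.log(secretGenerate().length); // 32 (20 random bytes -> 160 bits / 5)
```

With otpauth itself, the equivalent of `secretGenerate` is returning `secret.base32` from the `Secret` instance rather than the output of `totp.generate()`.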
In `@test/integration/http/login.route.test.ts`:
- Around line 92-95: In the beforeEach block where you clear tables with
database('multi_factor_authentication').del() and database('user_info').del(),
await the Redis flush by changing the redis.flushDb() call to be awaited so the
beforeEach doesn't proceed until Redis is cleared; update the beforeEach
containing redis.flushDb() to use await (or return the promise) to ensure
deterministic test isolation.
In `@test/integration/http/reset_password.route.test.ts`:
- Around line 96-97: The beforeEach setup hook calls redis.flushDb() without
awaiting it, which can make tests flaky; update the beforeEach in
reset_password.route.test.ts to await the Promise returned by redis.flushDb()
(i.e., use await redis.flushDb()) so Redis is cleared before each test runs,
mirroring the pattern used in mfa_code.repository.test.ts.
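The fix in both hooks is a single keyword, but the failure mode is worth seeing: without `await`, the hook resolves before Redis is actually flushed, and the test body can observe leftover keys. A stdlib sketch with a fake async client standing in for the node-redis instance; the delay and key names are invented for illustration.

```typescript
// Fake async store standing in for the redis client's flushDb().
const store = new Map<string, string>();

const redis = {
  async flushDb(): Promise<void> {
    await new Promise((r) => setTimeout(r, 10)); // simulate a network round-trip
    store.clear();
  },
};

// Buggy hook: fires flushDb() and returns immediately.
function beforeEachBuggy(): void {
  void redis.flushDb();
}

// Fixed hook: the test runner waits on the returned promise, so the test
// body only runs once the store is actually empty.
async function beforeEachFixed(): Promise<void> {
  await redis.flushDb();
}

store.set('strategy:stale-hash', 'leftover from a previous test');
beforeEachBuggy();
const sizeAfterBuggy = store.size; // 1: the stale key is still visible

store.set('strategy:stale-hash', 'leftover again');
await beforeEachFixed();
const sizeAfterFixed = store.size; // 0

console.log(sizeAfterBuggy, sizeAfterFixed); // 1 0
```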
In `@test/unit/providers/mfa_code.repository.test.ts`:
- Around line 68-70: The test seeds Redis with an incomplete object for the
cached MFA code; update the seeded payload to match the full CacheCode shape
used by findByHash() by including the strategy field (and any other required
properties) so the repository and downstream MFA branching see the proper
schema; modify the lines that set the redis key for `strategy:${mockHash}`
(using mockHash, mockUserId, mockCode) to store a complete CacheCode fixture
(e.g., include strategy and any metadata fields) rather than the current JSON
that omits strategy, and reuse the same complete fixture for the other seed at
82-86.
---
Outside diff comments:
In `@makefile`:
- Around line 23-28: The make target test/mutation still invokes the now-removed
target clean/docker; update the recipe for the test/mutation target to call
infra/down instead of clean/docker (locate the test/mutation target and replace
the final invocation of clean/docker with infra/down) so the makefile no longer
references a missing target.
- Around line 30-33: The test/load Makefile target still invokes the deleted
start target; update test/load to either call the existing dev target (replace
the "make start" invocation with "make dev"), inline the minimal dev/start
commands directly into the test/load recipe, or restore a minimal start target
that performs the required dev setup so the docker k6 run line can execute;
adjust the test/load rule accordingly and ensure you reference the test/load,
start, and dev targets when making the change.
- Around line 1-47: README references a nonexistent Make target `clean/docker`;
update the docs to avoid confusing users by removing or replacing that
reference. Edit README.md and either delete the `make clean/docker` mention or
change it to an existing target (e.g., `make clean/test` or `make clean/node`)
and, if appropriate, add a short note explaining the recommended cleanup
command(s) to run instead of `clean/docker`.
In `@src/core/providers/mfa.repository.ts`:
- Around line 53-59: The current flow inserts an enabled MFA row
(this.database(this.tableName).insert(insertLine).returning('id')) before
persisting the GA secret (this.totpService.secretGenerate() +
this.updatingUser.updateGA), which can leave is_enable=true without a secret if
updateGA fails; modify create flow in the method that handles Strategy.GA so the
insert and updatingUser.updateGA(user.id, secret) happen inside a single DB
transaction (use your DB transaction API around this.database(...) calls) or
alternatively insert the MFA row with is_enable=false, call
totpService.secretGenerate() and updatingUser.updateGA(...), and only set
is_enable=true (update the MFA row by id from resp[0].id) after updateGA
succeeds; reference this.tableName, insertLine, resp,
totpService.secretGenerate, updatingUser.updateGA, and Strategy.GA when making
the change.
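Of the two remedies the prompt offers, the insert-disabled-then-enable ordering is easy to sketch without a database. `mfaRows`, `users`, and `updateGA` below are in-memory stand-ins for the real Knex-backed repository, not its API; the point is the ordering, which leaves no enabled-but-secretless row behind if persisting the secret fails.

```typescript
interface MfaRow {
  id: number;
  userId: string;
  strategy: 'EMAIL' | 'PHONE' | 'GA';
  isEnable: boolean;
}

const mfaRows: MfaRow[] = [];
const users = new Map<string, { gaSecret?: string }>([['u1', {}]]);

async function updateGA(userId: string, secret: string): Promise<void> {
  const user = users.get(userId);
  if (!user) throw new Error('user not found');
  user.gaSecret = secret;
}

async function createGaMfa(userId: string, secret: string): Promise<number> {
  // Step 1: the row exists but cannot yet satisfy an MFA challenge.
  const row: MfaRow = { id: mfaRows.length + 1, userId, strategy: 'GA', isEnable: false };
  mfaRows.push(row);
  // Step 2: persist the seed; if this throws, the row stays disabled.
  await updateGA(userId, secret);
  // Step 3: enable only after the secret is safely stored.
  row.isEnable = true;
  return row.id;
}

const id = await createGaMfa('u1', 'JBSWY3DPEHPK3PXP');
console.log(id, mfaRows[0].isEnable); // 1 true

try {
  await createGaMfa('missing-user', 'SEED');
} catch {
  // The half-created row is still disabled, so it cannot be used to log in.
}
console.log(mfaRows[1].isEnable); // false
```

Wrapping both writes in a single Knex transaction achieves the same guarantee more strongly; the ordering above is the lighter-weight fallback.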
---
Nitpick comments:
In `@makefile`:
- Around line 6-8: The infra/down Makefile target currently runs "docker compose
down -v" which removes named volumes (v-database, v-cache) on every call; change
infra/down to run "docker compose down" (no -v) to preserve volumes, and add a
new target (e.g., infra/down/clean or infra/down-clean) that explicitly runs
"docker compose down -v" for users who want to remove volumes; update the target
names referenced (infra/down and the new clean target) and add a short comment
above each target explaining the expected behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c3ac2bb8-02e9-470d-8e26-56468902d33e
📒 Files selected for processing (19)
- coverage/lcov.info
- db/migrations/20260323201209_index.sql
- makefile
- src/core/index.ts
- src/core/providers/mfa.repository.ts
- src/core/providers/mfa_choose.repository.ts
- src/core/providers/mfa_code.repository.ts
- src/core/providers/reset_password.repository.ts
- src/core/providers/token.repository.ts
- src/core/services/totp.service.ts
- test/integration/http/login.route.test.ts
- test/integration/http/logout.route.test.ts
- test/integration/http/mfa.route.test.ts
- test/integration/http/organization.route.test.ts
- test/integration/http/reset_password.route.test.ts
- test/integration/http/user.route.test.ts
- test/unit/providers/mfa.repository.test.ts
- test/unit/providers/mfa_code.repository.test.ts
- test/unit/services/code.services.test.ts
✅ Files skipped from review due to trivial changes (3)
- test/unit/services/code.services.test.ts
- test/integration/http/organization.route.test.ts
- db/migrations/20260323201209_index.sql
🚧 Files skipped from review as they are similar to previous changes (4)
- test/integration/http/logout.route.test.ts
- test/unit/providers/mfa.repository.test.ts
- test/integration/http/user.route.test.ts
- test/integration/http/mfa.route.test.ts
```diff
 const findingUser: FindingUser = new UserRepository(
   database,
   passwordService,
   cache
 )
 const creatingUser: CreatingUser = new UserRepository(
   database,
-  passwordService
+  passwordService,
+  cache
 )
 const updatingUser: UpdatingUser = new UserRepository(
   database,
-  passwordService
+  passwordService,
+  cache
 )
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Inspect UserRepository cache usage =="
sed -n '1,260p' src/core/providers/user.repository.ts
echo
echo "== Cached read paths =="
rg -n -C3 'getUserInfoById|cache\.(get|set|setEx)' src/core/providers/user.repository.ts
echo
echo "== Mutating paths that should invalidate the same cache key =="
rg -n -C3 'updateGA|updatePassword|update.*User|create.*User|cache\.(del|unlink)' src/core/providers/user.repository.ts

Repository: auth-plus/auth-plus-authentication
Length of output: 10231
Cache keys must be invalidated when user info is updated.
The `updateGA`, `updatePassword`, `updatePhone`, and `updateDevice` methods update the database but never invalidate the Redis cache key `user:${userId}`. This causes `findingUser`'s calls to `getUserInfoById` to return stale cached data indefinitely, including stale or missing `googleAuth` on MFA setup, breaking immediate post-setup auth flows. Add ``await this.cache.del(`user:${userId}`)`` to each update method, or invalidate the cache before returning from those methods.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/index.ts` around lines 71 - 85, The updateGA, updatePassword,
updatePhone, and updateDevice methods on UserRepository currently mutate the DB
but never invalidate the Redis cache key used by getUserInfoById, causing stale
data; modify each of those methods (updateGA, updatePassword, updatePhone,
updateDevice) to call await this.cache.del(`user:${userId}`) after a successful
DB update (or immediately before returning) so that the cache entry read by
findingUser/ getUserInfoById is invalidated; ensure the calls use the same cache
instance as UserRepository and await the promise to avoid race conditions.
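A minimal sketch of the invalidation pattern the finding asks for, with an in-memory `Cache` and a pared-down `UserRepository`; both are illustrative stand-ins, not the project's real interfaces. Each mutator deletes the `user:${id}` entry after a successful write, so the next `getUserInfoById` call re-reads the database.

```typescript
interface Cache {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
  del(key: string): Promise<void>;
}

class InMemoryCache implements Cache {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async set(key: string, value: string) { this.store.set(key, value); }
  async del(key: string) { this.store.delete(key); }
}

interface UserRow { name: string; gaSecret?: string }

class UserRepository {
  constructor(private db: Map<string, UserRow>, private cache: Cache) {}

  async getUserInfoById(userId: string): Promise<UserRow | undefined> {
    const hit = await this.cache.get(`user:${userId}`);
    if (hit) return JSON.parse(hit);
    const row = this.db.get(userId);
    if (row) await this.cache.set(`user:${userId}`, JSON.stringify(row));
    return row;
  }

  async updateGA(userId: string, secret: string): Promise<void> {
    const row = this.db.get(userId);
    if (!row) throw new Error('user not found');
    row.gaSecret = secret;
    // The fix: drop the cached entry after a successful write so the
    // next read goes back to the database instead of serving stale data.
    await this.cache.del(`user:${userId}`);
  }
}

const db = new Map<string, UserRow>([['u1', { name: 'alice' }]]);
const repo = new UserRepository(db, new InMemoryCache());

await repo.getUserInfoById('u1');        // primes the cache
await repo.updateGA('u1', 'NEWSECRET');  // mutates and invalidates
const fresh = await repo.getUserInfoById('u1');
console.log(fresh?.gaSecret); // NEWSECRET, not a stale cache hit
```

The same `del` call belongs in `updatePassword`, `updatePhone`, and `updateDevice`.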
```diff
 await redis.set(
-  mockHash,
+  `strategy:${mockHash}`,
   JSON.stringify({ userId: mockUserId, code: mockCode })
```
Use a complete CacheCode fixture here.
This seeded payload omits strategy, even though findByHash() returns CacheCode and later MFA flow logic branches on that field. The test can still pass with a broken cache schema.
Suggested fix

```diff
 await redis.set(
   `strategy:${mockHash}`,
-  JSON.stringify({ userId: mockUserId, code: mockCode })
+  JSON.stringify({
+    userId: mockUserId,
+    code: mockCode,
+    strategy: mockStrategy,
+  })
 )
@@
 expect(result.userId).toEqual(mockUserId)
 expect(result.code).toEqual(mockCode)
+expect(result.strategy).toEqual(mockStrategy)
```

Also applies to: 82-86
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/providers/mfa_code.repository.test.ts` around lines 68 - 70, The
test seeds Redis with an incomplete object for the cached MFA code; update the
seeded payload to match the full CacheCode shape used by findByHash() by
including the strategy field (and any other required properties) so the
repository and downstream MFA branching see the proper schema; modify the lines
that set the redis key for `strategy:${mockHash}` (using mockHash, mockUserId,
mockCode) to store a complete CacheCode fixture (e.g., include strategy and any
metadata fields) rather than the current JSON that omits strategy, and reuse the
same complete fixture for the other seed at 82-86.



Summary by CodeRabbit
New Features
Performance Improvements
Chores