Conversation
dev -> main
…on to Effect + Bun
…ation; add repoUtils and message handling logic
…e-to-effect # Conflicts: # github-service/package.json # github-service/pnpm-lock.yaml
…sconfig.build.json
📝 Walkthrough

This PR migrates the github-service from a NestJS microservice to an Effect-based TypeScript framework running on Bun, removing all NestJS/pnpm infrastructure and introducing functional effect-driven message handling. Documentation is updated to reflect the new tooling, Docker images switch to Bun, observability infrastructure (Prometheus/Tempo/Grafana) is added, and API layer code is completely reimplemented.
Sequence Diagram

```mermaid
sequenceDiagram
    participant RabbitMQ
    participant Consumer as RabbitMQ Consumer
    participant Handler as Message Handler
    participant GitHub as GitHub API Client
    participant TempFS as Temp Filesystem
    participant RepoUtils as Repo Utils
    participant Results as Results Queue
    RabbitMQ->>Consumer: Consume Message
    Consumer->>Handler: handleMessage(msg)
    Handler->>Handler: Parse & Validate Schema
    alt GitHub Permission Change
        Handler->>GitHub: Decrypt Token
        GitHub->>GitHub: Execute Permissions API Call
        GitHub-->>Handler: Success/Error
        Handler->>Results: Emit {pattern}_done/{pattern}_error
    else Create Team Repository
        Handler->>GitHub: Decrypt Org Token
        GitHub->>GitHub: Create Repository
        GitHub-->>Handler: Repo Created
        Handler->>TempFS: Create Temp Folder
        Handler->>RepoUtils: Clone Mono Repo
        RepoUtils->>TempFS: Clone & Initialize
        RepoUtils->>RepoUtils: Transform Configs/Scripts
        RepoUtils-->>Handler: Repo Ready
        Handler->>RepoUtils: Push to Team Repo
        RepoUtils->>GitHub: Authenticated Push
        RepoUtils-->>Handler: Push Complete
        Handler->>Results: Emit repository_created
        Handler->>GitHub: Add Collaborators
        GitHub-->>Handler: Collaborators Added
        Handler->>TempFS: Cleanup
        Handler->>Results: Emit create_team_repository_done
    end
    Handler-->>Consumer: Complete
    Consumer->>RabbitMQ: ACK/NACK Message
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Pull request overview
This PR rewrites github-service/ from a NestJS-based microservice to a Bun + Effect implementation, replacing the previous GitHub API client, RabbitMQ integration, and repo templating utilities with Effect Layers/services and Bun tooling.
Changes:
- Replaced the NestJS entrypoint/service/controller with an Effect-based program/consumer architecture.
- Introduced Effect-based implementations for GitHub API calls, RabbitMQ consumption/publishing, repo cloning/pushing utilities, and message schemas.
- Switched github-service tooling to Bun (build/test) and updated the Dockerfile + repo docs accordingly.
Reviewed changes
Copilot reviewed 28 out of 30 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| github-service/tsconfig.build.json | Adjusts build exclusions to align with new test location (src/test). |
| github-service/src/test/effect/repo/repoUtils.test.ts | Adds Bun tests for Effect-based repo utilities. |
| github-service/src/repo.utils.ts | Removes legacy NestJS/Promise-based RepoUtils implementation. |
| github-service/src/main.ts | Removes NestJS bootstrap entrypoint. |
| github-service/src/githubApi/types.ts | Removes legacy GitHub API client types. |
| github-service/src/githubApi/response-types.ts | Removes legacy GitHub API response types. |
| github-service/src/githubApi/README.md | Removes legacy GitHub API client documentation. |
| github-service/src/githubApi/index.ts | Removes legacy GitHub API exports. |
| github-service/src/githubApi/example.ts | Removes legacy GitHub API example. |
| github-service/src/githubApi/endpoints.ts | Removes legacy endpoint wrapper classes. |
| github-service/src/githubApi/client.ts | Removes legacy GitHub API client implementation. |
| github-service/src/effect/util/crypto.ts | Adds Effect-based secret decryption utility. |
| github-service/src/effect/schemas/messages.ts | Adds Effect Schema definitions for inbound queue messages + dispatcher. |
| github-service/src/effect/repo/repoUtils.ts | Adds Effect-based repo clone/push utilities with typed errors + tracing. |
| github-service/src/effect/rabbitmq/consumer.ts | Adds Effect-based RabbitMQ consumer/publisher Layer. |
| github-service/src/effect/program.ts | Adds core message handling program and GitHub/repo workflows. |
| github-service/src/effect/main.ts | Adds Bun/Effect service entrypoint wiring layers + consumer. |
| github-service/src/effect/layers/config.ts | Adds Effect Config layer for RabbitMQ + encryption key configuration. |
| github-service/src/effect/github/gitHubClient.ts | Adds Effect-based GitHub API client + factory for DI. |
| github-service/src/app.service.ts | Removes legacy NestJS AppService orchestration code. |
| github-service/src/app.module.ts | Removes legacy NestJS module definition. |
| github-service/src/app.controller.ts | Removes legacy NestJS event controller. |
| github-service/pnpm-workspace.yaml | Removes pnpm workspace config for github-service (now Bun-managed). |
| github-service/package.json | Switches scripts/deps to Bun + Effect ecosystem and removes Jest/NestJS deps. |
| github-service/eslint.config.mts | Adds Effect ESLint plugin rule and plugin configuration. |
| github-service/Dockerfile | Migrates runtime/build images to Bun and updates install/build steps. |
| github-service/bun.lock | Adds Bun lockfile for github-service dependencies. |
| api/src/github-api/github-api.service.ts | Minor readonly refactor (no functional change) for the API’s RMQ client wrapper. |
| AGENTS.md | Updates monorepo documentation to reflect Effect + Bun github-service. |
Actionable comments posted: 12
🧹 Nitpick comments (10)
github-service/package.json (1)
27-28: Keep type-only packages out of `dependencies`.

`@types/amqplib` and `@types/bun` are compile-time only, but the production Docker stage installs everything under `dependencies`. Leaving them here ships extra packages in the runtime image for no runtime benefit.

Suggested fix:

```diff
   "dependencies": {
-    "@types/amqplib": "^0.10.8",
-    "@types/bun": "^1.3.11",
     "@types/crypto-js": "^4.2.2",
     "amqplib": "^0.10.9",
     "crypto-js": "^4.2.0",
     "effect": "^3.21.0",
     "simple-git": "^3.32.3",
     "yaml": "^2.8.3"
   },
   "devDependencies": {
     "@effect/eslint-plugin": "^0.3.2",
     "@eslint/eslintrc": "^3.3.5",
     "@eslint/js": "^10.0.1",
+    "@types/amqplib": "^0.10.8",
+    "@types/bun": "^1.3.11",
     "@types/node": "^25.4.0",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/package.json` around lines 27 - 28: the package.json currently lists type-only packages "@types/amqplib" and "@types/bun" under dependencies, which causes them to be installed into the production Docker image; move these entries from dependencies into devDependencies in package.json (or run an equivalent package manager command such as npm/yarn/pnpm to save them as devDependencies) so they are only present at build/compile time and not shipped in the runtime image, ensuring references to "@types/amqplib" and "@types/bun" are removed from the top-level "dependencies" and added under "devDependencies".

github-service/src/effect/repo/repoUtils.ts (3)
123-127: Missing semicolon after closing brace. As per coding guidelines, semicolons are required.

```diff
   return gitRepo;
-}).pipe(
+});
+}).pipe(
   Effect.withSpan("initRepo", {
     attributes: { "temp.folder": tempFolderPath, "base.path": basePath },
-  })
-)
+  }),
+);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/repo/repoUtils.ts` around lines 123 - 127, The statement ending the pipe chain that calls Effect.withSpan("initRepo", { attributes: { "temp.folder": tempFolderPath, "base.path": basePath } }) is missing a trailing semicolon; update the end of the expression that uses Effect.withSpan (the initRepo span with tempFolderPath and basePath attributes) to include the required semicolon to comply with the project coding guidelines.
364-398: Potential race condition: `initRepo` removes `.git` while parallel operations may access files.

`Effect.all` with `concurrency: "unbounded"` runs `initRepo` (which removes `.git` directories) concurrently with file operations like `updateGitignoreFromCoreignore`. If the git operations and file operations interact unexpectedly, this could cause intermittent failures. Consider running `initRepo` first, then the file transformations.

♻️ Suggested restructuring:

```diff
   const repoRoot = path.join(opts.tempFolderPath, opts.basePath);
-  const [gitRepo] = yield* Effect.all(
-    [
-      initRepo(fs, opts.tempFolderPath, opts.basePath),
-      updateGitignoreFromCoreignore(fs, repoRoot),
-      updateDevcontainerDockerCompose(
-        fs,
-        repoRoot,
-        opts.myCoreBotDockerImage,
-        opts.visualizerDockerImage,
-      ),
-      updateTeamName(fs, repoRoot, opts.teamName),
-      writeJsonConfig(
-        fs,
-        repoRoot,
-        "game.config.json",
-        opts.gameConfig,
-        "game",
-      ),
-      writeJsonConfig(
-        fs,
-        repoRoot,
-        "server.config.json",
-        opts.serverConfig,
-        "server",
-      ),
-      updateConfigsUrls(
-        fs,
-        repoRoot,
-        opts.eventId,
-        opts.apiBaseUrl,
-        opts.starterTemplateId,
-      ),
-    ],
-    { concurrency: "unbounded" },
-  );
+  const gitRepo = yield* initRepo(fs, opts.tempFolderPath, opts.basePath);
+  yield* Effect.all(
+    [
+      updateGitignoreFromCoreignore(fs, repoRoot),
+      updateDevcontainerDockerCompose(
+        fs,
+        repoRoot,
+        opts.myCoreBotDockerImage,
+        opts.visualizerDockerImage,
+      ),
+      updateTeamName(fs, repoRoot, opts.teamName),
+      writeJsonConfig(
+        fs,
+        repoRoot,
+        "game.config.json",
+        opts.gameConfig,
+        "game",
+      ),
+      writeJsonConfig(
+        fs,
+        repoRoot,
+        "server.config.json",
+        opts.serverConfig,
+        "server",
+      ),
+      updateConfigsUrls(
+        fs,
+        repoRoot,
+        opts.eventId,
+        opts.apiBaseUrl,
+        opts.starterTemplateId,
+      ),
+    ],
+    { concurrency: "unbounded" },
+  );
   return gitRepo;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/repo/repoUtils.ts` around lines 364 - 398, initRepo is currently run in parallel with several file-transforming effects (updateGitignoreFromCoreignore, updateDevcontainerDockerCompose, updateTeamName, writeJsonConfig, updateConfigsUrls) causing a race where initRepo may remove .git while others access repo files; change the flow to run initRepo first (await its Effect) and then call Effect.all (with the remaining effects and concurrency:"unbounded") for the rest of the transformations so all file operations run only after initRepo completes.
416-423: Chained git operations all report as "git push" on failure — granular error context is lost.

The `.then()` chain wraps multiple operations (addRemote, add, commit, branch, push) under a single `tryGit("git push", ...)`. If `addRemote` or `commit` fails, the error will misleadingly say "git push". Consider separating critical steps or using a more descriptive label.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/repo/repoUtils.ts` around lines 416 - 423, The current single tryGit("git push", ...) call hides which git operation failed because it wraps opts.gitRepo.addRemote, .add, .commit, .branch and .push; split these into separate tryGit calls with descriptive labels (e.g., tryGit("git addRemote", () => opts.gitRepo.addRemote(authUrl)), tryGit("git add", () => opts.gitRepo.add(".")), tryGit("git commit", () => opts.gitRepo.commit("Initial commit")), tryGit("git branch -M main", () => opts.gitRepo.branch(["-M", "main"])), and tryGit("git push", () => opts.gitRepo.push("team-repo", "main", ["-u"]))) so failures report the exact failing operation and include the authUrl/context where relevant.

github-service/src/effect/github/gitHubClient.ts (1)
78-84: Rate limit error reports status 429 even when actual status is 403.

When rate limited via 403 with `x-ratelimit-remaining: 0`, the error is created with `status: 429` instead of the actual `403`. This could confuse debugging/monitoring.

♻️ Suggested fix:

```diff
 private static classifyError(status: number, headers: Headers.Headers): GitHubError | GitHubRateLimitError | GitHubNotFoundError {
   if (GitHubClient.isRateLimited(status, headers)) {
-    return new GitHubRateLimitError({ resetAt: GitHubClient.parseRateLimitReset(headers), message: "Rate limited", status: 429 });
+    return new GitHubRateLimitError({ resetAt: GitHubClient.parseRateLimitReset(headers), message: "Rate limited", status });
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/github/gitHubClient.ts` around lines 78 - 84, The rate-limit branch in GitHubClient.classifyError hardcodes status: 429 when constructing GitHubRateLimitError, causing 403-with-zero-remaining to be reported as 429; change the construction to pass the actual status variable (status) instead of 429 so the returned GitHubRateLimitError reflects the real HTTP status while still computing resetAt via GitHubClient.parseRateLimitReset(headers).

github-service/src/test/effect/repo/repoUtils.test.ts (1)
263-268: Redundant double-provision of `FileSystem.FileSystem`.

The `FileSystem` service is provided both via the layer (line 255) and again via `provideService` (line 266). The second provision overwrites the first, making the layer provision unnecessary for this service.

♻️ Suggested simplification:

```diff
 return Effect.runPromise(
   effectProgram.pipe(
     Effect.provide(layer),
-    Effect.provideService(FileSystem.FileSystem, fs.asService()),
   ),
 );
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@github-service/src/test/effect/repo/repoUtils.test.ts` around lines 263 - 268, The test redundantly provides the FileSystem service twice: once via layer and again with Effect.provideService(FileSystem.FileSystem, fs.asService()), which overwrites the layer; remove the second provision to rely on the layer instead (i.e., delete the Effect.provideService(FileSystem.FileSystem, fs.asService()) call) so effectProgram receives FileSystem only from layer, keeping effectProgram, layer, FileSystem.FileSystem, and fs.asService() unchanged.

github-service/src/effect/rabbitmq/consumer.ts (3)
84-99: `Runtime.runFork` is fire-and-forget — no fiber tracking or supervision.

The forked fiber handling each message is not awaited or tracked. If the consumer is closed while messages are in-flight, those handlers may be interrupted or orphaned. Consider using `Effect.forkScoped` or tracking fibers for graceful shutdown.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/rabbitmq/consumer.ts` around lines 84 - 99, The current use of Runtime.runFork to run handler(msg) is fire-and-forget and can orphan message handlers on shutdown; replace it with a tracked/scoped fork (e.g., Effect.forkScoped or explicit Fiber tracking) so message handler fibers are supervised and awaited during shutdown. Update the code path that invokes Runtime.runFork to spawn fibers via Effect.forkScoped (or push each Fiber into a consumer-level registry you join on shutdown), ensure channel.ack(msg) and channel.nack(msg, false, false) remain executed in the handler’s success/failure finalizers, and adjust shutdown logic to await or interrupt these tracked fibers before closing the channel/consumer.
54-55: Channel close errors are silently ignored.

`Effect.ignore` swallows any channel close failures without logging, making debugging harder if cleanup issues occur.

♻️ Suggested improvement:

```diff
 (chan) =>
   Effect.promise(() => chan.close()).pipe(
-    Effect.ignore
+    Effect.tap(() => Effect.logInfo("RabbitMQ channel closed")),
+    Effect.catchAll((e) =>
+      Effect.logWarning(`RabbitMQ channel close error: ${String(e)}`),
+    ),
   ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/rabbitmq/consumer.ts` around lines 54 - 55, The channel close effect currently swallows errors via Effect.ignore; change the cleanup for chan (the (chan) => Effect.promise(() => chan.close()) call) to surface and log failures instead of ignoring them—replace Effect.ignore with an error handler like Effect.catchAll/Effect.catchSome that logs the error via Effect.logError (include context like "RabbitMQ channel close failed") and then continues, so channel.close failures are recorded for debugging while still allowing shutdown to proceed.
42-46: Connection release swallows errors and logs after close.

If `conn.close()` rejects, the `.tap()` never runs and the error is silently converted to a resolved promise by `Effect.promise`. Consider logging before close or catching the error.

♻️ Suggested improvement:

```diff
 (conn) =>
-  Effect.promise(() => conn.close()).pipe(
-    Effect.tap(() => Effect.logInfo("RabbitMQ connection closed")),
-  ),
+  Effect.tryPromise({
+    try: () => conn.close(),
+    catch: (e) => e,
+  }).pipe(
+    Effect.tap(() => Effect.logInfo("RabbitMQ connection closed")),
+    Effect.catchAll((e) =>
+      Effect.logWarning(`RabbitMQ connection close error: ${String(e)}`),
+    ),
+  ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/rabbitmq/consumer.ts` around lines 42 - 46, The release code currently uses Effect.promise(() => conn.close()).pipe(Effect.tap(...)) which swallows a rejection and prevents the tap from running; change the release to log before attempting conn.close() and handle close errors explicitly: call Effect.logInfo("RabbitMQ connection closing") (or similar) prior to invoking conn.close(), then invoke Effect.promise(() => conn.close()) and attach an error handler (e.g., Effect.catchAll or equivalent) that logs the close failure (using Effect.logError with the caught error) and re-raises or fails the effect so errors aren’t silently swallowed; refer to the conn.close() call, the surrounding Effect.promise and Effect.tap usage to find and update the code.github-service/src/effect/program.ts (1)
github-service/src/effect/program.ts (1)

35-39: `parseMessage` error is caught but `JSON.parse` exception is not typed — consider using `Schema.decodeUnknown` directly.

The `try/catch` pattern here loses the original error details. Since `Schema.decodeUnknown(InboundMessage)` is called immediately after, you could combine parsing and decoding into a single step with better error reporting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/program.ts` around lines 35 - 39, The parseMessage function currently wraps JSON.parse in Effect.try and loses original parse errors; replace the Effect.try block with a single decode step using Schema.decodeUnknown(InboundMessage, JSON.parse(msg.content.toString("utf8"))) (or call Schema.decodeUnknown on the raw string/unknown value) so parsing and validation happen together and return typed decoding errors instead of a generic ParseMessageError; update references to parseMessage to expect the Schema.decodeUnknown result and remove the old Effect.try/ParseMessageError usage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@github-service/package.json`:
- Around line 16-17: The "test:watch" NPM script is invoking bun with the test
directory directly instead of Bun's test runner; update the "test:watch" script
to call the same test runner as "test" by changing its command to use "bun test"
with the same directory filter and the --watch flag so it mirrors the "test"
script behavior (reference the package.json scripts "test" and "test:watch").
In `@github-service/src/effect/github/gitHubClient.ts`:
- Around line 127-137: The retry helper withRetries currently causes infinite
recursion and double-executes eff after a rate-limit sleep; change withRetries
to track remaining retries (e.g., withRetries(eff, remainingRetries)) instead of
a single-arg function, call it initially with this.cfg.maxRetries, and after
catching a GitHubRateLimitError with resetAt perform Effect.sleep(...) and then
call withRetries(eff, remainingRetries - 1) (via Effect.flatMap) without also
calling eff via Effect.andThen so eff runs only once per retry; ensure
non-rate-limit errors still return Effect.fail(e) and that you stop retrying
when remainingRetries reaches 0.
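The bounded-retry shape described in this prompt can be sketched outside of Effect as well. The following is a hypothetical plain-promise analogy (`RateLimitError`, `resetInMs`, and this `withRetries` signature are illustrative names, not the service's actual API): each attempt runs the action exactly once, sleeps until the advertised reset on a rate-limit error, and stops when the retry budget is exhausted.

```javascript
// Hypothetical promise-based sketch of bounded retries on rate-limit errors.
class RateLimitError extends Error {
  constructor(resetInMs) {
    super("Rate limited");
    this.resetInMs = resetInMs; // stand-in for the resetAt timestamp
  }
}

async function withRetries(run, remainingRetries) {
  try {
    return await run(); // the action executes once per attempt
  } catch (err) {
    if (!(err instanceof RateLimitError) || remainingRetries <= 0) {
      throw err; // non-rate-limit errors (or an exhausted budget) propagate
    }
    await new Promise((resolve) => setTimeout(resolve, err.resetInMs));
    return withRetries(run, remainingRetries - 1);
  }
}
```

Calling `withRetries(eff, maxRetries)` once at the entry point mirrors the suggested fix of threading the remaining count through the recursion instead of re-running the effect twice per attempt.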
- Around line 119-124: The schema decode branch in the Effect.flatMap is
suppressing decode errors by casting to never; replace the cast so decode
failures are preserved as a typed effect instead of throwing untyped exceptions
— call Schema.decodeUnknown(options.responseSchema)(body) and return it as an
Effect that carries the schema ParseError (or map the decoder's error into the
desired error type) rather than casting to Effect.Effect<A, never>, ensuring the
decode error type flows out of the function; update any callers to handle the
decoder's error type if necessary.
In `@github-service/src/effect/layers/config.ts`:
- Line 4: Replace the loose string config for rabbitmqUrl with a non-empty
string validator: change the rabbitmqUrl entry that currently uses
Config.string("RABBITMQ_URL") to use Config.nonEmptyString("RABBITMQ_URL") so
startup validation fails on empty values (consistent with the existing
API_SECRET_ENCRYPTION_KEY usage). Ensure you update the rabbitmqUrl definition
only and keep the rest of the config object intact.
In `@github-service/src/effect/main.ts`:
- Around line 22-25: The current consumer wraps handleMessage(msg) with
Effect.catchAll which swallows errors; replace that with Effect.tapError so
handler failures are logged but the effect still fails and lets the consumer
perform its nack logic. Locate the mq.consume call wrapping handleMessage(msg)
and change the Effect.catchAll((err) => Effect.logError(...)) usage to
Effect.tapError((err) => Effect.logError(...)) so errors are not converted to
successes while preserving the logging side-effect.
In `@github-service/src/effect/program.ts`:
- Around line 444-458: The catchAll handler for Effect.catchAll currently logs
the clone/push failure and runs gh.deleteRepository(d.githubOrg, repoName) but
then returns repoName, which masks the error; change the handler in program.ts
so that after performing cleanup (calling gh.deleteRepository and logging any
cleanup errors) it re-raises or fails with the original error instead of
returning repoName (or alternatively return a discriminated result object like {
repoName, ok: false, error }) so downstream code (e.g.,
mq.publish("repository_created", ...)) can distinguish failure; locate the
Effect.catchAll callback around repoName and gh.deleteRepository and ensure you
either rethrow the original caught error or convert to a clear failure result
before returning.
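The cleanup-then-rethrow flow this prompt asks for looks roughly like the following plain async/await sketch (`createRepo`, `populateRepo`, and `deleteRepository` here are stand-in callbacks, not the module's real signatures): cleanup runs on failure, cleanup errors are only logged, and the original error is always re-raised.

```javascript
// Hypothetical sketch: clean up on failure, then re-raise the original error
// so callers never mistake a failed clone/push for a created repository.
async function createAndPopulateRepo(createRepo, populateRepo, deleteRepository) {
  const repoName = await createRepo();
  try {
    await populateRepo(repoName);
    return repoName;
  } catch (err) {
    try {
      await deleteRepository(repoName); // best-effort cleanup
    } catch (cleanupErr) {
      console.error("cleanup failed:", cleanupErr);
    }
    throw err; // preserve the original failure for downstream handling
  }
}
```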
- Around line 473-485: The call to executeGitHubAction(...) builds an Effect but
isn't executed, so collaborators aren't added; change the invocation inside the
effect workflow to actually run the Effect (use the generator-based runner,
e.g., prefix with yield* when calling executeGitHubAction) so the returned
Effect executes; keep the same payload (username, githubId, encryptedSecret,
actionName, context, action) and ensure the action callback (which calls
ghInner.addUserToRepository(d.githubOrg, repoName, u)) remains unchanged.
In `@github-service/src/effect/rabbitmq/consumer.ts`:
- Around line 108-119: The publish implementation ignores channel.sendToQueue's
boolean return (backpressure) which can drop messages; update publish (the
Effect returned by publish in consumer.ts) to check the sendToQueue return value
and wait for the channel 'drain' event when sendToQueue returns false before
completing the Effect. Replace the inner Effect.sync call around
channel.sendToQueue with an Effect that either resolves immediately when
sendToQueue returns true or registers a listener for channel.once('drain') (and
cleans up the listener on error) to resume when drained; ensure errors from
sendToQueue or the channel are propagated as a RabbitMQError via the Effect
instead of being ignored. Reference: publish, channel.sendToQueue,
cfg.resultsQueue.
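A backpressure-aware publish can be sketched as a stand-alone helper (hypothetical code, not the consumer's actual implementation; amqplib channels emit 'drain' when their write buffer empties again): resolve immediately when sendToQueue accepts the message, otherwise wait for 'drain', and remove listeners on either outcome.

```javascript
// Hypothetical sketch of publish-with-backpressure: a false return from
// sendToQueue means the write buffer is full, so wait for the 'drain' event.
function sendWithBackpressure(channel, queue, payload) {
  return new Promise((resolve, reject) => {
    let accepted;
    try {
      accepted = channel.sendToQueue(queue, payload);
    } catch (err) {
      reject(err);
      return;
    }
    if (accepted) {
      resolve();
      return;
    }
    const onDrain = () => {
      channel.removeListener("error", onError);
      resolve();
    };
    const onError = (err) => {
      channel.removeListener("drain", onDrain);
      reject(err);
    };
    channel.once("drain", onDrain);
    channel.once("error", onError);
  });
}
```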
In `@github-service/src/effect/repo/repoUtils.ts`:
- Around line 137-141: The code reads .gitignore without checking existence so
readTextFile(fs, gitignorePath) can throw; before calling readTextFile use the
repo utils to check for file existence (or catch RepoIOError) and treat a
missing .gitignore as empty (i.e. set existing = ""), then writeTextFile(fs,
gitignorePath, existing + content) and call removeIfExists(fs, coreignorePath);
update the block around readTextFile, gitignorePath, existing, content,
writeTextFile, and removeIfExists accordingly.
In `@github-service/src/effect/util/crypto.ts`:
- Around line 11-18: Replace Effect.sync with Effect.try in decryptSecret so
that thrown DecryptError is delivered as a typed failure rather than a defect:
wrap the AES.decrypt logic inside Effect.try(() => { const bytes =
CryptoJS.AES.decrypt(encryptedSecret, key); const decrypted =
bytes.toString(CryptoJS.enc.Utf8); if (!decrypted) throw new DecryptError();
return decrypted; }) so failures from decryptSecret (the thrown DecryptError)
flow into the Effect error channel and can be handled by downstream catchAll.
In `@github-service/src/test/effect/repo/repoUtils.test.ts`:
- Around line 488-502: The test is missing an await before the assertion so the
rejection from runWithRepoUtils(... repoUtils.pushToTeamRepo(...)) may not be
observed; modify the test to await the promise-based assertion by changing the
call to use await expect(runWithRepoUtils(fs, (repoUtils) =>
repoUtils.pushToTeamRepo({ ... }))).rejects.toThrow("git push") so the test
actually waits for the rejection from runWithRepoUtils /
repoUtils.pushToTeamRepo.
- Line 49: The normalize helper only replaces the first backslash because it
uses replace("\\", "/"); update the normalize function to replace all
backslashes before calling path.posix.normalize by using a global replacement
(e.g., replaceAll("\\", "/") or replace(/\\/g, "/")) so Windows paths with
multiple backslashes are fully normalized; keep the function name normalize and
the call to path.posix.normalize intact.
---
Nitpick comments:
In `@github-service/package.json`:
- Around line 27-28: The package.json currently lists type-only packages
"@types/amqplib" and "@types/bun" under dependencies which causes them to be
installed into the production Docker image; move these entries from dependencies
into devDependencies in package.json (or run an equivalent package manager
command such as npm/yarn pnpm to save them as devDependencies) so they are only
present at build/compile time and not shipped in the runtime image, ensuring
references to "@types/amqplib" and "@types/bun" are removed from the top-level
"dependencies" and added under "devDependencies".
In `@github-service/src/effect/github/gitHubClient.ts`:
- Around line 78-84: The rate-limit branch in GitHubClient.classifyError
hardcodes status: 429 when constructing GitHubRateLimitError, causing
403-with-zero-remaining to be reported as 429; change the construction to pass
the actual status variable (status) instead of 429 so the returned
GitHubRateLimitError reflects the real HTTP status while still computing resetAt
via GitHubClient.parseRateLimitReset(headers).
In `@github-service/src/effect/program.ts`:
- Around line 35-39: The parseMessage function currently wraps JSON.parse in
Effect.try and loses original parse errors; replace the Effect.try block with a
single decode step using Schema.decodeUnknown(InboundMessage,
JSON.parse(msg.content.toString("utf8"))) (or call Schema.decodeUnknown on the
raw string/unknown value) so parsing and validation happen together and return
typed decoding errors instead of a generic ParseMessageError; update references
to parseMessage to expect the Schema.decodeUnknown result and remove the old
Effect.try/ParseMessageError usage.
In `@github-service/src/effect/rabbitmq/consumer.ts`:
- Around line 84-99: The current use of Runtime.runFork to run handler(msg) is
fire-and-forget and can orphan message handlers on shutdown; replace it with a
tracked/scoped fork (e.g., Effect.forkScoped or explicit Fiber tracking) so
message handler fibers are supervised and awaited during shutdown. Update the
code path that invokes Runtime.runFork to spawn fibers via Effect.forkScoped (or
push each Fiber into a consumer-level registry you join on shutdown), ensure
channel.ack(msg) and channel.nack(msg, false, false) remain executed in the
handler’s success/failure finalizers, and adjust shutdown logic to await or
interrupt these tracked fibers before closing the channel/consumer.
- Around line 54-55: The channel close effect currently swallows errors via
Effect.ignore; change the cleanup for chan (the (chan) => Effect.promise(() =>
chan.close()) call) to surface and log failures instead of ignoring them—replace
Effect.ignore with an error handler like Effect.catchAll/Effect.catchSome that
logs the error via Effect.logError (include context like "RabbitMQ channel close
failed") and then continues, so channel.close failures are recorded for
debugging while still allowing shutdown to proceed.
- Around line 42-46: The release code currently uses Effect.promise(() =>
conn.close()).pipe(Effect.tap(...)) which swallows a rejection and prevents the
tap from running; change the release to log before attempting conn.close() and
handle close errors explicitly: call Effect.logInfo("RabbitMQ connection
closing") (or similar) prior to invoking conn.close(), then invoke
Effect.promise(() => conn.close()) and attach an error handler (e.g.,
Effect.catchAll or equivalent) that logs the close failure (using
Effect.logError with the caught error) and re-raises or fails the effect so
errors aren’t silently swallowed; refer to the conn.close() call, the
surrounding Effect.promise and Effect.tap usage to find and update the code.
In `@github-service/src/effect/repo/repoUtils.ts`:
- Around line 123-127: The statement ending the pipe chain that calls
Effect.withSpan("initRepo", { attributes: { "temp.folder": tempFolderPath,
"base.path": basePath } }) is missing a trailing semicolon; update the end of
the expression that uses Effect.withSpan (the initRepo span with tempFolderPath
and basePath attributes) to include the required semicolon to comply with the
project coding guidelines.
- Around line 364-398: initRepo is currently run in parallel with several
file-transforming effects (updateGitignoreFromCoreignore,
updateDevcontainerDockerCompose, updateTeamName, writeJsonConfig,
updateConfigsUrls) causing a race where initRepo may remove .git while others
access repo files; change the flow to run initRepo first (await its Effect) and
then call Effect.all (with the remaining effects and concurrency:"unbounded")
for the rest of the transformations so all file operations run only after
initRepo completes.
- Around line 416-423: The current single tryGit("git push", ...) call hides
which git operation failed because it wraps opts.gitRepo.addRemote, .add,
.commit, .branch and .push; split these into separate tryGit calls with
descriptive labels (e.g., tryGit("git addRemote", () =>
opts.gitRepo.addRemote(authUrl)), tryGit("git add", () =>
opts.gitRepo.add(".")), tryGit("git commit", () => opts.gitRepo.commit("Initial
commit")), tryGit("git branch -M main", () => opts.gitRepo.branch(["-M",
"main"])), and tryGit("git push", () => opts.gitRepo.push("team-repo", "main",
["-u"]))) so failures report the exact failing operation and include the
authUrl/context where relevant.
In `@github-service/src/test/effect/repo/repoUtils.test.ts`:
- Around line 263-268: The test redundantly provides the FileSystem service
twice: once via layer and again with
Effect.provideService(FileSystem.FileSystem, fs.asService()), which overwrites
the layer; remove the second provision to rely on the layer instead (i.e.,
delete the Effect.provideService(FileSystem.FileSystem, fs.asService()) call) so
effectProgram receives FileSystem only from layer, keeping effectProgram, layer,
FileSystem.FileSystem, and fs.asService() unchanged.
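For illustration, the per-operation labeling suggested for `repoUtils.ts` above can be sketched in plain TypeScript. This is a hedged stand-in, not the service's Effect-based helper: `GitStepError`, `runPushSequence`, and the synchronous `Step` shape are hypothetical names used only to show why one label per git call makes failures diagnosable.

```typescript
// Hypothetical stand-in for an Effect-based tryGit helper: each git step is
// wrapped with its own label so a failure names the exact operation, instead
// of one opaque "git push" error covering five different calls.
type Step = () => void;

class GitStepError extends Error {
  constructor(label: string, cause: unknown) {
    super(`${label} failed: ${String(cause)}`);
    this.name = "GitStepError";
  }
}

function tryGit(label: string, step: Step): void {
  try {
    step();
  } catch (cause) {
    throw new GitStepError(label, cause);
  }
}

// Mirrors the suggested split: one labeled call per git operation.
function runPushSequence(repo: {
  addRemote: Step;
  add: Step;
  commit: Step;
  branch: Step;
  push: Step;
}): void {
  tryGit("git addRemote", repo.addRemote);
  tryGit("git add", repo.add);
  tryGit("git commit", repo.commit);
  tryGit("git branch -M main", repo.branch);
  tryGit("git push", repo.push);
}
```

With this shape, a failing commit surfaces as "git commit failed: …" rather than being attributed to the final push.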
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 702c08bd-c1d5-4cea-8e37-512f3bab8b70
⛔ Files ignored due to path filters (2)
- `github-service/bun.lock` is excluded by `!**/*.lock`
- `github-service/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (28)
- AGENTS.md
- api/src/github-api/github-api.service.ts
- github-service/Dockerfile
- github-service/eslint.config.mts
- github-service/package.json
- github-service/pnpm-workspace.yaml
- github-service/src/app.controller.ts
- github-service/src/app.module.ts
- github-service/src/app.service.ts
- github-service/src/effect/github/gitHubClient.ts
- github-service/src/effect/layers/config.ts
- github-service/src/effect/main.ts
- github-service/src/effect/program.ts
- github-service/src/effect/rabbitmq/consumer.ts
- github-service/src/effect/repo/repoUtils.ts
- github-service/src/effect/schemas/messages.ts
- github-service/src/effect/util/crypto.ts
- github-service/src/githubApi/README.md
- github-service/src/githubApi/client.ts
- github-service/src/githubApi/endpoints.ts
- github-service/src/githubApi/example.ts
- github-service/src/githubApi/index.ts
- github-service/src/githubApi/response-types.ts
- github-service/src/githubApi/types.ts
- github-service/src/main.ts
- github-service/src/repo.utils.ts
- github-service/src/test/effect/repo/repoUtils.test.ts
- github-service/tsconfig.build.json
💤 Files with no reviewable changes (13)
- github-service/pnpm-workspace.yaml
- github-service/src/app.module.ts
- github-service/src/githubApi/README.md
- github-service/src/githubApi/index.ts
- github-service/src/githubApi/response-types.ts
- github-service/src/main.ts
- github-service/src/githubApi/example.ts
- github-service/src/githubApi/endpoints.ts
- github-service/src/githubApi/client.ts
- github-service/src/githubApi/types.ts
- github-service/src/app.controller.ts
- github-service/src/repo.utils.ts
- github-service/src/app.service.ts
…sleep Agent-Logs-Url: https://github.com/42core-team/website/sessions/6040f71f-5fce-43b9-b2aa-72ea8e33bd82 Co-authored-by: Peu77 <60301119+Peu77@users.noreply.github.com>
…t' into refactor-github-service-to-effect
Actionable comments posted: 1
🧹 Nitpick comments (3)
github-service/src/effect/program.ts (2)
107-126: Consider handling `resolveUsername` failures in error recovery.

If `resolveUsername` fails (e.g., `getUserById` returns an error), the original action error is lost because the new error from `resolveUsername` propagates instead. Consider logging the original error before attempting username resolution.

💡 Optional improvement

```diff
 Effect.catchAll(
   opts.action(opts.username, gh),
   (error) =>
     Effect.gen(function* () {
       if (opts.githubId) {
-        const newUsername = yield* resolveUsername(
+        const newUsername = yield* Effect.catchAll(
+          resolveUsername(
             opts.githubId,
             opts.username,
             gh,
-        );
+          ),
+          () => Effect.succeed(null),
+        );
         if (newUsername) {
           return yield* opts.action(newUsername, gh);
         }
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/program.ts` around lines 107 - 126, The recovery branch inside Effect.catchAll currently attempts to call resolveUsername and may let a resolveUsername error replace the original opts.action error; modify the handler so it first logs the original error (using Effect.log with opts.actionName and opts.context and the original error), then attempt resolveUsername in a safe/fallible way (catching any errors from resolveUsername) and only call opts.action(newUsername, gh) if resolveUsername succeeds and returns a username; if resolveUsername fails or returns no username, rethrow or fail with the original error so the original failure is preserved. Reference resolveUsername, opts.action, Effect.catchAll, Effect.log, opts.actionName, and opts.context when making the change.
498-527: Consider error handling for collaborator addition failures.

If `addCollaborators` fails after `repository_created` is published, no error notification is sent. The repository exists but some users may not have access. Consider:

- Wrapping `addCollaborators` with error logging/notification
- Publishing a separate event for partial failures

💡 Suggested improvement

```diff
   yield* mq.publish("repository_created", {
     repositoryName: repoName,
     teamId: d.teamId,
   });
-  yield* addCollaborators(d, repoName);
+  yield* Effect.catchAll(addCollaborators(d, repoName), (error) =>
+    Effect.gen(function* () {
+      yield* Effect.logError(
+        `Failed to add some collaborators to ${repoName}: ${String(error)}`,
+      );
+      yield* mq.publish("add_collaborators_error", {
+        repositoryName: repoName,
+        teamId: d.teamId,
+        error: String(error),
+      });
+    }),
+  );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/program.ts` around lines 498 - 527, The addCollaborators call inside handleCreateTeamRepository can fail after the "repository_created" event is published; wrap the addCollaborators(d, repoName) invocation in a try/catch (or Effect.catch) to capture failures, use process logging (Effect.log or processLogger) to record the error and affected users/repos, and publish a new event (e.g., "repository_collaborators_failed") via mq.publish with repositoryName, teamId and error/details so consumers can detect partial failures; keep the successful repository_created publish as-is but ensure the catch returns or rethrows appropriately after publishing the failure event.

github-service/src/effect/schemas/messages.ts (1)
14-48: Consider field naming consistency across message schemas.

The schemas use inconsistent naming:

- `repoInfoFields`: `repoOwner`, `repoName`
- `AddUserToRepository`: `githubOrg`, `repositoryName`

If these field names are dictated by external message producers, the inconsistency is unavoidable. Otherwise, consider standardizing (e.g., always use `repoOwner`/`repoName` or `githubOrg`/`repositoryName`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/schemas/messages.ts` around lines 14 - 48, Schemas are using inconsistent repository field names: repoInfoFields defines repoOwner/repoName while AddUserToRepository uses githubOrg/repositoryName; update AddUserToRepository to use the same names as repoInfoFields (replace githubOrg -> repoOwner and repositoryName -> repoName) so RemoveWritePermissions, AddWritePermissions and AddUserToRepository share a consistent schema shape (update any callers/consumers accordingly), or if external producers require the alternate names, add a short comment near repoInfoFields/GitHubUser documenting the canonical field names and map/transform incoming messages in the message parsing layer to the canonical fields.
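The "map incoming messages to canonical fields in the parsing layer" option mentioned above can be sketched as follows. This is a hedged illustration only: the `IncomingAddUser` and `CanonicalRepoMessage` shapes and the `toCanonical` helper are hypothetical and do not reflect the service's actual `Schema` definitions.

```typescript
// Illustrative shapes: external producers send githubOrg/repositoryName,
// while the other permission messages use repoOwner/repoName.
interface IncomingAddUser {
  githubOrg: string;
  repositoryName: string;
  username: string;
}

interface CanonicalRepoMessage {
  repoOwner: string;
  repoName: string;
  username: string;
}

// Normalize at the message-parsing boundary so every downstream handler
// sees one consistent field naming.
const toCanonical = (msg: IncomingAddUser): CanonicalRepoMessage => ({
  repoOwner: msg.githubOrg,
  repoName: msg.repositoryName,
  username: msg.username,
});
```

Doing the rename at the boundary keeps external producers unchanged while the handlers share one schema shape.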
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@github-service/src/effect/program.ts`:
- Around line 257-297: handleAddUserToRepository continues to call
acceptInvitationIfToken regardless of executeGitHubAction outcome because both
onFailure and onSuccess only publish messages; change the flow so invitation
acceptance runs only on success by short‑circuiting on failure: have the
Effect.matchEffect invocation either return/throw onFailure (e.g., call fail and
then return from the generator) or refactor to produce a success value and then
only call acceptInvitationIfToken in the success path; update references to
executeGitHubAction, onFailure (fail), onSuccess (ok), and
acceptInvitationIfToken accordingly and add a brief comment if you intentionally
keep the current behavior.
---
Nitpick comments:
In `@github-service/src/effect/program.ts`:
- Around line 107-126: The recovery branch inside Effect.catchAll currently
attempts to call resolveUsername and may let a resolveUsername error replace the
original opts.action error; modify the handler so it first logs the original
error (using Effect.log with opts.actionName and opts.context and the original
error), then attempt resolveUsername in a safe/fallible way (catching any errors
from resolveUsername) and only call opts.action(newUsername, gh) if
resolveUsername succeeds and returns a username; if resolveUsername fails or
returns no username, rethrow or fail with the original error so the original
failure is preserved. Reference resolveUsername, opts.action, Effect.catchAll,
Effect.log, opts.actionName, and opts.context when making the change.
- Around line 498-527: The addCollaborators call inside
handleCreateTeamRepository can fail after the "repository_created" event is
published; wrap the addCollaborators(d, repoName) invocation in a try/catch (or
Effect.catch) to capture failures, use process logging (Effect.log or
processLogger) to record the error and affected users/repos, and publish a new
event (e.g., "repository_collaborators_failed") via mq.publish with
repositoryName, teamId and error/details so consumers can detect partial
failures; keep the successful repository_created publish as-is but ensure the
catch returns or rethrows appropriately after publishing the failure event.
In `@github-service/src/effect/schemas/messages.ts`:
- Around line 14-48: Schemas are using inconsistent repository field names:
repoInfoFields defines repoOwner/repoName while AddUserToRepository uses
githubOrg/repositoryName; update AddUserToRepository to use the same names as
repoInfoFields (replace githubOrg -> repoOwner and repositoryName -> repoName)
so RemoveWritePermissions, AddWritePermissions and AddUserToRepository share a
consistent schema shape (update any callers/consumers accordingly), or if
external producers require the alternate names, add a short comment near
repoInfoFields/GitHubUser documenting the canonical field names and
map/transform incoming messages in the message parsing layer to the canonical
fields.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c8056460-baa9-4f3c-9c23-ec96cb01b9bc
📒 Files selected for processing (9)
- github-service/package.json
- github-service/src/effect/github/gitHubClient.ts
- github-service/src/effect/layers/config.ts
- github-service/src/effect/main.ts
- github-service/src/effect/program.ts
- github-service/src/effect/repo/repoUtils.ts
- github-service/src/effect/schemas/messages.ts
- github-service/src/effect/util/crypto.ts
- github-service/src/test/effect/repo/repoUtils.test.ts
✅ Files skipped from review due to trivial changes (4)
- github-service/src/effect/layers/config.ts
- github-service/src/effect/main.ts
- github-service/src/effect/util/crypto.ts
- github-service/src/test/effect/repo/repoUtils.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- github-service/package.json
- github-service/src/effect/repo/repoUtils.ts
Actionable comments posted: 1
🧹 Nitpick comments (5)
github-service/src/effect/program.ts (3)
35-39: JSON parsing error is typed but loses original cause.

The `ParseMessageError` discards the original exception. For debugging malformed messages, preserving the cause would be helpful.

💡 Optional improvement

```diff
 const parseMessage = (msg: ConsumeMessage) =>
   Effect.try({
     try: () => JSON.parse(msg.content.toString("utf8")) as unknown,
-    catch: () => new ParseMessageError({ message: "Invalid JSON message" }),
+    catch: (e) => new ParseMessageError({ message: `Invalid JSON message: ${String(e)}` }),
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/program.ts` around lines 35 - 39, The JSON parse in parseMessage currently converts any parse failure into a ParseMessageError but drops the original exception; update the catch handler in parseMessage to accept the caught error and pass it through into the ParseMessageError (e.g., as a cause/innerError property or field your error type supports) so the original exception and stack are preserved for debugging when JSON.parse fails.
277-317: `handleAddUserToRepository` now correctly sequences operations.

The `executeGitHubAction` is properly yielded with `yield*`, and since it's inside an `Effect.gen`, any failure will propagate and skip subsequent operations (`acceptInvitationIfToken` and `ok`). This addresses the previous review concern about invitation acceptance running regardless of add-user outcome.

However, there's no `fail()` call on the error path — if `executeGitHubAction` throws, the error propagates up to `handleMessage` but no `add_user_to_repository_error` message is published.

💡 Consider wrapping with matchEffect for consistent error reporting

```diff
 const handleAddUserToRepository = (msg: AddUserToRepositoryMessage) => {
   const d = msg.data;
-  return Effect.gen(function* () {
-    yield* executeGitHubAction({
+  return Effect.matchEffect(
+    executeGitHubAction({
       username: d.username,
       ...
     }),
-
-    yield* acceptInvitationIfToken(...);
-
-    yield* ok("add_user_to_repository", {...});
-  }).pipe(
+    {
+      onFailure: (e) =>
+        fail("add_user_to_repository", e, {...}),
+      onSuccess: () =>
+        acceptInvitationIfToken(...).pipe(
+          Effect.andThen(ok("add_user_to_repository", {...})),
+        ),
+    },
+  ).pipe(
     Effect.withSpan(...),
   );
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/program.ts` around lines 277 - 317, handleAddUserToRepository currently lets errors from executeGitHubAction bubble up without publishing an add_user_to_repository_error event; wrap the Effect.gen returned by handleAddUserToRepository with an error matcher (e.g., Effect.matchEffect or Effect.catchAll) so that on failure you call fail("add_user_to_repository_error", { githubOrg: d.githubOrg, repositoryName: d.repositoryName, username: d.username, githubId: d.githubId, error: <serialized error> }) and on success continue to call acceptInvitationIfToken and ok as before; reference the existing symbols handleAddUserToRepository, executeGitHubAction, acceptInvitationIfToken, ok, and fail when implementing the match so failures produce the consistent error message.
107-126: `executeGitHubAction` error handling correctly retries with resolved username.

The pattern of catching the initial action failure, resolving the potentially changed username via `getUserById`, and retrying is well-implemented. The `username_changed` event is published to keep downstream systems in sync.

One edge case: if `resolveUsername` itself fails (e.g., invalid `githubId`), the error is not caught and the original action error is lost.

💡 Consider preserving original error context

```diff
 Effect.gen(function* () {
   if (opts.githubId) {
-    const newUsername = yield* resolveUsername(
+    const newUsername = yield* Effect.catchAll(
+      resolveUsername(opts.githubId, opts.username, gh),
+      () => Effect.succeed(null),
+    );
-      opts.githubId,
-      opts.username,
-      gh,
-    );
     if (newUsername) {
       return yield* opts.action(newUsername, gh);
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/program.ts` around lines 107 - 126, In the catchAll handler, prevent resolveUsername errors from swallowing the original action error by running resolveUsername inside its own Effect.catchAll (or Effect.gen try/catch) so resolution failures return undefined (or a controlled value) and are logged, then only retry opts.action(newUsername, gh) if a valid newUsername is returned; ensure the final logged error and returned failure use the original caught error (String(error)) when resolution fails. Reference: resolveUsername, opts.action, opts.actionName, opts.context.

github-service/package.json (1)
33-34: `@types/*` packages should be in `devDependencies`, not `dependencies`.

Type definition packages (`@types/amqplib` and `@types/bun`) are only needed at compile time and shouldn't be included in production dependencies. This adds unnecessary weight to production installs.

♻️ Proposed fix

Move these to `devDependencies`:

```diff
 "dependencies": {
   "@effect-messaging/amqp": "^0.6.4",
   ...
-  "@types/amqplib": "^0.10.8",
-  "@types/bun": "^1.3.11",
   "@types/crypto-js": "^4.2.2",
   ...
 },
 "devDependencies": {
+  "@types/amqplib": "^0.10.8",
+  "@types/bun": "^1.3.11",
   "@effect/eslint-plugin": "^0.3.2",
```

Note: `@types/crypto-js` should also be moved to `devDependencies`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/package.json` around lines 33 - 34, The package.json currently lists type-only packages (`@types/amqplib` and `@types/bun` — and also `@types/crypto-js`) under dependencies; move these three entries out of "dependencies" and into "devDependencies" so type definitions are only installed at compile time; update the package.json by removing the three `@types` entries from dependencies and adding them under devDependencies, then reinstall or update the lockfile (npm/yarn/pnpm) so the production install no longer includes these type packages.

github-service/src/effect/main.ts (1)
48-64: Consumer error handling improved with `tapError`, but `RabbitMQError` catch may silently exit.

The `Effect.tapError` correctly preserves failure semantics while logging (addressing the previous review). However, when `RabbitMQError` is caught at line 62, the effect completes without restarting the consumer. Since `RabbitMQLive` already has `reconnectSchedule` for connection issues, this should primarily catch unrecoverable errors — but consider whether the process should exit with a non-zero code to trigger container restart.

💡 Optional: Exit with error code on fatal RabbitMQ failure

```diff
 Effect.catchTag("RabbitMQError", (err) => Effect.logError(err)),
+Effect.catchAll(() => Effect.sync(() => process.exit(1))),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/src/effect/main.ts` around lines 48 - 64, The catch for RabbitMQError in the consumer effect currently logs and lets the effect complete, which silently stops the consumer; update the handler for the RabbitMQError tag (the catchTag on the Effect.gen that builds consumer) to either rethrow/fail the effect after logging or terminate the process with a non-zero exit so the container can restart (e.g., log via Effect.logError(err) and then return an Effect.fail(err) or call process exit), ensuring you don't swallow failures from RabbitMQ (references: consumer, RabbitMQError, RabbitMQ, AppLayer, handleMessage).
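The "log, then still fail" behavior recommended above can be illustrated without Effect. This is a hedged sketch: `runConsumer` and `logError` are hypothetical stand-ins for the service's consumer loop and `Effect.logError`, showing only why rethrowing after logging matters for a supervised process.

```typescript
// Illustrative: logging a fatal error and then completing normally hides the
// failure from the supervisor; rethrowing keeps the non-zero exit that lets
// an orchestrator restart the container.
const logged: string[] = [];
const logError = (msg: string): void => { logged.push(msg); };

const runConsumer = (consume: () => void): void => {
  try {
    consume();
  } catch (err) {
    logError(`fatal consumer error: ${String(err)}`);
    throw err; // do not swallow: propagate so the process exits non-zero
  }
};
```

The same trade-off applies in Effect form: a `catchTag` that only logs converts a failure into success, while a rethrow (or `process.exit(1)`) preserves the crash-and-restart semantics.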
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@github-service/src/test/effect/repo/repoUtils.test.ts`:
- Around line 405-422: The test calling runWithRepoUtils(...) and
repoUtils.cloneMonoRepo(...) uses expect(...).rejects.toThrow(...) but is
missing an await, so the promise rejection isn't awaited; add await before the
expect expression in this test to properly await the promise and validate that
repoUtils.cloneMonoRepo (the cloneMonoRepo call inside runWithRepoUtils) rejects
with the expected "git clone" error.
---
Nitpick comments:
In `@github-service/package.json`:
- Around line 33-34: The package.json currently lists type-only packages
(`@types/amqplib` and `@types/bun` — and also `@types/crypto-js`) under dependencies;
move these three entries out of "dependencies" and into "devDependencies" so
type definitions are only installed at compile time; update the package.json by
removing the three `@types` entries from dependencies and adding them under
devDependencies, then reinstall or update the lockfile (npm/yarn/pnpm) so the
production install no longer includes these type packages.
In `@github-service/src/effect/main.ts`:
- Around line 48-64: The catch for RabbitMQError in the consumer effect
currently logs and lets the effect complete, which silently stops the consumer;
update the handler for the RabbitMQError tag (the catchTag on the Effect.gen
that builds consumer) to either rethrow/fail the effect after logging or
terminate the process with a non-zero exit so the container can restart (e.g.,
log via Effect.logError(err) and then return an Effect.fail(err) or call process
exit), ensuring you don't swallow failures from RabbitMQ (references: consumer,
RabbitMQError, RabbitMQ, AppLayer, handleMessage).
In `@github-service/src/effect/program.ts`:
- Around line 35-39: The JSON parse in parseMessage currently converts any parse
failure into a ParseMessageError but drops the original exception; update the
catch handler in parseMessage to accept the caught error and pass it through
into the ParseMessageError (e.g., as a cause/innerError property or field your
error type supports) so the original exception and stack are preserved for
debugging when JSON.parse fails.
- Around line 277-317: handleAddUserToRepository currently lets errors from
executeGitHubAction bubble up without publishing an add_user_to_repository_error
event; wrap the Effect.gen returned by handleAddUserToRepository with an error
matcher (e.g., Effect.matchEffect or Effect.catchAll) so that on failure you
call fail("add_user_to_repository_error", { githubOrg: d.githubOrg,
repositoryName: d.repositoryName, username: d.username, githubId: d.githubId,
error: <serialized error> }) and on success continue to call
acceptInvitationIfToken and ok as before; reference the existing symbols
handleAddUserToRepository, executeGitHubAction, acceptInvitationIfToken, ok, and
fail when implementing the match so failures produce the consistent error
message.
- Around line 107-126: In the catchAll handler, prevent resolveUsername errors
from swallowing the original action error by running resolveUsername inside its
own Effect.catchAll (or Effect.gen try/catch) so resolution failures return
undefined (or a controlled value) and are logged, then only retry
opts.action(newUsername, gh) if a valid newUsername is returned; ensure the
final logged error and returned failure use the original caught error
(String(error)) when resolution fails. Reference: resolveUsername, opts.action,
opts.actionName, opts.context.
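The recurring "retry with a resolved username without losing the original error" pattern from the comments above can be sketched as a promise-based analogue. This is a hedged illustration: the real code uses `Effect.catchAll`, and `withUsernameFallback` with its `action`/`resolveUsername` parameters are hypothetical names.

```typescript
// Illustrative: try the action, fall back to a freshly resolved username,
// and make sure a failure in the fallback lookup never replaces the
// original action error.
const withUsernameFallback = async <A>(
  action: (username: string) => Promise<A>,
  username: string,
  resolveUsername: () => Promise<string | null>,
): Promise<A> => {
  try {
    return await action(username);
  } catch (originalError) {
    // The lookup is itself fallible; swallow its failure so the original
    // action error is what the caller ultimately sees.
    const newUsername = await resolveUsername().catch(() => null);
    if (newUsername) {
      return await action(newUsername);
    }
    throw originalError;
  }
};
```

If the lookup succeeds, the action is retried once with the new name; if it fails or returns nothing, the first error propagates unchanged.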
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0a305959-423e-46d5-9955-c93f2fd6a804
⛔ Files ignored due to path filters (1)
- `github-service/bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (13)
- .devcontainer/docker-compose.yml
- .devcontainer/grafana/provisioning/dashboards/dashboards.yml
- .devcontainer/grafana/provisioning/dashboards/github-service-effect.json
- .devcontainer/grafana/provisioning/datasources/prometheus.yml
- .devcontainer/prometheus/prometheus.yml
- .devcontainer/tempo/tempo.yml
- github-service/.env.example
- github-service/package.json
- github-service/src/effect/layers/config.ts
- github-service/src/effect/main.ts
- github-service/src/effect/program.ts
- github-service/src/effect/rabbitmq/consumer.ts
- github-service/src/test/effect/repo/repoUtils.test.ts
✅ Files skipped from review due to trivial changes (5)
- .devcontainer/prometheus/prometheus.yml
- .devcontainer/grafana/provisioning/dashboards/dashboards.yml
- .devcontainer/tempo/tempo.yml
- .devcontainer/grafana/provisioning/dashboards/github-service-effect.json
- .devcontainer/grafana/provisioning/datasources/prometheus.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- github-service/src/effect/layers/config.ts
```ts
  expect(
    runWithRepoUtils(fs, (repoUtils) =>
      repoUtils.cloneMonoRepo({
        monoRepoUrl: "https://github.com/acme/mono.git",
        monoRepoVersion: "main",
        myCoreBotDockerImage: "core:v1",
        visualizerDockerImage: "vis:v1",
        tempFolderPath,
        eventId: "event-1",
        teamName: "Alpha",
        basePath: "template",
        gameConfig: "{}",
        serverConfig: "{}",
        apiBaseUrl: "https://api.example.com",
      }),
    ),
  ).rejects.toThrow("git clone");
});
```
Missing await before expect(...).rejects.toThrow() — test may pass incorrectly.
Without await, the promise rejection is not awaited, and the test will pass regardless of whether the rejection occurs. This is the same pattern flagged in the pushToTeamRepo test.
🐛 Proposed fix
```diff
-  expect(
+  await expect(
     runWithRepoUtils(fs, (repoUtils) =>
       repoUtils.cloneMonoRepo({
         monoRepoUrl: "https://github.com/acme/mono.git",
         monoRepoVersion: "main",
         myCoreBotDockerImage: "core:v1",
         visualizerDockerImage: "vis:v1",
         tempFolderPath,
         eventId: "event-1",
         teamName: "Alpha",
         basePath: "template",
         gameConfig: "{}",
         serverConfig: "{}",
         apiBaseUrl: "https://api.example.com",
       }),
     ),
   ).rejects.toThrow("git clone");
```
Verify each finding against the current code and only fix it if needed.
In `@github-service/src/test/effect/repo/repoUtils.test.ts` around lines 405 -
422, The test calling runWithRepoUtils(...) and repoUtils.cloneMonoRepo(...)
uses expect(...).rejects.toThrow(...) but is missing an await, so the promise
rejection isn't awaited; add await before the expect expression in this test to
properly await the promise and validate that repoUtils.cloneMonoRepo (the
cloneMonoRepo call inside runWithRepoUtils) rejects with the expected "git
clone" error.
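The missing-`await` pitfall flagged above can be made concrete with a tiny stand-in for `expect(...).rejects.toThrow`. This is a hedged sketch: `rejectsToThrow` is a hypothetical helper, not the bun:test API, but it shows why the assertion is itself a promise that must be awaited or returned.

```typescript
// Illustrative rejection assertion: resolves only if the given promise
// rejects with a message containing the expected fragment. If the caller
// does not await (or chain) the returned promise, the test body can finish
// before this check ever runs.
const rejectsToThrow = async (
  p: Promise<unknown>,
  fragment: string,
): Promise<void> => {
  try {
    await p;
  } catch (e) {
    if (String(e).includes(fragment)) return; // rejected as expected
    throw new Error(`rejected, but message ${String(e)} lacks "${fragment}"`);
  }
  throw new Error("promise resolved but a rejection was expected");
};
```

Dropping the `await` detaches this promise, so a wrong or missing rejection is never reported to the test runner.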
…empo, and Grafana
🧹 Nitpick comments (3)
local-setup/infra/observability.yaml (2)
128-132: Hardcoded Grafana credentials are fine for local dev.

For production deployments, these should be externalized to Secrets. For local development, `admin`/`admin` is acceptable and common practice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@local-setup/infra/observability.yaml` around lines 128 - 132, Replace the hardcoded GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD env entries in the env block with secret-backed references for non-local deployments: remove the literal value fields and use Kubernetes secret valueFrom references (or envFrom) that point to a Grafana admin Secret you create, and keep the current admin/admin literals only in a local-dev overlay or docker-compose override; update any deployment/overlay tooling to populate that Secret for prod.
29-45: Consider adding securityContext for defense in depth (optional for local dev).

Static analysis flags missing security contexts on all three deployments. While less critical for local development, adding baseline security settings would better mirror production and is considered best practice.

🛡️ Example securityContext for Prometheus container

```diff
 containers:
   - name: prometheus
     image: prom/prometheus:v3.5.0
+    securityContext:
+      allowPrivilegeEscalation: false
+      runAsNonRoot: true
+      readOnlyRootFilesystem: true
     args:
       - --config.file=/etc/prometheus/prometheus.yml
```

Similar patterns can be applied to Tempo and Grafana containers. Note that Grafana may need a writable `/var/lib/grafana` (already handled via `emptyDir`).

Also applies to: 74-92, 124-148
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@local-setup/infra/observability.yaml` around lines 29 - 45, Add a baseline securityContext to the Prometheus deployment by setting pod and container-level securityContext fields: for the container "prometheus" set securityContext with runAsNonRoot: true, runAsUser (non-root UID), allowPrivilegeEscalation: false, readOnlyRootFilesystem: true and drop all capabilities; at the pod spec add a podSecurityContext with fsGroup and runAsNonRoot as well. Apply the same pattern to the Tempo and Grafana container specs (noting Grafana needs writable /var/lib/grafana via its emptyDir) so all three deployments use consistent hardened securityContext settings.

github-service/Dockerfile.dev (1)
1-16: Consider adding a non-root user for defense in depth.

Per static analysis (Trivy DS-0002), the container runs as root. While acceptable for local development, adding a non-root user improves security posture and better mirrors production behavior.

🛡️ Optional: Add non-root user

```diff
 FROM oven/bun:1.3.11-alpine

 RUN apk add --no-cache git

 WORKDIR /app

+RUN adduser -D -u 1001 appuser
+
 COPY package.json bun.lock ./
 RUN bun install --frozen-lockfile

 COPY . .
+RUN chown -R appuser:appuser /app

 RUN git config --global user.name "corethegame" && \
     git config --global user.email "core@paulgrossmann.de"

 EXPOSE 3000
+USER appuser
 CMD ["bun", "run", "start:dev"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@github-service/Dockerfile.dev` around lines 1 - 16, The Dockerfile currently runs as root; add a non-root user and switch to it to improve security: create a user (e.g., "appuser") and group during image build (via RUN), chown the WORKDIR (/app) and any copied files to that user (after COPY . . and after bun install), and add a USER appuser line before CMD so the container runs as that non-root user; keep the existing RUN git config steps either run as root if needed or move them to run as the new user only if the user has appropriate permissions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@github-service/Dockerfile.dev`:
- Around line 1-16: The Dockerfile currently runs as root; add a non-root user
and switch to it to improve security: create a user (e.g., "appuser") and group
during image build (via RUN), chown the WORKDIR (/app) and any copied files to
that user (after COPY . . and after bun install), and add a USER appuser line
before CMD so the container runs as that non-root user; keep the existing RUN
git config steps either run as root if needed or move them to run as the new
user only if the user has appropriate permissions.
In `@local-setup/infra/observability.yaml`:
- Around line 128-132: Replace the hardcoded GF_SECURITY_ADMIN_USER and
GF_SECURITY_ADMIN_PASSWORD env entries in the env block with secret-backed
references for non-local deployments: remove the literal value fields and use
Kubernetes secret valueFrom references (or envFrom) that point to a Grafana
admin Secret you create, and keep the current admin/admin literals only in a
local-dev overlay or docker-compose override; update any deployment/overlay
tooling to populate that Secret for prod.
- Around line 29-45: Add a baseline securityContext to the Prometheus deployment
by setting pod and container-level securityContext fields: for the container
"prometheus" set securityContext with runAsNonRoot: true, runAsUser (non-root
UID), allowPrivilegeEscalation: false, readOnlyRootFilesystem: true and drop all
capabilities; at the pod spec add a podSecurityContext with fsGroup and
runAsNonRoot as well. Apply the same pattern to the Tempo and Grafana container
specs (noting Grafana needs writable /var/lib/grafana via its emptyDir) so all
three deployments use consistent hardened securityContext settings.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 907d07d3-ed27-4ae8-a27e-f12a159559ac
📒 Files selected for processing (10)
- .devcontainer/docker-compose.yml
- Tiltfile
- github-service/Dockerfile.dev
- local-setup/infra/observability.yaml
- local-setup/infra/observability/grafana/provisioning/dashboards/dashboards.yml
- local-setup/infra/observability/grafana/provisioning/dashboards/github-service-effect.json
- local-setup/infra/observability/grafana/provisioning/datasources/prometheus.yml
- local-setup/infra/observability/prometheus/prometheus.yml
- local-setup/infra/observability/tempo/tempo.yml
- local-setup/local-values/github-service.yaml
✅ Files skipped from review due to trivial changes (5)
- local-setup/local-values/github-service.yaml
- local-setup/infra/observability/grafana/provisioning/dashboards/dashboards.yml
- local-setup/infra/observability/prometheus/prometheus.yml
- local-setup/infra/observability/grafana/provisioning/datasources/prometheus.yml
- local-setup/infra/observability/tempo/tempo.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- .devcontainer/docker-compose.yml
In this pull request, I removed the NestJS framework from the GitHub service and completely rewrote the project using Effect (https://effect.website/) and Bun.
The reason for this change is that Effect makes the service production-ready: it tracks exactly which errors can occur at which point, provides full OpenTelemetry support, and offers robust features such as retries with exponential backoff.
It also improves the readability and testability of the code, because Effect has built-in support for dependency injection and is designed for functional programming.
Summary by CodeRabbit
New Features
Documentation
Refactor
Chores