feat: add neural engagement prediction endpoint (TRIBE v2) #425
sidneyswift wants to merge 1 commit into test from
Conversation
New REST resource at /api/predictions with POST (create), GET (list), and GET /:id (detail). Includes Modal integration, Supabase persistence, Zod validation, MCP tools (predict_engagement, get_predictions), and 26 unit tests. Made-with: Cursor
📝 Walkthrough

This PR introduces a complete prediction API for TRIBE v2 neural engagement analysis. It adds REST endpoints for creating and retrieving predictions, MCP tools for engagement prediction and retrieval, Supabase database operations for persistence, and an external TRIBE API client with validation and error handling.
Sequence Diagram(s)

sequenceDiagram
participant Client
participant POST_Handler as POST /predictions
participant Auth as validateAuthContext
participant Validate as validateCreatePredictionBody
participant Tribe as callTribePredict
participant DB as insertPrediction
Client->>POST_Handler: POST body: {file_url, modality}
POST_Handler->>Auth: Validate request
Auth-->>POST_Handler: accountId
POST_Handler->>Validate: Parse & validate body
Validate-->>POST_Handler: CreatePredictionBody
POST_Handler->>Tribe: Call TRIBE API
Tribe-->>POST_Handler: TribePredictResult
POST_Handler->>DB: Insert prediction + metrics
DB-->>POST_Handler: PredictionRow
POST_Handler-->>Client: 200 {id, metrics, created_at}
sequenceDiagram
participant Client
participant GET_Handler as GET /predictions or /predictions/{id}
participant Auth as validateAuthContext
participant DB as selectPredictions or selectPredictionById
participant Client2 as Client
Client->>GET_Handler: GET request (with query params or id)
GET_Handler->>Auth: Validate request
Auth-->>GET_Handler: accountId
GET_Handler->>DB: Query predictions
DB-->>GET_Handler: PredictionRow(s) or null
alt Not Found
GET_Handler-->>Client2: 404 {error: "Prediction not found"}
else Unauthorized
GET_Handler-->>Client2: 404 {error: "Prediction not found"}
else Success
GET_Handler-->>Client2: 200 {status: "success", predictions}
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 1 failed check (1 warning)
6 issues found across 23 files
Confidence score: 3/5
- There is moderate merge risk because several medium-severity, high-confidence issues are user-impacting: unbounded query limits in `lib/supabase/predictions/selectPredictions.ts` can trigger large database reads, and malformed IDs in `lib/predictions/getOnePredictionHandler.ts` can surface as 500s instead of clean client errors.
- The most severe runtime reliability concern is in `lib/tribe/callTribePredict.ts`: parsing external service responses without guarding `response.json()` can turn non-JSON upstream failures into opaque `SyntaxError`s, making production failures harder to diagnose.
- Input/response validation is currently weaker than the type contracts imply in `lib/tribe/isTribePredictResult.ts`; shallow array checks on external payloads increase regression risk if element shapes are malformed.
- Pay close attention to `lib/tribe/callTribePredict.ts`, `lib/tribe/isTribePredictResult.ts`, `lib/supabase/predictions/selectPredictions.ts`, and `lib/predictions/getOnePredictionHandler.ts`: external payload handling, validation correctness, and query bounds are the key risk areas.
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="lib/predictions/__tests__/postCreatePredictionHandler.test.ts">
<violation number="1" location="lib/predictions/__tests__/postCreatePredictionHandler.test.ts:1">
P2: Custom agent: **Enforce Clear Code Style and Maintainability Practices**
Split this test file into smaller files to stay under the 100-line limit and keep single responsibility.</violation>
</file>
<file name="lib/tribe/callTribePredict.ts">
<violation number="1" location="lib/tribe/callTribePredict.ts:31">
P2: Wrap `response.json()` in a try-catch so that a non-JSON response from Modal produces a clear error instead of an opaque `SyntaxError`. This is especially important for an external GPU inference service where HTML error pages or empty responses are realistic failure modes.</violation>
</file>
<file name="lib/tribe/isTribePredictResult.ts">
<violation number="1" location="lib/tribe/isTribePredictResult.ts:25">
P2: Type guard claims `value is TribePredictResult` but only shallow-checks the three array fields with `Array.isArray()`, never validating element shapes `{ time_seconds: number; score: number }`. Since this guards an external API response, a malformed array element would silently pass and propagate incorrect types downstream. Consider validating at least the first element of each array (or using Zod for the full response).</violation>
</file>
<file name="lib/supabase/predictions/selectPredictions.ts">
<violation number="1" location="lib/supabase/predictions/selectPredictions.ts:21">
P2: The documented max limit of 100 is not enforced. Callers can pass arbitrarily large values, resulting in unbounded reads from the database. Clamp the value to match the documented contract.</violation>
</file>
<file name="lib/predictions/getOnePredictionHandler.ts">
<violation number="1" location="lib/predictions/getOnePredictionHandler.ts:26">
P2: Validate `id` as a UUID before querying the database. A malformed `id` like `"abc"` will cause a Postgres type error, and the handler will return 500 instead of a 400/404.</violation>
</file>
<file name="lib/supabase/predictions/selectPredictionById.ts">
<violation number="1" location="lib/supabase/predictions/selectPredictionById.ts:3">
P2: Duplicate `PredictionRow` interface — `insertPrediction.ts` in the same directory already defines this type (as `PredictionInsert` + `{ id, created_at }`). Extract it to a shared file (e.g., `lib/supabase/predictions/types.ts`) or re-export it from `insertPrediction.ts` to avoid drift when the schema changes.</violation>
</file>
Architecture diagram
sequenceDiagram
participant Client as User / MCP Client
participant Router as API Route / MCP Tool
participant Auth as Auth Service
participant Logic as Tribe Logic (Shared)
participant Modal as Modal (GPU Inference)
participant DB as Supabase
Note over Client,DB: Engagement Prediction Flow (TRIBE v2)
Client->>Router: NEW: Predict Request (file_url, modality)
Router->>Auth: validateAuth() / resolveAccountId()
Auth-->>Router: accountId
Router->>Router: Validate Body (Zod)
Router->>Logic: NEW: processPredictRequest()
Logic->>Modal: NEW: callTribePredict()
Note right of Modal: TRIBE_PREDICT_URL
alt Modal Success
Modal-->>Logic: Engagement metrics (score, timeline, regions)
Logic-->>Router: Prediction results
Router->>DB: NEW: insertPrediction()
DB-->>Router: Saved record with UUID
Router-->>Client: 200 OK (Prediction Data)
else Modal Error / Timeout
Modal-->>Logic: Error response
Logic-->>Router: Error object
Router-->>Client: 500 Error (Engagement prediction failed)
end
Note over Client,DB: Retrieval Flow
Client->>Router: GET /api/predictions (List or Detail)
Router->>Auth: validateAuth()
Auth-->>Router: accountId
Router->>DB: NEW: selectPredictions() / selectPredictionById()
DB-->>Router: Prediction record(s)
alt Found & Owned
Router-->>Client: 200 OK (Data)
else Not Found / Forbidden
Router-->>Client: 404 Not Found
end
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review, or fix all with cubic.
@@ -0,0 +1,142 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
P2: Custom agent: Enforce Clear Code Style and Maintainability Practices
Split this test file into smaller files to stay under the 100-line limit and keep single responsibility.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/predictions/__tests__/postCreatePredictionHandler.test.ts, line 1:
<comment>Split this test file into smaller files to stay under the 100-line limit and keep single responsibility.</comment>
<file context>
@@ -0,0 +1,142 @@
+import { describe, it, expect, vi, beforeEach } from "vitest";
+import { NextRequest, NextResponse } from "next/server";
+import { postCreatePredictionHandler } from "../postCreatePredictionHandler";
</file context>
throw new Error(`Engagement prediction failed (status ${response.status})`);
}

const data = await response.json();
P2: Wrap response.json() in a try-catch so that a non-JSON response from Modal produces a clear error instead of an opaque SyntaxError. This is especially important for an external GPU inference service where HTML error pages or empty responses are realistic failure modes.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/tribe/callTribePredict.ts, line 31:
<comment>Wrap `response.json()` in a try-catch so that a non-JSON response from Modal produces a clear error instead of an opaque `SyntaxError`. This is especially important for an external GPU inference service where HTML error pages or empty responses are realistic failure modes.</comment>
<file context>
@@ -0,0 +1,36 @@
+ throw new Error(`Engagement prediction failed (status ${response.status})`);
+ }
+
+ const data = await response.json();
+ if (!isTribePredictResult(data)) {
+ throw new Error("TRIBE v2 returned an unexpected response shape");
</file context>
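The guarded `response.json()` fix suggested above can be sketched as follows. This is a minimal illustration, not the PR's code: the error class name, message text, and the injected response shape are all assumptions.

```typescript
// Illustrative error type so callers can distinguish upstream-shape
// failures from network errors (name is an assumption, not from the PR).
class TribeResponseError extends Error {}

// Parse an upstream response body, converting a JSON parse failure into a
// descriptive error instead of letting a bare SyntaxError escape.
async function parseTribeJson(res: { json(): Promise<unknown> }): Promise<unknown> {
  try {
    return await res.json();
  } catch {
    throw new TribeResponseError("TRIBE v2 returned a non-JSON response body");
  }
}
```

With this in place, an HTML error page from the inference service surfaces as a clear `TribeResponseError` rather than an opaque `SyntaxError: Unexpected token <`.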
const c = value as Record<string, unknown>;
return (
  typeof c.engagement_score === "number" &&
  Array.isArray(c.engagement_timeline) &&
P2: Type guard claims value is TribePredictResult but only shallow-checks the three array fields with Array.isArray(), never validating element shapes { time_seconds: number; score: number }. Since this guards an external API response, a malformed array element would silently pass and propagate incorrect types downstream. Consider validating at least the first element of each array (or using Zod for the full response).
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/tribe/isTribePredictResult.ts, line 25:
<comment>Type guard claims `value is TribePredictResult` but only shallow-checks the three array fields with `Array.isArray()`, never validating element shapes `{ time_seconds: number; score: number }`. Since this guards an external API response, a malformed array element would silently pass and propagate incorrect types downstream. Consider validating at least the first element of each array (or using Zod for the full response).</comment>
<file context>
@@ -0,0 +1,33 @@
+ const c = value as Record<string, unknown>;
+ return (
+ typeof c.engagement_score === "number" &&
+ Array.isArray(c.engagement_timeline) &&
+ Array.isArray(c.peak_moments) &&
+ Array.isArray(c.weak_spots) &&
</file context>
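A deeper guard along the lines suggested above might look like this sketch. The `{ time_seconds, score }` element shape comes from the review comment; the helper names are assumptions.

```typescript
// Expected element shape of engagement_timeline / peak_moments / weak_spots
// per the review comment.
interface TimelinePoint {
  time_seconds: number;
  score: number;
}

// Validate a single element rather than trusting Array.isArray alone.
function isTimelinePoint(v: unknown): v is TimelinePoint {
  if (typeof v !== "object" || v === null) return false;
  const p = v as Record<string, unknown>;
  return typeof p.time_seconds === "number" && typeof p.score === "number";
}

// Every element is checked, so a malformed entry from the external API
// fails the guard instead of propagating incorrect types downstream.
function isPointArray(v: unknown): v is TimelinePoint[] {
  return Array.isArray(v) && v.every(isTimelinePoint);
}
```

For the full response, a Zod schema (as the comment also suggests) would give the same guarantee with less hand-written checking.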
*/
export async function selectPredictions(
  accountId: string,
  limit = 20,
P2: The documented max limit of 100 is not enforced. Callers can pass arbitrarily large values, resulting in unbounded reads from the database. Clamp the value to match the documented contract.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/supabase/predictions/selectPredictions.ts, line 21:
<comment>The documented max limit of 100 is not enforced. Callers can pass arbitrarily large values, resulting in unbounded reads from the database. Clamp the value to match the documented contract.</comment>
<file context>
@@ -0,0 +1,36 @@
+ */
+export async function selectPredictions(
+ accountId: string,
+ limit = 20,
+ offset = 0,
+): Promise<PredictionSummary[]> {
</file context>
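A clamp matching the documented contract (default 20, max 100) could be as small as this sketch; the function name and the non-finite fallback are assumptions.

```typescript
// Clamp a caller-provided limit into the documented [1, 100] range,
// falling back to the default for NaN/Infinity inputs.
function clampLimit(limit = 20, min = 1, max = 100): number {
  if (!Number.isFinite(limit)) return 20;
  return Math.min(max, Math.max(min, Math.trunc(limit)));
}
```

The clamped value would then feed the `.range(...)` computation so query size stays predictable regardless of caller input.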
const { accountId } = authResult;

try {
  const prediction = await selectPredictionById(id);
P2: Validate id as a UUID before querying the database. A malformed id like "abc" will cause a Postgres type error, and the handler will return 500 instead of a 400/404.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/predictions/getOnePredictionHandler.ts, line 26:
<comment>Validate `id` as a UUID before querying the database. A malformed `id` like `"abc"` will cause a Postgres type error, and the handler will return 500 instead of a 400/404.</comment>
<file context>
@@ -0,0 +1,66 @@
+ const { accountId } = authResult;
+
+ try {
+ const prediction = await selectPredictionById(id);
+
+ if (!prediction) {
</file context>
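One lightweight way to reject malformed ids before they reach Postgres is a format pre-check; this regex sketch is illustrative, and a Zod `z.string().uuid()` check at the boundary would serve equally well.

```typescript
// Loose UUID shape check (8-4-4-4-12 hex groups) so ids like "abc"
// never reach the database and can be answered with 400/404 instead of 500.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isUuid(id: string): boolean {
  return UUID_RE.test(id);
}
```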
@@ -0,0 +1,37 @@
import supabase from "../serverClient";

interface PredictionRow {
P2: Duplicate PredictionRow interface — insertPrediction.ts in the same directory already defines this type (as PredictionInsert + { id, created_at }). Extract it to a shared file (e.g., lib/supabase/predictions/types.ts) or re-export it from insertPrediction.ts to avoid drift when the schema changes.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/supabase/predictions/selectPredictionById.ts, line 3:
<comment>Duplicate `PredictionRow` interface — `insertPrediction.ts` in the same directory already defines this type (as `PredictionInsert` + `{ id, created_at }`). Extract it to a shared file (e.g., `lib/supabase/predictions/types.ts`) or re-export it from `insertPrediction.ts` to avoid drift when the schema changes.</comment>
<file context>
@@ -0,0 +1,37 @@
+import supabase from "../serverClient";
+
+interface PredictionRow {
+ id: string;
+ account_id: string;
</file context>
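The suggested shared module (e.g., `lib/supabase/predictions/types.ts`) might look like the sketch below. Only `id`, `account_id`, and `created_at` are confirmed by the review; the remaining fields are assumptions drawn from the PR's request/response shapes.

```typescript
// Single source of truth for prediction row types, so insertPrediction
// and selectPredictionById cannot drift apart when the schema changes.
// Field names beyond id/account_id/created_at are illustrative.
export interface PredictionInsert {
  account_id: string;
  file_url: string;
  modality: string;
  engagement_score: number;
}

// A persisted row is the insert payload plus database-generated columns.
export type PredictionRow = PredictionInsert & {
  id: string;
  created_at: string;
};
```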
Actionable comments posted: 9
🧹 Nitpick comments (4)
lib/supabase/predictions/insertPrediction.ts (1)
3-19: Extract shared prediction row/input types to one module.
`PredictionInsert`/`PredictionRow` are duplicated across prediction Supabase files. Please centralize them (e.g., one shared types module) to avoid drift when the table schema changes. As per coding guidelines: “Extract shared logic into reusable utilities following Don't Repeat Yourself (DRY) principle.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/supabase/predictions/insertPrediction.ts` around lines 3 - 19, PredictionInsert and PredictionRow are duplicated; extract them into a single shared types module (e.g., create a new module exporting interfaces PredictionInsert and PredictionRow), then update this file to import those types instead of declaring them locally; ensure the exported names match exactly (PredictionInsert, PredictionRow) and update other prediction-related Supabase files to import from the new module so all code uses the single source of truth.

lib/predictions/getOnePredictionHandler.ts (1)
28-40: Deduplicate the repeated 404 response branch. Both “not found” and “wrong account” return the same payload. Extracting a small helper keeps this handler tighter and easier to maintain.
As per coding guidelines: “Extract shared logic into reusable utilities following Don't Repeat Yourself (DRY) principle” and “Keep functions small and focused.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/predictions/getOnePredictionHandler.ts` around lines 28 - 40, The two identical 404 branches in getOnePredictionHandler (checking if prediction is falsy and if prediction.account_id !== accountId) should be deduplicated: add a small helper function (e.g., notFoundResponse or respondPredictionNotFound) that returns NextResponse.json({ status: "error", error: "Prediction not found" }, { status: 404, headers: getCorsHeaders() }) and replace both direct returns with calls to that helper so the handler uses the single reusable response for both the missing-prediction and wrong-account checks.

lib/supabase/predictions/selectPredictionById.ts (1)
1-1: Align this Supabase module to the project typing/import pattern. Please switch to the canonical Supabase import and table-backed typing instead of a local row interface + cast. This keeps schema drift from silently compiling and matches the repository standard for `lib/supabase/**`.
As per coding guidelines: “Supabase database functions should import from `@/lib/supabase/serverClient`… and return typed results using `Tables<"table_name">`.”
Also applies to: 3-16, 36-36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/supabase/predictions/selectPredictionById.ts` at line 1, Replace the local supabase import and manual row interface/cast in selectPredictionById with the project canonical Supabase client and table-backed typing: import the client from "@/lib/supabase/serverClient" and type results using Tables<"predictions"> (e.g., use PostgrestResponse<Tables["predictions"]> or return value typed as Tables["predictions"]) instead of a local interface and casts; update the function selectPredictionById and any related variables/returns (lines referenced around 3-16 and 36) to use the Tables<"predictions"> type so the module matches the repo standard and prevents schema drift.

lib/predictions/postCreatePredictionHandler.ts (1)
16-89: Refactor this handler into smaller units to satisfy SRP and size constraints.
`postCreatePredictionHandler` currently combines parsing, auth, validation, prediction orchestration, persistence, and response mapping in one long function. Please split this into focused helpers (e.g., request parsing, prediction persistence payload mapping, success/error response builders) to reduce complexity and improve testability.

♻️ Suggested refactor shape
 export async function postCreatePredictionHandler(
   request: NextRequest,
 ): Promise<NextResponse> {
-  let body: unknown;
-  try {
-    body = await request.json();
-  } catch {
-    return NextResponse.json(
-      { status: "error", error: "Request body must be valid JSON" },
-      { status: 400, headers: getCorsHeaders() },
-    );
-  }
+  const bodyOrError = await parseJsonBody(request);
+  if (bodyOrError instanceof NextResponse) return bodyOrError;
+  const body = bodyOrError;
   ...
 }
+
+async function parseJsonBody(request: NextRequest): Promise<unknown | NextResponse> {
+  try {
+    return await request.json();
+  } catch {
+    return NextResponse.json(
+      { status: "error", error: "Request body must be valid JSON" },
+      { status: 400, headers: getCorsHeaders() },
+    );
+  }
+}
As per coding guidelines, “`lib/**/*.ts`: Apply Single Responsibility Principle (SRP): one exported function per file; each file should do one thing well” and “Flag functions longer than 20 lines… Keep functions small and focused… Keep functions under 50 lines.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/predictions/postCreatePredictionHandler.ts` around lines 16 - 89, Split the large postCreatePredictionHandler into small focused helpers: extract request JSON parsing into parseJsonBody(request) that returns validated body or NextResponse error; move auth + validation usage to a coordinator that calls validateAuthContext and validateCreatePredictionBody; map the processPredictRequest result into a persistence payload using buildPredictionPayload(validated, metrics) and persist via savePrediction(payload) which wraps insertPrediction; and move response creation into buildSuccessResponse(row) and buildErrorResponse(errOrMessage) to return NextResponse objects. Keep postCreatePredictionHandler as a thin orchestrator that calls parseJsonBody, auth/validation coordinator, processPredictRequest, buildPredictionPayload, savePrediction, and then uses the response builders.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/api/predictions/[id]/route.ts`:
- Around line 32-37: Validate the incoming route param `id` in the GET function
before calling getOnePredictionHandler: create a Zod schema (e.g.,
predictionIdSchema = z.object({ id: z.string().uuid() })) and parse await
options.params with it, returning a NextResponse with status 400 if validation
fails; on success extract the validated id and pass that to
getOnePredictionHandler(request, id). Ensure this validation occurs in the GET
function (which currently reads const { id } = await options.params) so
malformed UUIDs never reach getOnePredictionHandler.
In `@lib/predictions/getListPredictionsHandler.ts`:
- Line 1: Run Prettier to fix the formatting issues in the file (e.g., the
import line in getListPredictionsHandler.ts). Execute the formatting command
used by the repo (pnpm prettier --write
lib/predictions/getListPredictionsHandler.ts) or run your configured formatter
in the editor to apply consistent styling to the import and the rest of the
file.
In `@lib/predictions/getOnePredictionHandler.ts`:
- Around line 59-63: The catch block in getOnePredictionHandler currently
returns err.message to the client (err and NextResponse.json with
getCorsHeaders), which can leak internal details; instead, log the full error
internally (e.g., console.error(err) or your app logger) and return a generic
error message in NextResponse.json like { status: "error", error: "Internal
server error" } with the same 500 status and getCorsHeaders. Ensure you still
capture the original err for logs but do not surface err.message in the HTTP
response.
In `@lib/predictions/postCreatePredictionHandler.ts`:
- Around line 42-45: The handler currently returns raw internal error text
(result.error and err.message) to clients in postCreatePredictionHandler;
instead, change both error responses so they return a stable public error string
(e.g., "internal_server_error" or "prediction_creation_failed") and HTTP 500,
and log the full internal error server-side using the existing logger or console
(reference the result.type === "error" branch and the catch handling that
references err.message) so internal details are recorded but not exposed to
clients.
In `@lib/supabase/predictions/selectPredictions.ts`:
- Around line 21-23: The selectPredictions function trusts caller-provided
limit/offset even though docs cap limit at 100; sanitize/clamp the incoming
parameters (e.g., ensure limit is at least 1 and at most 100, and offset is >=
0) before using them to compute the start/end for the Supabase query
(.range(...)); update the computation that builds the range using these clamped
values (refer to selectPredictions and the .range(...) call) so the SQL query
behavior is predictable regardless of caller input.
In `@lib/tribe/callTribePredict.ts`:
- Line 1: The file lib/tribe/callTribePredict.ts has formatting issues; run the
formatter to fix them—execute pnpm prettier --write
lib/tribe/callTribePredict.ts (or run your project's Prettier task) to reformat
the import and the rest of the file so the TRIBE_PREDICT_URL import and the
callTribePredict module conform to project style.
- Around line 16-23: The fetch to TRIBE_PREDICT_URL in callTribePredict.ts
currently has no timeout; wrap the request with an AbortController (create
controller, pass controller.signal into fetch called in the response = await
fetch(...) call) and set a timer (e.g., setTimeout) to call controller.abort()
after a reasonable timeout (e.g., 30s), clear the timer after fetch completes,
and handle the abort/error path (catch and log or return an error) in the
function that invokes the fetch so hang-ups are prevented.
In `@lib/tribe/isTribePredictResult.ts`:
- Around line 24-31: The isTribePredictResult type guard is too shallow: enhance
checks inside the function (referencing isTribePredictResult and the local
variable c) to validate array element shapes and regional_activation structure.
For engagement_timeline, peak_moments, and weak_spots ensure Array.isArray(...)
and that every element matches expected types (e.g., objects with specific
numeric/string keys such as timestamp/score or start/end), using
Array.prototype.every to confirm entries are not null and have the required keys
with correct primitive types; for regional_activation verify it's a plain object
(not null/array) and that each value is a number (or matches the expected
subtype), and keep existing numeric checks for engagement_score,
total_duration_seconds, and elapsed_seconds. Update the guard to return false if
any of these stricter per-item or per-key validations fail so malformed TRIBE
responses are rejected.
In `@lib/tribe/processPredictRequest.ts`:
- Line 1: The file import of callTribePredict has formatting issues; run the
project's Prettier formatter on this file (e.g., pnpm prettier --write
lib/tribe/processPredictRequest.ts) to fix spacing/formatting, then
re-stage/commit the formatted changes so the pipeline passes; ensure import line
for callTribePredict remains unchanged semantically after formatting.
---
Nitpick comments:
In `@lib/predictions/getOnePredictionHandler.ts`:
- Around line 28-40: The two identical 404 branches in getOnePredictionHandler
(checking if prediction is falsy and if prediction.account_id !== accountId)
should be deduplicated: add a small helper function (e.g., notFoundResponse or
respondPredictionNotFound) that returns NextResponse.json({ status: "error",
error: "Prediction not found" }, { status: 404, headers: getCorsHeaders() }) and
replace both direct returns with calls to that helper so the handler uses the
single reusable response for both the missing-prediction and wrong-account
checks.
In `@lib/predictions/postCreatePredictionHandler.ts`:
- Around line 16-89: Split the large postCreatePredictionHandler into small
focused helpers: extract request JSON parsing into parseJsonBody(request) that
returns validated body or NextResponse error; move auth + validation usage to a
coordinator that calls validateAuthContext and validateCreatePredictionBody; map
the processPredictRequest result into a persistence payload using
buildPredictionPayload(validated, metrics) and persist via
savePrediction(payload) which wraps insertPrediction; and move response creation
into buildSuccessResponse(row) and buildErrorResponse(errOrMessage) to return
NextResponse objects. Keep postCreatePredictionHandler as a thin orchestrator
that calls parseJsonBody, auth/validation coordinator, processPredictRequest,
buildPredictionPayload, savePrediction, and then uses the response builders.
In `@lib/supabase/predictions/insertPrediction.ts`:
- Around line 3-19: PredictionInsert and PredictionRow are duplicated; extract
them into a single shared types module (e.g., create a new module exporting
interfaces PredictionInsert and PredictionRow), then update this file to import
those types instead of declaring them locally; ensure the exported names match
exactly (PredictionInsert, PredictionRow) and update other prediction-related
Supabase files to import from the new module so all code uses the single source
of truth.
In `@lib/supabase/predictions/selectPredictionById.ts`:
- Line 1: Replace the local supabase import and manual row interface/cast in
selectPredictionById with the project canonical Supabase client and table-backed
typing: import the client from "@/lib/supabase/serverClient" and type results
using Tables<"predictions"> (e.g., use PostgrestResponse<Tables["predictions"]>
or return value typed as Tables["predictions"]) instead of a local interface and
casts; update the function selectPredictionById and any related
variables/returns (lines referenced around 3-16 and 36) to use the
Tables<"predictions"> type so the module matches the repo standard and prevents
schema drift.
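One of the prompts above suggests wrapping the Modal fetch in `callTribePredict` with an AbortController timeout. A minimal, dependency-injected sketch follows; `fetchLike` is injected so the sketch stays self-contained, and the 30s default mirrors the prompt's example.

```typescript
// Abort an upstream request once a deadline passes, so a hung GPU
// inference call cannot stall the handler indefinitely.
async function fetchWithTimeout<T>(
  fetchLike: (url: string, init: { signal: AbortSignal }) => Promise<T>,
  url: string,
  timeoutMs = 30_000,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetchLike(url, { signal: controller.signal });
  } finally {
    // Always clear the timer so a fast response doesn't leave it pending.
    clearTimeout(timer);
  }
}
```

In the real handler, passing `controller.signal` to the global `fetch` makes it reject with an abort error on timeout, which the caller can translate into a clean 5xx response.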
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7ca7ca66-645d-416e-86fc-55a10acb3dfc
⛔ Files ignored due to path filters (6)
- `lib/predictions/__tests__/getListPredictionsHandler.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/predictions/__tests__/getOnePredictionHandler.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/predictions/__tests__/postCreatePredictionHandler.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/tribe/__tests__/callTribePredict.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/tribe/__tests__/processPredictRequest.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/tribe/__tests__/validateCreatePredictionBody.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
📒 Files selected for processing (17)
- `app/api/predictions/[id]/route.ts`
- `app/api/predictions/route.ts`
- `lib/const.ts`
- `lib/mcp/tools/index.ts`
- `lib/mcp/tools/tribe/index.ts`
- `lib/mcp/tools/tribe/registerGetPredictionsTool.ts`
- `lib/mcp/tools/tribe/registerPredictEngagementTool.ts`
- `lib/predictions/getListPredictionsHandler.ts`
- `lib/predictions/getOnePredictionHandler.ts`
- `lib/predictions/postCreatePredictionHandler.ts`
- `lib/supabase/predictions/insertPrediction.ts`
- `lib/supabase/predictions/selectPredictionById.ts`
- `lib/supabase/predictions/selectPredictions.ts`
- `lib/tribe/callTribePredict.ts`
- `lib/tribe/isTribePredictResult.ts`
- `lib/tribe/processPredictRequest.ts`
- `lib/tribe/validateCreatePredictionBody.ts`
export async function GET(
  request: NextRequest,
  options: { params: Promise<{ id: string }> },
): Promise<NextResponse> {
  const { id } = await options.params;
  return getOnePredictionHandler(request, id);
Validate id as UUID before delegating to the handler.
The route currently forwards raw id. Add Zod validation at the boundary and return 400 on invalid params.
Example boundary validation
import type { NextRequest } from "next/server";
import { NextResponse } from "next/server";
+import { z } from "zod";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { getOnePredictionHandler } from "@/lib/predictions/getOnePredictionHandler";
+
+const getPredictionParamsSchema = z.object({ id: z.string().uuid() });
export async function GET(
request: NextRequest,
options: { params: Promise<{ id: string }> },
): Promise<NextResponse> {
- const { id } = await options.params;
- return getOnePredictionHandler(request, id);
+ const parsed = getPredictionParamsSchema.safeParse(await options.params);
+ if (!parsed.success) {
+ return NextResponse.json(
+ { status: "error", error: "Invalid prediction id" },
+ { status: 400, headers: getCorsHeaders() },
+ );
+ }
+ return getOnePredictionHandler(request, parsed.data.id);
 }

As per coding guidelines: “All API endpoints should use a validate function for input parsing using Zod for schema validation.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/predictions/[id]/route.ts` around lines 32 - 37, Validate the
incoming route param `id` in the GET function before calling
getOnePredictionHandler: create a Zod schema (e.g., predictionIdSchema =
z.object({ id: z.string().uuid() })) and parse await options.params with it,
returning a NextResponse with status 400 if validation fails; on success extract
the validated id and pass that to getOnePredictionHandler(request, id). Ensure
this validation occurs in the GET function (which currently reads const { id } =
await options.params) so malformed UUIDs never reach getOnePredictionHandler.
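For intuition, here is a dependency-free sketch of what that boundary check accepts, using the RFC 4122 UUID shape directly. This is illustrative only; the repo guideline calls for Zod, and `isUuid` / `UUID_RE` are hypothetical names, not code from this PR (Zod's own `.uuid()` rules may differ slightly at the edges).

```typescript
// Illustrative regex equivalent of z.string().uuid() for the route-param
// check above: 8-4-4-4-12 hex groups, version digit 1-5, variant [89ab].
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

function isUuid(id: string): boolean {
  return UUID_RE.test(id);
}

console.log(isUuid("123e4567-e89b-12d3-a456-426614174000")); // true
console.log(isUuid("not-a-uuid")); // false
```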
```diff
@@ -0,0 +1,40 @@
+import type { NextRequest } from "next/server";
```
Run Prettier to fix formatting.
The pipeline reports formatting issues. Run pnpm prettier --write lib/predictions/getListPredictionsHandler.ts to resolve.
🧰 Tools
🪛 GitHub Actions: Format Check
[warning] 1-1: Prettier (format:check) reported code style issues in this file. Run Prettier with --write to fix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/predictions/getListPredictionsHandler.ts` at line 1, Run Prettier to fix
the formatting issues in the file (e.g., the import line in
getListPredictionsHandler.ts). Execute the formatting command used by the repo
(pnpm prettier --write lib/predictions/getListPredictionsHandler.ts) or run your
configured formatter in the editor to apply consistent styling to the import and
the rest of the file.
```ts
  } catch (err) {
    const message = err instanceof Error ? err.message : "Failed to fetch prediction";
    return NextResponse.json(
      { status: "error", error: message },
      { status: 500, headers: getCorsHeaders() },
```
Do not expose raw backend error messages in API 500 responses.
err.message is returned directly to clients. This can leak internal details from Supabase/network failures. Return a generic message externally and log the detailed cause internally.
Safer error response pattern

```diff
   } catch (err) {
-    const message = err instanceof Error ? err.message : "Failed to fetch prediction";
+    console.error("getOnePredictionHandler failed", err);
     return NextResponse.json(
-      { status: "error", error: message },
+      { status: "error", error: "Failed to fetch prediction" },
       { status: 500, headers: getCorsHeaders() },
     );
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
  } catch (err) {
    console.error("getOnePredictionHandler failed", err);
    return NextResponse.json(
      { status: "error", error: "Failed to fetch prediction" },
      { status: 500, headers: getCorsHeaders() },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/predictions/getOnePredictionHandler.ts` around lines 59 - 63, The catch
block in getOnePredictionHandler currently returns err.message to the client
(err and NextResponse.json with getCorsHeaders), which can leak internal
details; instead, log the full error internally (e.g., console.error(err) or
your app logger) and return a generic error message in NextResponse.json like {
status: "error", error: "Internal server error" } with the same 500 status and
getCorsHeaders. Ensure you still capture the original err for logs but do not
surface err.message in the HTTP response.
```ts
    if (result.type === "error") {
      return NextResponse.json(
        { status: "error", error: result.error },
        { status: 500, headers: getCorsHeaders() },
```
Avoid returning raw upstream/database error messages to clients.
Line 44 and Line 85 expose internal error text directly (result.error, err.message). This can leak backend/provider details. Return a stable public error string and log the internal message server-side.
🔒 Suggested hardening

```diff
   if (result.type === "error") {
+    // log internal details here
     return NextResponse.json(
-      { status: "error", error: result.error },
+      { status: "error", error: "Prediction request failed" },
       { status: 500, headers: getCorsHeaders() },
     );
   }
   ...
   } catch (err) {
-    const message = err instanceof Error ? err.message : "Failed to save prediction";
+    // log err internally here
     return NextResponse.json(
-      { status: "error", error: message },
+      { status: "error", error: "Failed to save prediction" },
       { status: 500, headers: getCorsHeaders() },
     );
   }
```

Also applies to: lines 83-86
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/predictions/postCreatePredictionHandler.ts` around lines 42 - 45, The
handler currently returns raw internal error text (result.error and err.message)
to clients in postCreatePredictionHandler; instead, change both error responses
so they return a stable public error string (e.g., "internal_server_error" or
"prediction_creation_failed") and HTTP 500, and log the full internal error
server-side using the existing logger or console (reference the result.type ===
"error" branch and the catch handling that references err.message) so internal
details are recorded but not exposed to clients.
```ts
  limit = 20,
  offset = 0,
): Promise<PredictionSummary[]> {
```
Enforce limit/offset bounds in code (not just docs).
The JSDoc says max limit is 100, but the function currently trusts caller input. Clamp/sanitize before building .range(...) to keep query behavior predictable.
Suggested guard

```diff
 export async function selectPredictions(
   accountId: string,
   limit = 20,
   offset = 0,
 ): Promise<PredictionSummary[]> {
+  const safeLimit = Math.min(Math.max(limit, 1), 100);
+  const safeOffset = Math.max(offset, 0);
+
   const { data, error } = await supabase
     .from("predictions")
     .select("id, file_url, modality, engagement_score, created_at")
     .eq("account_id", accountId)
     .order("created_at", { ascending: false })
-    .range(offset, offset + limit - 1);
+    .range(safeOffset, safeOffset + safeLimit - 1);
```

Also applies to: line 29
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/supabase/predictions/selectPredictions.ts` around lines 21 - 23, The
selectPredictions function trusts caller-provided limit/offset even though docs
cap limit at 100; sanitize/clamp the incoming parameters (e.g., ensure limit is
at least 1 and at most 100, and offset is >= 0) before using them to compute the
start/end for the Supabase query (.range(...)); update the computation that
builds the range using these clamped values (refer to selectPredictions and the
.range(...) call) so the SQL query behavior is predictable regardless of caller
input.
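The clamping can also be pulled into a small helper that computes the inclusive start/end indices Supabase's `.range(...)` expects. `clampPagination` is a hypothetical name for illustration, not something in the PR:

```typescript
// Hypothetical helper mirroring the clamp suggested above: limit is forced
// into [1, 100] and offset into [0, ∞) before building the range.
function clampPagination(limit: number, offset: number) {
  const safeLimit = Math.min(Math.max(Math.trunc(limit), 1), 100);
  const safeOffset = Math.max(Math.trunc(offset), 0);
  // Supabase .range() takes inclusive start/end row indices.
  return { start: safeOffset, end: safeOffset + safeLimit - 1 };
}

console.log(clampPagination(500, -3)); // { start: 0, end: 99 }
console.log(clampPagination(20, 40)); // { start: 40, end: 59 }
```

Centralizing this keeps the query shape predictable even if a caller bypasses the HTTP-layer validation.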
```diff
@@ -0,0 +1,36 @@
+import { TRIBE_PREDICT_URL } from "@/lib/const";
```
Run Prettier to fix formatting.
The pipeline reports formatting issues. Run pnpm prettier --write lib/tribe/callTribePredict.ts to resolve.
🧰 Tools
🪛 GitHub Actions: Format Check
[warning] 1-1: Prettier (format:check) reported code style issues in this file. Run Prettier with --write to fix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/tribe/callTribePredict.ts` at line 1, The file
lib/tribe/callTribePredict.ts has formatting issues; run the formatter to fix
them—execute pnpm prettier --write lib/tribe/callTribePredict.ts (or run your
project's Prettier task) to reformat the import and the rest of the file so the
TRIBE_PREDICT_URL import and the callTribePredict module conform to project
style.
```ts
  const response = await fetch(TRIBE_PREDICT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      file_url: params.file_url,
      modality: params.modality,
    }),
  });
```
Consider adding a timeout for the external HTTP call.
The fetch to the Modal endpoint has no timeout configured. GPU inference can be slow or the endpoint could become unresponsive, potentially causing requests to hang indefinitely. This is a reliability concern for external-call hazards.
🛡️ Proposed fix using AbortController

```diff
 export async function callTribePredict(
   params: CreatePredictionBody,
 ): Promise<TribePredictResult> {
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 60000); // 60s timeout
+
-  const response = await fetch(TRIBE_PREDICT_URL, {
-    method: "POST",
-    headers: { "Content-Type": "application/json" },
-    body: JSON.stringify({
-      file_url: params.file_url,
-      modality: params.modality,
-    }),
-  });
+  try {
+    const response = await fetch(TRIBE_PREDICT_URL, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({
+        file_url: params.file_url,
+        modality: params.modality,
+      }),
+      signal: controller.signal,
+    });
+    clearTimeout(timeoutId);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/tribe/callTribePredict.ts` around lines 16 - 23, The fetch to
TRIBE_PREDICT_URL in callTribePredict.ts currently has no timeout; wrap the
request with an AbortController (create controller, pass controller.signal into
fetch called in the response = await fetch(...) call) and set a timer (e.g.,
setTimeout) to call controller.abort() after a reasonable timeout (e.g., 30s),
clear the timer after fetch completes, and handle the abort/error path (catch
and log or return an error) in the function that invokes the fetch so hang-ups
are prevented.
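The abort-on-timeout pattern can be demonstrated without the network. `withTimeout` and `slowCall` below are illustrative stand-ins, not code from this PR; in the real fix, `fetch` would receive `controller.signal` the same way:

```typescript
// Race any signal-aware async operation against a timeout. This mirrors
// how the suggested fix passes controller.signal into fetch.
function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  return work(controller.signal).finally(() => clearTimeout(timer));
}

// Stand-in for a slow upstream call that honors the abort signal,
// the way fetch rejects when its signal fires.
function slowCall(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve("done"), 5000);
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(new Error("aborted"));
    });
  });
}

withTimeout(slowCall, 50).catch((err: Error) => {
  console.log(err.message); // "aborted": the 50 ms timeout fires first
});
```

With `fetch`, an aborted request rejects with an `AbortError`, so the caller should map that case to a clear timeout error rather than a generic failure.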
```ts
    typeof c.engagement_score === "number" &&
    Array.isArray(c.engagement_timeline) &&
    Array.isArray(c.peak_moments) &&
    Array.isArray(c.weak_spots) &&
    typeof c.regional_activation === "object" &&
    c.regional_activation !== null &&
    typeof c.total_duration_seconds === "number" &&
    typeof c.elapsed_seconds === "number"
```
Strengthen payload validation in the type guard.
Current checks are shallow: timeline arrays and regional_activation value types aren’t validated. Malformed TRIBE responses can be accepted as TribePredictResult.
Proposed hardening

```diff
 export function isTribePredictResult(value: unknown): value is TribePredictResult {
   if (!value || typeof value !== "object") return false;
   const c = value as Record<string, unknown>;
+  const isPoint = (p: unknown): p is { time_seconds: number; score: number } =>
+    !!p &&
+    typeof p === "object" &&
+    typeof (p as Record<string, unknown>).time_seconds === "number" &&
+    typeof (p as Record<string, unknown>).score === "number";
+  const isNumberRecord = (r: unknown): r is Record<string, number> =>
+    !!r &&
+    typeof r === "object" &&
+    Object.values(r as Record<string, unknown>).every((v) => typeof v === "number");
+
   return (
     typeof c.engagement_score === "number" &&
-    Array.isArray(c.engagement_timeline) &&
-    Array.isArray(c.peak_moments) &&
-    Array.isArray(c.weak_spots) &&
-    typeof c.regional_activation === "object" &&
-    c.regional_activation !== null &&
+    Array.isArray(c.engagement_timeline) &&
+    c.engagement_timeline.every(isPoint) &&
+    Array.isArray(c.peak_moments) &&
+    c.peak_moments.every(isPoint) &&
+    Array.isArray(c.weak_spots) &&
+    c.weak_spots.every(isPoint) &&
+    isNumberRecord(c.regional_activation) &&
     typeof c.total_duration_seconds === "number" &&
     typeof c.elapsed_seconds === "number"
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/tribe/isTribePredictResult.ts` around lines 24 - 31, The
isTribePredictResult type guard is too shallow: enhance checks inside the
function (referencing isTribePredictResult and the local variable c) to validate
array element shapes and regional_activation structure. For engagement_timeline,
peak_moments, and weak_spots ensure Array.isArray(...) and that every element
matches expected types (e.g., objects with specific numeric/string keys such as
timestamp/score or start/end), using Array.prototype.every to confirm entries
are not null and have the required keys with correct primitive types; for
regional_activation verify it's a plain object (not null/array) and that each
value is a number (or matches the expected subtype), and keep existing numeric
checks for engagement_score, total_duration_seconds, and elapsed_seconds. Update
the guard to return false if any of these stricter per-item or per-key
validations fail so malformed TRIBE responses are rejected.
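The element-level check from the suggestion can be exercised in isolation. The `{ time_seconds, score }` shape is taken from the review diff, not verified against the actual TRIBE response types:

```typescript
type TimelinePoint = { time_seconds: number; score: number };

// Element-level guard matching the isPoint helper proposed above.
function isPoint(p: unknown): p is TimelinePoint {
  if (!p || typeof p !== "object") return false;
  const c = p as Record<string, unknown>;
  return typeof c.time_seconds === "number" && typeof c.score === "number";
}

// A bare Array.isArray check passes both arrays; the per-element
// guard rejects the second, where time_seconds is a string.
console.log([{ time_seconds: 1.5, score: 0.9 }].every(isPoint)); // true
console.log([{ time_seconds: "1.5", score: 0.9 }].every(isPoint)); // false
```

Note that `Array.prototype.every` returns `true` on an empty array, so an empty timeline still passes; reject empty arrays separately if they are invalid for TRIBE.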
```diff
@@ -0,0 +1,33 @@
+import { callTribePredict } from "./callTribePredict";
```
Run Prettier to fix formatting.
The pipeline reports formatting issues. Run pnpm prettier --write lib/tribe/processPredictRequest.ts to resolve.
🧰 Tools
🪛 GitHub Actions: Format Check
[warning] 1-1: Prettier (format:check) reported code style issues in this file. Run Prettier with --write to fix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/tribe/processPredictRequest.ts` at line 1, The file import of
callTribePredict has formatting issues; run the project's Prettier formatter on
this file (e.g., pnpm prettier --write lib/tribe/processPredictRequest.ts) to
fix spacing/formatting, then re-stage/commit the formatted changes so the
pipeline passes; ensure import line for callTribePredict remains unchanged
semantically after formatting.
Summary

- `/api/predictions` with POST (create), GET (list), GET /:id (detail)
- `callTribePredict` for GPU inference
- `processPredictRequest` logic used by both REST handlers and MCP tools (DRY)
- MCP tools `predict_engagement` and `get_predictions`

Files added

- `lib/const.ts` — TRIBE_PREDICT_URL constant
- `lib/supabase/predictions/` — 3 Supabase query files
- `lib/tribe/` — domain logic (call, validate, type guard, process)
- `lib/predictions/` — 3 HTTP handlers
- `app/api/predictions/` — 2 route files
- `lib/mcp/tools/tribe/` — 2 MCP tools + index

Test plan

- `pnpm vitest run lib/tribe/__tests__ lib/predictions/__tests__`
pnpm vitest run lib/tribe/__tests__ lib/predictions/__tests__)Made with Cursor
Summary by cubic

Adds a new `/api/predictions` API to run TRIBE v2 neural engagement predictions via Modal, persist results in Supabase, and expose matching MCP tools for programmatic use.

- POST `/api/predictions` (create), GET `/api/predictions` (list with `limit`/`offset`), GET `/api/predictions/:id` (detail).
- Input validation (`file_url`, `modality` = video|audio|text). Returns engagement score, timeline, peak/weak moments, regional activation, and duration metrics.
- Modal endpoint configured via `TRIBE_PREDICT_URL`, with shared `processPredictRequest` logic.
- Auth via `x-api-key` or Bearer token.
- MCP tools: `predict_engagement` (runs and saves a prediction) and `get_predictions` (lists summaries), wired via `registerAllTribeTools`.

Written for commit 00d8759. Summary will update on new commits.
Summary by CodeRabbit
Release Notes