
feat: add neural engagement prediction endpoint (TRIBE v2)#425

Open
sidneyswift wants to merge 1 commit into test from feature/predictions-endpoint

Conversation


@sidneyswift sidneyswift commented Apr 10, 2026

Summary

  • REST resource at /api/predictions with POST (create), GET (list), GET /:id (detail)
  • Modal integration via callTribePredict for GPU inference
  • Supabase persistence (insertPrediction, selectPredictions, selectPredictionById)
  • Zod validation for request body (file_url + modality)
  • Shared processPredictRequest logic used by both REST handlers and MCP tools (DRY)
  • 2 MCP tools: predict_engagement and get_predictions
  • 26 unit tests, all passing

Files added

  • lib/const.ts — TRIBE_PREDICT_URL constant
  • lib/supabase/predictions/ — 3 Supabase query files
  • lib/tribe/ — domain logic (call, validate, type guard, process)
  • lib/predictions/ — 3 HTTP handlers
  • app/api/predictions/ — 2 route files
  • lib/mcp/tools/tribe/ — 2 MCP tools + index

Test plan

  • All 26 new tests pass (pnpm vitest run lib/tribe/__tests__ lib/predictions/__tests__)
  • Full test suite passes
  • Modal endpoint accessible and returning valid predictions

Made with Cursor


Summary by cubic

Adds a new /api/predictions API to run TRIBE v2 neural engagement predictions via Modal, persist results in Supabase, and expose matching MCP tools for programmatic use.

  • New Features
    • REST: POST /api/predictions (create), GET /api/predictions (list with limit/offset), GET /api/predictions/:id (detail).
    • Validates body with Zod (file_url, modality = video|audio|text). Returns engagement score, timeline, peak/weak moments, regional activation, and duration metrics.
    • Runs GPU inference through Modal using TRIBE_PREDICT_URL, with shared processPredictRequest logic.
    • Persists results to Supabase and scopes list/detail by account. CORS enabled. Auth via x-api-key or Bearer token.
    • MCP tools: predict_engagement (runs and saves a prediction) and get_predictions (lists summaries), wired via registerAllTribeTools.
    • 26 unit tests cover handlers, validation, and TRIBE call flow.
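The body contract above (file_url plus modality) can be illustrated with a dependency-free validator. The PR itself uses Zod, so this sketch only mirrors the documented rules; the field names come from the summary above, and the helper names are hypothetical:

```typescript
// Illustrative sketch only: mirrors the documented contract (file_url must
// be an http(s) URL, modality one of video|audio|text) without the zod
// dependency the PR actually uses.
type Modality = "video" | "audio" | "text";

interface CreatePredictionBody {
  file_url: string;
  modality: Modality;
}

function isHttpUrl(value: unknown): value is string {
  if (typeof value !== "string") return false;
  try {
    const u = new URL(value);
    return u.protocol === "http:" || u.protocol === "https:";
  } catch {
    return false;
  }
}

function validateCreatePredictionBody(body: unknown): CreatePredictionBody {
  if (typeof body !== "object" || body === null) {
    throw new Error("Request body must be a JSON object");
  }
  const b = body as Record<string, unknown>;
  if (!isHttpUrl(b.file_url)) {
    throw new Error("file_url must be a valid http(s) URL");
  }
  const m = b.modality;
  if (m === "video" || m === "audio" || m === "text") {
    return { file_url: b.file_url, modality: m };
  }
  throw new Error("modality must be one of: video, audio, text");
}
```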

Written for commit 00d8759. Summary will update on new commits.

Summary by CodeRabbit

Release Notes

  • New Features
    • Added comprehensive prediction API endpoints to create and retrieve engagement predictions.
    • Integrated TRIBE v2 neural service for analyzing media engagement performance.
    • Implemented prediction history management with pagination and filtering capabilities.
    • Added secure, authenticated API access with support for multiple accounts.
    • Stored prediction results with automatic timestamp tracking and account organization.

New REST resource at /api/predictions with POST (create), GET (list),
and GET /:id (detail). Includes Modal integration, Supabase persistence,
Zod validation, MCP tools (predict_engagement, get_predictions), and
26 unit tests.

Made-with: Cursor

vercel Bot commented Apr 10, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: recoup-api, Deployment: Ready (Preview), Updated (UTC): Apr 10, 2026 3:45pm



coderabbitai Bot commented Apr 10, 2026

📝 Walkthrough

This PR introduces a complete prediction API for TRIBE v2 neural engagement analysis. It adds REST endpoints for creating and retrieving predictions, MCP tools for engagement prediction and retrieval, Supabase database operations for persistence, and an external TRIBE API client with validation and error handling.

Changes

  • API Routes (app/api/predictions/route.ts, app/api/predictions/[id]/route.ts): Added Next.js route handlers for listing/creating predictions (GET/POST /api/predictions) and fetching individual predictions (GET /api/predictions/{id}). All routes support CORS via getCorsHeaders().
  • API Handlers (lib/predictions/getListPredictionsHandler.ts, lib/predictions/getOnePredictionHandler.ts, lib/predictions/postCreatePredictionHandler.ts): Implemented request handlers managing authentication, body validation, query parameter parsing, database queries, and error responses. Each handler applies CORS headers and delegates to prediction-specific utilities.
  • MCP Tool Registration (lib/mcp/tools/index.ts, lib/mcp/tools/tribe/index.ts, lib/mcp/tools/tribe/registerGetPredictionsTool.ts, lib/mcp/tools/tribe/registerPredictEngagementTool.ts): Created MCP server tool registration for prediction operations: predict_engagement accepts file uploads and modality; get_predictions supports pagination; both validate inputs, resolve authenticated accounts, and handle errors.
  • Database Layer (lib/supabase/predictions/insertPrediction.ts, lib/supabase/predictions/selectPredictionById.ts, lib/supabase/predictions/selectPredictions.ts): Added Supabase operations for prediction persistence and retrieval: insert predictions with metrics, fetch by UUID with authorization checks, and list with pagination and filtering by account ID.
  • TRIBE v2 Integration (lib/tribe/callTribePredict.ts, lib/tribe/isTribePredictResult.ts, lib/tribe/processPredictRequest.ts, lib/tribe/validateCreatePredictionBody.ts): Implemented an external API client calling the TRIBE v2 prediction endpoint, runtime type validation of the response payload, discriminated-union result handling (success/error), and Zod schema validation for request bodies.
  • Configuration (lib/const.ts): Added TRIBE_PREDICT_URL constant pointing to the TRIBE v2 Modal endpoint.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant POST_Handler as POST /predictions
    participant Auth as validateAuthContext
    participant Validate as validateCreatePredictionBody
    participant Tribe as callTribePredict
    participant DB as insertPrediction
    
    Client->>POST_Handler: POST body: {file_url, modality}
    POST_Handler->>Auth: Validate request
    Auth-->>POST_Handler: accountId
    POST_Handler->>Validate: Parse & validate body
    Validate-->>POST_Handler: CreatePredictionBody
    POST_Handler->>Tribe: Call TRIBE API
    Tribe-->>POST_Handler: TribePredictResult
    POST_Handler->>DB: Insert prediction + metrics
    DB-->>POST_Handler: PredictionRow
    POST_Handler-->>Client: 200 {id, metrics, created_at}
sequenceDiagram
    participant Client
    participant GET_Handler as GET /predictions or /predictions/{id}
    participant Auth as validateAuthContext
    participant DB as selectPredictions or selectPredictionById
    participant Client2 as Client
    
    Client->>GET_Handler: GET request (with query params or id)
    GET_Handler->>Auth: Validate request
    Auth-->>GET_Handler: accountId
    GET_Handler->>DB: Query predictions
    DB-->>GET_Handler: PredictionRow(s) or null
    alt Not Found
        GET_Handler-->>Client2: 404 {error: "Prediction not found"}
    else Unauthorized
        GET_Handler-->>Client2: 404 {error: "Prediction not found"}
    else Success
        GET_Handler-->>Client2: 200 {status: "success", predictions}
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • sweetmantech

Poem

🔮 Predictions take flight through the TRIBE v2 gateway,
Engagement metrics dance in columnar arrays,
From Modal endpoints to Supabase rows,
Your neural muse now knows what's next—
One endpoint to predict them all! ✨

🚥 Pre-merge checks | ❌ 1

❌ Failed checks (1 warning)

Check: Solid & Clean Code
Status: ⚠️ Warning
Explanation: PR violates DRY principle with 6 identical error handlers and 3 duplicate prediction response objects; exposes raw errors to clients despite review feedback; ignores validation, timeout, and bounds-checking concerns.
Resolution: Extract shared error and response formatters; implement UUID validation; add AbortController timeout to fetch calls; move bounds clamping to the data layer for architectural consistency.




@cubic-dev-ai cubic-dev-ai Bot left a comment


6 issues found across 23 files

Confidence score: 3/5

  • There is moderate merge risk because several medium-severity, high-confidence issues are user-impacting: unbounded query limits in lib/supabase/predictions/selectPredictions.ts can trigger large database reads, and malformed IDs in lib/predictions/getOnePredictionHandler.ts can surface as 500s instead of clean client errors.
  • The most severe runtime reliability concern is in lib/tribe/callTribePredict.ts: parsing external service responses without guarding response.json() can turn non-JSON upstream failures into opaque SyntaxErrors, making production failures harder to diagnose.
  • Input/response validation is currently weaker than the type contracts imply in lib/tribe/isTribePredictResult.ts; shallow array checks on external payloads increase regression risk if element shapes are malformed.
  • Pay close attention to lib/tribe/callTribePredict.ts, lib/tribe/isTribePredictResult.ts, lib/supabase/predictions/selectPredictions.ts, lib/predictions/getOnePredictionHandler.ts - external payload handling, validation correctness, and query bounds are the key risk areas.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="lib/predictions/__tests__/postCreatePredictionHandler.test.ts">

<violation number="1" location="lib/predictions/__tests__/postCreatePredictionHandler.test.ts:1">
P2: Custom agent: **Enforce Clear Code Style and Maintainability Practices**

Split this test file into smaller files to stay under the 100-line limit and keep single responsibility.</violation>
</file>

<file name="lib/tribe/callTribePredict.ts">

<violation number="1" location="lib/tribe/callTribePredict.ts:31">
P2: Wrap `response.json()` in a try-catch so that a non-JSON response from Modal produces a clear error instead of an opaque `SyntaxError`. This is especially important for an external GPU inference service where HTML error pages or empty responses are realistic failure modes.</violation>
</file>
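A minimal sketch of the guard this violation asks for (parseJsonOrThrow is a hypothetical helper name; the PR's actual handler may differ): reading the body as text first means a non-JSON upstream reply produces a descriptive error instead of an opaque SyntaxError.

```typescript
// Sketch of the suggested fix, not the PR's actual code: parse the body as
// text and then JSON.parse it explicitly, so an HTML error page or an empty
// body from the external service yields a clear, diagnosable error.
function parseJsonOrThrow(rawBody: string, status: number): unknown {
  try {
    return JSON.parse(rawBody);
  } catch {
    const preview = rawBody.slice(0, 120);
    throw new Error(
      `TRIBE v2 returned non-JSON (status ${status}): ${preview || "<empty body>"}`,
    );
  }
}

// Hypothetical usage inside the client:
//   const data = parseJsonOrThrow(await response.text(), response.status);
```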

<file name="lib/tribe/isTribePredictResult.ts">

<violation number="1" location="lib/tribe/isTribePredictResult.ts:25">
P2: Type guard claims `value is TribePredictResult` but only shallow-checks the three array fields with `Array.isArray()`, never validating element shapes `{ time_seconds: number; score: number }`. Since this guards an external API response, a malformed array element would silently pass and propagate incorrect types downstream. Consider validating at least the first element of each array (or using Zod for the full response).</violation>
</file>
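A deeper check along the suggested lines might look like this; the element shape { time_seconds, score } is taken from the violation text, and the full response type in the PR may carry more fields:

```typescript
interface TimelinePoint {
  time_seconds: number;
  score: number;
}

function isTimelinePoint(v: unknown): v is TimelinePoint {
  return (
    typeof v === "object" &&
    v !== null &&
    typeof (v as Record<string, unknown>).time_seconds === "number" &&
    typeof (v as Record<string, unknown>).score === "number"
  );
}

// Validate every element, not just Array.isArray: a malformed entry from the
// external service is rejected instead of propagating bad types downstream.
function isTimelineArray(v: unknown): v is TimelinePoint[] {
  return Array.isArray(v) && v.every(isTimelinePoint);
}
```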

<file name="lib/supabase/predictions/selectPredictions.ts">

<violation number="1" location="lib/supabase/predictions/selectPredictions.ts:21">
P2: The documented max limit of 100 is not enforced. Callers can pass arbitrarily large values, resulting in unbounded reads from the database. Clamp the value to match the documented contract.</violation>
</file>
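A sketch of the suggested clamp, assuming the default of 20 and documented max of 100 mentioned in the review (clampPagination is a hypothetical helper, not code from the PR):

```typescript
// Enforce the documented contract at the data layer: limit is clamped to
// [1, 100] and offset to >= 0, so callers cannot trigger unbounded reads.
const DEFAULT_LIMIT = 20;
const MAX_LIMIT = 100;

function clampPagination(limit = DEFAULT_LIMIT, offset = 0) {
  const safeLimit = Math.min(Math.max(Math.trunc(limit) || DEFAULT_LIMIT, 1), MAX_LIMIT);
  const safeOffset = Math.max(Math.trunc(offset) || 0, 0);
  // from/to match the inclusive bounds Supabase's .range(from, to) expects.
  return {
    limit: safeLimit,
    offset: safeOffset,
    from: safeOffset,
    to: safeOffset + safeLimit - 1,
  };
}
```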

<file name="lib/predictions/getOnePredictionHandler.ts">

<violation number="1" location="lib/predictions/getOnePredictionHandler.ts:26">
P2: Validate `id` as a UUID before querying the database. A malformed `id` like `"abc"` will cause a Postgres type error, and the handler will return 500 instead of a 400/404.</violation>
</file>
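A minimal sketch of the suggested pre-query check (isUuid is a hypothetical helper name):

```typescript
// Reject malformed ids before they reach Postgres, so the handler can return
// 400/404 instead of a 500 from a database type error. The pattern covers the
// canonical 8-4-4-4-12 hex layout; tighten it if the ids are a specific
// UUID version.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isUuid(value: string): boolean {
  return UUID_RE.test(value);
}
```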

<file name="lib/supabase/predictions/selectPredictionById.ts">

<violation number="1" location="lib/supabase/predictions/selectPredictionById.ts:3">
P2: Duplicate `PredictionRow` interface — `insertPrediction.ts` in the same directory already defines this type (as `PredictionInsert` + `{ id, created_at }`). Extract it to a shared file (e.g., `lib/supabase/predictions/types.ts`) or re-export it from `insertPrediction.ts` to avoid drift when the schema changes.</violation>
</file>
Architecture diagram
sequenceDiagram
    participant Client as User / MCP Client
    participant Router as API Route / MCP Tool
    participant Auth as Auth Service
    participant Logic as Tribe Logic (Shared)
    participant Modal as Modal (GPU Inference)
    participant DB as Supabase

    Note over Client,DB: Engagement Prediction Flow (TRIBE v2)

    Client->>Router: NEW: Predict Request (file_url, modality)
    Router->>Auth: validateAuth() / resolveAccountId()
    Auth-->>Router: accountId

    Router->>Router: Validate Body (Zod)
    
    Router->>Logic: NEW: processPredictRequest()
    
    Logic->>Modal: NEW: callTribePredict()
    Note right of Modal: TRIBE_PREDICT_URL
    
    alt Modal Success
        Modal-->>Logic: Engagement metrics (score, timeline, regions)
        Logic-->>Router: Prediction results
        
        Router->>DB: NEW: insertPrediction()
        DB-->>Router: Saved record with UUID
        
        Router-->>Client: 200 OK (Prediction Data)
    else Modal Error / Timeout
        Modal-->>Logic: Error response
        Logic-->>Router: Error object
        Router-->>Client: 500 Error (Engagement prediction failed)
    end

    Note over Client,DB: Retrieval Flow

    Client->>Router: GET /api/predictions (List or Detail)
    Router->>Auth: validateAuth()
    Auth-->>Router: accountId
    
    Router->>DB: NEW: selectPredictions() / selectPredictionById()
    DB-->>Router: Prediction record(s)
    
    alt Found & Owned
        Router-->>Client: 200 OK (Data)
    else Not Found / Forbidden
        Router-->>Client: 404 Not Found
    end


@@ -0,0 +1,142 @@
import { describe, it, expect, vi, beforeEach } from "vitest";

@cubic-dev-ai cubic-dev-ai Bot Apr 10, 2026


P2: Custom agent: Enforce Clear Code Style and Maintainability Practices

Split this test file into smaller files to stay under the 100-line limit and keep single responsibility.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/predictions/__tests__/postCreatePredictionHandler.test.ts, line 1:

<comment>Split this test file into smaller files to stay under the 100-line limit and keep single responsibility.</comment>

<file context>
@@ -0,0 +1,142 @@
+import { describe, it, expect, vi, beforeEach } from "vitest";
+import { NextRequest, NextResponse } from "next/server";
+import { postCreatePredictionHandler } from "../postCreatePredictionHandler";
</file context>

throw new Error(`Engagement prediction failed (status ${response.status})`);
}

const data = await response.json();

@cubic-dev-ai cubic-dev-ai Bot Apr 10, 2026


P2: Wrap response.json() in a try-catch so that a non-JSON response from Modal produces a clear error instead of an opaque SyntaxError. This is especially important for an external GPU inference service where HTML error pages or empty responses are realistic failure modes.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/tribe/callTribePredict.ts, line 31:

<comment>Wrap `response.json()` in a try-catch so that a non-JSON response from Modal produces a clear error instead of an opaque `SyntaxError`. This is especially important for an external GPU inference service where HTML error pages or empty responses are realistic failure modes.</comment>

<file context>
@@ -0,0 +1,36 @@
+    throw new Error(`Engagement prediction failed (status ${response.status})`);
+  }
+
+  const data = await response.json();
+  if (!isTribePredictResult(data)) {
+    throw new Error("TRIBE v2 returned an unexpected response shape");
</file context>

const c = value as Record<string, unknown>;
return (
typeof c.engagement_score === "number" &&
Array.isArray(c.engagement_timeline) &&

@cubic-dev-ai cubic-dev-ai Bot Apr 10, 2026


P2: Type guard claims value is TribePredictResult but only shallow-checks the three array fields with Array.isArray(), never validating element shapes { time_seconds: number; score: number }. Since this guards an external API response, a malformed array element would silently pass and propagate incorrect types downstream. Consider validating at least the first element of each array (or using Zod for the full response).

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/tribe/isTribePredictResult.ts, line 25:

<comment>Type guard claims `value is TribePredictResult` but only shallow-checks the three array fields with `Array.isArray()`, never validating element shapes `{ time_seconds: number; score: number }`. Since this guards an external API response, a malformed array element would silently pass and propagate incorrect types downstream. Consider validating at least the first element of each array (or using Zod for the full response).</comment>

<file context>
@@ -0,0 +1,33 @@
+  const c = value as Record<string, unknown>;
+  return (
+    typeof c.engagement_score === "number" &&
+    Array.isArray(c.engagement_timeline) &&
+    Array.isArray(c.peak_moments) &&
+    Array.isArray(c.weak_spots) &&
</file context>

*/
export async function selectPredictions(
accountId: string,
limit = 20,

@cubic-dev-ai cubic-dev-ai Bot Apr 10, 2026


P2: The documented max limit of 100 is not enforced. Callers can pass arbitrarily large values, resulting in unbounded reads from the database. Clamp the value to match the documented contract.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/supabase/predictions/selectPredictions.ts, line 21:

<comment>The documented max limit of 100 is not enforced. Callers can pass arbitrarily large values, resulting in unbounded reads from the database. Clamp the value to match the documented contract.</comment>

<file context>
@@ -0,0 +1,36 @@
+ */
+export async function selectPredictions(
+  accountId: string,
+  limit = 20,
+  offset = 0,
+): Promise<PredictionSummary[]> {
</file context>

const { accountId } = authResult;

try {
const prediction = await selectPredictionById(id);

@cubic-dev-ai cubic-dev-ai Bot Apr 10, 2026


P2: Validate id as a UUID before querying the database. A malformed id like "abc" will cause a Postgres type error, and the handler will return 500 instead of a 400/404.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/predictions/getOnePredictionHandler.ts, line 26:

<comment>Validate `id` as a UUID before querying the database. A malformed `id` like `"abc"` will cause a Postgres type error, and the handler will return 500 instead of a 400/404.</comment>

<file context>
@@ -0,0 +1,66 @@
+  const { accountId } = authResult;
+
+  try {
+    const prediction = await selectPredictionById(id);
+
+    if (!prediction) {
</file context>

@@ -0,0 +1,37 @@
import supabase from "../serverClient";

interface PredictionRow {

@cubic-dev-ai cubic-dev-ai Bot Apr 10, 2026


P2: Duplicate PredictionRow interface — insertPrediction.ts in the same directory already defines this type (as PredictionInsert + { id, created_at }). Extract it to a shared file (e.g., lib/supabase/predictions/types.ts) or re-export it from insertPrediction.ts to avoid drift when the schema changes.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/supabase/predictions/selectPredictionById.ts, line 3:

<comment>Duplicate `PredictionRow` interface — `insertPrediction.ts` in the same directory already defines this type (as `PredictionInsert` + `{ id, created_at }`). Extract it to a shared file (e.g., `lib/supabase/predictions/types.ts`) or re-export it from `insertPrediction.ts` to avoid drift when the schema changes.</comment>

<file context>
@@ -0,0 +1,37 @@
+import supabase from "../serverClient";
+
+interface PredictionRow {
+  id: string;
+  account_id: string;
</file context>


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (4)
lib/supabase/predictions/insertPrediction.ts (1)

3-19: Extract shared prediction row/input types to one module.

PredictionInsert/PredictionRow are duplicated across prediction Supabase files. Please centralize them (e.g., one shared types module) to avoid drift when the table schema changes.

As per coding guidelines: “Extract shared logic into reusable utilities following Don't Repeat Yourself (DRY) principle.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/supabase/predictions/insertPrediction.ts` around lines 3 - 19,
PredictionInsert and PredictionRow are duplicated; extract them into a single
shared types module (e.g., create a new module exporting interfaces
PredictionInsert and PredictionRow), then update this file to import those types
instead of declaring them locally; ensure the exported names match exactly
(PredictionInsert, PredictionRow) and update other prediction-related Supabase
files to import from the new module so all code uses the single source of truth.
lib/predictions/getOnePredictionHandler.ts (1)

28-40: Deduplicate the repeated 404 response branch.

Both “not found” and “wrong account” return the same payload. Extracting a small helper keeps this handler tighter and easier to maintain.

As per coding guidelines: “Extract shared logic into reusable utilities following Don't Repeat Yourself (DRY) principle” and “Keep functions small and focused.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/predictions/getOnePredictionHandler.ts` around lines 28 - 40, The two
identical 404 branches in getOnePredictionHandler (checking if prediction is
falsy and if prediction.account_id !== accountId) should be deduplicated: add a
small helper function (e.g., notFoundResponse or respondPredictionNotFound) that
returns NextResponse.json({ status: "error", error: "Prediction not found" }, {
status: 404, headers: getCorsHeaders() }) and replace both direct returns with
calls to that helper so the handler uses the single reusable response for both
the missing-prediction and wrong-account checks.
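The helper described above can be sketched as follows. The PR would use NextResponse.json with getCorsHeaders(); the web-standard Response is used here only to keep the sketch self-contained:

```typescript
// Sketch of the deduplicated 404 helper (predictionNotFound is a hypothetical
// name). Returning the identical body for "not found" and "wrong account"
// also avoids leaking whether another account's prediction exists.
function predictionNotFound(): Response {
  return new Response(
    JSON.stringify({ status: "error", error: "Prediction not found" }),
    { status: 404, headers: { "Content-Type": "application/json" } },
  );
}
```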
lib/supabase/predictions/selectPredictionById.ts (1)

1-1: Align this Supabase module to the project typing/import pattern.

Please switch to the canonical Supabase import and table-backed typing instead of a local row interface + cast. This keeps schema drift from silently compiling and matches the repository standard for lib/supabase/**.

As per coding guidelines: “Supabase database functions should import from @/lib/supabase/serverClient … and return typed results using Tables<"table_name">.”

Also applies to: 3-16, 36-36

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/supabase/predictions/selectPredictionById.ts` at line 1, Replace the
local supabase import and manual row interface/cast in selectPredictionById with
the project canonical Supabase client and table-backed typing: import the client
from "@/lib/supabase/serverClient" and type results using Tables<"predictions">
(e.g., use PostgrestResponse<Tables["predictions"]> or return value typed as
Tables["predictions"]) instead of a local interface and casts; update the
function selectPredictionById and any related variables/returns (lines
referenced around 3-16 and 36) to use the Tables<"predictions"> type so the
module matches the repo standard and prevents schema drift.
lib/predictions/postCreatePredictionHandler.ts (1)

16-89: Refactor this handler into smaller units to satisfy SRP and size constraints.

postCreatePredictionHandler currently combines parsing, auth, validation, prediction orchestration, persistence, and response mapping in one long function. Please split this into focused helpers (e.g., request parsing, prediction persistence payload mapping, success/error response builders) to reduce complexity and improve testability.

♻️ Suggested refactor shape
 export async function postCreatePredictionHandler(
   request: NextRequest,
 ): Promise<NextResponse> {
-  let body: unknown;
-  try {
-    body = await request.json();
-  } catch {
-    return NextResponse.json(
-      { status: "error", error: "Request body must be valid JSON" },
-      { status: 400, headers: getCorsHeaders() },
-    );
-  }
+  const bodyOrError = await parseJsonBody(request);
+  if (bodyOrError instanceof NextResponse) return bodyOrError;
+  const body = bodyOrError;
   ...
 }
+
+async function parseJsonBody(request: NextRequest): Promise<unknown | NextResponse> {
+  try {
+    return await request.json();
+  } catch {
+    return NextResponse.json(
+      { status: "error", error: "Request body must be valid JSON" },
+      { status: 400, headers: getCorsHeaders() },
+    );
+  }
+}

As per coding guidelines, “lib/**/*.ts: Apply Single Responsibility Principle (SRP): one exported function per file; each file should do one thing well” and “Flag functions longer than 20 lines… Keep functions small and focused… lib/**/*.ts … Keep functions under 50 lines.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/predictions/postCreatePredictionHandler.ts` around lines 16 - 89, Split
the large postCreatePredictionHandler into small focused helpers: extract
request JSON parsing into parseJsonBody(request) that returns validated body or
NextResponse error; move auth + validation usage to a coordinator that calls
validateAuthContext and validateCreatePredictionBody; map the
processPredictRequest result into a persistence payload using
buildPredictionPayload(validated, metrics) and persist via
savePrediction(payload) which wraps insertPrediction; and move response creation
into buildSuccessResponse(row) and buildErrorResponse(errOrMessage) to return
NextResponse objects. Keep postCreatePredictionHandler as a thin orchestrator
that calls parseJsonBody, auth/validation coordinator, processPredictRequest,
buildPredictionPayload, savePrediction, and then uses the response builders.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/api/predictions/`[id]/route.ts:
- Around line 32-37: Validate the incoming route param `id` in the GET function
before calling getOnePredictionHandler: create a Zod schema (e.g.,
predictionIdSchema = z.object({ id: z.string().uuid() })) and parse await
options.params with it, returning a NextResponse with status 400 if validation
fails; on success extract the validated id and pass that to
getOnePredictionHandler(request, id). Ensure this validation occurs in the GET
function (which currently reads const { id } = await options.params) so
malformed UUIDs never reach getOnePredictionHandler.

In `@lib/predictions/getListPredictionsHandler.ts`:
- Line 1: Run Prettier to fix the formatting issues in the file (e.g., the
import line in getListPredictionsHandler.ts). Execute the formatting command
used by the repo (pnpm prettier --write
lib/predictions/getListPredictionsHandler.ts) or run your configured formatter
in the editor to apply consistent styling to the import and the rest of the
file.

In `@lib/predictions/getOnePredictionHandler.ts`:
- Around line 59-63: The catch block in getOnePredictionHandler currently
returns err.message to the client (err and NextResponse.json with
getCorsHeaders), which can leak internal details; instead, log the full error
internally (e.g., console.error(err) or your app logger) and return a generic
error message in NextResponse.json like { status: "error", error: "Internal
server error" } with the same 500 status and getCorsHeaders. Ensure you still
capture the original err for logs but do not surface err.message in the HTTP
response.

In `@lib/predictions/postCreatePredictionHandler.ts`:
- Around line 42-45: The handler currently returns raw internal error text
(result.error and err.message) to clients in postCreatePredictionHandler;
instead, change both error responses so they return a stable public error string
(e.g., "internal_server_error" or "prediction_creation_failed") and HTTP 500,
and log the full internal error server-side using the existing logger or console
(reference the result.type === "error" branch and the catch handling that
references err.message) so internal details are recorded but not exposed to
clients.

In `@lib/supabase/predictions/selectPredictions.ts`:
- Around line 21-23: The selectPredictions function trusts caller-provided
limit/offset even though docs cap limit at 100; sanitize/clamp the incoming
parameters (e.g., ensure limit is at least 1 and at most 100, and offset is >=
0) before using them to compute the start/end for the Supabase query
(.range(...)); update the computation that builds the range using these clamped
values (refer to selectPredictions and the .range(...) call) so the SQL query
behavior is predictable regardless of caller input.

In `@lib/tribe/callTribePredict.ts`:
- Line 1: The file lib/tribe/callTribePredict.ts has formatting issues; run the
formatter to fix them—execute pnpm prettier --write
lib/tribe/callTribePredict.ts (or run your project's Prettier task) to reformat
the import and the rest of the file so the TRIBE_PREDICT_URL import and the
callTribePredict module conform to project style.
- Around line 16-23: The fetch to TRIBE_PREDICT_URL in callTribePredict.ts
currently has no timeout; wrap the request with an AbortController (create
controller, pass controller.signal into fetch called in the response = await
fetch(...) call) and set a timer (e.g., setTimeout) to call controller.abort()
after a reasonable timeout (e.g., 30s), clear the timer after fetch completes,
and handle the abort/error path (catch and log or return an error) in the
function that invokes the fetch so hang-ups are prevented.

In `@lib/tribe/isTribePredictResult.ts`:
- Around line 24-31: The isTribePredictResult type guard is too shallow: enhance
checks inside the function (referencing isTribePredictResult and the local
variable c) to validate array element shapes and regional_activation structure.
For engagement_timeline, peak_moments, and weak_spots ensure Array.isArray(...)
and that every element matches expected types (e.g., objects with specific
numeric/string keys such as timestamp/score or start/end), using
Array.prototype.every to confirm entries are not null and have the required keys
with correct primitive types; for regional_activation verify it's a plain object
(not null/array) and that each value is a number (or matches the expected
subtype), and keep existing numeric checks for engagement_score,
total_duration_seconds, and elapsed_seconds. Update the guard to return false if
any of these stricter per-item or per-key validations fail so malformed TRIBE
responses are rejected.
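Two per-item helpers of the kind described above, runnable standalone. The `time_seconds`/`score` field names are assumptions about the TRIBE payload shape, not confirmed by the PR:

```typescript
// Per-element check for timeline/peak/weak entries (field names assumed).
function isPoint(p: unknown): p is { time_seconds: number; score: number } {
  if (!p || typeof p !== "object") return false;
  const r = p as Record<string, unknown>;
  return typeof r.time_seconds === "number" && typeof r.score === "number";
}

// regional_activation: a plain object (not null/array) of numeric values.
function isNumberRecord(r: unknown): r is Record<string, number> {
  if (!r || typeof r !== "object" || Array.isArray(r)) return false;
  return Object.values(r).every((v) => typeof v === "number");
}
```

`isTribePredictResult` would then require `c.engagement_timeline.every(isPoint)` (likewise for `peak_moments` and `weak_spots`) and `isNumberRecord(c.regional_activation)` on top of its existing numeric checks.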

In `@lib/tribe/processPredictRequest.ts`:
- Line 1: The file import of callTribePredict has formatting issues; run the
project's Prettier formatter on this file (e.g., pnpm prettier --write
lib/tribe/processPredictRequest.ts) to fix spacing/formatting, then
re-stage/commit the formatted changes so the pipeline passes; ensure import line
for callTribePredict remains unchanged semantically after formatting.

---

Nitpick comments:
In `@lib/predictions/getOnePredictionHandler.ts`:
- Around line 28-40: The two identical 404 branches in getOnePredictionHandler
(checking if prediction is falsy and if prediction.account_id !== accountId)
should be deduplicated: add a small helper function (e.g., notFoundResponse or
respondPredictionNotFound) that returns NextResponse.json({ status: "error",
error: "Prediction not found" }, { status: 404, headers: getCorsHeaders() }) and
replace both direct returns with calls to that helper so the handler uses the
single reusable response for both the missing-prediction and wrong-account
checks.
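The deduplicated body can be sketched as a plain function; the real helper would wrap it in `NextResponse.json(..., { status: 404, headers: getCorsHeaders() })`:

```typescript
// One 404 body for both the missing-row and wrong-account branches. Reusing
// the same message for both avoids leaking whether the prediction exists.
function predictionNotFoundBody(): { status: "error"; error: string } {
  return { status: "error", error: "Prediction not found" };
}
```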

In `@lib/predictions/postCreatePredictionHandler.ts`:
- Around line 16-89: Split the large postCreatePredictionHandler into small
focused helpers: extract request JSON parsing into parseJsonBody(request) that
returns validated body or NextResponse error; move auth + validation usage to a
coordinator that calls validateAuthContext and validateCreatePredictionBody; map
the processPredictRequest result into a persistence payload using
buildPredictionPayload(validated, metrics) and persist via
savePrediction(payload) which wraps insertPrediction; and move response creation
into buildSuccessResponse(row) and buildErrorResponse(errOrMessage) to return
NextResponse objects. Keep postCreatePredictionHandler as a thin orchestrator
that calls parseJsonBody, auth/validation coordinator, processPredictRequest,
buildPredictionPayload, savePrediction, and then uses the response builders.

In `@lib/supabase/predictions/insertPrediction.ts`:
- Around line 3-19: PredictionInsert and PredictionRow are duplicated; extract
them into a single shared types module (e.g., create a new module exporting
interfaces PredictionInsert and PredictionRow), then update this file to import
those types instead of declaring them locally; ensure the exported names match
exactly (PredictionInsert, PredictionRow) and update other prediction-related
Supabase files to import from the new module so all code uses the single source
of truth.

In `@lib/supabase/predictions/selectPredictionById.ts`:
- Line 1: Replace the local supabase import and manual row interface/cast in
selectPredictionById with the project canonical Supabase client and table-backed
typing: import the client from "@/lib/supabase/serverClient" and type results
using Tables<"predictions"> (e.g., use PostgrestResponse<Tables["predictions"]>
or return value typed as Tables["predictions"]) instead of a local interface and
casts; update the function selectPredictionById and any related
variables/returns (lines referenced around 3-16 and 36) to use the
Tables<"predictions"> type so the module matches the repo standard and prevents
schema drift.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 7ca7ca66-645d-416e-86fc-55a10acb3dfc

📥 Commits

Reviewing files that changed from the base of the PR and between 0780649 and 00d8759.

⛔ Files ignored due to path filters (6)
  • lib/predictions/__tests__/getListPredictionsHandler.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
  • lib/predictions/__tests__/getOnePredictionHandler.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
  • lib/predictions/__tests__/postCreatePredictionHandler.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
  • lib/tribe/__tests__/callTribePredict.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
  • lib/tribe/__tests__/processPredictRequest.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
  • lib/tribe/__tests__/validateCreatePredictionBody.test.ts is excluded by !**/*.test.*, !**/__tests__/** and included by lib/**
📒 Files selected for processing (17)
  • app/api/predictions/[id]/route.ts
  • app/api/predictions/route.ts
  • lib/const.ts
  • lib/mcp/tools/index.ts
  • lib/mcp/tools/tribe/index.ts
  • lib/mcp/tools/tribe/registerGetPredictionsTool.ts
  • lib/mcp/tools/tribe/registerPredictEngagementTool.ts
  • lib/predictions/getListPredictionsHandler.ts
  • lib/predictions/getOnePredictionHandler.ts
  • lib/predictions/postCreatePredictionHandler.ts
  • lib/supabase/predictions/insertPrediction.ts
  • lib/supabase/predictions/selectPredictionById.ts
  • lib/supabase/predictions/selectPredictions.ts
  • lib/tribe/callTribePredict.ts
  • lib/tribe/isTribePredictResult.ts
  • lib/tribe/processPredictRequest.ts
  • lib/tribe/validateCreatePredictionBody.ts

Comment on lines +32 to +37
export async function GET(
request: NextRequest,
options: { params: Promise<{ id: string }> },
): Promise<NextResponse> {
const { id } = await options.params;
return getOnePredictionHandler(request, id);

⚠️ Potential issue | 🟠 Major

Validate id as UUID before delegating to the handler.

The route currently forwards raw id. Add Zod validation at the boundary and return 400 on invalid params.

Example boundary validation
 import type { NextRequest } from "next/server";
 import { NextResponse } from "next/server";
+import { z } from "zod";
 import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
 import { getOnePredictionHandler } from "@/lib/predictions/getOnePredictionHandler";
+
+const getPredictionParamsSchema = z.object({ id: z.string().uuid() });

 export async function GET(
   request: NextRequest,
   options: { params: Promise<{ id: string }> },
 ): Promise<NextResponse> {
-  const { id } = await options.params;
-  return getOnePredictionHandler(request, id);
+  const parsed = getPredictionParamsSchema.safeParse(await options.params);
+  if (!parsed.success) {
+    return NextResponse.json(
+      { status: "error", error: "Invalid prediction id" },
+      { status: 400, headers: getCorsHeaders() },
+    );
+  }
+  return getOnePredictionHandler(request, parsed.data.id);
 }

As per coding guidelines: “All API endpoints should use a validate function for input parsing using Zod for schema validation.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/predictions/`[id]/route.ts around lines 32 - 37, Validate the
incoming route param `id` in the GET function before calling
getOnePredictionHandler: create a Zod schema (e.g., predictionIdSchema =
z.object({ id: z.string().uuid() })) and parse await options.params with it,
returning a NextResponse with status 400 if validation fails; on success extract
the validated id and pass that to getOnePredictionHandler(request, id). Ensure
this validation occurs in the GET function (which currently reads const { id } =
await options.params) so malformed UUIDs never reach getOnePredictionHandler.
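Where pulling in Zod is not convenient (e.g., quick tests), a dependency-free check of the same shape can stand in for `z.string().uuid()`; note this regex is looser than Zod's (it does not constrain the version/variant nibbles):

```typescript
// Textual 8-4-4-4-12 hex layout of RFC 4122 UUIDs.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isUuid(id: string): boolean {
  return UUID_RE.test(id);
}
```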

@@ -0,0 +1,40 @@
import type { NextRequest } from "next/server";

⚠️ Potential issue | 🟡 Minor

Run Prettier to fix formatting.

The pipeline reports formatting issues. Run pnpm prettier --write lib/predictions/getListPredictionsHandler.ts to resolve.

🧰 Tools
🪛 GitHub Actions: Format Check

[warning] 1-1: Prettier (format:check) reported code style issues in this file. Run Prettier with --write to fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/predictions/getListPredictionsHandler.ts` at line 1, Run Prettier to fix
the formatting issues in the file (e.g., the import line in
getListPredictionsHandler.ts). Execute the formatting command used by the repo
(pnpm prettier --write lib/predictions/getListPredictionsHandler.ts) or run your
configured formatter in the editor to apply consistent styling to the import and
the rest of the file.

Comment on lines +59 to +63
} catch (err) {
const message = err instanceof Error ? err.message : "Failed to fetch prediction";
return NextResponse.json(
{ status: "error", error: message },
{ status: 500, headers: getCorsHeaders() },

⚠️ Potential issue | 🟠 Major

Do not expose raw backend error messages in API 500 responses.

err.message is returned directly to clients. This can leak internal details from Supabase/network failures. Return a generic message externally and log the detailed cause internally.

Safer error response pattern
   } catch (err) {
-    const message = err instanceof Error ? err.message : "Failed to fetch prediction";
+    console.error("getOnePredictionHandler failed", err);
     return NextResponse.json(
-      { status: "error", error: message },
+      { status: "error", error: "Failed to fetch prediction" },
       { status: 500, headers: getCorsHeaders() },
     );
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    } catch (err) {
-      const message = err instanceof Error ? err.message : "Failed to fetch prediction";
-      return NextResponse.json(
-        { status: "error", error: message },
-        { status: 500, headers: getCorsHeaders() },
+    } catch (err) {
+      console.error("getOnePredictionHandler failed", err);
+      return NextResponse.json(
+        { status: "error", error: "Failed to fetch prediction" },
+        { status: 500, headers: getCorsHeaders() },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/predictions/getOnePredictionHandler.ts` around lines 59 - 63, The catch
block in getOnePredictionHandler currently returns err.message to the client
(err and NextResponse.json with getCorsHeaders), which can leak internal
details; instead, log the full error internally (e.g., console.error(err) or
your app logger) and return a generic error message in NextResponse.json like {
status: "error", error: "Internal server error" } with the same 500 status and
getCorsHeaders. Ensure you still capture the original err for logs but do not
surface err.message in the HTTP response.

Comment on lines +42 to +45
if (result.type === "error") {
return NextResponse.json(
{ status: "error", error: result.error },
{ status: 500, headers: getCorsHeaders() },

⚠️ Potential issue | 🟠 Major

Avoid returning raw upstream/database error messages to clients.

Line 44 and Line 85 expose internal error text directly (result.error, err.message). This can leak backend/provider details. Return a stable public error string and log the internal message server-side.

🔒 Suggested hardening
   if (result.type === "error") {
+    // log internal details here
     return NextResponse.json(
-      { status: "error", error: result.error },
+      { status: "error", error: "Prediction request failed" },
       { status: 500, headers: getCorsHeaders() },
     );
   }
 ...
   } catch (err) {
-    const message = err instanceof Error ? err.message : "Failed to save prediction";
+    // log err internally here
     return NextResponse.json(
-      { status: "error", error: message },
+      { status: "error", error: "Failed to save prediction" },
       { status: 500, headers: getCorsHeaders() },
     );
   }

Also applies to: 83-86

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/predictions/postCreatePredictionHandler.ts` around lines 42 - 45, The
handler currently returns raw internal error text (result.error and err.message)
to clients in postCreatePredictionHandler; instead, change both error responses
so they return a stable public error string (e.g., "internal_server_error" or
"prediction_creation_failed") and HTTP 500, and log the full internal error
server-side using the existing logger or console (reference the result.type ===
"error" branch and the catch handling that references err.message) so internal
details are recorded but not exposed to clients.

Comment on lines +21 to +23
limit = 20,
offset = 0,
): Promise<PredictionSummary[]> {

⚠️ Potential issue | 🟡 Minor

Enforce limit/offset bounds in code (not just docs).

The JSDoc says max limit is 100, but the function currently trusts caller input. Clamp/sanitize before building .range(...) to keep query behavior predictable.

Suggested guard
 export async function selectPredictions(
   accountId: string,
   limit = 20,
   offset = 0,
 ): Promise<PredictionSummary[]> {
+  const safeLimit = Math.min(Math.max(limit, 1), 100);
+  const safeOffset = Math.max(offset, 0);
+
   const { data, error } = await supabase
     .from("predictions")
     .select("id, file_url, modality, engagement_score, created_at")
     .eq("account_id", accountId)
     .order("created_at", { ascending: false })
-    .range(offset, offset + limit - 1);
+    .range(safeOffset, safeOffset + safeLimit - 1);

Also applies to: 29-29

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/supabase/predictions/selectPredictions.ts` around lines 21 - 23, The
selectPredictions function trusts caller-provided limit/offset even though docs
cap limit at 100; sanitize/clamp the incoming parameters (e.g., ensure limit is
at least 1 and at most 100, and offset is >= 0) before using them to compute the
start/end for the Supabase query (.range(...)); update the computation that
builds the range using these clamped values (refer to selectPredictions and the
.range(...) call) so the SQL query behavior is predictable regardless of caller
input.

@@ -0,0 +1,36 @@
import { TRIBE_PREDICT_URL } from "@/lib/const";

⚠️ Potential issue | 🟡 Minor

Run Prettier to fix formatting.

The pipeline reports formatting issues. Run pnpm prettier --write lib/tribe/callTribePredict.ts to resolve.

🧰 Tools
🪛 GitHub Actions: Format Check

[warning] 1-1: Prettier (format:check) reported code style issues in this file. Run Prettier with --write to fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/tribe/callTribePredict.ts` at line 1, The file
lib/tribe/callTribePredict.ts has formatting issues; run the formatter to fix
them—execute pnpm prettier --write lib/tribe/callTribePredict.ts (or run your
project's Prettier task) to reformat the import and the rest of the file so the
TRIBE_PREDICT_URL import and the callTribePredict module conform to project
style.

Comment on lines +16 to +23
const response = await fetch(TRIBE_PREDICT_URL, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
file_url: params.file_url,
modality: params.modality,
}),
});

⚠️ Potential issue | 🟠 Major

Consider adding a timeout for the external HTTP call.

The fetch to the Modal endpoint has no timeout configured. GPU inference can be slow or the endpoint could become unresponsive, potentially causing requests to hang indefinitely. This is a reliability concern for external-call hazards.

🛡️ Proposed fix using AbortController
 export async function callTribePredict(
   params: CreatePredictionBody,
 ): Promise<TribePredictResult> {
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 60000); // 60s timeout
+
-  const response = await fetch(TRIBE_PREDICT_URL, {
-    method: "POST",
-    headers: { "Content-Type": "application/json" },
-    body: JSON.stringify({
-      file_url: params.file_url,
-      modality: params.modality,
-    }),
-  });
+  try {
+    const response = await fetch(TRIBE_PREDICT_URL, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({
+        file_url: params.file_url,
+        modality: params.modality,
+      }),
+      signal: controller.signal,
+    });
+    clearTimeout(timeoutId);
+    // …existing response handling continues unchanged…
+  } catch (err) {
+    clearTimeout(timeoutId);
+    throw err; // an AbortError here indicates the timeout fired
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/tribe/callTribePredict.ts` around lines 16 - 23, The fetch to
TRIBE_PREDICT_URL in callTribePredict.ts currently has no timeout; wrap the
request with an AbortController (create controller, pass controller.signal into
fetch called in the response = await fetch(...) call) and set a timer (e.g.,
setTimeout) to call controller.abort() after a reasonable timeout (e.g., 30s),
clear the timer after fetch completes, and handle the abort/error path (catch
and log or return an error) in the function that invokes the fetch so hang-ups
are prevented.

Comment on lines +24 to +31
typeof c.engagement_score === "number" &&
Array.isArray(c.engagement_timeline) &&
Array.isArray(c.peak_moments) &&
Array.isArray(c.weak_spots) &&
typeof c.regional_activation === "object" &&
c.regional_activation !== null &&
typeof c.total_duration_seconds === "number" &&
typeof c.elapsed_seconds === "number"

⚠️ Potential issue | 🟠 Major

Strengthen payload validation in the type guard.

Current checks are shallow: timeline arrays and regional_activation value types aren’t validated. Malformed TRIBE responses can be accepted as TribePredictResult.

Proposed hardening
 export function isTribePredictResult(value: unknown): value is TribePredictResult {
   if (!value || typeof value !== "object") return false;
   const c = value as Record<string, unknown>;
+  const isPoint = (p: unknown): p is { time_seconds: number; score: number } =>
+    !!p &&
+    typeof p === "object" &&
+    typeof (p as Record<string, unknown>).time_seconds === "number" &&
+    typeof (p as Record<string, unknown>).score === "number";
+  const isNumberRecord = (r: unknown): r is Record<string, number> =>
+    !!r &&
+    typeof r === "object" &&
+    Object.values(r as Record<string, unknown>).every((v) => typeof v === "number");
+
   return (
     typeof c.engagement_score === "number" &&
-    Array.isArray(c.engagement_timeline) &&
-    Array.isArray(c.peak_moments) &&
-    Array.isArray(c.weak_spots) &&
-    typeof c.regional_activation === "object" &&
-    c.regional_activation !== null &&
+    Array.isArray(c.engagement_timeline) &&
+    c.engagement_timeline.every(isPoint) &&
+    Array.isArray(c.peak_moments) &&
+    c.peak_moments.every(isPoint) &&
+    Array.isArray(c.weak_spots) &&
+    c.weak_spots.every(isPoint) &&
+    isNumberRecord(c.regional_activation) &&
     typeof c.total_duration_seconds === "number" &&
     typeof c.elapsed_seconds === "number"
   );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/tribe/isTribePredictResult.ts` around lines 24 - 31, The
isTribePredictResult type guard is too shallow: enhance checks inside the
function (referencing isTribePredictResult and the local variable c) to validate
array element shapes and regional_activation structure. For engagement_timeline,
peak_moments, and weak_spots ensure Array.isArray(...) and that every element
matches expected types (e.g., objects with specific numeric/string keys such as
timestamp/score or start/end), using Array.prototype.every to confirm entries
are not null and have the required keys with correct primitive types; for
regional_activation verify it's a plain object (not null/array) and that each
value is a number (or matches the expected subtype), and keep existing numeric
checks for engagement_score, total_duration_seconds, and elapsed_seconds. Update
the guard to return false if any of these stricter per-item or per-key
validations fail so malformed TRIBE responses are rejected.

@@ -0,0 +1,33 @@
import { callTribePredict } from "./callTribePredict";

⚠️ Potential issue | 🟡 Minor

Run Prettier to fix formatting.

The pipeline reports formatting issues. Run pnpm prettier --write lib/tribe/processPredictRequest.ts to resolve.

🧰 Tools
🪛 GitHub Actions: Format Check

[warning] 1-1: Prettier (format:check) reported code style issues in this file. Run Prettier with --write to fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/tribe/processPredictRequest.ts` at line 1, The file import of
callTribePredict has formatting issues; run the project's Prettier formatter on
this file (e.g., pnpm prettier --write lib/tribe/processPredictRequest.ts) to
fix spacing/formatting, then re-stage/commit the formatted changes so the
pipeline passes; ensure import line for callTribePredict remains unchanged
semantically after formatting.
