
Fix/extra step for zero join prisma adapter #7461

Closed

sitozzz wants to merge 2 commits into main from fix/extra-step-for-zero-join-prisma-adapter

Conversation

@sitozzz (Member) commented Apr 13, 2026

Summary by CodeRabbit

  • Performance Improvements

    • Optimized database query handling for large datasets through intelligent filter chunking
    • Improved result deduplication for consistency
  • Bug Fixes

    • Enhanced relationship field filtering to accurately resolve constraints across relationship configurations
  • Infrastructure

    • Updated deployment configuration dependencies

@coderabbitai bot (Contributor) commented Apr 13, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0f1829e1-9224-497b-ad79-0d9f2d43e6b1

📥 Commits

Reviewing files that changed from the base of the PR and between c4f10fd and a1f5c2d.

📒 Files selected for processing (2)
  • .helm
  • packages/keystone/databaseAdapters/adapters/PrismaAdapter.js

📝 Walkthrough

Walkthrough

The Helm submodule reference was updated to a new commit hash. PrismaAdapter.js was enhanced with chunking support for large Prisma `where` filters exceeding 12,000 items, and its relationship-based where-clause processing was refactored to use scalar foreign-key filtering.
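The chunk-splitting step can be sketched roughly as follows; `CHUNK_SIZE`, `splitLargeInFilter`, and the field shapes are illustrative names and assumptions, not the PR's actual identifiers:

```javascript
// Illustrative sketch: split a Prisma-style where clause whose `in` array
// exceeds the chunk limit into several smaller where clauses.
const CHUNK_SIZE = 12000

function splitLargeInFilter (where, field) {
    const values = where[field] && where[field].in
    // Small (or non-`in`) filters pass through untouched
    if (!Array.isArray(values) || values.length <= CHUNK_SIZE) return [where]
    const chunks = []
    for (let i = 0; i < values.length; i += CHUNK_SIZE) {
        // Each chunk keeps all other conditions and narrows only the `in` list
        chunks.push({ ...where, [field]: { in: values.slice(i, i + CHUNK_SIZE) } })
    }
    return chunks
}
```

Each chunk then runs as its own query, and the results are merged afterwards.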

Changes

Cohort / File(s) Summary
Helm submodule
.helm
Submodule commit hash updated from a3b5737f2e0e90f53acc7681386c9a8d1b98af2a to dd27b5b6c66e9363ceed23b4e021a407f3b0b269.
PrismaAdapter relationship and chunking logic
packages/keystone/databaseAdapters/adapters/PrismaAdapter.js
Implemented chunking for large where filters (nested `in` or `OR` arrays longer than 12,000 items) by splitting queries and merging deduplicated results. Refactored relationship-based where-clause processing with new helpers (`_idsToFilter`, `_filterByRelatedIdsThroughFk`) to resolve scalar FK fields and handle to-one, one-to-many, and N:N relationship constraints via FK-based filtering instead of Prisma relation operators.
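The FK-based rewrite for a to-one relationship can be sketched as below. The helper name, the `organization`/`organizationId` field pair, and the filter shape are assumptions for illustration; the PR's actual `_filterByRelatedIdsThroughFk` may differ:

```javascript
// Illustrative sketch: replace a relation-operator condition such as
// { organization: { id: { in: [...] } } } with a scalar filter on the FK
// column, { organizationId: { in: [...] } }, avoiding the extra join step.
function filterByRelatedIdsThroughFk (where, relField, fkField) {
    const relCondition = where[relField]
    if (!relCondition || !Array.isArray(relCondition.id?.in)) return where
    // Drop the relation condition and re-express it on the scalar FK field
    const { [relField]: _removed, ...rest } = where
    return { ...rest, [fkField]: { in: relCondition.id.in } }
}
```

For one-to-many and N:N constraints the same idea applies in the other direction: resolve the related ids first, then filter the owning side by its FK column.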

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant processWheres as processWheres()
    participant Chunking as Chunking Logic
    participant RelHandler as _filterByRelatedIdsThroughFk()
    participant Prisma as Prisma Client
    participant DB as Database

    Client->>processWheres: Pass where clause
    alt Large in/OR detected
        processWheres->>Chunking: Split filter (> 12000 items)
        loop For each chunk
            Chunking->>Prisma: count(chunk)
            Prisma->>DB: Execute count query
            DB-->>Prisma: Return count
            Chunking->>Prisma: findMany(chunk)
            Prisma->>DB: Execute find query
            DB-->>Prisma: Return results
        end
        Chunking->>Chunking: Merge & deduplicate on id
        Chunking-->>processWheres: Combined results
    else Relationship filter detected
        processWheres->>RelHandler: Resolve FK field name
        RelHandler->>RelHandler: Build scalar FK filter
        RelHandler->>Prisma: Query with FK condition
        Prisma->>DB: Execute FK-based query
        DB-->>Prisma: Return filtered results
        Prisma-->>RelHandler: Results
        RelHandler-->>processWheres: Filtered items
    end
    processWheres-->>Client: Processed where clause results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 Chunking carrots into portions neat,
Relationships resolved, the logic complete,
Foreign keys dancing through Prisma's embrace,
Twelve thousand filters now find their place!


@sitozzz sitozzz closed this Apr 13, 2026
@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a1f5c2dad8


Comment on lines +129 to +133
const parts = await Promise.all(chunkedFilters.map(chunkFilter => this.model.findMany(chunkFilter)))
const byId = new Map()
for (const part of parts) {
    for (const item of part) {
        if (item && item.id !== undefined) byId.set(item.id, item)


P1: Preserve global sort/pagination when merging chunked queries

Running findMany once per chunk and then unioning results by id changes query semantics whenever the original filter includes orderBy, skip, or take (which is how list pagination is normally applied): each chunk gets paginated independently, then all chunk pages are merged without a final global sort/page cut. In practice, a request like first: 10 can return up to 10 * chunks rows and in a different order than the single-query result.
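One way to restore single-query semantics, sketched under assumptions: run each chunk without `orderBy`/`skip`/`take`, then re-apply ordering and the page cut globally after merging. The function name, the single-field `orderBy` shape, and the in-memory sort are illustrative, not the PR's implementation:

```javascript
// Hypothetical sketch: merge chunked findMany results, then apply a global
// sort and page cut so `skip`/`take` behave like a single query.
async function findManyChunked (model, chunkedWheres, { orderBy, skip = 0, take } = {}) {
    // Pagination is intentionally NOT pushed down to the chunks
    const parts = await Promise.all(chunkedWheres.map(where => model.findMany({ where })))
    const byId = new Map()
    for (const part of parts) for (const item of part) byId.set(item.id, item)
    const merged = [...byId.values()]
    if (orderBy) {
        // Assumes a single-field orderBy like { rank: 'desc' }
        const [[field, dir]] = Object.entries(orderBy)
        const sign = dir === 'desc' ? -1 : 1
        merged.sort((a, b) => (a[field] < b[field] ? -1 : a[field] > b[field] ? 1 : 0) * sign)
    }
    // Apply the page cut once, over the globally sorted union
    return take === undefined ? merged.slice(skip) : merged.slice(skip, skip + take)
}
```

The trade-off is that the union of all chunk results is held in memory before the cut, which is exactly what the chunking tried to bound.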


Comment on lines +110 to +111
const counts = await Promise.all(chunkedFilters.map(chunkFilter => this.model.count(chunkFilter)))
count = counts.reduce((acc, value) => acc + value, 0)


P2: Avoid double-counting when summing chunked meta counts

Meta count is computed as a plain sum of count() over each chunk, but chunk partitions are not guaranteed to be disjoint (especially when splitting a large OR, where one row can satisfy predicates in multiple chunks). In those cases the same row is counted multiple times, so meta.count can be inflated compared to the original single-query semantics.
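A de-duplicated count can be sketched like this: instead of summing `count()` per chunk, collect distinct ids across chunks and count the set. The helper name and the `select: { id: true }` projection are illustrative assumptions:

```javascript
// Hypothetical sketch: count rows matching any chunk exactly once, even when
// chunk partitions overlap (e.g. after splitting a large OR).
async function countChunkedDistinct (model, chunkedWheres) {
    const parts = await Promise.all(
        // Fetch only ids to keep the payload small
        chunkedWheres.map(where => model.findMany({ where, select: { id: true } }))
    )
    const ids = new Set()
    for (const part of parts) for (const { id } of part) ids.add(id)
    return ids.size
}
```

This trades the cheap `count()` queries for id fetches, so it only pays off when chunks can genuinely overlap; disjoint partitions (e.g. slicing one `in` list) can keep the plain sum.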


@sonarqubecloud