8 changes: 6 additions & 2 deletions docs/06-concepts/06-database/05-crud.md

### Ignoring conflicts

When inserting rows that might violate a unique or exclusion constraint, you can set `ignoreConflicts` to `true` on the `insert` method. Rows that would cause a unique or exclusion constraint violation are silently skipped, and only the non-conflicting rows are inserted.

```dart
var rows = [Company(name: 'Serverpod'), Company(name: 'Google')];
var inserted = await Company.db.insert(session, rows, ignoreConflicts: true);
```

The method returns only the rows that were successfully inserted. If all rows conflict, an empty list is returned. Unlike a regular `insert`, which fails entirely if any row violates a constraint, `ignoreConflicts` allows partial inserts where only the non-conflicting rows are written.

This is useful for idempotent operations where you want to insert data without failing on duplicates.
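For example, re-running the same insert is safe. In this sketch, `name` is assumed to have a unique index in the model definition; on the second call every row conflicts, so no exception is thrown and an empty list comes back:

```dart
var rows = [Company(name: 'Serverpod'), Company(name: 'Google')];
await Company.db.insert(session, rows, ignoreConflicts: true);

// Running the insert again does not throw; all rows now conflict,
// so nothing is written and the returned list is empty.
var retried = await Company.db.insert(session, rows, ignoreConflicts: true);
assert(retried.isEmpty);
```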

:::note
Under the hood, this uses PostgreSQL's `ON CONFLICT DO NOTHING`. Only unique and exclusion constraint violations are ignored — other errors such as `NOT NULL`, `CHECK`, or foreign key violations will still throw an exception.
:::

:::warning
When using `ignoreConflicts` with models that have [non-persistent fields](models#non-persistent-fields), each row is inserted individually instead of in a single batch. This is necessary because the database cannot report which rows were skipped in a batch insert, making it impossible to correctly match non-persistent field values back to inserted rows. For large numbers of rows, this can cause performance issues. Consider removing non-persistent fields from the model or inserting in smaller batches.
:::
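When removing the non-persistent fields is not an option, splitting a large batch into smaller chunks keeps each round trip bounded. A minimal sketch (the batch size of 100 and the `rows` list are illustrative, not a recommendation from the docs):

```dart
const batchSize = 100;
var inserted = <Company>[];
for (var i = 0; i < rows.length; i += batchSize) {
  // Clamp the end index so the final chunk may be shorter than batchSize.
  var end = i + batchSize < rows.length ? i + batchSize : rows.length;
  var chunk = rows.sublist(i, end);
  inserted.addAll(
    await Company.db.insert(session, chunk, ignoreConflicts: true),
  );
}
```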

## Read

There are three different read operations available.