[–] tatterdemalion@programming.dev 15 points 1 week ago* (last edited 1 week ago) (3 children)

There is a subtle scenario with read-modify-write transactions in MVCC where SQLite lacks some grace (in my opinion).

In MVCC, each transaction works with a consistent "read snapshot" of the database taken at a point in time (i.e. at a boundary between atomic transactions).

Consider this example:

  1. Transaction A begins and reads from table foo.
  2. Transaction B begins and updates table foo.
  3. Both transactions commit.

There is no conflict here because these transactions are isolated from each other via the snapshot mechanism. Transaction A's read snapshot is immutable and will not see any writes from transaction B, even if they are happening concurrently.
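
Here's a rough sketch of that first example using Python's sqlite3 module (assuming WAL mode, a throwaway demo.db file, and a hypothetical foo(id, n) table, none of which come from the OP):

```python
import sqlite3

db = "demo.db"

# one-off setup: WAL mode plus the hypothetical foo table
init = sqlite3.connect(db)
init.execute("PRAGMA journal_mode=WAL")
init.execute("CREATE TABLE IF NOT EXISTS foo (id INTEGER PRIMARY KEY, n INTEGER)")
init.execute("INSERT OR REPLACE INTO foo (id, n) VALUES (1, 0)")
init.commit()
init.close()

a = sqlite3.connect(db)
b = sqlite3.connect(db)

a.execute("BEGIN")                                               # 1. A begins (deferred)...
before = a.execute("SELECT n FROM foo WHERE id = 1").fetchone()  #    ...and reads, pinning its snapshot

b.execute("UPDATE foo SET n = 99 WHERE id = 1")                  # 2. B updates foo
b.commit()

after = a.execute("SELECT n FROM foo WHERE id = 1").fetchone()
print(before == after)  # True: A's snapshot never sees B's committed write
a.commit()              # 3. both have now committed; no conflict, since A never wrote
```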

Now, what happens in this example (from the OP)?

  1. Transaction A begins and reads from table foo.
  2. Transaction B begins and updates table foo.
  3. Transaction B commits.
  4. Transaction A tries to update foo.

This is a true conflict because both transactions are trying to write to foo, and transaction A's writes might be based on what it just read. There is no consistent way for A to proceed, because B already wrote to foo, invalidating A's read snapshot.

So SQLite handles this by returning an error (SQLITE_BUSY_SNAPSHOT, in WAL mode) to A, effectively requiring A to restart the transaction.
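
The second example looks like this in the same sketch setup (again Python's sqlite3 with WAL mode and the hypothetical foo(id, n) table; the exact error message may vary between versions):

```python
import sqlite3

db = "demo.db"

# same one-off setup as above (WAL mode, hypothetical foo(id, n) table)
init = sqlite3.connect(db)
init.execute("PRAGMA journal_mode=WAL")
init.execute("CREATE TABLE IF NOT EXISTS foo (id INTEGER PRIMARY KEY, n INTEGER)")
init.execute("INSERT OR REPLACE INTO foo (id, n) VALUES (1, 0)")
init.commit()
init.close()

a = sqlite3.connect(db)
b = sqlite3.connect(db)

a.execute("BEGIN")                                          # 1. A begins...
a.execute("SELECT n FROM foo WHERE id = 1").fetchone()      #    ...and reads foo

b.execute("UPDATE foo SET n = n + 1 WHERE id = 1")          # 2. B begins and updates foo
b.commit()                                                  # 3. B commits

try:
    a.execute("UPDATE foo SET n = n + 1 WHERE id = 1")      # 4. A tries to update foo
except sqlite3.OperationalError as exc:
    print("rejected:", exc)  # A's read snapshot is stale (SQLITE_BUSY_SNAPSHOT)
    a.rollback()             # the whole transaction has to be restarted
```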

There are other ways this could be handled, though. The DB could optimistically retry the transaction for you. There is even a special BEGIN IMMEDIATE; statement that takes the database's write lock up front, so the transaction's read snapshot can never be invalidated by another writer partway through. But SQLite puts all of the responsibility on users to handle this.
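
For example, an application-side retry wrapper might look something like this (a hypothetical helper, not something SQLite ships; it assumes the foo(id, n) table from the sketches above, with row 1 present):

```python
import sqlite3
import time

def increment_with_retry(db_path: str, attempts: int = 5) -> None:
    # BEGIN IMMEDIATE claims the database write lock before reading, so the
    # read-modify-write below can never be invalidated by a concurrent writer;
    # any contention shows up as SQLITE_BUSY at BEGIN time, which is safe to retry.
    conn = sqlite3.connect(db_path, timeout=5.0)  # busy timeout while waiting for the lock
    try:
        for attempt in range(attempts):
            try:
                conn.execute("BEGIN IMMEDIATE")   # claim the write lock up front
                (n,) = conn.execute("SELECT n FROM foo WHERE id = 1").fetchone()
                conn.execute("UPDATE foo SET n = ? WHERE id = 1", (n + 1,))
                conn.commit()
                return
            except sqlite3.OperationalError:
                conn.rollback()
                time.sleep(0.05 * (attempt + 1))  # crude backoff before retrying
        raise RuntimeError("could not commit after %d attempts" % attempts)
    finally:
        conn.close()
```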

I'm not an expert, so there could be a very good reason that SQLite works this way, but it feels a bit annoying as a user.

I don't actually know off the top of my head how PostgreSQL handles this particular scenario.

[–] FishFace@piefed.social 4 points 1 week ago (1 children)

Wouldn't that be an error due to serialisation failure in Postgres if you enabled serialisable isolation?

If you didn't, you could get anything, because unless I'm mistaken this is the typical example used to illustrate what goes wrong when you don't have full serialisable isolation.

[–] tatterdemalion@programming.dev 1 points 1 week ago (1 children)

I believe both SERIALIZABLE and REPEATABLE READ isolation levels would raise an error for this example.

And I looked this up: Postgres won't retry the transaction for you either. Though ultimately I think this is a good default behavior.

[–] FishFace@piefed.social 2 points 1 week ago (1 children)

Why would repeatable read raise an error? I'm a little rusty, but I thought it was only serialisable that could do that, and that repeatable read simply ensured that the committed writes of transaction 2 would not be read in transaction 1, so transaction 1 updating over the top of 2 is fine.

I think you're right. An UPDATE would fail, but not an INSERT.
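
For what it's worth, here's a rough sketch of that UPDATE case from Python with psycopg2 (placeholder connection string and the same hypothetical foo(id, n) table; I'm confident of the general behaviour, not the exact error text):

```python
import psycopg2

# placeholder connection string; the foo(id, n) table is assumed to exist
a = psycopg2.connect("dbname=test")
b = psycopg2.connect("dbname=test")
a.set_session(isolation_level="REPEATABLE READ")

ca, cb = a.cursor(), b.cursor()

ca.execute("SELECT n FROM foo WHERE id = 1")          # A reads, pinning its snapshot
ca.fetchone()

cb.execute("UPDATE foo SET n = n + 1 WHERE id = 1")   # B updates the same row...
b.commit()                                            # ...and commits first

try:
    ca.execute("UPDATE foo SET n = n + 1 WHERE id = 1")  # A's update now conflicts
    a.commit()
except psycopg2.Error as exc:
    # typically "could not serialize access due to concurrent update"
    print("serialization failure:", exc)
    a.rollback()  # Postgres leaves the retry to the application too
```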

[–] msage@programming.dev 3 points 1 week ago (1 children)

I mean, if you SELECT without FOR UPDATE, you're asking for trouble.

SQLite does not have a SELECT ... FOR UPDATE feature. The closest alternative is BEGIN IMMEDIATE.
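
For contrast, the FOR UPDATE pattern on the Postgres side looks roughly like this (again psycopg2 with a placeholder DSN and the hypothetical foo(id, n) table): the row lock taken by the SELECT makes a concurrent writer block until we commit, instead of committing underneath us.

```python
import psycopg2

conn = psycopg2.connect("dbname=test")  # placeholder connection string
cur = conn.cursor()

cur.execute("SELECT n FROM foo WHERE id = 1 FOR UPDATE")  # lock the row we intend to modify
(n,) = cur.fetchone()
cur.execute("UPDATE foo SET n = %s WHERE id = 1", (n + 1,))
conn.commit()  # releases the row lock; any waiting writer proceeds now
```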

[–] ulterno@programming.dev 0 points 1 week ago (1 children)

"SQLite puts all of the responsibility on users to handle this"

Thanks for the explanation.
I feel like whoever decided on this behaviour expected the users of the database to build their own system for prioritising transactions.

[–] F04118F@feddit.nl 2 points 1 week ago

More likely: SQLite is built to be small, simple and lightweight, not to be super highly concurrent.

If this situation happens rarely, just make sure you retry the transaction. If it happens often, switch to Postgres.