Index predicate locking and serializability contention

From: Marcelo Zabani
Subject: Index predicate locking and serializability contention
Date:
Msg-id: CACgY3Qa1W=JYJe_Au2eNsbGLbfDn4_v4WEdS4XO=Gwh3YjwWsA@mail.gmail.com
Hi!

We're currently having issues with serialization conflicts under SERIALIZABLE isolation at our shop, and after tracking one such conflict down very carefully, we found two main causes:
1. Page-level predicate locks on primary key indexes whose associated column gets its Id from a sequence (a rough reproduction of this is sketched right after this list).
2. An empty table that gets inserted into but has those inserted rows deleted again before committing.
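For concreteness, here is a sketch of the kind of interleaving behind the first cause. The schema and values are made up for the example, and it assumes both reads go through the PK index and that the Ids involved end up on the same btree leaf page:

-- Hypothetical schema and data, for illustration only:
CREATE SEQUENCE orders_id_seq START 1000;
CREATE TABLE orders (
    id   bigint PRIMARY KEY DEFAULT nextval('orders_id_seq'),
    note text
);
INSERT INTO orders (note) SELECT 'seed' FROM generate_series(1, 100);  -- ids 1000..1099, all on one leaf page

-- session 1
SET enable_seqscan = off;                -- make sure the read below goes through the PK index
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM orders WHERE id = 1050;    -- index read takes a page-level SIReadLock on that leaf page

-- session 2
SET enable_seqscan = off;
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM orders WHERE id = 1051;    -- same leaf page, same lock

-- session 1
INSERT INTO orders (note) VALUES ('a');  -- gets id 1100, which also lands on the page session 2 read

-- session 2
INSERT INTO orders (note) VALUES ('b');  -- gets id 1101, which lands on the page session 1 read

-- session 1
COMMIT;

-- session 2
COMMIT;  -- one of the two commits can fail with SQLSTATE 40001, even though the rows never overlap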


We're confident these are the only remaining impediments to those transactions not conflicting with each other, because we have changed the code in just the right places to make sure that no conflicts arise when we do both of the following:
- For the first case, setting the sequence's nextval and increment so that the first transaction gets an Id that is on a different index page than the Id the second transaction will get (sketched right after this list).
- Not writing to the table that gets inserted into and then emptied. Before this, we also tried setting enable_seqscan to off and inspecting the query plans and SIReadLocks carefully before committing to make sure sequential scans were avoided, but it wasn't sufficient.
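The sequence change in the first workaround is roughly the following (the numbers are made up; the only point is that the two transactions draw Ids far enough apart to land on different leaf pages):

-- Illustrative only; actual values depend on the index's page layout.
ALTER SEQUENCE orders_id_seq INCREMENT BY 500;   -- consecutive nextval() calls now fall on different leaf pages
-- or, before one of the transactions, jump the sequence explicitly:
SELECT setval('orders_id_seq', 100000);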

I believe that in the first case the problem is one of lock granularity, which has been mentioned before at https://www.postgresql.org/message-id/flat/20110503064807.GB85173%40csail.mit.edu#836599e3c18caf54052114d46f929cbb.
In the second case, I believe part of the problem could be due to how empty tables are predicate-locked, as discussed at https://dba.stackexchange.com/questions/246179/postgresql-serialisation-failure-on-different-ids.
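For anyone wanting to see that granularity directly, a generic query like this (not specific to our schema) lists the SIRead locks currently held and whether they are relation-, page- or tuple-level:

-- Predicate (SIRead) locks and their granularity:
SELECT locktype, relation::regclass AS relation, page, tuple, pid
FROM pg_locks
WHERE mode = 'SIReadLock'
ORDER BY relation, page, tuple;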

In our case, we use empty tables to keep complex invariants checked at the DB level: triggers insert into them, and deferrable constraints fail if the rows are still there at commit time (thus forcing the committer to run a "consistency-enforcing" job before committing).
I'm not sure if our use case is too particular, but we have found in general that having little data - which some of our tables do, and will for the foreseeable future - is sometimes worse than having lots of it, because index locking granularity is at least page-level.
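To make the pattern concrete, it looks roughly like this; every table, function and trigger name here is made up for the example:

-- Real data table (hypothetical):
CREATE TABLE account_entries (
    id         bigint PRIMARY KEY,
    account_id bigint NOT NULL,
    amount     numeric NOT NULL
);

-- Usually-empty table that queues accounts whose invariants must be re-checked:
CREATE TABLE pending_consistency_checks (
    account_id bigint PRIMARY KEY
);

-- A trigger on the real data queues the affected account:
CREATE FUNCTION queue_consistency_check() RETURNS trigger AS $$
BEGIN
    INSERT INTO pending_consistency_checks (account_id)
    VALUES (NEW.account_id)
    ON CONFLICT DO NOTHING;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER queue_check
AFTER INSERT OR UPDATE ON account_entries
FOR EACH ROW EXECUTE FUNCTION queue_consistency_check();

-- A deferred constraint trigger fires at COMMIT and fails if rows are still
-- queued, forcing the committer to run the consistency-enforcing job (which
-- deletes the queued rows) before committing:
CREATE FUNCTION assert_checks_done() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM pending_consistency_checks) THEN
        RAISE EXCEPTION 'consistency checks still pending';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER checks_done
AFTER INSERT ON pending_consistency_checks
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW EXECUTE FUNCTION assert_checks_done();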

So I have a few questions:
- Would index-key / index-gap locking avoid creating serialization anomalies for inserts of consecutive Ids that currently fall on the same index page? Is it on the roadmap?
- A colleague made a suggestion I found no mention of anywhere: would it be possible not to predicate-lock indexes on insertion into GENERATED AS IDENTITY columns, except of course for UPDATE, INSERT INTO .. OVERRIDING, ALTER TABLE .. RESTART WITH, or other similarly conflicting statements?
- Is there something that can be done for the problem with empty tables?

We currently use Postgres 11, and anything that could help us change how we approach the problem on our side is very much welcome too!

Thanks in advance,
Marcelo.
