Re: Could postgres12 support millions of sequences? (like 10 million) - Mailing list pgsql-general

From pabloa98
Subject Re: Could postgres12 support millions of sequences? (like 10 million)
Date
Msg-id CAEjudX4wOd7Ry+dMg51djL99mFMj5WDDq5rUh1dDHchrRy1S4Q@mail.gmail.com
In response to Re: Could postgres12 support millions of sequences? (like 10 million) (rob stone <floriparob@gmail.com>)
List pgsql-general


On Fri, Mar 20, 2020 at 5:39 AM rob stone <floriparob@gmail.com> wrote:
Hello,

On Thu, 2020-03-19 at 14:36 -0700, pabloa98 wrote:
> Hello,
>
> My schema requires a counter for each combination of 2 values.
> Something like:
>
> CREATE TABLE counter(
>     "group" INT NOT NULL,    -- "group" is a reserved word, so it needs quoting
>     element INT NOT NULL,
>     seq_number INT NOT NULL DEFAULT 0,
>     PRIMARY KEY ("group", element)
> );
>
> For each entry in counter, aka for each (group, element) pair, the
> model requires a seq_number.
>
> If I use a table "counter", I could still have counter collisions
> between 2 transactions. I need true sequence behavior. Is that
> possible using a table like "counter", where the counter could be
> incremented outside the transaction so it behaves like a sequence,
> without race conditions between concurrent transactions?
>
> The other option is to create a sequence for each new pair of (group,
> element) using triggers (a sketch of this option follows the quoted
> message). There are millions of pairs, so this approach will generate
> millions of sequences.
>
> How would a PostgreSQL database behave with millions of sequences in
> a schema? Would its performance degrade? Is there any other negative
> impact?
>
> Regards
>
> Pablo
>
>
>
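For reference, the "one sequence per pair" option mentioned above would look roughly like the sketch below. It is only a sketch: the names (create_pair_sequence, seq_<group>_<element>) are made up for illustration, and it assumes each pair is registered by an insert into the counter table.

CREATE OR REPLACE FUNCTION create_pair_sequence() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- One sequence per (group, element) pair, named e.g. seq_7_42.
    EXECUTE format('CREATE SEQUENCE IF NOT EXISTS seq_%s_%s',
                   NEW."group", NEW.element);
    RETURN NEW;
END;
$$;

CREATE TRIGGER counter_create_seq
AFTER INSERT ON counter
FOR EACH ROW
EXECUTE FUNCTION create_pair_sequence();

-- Later, to draw the next number for a pair:
-- SELECT nextval(format('seq_%s_%s', 7, 42));

Whether millions of such sequences are practical is exactly what the quoted question asks.
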


Have you tried the following:-

BEGIN;
SELECT (seq_number + 1) FROM counter
 WHERE "group" = $1 AND element = $2
 FOR UPDATE;

If that finds no row, then you need to insert into table counter with
seq_number = 1; otherwise update it with (seq_number + 1).
Then do the insert into whatever table needs the seq_number value.

If no errors are detected, then COMMIT, else ROLLBACK.
It is all in one transaction block.
We use this approach to assign invoice numbers.
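
Wrapped up as a function, the flow described above would look roughly like this. A sketch only, assuming the counter table from the quoted message; next_seq_number and the parameter names are made up for illustration.

CREATE OR REPLACE FUNCTION next_seq_number(p_group INT, p_element INT)
RETURNS INT
LANGUAGE plpgsql AS $$
DECLARE
    v_next INT;
BEGIN
    -- Lock the row for this pair and compute the next value.
    SELECT seq_number + 1
      INTO v_next
      FROM counter
     WHERE "group" = p_group
       AND element = p_element
       FOR UPDATE;

    IF NOT FOUND THEN
        -- First use of this pair: seed it with 1. A concurrent first
        -- insert for the same pair fails on the primary key, and the
        -- caller rolls back and retries, as described above.
        INSERT INTO counter ("group", element, seq_number)
        VALUES (p_group, p_element, 1);
        RETURN 1;
    END IF;

    UPDATE counter
       SET seq_number = v_next
     WHERE "group" = p_group
       AND element = p_element;

    RETURN v_next;
END;
$$;

-- Inside the same transaction as the insert that consumes the number:
-- SELECT next_seq_number(7, 42);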


We do that in other parts of the system with another set of tables; those have higher concurrency than the problem I bring to the list.
The issue with this approach is that the client and the PostgreSQL server exchange too many messages, which increases latency.

We replaced it with a lock on the row so we can increment the counter and commit the transaction. The problem with this approach is that it serializes access to the table, which has performance implications: it is effectively a global serialization of updates on this table across all the servers in the cluster.
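
For what it's worth, the read-increment-write can also be collapsed into a single statement, which removes the extra round trips, although concurrent writers on the same (group, element) row still serialize on the row lock. A sketch, assuming the counter table above:

INSERT INTO counter AS c ("group", element, seq_number)
VALUES ($1, $2, 1)
ON CONFLICT ("group", element)
DO UPDATE SET seq_number = c.seq_number + 1
RETURNING seq_number;

This returns 1 for a brand-new pair and the incremented value otherwise.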


 
You could pre-seed table counter when a group/element pair is created
for the first time.
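
For example (illustrative only), the pre-seed could be a plain insert made idempotent with ON CONFLICT:

INSERT INTO counter ("group", element, seq_number)
VALUES ($1, $2, 0)
ON CONFLICT ("group", element) DO NOTHING;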

HTH,
Robert

