Re: Could postgres12 support millions of sequences? (like 10 million) - Mailing list pgsql-general

From pabloa98
Subject Re: Could postgres12 support millions of sequences? (like 10 million)
Date
Msg-id CAEjudX5=jxdxZmuSXgBCM+nHNhR=0OAcZGB5UaRB1JZe1YRNtA@mail.gmail.com
In response to Re: Could postgres12 support millions of sequences? (like 10 million)  (Rob Sargent <robjsargent@gmail.com>)
Responses Re: Could postgres12 support millions of sequences? (like 10 million)
Re: Could postgres12 support millions of sequences? (like 10 million)
List pgsql-general


On Thu, Mar 19, 2020 at 3:17 PM Rob Sargent <robjsargent@gmail.com> wrote:


On Mar 19, 2020, at 4:13 PM, pabloa98 <pabloa98@gmail.com> wrote:



On Thu, Mar 19, 2020 at 2:50 PM Rob Sargent <robjsargent@gmail.com> wrote:


> On Mar 19, 2020, at 3:36 PM, pabloa98 <pabloa98@gmail.com> wrote:
>
> Hello,
>
> My schema requires a counter for each combination of 2 values. Something like:
>
> CREATE TABLE counter(
>   "group" INT NOT NULL,   -- "group" is a reserved word, so it must be quoted
>   element INT NOT NULL,
>   seq_number INT NOT NULL DEFAULT 0,
>   PRIMARY KEY ("group", element)
> );
>
> For each entry in counter, aka for each (group, element) pair, the model requires a seq_number.
>
> If I use a table "counter", I could still have counter collisions between 2 transactions. I need true sequence behavior. Is that possible using a table like "counter", where the counter is incremented outside the transaction so it behaves like a sequence, without race conditions between concurrent transactions?
>
> The other option is to create sequences for each new pair of (group, element) using triggers. There are millions of pairs. So this approach will generate millions of sequences.
>
> How would a PostgreSQL database behave with millions of sequences in a schema? Would its performance degrade? Is there any negative impact?
>
> Regards
>
> Pablo
>
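(Regarding the table-based option above: it can be made race-free without sequences, because a single UPDATE ... RETURNING increments the counter atomically. A minimal sketch, assuming the counter table from the quoted message; the caveat is that the row lock is held until the transaction commits, which serializes concurrent inserts for the same pair:)

-- Atomically claim the next number for one (group, element) pair.
-- The row lock taken by UPDATE prevents two transactions from
-- receiving the same value; if the transaction rolls back, the
-- number is reused, unlike a sequence.
UPDATE counter
   SET seq_number = seq_number + 1
 WHERE "group" = 1 AND element = 1
RETURNING seq_number;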

To clarify, are you hoping for consecutive numbers as each row is added to the table, i.e. “serial”?

What is the intention of “seq_number”?


The idea is to have something like a serial sequence, but one for each (group, element) pair.

So that when we insert rows into another table, we get something like:


group, element, event_id, ...
1, 1, 1
1, 1, 2
1, 1, 3
2, 1, 1
1, 1, 4
1, 3, 1
1, 1, 5
1, 3, 2
2, 1, 2
2, 1, 3

The third column is the sequence number we get from the appropriate sequence created by the trigger.

I want to implement a variation of https://stackoverflow.com/a/30204854, and that will generate millions of sequences.
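(A minimal sketch of what such a trigger could look like, assuming the per-pair naming scheme counter_<group>_<element>_seq; the function and trigger names here are illustrative:)

CREATE OR REPLACE FUNCTION make_pair_sequence() RETURNS trigger AS $$
BEGIN
    -- Build the per-pair sequence name, e.g. counter_1_1_seq, and
    -- create it if it does not already exist.
    EXECUTE format('CREATE SEQUENCE IF NOT EXISTS counter_%s_%s_seq',
                   NEW."group", NEW.element);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER counter_make_seq
    AFTER INSERT ON counter
    FOR EACH ROW EXECUTE FUNCTION make_pair_sequence();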


Then I don’t think group/element can be a PRIMARY KEY.


Thank you for your answer. Let me explain:

Table "counter" will have triggers that will create a sequence with a new row is inserted called counter_1_1_seq, counter_2_1_seq, counter_1_3_seq, etc that will be used to insert values in **another** table. It will be used for nothing else. When we insert millions of pairs group/element, the trigger in that table will generate millions of sequences.

My question is how PostgreSQL will behave. Could it deal with millions of sequences? What about system operations such as vacuum, etc.?

Pablo
