Re: Could postgres12 support millions of sequences? (like 10 million) - Mailing list pgsql-general

From pabloa98
Subject Re: Could postgres12 support millions of sequences? (like 10 million)
Msg-id CAEjudX5Zq=do9ZGUR-hFPGYp31hCSQUr20ifkeDOUywZpAngjw@mail.gmail.com
In response to Re: Could postgres12 support millions of sequences? (like 10 million)  (Adrian Klaver <adrian.klaver@aklaver.com>)
List pgsql-general

> Nothing I saw that said int could not become bigint.


My bad. The code cannot be a bigint. Or rather, it could be a bigint between 1 and 99999999 :)
I thought it was not important. The code can be a number from 1 to 99999999 (so an int will be OK), assigned in roughly ascending order. This is a business requirement. The code must not change if a row is deleted, which rules out computing it with window functions, at least for this problem. Gaps are acceptable as long as they are small (similar to a serial column after a rolled-back transaction). We are more concerned about code-generation collisions (for example, two transactions calculating the same code value) than about gaps.

For each pair (group, element), the code, once assigned, must remain constant. This is because the group, the element, and the code together become a business-related id. So:

group, element, code = bid
1, 3, 1 = bid 131
2, 1, 1 = bid 211
etc

This calculated bid follows the convention described here and is used as a reference in other tables. Therefore it must not change.
All this weirdness comes from business requirements. I would be fine with a classic serial column, but it would not map to our business rules.

Regarding the implementation: our concern is to choose something that is not slow (it does not need to be super fast, because the same transaction performs more operations on other tables). I thought that creating/dropping sequences could be a solution, but I was not sure how Postgres behaves with millions of sequences.
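For concreteness, the "one sequence per (group, element)" idea could be sketched roughly like this. The `seq_<group>_<element>` naming scheme and the `get_code` function name are mine, just for illustration:

```sql
-- Sketch: one sequence per (group, element) pair, created lazily.
-- The naming scheme seq_<g>_<e> and the function name are illustrative.
CREATE OR REPLACE FUNCTION get_code(g int, e int) RETURNS bigint AS $$
DECLARE
    seq_name text := format('seq_%s_%s', g, e);
BEGIN
    -- Create the sequence on first use; MAXVALUE enforces the 8-digit cap.
    EXECUTE format(
        'CREATE SEQUENCE IF NOT EXISTS %I MAXVALUE 99999999',
        seq_name);
    RETURN nextval(seq_name);
END;
$$ LANGUAGE plpgsql;
```

With 10 million pairs this means 10 million catalog entries in pg_class, which is the part I am unsure about performance-wise.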

If there is another solution, different from millions of sequences, that does not block, generates few and small gaps, and never generates collisions, then I will happily use it.
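One alternative I have seen suggested, which avoids creating millions of sequences, is a plain counter table driven by `INSERT ... ON CONFLICT`. Row-level locking serializes concurrent callers on the same (group, element) pair, so there are no collisions, and the only gaps come from rolled-back transactions. A sketch, with table and column names of my own invention:

```sql
-- Counter table: one row per (group, element) pair.
CREATE TABLE pair_codes (
    grp     int NOT NULL,
    element int NOT NULL,
    code    int NOT NULL DEFAULT 1 CHECK (code <= 99999999),
    PRIMARY KEY (grp, element)
);

-- Atomically fetch the next code for a pair.  Concurrent transactions
-- touching the same pair queue on the row lock instead of colliding.
INSERT INTO pair_codes (grp, element)
VALUES (1, 3)
ON CONFLICT (grp, element)
DO UPDATE SET code = pair_codes.code + 1
RETURNING code;
```

The trade-off versus sequences: same-pair writers block each other until commit (cross-pair writers do not), but a rollback returns the number instead of leaving a gap.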

I hope I described the problem completely.

Pablo


