On 10/2/15 4:08 PM, Jonathan Vanasco wrote:
> Using an even distribution as an example, the average width of the keys can increase by 2 places:
Assuming you're using int4 or int8, that doesn't matter. The only
other possible issue I can think of would be it somehow throwing the
planner stats off, but I think the odds of that are very small.
>> Sequences are designed to be extremely fast to assign. If you ever did find a single sequence being a bottleneck,
>> you could always start caching values in each backend. I think it'd be hard (if not impossible) to turn a single
>> global sequence into a real bottleneck.
> I don't think so either, but everything I've read has been theoretical -- so I was hoping that someone here can give
> the "yeah, no issue!" from experience. The closest production stuff I found was via the BDR plugin (only relevant
> thing that came up during search) and there seemed to be anecdotal accounts of issues with sequences becoming
> bottlenecks -- but that was from their code that pre-generated allowable sequence ids on each node.
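For reference, the per-backend caching mentioned above is just the sequence's
CACHE setting; a minimal example (the sequence name is made up):

  ALTER SEQUENCE global_id_seq CACHE 50;  -- each backend grabs 50 values at a time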
You could always run a custom pgbench script that runs a prepared SELECT
nextval() and compare that to a prepared SELECT currval(). You might
notice a difference at higher client counts with no caching, but I doubt
you'd see that much difference with caching turned on. Roughly along these
lines (sequence names, cache size, and the database name are placeholders):
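  -- setup (psql)
  CREATE SEQUENCE seq_nocache;             -- default CACHE 1
  CREATE SEQUENCE seq_cached CACHE 100;    -- each backend pre-allocates 100 values

  -- nextval_nocache.sql
  SELECT nextval('seq_nocache');

  -- nextval_cached.sql
  SELECT nextval('seq_cached');

  # run each script with prepared statements at increasing client counts
  pgbench -n -M prepared -f nextval_nocache.sql -c 64 -j 8 -T 60 testdb
  pgbench -n -M prepared -f nextval_cached.sql  -c 64 -j 8 -T 60 testdb

For the currval() baseline, keep in mind that currval() errors out until
nextval() has been called in the same session, so that script would need a
nextval() call at the top (or some other way to prime each connection).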
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com