On Tuesday, May 15, 2012 08:29:11 AM Віталій Тимчишин wrote:
> 2012/5/13 Robert Klemme <shortcutter@googlemail.com>
>
> > On Sun, May 13, 2012 at 10:12 AM, Віталій Тимчишин <tivv00@gmail.com>
> >
> > wrote:
> > > 2012/5/11 Robert Klemme <shortcutter@googlemail.com>
> > >
> > >> On the contrary: what would be the /advantage/ of being able to create
> > >> millions of sequences? What's the use case?
> > >
> > > We are using sequences as statistics counters - they produce almost no
> > > performance impact and we can tolerate their non-transactional nature. I
> > > can imagine someone who wants to have a sequence per user or per row of
> > > some other relation.
> >
> > I can almost see the point. But my natural choice in that case would
> > be a table with two columns. Would that actually be so much less
> > efficient? Of course you'd have fully transactional behavior and thus
> > locking.
>
> We've had concurrency problems with the table solution (a counter that is
> updated by many concurrent queries), so we traded transactionality for
> speed. We are actually using this data to draw pretty graphs in Nagios, so
> it's quite OK. But we have only ~10 sequences, not millions :)
I would rather suggest going with a summing table if you need to do something
like that:
sequence_id | value
          1 | 3434334
          1 | 1
          1 | -1
          1 | 1
          1 | 1
...
You can then get the current value with SELECT SUM(value) ... WHERE
sequence_id = 1 against that table. For garbage collection you can delete
those rows and insert the newly summed-up value again.
That solution won't ever block if done right.
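
A minimal sketch of that pattern in SQL, just to make it concrete (the table
and column names here are placeholders I made up, not anything fixed):

    -- Summing table: every increment/decrement is a plain INSERT,
    -- so concurrent writers never contend on a single row.
    CREATE TABLE counter_deltas (
        sequence_id integer NOT NULL,
        value       bigint  NOT NULL
    );

    -- Bump counter 1; inserts don't block each other.
    INSERT INTO counter_deltas (sequence_id, value) VALUES (1, 1);

    -- Read the current value.
    SELECT SUM(value) FROM counter_deltas WHERE sequence_id = 1;

    -- Garbage collection: collapse the accumulated rows into one.
    -- Rows inserted by concurrent transactions after this snapshot
    -- are simply left in place, so the total stays correct.
    WITH old AS (
        DELETE FROM counter_deltas WHERE sequence_id = 1 RETURNING value
    )
    INSERT INTO counter_deltas (sequence_id, value)
    SELECT 1, COALESCE(SUM(value), 0) FROM old;

(The data-modifying CTE needs PostgreSQL 9.1 or later; on older versions you
can do the DELETE and re-INSERT in one transaction instead.)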
Andres