Re: Sequence Access Method WIP - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Sequence Access Method WIP
Date
Msg-id 20141204180129.GE27550@alap3.anarazel.de
In response to Re: Sequence Access Method WIP  (José Luis Tallón <jltallon@adv-solutions.net>)
List pgsql-hackers
> >>May I possibly suggest a file-per-schema model instead? This approach would
> >>certainly solve the excessive i-node consumption problem that --I guess--
> >>Andres is trying to address here.
> >I don't think that really has any advantages.
> 
> Just spreading the I/O load, nothing more, it seems:
> 
> Just to elaborate a bit on the reasoning, for completeness' sake:
> Given that a relation's segment maximum size is 1GB, we'd have
> (1048576/8)=128k sequences per relation segment.
> Arguably, not many real use cases will have that many sequences.... save for
> *massively* multi-tenant databases.
> 
> The downside being that all that random I/O --- in general, it can't really
> be sequential unless there are very, very few sequences --- can't be spread
> to other spindles. Create a "sequence_default_tablespace" GUC + ALTER
> SEQUENCE SET TABLESPACE, to use an SSD for this purpose maybe?
>  (I could take a shot at the patch, if deemed worthwhile)
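The capacity arithmetic quoted above can be checked with a quick sketch. It assumes PostgreSQL's default 8 kB block size (BLCKSZ) and 1 GB relation segment size, with one sequence stored per page, as the discussion implies; the names below are illustrative, not from any PostgreSQL source.

```python
# Sanity check for "sequences per relation segment", assuming the
# defaults discussed above: 1 GB segments, 8 kB pages, one sequence
# stored per page.
SEGMENT_BYTES = 1 << 30      # 1 GB relation segment (default)
PAGE_BYTES = 8 * 1024        # default BLCKSZ of 8 kB

pages_per_segment = SEGMENT_BYTES // PAGE_BYTES
print(pages_per_segment)     # 131072 pages, i.e. 128k sequences per segment
```

This matches the quoted figure of (1048576/8) = 128k, since 1 GB is 1048576 kB and each page is 8 kB.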

I think that's just so far outside the sane use cases that I really don't
see us adding complexity to rein it in. If your frequently used
sequences get flushed out to disk by anything but the checkpointer, the
schema is just badly designed...

Greetings,

Andres Freund

--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


