Hello everybody,
I know we have discussed this issue before, but my view of the problem has changed in the past couple of weeks. Maybe other people have had similar experiences.
I have been working on a special-purpose application which basically looks like this:
- 150,000 tables (heavily constraint-excluded, for several reasons): small changes made once in a while
- XX medium-sized tables which are heavily modified
- total size: > 5 TB
My DB is facing around 600 million transactions a month. 85% of those contain at least some small modification, so I cannot save on XIDs.
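(To put that in perspective: 85% of 600 million is roughly 510 million XID-consuming transactions a month, so the ~2 billion XID wraparound horizon comes around about every four months.)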
My problem is that I cannot VACUUM FREEZE the 150k tables holding most of the data, as a couple of thousand transactions a day keep modifying that data.
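For reference, a quick sketch of how one can see which tables are closest to the horizon - nothing here is specific to my schema, it is just a plain catalog query:

select relname, age(relfrozenxid) as xid_age
from pg_class
where relkind = 'r'
order by age(relfrozenxid) desc
limit 20;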
But I also have trouble keeping myself away from transaction wraparound, as it is quite painful to vacuum that much data under heavy load - with any useful vacuum delay it simply takes too long.
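(For the archives, the kind of throttled run I mean is something like the following - the table name and the parameter values are just placeholders, not a recommendation:

set vacuum_cost_delay = 20;
set vacuum_cost_limit = 200;
vacuum freeze verbose some_big_table;

Even with moderate settings like these, a full pass over 5 TB takes longer than I can afford.)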
I basically have to vacuum the entire database far too often just to gain spare XIDs.
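(The way I watch the remaining headroom is simply the age of each database's datfrozenxid - something like:

select datname, age(datfrozenxid) from pg_database order by 2 desc;

Once that age creeps towards autovacuum_freeze_max_age, the database-wide freeze becomes unavoidable.)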
I suggest introducing a --with-long-xids configure flag which would give me 62 / 64 bits' worth of XIDs per vacuum of the entire database. This should be fairly easy to implement.
I am not too concerned about the size of the tuple header here - if we waste 500 GB of storage, I am totally fine with that.
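(Back of the envelope: if xmin and xmax each grow from 4 to 8 bytes, that is 8 extra bytes per tuple, so 500 GB of extra header space corresponds to something in the order of 60 billion rows - plausible for a 5 TB database full of small tuples.)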
Any chance of getting a properly written patch like that in?
Maybe somebody else has similar problems? Hannu Krosing maybe? :-P
Hans
--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql-support.de, www.postgresql-support.com