Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
> > Marko Kreen wrote:
> >> I've experienced something similar. The cause turned out to be a
> >> combination of overcommit=off, a big maintenance_work_mem, and
> >> several parallel vacuums on fast-changing tables. It seems that
> >> VACUUM allocates the full maintenance_work_mem up front, regardless
> >> of the actual size of the table.
>
> > Hmm. Maybe we should have VACUUM estimate the maximum amount of
> > memory it could actually use, given the size of the table, and
> > allocate only that much.
>
> Yeah --- given the likelihood of parallel vacuum activity in 8.3,
> it'd be good to not expend memory we certainly aren't going to need.
>
> We could set a hard limit at RelationGetNumberOfBlocks *
> MaxHeapTuplesPerPage TIDs, but that is *extremely* conservative
> (it'd work out to allocating about a quarter of the table's actual size
> in bytes, if I did the math right).
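For reference, the "quarter of the table" figure checks out with some quick
arithmetic. The constants below are assumptions for the default 8 kB block
size (in the backend they would come from BLCKSZ, MaxHeapTuplesPerPage, and
sizeof(ItemPointerData)); this is just a throwaway sanity check, not real
server code:

```c
/* Back-of-the-envelope check of the worst-case TID allocation.
 * Constants assume the default 8 kB block size; in the backend they
 * are BLCKSZ, MaxHeapTuplesPerPage, and sizeof(ItemPointerData). */
enum
{
	BLOCK_SIZE = 8192,			/* BLCKSZ */
	TUPLES_PER_PAGE = 291,		/* MaxHeapTuplesPerPage at 8 kB */
	TID_BYTES = 6				/* sizeof(ItemPointerData) */
};

/* Worst-case dead-TID array bytes per heap page under the hard limit */
static int
tid_bytes_per_page(void)
{
	return TUPLES_PER_PAGE * TID_BYTES;		/* 1746 bytes per 8192-byte page */
}
```

1746 bytes of TID array per 8192-byte heap page is a bit over 21%, which
agrees with the "about a quarter of the table's actual size" estimate.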
Another idea is to consider applying this patch:
http://thread.gmane.org/gmane.comp.db.postgresql.devel.patches/19384/focus=19393
which is said to reduce the amount of memory needed to store the TID
array.
> Given that the worst-case consequence is extra index vacuum passes,
> which don't hurt that much when a table is small, maybe some smaller
> estimate like 100 TIDs per page would be enough. Or, instead of
> using a hard-wired constant, look at pg_class.reltuples/relpages
> to estimate the average tuple density ...
This sounds like a reasonable compromise.
--
Alvaro Herrera Valdivia, Chile ICBM: S 39° 49' 18.1", W 73° 13' 56.4"
Management by consensus: I have decided; you concede.
(Leonard Liu)