Hi,
On Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:
> [...]
> We are running a PostgreSQL 8.4 database, with two tables containing a
> lot (> 1 million) of moderately small rows. They have some btree indexes,
> and one of the two tables also has a gin full-text index.
>
> We noticed that the autovacuum process tends to use a lot of memory,
> pushing the postgres process to nearly 1 GB while it's running.
>
Well, it could be its own memory (see maintenance_work_mem) or shared
memory, so it's hard to say whether it's really an issue or not.
BTW, how much memory do you have on this server? What values are used
for shared_buffers and maintenance_work_mem?
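
For instance, from psql, something like this would show the relevant
settings (these commands only read the running configuration):

  -- How much memory one (auto)vacuum worker may use for its work
  SHOW maintenance_work_mem;
  -- Size of the shared buffer cache
  SHOW shared_buffers;
  -- How many autovacuum workers can run at the same time
  SHOW autovacuum_max_workers;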
> I looked in the documentation, but I didn't find the information: do
> you know how to estimate the memory required by autovacuum if we
> increase the number of rows? Is it linear? Logarithmic?
>
Autovacuum should use up to maintenance_work_mem per worker, so it
depends on how much memory you set for that parameter.
> Also, is there a way to reduce that memory usage?
Reduce maintenance_work_mem. Of course, if you do that, VACUUM could
take a lot longer to execute.
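
If you lower it, note that maintenance_work_mem has to be changed in
postgresql.conf for autovacuum to pick it up (a session-level SET only
affects manual VACUUM); the value below is only an illustration:

  # postgresql.conf -- illustrative value, tune for your workload
  maintenance_work_mem = 256MB

  -- then reload the configuration, e.g. from psql:
  SELECT pg_reload_conf();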
> Would running the
> autovacuum more frequently lower its memory usage?
>
Yes.
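On 8.4 you can do that per table with the autovacuum storage
parameters; for example (the table name and threshold below are made
up, tune them for your data):

  -- Trigger autovacuum after ~1% of the rows have changed instead of
  -- the default 20% (hypothetical table name)
  ALTER TABLE my_big_table
    SET (autovacuum_vacuum_scale_factor = 0.01);

More frequent runs mean fewer dead rows pile up between runs, so each
run has less to track and the table stays leaner.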
--
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com