On Tue, 8 May 2007, Guillaume Cottenceau wrote:
> Pomarede Nicolas <npomarede 'at' corp.free.fr> writes:
>
>> Hello to all,
>>
>> I have a table that is used as a spool for various events. Some
>> processes write data into it, and another process reads the resulting
>> rows, does some work, and deletes the rows that were just processed.
>>
>> As you can see, with hundreds of thousands of events a day, this
>> table will need to be vacuumed regularly to avoid taking too much
>> space (data and index).
>>
>> Note that processing rows is quite fast in fact, so at any time a
>> count(*) on this table rarely exceeds 10-20 rows.
>>
>>
>> For the indexes, a good way to bring them to a size corresponding to
>> the actual count(*) is to run 'reindex'.
>>
>> But for the data (dead rows), even running a VACUUM ANALYZE every day
>> is not enough, and it doesn't truncate the empty pages at the end, so
>> the data size stays on the order of 200-300 MB when only a few live
>> rows are there.
>
> As far as I know, you probably need to increase your
> max_fsm_pages, because your pg is probably not able to properly
> track unused pages between subsequent VACUUMs.
>
> http://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM
>
> Have you investigated this? It seems that you already know about
> the FSM stuff, according to your question about FSM and 8.3.
>
> You can also run VACUUM ANALYZE more frequently (after all, it
> doesn't lock the table).
Thanks, but max_fsm_pages is already set to a large enough value (I'm
running a VACUUM ANALYZE every day on the whole database, and I set
max_fsm_pages according to the figures reported in the last lines of the
vacuum output, so all pages are tracked in the FSM).
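
In case it's useful, that check is roughly the following (only a sketch;
the exact wording of the verbose output varies by version):

  -- vacuum the whole database; the last lines of the verbose output
  -- report how many FSM page slots are needed vs. currently configured
  VACUUM VERBOSE ANALYZE;

  -- compare against the current settings
  SHOW max_fsm_pages;
  SHOW max_fsm_relations;
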
Nicolas
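
PS: for anyone finding this thread later, the maintenance steps discussed
above boil down to roughly the following. The table name 'event_spool'
and the value are purely illustrative, and max_fsm_pages lives in
postgresql.conf (it can only be changed with a server restart on 8.2):

  -- shrink the indexes back to a size matching the live row count
  REINDEX TABLE event_spool;

  -- reclaim dead rows and refresh statistics; plain VACUUM doesn't take
  -- an exclusive lock, so it can be run frequently
  VACUUM ANALYZE event_spool;

  -- in postgresql.conf (illustrative value, requires a restart):
  -- max_fsm_pages = 500000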