David Mitchell <david.mitchell@telogis.com> writes:
>> If you *are* using 8.0 then we need to look closer.
> Sorry, I should have mentioned, I am using PG 8.0. Also, although this
> is a 'mass insert', it's only kind of mass. While there are millions of
> rows, they are inserted in blocks of 500 (with a commit in between).
> We're thinking we might set up vacuum_cost_limit to around 100 and put
> vacuum_cost_delay at 100 and then just run vacuumdb in a cron job every
> 15 minutes or so, does this sound silly?
It doesn't sound completely silly, but if you are doing inserts and not
updates/deletes then there's not anything for VACUUM to do, really.
An ANALYZE command might get the same result with less effort.
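A minimal sketch of that cron-based approach, swapped to ANALYZE since the workload is insert-only (the database name and the 15-minute schedule are placeholders, not from this thread):

```shell
# crontab entry: every 15 minutes, refresh planner statistics only.
# ANALYZE just resamples the table; unlike VACUUM it has no dead
# tuples to reclaim here, so it is much cheaper on an insert-only load.
*/15 * * * *  psql -d mydb -c 'ANALYZE' >/dev/null
```

If only one table is growing, `ANALYZE tablename` keeps the cost down further.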
I am, however, still wondering why 8.0 doesn't get it right without help.
Can you try a few EXPLAIN ANALYZEs as the table grows and watch whether
the cost estimates change?
(Also, if this is actually 8.0.0 and not a more recent dot-release,
I believe there were some bug fixes in this vicinity in 8.0.2.)
regards, tom lane