On Fri, Mar 03, 2006 at 11:40:40AM -0300, Alvaro Herrera wrote:
> Csaba Nagy wrote:
>
> > Now when the queue tables accumulate 1000 times their normal size in
> > dead space, I get performance problems. So tweaking the vacuum cost
> > delay doesn't buy me anything: vacuum per se isn't the performance
> > problem, the long run time on big tables is.
>
> So for you it would certainly help a lot to be able to vacuum the first
> X pages of the big table, stop, release locks, start a new transaction,
> continue with the next X pages, lather, rinse, repeat.
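Just so we're talking about the same thing, I imagine that loop looking
roughly like this (a sketch only; collect_dead_tuples,
scan_indexes_and_delete and remove_dead_tuples are made-up names, not
the real vacuum routines):

    /* Chunked vacuum sketch: process CHUNK_PAGES heap pages per
     * transaction so locks get released between chunks.  Helper
     * names are hypothetical. */
    #define CHUNK_PAGES 1000

    void
    vacuum_in_chunks(Relation rel, BlockNumber nblocks)
    {
        for (BlockNumber start = 0; start < nblocks; start += CHUNK_PAGES)
        {
            BlockNumber end = Min(start + CHUNK_PAGES, nblocks);

            StartTransactionCommand();
            collect_dead_tuples(rel, start, end);   /* scan heap chunk */
            scan_indexes_and_delete(rel);           /* full index scans! */
            remove_dead_tuples(rel, start, end);    /* reclaim heap space */
            CommitTransactionCommand();             /* locks released here */
        }
    }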
I think the issue is that even for that small section, you still need
to scan all the indexes to remove the entries pointing at the dead
tuples. So you actually cause more work, because you scan the indexes
once per portion of the table rather than just once at the end: split
the table into ten chunks and every index gets scanned ten times.
However, if this were combined with some optimistic index deletion
code, where the dead tuple itself is used to find its index entries
directly rather than via bulkdelete, maybe it'd be doable. That means
more overall I/O due to the per-tuple index lookups, but the
transactions become shorter. I say optimistic because if you don't find
the entry the quick way, you can always queue the tuple for a
bulkdelete later; hopefully that will be the uncommon case.
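A sketch of the shape I have in mind (build_index_key, IndexKey and
index_delete_exact are invented for illustration; they don't exist in
the tree):

    /* "Optimistic" path: instead of a full index scan, re-descend each
     * index using the dead tuple's key and look for an entry pointing
     * back at its TID.  All helper names here are hypothetical. */
    static void
    delete_index_entries_optimistic(Relation heap, Relation *indexes,
                                    int nindexes, HeapTuple deadtup,
                                    ItemPointer tid, List **bulkdel_queue)
    {
        for (int i = 0; i < nindexes; i++)
        {
            IndexKey key = build_index_key(indexes[i], heap, deadtup);

            if (!index_delete_exact(indexes[i], key, tid))
            {
                /* Couldn't find it the quick way; queue the TID once
                 * and stop -- the later bulkdelete pass will clean up
                 * this tuple in every index anyway. */
                *bulkdel_queue = lappend(*bulkdel_queue, tid);
                break;
            }
        }
    }

You pay one index descent per dead tuple per index, but no transaction
ever has to wait for a full pass over the indexes.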
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.