Vivek Khera wrote:
>>>>>> "BW" == Bruno Wolff, <Bruno> writes:
>
>>> to see it incremental. This would result in pretty much near zero
>>> internal fragmentation, I think.
>
> BW> Why do you care about the details of the implementation (rather than
> BW> the performance)? If it were faster to do it that way, that's how it would
> BW> have been done in the first place. The cost of doing the above is almost
> BW> certainly going to be an overall performance loser.
>
> I care for the performance. And how are you so sure that it was
> faster the way it is now? Are you sure it was not done this way
> because of ease of implementation?
Aside from some locking issues when doing btree deletes as opposed to
scan and insert operations, there is no direct pointer from a data
(heap) row to its index entries. VACUUM remembers all the ctids it
removed from the heap in its batch run and then does a full scan of the
indexes to remove all the index entries pointing to those ctids. Your
idea is (so far) lacking a place to remember all the single removed
rows, and I assume you're not planning to pay the cost of a full scan
over all indexes of a table to reclaim the space of one data row, are
you?
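To make the batching argument concrete, here is a minimal sketch (my own
illustration, not PostgreSQL source code) of why the full index scan is
only affordable when its cost is amortized over many dead rows. The heap
and index representations, the `vacuum` function, and the `dead` flag
are all simplifications I invented for the example:

```python
# Hedged sketch (not PostgreSQL's actual code): index entries point at
# heap row ids (ctids), but a heap row carries no back-pointer to its
# index entries. The only way to find them is to scan the index, so
# VACUUM collects dead ctids in a batch and scans each index once.

def vacuum(heap, indexes):
    """Remove dead heap rows, then purge their index entries in one pass."""
    # Phase 1: scan the heap, collecting the ctids of dead rows.
    dead_ctids = {ctid for ctid, row in heap.items() if row["dead"]}
    for ctid in dead_ctids:
        del heap[ctid]

    # Phase 2: one full scan per index removes every entry pointing at
    # a collected ctid -- the scan cost is shared by the whole batch.
    # Doing this per deleted row would mean one full index scan each.
    for index in indexes:
        for key in list(index):
            index[key] -= dead_ctids
            if not index[key]:
                del index[key]
    return dead_ctids

# Tiny demo: two rows (one dead) and a single index on "val".
heap = {
    (0, 1): {"val": "a", "dead": True},
    (0, 2): {"val": "b", "dead": False},
}
index_on_val = {"a": {(0, 1)}, "b": {(0, 2)}}

removed = vacuum(heap, [index_on_val])
print(removed)       # ctids reclaimed in this batch
print(index_on_val)  # only entries for live rows remain
```

The point of the sketch: phase 2 touches every index key regardless of
how many ctids were collected, which is exactly why paying it once per
single-row delete would be ruinous.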
>
> Seriously, how much slower can it be if the backend were to do the
> checking for external references upon updating/deleting a row? The
> cost would be distributed across time as opposed to concentrated at
> once within a vacuum process. I am fairly certain it would reduce
> disk bandwidth requirements since at least one necessary page will
> already be in memory.
I am fairly certain that holds true for tables without indexes only.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #