On Thu, Oct 1, 2009 at 5:08 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> The elephant in the room here is that if the relation is a million
>> pages of which 1-100,000 and 1,000,000 are in use, no amount of bias
>> is going to help us truncate the relation unless every tuple on page
>> 1,000,000 gets updated or deleted.
>
> Well, there is no way to move a tuple across pages in a user-invisible,
> non-blocking fashion, so our ability to do something automatic about the
> above scenario is limited. The discussion at the moment is about ways
> of reducing the probability of getting into that situation in the first
> place. That doesn't preclude also providing some more-invasive tools
> that people can use when they do get into that situation; but let's
> not let I-want-a-magic-pony syndrome prevent us from doing anything
> at all.

That's fair enough, but it's our usual practice, before implementing
a feature or code change, to consider what fraction of users it will
actually help and by how much. If there's a way we can improve the
system's behavior in this area, I'm all in favor of it, but I have
pretty modest expectations for how much real-world benefit will
ensue. I suspect it's pretty common for large tables to contain a
core of infrequently updated records, and even a very light
smattering of those, distributed randomly, will be enough to stop
table shrinkage before it gets very far.

...Robert