Alvaro Herrera <alvherre@dcc.uchile.cl> writes:
> I think what Tom is concerned about is that this hasn't been tested
> enough with big datasets. Also, there is still some loss of index
> pages, but it's much less (orders of magnitude, I think) than there
> was before. This is because the index won't shrink "vertically".
The fact that we won't remove levels shouldn't be meaningful at all ---
I mean, if the index was once big enough to require a dozen btree
levels, and you delete everything, are you going to be upset that it
drops to 13 pages (one page per level, plus the metapage) rather than
2? I doubt it.
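For anyone who wants to watch this happen: the contrib pageinspect
module (available only in releases well after this thread, as are
generate_series() and CREATE EXTENSION) can show the btree metapage
directly. A sketch with made-up table and index names:

    CREATE TABLE t (k integer);
    CREATE INDEX t_k_idx ON t (k);
    INSERT INTO t SELECT i FROM generate_series(1, 1000000) AS g(i);

    DELETE FROM t;
    VACUUM t;

    -- "level" is the tree height; it stays at its high-water mark
    -- even after every key is gone, since each remaining level is
    -- just a single page.
    CREATE EXTENSION pageinspect;
    SELECT level FROM bt_metap('t_k_idx');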
The reason I'm waffling about whether the problem is completely fixed or
not is that the existing code will only remove and recycle completely
empty btree pages. As long as a page has even one key left on it, it
will stay where it is. So you could end up with ridiculously low fill
percentages. This could be fixed by collapsing adjacent
more-than-half-empty pages together, but we ran into a lot of problems
trying to do that concurrently. So I'm waiting to find out whether real
usage patterns run into a significant problem here or not.
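A rough way to spot such low-fill situations, with no special tools
(the index name here is hypothetical), is to compare the entry count
and page count that VACUUM and ANALYZE record in pg_class:

    -- A btree leaf page typically holds a few hundred entries, so a
    -- ratio far below that suggests sparsely filled pages (or pages
    -- already emptied but not yet reused).
    SELECT relpages, reltuples,
           round(reltuples / relpages) AS entries_per_page
      FROM pg_class
     WHERE relname = 'my_timestamp_idx';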
For example, if you have a timestamp index and you routinely clean out
all entries older than N days, you won't have a problem in 7.4. If
your pattern is to delete nine out of every ten entries (say you drop
minute-by-minute entries and keep only the hourly ones after a while),
then you might find the index fill getting unpleasantly low. We'll
have to see whether that's a problem in practice. I'm willing to
revisit the page-merging problem if it's shown to be a real practical
issue, but it looked hard enough that I think the development effort
is better spent elsewhere until it's proven necessary.
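To make the two patterns concrete, here's a sketch (the table, column,
and index names are all invented):

    CREATE TABLE log (ts timestamptz NOT NULL, payload text);
    CREATE INDEX log_ts_idx ON log (ts);

    -- Pattern 1: trim a contiguous range of keys.  The leaf pages at
    -- the low end of the index empty out completely, so VACUUM can
    -- remove and recycle them.
    DELETE FROM log WHERE ts < now() - interval '30 days';
    VACUUM log;

    -- Pattern 2: thin the data out, keeping only the on-the-hour
    -- rows.  Nearly every leaf page keeps at least one key, so no
    -- page ever becomes empty, none can be recycled, and each sits
    -- at roughly one-tenth full.
    DELETE FROM log
     WHERE ts < now() - interval '30 days'
       AND date_trunc('hour', ts) <> ts;
    VACUUM log;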
regards, tom lane