On Tue, Mar 20, 2012 at 6:16 PM, Greg Stark <stark@mit.edu> wrote:
> On Tue, Mar 20, 2012 at 8:00 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>> Frankly that analysis didn't make any sense to me at the time.
>>> Comparing integers is fast, sure, but it's still slower than not
>>> having to do any comparison at all.
>>
>> I think you're underestimating how much it costs to call the
>> datatype-specific comparator. My belief is that it's wicked
>> expensive.
>
> I'm totally with you on the datatype-specific comparator being expensive.
>
> But we must be talking about two different scenarios. I don't see why
> Tom's algorithm was slower than Knuth's unless there was a bug. It
> seems to me it should perform exactly as many comparator calls but
> save the integer comparisons and the extra space for them.
It seemed that way to me, too, but it wasn't.
> In the current algorithm, Knuth's, it compares the new tuple against
> the most recently emitted tuple to set the run number then adds it to
> the bottom of the heap and sifts up. If it's from the current run it
> does a bunch of integer comparisons and skips past the next run
> quickly. If it's from the next run it sifts up to the right spot in
> the next run and if it hits the top of the next run it does a quick
> integer comparison and stops.
Check.
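To make the trade-off concrete, here's a toy Python model of that (run
number, key) ordering -- not the actual tuplesort code, just a sketch
where the cheap integer comparison on the run number short-circuits
the expensive datatype-specific comparator:

```python
import random

EXPENSIVE_CALLS = 0

def expensive_compare(a, b):
    """Stand-in for the datatype-specific comparator."""
    global EXPENSIVE_CALLS
    EXPENSIVE_CALLS += 1
    return (a > b) - (a < b)

def heap_compare(x, y):
    """Compare (run, tuple) pairs: cheap integer run check first,
    falling back to the expensive comparator only within a run."""
    if x[0] != y[0]:
        return -1 if x[0] < y[0] else 1
    return expensive_compare(x[1], y[1])

def sift_up(heap, i):
    """Standard binary-heap sift-up using heap_compare."""
    while i > 0:
        parent = (i - 1) // 2
        if heap_compare(heap[i], heap[parent]) < 0:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break

# Add a mix of current-run (0) and next-run (1) tuples to the
# bottom of the heap and sift each one up.
random.seed(42)
heap = []
for run in (0, 0, 0, 1, 1, 0, 1, 0):
    heap.append((run, random.random()))
    sift_up(heap, len(heap) - 1)
```

When a new tuple sifts up past a next-run tuple, only the integer
comparison fires; the expensive comparator runs only against tuples
in the same run.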
> In Tom's algorithm it would perform the same comparison against the
> recently emitted tuple to set the run number and either add it to the
> unsorted list or the bottom of the heap. If it's added to the unsorted
> list we're done,
Check.
> if it's added to the bottom of the heap it performs
> the same siftup it would have done above except that it skips the
> bottom few levels of the heap -- all of which were fast integer
> comparisons.
I think this is where the wheels come off. It's excessively
optimistic to suppose that we're going to skip the "bottom few levels"
of the heap. Half of the elements in the heap have no children at
all; and half of the remainder are parents but not grandparents. If
you assume, at a given point in time, that half of the tuples we're
holding onto are for the next run, then the heap is ONE level
shallower than it would otherwise have been, and you figure to save
ONE integer comparison. If we're relatively early in the run and only
one-tenth of the tuples we're holding onto are for the next run, then
the heap is barely shallower at all and we're scarcely avoiding anything.
But having even a small number of next-run tuples scattered through
the heap gives us the opportunity to "get lucky". If the heap size is
N, even lg N next-run tuples in the heap is potentially enough for us
to avoid most of the calls to a real comparator, if it so happens that
all of those tuples are our ancestors.
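A quick sanity check on the depth arithmetic above (illustrative
Python; N = 1024 is just an assumed heap size, not a measurement):

```python
import math

N = 1024  # assumed number of tuples the heap holds

def depth(n):
    """Depth of a complete binary heap holding n elements."""
    return max(1, math.ceil(math.log2(n + 1)))

full = depth(N)
for frac in (0.1, 0.5, 0.9):
    # If a fraction `frac` of the tuples belong to the next run,
    # the current-run portion of the heap is only this deep:
    current = depth(int(N * (1 - frac)))
    print(f"{frac:.0%} next-run: depth {full} -> {current}, "
          f"saving {full - current} level(s)")
```

With half the tuples in the next run the heap is exactly one level
shallower, so skipping the "bottom few levels" saves only about one
integer comparison until very late in the run.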
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company