Re: [PATCH] Incremental sort (was: PoC: Partial sort) - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: [PATCH] Incremental sort (was: PoC: Partial sort)
Msg-id 20190624201601.iqy3ddig72l5lvbz@development
In response to Re: [PATCH] Incremental sort (was: PoC: Partial sort)  (James Coleman <jtc331@gmail.com>)
Responses Re: [PATCH] Incremental sort (was: PoC: Partial sort)
List pgsql-hackers
On Mon, Jun 24, 2019 at 01:00:50PM -0400, James Coleman wrote:
>On Mon, Jun 24, 2019 at 12:56 PM Simon Riggs <simon@2ndquadrant.com> wrote:
>>
>> On Mon, 24 Jun 2019 at 16:10, James Coleman <jtc331@gmail.com> wrote:
>>>
>>> On Thu, Jun 13, 2019 at 11:38:12PM -0400, James Coleman wrote:
>>> >I think the first thing to do is get some concrete numbers on performance if we:
>>> >
>>> >1. Only sort one group at a time.
>>> >2. Update the costing to prefer traditional sort unless we have very
>>> >high confidence we'll win with incremental sort.
>>> >
>>> >It'd be nice not to have to add additional complexity if at all possible.
>>>
>>> I've been focusing my efforts so far on seeing how much we can
>>> eliminate performance penalties (relative to traditional sort). It
>>> seems that if we can improve things enough there that we'd limit the
>>> amount of adjustment needed to costing -- we'd still need to consider
>>> cases where the lower startup cost results in picking significantly
>>> different plans in a broad sense (presumably due to lower startup cost
>>> and the ability to short circuit on a limit). But I'm hopeful then we
>>> might be able to avoid having to consult MCV lists (and we wouldn't
>>> have that available in all cases anyway).
>>>
>>> As I see it the two most significant concerning cases right now are:
>>> 1. Very large batches (in particular where the batch is effectively
>>> all of the matching rows such that we're really just doing a standard
>>> sort).
>>> 2. Many very small batches.
>>
>>
>> What is the specific use case for this? This sounds like quite a general case.
>
>They are both general cases in some sense, but the concerns lie mostly
>with what happens when they're unexpectedly encountered. For example,
>the expected row count or group size might be off by a good bit,
>forcing us to effectively sort all (or most) of the rows.
>
>If we can get the performance to a point where that misestimated row
>count or group size doesn't much matter, then ISTM including the patch
>becomes a much more obvious total win.
>

Yes, that seems like a reasonable approach. Essentially, we're trying to
construct plausible worst case examples, and then minimize the overhead
compared to regular sort. If we get sufficiently close, then it's fine
to rely on somewhat shaky stats - like group size estimates.
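For readers following the thread, the technique under discussion can be
sketched roughly as follows (a hypothetical Python simplification, not the
patch's actual C implementation): the input is already sorted on a prefix of
the desired sort keys, so only each group of equal prefix values needs to be
buffered and sorted. The two worst cases above correspond to one giant group
(degenerating into a full sort) and many single-row groups (per-group
overhead dominating).

```python
from itertools import groupby
from operator import itemgetter

def incremental_sort(rows, prefix_cols, rest_cols):
    """Rows arrive sorted on `prefix_cols`; sort each group of equal
    prefix values on the remaining `rest_cols` columns."""
    prefix = itemgetter(*prefix_cols)
    rest = itemgetter(*rest_cols)
    for _, group in groupby(rows, key=prefix):
        # Only this batch is buffered and sorted, so memory use is
        # proportional to the group size, not the full input -- unless
        # one group happens to contain nearly all of the rows.
        yield from sorted(group, key=rest)

# Input sorted on column 0; desired ordering is (column 0, column 1).
rows = [(1, 'b'), (1, 'a'), (2, 'c'), (2, 'a'), (2, 'b')]
print(list(incremental_sort(rows, [0], [1])))
# [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (2, 'c')]
```

This also shows why a lower startup cost matters: the first group can be
emitted before the rest of the input has been read, which is what allows
short-circuiting under a LIMIT.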


>> Do we know something about the nearly-sorted rows that could help us?
>> Or could we introduce some information elsewhere that would help with
>> the sort?
>>
>> Could we for-example, pre-sort the rows block by block, or filter out
>> the rows that are clearly out of order, so we can re-merge them
>> later?
>
>I'm not sure what you mean by "block by block"?
>

I'm not sure what "block by block" means either.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 


