Re: [PATCH] Incremental sort (was: PoC: Partial sort) - Mailing list pgsql-hackers

From: James Coleman
Subject: Re: [PATCH] Incremental sort (was: PoC: Partial sort)
Msg-id: CAAaqYe99B9cRLU+FWJudNx-6O4TgnWYCrRtsv2NcUs4TzG5qrw@mail.gmail.com
In response to: Re: [PATCH] Incremental sort (was: PoC: Partial sort) (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On Tue, Mar 31, 2020 at 10:44 PM Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
>
> On Tue, Mar 31, 2020 at 10:12:29PM -0400, James Coleman wrote:
> >On Tue, Mar 31, 2020 at 9:59 PM Tomas Vondra
> ><tomas.vondra@2ndquadrant.com> wrote:
> >>
> >> On Tue, Mar 31, 2020 at 08:42:47PM -0400, James Coleman wrote:
> >> >On Tue, Mar 31, 2020 at 8:38 PM Tomas Vondra
> >> ><tomas.vondra@2ndquadrant.com> wrote:
> >> >>
> >> >> On Tue, Mar 31, 2020 at 08:11:15PM -0400, James Coleman wrote:
> >> >> >On Tue, Mar 31, 2020 at 7:56 PM Tomas Vondra
> >> >> ><tomas.vondra@2ndquadrant.com> wrote:
> >> >> >>
> >> >> >> On Tue, Mar 31, 2020 at 07:09:04PM -0400, James Coleman wrote:
> >> >> >> >On Tue, Mar 31, 2020 at 6:54 PM Tomas Vondra
> >> >> >> ><tomas.vondra@2ndquadrant.com> wrote:
> >> >> >> >>
> >> >> >> >> On Tue, Mar 31, 2020 at 06:35:32PM -0400, Tom Lane wrote:
> >> >> >> >> >Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> >> >> >> >> >> In general, I think it'd be naive to assume we can make the planner
> >> >> >> >> >> smarter with no extra overhead spent on planning, or that we can never
> >> >> >> >> >> accept patches adding even tiny overhead. With that approach we'd
> >> >> >> >> >> probably end up with
> >> >> >> >> >> a trivial planner that generates just a single query plan, because
> >> >> >> >> >> that's going to be the fastest planner. A realistic approach needs to
> >> >> >> >> >> consider both the planning and execution phase, and benefits of this
> >> >> >> >> >> patch seem to be clear - if you have queries that do benefit from it.
> >> >> >> >> >
> >> >> >> >> >I think that's kind of attacking a straw man, though.  The thing that
> >> >> >> >> >people push back on, or should push back on IMO, is when a proposed
> >> >> >> >> >patch adds significant slowdown to queries that it has no or very little
> >> >> >> >> >hope of improving.  The trick is to do expensive stuff only when
> >> >> >> >> >there's a good chance of getting a better plan out of it.
> >> >> >> >> >
> >> >> >> >>
> >> >> >> >> Yeah, I agree with that. I think the main issue is that we don't really
> >> >> >> >> know what the "expensive stuff" is in this case, so it's not really
> >> >> >> >> clear how to be smarter :-(
> >> >> >> >
> >> >> >> >To add to this: I agree that ideally you'd check cheaply to know
> >> >> >> >you're in a situation that might help, and then do more work. But here
> >> >> >> >the question is always going to be simply "would we benefit from an
> >> >> >> >ordering, and, if so, do we have it already partially sorted". It's
> >> >> >> >hard to imagine that reducing much conceptually, so we're left with
> >> >> >> >optimizations of that check.
> >> >> >> >
> >> >> >>
> >> >> >> I think it depends on what exactly is the expensive part. For example if
> >> >> >> it's the construction of IncrementalSort paths, then maybe we could try
> >> >> >> to do a quick/cheap check of whether the path can even be useful by
> >> >> >> estimating the cost and only then building the path.
> >> >> >>
> >> >> >> That's what we do for join algorithms, for example - we first compute
> >> >> >> initial_cost_nestloop and only when that seems cheap enough we do the
> >> >> >> more expensive stuff.
> >> >> >>
> >> >> >> But I'm not sure the path construction is the expensive part, as it
> >> >> >> should be disabled by enable_incrementalsort=off. But the regression
> >> >> >> does not seem to disappear, at least not entirely.
> >> >> >>
> >> >> >>
> >> >> >> >> One possibility is that it's just one of those regressions due to change
> >> >> >> >> in binary layout, but I'm not sure how to verify that.
> >> >> >> >
> >> >> >> >If we are testing with a case that can't actually add more paths (due
> >> >> >> >to it checking the GUC before building them), doesn't that effectively
> >> >> >> >leave one of these two options:
> >> >> >> >
> >> >> >> >1. Binary layout/cache/other untraceable change, or
> >> >> >> >2. Changes due to refactored function calls.
> >> >> >> >
> >> >> >>
> >> >> >> Hmm, but in case of (1) the overhead should be there even with tests
> >> >> >> that don't really have any additional paths to consider, right? I've
> >> >> >> tried with such a test (single table with no indexes) and I don't quite
> >> >> >> see any regression (maybe ~1%).
> >> >> >
> >> >> >Not necessarily, if the cost is in sort costing or useful pathkeys
> >> >> >checking, right? We still run that code even without incremental sort,
> >> >> >but it's changed from master.
> >> >> >
> >> >>
> >> >> Ah, I should have mentioned I've done most of the tests on just the
> >> >> basic incremental sort patch (0001+0002), without the additional useful
> >> >> paths. I initially tested the whole patch series, but after discovering
> >> >> the regression I removed the last part (which I suspected might be the
> >> >> root cause). But the regression is still there, so it's not that.
> >> >>
> >> >> It might be in the reworked costing, yeah. But then I'd expect those
> >> >> functions to show up in the perf profile.
> >> >
> >> >Right. I'm just grasping at straws on that.
> >> >
> >> >> >> (2) might have an impact, but I don't see any immediate suspects. Did you
> >> >> >> have some functions in mind?
> >> >> >
> >> >> >I guess this is where the lines blur: I didn't see anything obvious
> >> >> >either, but the changes to sort costing...should probably not have
> >> >> >real impact...but...
> >> >> >
> >> >>
> >> >> :-(
> >> >>
> >> >> >> BTW I see the patch adds pathkeys_common but it's not called from
> >> >> >> anywhere. It's probably leftover from an earlier patch version.
> >> >> >>
> >> >
> >> >BTW, I think I'm going to rename the pathkeys_common_contained_in
> >> >function to something like pathkeys_count_contained_in, unless you
> >> >have an objection to that. The name doesn't seem obvious at all to me.
> >> >
> >>
> >> WFM
> >>
> >> >> >> >There's not anything obvious in point (2) that would be a big cost,
> >> >> >> >but there are definitely changes there. I was surprised that just
> >> >> >> >eliminating the loop through the pathkeys on the query and the index
> >> >> >> >was enough to save us ~4%.
> >> >> >> >
> >> >> >> >Tomas: Earlier you'd wondered about if we should try to shortcut the
> >> >> >> >changes in costing...I was skeptical of that originally, but maybe
> >> >> >> >it's worth looking into? I'm going to try backing that out and see
> >> >> >> >what the numbers look like.
> >> >> >> >
> >> >> >
> >> >> >BTW, I did this test, and it looks like we can get back something
> >> >> >close to 1% by reverting that initial fix on partial path costing. But
> >> >> >we can't get rid of it unconditionally, at the very least. *Maybe* we
> >> >> >could condition it on incremental sort, but I'm not convinced that's
> >> >> >the only place it's needed as a fix.
> >> >> >
> >> >>
> >> >> Sounds interesting. I actually measured how much the add_partial_path
> >> >> change accounts for, and you're right it was quite a bit. But I forgot
> >> >> about that when investigating the rest.
> >> >>
> >> >> I wonder how large would the regression be without add_partial_path and
> >> >> with the fix in pathkeys_common_contained_in.
> >> >>
> >> >> I'm not sure how much we want to make add_partial_path() dependent on
> >> >> particular GUCs, but I guess if it gets rid of the regression, allows us
> >> >> to commit incremental sort and we can reasonably justify that only
> >> >> incremental sort needs those paths, it might be acceptable.
> >> >
> >> >That's a good point.
> >> >
> >> >> >> I've described the idea about something like initial_cost_nestloop and
> >> >> >> so on. But I'm a bit skeptical about it, considering that the GUC only
> >> >> >> has limited effect.
> >> >> >>
> >> >> >>
> >> >> >> A somewhat related note is that the number of indexes has a pretty significant
> >> >> >> impact on planning time, even on master. This is timing of the same
> >> >> >> explain script (similar to the one shown before) with different number
> >> >> >> of indexes on master:
> >> >> >>
> >> >> >>      0 indexes        7 indexes       49 indexes
> >> >> >>      -------------------------------------------
> >> >> >>          10.85            12.56            27.83
> >> >> >>
> >> >> >> The way I look at incremental sort is that it allows using indexes for
> >> >> >> queries that couldn't use it before, possibly requiring a separate
> >> >> >> index. So incremental sort might easily reduce the number of indexes
> >> >> >> needed, compensating for the overhead we're discussing here. Of course,
> >> >> >> that may or may not be true.
> >> >> >
> >> >> >One small idea, but I'm not yet sure it helps us a whole lot: if the
> >> >> >query pathkeys list is only length 1, then we could skip the additional
> >> >> >path creation.
> >> >> >
> >> >>
> >> >> I don't follow. Why would we create incremental sort in this case at
> >> >> all? With single-element query_pathkeys the path is either unsorted or
> >> >> fully sorted - there's no room for incremental sort. No?
> >> >
> >> >Well, we shouldn't, that's what I'm getting at. But I didn't see anything
> >> >in the code now that explicitly excludes that case when deciding
> >> >whether or not to create an incremental sort path, unless I'm missing
> >> >something obvious.
> >>
> >> Well, my point is that create_ordered_paths() looks like this:
> >>
> >>      is_sorted = pathkeys_common_contained_in(root->sort_pathkeys, ...);
> >>
> >>      if (is_sorted)
> >>      {
> >>          ... old code
> >>      }
> >>      else
> >>      {
> >>          if (input_path == cheapest_input_path)
> >>          {
> >>              ... old code
> >>          }
> >>
> >>          /* With incremental sort disabled, don't build those paths. */
> >>          if (!enable_incrementalsort)
> >>              continue;
> >>
> >>          /* Likewise, if the path can't be used for incremental sort. */
> >>          if (!presorted_keys)
> >>              continue;
> >>
> >>          ... incremental sort path
> >>      }
> >>
> >> Now, with single-item sort_pathkeys, the input path can't be partially
> >> sorted. It's either fully sorted - in which case it's handled by the
> >> first branch. Or it's not sorted at all, so presorted_keys==0 and we
> >> never get to the incremental path.
> >>
> >> Or did you mean to use the optimization somewhere else?
> >
> >Hmm, yes, I didn't think through that properly. I'll have to look at
> >the other cases to confirm the same logic applies there.
> >
> >One other thing: in the code above we create the regular sort path
> >inside of `if (input_path == cheapest_input_path)`, but incremental
> >sort is outside of that condition. I'm not sure I'm remembering why
> >that was, and it's not obvious to me reading it right now (though it's
> >getting late here, so maybe I'm just not thinking clearly). Do you
> >happen to remember why that is?
> >
>
> It's because for the regular sort, the path is either already sorted or
> it requires a full sort. But full sort only makes sense on the cheapest
> path, because we assume the additional sort cost is independent of the
> input cost, essentially:
>
>     cost(path + Sort) = cost(path) + cost(Sort)
>
> and it's always
>
>      cost(path) + cost(Sort) >= cost(cheapest path) + cost(Sort)
>
> and by checking for cheapest path we simply skip building all the paths
> that we'd end up discarding anyway.
>
> With incremental sort we can't do this, because the cost of the incremental
> sort depends on how well presorted the input path is.
>
> >I've included the optimization on top of the add_partial_path fix and I now
> >have numbers (for your test, slightly modified in how I execute it)
> >like:
> >
> >branch: 0.8354718927735362
> >master: 0.8128127066707269
> >
> >Which is a 2.7% regression (with enable_incrementalsort off).
>
> Can you try a more realistic benchmark, not this focused on the planner
> part? Something like a read-only pgbench with a fairly small data set
> and a single client, or something like that?

A default select-only pgbench run for 60s got the branch to 99.93% of
master's speed.

I've attached my current updates (with the optimization in add_partial_path).

To add some weight to the "stuff beyond the patch's control" theory:
I'm pretty sure I've gotten repeated ~1% differences with the included
v49-0004-ignore-single-key-orderings.patch, even though it shouldn't
change anything, both because enable_incremental_sort is off and
because logically it shouldn't be needed (though I still haven't
confirmed that in all cases)...so that's interesting. I'm not
suggesting we include the patch, but wanted you to at least see it.

I can look at some more pgbench stuff tomorrow, but for now I'm
signing off for the night.

James
