From: Matthias van de Meent
Subject: Re: Improving btree performance through specializing by key shape, take 2
Date:
Msg-id: CAEze2WjFDSg7Zu8ZZfTeMhMyvWmpQns+cjk1RJHR57HS3Y9kow@mail.gmail.com
In response to: Re: Improving btree performance through specializing by key shape, take 2 (Peter Geoghegan <pg@bowt.ie>)
List: pgsql-hackers
On Sun, 10 Apr 2022 at 23:45, Peter Geoghegan <pg@bowt.ie> wrote:
>
> On Fri, Apr 8, 2022 at 9:55 AM Matthias van de Meent
> <boekewurm+postgres@gmail.com> wrote:
> > Here's generation 2 of this effort. Instead of proceeding to trying to
> > shoehorn all types of btree keys into one common code path, this
> > patchset acknowledges that there exist different shapes of keys that
> > each have a different best way of accessing each subsequent key
> > attribute. This patch achieves this by specializing the functions to
> > different key shapes.
>
> Cool.
>
> > These patches still have some rough edges (specifically: some
> > functions that are being generated are unused, and intermediate
> > patches don't compile), but I wanted to get this out to get some
> > feedback on this approach.
>
> I attempted to apply your patch series to get some general sense of
> how it affects performance, by using my own test cases from previous
> nbtree project work. I gave up on that pretty quickly, though, since
> the code wouldn't compile. That in itself might have been okay (some
> "rough edges" are generally okay). The real problem was that it wasn't
> clear what I was expected to do about it! You mentioned that some of
> the patches just didn't compile, but which ones? How do I quickly get
> some idea of the benefits on offer here, however imperfect or
> preliminary?
>
> Can you post a version of this that compiles? As a general rule you
> should try to post patches that can be "test driven" easily. An
> opening paragraph that says "here is why you should care about my
> patch" is often something to strive for, too. I suspect that you
> actually could have done that here, but you didn't, for whatever
> reason.

Yes, my bad. I pulled one patch that included unrelated changes from
the set, but missed that it also contained some changes that should
have been in an earlier commit, which broke the set.

I'll send an updated patchset soon (I'm planning to reorganize which
patch changes or adds what), but until then the attached incremental
patch should help. FYI, the patchset has been tested on commit
05023a23, and it compiles (with unused-function warnings) after
applying the attached patch.

> > I expect the performance to be at least on par with current btree
> > code, and I'll try to publish a more polished patchset with
> > performance results sometime in the near future. I'll also try to
> > re-attach dynamic page-level prefix truncation, but that depends on
> > the amount of time I have and the amount of feedback on this patchset.
>
> Can you give a few motivating examples? You know, queries that are
> sped up by the patch series, with an explanation of where the benefit
> comes from. You had some on the original thread, but that included
> dynamic prefix truncation stuff as well.

Queries that I expect to be faster are those where the index code
accesses key attributes front-to-back in a tight loop, or does many
repeated index lookups: index builds, data loads, JOINs, and IN () or
= ANY () operations. This applies specifically to indexes with only a
single key attribute, and to indexes where we can determine from the
index attributes' types that nocache_index_getattr will be called at
least once per full _bt_compare call (i.e. att->attcacheoff cannot be
set for at least one key attribute).

Queries that I expect to be slightly slower are hot loops on btree
indexes that do not get a specifically optimized path, as calling the
specialized functions adds some overhead. Other code might also see a
minimal performance impact, because the increased binary size can
cause more cache thrashing.

> Ideally you would also describe where the advertised improvements come
> from for each test case -- which patch, which enhancement (perhaps
> only in rough terms for now).

In the previous iteration, I discerned that there are different
"shapes" of indexes, some of which currently have significant overhead
in the existing btree infrastructure. Especially indexes with multiple
key attributes can see significant overhead while their attributes are
being extracted, which (for a significant part) can be attributed to
the O(n) behaviour of nocache_index_getattr. This O(n) overhead is
currently avoided only by indexes with only a single key attribute and
by indexes in which all key attributes have a fixed size
(att->attlen > 0).
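
To illustrate that rule, here is a toy, self-contained sketch (made-up
struct and function names, not the actual pg_attribute/index_getattr
code; it ignores alignment and nulls): an attribute's offset can only
be precomputed when every attribute before it has a fixed length, so a
single leading variable-length attribute forces every later attribute
onto the uncached path.

#include <stdio.h>

/* Toy stand-in for the relevant pg_attribute fields. */
typedef struct
{
    int attlen;      /* > 0: fixed size in bytes; -1: variable length */
    int attcacheoff; /* cached offset in the tuple, or -1 */
} ToyAttr;

/*
 * Offsets can only be cached up to (and including) the first
 * variable-length attribute; everything after that depends on the
 * actual tuple contents.
 */
static void
compute_cacheable_offsets(ToyAttr *atts, int natts)
{
    int off = 0;

    for (int i = 0; i < natts; i++)
    {
        atts[i].attcacheoff = off;
        if (atts[i].attlen <= 0)
        {
            for (int j = i + 1; j < natts; j++)
                atts[j].attcacheoff = -1;
            return;
        }
        off += atts[i].attlen;
    }
}

int
main(void)
{
    /* CREATE INDEX ON tst (varlen, fixed, fixed) */
    ToyAttr atts[] = {{-1, -1}, {4, -1}, {4, -1}};

    compute_cacheable_offsets(atts, 3);
    for (int i = 0; i < 3; i++)
        printf("attribute %d: cached offset = %d\n", i, atts[i].attcacheoff);
    return 0;
}

Only the leading attribute ends up with a usable cached offset here,
which is exactly the (varlen, fixed, fixed) situation discussed below.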

The types of btree keys I discerned were:
CREATE INDEX ON tst ...
... (single_key_attribute)
... (varlen, other, attributes, ...)
... (fixed_size, also_fixed, ...)
... (sometimes_null, other, attributes, ...)

For single-key-attribute btrees, the performance benefits in the patch
come from reduced branching in the attribute extraction: there are no
other key attributes to worry about, so much of the code dealing with
looping over attributes can be inlined with constant values, which
reduces the amount of code generated in the hot path.
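
As a rough sketch of the idea (hypothetical code with made-up names,
not what the patch actually generates): keep one generic implementation
parameterized by the attribute count, and emit a thin specialized
variant in which that count is a compile-time constant, so the compiler
can drop the loop and its branches.

#include <stdio.h>
#include <string.h>

/*
 * Generic comparison over natts fixed-width key attributes.
 * memcmp is just a stand-in for the per-attribute comparator call.
 */
static inline int
compare_keys_impl(const char *a, const char *b, int natts, int attlen)
{
    for (int i = 0; i < natts; i++)
    {
        int c = memcmp(a + i * attlen, b + i * attlen, attlen);

        if (c != 0)
            return c;
    }
    return 0;
}

/*
 * Specialized variant for a single 4-byte key attribute: after
 * inlining, the loop and the attribute-count branches disappear.
 */
static int
compare_keys_single_fixed4(const char *a, const char *b)
{
    return compare_keys_impl(a, b, /* natts */ 1, /* attlen */ 4);
}

int
main(void)
{
    const char a[4] = {1, 0, 0, 0};
    const char b[4] = {2, 0, 0, 0};

    printf("%d\n", compare_keys_single_fixed4(a, b));
    return 0;
}

The patch's specialization by key shape is of course more involved
than this, but the code-size and branching reduction it aims for in
the hot path is of this kind.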

For btrees with multiple key attributes, benefits are achieved if some
attributes are of variable length (e.g. text): on master, if your
index looks like CREATE INDEX ON tst (varlen, fixed, fixed), the code
will always hit the slow path of nocache_index_getattr for the latter
attributes. That introduces significant overhead, because the function
first re-establishes that the requested attribute's offset is indeed
not cached and not cacheable, and then calculates the attribute's
offset in the tuple from effectively zero. This is usually quite
wasteful: in btree code we typically calculated the previous
attribute's offset only a moment ago, and that work should be
reusable.
In this patch, the code will use an attribute iterator (as described
and demonstrated in the linked thread) to remove this O(n)
per-attribute overhead and change the worst-case complexity of
iterating over the attributes of such an index tuple from O(n^2) to
O(n).
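
To make that concrete, here is a toy, self-contained sketch of the two
access patterns (my own simplified tuple format and names; it ignores
alignment, nulls and the real IndexTuple layout, and is not the
patch's actual iterator): recomputing each offset from the start of
the tuple versus carrying the offset forward from the previous
attribute.

#include <stddef.h>
#include <stdio.h>

/* Toy format: attlen > 0 is fixed width; attlen == -1 means the
 * attribute's first byte stores its total length (unlike real varlenas). */
typedef struct { int attlen; } ToyAttrDesc;

static size_t
attr_size(const char *attp, int attlen)
{
    return attlen > 0 ? (size_t) attlen : (size_t) (unsigned char) attp[0];
}

/*
 * Slow path, akin to nocache_index_getattr: walk from the start of the
 * tuple for every requested attribute, i.e. O(n) per attribute and
 * O(n^2) for a full front-to-back pass over the key columns.
 */
static const char *
getattr_from_scratch(const char *tup, const ToyAttrDesc *atts, int attidx)
{
    size_t off = 0;

    for (int i = 0; i < attidx; i++)
        off += attr_size(tup + off, atts[i].attlen);
    return tup + off;
}

/*
 * Attribute iterator: remember where the previous attribute ended and
 * continue from there, i.e. O(1) per step and O(n) for a full pass.
 */
typedef struct
{
    const char *tup;
    size_t      off;
    int         attno;
} ToyAttrIterator;

static void
attr_iter_init(ToyAttrIterator *it, const char *tup)
{
    it->tup = tup;
    it->off = 0;
    it->attno = 0;
}

static const char *
attr_iter_next(ToyAttrIterator *it, const ToyAttrDesc *atts)
{
    const char *attp = it->tup + it->off;

    it->off += attr_size(attp, atts[it->attno].attlen);
    it->attno++;
    return attp;
}

int
main(void)
{
    /* (varlen of 3 bytes, fixed 4-byte attribute) */
    const char tup[] = {3, 'a', 'b', 1, 0, 0, 0};
    const ToyAttrDesc atts[] = {{-1}, {4}};
    ToyAttrIterator it;

    attr_iter_init(&it, tup);
    for (int i = 0; i < 2; i++)
        printf("attribute %d: from scratch at %td, iterator at %td\n", i,
               getattr_from_scratch(tup, atts, i) - tup,
               attr_iter_next(&it, atts) - tup);
    return 0;
}

The real code obviously also has to deal with alignment, null bitmaps
and cached offsets, but the shape of the win is the same: the
per-attribute rescan from the start of the tuple disappears.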

Null attributes in the key are not yet handled in any special manner
in this patch. That is mainly because it is impossible to determine
statically, from the index definition alone, which attribute is going
to be null, so that case does not benefit as much from statically
generated optimized paths.

-Matthias
