Thread: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
Andres and I discussed bottlenecks in the nbtree code during the
recent PgDay SF community event. Andres observed that the call to
BTreeTupleGetNAtts() in _bt_compare() can become somewhat of a
bottleneck. I came up with the very simple attached POC-quality
patches, which Andres tested and profiled with his original complaint
in mind yesterday. He reported increased throughput on a memory
bound simple pgbench SELECT-only workload.

He reported that the problem went away with the patches applied. The
following pgbench SELECT-only result was sent to me privately:

before:
single:         tps = 30300.202363 (excluding connections establishing)
all cores:      tps = 1026853.447047 (excluding connections establishing)

after:
single:         tps = 31120.452919 (excluding connections establishing)
all cores:      tps = 1032376.361006 (excluding connections establishing)

(Actually, he tested something slightly different -- he inlined
_bt_compare() in his own way instead of using my 0002-*, and didn't
use the 0003-* optimization at all.)

Apparently this was a large multi-socket machine. Those are hard to
come by.

The main idea here is to make _bt_compare() delay calling
BTreeTupleGetNAtts() until after the first attribute has turned out to
be equal to the first attribute of the caller's insertion scankey. Many
or most calls to _bt_compare() won't even need to call
BTreeTupleGetNAtts().

This relies on the assumption that any tuple examined in the
_bt_compare() loop must have at least one untruncated key column --
suffix truncation never truncates away the first attribute. It doesn't
matter whether it's a pivot or non-pivot tuple; the representation of
the first column will be exactly the same.

The negative infinity item on an internal page always has zero
attributes, which might seem like a snag. However, we already treat
that as a special case within _bt_compare(), for historical reasons
(pg_upgrade'd indexes won't have the number of attributes explicitly
set to zero in some cases).
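
To illustrate the control flow, here is a standalone toy in C. To be
clear, this is not code from the attached patches -- the struct, the
names, and the plain-int "attributes" are all invented for the purposes
of illustration:

#include <stdio.h>

#define MAX_ATTS 4

typedef struct ToyTuple
{
    int     natts;              /* stands in for the attribute count */
    int     atts[MAX_ATTS];
} ToyTuple;

/* Stand-in for BTreeTupleGetNAtts(): pretend this costs a cache miss */
static int
toy_get_natts(const ToyTuple *tup)
{
    return tup->natts;
}

/*
 * Returns <0, 0, or >0, like _bt_compare().  Assumes the tuple has at
 * least one attribute; the zero-attribute "negative infinity" case is
 * taken to be handled separately by the caller, as in the real code.
 */
static int
toy_compare(const int *key, int keysz, const ToyTuple *tup)
{
    int     ncmp = keysz;       /* optimistic; corrected after attribute 1 */

    for (int i = 0; i < ncmp; i++)
    {
        if (key[i] != tup->atts[i])
            return (key[i] < tup->atts[i]) ? -1 : 1;

        /*
         * Attribute 1 compared as equal, so now we really do need to
         * know how many attributes the tuple has (it may have been
         * truncated).  Had attribute 1 been unequal, we'd have returned
         * above without ever paying for the lookup.
         */
        if (i == 0)
        {
            int     natts = toy_get_natts(tup);

            if (natts < ncmp)
                ncmp = natts;
        }
    }

    /* all attributes present on both sides are equal */
    return (keysz > ncmp) ? 1 : 0;
}

int
main(void)
{
    ToyTuple    truncated = {2, {10, 20, 0, 0}};
    int         key[3] = {10, 20, 30};

    printf("%d\n", toy_compare(key, 3, &truncated));   /* prints 1 */
    return 0;
}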

Another separate though related idea in 0003-* is to move the negative
infinity check from _bt_compare() to _bt_binsrch(), since it isn't
something that we need to consider more than once per page -- and only
on internal pages. That way, _bt_compare() doesn't have to look at the
page special area to check whether it's a leaf page or an internal page
at all. I haven't really profiled this just yet. This is one of those
patches where 95%+ of the work is profiling and benchmarking.

Andres and I both agree that there is a lot more work to be done in
this area, but that will be a major undertaking. I am quite keen on
the idea of repurposing the redundant-to-nbtree ItemId.lp_len field to
store an abbreviated key. Making that work well is a considerable
undertaking, since you need to use prefix compression to get a high
entropy abbreviated key. It would probably take me the best part of a
whole release cycle to write such a patch. The attached patches get
us a relatively easy win in the short term, though.

Thoughts?
--
Peter Geoghegan

Attachment

RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Floris Van Nee
Date:
> He reported that the problem went away with the patches applied. The
> following pgbench SELECT-only result was sent to me privately:
> 
> before:
> single:         tps = 30300.202363 (excluding connections establishing)
> all cores:      tps = 1026853.447047 (excluding connections establishing)
> 
> after:
> single:         tps = 31120.452919 (excluding connections establishing)
> all cores:      tps = 1032376.361006 (excluding connections establishing)
> 
> (Actually, he tested something slightly different -- he inlined
> _bt_compare() in his own way instead of using my 0002-*, and didn't use the
> 0003-* optimization at all.)
> 
> Apparently this was a large multi-socket machine. Those are hard to come by.
> 

I could do some tests with the patch on some larger machines. What
exact tests do you propose? Are there some specific postgresql.conf
settings and pgbench initialization you recommend for this? And was the
test above just running 'pgbench -S' select-only with specific -T, -j
and -c parameters?
 

-Floris


Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Andres Freund
Date:
Hi,

On 2020-01-27 15:42:06 +0000, Floris Van Nee wrote:
> 
> > He reported that the problem went away with the patches applied. The
> > following pgbench SELECT-only result was sent to me privately:
> > 
> > before:
> > single:         tps = 30300.202363 (excluding connections establishing)
> > all cores:      tps = 1026853.447047 (excluding connections establishing)
> > 
> > after:
> > single:         tps = 31120.452919 (excluding connections establishing)
> > all cores:      tps = 1032376.361006 (excluding connections establishing)
> > 
> > (Actually, he tested something slightly different -- he inlined
> > _bt_compare() in his own way instead of using my 0002-*, and didn't use the
> > 0003-* optimization at all.)
> > 
> > Apparently this was a large multi-socket machine. Those are hard to
> > come by.

I'd not say "large multi socket", 2 x XeonGold 5215, 192GB RAM.


> I could do some tests with the patch on some larger machines. What
> exact tests do you propose? Are there some specific postgresql.conf
> settings and pgbench initialization you recommend for this? And was
> the test above just running 'pgbench -S' select-only with specific -T,
> -j and -c parameters?

The above test was IIRC:

PGOPTIONS='-c vacuum_freeze_min_age=0' pgbench -i -q -s 300
with a restart here, and a
SELECT SUM(pg_prewarm(oid, 'buffer')) FROM pg_class WHERE relkind IN ('r', 'i', 't');
after starting, and then
PGOPTIONS='-c default_transaction_isolation=repeatable\ read' pgbench -n -M prepared -P1 -c100 -j72 -T1000 -S

The freeze, restart & prewarm are to have fairer comparisons between
tests, without needing to recreate the database from scratch.

Greetings,

Andres Freund



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Andres Freund
Date:
Hi,

On 2020-01-26 14:49:06 -0800, Peter Geoghegan wrote:
> Andres and I discussed bottlenecks in the nbtree code during the
> recent PgDay SF community event. Andres observed that the call to
> BTreeTupleGetNAtts() in _bt_compare() can become somewhat of a
> bottleneck. I came up with the very simple attached POC-quality
> patches, which Andres tested and profiled with his original complaint
> in mind yesterday. He reported increased throughput on a memory
> bound simple pgbench SELECT-only workload.

Yea - it shows up as a pipeline stall, because the loop depends on
having loaded the tuple. Which basically requires two
unlikely-to-be-cached memory loads to complete. Whereas with the patch
a good bit of that latency can be hidden by out-of-order execution, as
e.g. the tupledesc and scankey accesses are not dependent on the memory
load for the tuple having finished.



> The main idea here is to make _bt_compare() delay
> calling BTreeTupleGetNAtts() until the point after the first attribute
> turns out to be equal to the _bt_compare() caller's insertion scankey.
> Many or most calls to _bt_compare() won't even need to call
> BTreeTupleGetNAtts().

FWIW, I still think it might be better to *continue* loading ntupatts
where it's currently done, but keep the change to the loop termination
the way you have it. That way we don't add a branch to check for
ntupatts, and because we don't depend on the result to enter the loop,
we don't have a stall.  I'd even keep the Min() bit.  I'm a bit afraid
that delaying the load will add one (smaller) stall after the key
comparison, and that the additional branches will be noticeable too.



> Andres and I both agree that there is a lot more work to be done in
> this area, but that will be a major undertaking. I am quite keen on
> the idea of repurposing the redundant-to-nbtree ItemId.lp_len field to
> store an abbreviated key. Making that work well is a considerable
> undertaking, since you need to use prefix compression to get a high
> entropy abbreviated key. It would probably take me the best part of a
> whole release cycle to write such a patch. The attached patches get
> us a relatively easy win in the short term, though.

My intuition is that a binary search optimized layout (next para) will
be a bigger win, and probably easier. There are pretty clear profiling
indicators that even the access to the ItemId array in the binary search
is most of the time a cache miss and causes a stall - and it makes sense
too.

I.e. instead of a plain sorted order, store the n ItemIds on a page
in the order of
[1/2 n, 1/4 n, 3/4 n, 1/8 n, 3/8 n, 5/8 n, 7/8 n, ...]
As binary search looks first at 1/2, then at either 1/4 or 3/4, then
either (1/8 or 3/8) or (5/8, 7/8), and so on, this layout means that
commonly the first few levels of the ItemId array are on a *single*
cacheline. Whereas in contrast, using the normal layout, that *never*
is the case for pages with more than a few entries.  And even beyond
the first few levels, the "sub" trees the binary search descends into
are concentrated onto fewer cachelines.  It's not just the reduced
number of cachelines touched; additionally the layout is also very
prefetchable, because the tree levels are basically laid out
sequentially left to right, which many CPU prefetchers can recognize.
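
To make that concrete, here is a standalone toy in C that builds such an
order (the classic "Eytzinger"/BFS layout) from a sorted array and then
searches it. Purely illustrative -- nothing here is nbtree code, and all
of the names are made up:

#include <stdio.h>

/*
 * Rearrange sorted[0..n-1] into a 1-based array laid out in the order a
 * binary search probes it: out[1] is the midpoint, out[2] and out[3] the
 * quarter points, and so on.  Returns the next sorted index to consume.
 */
static int
layout_build(const int *sorted, int n, int *out, int next, int k)
{
    if (k <= n)
    {
        next = layout_build(sorted, n, out, next, 2 * k);
        out[k] = sorted[next++];
        next = layout_build(sorted, n, out, next, 2 * k + 1);
    }
    return next;
}

/*
 * Binary search over the rearranged array.  Descending from slot k to
 * slot 2k or 2k+1 means the first few probes all land near the start of
 * the array, i.e. on far fewer cachelines than with a sorted layout.
 */
static int
layout_search(const int *out, int n, int x)
{
    int     k = 1;

    while (k <= n)
    {
        if (out[k] == x)
            return k;
        k = (x < out[k]) ? 2 * k : 2 * k + 1;
    }
    return 0;                   /* not found */
}

int
main(void)
{
    int     sorted[7] = {10, 20, 30, 40, 50, 60, 70};
    int     out[8];             /* 1-based, slot 0 unused */

    layout_build(sorted, 7, out, 0, 1);
    /* out[1..7] is now {40, 20, 60, 10, 30, 50, 70} */
    printf("50 is in slot %d\n", layout_search(out, 7, 50));
    return 0;
}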

I think this works particularly well for inner btree pages, because we
don't rewrite their itemid lists all that often, so the somewhat higher
cost of that doesn't matter much; similarly, the higher cost of
iterating sequentially isn't significant either.

Now that's only the ItemId array - whereas a larger amount of the cache
misses comes from the index tuple accesses. The nice bit there is that
we can just optimize the order of the index tuples on the page without
any format changes (and even the read access code won't change). I.e. we
can just lay out the tuples in an *approximately* binary search
optimized order, without needing to change anything but the layout
"writing" code, as the ItemId.lp_off indirection will hide that.


I do completely agree that having a small high-entropy abbreviated key
inside the ItemId would be an awesome improvement, as it can entirely
remove many of the hard to predict accesses. My gut feeling is just that
a) it's a pretty darn hard project.
b) it'll be a smaller win as long as there's an unpredictable access to
   the abbreviated key

Greetings,

Andres Freund



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Mon, Jan 27, 2020 at 9:14 AM Andres Freund <andres@anarazel.de> wrote:
> > The main idea here is to make _bt_compare() delay
> > calling BTreeTupleGetNAtts() until the point after the first attribute
> > turns out to be equal to the _bt_compare() caller's insertion scankey.
> > Many or most calls to _bt_compare() won't even need to call
> > BTreeTupleGetNAtts().
>
> FWIW, I still think it might be better to *continue* loading ntupatts
> where it's currently done, but keep the the change to the loop
> termination the way you have it. That way we don't add a branch to check
> for ntupatts, and because we don't depend on the result to enter the
> loop, we don't have a stall.  I'd even keep the Min() bit.  I'm a bit
> afraid that delaying the load will add one (smaller) stall after the key
> comparison, and that the additional branches will be noticable too.

I can do it that way. I am not attached to the current approach in
0001-* at all.

> My intuition is that a binary search optimized layout (next para) will
> be a bigger win, and probably easier. There are pretty clear profiling
> indicators that even the access to the ItemId array in the binary search
> is most of the time a cache miss and causes a stall - and it makes sense
> too.
>
> I.e. instead of a plain sorted order, store the n ItemIds on a page
> in the order of
> [1/2 n, 1/4 n, 3/4 n, 1/8 n, 3/8 n, 5/8 n, 7/8 n, ...]
> as binary search looks first at 1/2, then at either 1/4 or 3/4, then
> either (1/8 or 3/8) or (5/8, 7/8), and so on, this layout means that the
> commonly the first few four levels of the ItemId array are on a *single*
> cacheline.

You don't really have to convince me of anything here. I see these as
essentially the same project already. I am only really emphasizing the
abbreviated keys thing because it's obviously unbeatable with the
right workload.

Working on B-Tree stuff for v12 really convinced me of the value of an
integrated approach, at least in this area. Everything affects
everything else, so expanding the scope of a project can actually be
really helpful. It's very normal for these optimizations to be worth a
lot more when combined than they are worth individually. I know that
you have had similar experiences in other areas of the code.

> I think this works particularly well for inner btree pages, because we
> don't rewrite their itemid lists all that often, so the somewhat higher
> cost of that doesn't matter much, and similarly, the higher cost of
> sequentially iterating, isn't significant either.

Obviously all of these techniques are only practical because of the
asymmetry between leaf pages and internal pages. Internal pages are
where the large majority of comparisons are performed in most OLTP
workloads, and yet their tuples are often only about one third of one
percent of the total number of tuples in the B-Tree. That is the
specific ratio within the pgbench indexes, IIRC. Having more than one
percent of all tuples come from internal pages is fairly exceptional
-- you only really see it in indexes that are on very long text
strings.

> I do completely agree that having a small high-entropy abbreviated key
> inside the ItemId would be an awesome improvement, as it can entirely
> remove many of the hard to predict accesses. My gut feeling is just that
> a) it's a pretty darn hard project.
> b) it'll be a smaller win as long as there's an unpredictable access to
>    the abbreviated key

It will be relatively straightforward to come up with a basic
abbreviated keys prototype that targets one particular data
distribution and index type, though. For example, I can focus on your
pgbench SELECT workload. That way, I won't have to do any of the hard
work of making abbreviated keys work with a variety of workloads,
while still getting a good idea of the benefits in one specific case.
For this prototype, I can either not do prefix compression to get a
high entropy abbreviated key, or do the prefix compression in a way
that is totally inflexible, but still works well enough for this
initial test workload.

My estimate is that it would take me 4 - 6 weeks to write a prototype
along those lines. That isn't so bad.

-- 
Peter Geoghegan



RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Floris Van Nee
Date:
> 
> I could do some tests with the patch on some larger machines. What exact
> tests do you propose? Are there some specific postgresql.conf settings and
> pgbench initialization you recommend for this? And was the test above just
> running 'pgbench -S' select-only with specific -T, -j and -c parameters?
> 

With Andres' instructions I ran a couple of tests. With your patches I
can reproduce a speedup of ~3% on single core tests reliably on a
dual-socket 36-core machine for the pgbench select-only test case. When
using the full scale test my results are way too noisy even for large
runs unfortunately. I also tried some other queries (for example
select's that return 10 or 100 rows instead of just 1), but can't see
much of a speed-up there either, although it also doesn't hurt.
 

So I guess the most noticeable one is the select-only benchmark for 1 core:

<Master>
transaction type: <builtin: select only>
scaling factor: 300
query mode: prepared
number of clients: 1
number of threads: 1
duration: 600 s
number of transactions actually processed: 30255419
latency average = 0.020 ms
latency stddev = 0.001 ms
tps = 50425.693234 (including connections establishing)
tps = 50425.841532 (excluding connections establishing)

<Patched>
transaction type: <builtin: select only>
scaling factor: 300
query mode: prepared
number of clients: 1
number of threads: 1
duration: 600 s
number of transactions actually processed: 31363398
latency average = 0.019 ms
latency stddev = 0.001 ms
tps = 52272.326597 (including connections establishing)
tps = 52272.476380 (excluding connections establishing)

This is the one with 40 clients, 40 threads. Not really an improvement, and still quite noisy.
<Master>
transaction type: <builtin: select only>
scaling factor: 300
query mode: prepared
number of clients: 40
number of threads: 40
duration: 600 s
number of transactions actually processed: 876846915
latency average = 0.027 ms
latency stddev = 0.015 ms
tps = 1461407.539610 (including connections establishing)
tps = 1461422.084486 (excluding connections establishing)

<Patched>
transaction type: <builtin: select only>
scaling factor: 300
query mode: prepared
number of clients: 40
number of threads: 40
duration: 600 s
number of transactions actually processed: 872633979
latency average = 0.027 ms
latency stddev = 0.038 ms
tps = 1454387.326179 (including connections establishing)
tps = 1454396.879195 (excluding connections establishing)

For tests that don't use the full machine (eg. 10 clients, 10 threads)
I see speed-ups as well, but not as high as the single-core run. It
seems there are other bottlenecks (on the machine) coming into play.
 

-Floris


Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Tue, Jan 28, 2020 at 1:34 PM Floris Van Nee <florisvannee@optiver.com> wrote:
> With Andres' instructions I ran a couple of tests. With your patches I
> can reproduce a speedup of ~3% on single core tests reliably on a
> dual-socket 36-core machine for the pgbench select-only test case.
 

Thanks for testing!

Attached is v2 of patch series, which makes the changes to 0001-*
requested by Andres. I restructured the loop in a way that allows the
compiler to assume that there will always be at least one loop
iteration -- so I'm not quite as aggressive as I was with v1. We don't
actually delay the call to BTreeTupleGetNAtts() as such in v2.
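
To show the general shape (a standalone toy, not the patch itself --
the names are invented and the attributes are reduced to plain ints):
since an insertion scankey always has at least one attribute, a
bottom-tested loop lets the compiler drop the pre-loop "anything to
compare?" test:

#include <assert.h>

/* toy: compare key[0..n-1] against tup[0..n-1], where n >= 1 is assumed */
static int
toy_compare_v2(const int *key, const int *tup, int n)
{
    int     i = 0;

    assert(n >= 1);
    do
    {
        if (key[i] != tup[i])
            return (key[i] < tup[i]) ? -1 : 1;
        i++;
    } while (i < n);

    return 0;
}

int
main(void)
{
    int     key[2] = {10, 20};
    int     tup[2] = {10, 30};

    return (toy_compare_v2(key, tup, 2) < 0) ? 0 : 1;  /* exits with 0 */
}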

Can you test this version, Floris? The second two patches are probably
not helping here, so it would be useful if you could just test 0001-*,
and then test all three together. I can toss the latter two patches if
there is no additional speedup.

If we're lucky, then Andres will have been right to suspect that there
might be a smaller stall caused by the new branch in the loop that
existed in v1. Maybe this will show up at higher client counts.

I should point out that the deduplication patch changes the definition
of BTreeTupleGetNAtts(), making it slightly more complicated. With the
deduplication patch, we have to check that the tuple isn't a posting
list tuple, which uses the INDEX_ALT_TID_MASK/INDEX_AM_RESERVED_BIT
bit to indicate a non-standard tuple header format, just like the
current pivot tuple format (we need to check a BT_RESERVED_OFFSET_MASK
bit to further differentiate posting list tuples from pivot tuples).
This increase in the complexity of BTreeTupleGetNAtts() will probably
further tip things in favor of this patch.

There are no changes in either 0002-* or 0003-* patches for v2. I'm
including the same patches here a second time for completeness.

--
Peter Geoghegan

Attachment

RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Floris Van Nee
Date:
> 
> Can you test this version, Floris? The second two patches are probably not
> helping here, so it would be useful if you could just test 0001-*, and then test
> all three together. I can toss the latter two patches if there is no additional
> speedup.
> 

Here's the results for runs with respectively 1 client, 9 clients and
30 clients on master, v2-0001, v2-0001+0002+0003, and for completeness
also the previous v1 version of the patches.

I ran the tests for 45 minutes each this time which seems to give more
stable results.
I'd say applying just v2-0001 is actually slightly hurtful for
single-core performance. Applying all of them gives a good improvement
though. It looks like the performance improvement is also more
noticeable at higher core counts now.
 

<master>
number of clients: 1
number of threads: 1
duration: 2700 s
number of transactions actually processed: 139314796
latency average = 0.019 ms
latency stddev = 0.001 ms
tps = 51598.071835 (including connections establishing)
tps = 51598.098715 (excluding connections establishing)

<v2-0001>
number of clients: 1
number of threads: 1
duration: 2700 s
number of transactions actually processed: 137257492
latency average = 0.020 ms
latency stddev = 0.001 ms
tps = 50836.107076 (including connections establishing)
tps = 50836.133137 (excluding connections establishing)

<v2-0001+2+3>
number of clients: 1
number of threads: 1
duration: 2700 s
number of transactions actually processed: 141721881
latency average = 0.019 ms
latency stddev = 0.001 ms
tps = 52489.584928 (including connections establishing)
tps = 52489.611373 (excluding connections establishing)

<v1-0001+2+3>
number of clients: 1
number of threads: 1
duration: 2700 s
number of transactions actually processed: 141663780
latency average = 0.019 ms
latency stddev = 0.001 ms
tps = 52468.065549 (including connections establishing)
tps = 52468.093018 (excluding connections establishing)



<master>
number of clients: 9
number of threads: 9
duration: 2700 s
number of transactions actually processed: 1197242115
latency average = 0.020 ms
latency stddev = 0.001 ms
tps = 443422.987601 (including connections establishing)
tps = 443423.306495 (excluding connections establishing)

<v2-0001>
number of clients: 9
number of threads: 9
duration: 2700 s
number of transactions actually processed: 1187890004
latency average = 0.020 ms
latency stddev = 0.002 ms
tps = 439959.241392 (including connections establishing)
tps = 439959.588125 (excluding connections establishing)

<v2-0001+2+3>
number of clients: 9
number of threads: 9
duration: 2700 s
number of transactions actually processed: 1203412941
latency average = 0.020 ms
latency stddev = 0.002 ms
tps = 445708.478915 (including connections establishing)
tps = 445708.798583 (excluding connections establishing)

<v1-0001+2+3>
number of clients: 9
number of threads: 9
duration: 2700 s
number of transactions actually processed: 1195359533
latency average = 0.020 ms
latency stddev = 0.001 ms
tps = 442725.734267 (including connections establishing)
tps = 442726.052676 (excluding connections establishing)


<master>
number of clients: 30
number of threads: 30
duration: 2700 s
number of transactions actually processed: 2617037081
latency average = 0.031 ms
latency stddev = 0.011 ms
tps = 969272.811990 (including connections establishing)
tps = 969273.960316 (excluding connections establishing)

<v2-0001>
number of clients: 30
number of threads: 30
duration: 2700 s
number of transactions actually processed: 2736881585
latency average = 0.029 ms
latency stddev = 0.011 ms
tps = 1013659.581348 (including connections establishing)
tps = 1013660.819277 (excluding connections establishing)

<v2-0001+2+3>
number of clients: 30
number of threads: 30
duration: 2700 s
number of transactions actually processed: 2844199686
latency average = 0.028 ms
latency stddev = 0.011 ms
tps = 1053407.074721 (including connections establishing)
tps = 1053408.220093 (excluding connections establishing)

<v1-0001+2+3>
number of clients: 30
number of threads: 30
duration: 2700 s
number of transactions actually processed: 2693765822
latency average = 0.030 ms
latency stddev = 0.011 ms
tps = 997690.883117 (including connections establishing)
tps = 997692.051005 (excluding connections establishing)


Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Thu, Jan 30, 2020 at 1:19 AM Floris Van Nee <florisvannee@optiver.com> wrote:
> I'd say applying just v2-0001 is actually slightly hurtful for
> single-core performance. Applying all of them gives a good improvement
> though. It looks like the performance improvement is also more
> noticeable at higher core counts now.
 

Many thanks for testing once again!

Your tests show that the overall winner is "<v2-0001+2+3>", which is
strictly better than all other configurations you tested -- it is at
least slightly better than every other configuration at every client
count tested. I was particularly pleased to see that "<v2-0001+2+3>"
is ~8.6% faster than the master branch with 30 clients! That result
greatly exceeded my expectations.

I have been able to independently confirm that you really need the
first two patches together to see the benefits -- that wasn't clear
until recently.

The interesting thing now is the role of the "negative infinity test"
patch (the 0003-* patch) in all of this. I suspect that it may not be
helping us much here. I wonder, could you test the following
configurations to settle this question?

* <master> with 30 clients (i.e. repeat the test that you reported on
most recently)

* <v2-0001+2+3> with 30 clients (i.e. repeat the test that you
reported got us that nice ~8.6% increase in TPS)

* <v2-0001+2> with 30 clients -- a new test, to see if performance is
at all helped by the "negative infinity test" patch (the 0003-*
patch).

It seems like a good idea to repeat the other two tests as part of
performing this third test, out of general paranoia. Intel seem to
roll out a microcode update for a spectre-like security issue about
every other day.

Thanks again
-- 
Peter Geoghegan



RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Floris Van Nee
Date:
> 
> The interesting thing now is the role of the "negative infinity test"
> patch (the 0003-* patch) in all of this. I suspect that it may not be helping us
> much here. I wonder, could you test the following configurations to settle
> this question?
> 
> * <master> with 30 clients (i.e. repeat the test that you reported on most
> recently)
> 
> * <v2-0001+2+3> with 30 clients (i.e. repeat the test that you reported got us
> that nice ~8.6% increase in TPS)
> 
> * <v2-0001+2> with 30 clients -- a new test, to see if performance is at all
> helped by the "negative infinity test" patch (the 0003-* patch).
> 
> It seems like a good idea to repeat the other two tests as part of performing
> this third test, out of general paranoia. Intel seem to roll out a microcode
> update for a spectre-like security issue about every other day.
> 

I ran all the tests on two different machines, several times for 1 hour
each time. I'm still having a hard time getting reliable results for
the 30 clients case though. I'm pretty certain the patches bring a
performance benefit, but how high exactly is difficult to say. As for
applying only patch 1+2 or all three patches - I found no significant
difference between these two cases. It looks like all the performance
benefit is in the first two patches.
 

-Floris


Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Thomas Munro
Date:
On Mon, Feb 10, 2020 at 10:05 PM Floris Van Nee
<florisvannee@optiver.com> wrote:
> > The interesting thing now is the role of the "negative infinity test"
> > patch (the 0003-* patch) in all of this. I suspect that it may not be helping us
> > much here. I wonder, could you test the following configurations to settle
> > this question?
> >
> > * <master> with 30 clients (i.e. repeat the test that you reported on most
> > recently)
> >
> > * <v2-0001+2+3> with 30 clients (i.e. repeat the test that you reported got us
> > that nice ~8.6% increase in TPS)
> >
> > * <v2-0001+2> with 30 clients -- a new test, to see if performance is at all
> > helped by the "negative infinity test" patch (the 0003-* patch).
> >
> > It seems like a good idea to repeat the other two tests as part of performing
> > this third test, out of general paranoia. Intel seem to roll out a microcode
> > update for a spectre-like security issue about every other day.
> >
>
> I ran all the tests on two different machines, several times for 1 hour
> each time. I'm still having a hard time getting reliable results for
> the 30 clients case though. I'm pretty certain the patches bring a
> performance benefit, but how high exactly is difficult to say. As for
> applying only patch 1+2 or all three patches - I found no significant
> difference between these two cases. It looks like all the performance
> benefit is in the first two patches.

The cfbot seems to be showing "pg_regress: initdb failed" on Ubuntu,
with an assertion failure like this:

#2 0x00000000008e594f in ExceptionalCondition
(conditionName=conditionName@entry=0x949098 "BTreeTupleGetNAtts(itup,
rel) >= key->keysz", errorType=errorType@entry=0x938a7d
"FailedAssertion", fileName=fileName@entry=0x949292 "nbtsearch.c",
lineNumber=lineNumber@entry=620) at assert.c:67
No locals.
#3  0x00000000004fdbaa in _bt_compare_inl (offnum=3,
page=0x7ff7904bdf00 "", key=0x7ffde7c9bfa0, rel=0x7ff7a2325c20) at
nbtsearch.c:620
        itup = 0x7ff7904bfec8
        heapTid = <optimized out>
        ski = <optimized out>
        itupdesc = 0x7ff7a2325f50
        scankey = <optimized out>
        ntupatts = 0

https://travis-ci.org/postgresql-cfbot/postgresql/builds/651843143

It's passing on Windows though, so perhaps there is something
uninitialised or otherwise unstable in the patch?



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Tue, Feb 18, 2020 at 12:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> The cfbot seems to be showing "pg_regress: initdb failed" on Ubuntu,
> with an assertion failure like this:
>
> #2 0x00000000008e594f in ExceptionalCondition
> (conditionName=conditionName@entry=0x949098 "BTreeTupleGetNAtts(itup,
> rel) >= key->keysz", errorType=errorType@entry=0x938a7d
> "FailedAssertion", fileName=fileName@entry=0x949292 "nbtsearch.c",

This is a legitimate bug in v1 of the patch, which was written in a
hurry. v2 does not have the problem. Floris inadvertently created a
separate thread for this same patch, which I responded to when posting
v2. I've now updated the CF entry for this patch [1] to have both
threads.

BTW, I've noticed that CF Tester is wonky with patches that have
multiple threads with at least one patch file posted to each thread.
The deduplication patch [2] has this problem, for example. It would be
nice if CF Tester knew to prefer one thread over another based on a
simple rule, like "consistently look for patch files on the first
thread connected to a CF app entry, never any other thread".

Maybe you'd rather not go that way -- I guess that it would break
other cases, such as the CF app entry for this patch (which now
technically has one thread that supersedes the other). Perhaps a
compromise is possible. At a minimum, CF Tester should not look for a
patch on the (say) second thread of a CF app entry for a patch just
because somebody posted an e-mail to that thread (an e-mail that did
not contain a new patch). CF Tester will do this even though there is
a more recent patch on the first thread of the CF app entry that has
already been accepted as passing by CF Tester. I believe that CF Tester
will actually pingpong back and forth between the same two patches on
each thread as e-mail is sent to each thread, without anybody ever
posting a new patch.

Thanks

[1] https://commitfest.postgresql.org/27/2429/#
[2] https://commitfest.postgresql.org/27/2202/
-- 
Peter Geoghegan



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Thomas Munro
Date:
On Wed, Feb 19, 2020 at 1:35 PM Peter Geoghegan <pg@bowt.ie> wrote:
> On Tue, Feb 18, 2020 at 12:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> > The cfbot seems to be showing "pg_regress: initdb failed" on Ubuntu,
> > with an assertion failure like this:
> >
> > #2 0x00000000008e594f in ExceptionalCondition
> > (conditionName=conditionName@entry=0x949098 "BTreeTupleGetNAtts(itup,
> > rel) >= key->keysz", errorType=errorType@entry=0x938a7d
> > "FailedAssertion", fileName=fileName@entry=0x949292 "nbtsearch.c",
>
> This is a legitimate bug in v1 of the patch, which was written in a
> hurry. v2 does not have the problem. Floris inadvertently created a
> separate thread for this same patch, which I responded to when posting
> v2. I've now updated the CF entry for this patch [1] to have both
> threads.
>
> BTW, I've noticed that CF Tester is wonky with patches that have
> multiple threads with at least one patch file posted to each thread.
> The deduplication patch [2] has this problem, for example. It would be
> nice if CF Tester knew to prefer one thread over another based on a
> simple rule, like "consistently look for patch files on the first
> thread connected to a CF app entry, never any other thread".

Ahh.  Well I had that rule early on, and then had the problem that
some discussions moved entirely to a second or third thread, which left
it testing some ancient stale patch.

> Maybe you'd rather not go that way -- I guess that it would break
> other cases, such as the CF app entry for this patch (which now
> technically has one thread that supersedes the other). Perhaps a
> compromise is possible. At a minimum, CF Tester should not look for a
> patch on the (say) second thread of a CF app entry for a patch just
> because somebody posted an e-mail to that thread (an e-mail that did
> not contain a new patch). CF Tester will do this even though there is
> a more recent patch on the first thread of the CF app entry, that has
> already been accepted as passing by CFTester. I believe that CF Tester
> will actually pingpong back and forth between the same two patches on
> each thread as e-mail is sent to each thread, without anybody ever
> posting a new patch.

Yeah, for CF entries with multiple threads, it currently looks at
whichever thread has the most recent email on it, and then it finds
the most recent patch set on that thread.  Perhaps it should look at
all the registered threads and find the message with the newest patch
set across all of them, as you say.  I will look into that.



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Tue, Feb 18, 2020 at 4:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> Yeah, for CF entries with multiple threads, it currently looks at
> whichever thread has the most recent email on it, and then it finds
> the most recent patch set on that thread.  Perhaps it should look at
> all the registered threads and find the message with the newest patch
> set across all of them, as you say.  I will look into that.

Thanks!

I know that I am a bit unusual in that I use all of the CF app
features at the same time. But the current behavior of CF Tester
disincentivizes using multiple threads. This works against the goal of
having good metadata about patches that are worked on over multiple
releases or years. We have a fair few of those.

-- 
Peter Geoghegan



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Mon, Feb 10, 2020 at 1:05 AM Floris Van Nee <florisvannee@optiver.com> wrote:
> I ran all the tests on two different machines, several times for 1 hour
> each time. I'm still having a hard time getting reliable results for
> the 30 clients case though. I'm pretty certain the patches bring a
> performance benefit, but how high exactly is difficult to say. As for
> applying only patch 1+2 or all three patches - I found no significant
> difference between these two cases. It looks like all the performance
> benefit is in the first two patches.

Attached is v3, which no longer includes the third patch/optimization.
It also inlines (in the second patch) by marking the _bt_compare
definition as inline, while not changing anything in nbtree.h. I
believe that this is portable C99 -- let's see what CF Tester thinks
of it.

I'm going to test this myself. It would be nice if you could repeat
something like the previous experiments with v3, Floris. master vs v3
(both patches together). With a variable number of clients.

Thanks
--
Peter Geoghegan

Attachment

Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Tom Lane
Date:
Peter Geoghegan <pg@bowt.ie> writes:
> It also inlines (in the second patch) by marking the _bt_compare
> definition as inline, while not changing anything in nbtree.h. I
> believe that this is portable C99 -- let's see what CF Tester thinks
> of it.

Boy, I'd be pretty darn hesitant to go there, even with our new
expectation of C99 compilers.  What we found out when we last experimented
with non-static inlines was that the semantics were not very portable nor
desirable.  I've forgotten the details unfortunately.  But why do we need
this, and what exactly are you hoping the compiler will do with it?

            regards, tom lane



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Wed, Feb 19, 2020 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Boy, I'd be pretty darn hesitant to go there, even with our new
> expectation of C99 compilers.  What we found out when we last experimented
> with non-static inlines was that the semantics were not very portable nor
> desirable.  I've forgotten the details unfortunately.

My original approach to inlining was to alter the nbtsearch.c
_bt_compare() callers (the majority) to call _bt_compare_inl(). This
function matches our current _bt_compare() function, except it's a
static inline. A "new" function, _bt_compare(), is also added. That's a
shim function that simply calls _bt_compare_inl().
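
In outline, the arrangement looks something like this (a generic sketch
of the pattern within one .c file, using made-up names rather than the
real nbtree functions):

/* the "real" implementation; callers in this file get it inlined */
static inline int
my_compare_inl(int a, int b)
{
    if (a != b)
        return (a < b) ? -1 : 1;
    return 0;
}

/* external shim that keeps the original name for callers in other files */
int
my_compare(int a, int b)
{
    return my_compare_inl(a, b);
}

/* a hot caller in the same file uses the inline version directly */
int
my_binsrch(const int *arr, int n, int key)
{
    int     lo = 0;
    int     hi = n;

    while (lo < hi)
    {
        int     mid = lo + (hi - lo) / 2;

        if (my_compare_inl(key, arr[mid]) > 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;                  /* first slot with arr[slot] >= key */
}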

This earlier approach now seems to work better than the approach I took
in v3. In fact, my overnight testing shows that v3 was a minor loss
for performance. I guess we can scrap the non-static inline thing
right away.

> But why do we need
> this, and what exactly are you hoping the compiler will do with it?

Well, the patch's approach to inlining prior to v3 was kind of ugly,
and it would have been nice to not have to do it that way. That's all.
Some further refinement is probably possible.

More generally, my goal is to realize a pretty tangible performance
benefit from avoiding a pipeline stall -- this work was driven by a
complaint from Andres. I haven't really explained the reason why the
inlining matters here, and there are a few things that need to be
weighed. I'll have to do some kind of microarchitectural analysis
before proceeding with commit. This is still unsettled.

--
Peter Geoghegan



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Tom Lane
Date:
Peter Geoghegan <pg@bowt.ie> writes:
> On Wed, Feb 19, 2020 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Boy, I'd be pretty darn hesitant to go there, even with our new
>> expectation of C99 compilers.  What we found out when we last experimented
>> with non-static inlines was that the semantics were not very portable nor
>> desirable.  I've forgotten the details unfortunately.

> My original approach to inlining was to alter the nbtsearch.c
> _bt_compare() callers (the majority) to call _bt_compare_inl(). This
> function matches our current _bt_compare() function, except it's a
> static inline. A "new" function, _bt_compare(), is also added. That's a
> shim function that simply calls _bt_compare_inl().

Yeah, that's pretty much the approach we concluded was necessary
for portable results.

            regards, tom lane



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Andres Freund
Date:
Hi,

On 2020-02-19 15:55:38 -0500, Tom Lane wrote:
> Peter Geoghegan <pg@bowt.ie> writes:
> > It also inlines (in the second patch) by marking the _bt_compare
> > definition as inline, while not changing anything in nbtree.h. I
> > believe that this is portable C99 -- let's see what CF Tester thinks
> > of it.

> Boy, I'd be pretty darn hesitant to go there, even with our new
> expectation of C99 compilers.  What we found out when we last experimented
> with non-static inlines was that the semantics were not very portable nor
> desirable.  I've forgotten the details unfortunately.

I think most of those problems were about putting extern inlines into
headers however - not about putting an inline onto an extern within one
translation unit only.  Given that potential fallout should be within a
single file, and can fairly easily be fixed with adding wrappers etc, I
think it's a fairly low risk experiment to see what the buildfarm
thinks.

Greetings,

Andres Freund



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2020-02-19 15:55:38 -0500, Tom Lane wrote:
>> Boy, I'd be pretty darn hesitant to go there, even with our new
>> expectation of C99 compilers.  What we found out when we last experimented
>> with non-static inlines was that the semantics were not very portable nor
>> desirable.  I've forgotten the details unfortunately.

> I think most of those problems were about putting extern inlines into
> headers however - not about putting an inline onto an extern within one
> translation unit only.  Given that potential fallout should be within a
> single file, and can fairly easily be fixed with adding wrappers etc, I
> think it's a fairly low risk experiment to see what the buildfarm
> thinks.

The buildfarm would only tell you if it compiles, not whether the
performance results are what you hoped for.

            regards, tom lane



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Wed, Feb 19, 2020 at 2:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The buildfarm would only tell you if it compiles, not whether the
> performance results are what you hoped for.

Right. Plus I think that more granular control might make more sense
in this particular instance anyway.

-- 
Peter Geoghegan



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Tom Lane
Date:
Peter Geoghegan <pg@bowt.ie> writes:
> Attached is v3, which no longer includes the third patch/optimization.
> It also inlines (in the second patch) by marking the _bt_compare
> definition as inline, while not changing anything in nbtree.h. I
> believe that this is portable C99 -- let's see what CF Tester thinks
> of it.

The cfbot thinks it doesn't even apply anymore --- conflict with the dedup
patch, no doubt?

            regards, tom lane



RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Floris Van Nee
Date:
> The cfbot thinks it doesn't even apply anymore --- conflict with the dedup
> patch, no doubt?

Minor conflict with that patch indeed. Attached is a rebased version.
I'm running some tests now - will post the results here when finished.

-Floris


Attachment

Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Sun, Mar 1, 2020 at 12:15 PM Floris Van Nee <florisvannee@optiver.com> wrote:
> Minor conflict with that patch indeed. Attached is rebased version.
> I'm running some tests now - will post the results here when finished.
 

Thanks.

We're going to have to go back to my original approach to inlining. At
least, it seemed to be necessary to do that to get any benefit from
the patch on my comparatively modest workstation (using a similar
pgbench SELECT benchmark to the one that you ran). Tom also had a
concern about the portability of inlining without using "static
inline" -- that is another reason to avoid the approach to inlining
taken by v3 + v4.

It's possible (though not very likely) that performance has been
impacted by the deduplication patch (commit 0d861bbb), since it
updated the definition of BTreeTupleGetNAtts() itself.

Attached is v5, which inlines in a targeted fashion, pretty much in
the same way as the earliest version. This is the same as v4 in every
other way. Perhaps you can test this.

-- 
Peter Geoghegan

Attachment

RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Floris Van Nee
Date:
> Attached is v5, which inlines in a targeted fashion, pretty much in the same
> way as the earliest version. This is the same as v4 in every other way.
> Perhaps you can test this.
> 

Thank you for the new patch. With the new one I am indeed able to
reproduce a performance increase. It is very difficult to reliably
reproduce it when running on a large number of cores though, due to the
NUMA architecture.

For tests with a small number of cores, I pin all of them to the same
node. With that, I see a significant performance increase for v5
compared to master. However, if I pin pgbench to a different node than
the node that Postgres is pinned to, this leads to a 20% performance
degradation compared to having all of them on the same node, as well as
the stddev increasing by a factor of 2 (regardless of patch). With
that, it becomes very difficult to see any kind of performance increase
due to the patch. For a large number of pgbench workers, I cannot
specifically pin the pgbench worker on the same node as the Postgres
backend connection it's handling. Leaving it to the OS gives very
unreliable results - when I run the 30 workers / 30 connections test, I
sometimes see periods of up to 30 minutes where it's doing it
'correctly', but it could also randomly run at the -20% performance for
a long time. So far my best bet at explaining this is the NUMA
performance hit. I'd like to be able to specifically schedule some
Postgres backends to run on one node, while other Postgres backends run
on a different node, but this isn't straightforward.
 

For now, I see no issues with the patch though. However, in real life
situations there may be other, more important, optimizations for people
that use big multi-node machines.
 

Thoughts?

-Floris


Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Mark Dilger
Date:

> On Mar 2, 2020, at 5:29 PM, Peter Geoghegan <pg@bowt.ie> wrote:
>
> On Sun, Mar 1, 2020 at 12:15 PM Floris Van Nee <florisvannee@optiver.com> wrote:
>> Minor conflict with that patch indeed. Attached is rebased version.
>> I'm running some tests now - will post the results here when finished.
>
> Thanks.
>
> We're going to have to go back to my original approach to inlining. At
> least, it seemed to be necessary to do that to get any benefit from
> the patch on my comparatively modest workstation (using a similar
> pgbench SELECT benchmark to the one that you ran). Tom also had a
> concern about the portability of inlining without using "static
> inline" -- that is another reason to avoid the approach to inlining
> taken by v3 + v4.
>
> It's possible (though not very likely) that performance has been
> impacted by the deduplication patch (commit 0d861bbb), since it
> updated the definition of BTreeTupleGetNAtts() itself.
>
> Attached is v5, which inlines in a targeted fashion, pretty much in
> the same way as the earliest version. This is the same as v4 in every
> other way. Perhaps you can test this.

Hi Peter, just a quick code review:

The v5 patch files apply and pass the regression tests. I get no
errors.  The performance impact is within the noise.  The TPS with the
patch are higher sometimes and lower other times, looking across the 27
subtests of the "select-only" benchmark. Which subtest is slower or
faster changes per run, so that doesn't seem to be relevant.  I ran the
"select-only" six times (three with the patch, three without).  The
change you made to the loop is clearly visible in the nbtsearch.s
output, and the change to inline _bt_compare is even more visible, so
there doesn't seem to be any defeating of your change due to the
compiler ignoring the "inline" or such.  I compiled using gcc -O2.

% gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr
--with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 11.0.0 (clang-1100.0.33.17)
Target: x86_64-apple-darwin19.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

2.4 GHz 8-Core Intel Core i9
32 GB 2667 MHz DDR4

Reading this thread, I think the lack of a performance impact on laptop
hardware was expected, but perhaps confirmation that it does not make
things worse is useful?

Since this patch doesn't seem to do any harm, I would mark it as "ready
for committer", except that there doesn't yet seem to be enough evidence
that it is a net win.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company






Re: RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Anastasia Lubennikova
Date:
Status update for a commitfest entry.

This thread was inactive for a while. The latest review suggests that it is Ready For Committer.
I also took a quick look at the patch and agree that it looks sensible.
Maybe add a comment before _bt_compare_inl() to explain the need for
this code change.

The new status of this patch is: Ready for Committer

Re: RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Mon, Nov 2, 2020 at 9:46 AM Anastasia Lubennikova
<a.lubennikova@postgrespro.ru> wrote:
> This thread was inactive for a while. The latest review suggests that it is Ready For Committer.
> I also took a quick look at the patch and agree that it looks
> sensible. Maybe add a comment before the _bt_compare_inl() to explain
> the need for this code change.
 

Actually I am probably going to withdraw this patch soon. The idea is
a plausible way of improving things. But at the same time I cannot
really demonstrate any benefit on hardware that I have access to.

This patch was something I worked on based on a private complaint from
Andres (who is CC'd here now) during an informal conversation at pgDay
SF. If Andres is now in a position to test the patch and can show a
benefit on certain hardware, I may well pick it back up. But as things
stand the evidence in support of the patch is pretty weak. I'm not
going to commit a patch like this unless and until it's crystal clear
what the benefits are.

If Andres cannot spend any time on this in the foreseeable future then
I'll withdraw the patch. I intend to formally withdraw the patch on
November 9th, provided no new information comes to light.

Thanks
-- 
Peter Geoghegan



Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Thu, May 28, 2020 at 12:35 PM Mark Dilger
<mark.dilger@enterprisedb.com> wrote:
> Reading this thread, I think the lack of a performance impact on
> laptop hardware was expected, but perhaps confirmation that it does
> not make things worse is useful?
>
> Since this patch doesn't seem to do any harm, I would mark it as
> "ready for committer", except that there doesn't yet seem to be enough
> evidence that it is a net win.
 

Thank you for testing my patch. Sorry for the delay in getting back to this.

-- 
Peter Geoghegan



Re: RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()

From
Peter Geoghegan
Date:
On Mon, Nov 2, 2020 at 1:04 PM Peter Geoghegan <pg@bowt.ie> wrote:
> if Andres cannot spend any time on this in the foreseeable future then
> I'll withdraw the patch. I intend to formally withdraw the patch on
> November 9th, provided no new information comes to light.

I have now formally withdrawn the patch in the CF app.

Thanks
-- 
Peter Geoghegan