Thread: [HACKERS] Small improvement to compactify_tuples

[HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
Good day, everyone.

I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I noticed a fair amount of CPU spent in
compactify_tuples (called by PageRepairFragmentation). Most of that time
was spent in the qsort of the itemidbase items.

The itemidbase array is bounded by the number of tuples in a page, and
the itemIdSortData structure is simple, so a specialized sort could be a
better choice.

The attached patch uses a combination of one pass of prefix sort followed
by insertion sort for larger arrays, and shell sort for smaller arrays.
Insertion sort and shell sort are implemented as macros and could be
reused.
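
To make the idea concrete, here is a minimal standalone sketch of the
approach (simplified toy code with invented names such as
prefix_sort_desc; the actual patch differs in details):

    #include <assert.h>
    #include <string.h>

    #define BLCKSZ   8192
    #define NSPLIT   256               /* one bucket per 32 bytes of offset */
    #define PREFDIV  (BLCKSZ / NSPLIT)
    #define MAXITEMS 512

    typedef struct { unsigned short itemoff; } Item;  /* itemoff < BLCKSZ */

    /* ascending bucket number corresponds to descending itemoff range */
    static int
    bucket_of(Item it)
    {
        return (NSPLIT - 1) - it.itemoff / PREFDIV;
    }

    /* sort arr[0..n-1] in decreasing itemoff order */
    static void
    prefix_sort_desc(Item *arr, int n)
    {
        int     count[NSPLIT + 1] = {0};
        Item    copy[MAXITEMS];
        int     i, j;

        assert(n <= MAXITEMS);

        /* histogram: count[b + 1] counts bucket b, so that the prefix
         * sums below leave count[b] = start position of bucket b */
        for (i = 0; i < n; i++)
            count[bucket_of(arr[i]) + 1]++;
        for (i = 1; i <= NSPLIT; i++)
            count[i] += count[i - 1];

        /* stable placement: front to back, preserving the input order
         * within each bucket (the input is typically almost sorted) */
        for (i = 0; i < n; i++)
            copy[count[bucket_of(arr[i])]++] = arr[i];
        memcpy(arr, copy, sizeof(Item) * n);

        /* cheap cleanup: insertion sort over a nearly sorted array */
        for (i = 1; i < n; i++)
        {
            Item    tmp = arr[i];

            for (j = i; j > 0 && arr[j - 1].itemoff < tmp.itemoff; j--)
                arr[j] = arr[j - 1];
            arr[j] = tmp;
        }
    }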

I've tested with the following table:

     create unlogged table test3 (
         id integer PRIMARY KEY with (fillfactor=85),
         val text
     ) WITH (fillfactor=85);
     insert into test3 select i, '!'||i from generate_series(1, 10000000) as i;

With this pgbench script:

     \set id1 RANDOM(1, :scale)
     \set id2 RANDOM(1, :scale)

     select * from test3 where id = :id1;
     update test3 set val = '!'|| :id2 where id = :id1;

And this command:

     pgbench -M prepared -c 3 -s 10000000 -T 1000 -P 3 -n -f test3.sql testdb

Using 1GB shared_buffers and synchronous_commit=off.

On my notebook, the improvement is:

before patch:

     progress: 63.0 s, 15880.1 tps, lat 0.189 ms stddev 0.127
     progress: 66.0 s, 15975.8 tps, lat 0.188 ms stddev 0.122
     progress: 69.0 s, 15904.1 tps, lat 0.189 ms stddev 0.152
     progress: 72.0 s, 15000.9 tps, lat 0.200 ms stddev 0.213
     progress: 75.0 s, 15101.7 tps, lat 0.199 ms stddev 0.192
     progress: 78.0 s, 15854.2 tps, lat 0.189 ms stddev 0.158
     progress: 81.0 s, 15803.3 tps, lat 0.190 ms stddev 0.158
     progress: 84.0 s, 15242.9 tps, lat 0.197 ms stddev 0.203
     progress: 87.0 s, 15184.1 tps, lat 0.198 ms stddev 0.215

after patch:

     progress: 63.0 s, 17108.5 tps, lat 0.175 ms stddev 0.140
     progress: 66.0 s, 17271.9 tps, lat 0.174 ms stddev 0.155
     progress: 69.0 s, 17243.5 tps, lat 0.174 ms stddev 0.143
     progress: 72.0 s, 16675.3 tps, lat 0.180 ms stddev 0.206
     progress: 75.0 s, 17187.4 tps, lat 0.175 ms stddev 0.157
     progress: 78.0 s, 17293.0 tps, lat 0.173 ms stddev 0.159
     progress: 81.0 s, 16289.8 tps, lat 0.184 ms stddev 0.180
     progress: 84.0 s, 16131.2 tps, lat 0.186 ms stddev 0.170
     progress: 87.0 s, 16741.1 tps, lat 0.179 ms stddev 0.165

I understand that it is quite a degenerate test case, but this
improvement probably still makes sense.

With regards,
-- 
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Heikki Linnakangas
Date:
On 05/14/2017 09:47 PM, Sokolov Yura wrote:
> Good day, everyone.
>
> I've been playing a bit with unlogged tables - just random updates on a
> simple key-value table. I noticed a fair amount of CPU spent in
> compactify_tuples (called by PageRepairFragmentation). Most of that
> time was spent in the qsort of the itemidbase items.

Ah, I played with this too a couple of years ago, see 
https://www.postgresql.org/message-id/546B89DE.7030906%40vmware.com, but 
got distracted by other things and never got around to committing that.

> The itemidbase array is bounded by the number of tuples in a page, and
> the itemIdSortData structure is simple, so a specialized sort could be
> a better choice.
>
> The attached patch uses a combination of one pass of prefix sort
> followed by insertion sort for larger arrays, and shell sort for
> smaller arrays. Insertion sort and shell sort are implemented as macros
> and could be reused.

Cool! Could you compare that against the bucket sort I posted in the 
above thread, please?

At a quick glance, your "prefix sort" seems to be the same algorithm 
as the bucket sort that I implemented. You chose 256 buckets, where I 
picked 32. And you're adding a shell sort implementation, for small 
arrays, while I used a straight insertion sort. Not sure what these 
differences mean in practice.

- Heikki




Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
Heikki Linnakangas wrote on 2017-05-15 12:06:
> [...]
> 
> Cool! Could you compare that against the bucket sort I posted in the
> above thread, please?
> 
> At a quick glance, your "prefix sort" seems to be the same
> algorithm as the bucket sort that I implemented. You chose 256
> buckets, where I picked 32. And you're adding a shell sort
> implementation, for small arrays, while I used a straight insertion
> sort. Not sure what these differences mean in practice.
> 
> - Heikki

Thank you for taking a look.

My first version of the large-array sort was almost exactly the same as
yours. I had a bug in my insertion sort, so I had to refactor it (the bug
is fixed now).

I found that the items in itemidbase are almost sorted, so it is
important to try to keep their order during the prefix sort. That is why
I changed --count[i] to count[i+1]++.
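
Roughly, the difference between the two placement loops (a paraphrase,
with a hypothetical bucket() helper; count[] is assumed prepared as
"end + 1" positions for the first variant and as start positions, shifted
by one, for the second):

    /* backward fill: count[b] starts as the end+1 of bucket b; items
     * that arrive in order come out reversed within their bucket */
    for (i = 0; i < n; i++)
        copy[--count[bucket(arr[i])]] = arr[i];

    /* forward fill: count[b + 1] starts as the start of bucket b; the
     * input order is preserved within each bucket, so an almost-sorted
     * input stays almost sorted and the cleanup pass has little to do */
    for (i = 0; i < n; i++)
        copy[count[bucket(arr[i]) + 1]++] = arr[i];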

And it looks like it is better to have more buckets:
- with 256 buckets, the size of a single bucket is almost always less
than 2, so the array is almost always sorted after the prefix sort pass.

But it looks like it is better to sort each bucket separately, as you
did, and as it was in my early version.

I also used your names for functions and some comments.

I've attached a new version of the patch.

I left the memcpy intact because it doesn't seem to take noticeable
CPU time.

-- 
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
Sokolov Yura wrote on 2017-05-15 15:08:
> [...]

As a follow-up, I propose to simplify PageRepairFragmentation in the
attached patch.

-- 
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Alvaro Herrera
Date:
Please add these two patches to the upcoming commitfest,
https://commitfest.postgresql.org/

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
Alvaro Herrera wrote on 2017-05-15 18:04:
> Please add these two patches to the upcoming commitfest,
> https://commitfest.postgresql.org/

Thank you for the suggestion.

I've created https://commitfest.postgresql.org/14/1138/
As far as I understand, I should attach both patches to a single email
for them to show up correctly in the commitfest entry, so I'm doing that
with this email.

Please correct me if I'm doing something wrong.

With regards.
-- 
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
Sokolov Yura wrote on 2017-05-15 18:23:
> [...]

Looks like it should be a single file.

-- 
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Alvaro Herrera
Date:
Sokolov Yura wrote:
> [...]

> Looks like it should be a single file.

As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea.  If you don't edit
the patches further, then you're all set because we already have the
previously archived patches.  Next commitfest starts in a few months
yet, and if you feel the need to submit corrected versions in the
meantime, please do submit in separate files.  (Some would even argue
that each should be its own thread, but I don't think that's necessary.)

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
Alvaro Herrera wrote on 2017-05-15 20:13:
> As I understand, these patches are logically separate, so putting them
> together in a single file isn't such a great idea.  If you don't edit
> the patches further, then you're all set because we already have the
> previously archived patches.  Next commitfest starts in a few months
> yet, and if you feel the need to submit corrected versions in the
> meantime, please do submit in separate files.  (Some would even argue
> that each should be its own thread, but I don't think that's 
> necessary.)

Thank you for the explanation.

I'm attaching a new version of the first patch with a minor improvement:
- I added detection of the case when all buckets are trivial
   (i.e. 0 or 1 element); in this case there is no need to sort the
   buckets at all (see the sketch below).
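
A sketch of one way to detect it, using a bitwise OR of the per-bucket
counts (the OR exceeds 1 exactly when some bucket holds two or more
items; count[] and NSPLIT as in the patch):

    /* "max" is the OR of all bucket counts, not really the maximum,
     * but max > 1 if and only if some bucket holds 2+ items */
    max = 0;
    for (i = 0; i < NSPLIT; i++)
        max |= count[i];
    if (max > 1)
    {
        /* at least one non-trivial bucket: sort the buckets */
    }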

-- 
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
On 2017-05-17 17:46, Sokolov Yura wrote:
> [...]

I'm attaching a rebased version of the second patch.

-- 
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
On 2017-07-21 13:49, Sokolov Yura wrote:
> [...]

Attached are again-rebased versions of both patches.
The second patch now applies cleanly, independently of the first.

-- 
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Tue, Sep 12, 2017 at 12:49 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> [...]
>
> Attached are again-rebased versions of both patches.
> The second patch now applies cleanly, independently of the first.

Patch 1 applies cleanly, builds, and make check runs fine.

The code looks similar in style to surrounding code too, so I'm not
going to complain about the abundance of underscores in the macros :-p

I can reproduce the results in the OP's benchmark, with slightly
different numbers, but an overall improvement of ~6%, which matches
the OP's relative improvement.

Algorithmically, everything looks sound.


A few minor comments about patch 1:

+    if (max == 1)
+        goto end;

That goto is unnecessary; you could just as easily write

if (max > 1)
{
    ...
}


+#define pg_shell_sort_pass(elem_t, cmp, off) \
+    do { \
+        int _i, _j; \
+        elem_t _temp; \
+        for (_i = off; _i < _n; _i += off) \
+        { \

_n right there isn't declared in the macro, and it isn't an argument
either. It should be an argument; having stuff inherited from the
enclosing context like that is confusing.

Same with _arr, btw.


Patch 2 LGTM.



Re: [HACKERS] Small improvement to compactify_tuples

From
Sokolov Yura
Date:
Hello, Claudio.

Thank you for the review and for confirming the improvement.

On 2017-09-23 01:12, Claudio Freire wrote:
> On Tue, Sep 12, 2017 at 12:49 PM, Sokolov Yura
> <funny.falcon@postgrespro.ru> wrote:
>> [...]
> 
> Patch 1 applies cleanly, builds, and make check runs fine.
> 
> The code looks similar in style to surrounding code too, so I'm not
> going to complain about the abundance of underscores in the macros :-p
> 
> I can reproduce the results in the OP's benchmark, with slightly
> different numbers, but an overall improvement of ~6%, which matches
> the OP's relative improvement.
> 
> Algorithmically, everything looks sound.
> 
> 
> A few minor comments about patch 1:
> 
> +    if (max == 1)
> +        goto end;
> 
> That goto is unnecessary; you could just as easily write
> 
> if (max > 1)
> {
>    ...
> }

Done.
(I don't like the extra indentation, though :-( )

> 
> 
> +#define pg_shell_sort_pass(elem_t, cmp, off) \
> +    do { \
> +        int _i, _j; \
> +        elem_t _temp; \
> +        for (_i = off; _i < _n; _i += off) \
> +        { \
> 
> _n right there isn't declared in the macro, and it isn't an argument
> either. It should be an argument; having stuff inherited from the
> enclosing context like that is confusing.
> 
> Same with _arr, btw.

pg_shell_sort_pass is not intended to be used outside pg_shell_sort
and pg_insertion_sort, so I think stealing from their context is OK.
Nonetheless, done.

> 
> 
> Patch 2 LGTM.

-- 
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Sat, Sep 23, 2017 at 5:56 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> [...]
>
> pg_shell_sort_pass is not intended to be used outside pg_shell_sort
> and pg_insertion_sort, so I think stealing from their context is OK.
> Nonetheless, done.

Looks good.

Marking this patch as ready for committer.



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
Sokolov Yura <funny.falcon@postgrespro.ru> writes:
> [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]

I started to review this patch.  I spent a fair amount of time on
beautifying the code, because I found it rather ugly and drastically
undercommented.  Once I had it to the point where it seemed readable,
I went to check the shellsort algorithm against Wikipedia's entry,
and found that this appears to be an incorrect implementation of
shellsort: where pg_shell_sort_pass has

        for (_i = off; _i < _n; _i += off) \

it seems to me that we need to have

        for (_i = off; _i < _n; _i += 1) \

or maybe just _i++.  As-is, this isn't h-sorting the whole file,
but just the subset of entries that have multiple-of-h indexes
(ie, the first of the h distinct subfiles that should get sorted).
The bug is masked by the final pass of plain insertion sort, but
we are not getting the benefit we should get from the earlier passes.
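
To spell the difference out with concrete indexes (an illustration
reusing the macro's local names, not code from the patch):

    /*
     * With off = 3 and _n = 9, "for (_i = off; _i < _n; _i += off)"
     * visits only _i = 3 and 6, i.e. it h-sorts just the subfile at
     * indexes {0, 3, 6}; the subfiles {1, 4, 7} and {2, 5, 8} are never
     * touched by that pass.  A textbook h-pass advances one element at
     * a time, comparing each element with the one "off" positions back:
     */
    for (_i = off; _i < _n; _i++)
    {
        if (cmp(_arr + _i - off, _arr + _i) > 0)
        {
            elem_t      _temp = _arr[_i];
            int         _j = _i;

            do
            {
                _arr[_j] = _arr[_j - off];
                _j -= off;
            } while (_j >= off && cmp(_arr + _j - off, &_temp) > 0);
            _arr[_j] = _temp;
        }
    }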

However, I'm a bit dubious that it's worth fixing that; instead
my inclination would be to rip out the shellsort implementation
entirely.  The code is only using it for the nitems <= 48 case
(which makes the first three offset steps certainly no-ops) and
I am really unconvinced that it's worth expending the code space
for a shellsort rather than plain insertion sort in that case,
especially when we have good reason to think that the input data
is nearly sorted.

BTW, the originally given test case shows no measurable improvement
on my box.  I was eventually able to convince myself by profiling
that the patch makes us spend less time in compactify_tuples, but
this test case isn't a very convincing one.

So, quite aside from the bug, I'm not excited about committing the
attached as-is.  I think we should remove pg_shell_sort and just
use pg_insertion_sort.  If somebody can show a test case that
provides a measurable speed improvement from the extra code,
I could be persuaded to reconsider.

I also wonder if the nitems <= 48 cutoff needs to be reconsidered
in light of this.  But since I can hardly measure any benefit from
the patch at all, I'm not in the best position to test different
values for that cutoff.

Have not looked at the 0002 patch yet.

            regards, tom lane

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb..1af1b85 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
***************
*** 18,23 ****
--- 18,24 ----
  #include "access/itup.h"
  #include "access/xlog.h"
  #include "storage/checksum.h"
+ #include "utils/inline_sort.h"
  #include "utils/memdebug.h"
  #include "utils/memutils.h"

*************** typedef struct itemIdSortData
*** 425,439 ****
  } itemIdSortData;
  typedef itemIdSortData *itemIdSort;

! static int
  itemoffcompare(const void *itemidp1, const void *itemidp2)
  {
-     /* Sort in decreasing itemoff order */
      return ((itemIdSort) itemidp2)->itemoff -
          ((itemIdSort) itemidp1)->itemoff;
  }

  /*
   * After removing or marking some line pointers unused, move the tuples to
   * remove the gaps caused by the removed items.
   */
--- 426,542 ----
  } itemIdSortData;
  typedef itemIdSortData *itemIdSort;

! /* Comparator for sorting in decreasing itemoff order */
! static inline int
  itemoffcompare(const void *itemidp1, const void *itemidp2)
  {
      return ((itemIdSort) itemidp2)->itemoff -
          ((itemIdSort) itemidp1)->itemoff;
  }

  /*
+  * Sort an array of itemIdSort's on itemoff, descending.
+  *
+  * This uses Shell sort.  Given that array is small and itemoffcompare
+  * can be inlined, it is much faster than general-purpose qsort.
+  */
+ static void
+ sort_itemIds_small(itemIdSort itemidbase, int nitems)
+ {
+     pg_shell_sort(itemIdSortData, itemidbase, nitems, itemoffcompare);
+ }
+
+ /*
+  * Sort an array of itemIdSort's on itemoff, descending.
+  *
+  * This uses bucket sort:
+  * - single pass of stable prefix sort on high 8 bits of itemoffs
+  * - then insertion sort on buckets larger than 1 element
+  */
+ static void
+ sort_itemIds(itemIdSort itemidbase, int nitems)
+ {
+     /* number of buckets to use: */
+ #define NSPLIT 256
+     /* divisor to scale input values into 0..NSPLIT-1: */
+ #define PREFDIV (BLCKSZ / NSPLIT)
+     /* per-bucket counts; we need two extra elements, see below */
+     uint16        count[NSPLIT + 2];
+     itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+     int            i,
+                 max,
+                 total,
+                 pos,
+                 highbits;
+
+     Assert(nitems <= lengthof(copy));
+
+     /*
+      * Count how many items in each bucket.  We assume all itemoff values are
+      * less than BLCKSZ, therefore dividing by PREFDIV gives a value less than
+      * NSPLIT.
+      */
+     memset(count, 0, sizeof(count));
+     for (i = 0; i < nitems; i++)
+     {
+         highbits = itemidbase[i].itemoff / PREFDIV;
+         count[highbits]++;
+     }
+
+     /*
+      * Now convert counts to bucket position info, placing the buckets in
+      * decreasing order.  After this loop, count[k+1] is start of bucket k
+      * (for 0 <= k < NSPLIT), count[k] is end+1 of bucket k, and therefore
+      * count[k] - count[k+1] is length of bucket k.
+      *
+      * Also detect whether any buckets have more than one element.  For this
+      * purpose, "max" is set to the OR of all the counts (not really the max).
+      */
+     max = total = count[NSPLIT - 1];
+     for (i = NSPLIT - 2; i >= 0; i--)
+     {
+         max |= count[i];
+         total += count[i];
+         count[i] = total;
+     }
+     Assert(count[0] == nitems);
+
+     /*
+      * Now copy the data to be sorted into appropriate positions in the copy[]
+      * array.  We increment each bucket-start pointer as we insert data into
+      * its bucket; hence, after this loop count[k+1] is the end+1 of bucket k,
+      * count[k+2] is the start of bucket k, and count[k+1] - count[k+2] is the
+      * length of bucket k.
+      */
+     for (i = 0; i < nitems; i++)
+     {
+         highbits = itemidbase[i].itemoff / PREFDIV;
+         pos = count[highbits + 1]++;
+         copy[pos] = itemidbase[i];
+     }
+     Assert(count[1] == nitems);
+
+     /*
+      * If any buckets are larger than 1 item, we must sort them.  They should
+      * be small enough to make insertion sort effective.
+      */
+     if (max > 1)
+     {
+         /* i is bucket number plus 1 */
+         for (i = NSPLIT; i > 0; i--)
+         {
+             pg_insertion_sort(itemIdSortData,
+                               copy + count[i + 1],
+                               count[i] - count[i + 1],
+                               itemoffcompare);
+         }
+     }
+
+     /* And transfer the sorted data back to the caller */
+     memcpy(itemidbase, copy, sizeof(itemIdSortData) * nitems);
+ }
+
+ /*
   * After removing or marking some line pointers unused, move the tuples to
   * remove the gaps caused by the removed items.
   */
*************** compactify_tuples(itemIdSort itemidbase,
*** 445,452 ****
      int            i;

      /* sort itemIdSortData array into decreasing itemoff order */
!     qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
!           itemoffcompare);

      upper = phdr->pd_special;
      for (i = 0; i < nitems; i++)
--- 548,558 ----
      int            i;

      /* sort itemIdSortData array into decreasing itemoff order */
!     /* empirically, bucket sort is worth the trouble above 48 items */
!     if (nitems > 48)
!         sort_itemIds(itemidbase, nitems);
!     else
!         sort_itemIds_small(itemidbase, nitems);

      upper = phdr->pd_special;
      for (i = 0; i < nitems; i++)
diff --git a/src/include/utils/inline_sort.h b/src/include/utils/inline_sort.h
index ...c97a248 .
*** a/src/include/utils/inline_sort.h
--- b/src/include/utils/inline_sort.h
***************
*** 0 ****
--- 1,88 ----
+ /*-------------------------------------------------------------------------
+  *
+  * inline_sort.h
+  *      Macros to perform specialized types of sorts.
+  *
+  *
+  * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+  * Portions Copyright (c) 1994, Regents of the University of California
+  *
+  * src/include/utils/inline_sort.h
+  *
+  *-------------------------------------------------------------------------
+  */
+ #ifndef INLINE_SORT_H
+ #define INLINE_SORT_H
+
+ /*
+  * pg_shell_sort - sort for small arrays with inlinable comparator.
+  *
+  * This is best used with arrays smaller than 200 elements, and could be
+  * safely used with up to 1000 elements.  But it degrades fast after that.
+  *
+  * Since this is implemented as a macro it can be optimized together with
+  * comparison function; using a macro or inlinable function is recommended.
+  *
+  * Arguments:
+  *     elem_t - type of array elements (for declaring temporary variables)
+  *     array    - pointer to elements to be sorted
+  *     nitems - number of elements to be sorted
+  *     cmp    - comparison function that accepts addresses of 2 elements
+  *              (same API as qsort comparison function).
+  * cmp argument should be a function or macro name.
+  * array and nitems arguments are evaluated only once.
+  *
+  * This uses Shellsort (see e.g. wikipedia's entry), with gaps selected as
+  * "gap(i) = smallest prime number below e^i".  These are close to the gaps
+  *  recommended by Incerpi & Sedgewick, but look to be better on average.
+  */
+ #define pg_shell_sort(elem_t, array, nitems, cmp) \
+     do { \
+         elem_t *_arr = (array); \
+         int        _n = (nitems); \
+         static const int _offsets[] = {401, 139, 53, 19, 7, 3}; \
+         int        _noff; \
+         for (_noff = 0; _noff < lengthof(_offsets); _noff++) \
+         { \
+             int        _off = _offsets[_noff]; \
+             pg_shell_sort_pass(elem_t, cmp, _off, _arr, _n); \
+         } \
+         pg_shell_sort_pass(elem_t, cmp, 1, _arr, _n); \
+     } while (0)
+
+ /*
+  * pg_insertion_sort - plain insertion sort.
+  * Useful for very small array, or if array was almost sorted already.
+  * Same API as pg_shell_sort.
+  */
+ #define pg_insertion_sort(elem_t, array, nitems, cmp) \
+     do { \
+         elem_t *_arr = (array); \
+         int        _n = (nitems); \
+         pg_shell_sort_pass(elem_t, cmp, 1, _arr, _n); \
+     } while (0)
+
+ /*
+  * One pass of Shellsort: simple insertion sort of the subset of entries
+  * at stride "off".  Not intended to be used outside of above macros.
+  */
+ #define pg_shell_sort_pass(elem_t, cmp, off, _arr, _n) \
+     do { \
+         int        _i; \
+         for (_i = off; _i < _n; _i += off) \
+         { \
+             if (cmp(_arr + _i - off, _arr + _i) > 0) \
+             { \
+                 elem_t    _temp = _arr[_i]; \
+                 int        _j = _i; \
+                 do \
+                 { \
+                     _arr[_j] = _arr[_j - off]; \
+                     _j -= off; \
+                 } while (_j >= off && cmp(_arr + _j - off, &_temp) > 0); \
+                 _arr[_j] = _temp; \
+             } \
+         } \
+     } while (0)
+
+ #endif                            /* INLINE_SORT_H */


Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Sokolov Yura <funny.falcon@postgrespro.ru> writes:
>> [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
>
> I started to review this patch.  I spent a fair amount of time on
> beautifying the code, because I found it rather ugly and drastically
> undercommented.  Once I had it to the point where it seemed readable,
> I went to check the shellsort algorithm against Wikipedia's entry,
> and found that this appears to be an incorrect implementation of
> shellsort: where pg_shell_sort_pass has
>
>                 for (_i = off; _i < _n; _i += off) \
>
> it seems to me that we need to have
>
>                 for (_i = off; _i < _n; _i += 1) \
>
> or maybe just _i++.  As-is, this isn't h-sorting the whole file,
> but just the subset of entries that have multiple-of-h indexes
> (ie, the first of the h distinct subfiles that should get sorted).
> The bug is masked by the final pass of plain insertion sort, but
> we are not getting the benefit we should get from the earlier passes.
>
> However, I'm a bit dubious that it's worth fixing that; instead
> my inclination would be to rip out the shellsort implementation
> entirely.  The code is only using it for the nitems <= 48 case
> (which makes the first three offset steps certainly no-ops) and
> I am really unconvinced that it's worth expending the code space
> for a shellsort rather than plain insertion sort in that case,
> especially when we have good reason to think that the input data
> is nearly sorted.

I actually noticed that and benchmarked some variants. Neither
made any noticeable difference in performance, so I decided not
to complain about them.

I guess the same case can be made for removing the shell sort.
So I'm inclined to agree.

> BTW, the originally given test case shows no measurable improvement
> on my box.

I did manage to reproduce the original test and got a consistent improvement.

> I was eventually able to convince myself by profiling
> that the patch makes us spend less time in compactify_tuples, but
> this test case isn't a very convincing one.
>
> So, quite aside from the bug, I'm not excited about committing the
> attached as-is.  I think we should remove pg_shell_sort and just
> use pg_insertion_sort.  If somebody can show a test case that
> provides a measurable speed improvement from the extra code,
> I could be persuaded to reconsider.

My tests modifying the shell sort didn't produce any measurable
difference, but I didn't test removing it altogether.



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
Claudio Freire <klaussfreire@gmail.com> writes:
> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> BTW, the originally given test case shows no measurable improvement
>> on my box.

> I did manage to reproduce the original test and got a consistent improvement.

It occurred to me that I could force the issue by hacking bufpage.c to
execute compactify_tuples multiple times each time it was called, as in
the first patch attached below.  This has nothing directly to do with real
performance of course, but it's making use of the PG system to provide
realistic test data for microbenchmarking compactify_tuples.  I was a bit
surprised to find that I had to set the repeat count to 1000 to make
compactify_tuples really dominate the runtime (while using the originally
posted test case ... maybe there's a better one?).  But once I did get it
to dominate the runtime, perf gave me this for the CPU hotspots:

+   27.97%    27.88%        229040  postmaster       libc-2.12.so                 [.] memmove
+   14.61%    14.57%        119704  postmaster       postgres                     [.] compactify_tuples
+   12.40%    12.37%        101566  postmaster       libc-2.12.so                 [.] _wordcopy_bwd_aligned
+   11.68%    11.65%         95685  postmaster       libc-2.12.so                 [.] _wordcopy_fwd_aligned
+    7.67%     7.64%         62801  postmaster       postgres                     [.] itemoffcompare
+    7.00%     6.98%         57303  postmaster       postgres                     [.] compactify_tuples_loop
+    4.53%     4.52%         37111  postmaster       postgres                     [.] pg_qsort
+    1.71%     1.70%         13992  postmaster       libc-2.12.so                 [.] memcpy

which says that micro-optimizing the sort step is a complete, utter waste
of time, and what we need to be worried about is the data copying part.

The memcpy part of the above is presumably from the scaffolding memcpy's
in compactify_tuples_loop, which is interesting because that's moving as
much data as the memmove's are.  So at least with RHEL6's version of
glibc, memmove is apparently a lot slower than memcpy.

This gave me the idea to memcpy the page into some workspace and then use
memcpy, not memmove, to put the tuples back into the caller's copy of the
page.  That gave me about a 50% improvement in observed TPS, and a perf
profile like this:

+   38.50%    38.40%        299520  postmaster       postgres                       [.] compactify_tuples
+   31.11%    31.02%        241975  postmaster       libc-2.12.so                   [.] memcpy
+    8.74%     8.72%         68022  postmaster       postgres                       [.] itemoffcompare
+    6.51%     6.49%         50625  postmaster       postgres                       [.] compactify_tuples_loop
+    4.21%     4.19%         32719  postmaster       postgres                       [.] pg_qsort
+    1.70%     1.69%         13213  postmaster       postgres                       [.] memcpy@plt

There still doesn't seem to be any point in replacing the qsort,
but it does seem like something like the second attached patch
might be worth doing.

So I'm now wondering why my results seem to be so much different
from those of other people who have tried this, both as to whether
compactify_tuples is worth working on at all and as to what needs
to be done to it if so.  Thoughts?

            regards, tom lane

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb..bf6d308 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
*************** compactify_tuples(itemIdSort itemidbase,
*** 465,470 ****
--- 465,489 ----
      phdr->pd_upper = upper;
  }

+ static void
+ compactify_tuples_loop(itemIdSort itemidbase, int nitems, Page page)
+ {
+     itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+     union {
+         char page[BLCKSZ];
+         double align;
+     } pagecopy;
+     int i;
+
+     for (i = 1; i < 1000; i++)
+     {
+         memcpy(copy, itemidbase, sizeof(itemIdSortData) * nitems);
+         memcpy(pagecopy.page, page, BLCKSZ);
+         compactify_tuples(copy, nitems, pagecopy.page);
+     }
+     compactify_tuples(itemidbase, nitems, page);
+ }
+
  /*
   * PageRepairFragmentation
   *
*************** PageRepairFragmentation(Page page)
*** 560,566 ****
                       errmsg("corrupted item lengths: total %u, available space %u",
                              (unsigned int) totallen, pd_special - pd_lower)));

!         compactify_tuples(itemidbase, nstorage, page);
      }

      /* Set hint bit for PageAddItem */
--- 579,585 ----
                       errmsg("corrupted item lengths: total %u, available space %u",
                              (unsigned int) totallen, pd_special - pd_lower)));

!         compactify_tuples_loop(itemidbase, nstorage, page);
      }

      /* Set hint bit for PageAddItem */
*************** PageIndexMultiDelete(Page page, OffsetNu
*** 940,946 ****
      phdr->pd_lower = SizeOfPageHeaderData + nused * sizeof(ItemIdData);

      /* and compactify the tuple data */
!     compactify_tuples(itemidbase, nused, page);
  }


--- 959,965 ----
      phdr->pd_lower = SizeOfPageHeaderData + nused * sizeof(ItemIdData);

      /* and compactify the tuple data */
!     compactify_tuples_loop(itemidbase, nused, page);
  }


diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb..e485398 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
*************** static void
*** 441,446 ****
--- 441,450 ----
  compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
  {
      PageHeader    phdr = (PageHeader) page;
+     union {
+         char page[BLCKSZ];
+         double align;
+     } pagecopy;
      Offset        upper;
      int            i;

*************** compactify_tuples(itemIdSort itemidbase,
*** 448,464 ****
      qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
            itemoffcompare);

      upper = phdr->pd_special;
      for (i = 0; i < nitems; i++)
      {
          itemIdSort    itemidptr = &itemidbase[i];
          ItemId        lp;

-         lp = PageGetItemId(page, itemidptr->offsetindex + 1);
          upper -= itemidptr->alignedlen;
!         memmove((char *) page + upper,
!                 (char *) page + itemidptr->itemoff,
!                 itemidptr->alignedlen);
          lp->lp_off = upper;
      }

--- 452,470 ----
      qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
            itemoffcompare);

+     memcpy(pagecopy.page, page, BLCKSZ);
+
      upper = phdr->pd_special;
      for (i = 0; i < nitems; i++)
      {
          itemIdSort    itemidptr = &itemidbase[i];
          ItemId        lp;

          upper -= itemidptr->alignedlen;
!         memcpy((char *) page + upper,
!                pagecopy.page + itemidptr->itemoff,
!                itemidptr->alignedlen);
!         lp = PageGetItemId(page, itemidptr->offsetindex + 1);
          lp->lp_off = upper;
      }



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
I wrote:
> Have not looked at the 0002 patch yet.

I looked at that one, and it seems to be a potential win with no
downside, so pushed.  (I tweaked it slightly to avoid an unnecessary
conflict with the test patch I posted earlier.)
        regards, tom lane



Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Fri, Nov 3, 2017 at 4:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> [...]
>
> So I'm now wondering why my results seem to be so much different
> from those of other people who have tried this, both as to whether
> compactify_tuples is worth working on at all and as to what needs
> to be done to it if so.  Thoughts?

I'm going to venture a guess that the versions of gcc and libc, and the
build options used both in libc (i.e. the distro) and in postgres, may
play a part here.

I'm running with glibc 2.22, for instance, and building with gcc 4.8.5.

I will try and benchmark memcpy vs memmove and see what the
performance difference is there with my versions, too. This may
heavily depend on compiler optimizations that may vary between
versions, since memcpy/memmove tend to be inlined a lot.
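
A minimal harness along these lines (a sketch, not code from the thread;
compile with -O2 and treat the numbers as indicative only, since a clever
compiler may elide some of the work) would show the raw difference:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define BLCKSZ 8192
    #define LOOPS  1000000L

    int
    main(void)
    {
        static char page[BLCKSZ], scratch[BLCKSZ];
        clock_t     t0;
        long        i;

        for (i = 0; i < BLCKSZ; i++)
            page[i] = (char) i;

        /* overlapping shift within the page: must use memmove */
        t0 = clock();
        for (i = 0; i < LOOPS; i++)
            memmove(page + 8, page, BLCKSZ - 8);
        printf("memmove:   %.2fs\n", (double) (clock() - t0) / CLOCKS_PER_SEC);

        /* the workspace trick: copy the page out, then memcpy back */
        t0 = clock();
        for (i = 0; i < LOOPS; i++)
        {
            memcpy(scratch, page, BLCKSZ);
            memcpy(page + 8, scratch, BLCKSZ - 8);
        }
        printf("2x memcpy: %.2fs\n", (double) (clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }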



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:
2017-11-03 5:46 GMT+03:00 Tom Lane <tgl@sss.pgh.pa.us>:
>
> Sokolov Yura <funny.falcon@postgrespro.ru> writes:
> > [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
>
> I went to check the shellsort algorithm against Wikipedia's entry,
> and found that this appears to be an incorrect implementation of
> shellsort: where pg_shell_sort_pass has
>
>                 for (_i = off; _i < _n; _i += off) \
>
> it seems to me that we need to have
>
>                 for (_i = off; _i < _n; _i += 1) \
>
> or maybe just _i++.


Shame on me :-(
I've written shell sort several times, yet I forgot to recheck myself
once again. And it looks like the best gap sequence from Wikipedia really
is the best ({301, 132, 57, 23, 10, 4} in my notation).


2017-11-03 17:37 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> BTW, the originally given test case shows no measurable improvement
>> on my box.
>
> I did manage to reproduce the original test and got a consistent improvement.

I've rechecked this using my benchmark.
Excluding memmove, compactify_tuples consumes:
- with qsort: 11.66% CPU (pg_qsort + med3 + swapfunc + itemoffcompare + compactify_tuples = 5.97 + 0.51 + 2.87 + 1.88 + 0.44)
- with just insertion sort: 6.65% CPU (the sort is inlined, itemoffcompare is also inlined, so the whole cost shows up as compactify_tuples)
- with just shell sort: 5.98% CPU (the sort is inlined again)
- with bucket sort: 1.76% CPU (sort_itemIds + compactify_tuples = 1.30 + 0.46)

(memmove consumes 1.29% CPU)

TPS also reflects the changes:
~17k tps with qsort
~19k tps with bucket sort

Vacuum of the benchmark's table is also improved:
~3s with qsort,
~2.4s with bucket sort

Of course, this benchmark is quite synthetic: the table is unlogged, the
tuples are small, and synchronous commit is off. Still, such a table is
useful in some situations (think of not-too-important but useful
counters, like a "photo watch count"). And the patch affects more than
this synthetic benchmark: it improves restore performance, as Heikki
mentioned, and the CPU consumption of vacuum (though vacuum is more I/O
bound).

> I think we should remove pg_shell_sort and just use pg_insertion_sort.

Using shell sort is just a bit safer. It's doubtful that a worst-case
pattern (for insertion sort) will appear, but what if it does? Shell sort
is a bit better on the whole array (5.98% vs 6.65%), though on small
arrays the difference will be much smaller.

With regards,
Sokolov Yura aka funny_falcon

Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:
Юрий Соколов <funny.falcon@gmail.com> wrote:
> TPS also reflects the changes:
> ~17k tps with qsort
> ~19k tps with bucket sort
>
> Vacuum of the benchmark's table is also improved:
> ~3s with qsort,
> ~2.4s with bucket sort

One thing that you have to be careful with when it comes to our qsort
with partially presorted inputs is what I like to call "banana skin 
effects":

https://postgr.es/m/CAH2-WzkU2xK2dpZ7N8-A1MvuUTTUvhqkfnA+eUtwNwCtgyCJgw@mail.gmail.com

This may have nothing at all to do with your results; I'm just pointing
it out as a possibility.

-- 
Peter Geoghegan



Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Sat, Nov 4, 2017 at 8:07 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
> [...]
>
> I've rechecked this using my benchmark.
> Excluding memmove, compactify_tuples consumes:
> - with qsort: 11.66% CPU (pg_qsort + med3 + swapfunc + itemoffcompare +
> compactify_tuples = 5.97 + 0.51 + 2.87 + 1.88 + 0.44)
> - with just insertion sort: 6.65% CPU (the sort is inlined,
> itemoffcompare is also inlined, so the whole cost shows up as
> compactify_tuples)
> - with just shell sort: 5.98% CPU (the sort is inlined again)
> - with bucket sort: 1.76% CPU (sort_itemIds + compactify_tuples = 1.30 +
> 0.46)

Is that just insertion sort without bucket sort?

Because I think shell sort has little impact in your original patch
because it's rarely exercised. With bucket sort, most buckets are very
small, too small for shell sort to do any useful work.

That's why I'm inclined to agree with Tom that we could safely simplify
it out and remove it, without much impact.

Maybe leave a fallback to qsort if some corner case produces big buckets?



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:

2017-11-05 20:44 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
>
> [...]
>
> Is that just insertion sort without bucket sort?

Yes. Just to show that inlined insertion sort is better than non-inlined qsort
in this particular use-case.
 
> Because I think shell sort has little impact in your original patch
> because it's rarely exercised. With bucket sort, most buckets are very
> small, too small for shell sort to do any useful work.

Yes. In the patch, buckets are sorted with insertion sort. Shell sort is
used only on the full array, when its size is less than 48.
Bucket sort has the constant overhead of traversing all buckets, even if
they are empty. That is why I think shell sort is better for small
arrays, though I didn't measure that carefully; insertion sort for small
arrays would probably be just enough.

> Maybe leave a fallback to qsort if some corner case produces big buckets?

For 8kB pages, each bucket covers 32 bytes. So, for heap pages there is
at most 1 heap tuple per bucket, and for index pages at most 2 index
tuples per bucket. For 32kB pages it is 4 heap tuples and 8 index tuples
per bucket. It would be unnecessary overhead to call the non-inlineable
qsort in these cases.
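
(Restating the arithmetic: the bucket width is BLCKSZ / NSPLIT, so

    8192 / 256  = 32 bytes per bucket     /* 8kB pages */
    32768 / 256 = 128 bytes per bucket    /* 32kB pages */

and the tuples-per-bucket figures above follow from the minimum aligned
tuple sizes.)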

So, I think, shell sort could be removed, but insertion sort have to remain.

I'd prefer shell sort to remain as well. It could be useful in other places
too, because it is easily inlinable and provides performance comparable to
qsort for up to several hundred elements.
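
For illustration, a minimal inlinable shell sort along these lines might
look as follows (a sketch only: descending order, the gap sequence
mentioned earlier in the thread plus the mandatory final gap of 1; not
the patch's actual macro):

#include <stdint.h>

/* Shell sort of offsets in decreasing order.  Note the inner loop
 * advances i by 1, per Tom's correction earlier in the thread. */
static inline void
shell_sort_offsets_desc(uint16_t *a, int n)
{
    static const int gaps[] = {301, 132, 57, 23, 10, 4, 1};
    int         g;

    for (g = 0; g < (int) (sizeof(gaps) / sizeof(gaps[0])); g++)
    {
        int gap = gaps[g];
        int i;

        for (i = gap; i < n; i++)
        {
            uint16_t tmp = a[i];
            int      j = i;

            while (j >= gap && a[j - gap] < tmp)
            {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = tmp;
        }
    }
}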

With regards,
Sokolov Yura aka funny_falcon.

Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов <funny.falcon@gmail.com> wrote:
>> Maybe leave a fallback to qsort if some corner case produces big buckets?
>
> For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is at
> most 1 heap-tuple per bucket, and for index pages it is at most 2 index
> tuples per bucket. For 32kb pages it is 4 heap-tuples and 8 index-tuples
> per bucket.
> It will be unnecessary overhead to call non-inlineable qsort in this cases
>
> So, I think, shell sort could be removed, but insertion sort have to remain.
>
> I'd prefer shell sort to remain also. It could be useful in other places
> also,
> because it is easily inlinable, and provides comparable to qsort performance
> up to several hundreds of elements.

I'd rather have an inlineable qsort.

And I'd recommend doing that when there is a need, and I don't think
this patch really needs it, since bucket sort handles most cases
anyway.



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:

2017-11-06 17:55 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
>
> On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов <funny.falcon@gmail.com> wrote:
> >> Maybe leave a fallback to qsort if some corner case produces big buckets?
> >
> > For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is at
> > most 1 heap-tuple per bucket, and for index pages it is at most 2 index
> > tuples per bucket. For 32kb pages it is 4 heap-tuples and 8 index-tuples
> > per bucket.
> > It will be unnecessary overhead to call non-inlineable qsort in this cases
> >
> > So, I think, shell sort could be removed, but insertion sort have to remain.
> >
> > I'd prefer shell sort to remain also. It could be useful in other places
> > also,
> > because it is easily inlinable, and provides comparable to qsort performance
> > up to several hundreds of elements.
>
> I'd rather have an inlineable qsort.

But qsort is recursive, so it is quite hard to make it inlineable. And it will
still be much heavier than insertion sort (btw, all qsort implementations use
insertion sort for small arrays), and heavier than shell sort for small arrays.

I can write a specialized qsort for this case, but it will be a larger chunk
of code than shell sort.

> And I'd recommend doing that when there is a need, and I don't think
> this patch really needs it, since bucket sort handles most cases
> anyway.

And it still needs insertion sort for the buckets.
I can agree to get rid of shell sort, but insertion sort is necessary.

Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Mon, Nov 6, 2017 at 6:58 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
>
> 2017-11-06 17:55 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
>>
>> On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов <funny.falcon@gmail.com>
>> wrote:
>> >> Maybe leave a fallback to qsort if some corner case produces big
>> >> buckets?
>> >
>> > For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is at
>> > most 1 heap-tuple per bucket, and for index pages it is at most 2 index
>> > tuples per bucket. For 32kb pages it is 4 heap-tuples and 8 index-tuples
>> > per bucket.
>> > It will be unnecessary overhead to call non-inlineable qsort in this
>> > cases
>> >
>> > So, I think, shell sort could be removed, but insertion sort have to
>> > remain.
>> >
>> > I'd prefer shell sort to remain also. It could be useful in other places
>> > also,
>> > because it is easily inlinable, and provides comparable to qsort
>> > performance
>> > up to several hundreds of elements.
>>
>> I'd rather have an inlineable qsort.
>
> But qsort is recursive. It is quite hard to make it inlineable. And still it
> will be
> much heavier than insertion sort (btw, all qsort implementations uses
> insertion
> sort for small arrays). And it will be heavier than shell sort for small
> arrays.

I haven't seen this trick used in postgres, nor do I know whether it
would be well received, so this is more like throwing an idea to see
if it sticks...

But a way to do this without macros is to have an includable
"template" algorithm that simply doesn't define the comparison
function/type, it rather assumes it:

qsort_template.h

#define QSORT_NAME qsort_ ## QSORT_SUFFIX

static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
{   ... if (ELEM_LESS(arr[a], arr[b]))   ...
}

#undef QSORT_NAME

Then, in "offset_qsort.h":

#define QSORT_SUFFIX offset
#define ELEM_TYPE offset
#define ELEM_LESS(a,b) ((a) < (b))

#include "qsort_template.h"

#undef QSORT_SUFFIX
#undef ELEM_TYPE
#undef ELEM_LESS

Now, I realize this may have its cons, but it does simplify
maintenance of type-specific or parameterized variants of
performance-critical functions.
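
(One preprocessor wrinkle with the sketch above: in an object-like macro,
## suppresses expansion of its operands, so QSORT_NAME as written would
paste the literal token QSORT_SUFFIX. The usual fix is a two-level
concatenation helper; CONCAT and CONCAT_ are just illustrative names:)

#define CONCAT_(a, b) a##b            /* pastes after arguments expand */
#define CONCAT(a, b)  CONCAT_(a, b)   /* extra level forces expansion */
#define QSORT_NAME    CONCAT(qsort_, QSORT_SUFFIX)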

> I can do specialized qsort for this case. But it will be larger bunch of
> code, than
> shell sort.
>
>> And I'd recommend doing that when there is a need, and I don't think
>> this patch really needs it, since bucket sort handles most cases
>> anyway.
>
> And it still needs insertion sort for buckets.
> I can agree to get rid of shell sort. But insertion sort is necessary.

I didn't suggest getting rid of insertion sort. But the trick above is
equally applicable to insertion sort.



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:
2017-11-07 1:14 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
>
> On Mon, Nov 6, 2017 at 6:58 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
> >
> > 2017-11-06 17:55 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
> >>
> >> On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов <funny.falcon@gmail.com>
> >> wrote:
> >> >> Maybe leave a fallback to qsort if some corner case produces big
> >> >> buckets?
> >> >
> >> > For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is at
> >> > most 1 heap-tuple per bucket, and for index pages it is at most 2 index
> >> > tuples per bucket. For 32kb pages it is 4 heap-tuples and 8 index-tuples
> >> > per bucket.
> >> > It will be unnecessary overhead to call non-inlineable qsort in this
> >> > cases
> >> >
> >> > So, I think, shell sort could be removed, but insertion sort have to
> >> > remain.
> >> >
> >> > I'd prefer shell sort to remain also. It could be useful in other places
> >> > also,
> >> > because it is easily inlinable, and provides comparable to qsort
> >> > performance
> >> > up to several hundreds of elements.
> >>
> >> I'd rather have an inlineable qsort.
> >
> > But qsort is recursive. It is quite hard to make it inlineable. And still it
> > will be
> > much heavier than insertion sort (btw, all qsort implementations uses
> > insertion
> > sort for small arrays). And it will be heavier than shell sort for small
> > arrays.
>
> I haven't seen this trick used in postgres, nor do I know whether it
> would be well received, so this is more like throwing an idea to see
> if it sticks...
>
> But a way to do this without macros is to have an includable
> "template" algorithm that simply doesn't define the comparison
> function/type, it rather assumes it:
>
> qsort_template.h
>
> #define QSORT_NAME qsort_ ## QSORT_SUFFIX
>
> static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
> {
>     ... if (ELEM_LESS(arr[a], arr[b]))
>     ...
> }
>
> #undef QSORT_NAME
>
> Then, in "offset_qsort.h":
>
> #define QSORT_SUFFIX offset
> #define ELEM_TYPE offset
> #define ELEM_LESS(a,b) ((a) < (b))
>
> #include "qsort_template.h"
>
> #undef QSORT_SUFFIX
> #undef ELEM_TYPE
> #undef ELEM_LESS
>
> Now, I realize this may have its cons, but it does simplify
> maintainance of type-specific or parameterized variants of
> performance-critical functions.
>
> > I can do specialized qsort for this case. But it will be larger bunch of
> > code, than
> > shell sort.
> >
> >> And I'd recommend doing that when there is a need, and I don't think
> >> this patch really needs it, since bucket sort handles most cases
> >> anyway.
> >
> > And it still needs insertion sort for buckets.
> > I can agree to get rid of shell sort. But insertion sort is necessary.
>
> I didn't suggest getting rid of insertion sort. But the trick above is
> equally applicable to insertion sort.

This trick is used in simplehash.h. I agree, it could be useful for qsort.
It will not make qsort inlineable, but it will cut the overhead considerably.

This trick is too heavyweight for insertion sort alone, though. Without
shell sort, insertion sort can be expressed as a 14-line macro (8 lines
without curly braces). But if insertion sort is defined together with
qsort (because qsort still needs it), then it is justifiable.
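
For scale, a hypothetical version of such a macro might look like this
(illustrative names only, not the patch's actual code):

/* Insertion sort macro, parameterized by element type and a less-than
 * expression; sorts a[0..n-1] in place. */
#define INSERTION_SORT(type, lt, a, n)                               \
    do {                                                             \
        int _i, _j;                                                  \
        for (_i = 1; _i < (n); _i++)                                 \
        {                                                            \
            type _tmp = (a)[_i];                                     \
            for (_j = _i; _j > 0 && lt(_tmp, (a)[_j - 1]); _j--)     \
                (a)[_j] = (a)[_j - 1];                               \
            (a)[_j] = _tmp;                                          \
        }                                                            \
    } while (0)

/* usage sketch: sort itemIdSortData by decreasing itemoff */
/*   #define OFF_BEFORE(x, y) ((x).itemoff > (y).itemoff)            */
/*   INSERTION_SORT(itemIdSortData, OFF_BEFORE, itemidbase, nitems); */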

Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Mon, Nov 6, 2017 at 9:08 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
> 2017-11-07 1:14 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
>>
>> I haven't seen this trick used in postgres, nor do I know whether it
>> would be well received, so this is more like throwing an idea to see
>> if it sticks...
>>
>> But a way to do this without macros is to have an includable
>> "template" algorithm that simply doesn't define the comparison
>> function/type, it rather assumes it:
>>
>> qsort_template.h
>>
>> #define QSORT_NAME qsort_ ## QSORT_SUFFIX
>>
>> static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
>> {
>>     ... if (ELEM_LESS(arr[a], arr[b]))
>>     ...
>> }
>>
>> #undef QSORT_NAME
>>
>> Then, in "offset_qsort.h":
>>
>> #define QSORT_SUFFIX offset
>> #define ELEM_TYPE offset
>> #define ELEM_LESS(a,b) ((a) < (b))
>>
>> #include "qsort_template.h"
>>
>> #undef QSORT_SUFFIX
>> #undef ELEM_TYPE
>> #undef ELEM_LESS
>>
>> Now, I realize this may have its cons, but it does simplify
>> maintainance of type-specific or parameterized variants of
>> performance-critical functions.
>>
>> > I can do specialized qsort for this case. But it will be larger bunch of
>> > code, than
>> > shell sort.
>> >
>> >> And I'd recommend doing that when there is a need, and I don't think
>> >> this patch really needs it, since bucket sort handles most cases
>> >> anyway.
>> >
>> > And it still needs insertion sort for buckets.
>> > I can agree to get rid of shell sort. But insertion sort is necessary.
>>
>> I didn't suggest getting rid of insertion sort. But the trick above is
>> equally applicable to insertion sort.
>
> This trick is used in simplehash.h . I agree, it could be useful for qsort.
> This will not make qsort inlineable, but will reduce overhead much.
>
> This trick is too heavy-weight for insertion sort alone, though. Without
> shellsort, insertion sort could be expressed in 14 line macros ( 8 lines
> without curly braces). But if insertion sort will be defined together with
> qsort (because qsort still needs it), then it is justifiable.

What do you mean by heavyweight?

Aside from requiring all that include magic, if you place specialized
sort functions in a reusable header, using it is as simple as
including the type-specific header (or declaring the type macros and
including the template), and using them as regular functions. There's
no runtime overhead involved, especially if you declare the comparison
function as a macro or a static inline function. The sort itself can
be declared static inline as well, and the compiler will decide
whether it's worth inlining.
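
For instance, a hedged usage sketch of that pattern (QSORT_SUFFIX,
ELEM_TYPE, ELEM_LESS and qsort_template.h are all the hypothetical names
from the sketch upthread; the generated function is assumed to take a
pointer and an element count):

/* instantiate a descending sort over uint16 page offsets */
#define QSORT_SUFFIX itemoff
#define ELEM_TYPE uint16_t
#define ELEM_LESS(a, b) ((b) < (a))   /* "less" = larger offset first */
#include "qsort_template.h"           /* hypothetical template header */
#undef QSORT_SUFFIX
#undef ELEM_TYPE
#undef ELEM_LESS

/* ...and a caller just calls the generated static inline function: */
/*     qsort_itemoff(offsets, noffsets); */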



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:


2017-11-07 17:15 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
>
> On Mon, Nov 6, 2017 at 9:08 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
> > 2017-11-07 1:14 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
> >>
> >> I haven't seen this trick used in postgres, nor do I know whether it
> >> would be well received, so this is more like throwing an idea to see
> >> if it sticks...
> >>
> >> But a way to do this without macros is to have an includable
> >> "template" algorithm that simply doesn't define the comparison
> >> function/type, it rather assumes it:
> >>
> >> qsort_template.h
> >>
> >> #define QSORT_NAME qsort_ ## QSORT_SUFFIX
> >>
> >> static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
> >> {
> >>     ... if (ELEM_LESS(arr[a], arr[b]))
> >>     ...
> >> }
> >>
> >> #undef QSORT_NAME
> >>
> >> Then, in "offset_qsort.h":
> >>
> >> #define QSORT_SUFFIX offset
> >> #define ELEM_TYPE offset
> >> #define ELEM_LESS(a,b) ((a) < (b))
> >>
> >> #include "qsort_template.h"
> >>
> >> #undef QSORT_SUFFIX
> >> #undef ELEM_TYPE
> >> #undef ELEM_LESS
> >>
> >> Now, I realize this may have its cons, but it does simplify
> >> maintainance of type-specific or parameterized variants of
> >> performance-critical functions.
> >>
> >> > I can do specialized qsort for this case. But it will be larger bunch of
> >> > code, than
> >> > shell sort.
> >> >
> >> >> And I'd recommend doing that when there is a need, and I don't think
> >> >> this patch really needs it, since bucket sort handles most cases
> >> >> anyway.
> >> >
> >> > And it still needs insertion sort for buckets.
> >> > I can agree to get rid of shell sort. But insertion sort is necessary.
> >>
> >> I didn't suggest getting rid of insertion sort. But the trick above is
> >> equally applicable to insertion sort.
> >
> > This trick is used in simplehash.h . I agree, it could be useful for qsort.
> > This will not make qsort inlineable, but will reduce overhead much.
> >
> > This trick is too heavy-weight for insertion sort alone, though. Without
> > shellsort, insertion sort could be expressed in 14 line macros ( 8 lines
> > without curly braces). But if insertion sort will be defined together with
> > qsort (because qsort still needs it), then it is justifiable.
>
> What do you mean by heavyweight?


I mean, I've already made a reusable sort implementation with macros
that is called like a function (with a type parameter). If we are talking
only about insertion sort, then such a macro looks much prettier than
an included file.

But qsort is better implemented with an included template header.

BTW, there is an example of defining many functions with a call to a
template macro instead of including a template header:
But it looks ugly.

>
> Aside from requiring all that include magic, if you place specialized
> sort functions in a reusable header, using it is as simple as
> including the type-specific header (or declaring the type macros and
> including the template), and using them as regular functions. There's
> no runtime overhead involved, especially if you declare the comparison
> function as a macro or a static inline function. The sort itself can
> be declared static inline as well, and the compiler will decide
> whether it's worth inlining.

OK, if no one complains about yet another qsort implementation,
I will add a template header for qsort. Since qsort needs insertion sort,
it will be in the same file.
Do you approve of this?

With regards,
Sokolov Yura

Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Tue, Nov 7, 2017 at 11:42 AM, Юрий Соколов <funny.falcon@gmail.com> wrote:
>
>
> 2017-11-07 17:15 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
>> Aside from requiring all that include magic, if you place specialized
>> sort functions in a reusable header, using it is as simple as
>> including the type-specific header (or declaring the type macros and
>> including the template), and using them as regular functions. There's
>> no runtime overhead involved, especially if you declare the comparison
>> function as a macro or a static inline function. The sort itself can
>> be declared static inline as well, and the compiler will decide
>> whether it's worth inlining.
>
> OK, if no one complains about yet another qsort implementation,
> I will add a template header for qsort. Since qsort needs insertion sort,
> it will be in the same file.
> Do you approve of this?
>
> With regards,
> Sokolov Yura

If you need it. I'm not particularly fond of writing code before it's needed.

If you can measure the impact for that particular case where qsort
might be needed, and it's a real-world case, then by all means.

Otherwise, if it's a rarely-encountered corner case, I'd recommend
simply calling the stdlib's qsort.



Re: [HACKERS] Small improvement to compactify_tuples

From
Andres Freund
Date:
On 2017-11-07 12:12:02 -0300, Claudio Freire wrote:
> If you need it. I'm not particularly fond of writing code before it's needed.

+1

> Otherwise, if it's a rarely-encountered corner case, I'd recommend
> simply calling the stdlib's qsort.

FWIW, we always map qsort onto our own implementation:

#define qsort(a,b,c,d) pg_qsort(a,b,c,d)

Greetings,

Andres Freund



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
I've been getting less and less excited about this patch, because I still
couldn't measure any above-the-noise performance improvement without
artificial exaggerations, and some cases seemed actually slower.

However, this morning I had an epiphany: why are we sorting at all?

There is no requirement that these functions preserve the physical
ordering of the tuples' data areas, only that the line-pointer ordering be
preserved.  Indeed, reorganizing the data areas into an ordering matching
the line pointers is probably a good thing, because it should improve
locality of access in future scans of the page.  This is trivial to
implement if we copy the data into a workspace area and back again, as
I was already proposing to do to avoid memmove.  Moreover, at that point
there's little value in a separate compactify function at all: we can
integrate the data-copying logic into the line pointer scan loops in
PageRepairFragmentation and PageIndexMultiDelete, and get rid of the
costs of constructing the intermediate itemIdSortData arrays.

That led me to the attached patch, which is the first version of any
of this work that produces an above-the-noise performance win for me.
I'm seeing 10-20% gains on this modified version of Yura's original
example:

psql -f test3setup.sql
pgbench -M prepared -c 3 -s 10000000 -T 300 -P 3 -n -f test3.sql

(sql scripts also attached below; I'm using 1GB shared_buffers and
fsync off, other parameters stock.)

However, there are a couple of objections that could be raised to
this patch:

1. It's trading off per-byte work, in the form of an extra memcpy,
to save sorting work that has per-tuple costs.  Therefore, the relatively
narrow tuples used in Yura's example offer a best-case scenario;
with wider tuples the performance might be worse.

2. On a platform with memmove not so much worse than memcpy as I'm
seeing on my RHEL6 server, trading memmove for memcpy might not be
such a win.

To address point 1, I tried some measurements on the standard pgbench
scenario, which uses significantly wider tuples.  In hopes of addressing
point 2, I also ran the measurements on a laptop running Fedora 25
(gcc 6.4.1, glibc 2.24); I haven't actually checked memmove vs memcpy
on that machine, but at least it's a reasonably late-model glibc.

What I'm getting from the standard pgbench measurements, on both machines,
is that this patch might be a couple percent slower than HEAD, but that is
barely above the noise floor so I'm not too sure about it.

So I think we should seriously consider the attached, but it'd be a
good idea to benchmark it on a wider variety of platforms and test
cases.

            regards, tom lane

drop table if exists test3;

create unlogged table test3 (
         id integer PRIMARY KEY with (fillfactor=85),
         val text
     ) WITH (fillfactor=85);

insert into test3 select i, '!'||i from generate_series(1, 10000000) as i;

vacuum analyze; checkpoint;

create or replace function dotest3(n int, scale float8) returns void
language plpgsql as $$
begin
for i in 1..n loop
  declare
    id1 int := random() * scale;
    id2 int := random() * scale;
  begin
    perform * from test3 where id = id1;
    update test3 set val = '!'|| id2 where id = id1;
  end;
end loop;
end $$;
select dotest3(100, :scale);
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index b6aa2af..73b73de 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
*************** PageRestoreTempPage(Page tempPage, Page
*** 415,471 ****
  }

  /*
-  * sorting support for PageRepairFragmentation and PageIndexMultiDelete
-  */
- typedef struct itemIdSortData
- {
-     uint16        offsetindex;    /* linp array index */
-     int16        itemoff;        /* page offset of item data */
-     uint16        alignedlen;        /* MAXALIGN(item data len) */
- } itemIdSortData;
- typedef itemIdSortData *itemIdSort;
-
- static int
- itemoffcompare(const void *itemidp1, const void *itemidp2)
- {
-     /* Sort in decreasing itemoff order */
-     return ((itemIdSort) itemidp2)->itemoff -
-         ((itemIdSort) itemidp1)->itemoff;
- }
-
- /*
-  * After removing or marking some line pointers unused, move the tuples to
-  * remove the gaps caused by the removed items.
-  */
- static void
- compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
- {
-     PageHeader    phdr = (PageHeader) page;
-     Offset        upper;
-     int            i;
-
-     /* sort itemIdSortData array into decreasing itemoff order */
-     qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
-           itemoffcompare);
-
-     upper = phdr->pd_special;
-     for (i = 0; i < nitems; i++)
-     {
-         itemIdSort    itemidptr = &itemidbase[i];
-         ItemId        lp;
-
-         lp = PageGetItemId(page, itemidptr->offsetindex + 1);
-         upper -= itemidptr->alignedlen;
-         memmove((char *) page + upper,
-                 (char *) page + itemidptr->itemoff,
-                 itemidptr->alignedlen);
-         lp->lp_off = upper;
-     }
-
-     phdr->pd_upper = upper;
- }
-
- /*
   * PageRepairFragmentation
   *
   * Frees fragmented space on a page.
--- 415,420 ----
*************** PageRepairFragmentation(Page page)
*** 481,494 ****
      Offset        pd_lower = ((PageHeader) page)->pd_lower;
      Offset        pd_upper = ((PageHeader) page)->pd_upper;
      Offset        pd_special = ((PageHeader) page)->pd_special;
!     itemIdSortData itemidbase[MaxHeapTuplesPerPage];
!     itemIdSort    itemidptr;
!     ItemId        lp;
!     int            nline,
!                 nstorage,
!                 nunused;
!     int            i;
!     Size        totallen;

      /*
       * It's worth the trouble to be more paranoid here than in most places,
--- 430,444 ----
      Offset        pd_lower = ((PageHeader) page)->pd_lower;
      Offset        pd_upper = ((PageHeader) page)->pd_upper;
      Offset        pd_special = ((PageHeader) page)->pd_special;
!     int            new_pd_upper,
!                 nline,
!                 nunused,
!                 i;
!     union
!     {
!         char        page[BLCKSZ];
!         double        align;        /* force workspace to be MAXALIGN'd */
!     }            workspace;

      /*
       * It's worth the trouble to be more paranoid here than in most places,
*************** PageRepairFragmentation(Page page)
*** 508,563 ****
                          pd_lower, pd_upper, pd_special)));

      /*
!      * Run through the line pointer array and collect data about live items.
       */
      nline = PageGetMaxOffsetNumber(page);
!     itemidptr = itemidbase;
!     nunused = totallen = 0;
      for (i = FirstOffsetNumber; i <= nline; i++)
      {
!         lp = PageGetItemId(page, i);
          if (ItemIdIsUsed(lp))
          {
              if (ItemIdHasStorage(lp))
              {
!                 itemidptr->offsetindex = i - 1;
!                 itemidptr->itemoff = ItemIdGetOffset(lp);
!                 if (unlikely(itemidptr->itemoff < (int) pd_upper ||
!                              itemidptr->itemoff >= (int) pd_special))
                      ereport(ERROR,
                              (errcode(ERRCODE_DATA_CORRUPTED),
!                              errmsg("corrupted item pointer: %u",
!                                     itemidptr->itemoff)));
!                 itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
!                 totallen += itemidptr->alignedlen;
!                 itemidptr++;
              }
          }
          else
          {
!             /* Unused entries should have lp_len = 0, but make sure */
!             ItemIdSetUnused(lp);
              nunused++;
          }
      }

!     nstorage = itemidptr - itemidbase;
!     if (nstorage == 0)
!     {
!         /* Page is completely empty, so just reset it quickly */
!         ((PageHeader) page)->pd_upper = pd_special;
!     }
!     else
!     {
!         /* Need to compact the page the hard way */
!         if (totallen > (Size) (pd_special - pd_lower))
!             ereport(ERROR,
!                     (errcode(ERRCODE_DATA_CORRUPTED),
!                      errmsg("corrupted item lengths: total %u, available space %u",
!                             (unsigned int) totallen, pd_special - pd_lower)));

!         compactify_tuples(itemidbase, nstorage, page);
!     }

      /* Set hint bit for PageAddItem */
      if (nunused > 0)
--- 458,526 ----
                          pd_lower, pd_upper, pd_special)));

      /*
!      * We build updated copies of the line pointer array and tuple data area
!      * in workspace.page, and then copy them back to the real page when done.
!      * This ensures that if we error out partway through, we have not changed
!      * the real page.  It also lets us use memcpy rather than memmove for the
!      * data transfers, which is faster on some machines.
!      *
!      * A useful side effect of this approach is that the tuples are re-sorted
!      * so that their physical order matches their line pointer order, which
!      * should improve locality of access in future scans of the page.
       */
      nline = PageGetMaxOffsetNumber(page);
!     new_pd_upper = pd_special;
!     nunused = 0;
      for (i = FirstOffsetNumber; i <= nline; i++)
      {
!         ItemId        lp = PageGetItemId(page, i);
!         ItemId        newlp = PageGetItemId(workspace.page, i);
!
          if (ItemIdIsUsed(lp))
          {
+             *newlp = *lp;
              if (ItemIdHasStorage(lp))
              {
!                 int            offset = ItemIdGetOffset(lp);
!                 int            alignedlen = MAXALIGN(ItemIdGetLength(lp));
!
!                 new_pd_upper -= alignedlen;
!                 newlp->lp_off = new_pd_upper;
!                 if (unlikely(offset < (int) pd_upper ||
!                              (offset + alignedlen) > (int) pd_special ||
!                              offset != MAXALIGN(offset) ||
!                              new_pd_upper < (int) pd_lower))
                      ereport(ERROR,
                              (errcode(ERRCODE_DATA_CORRUPTED),
!                              errmsg("corrupted item pointer: offset = %u, length = %u",
!                                     offset, alignedlen)));
!                 memcpy(workspace.page + new_pd_upper,
!                        (char *) page + offset,
!                        alignedlen);
              }
          }
          else
          {
!             /* We can just zero out all the fields in *newlp */
!             ItemIdSetUnused(newlp);
              nunused++;
          }
      }

!     /*
!      * Okay, copy lower and upper workspace areas back to the real page.
!      */
!     if (pd_lower > SizeOfPageHeaderData)
!         memcpy((char *) page + SizeOfPageHeaderData,
!                workspace.page + SizeOfPageHeaderData,
!                pd_lower - SizeOfPageHeaderData);
!     if (new_pd_upper < pd_special)
!         memcpy((char *) page + new_pd_upper,
!                workspace.page + new_pd_upper,
!                pd_special - new_pd_upper);

!     /* Page's pd_lower doesn't change, but pd_upper does */
!     ((PageHeader) page)->pd_upper = new_pd_upper;

      /* Set hint bit for PageAddItem */
      if (nunused > 0)
*************** PageIndexTupleDelete(Page page, OffsetNu
*** 831,853 ****
  void
  PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
  {
!     PageHeader    phdr = (PageHeader) page;
!     Offset        pd_lower = phdr->pd_lower;
!     Offset        pd_upper = phdr->pd_upper;
!     Offset        pd_special = phdr->pd_special;
!     itemIdSortData itemidbase[MaxIndexTuplesPerPage];
!     ItemIdData    newitemids[MaxIndexTuplesPerPage];
!     itemIdSort    itemidptr;
!     ItemId        lp;
!     int            nline,
!                 nused;
!     Size        totallen;
!     Size        size;
!     unsigned    offset;
!     int            nextitm;
      OffsetNumber offnum;
!
!     Assert(nitems <= MaxIndexTuplesPerPage);

      /*
       * If there aren't very many items to delete, then retail
--- 794,813 ----
  void
  PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
  {
!     Offset        pd_lower = ((PageHeader) page)->pd_lower;
!     Offset        pd_upper = ((PageHeader) page)->pd_upper;
!     Offset        pd_special = ((PageHeader) page)->pd_special;
!     int            new_pd_upper,
!                 nline,
!                 nextnew,
!                 nextitm,
!                 lpalen;
      OffsetNumber offnum;
!     union
!     {
!         char        page[BLCKSZ];
!         double        align;        /* force workspace to be MAXALIGN'd */
!     }            workspace;

      /*
       * If there aren't very many items to delete, then retail
*************** PageIndexMultiDelete(Page page, OffsetNu
*** 878,906 ****
                          pd_lower, pd_upper, pd_special)));

      /*
!      * Scan the item pointer array and build a list of just the ones we are
!      * going to keep.  Notice we do not modify the page yet, since we are
!      * still validity-checking.
       */
      nline = PageGetMaxOffsetNumber(page);
!     itemidptr = itemidbase;
!     totallen = 0;
!     nused = 0;
      nextitm = 0;
      for (offnum = FirstOffsetNumber; offnum <= nline; offnum = OffsetNumberNext(offnum))
      {
-         lp = PageGetItemId(page, offnum);
-         Assert(ItemIdHasStorage(lp));
-         size = ItemIdGetLength(lp);
-         offset = ItemIdGetOffset(lp);
-         if (offset < pd_upper ||
-             (offset + size) > pd_special ||
-             offset != MAXALIGN(offset))
-             ereport(ERROR,
-                     (errcode(ERRCODE_DATA_CORRUPTED),
-                      errmsg("corrupted item pointer: offset = %u, length = %u",
-                             offset, (unsigned int) size)));
-
          if (nextitm < nitems && offnum == itemnos[nextitm])
          {
              /* skip item to be deleted */
--- 838,853 ----
                          pd_lower, pd_upper, pd_special)));

      /*
!      * As in PageRepairFragmentation, we build new copies of the line pointer
!      * array and tuple data area in workspace.page, then transfer them back to
!      * the real page.
       */
      nline = PageGetMaxOffsetNumber(page);
!     new_pd_upper = pd_special;
!     nextnew = FirstOffsetNumber;
      nextitm = 0;
      for (offnum = FirstOffsetNumber; offnum <= nline; offnum = OffsetNumberNext(offnum))
      {
          if (nextitm < nitems && offnum == itemnos[nextitm])
          {
              /* skip item to be deleted */
*************** PageIndexMultiDelete(Page page, OffsetNu
*** 908,920 ****
          }
          else
          {
!             itemidptr->offsetindex = nused; /* where it will go */
!             itemidptr->itemoff = offset;
!             itemidptr->alignedlen = MAXALIGN(size);
!             totallen += itemidptr->alignedlen;
!             newitemids[nused] = *lp;
!             itemidptr++;
!             nused++;
          }
      }

--- 855,884 ----
          }
          else
          {
!             ItemId        lp = PageGetItemId(page, offnum);
!             ItemId        newlp;
!             int            offset;
!             int            alignedlen;
!
!             Assert(ItemIdHasStorage(lp));
!             offset = ItemIdGetOffset(lp);
!             alignedlen = MAXALIGN(ItemIdGetLength(lp));
!             new_pd_upper -= alignedlen;
!             if (unlikely(offset < (int) pd_upper ||
!                          (offset + alignedlen) > (int) pd_special ||
!                          offset != MAXALIGN(offset) ||
!                          new_pd_upper < (int) pd_lower))
!                 ereport(ERROR,
!                         (errcode(ERRCODE_DATA_CORRUPTED),
!                          errmsg("corrupted item pointer: offset = %u, length = %u",
!                                 offset, alignedlen)));
!             memcpy(workspace.page + new_pd_upper,
!                    (char *) page + offset,
!                    alignedlen);
!             newlp = PageGetItemId(workspace.page, nextnew);
!             *newlp = *lp;
!             newlp->lp_off = new_pd_upper;
!             nextnew++;
          }
      }

*************** PageIndexMultiDelete(Page page, OffsetNu
*** 922,942 ****
      if (nextitm != nitems)
          elog(ERROR, "incorrect index offsets supplied");

-     if (totallen > (Size) (pd_special - pd_lower))
-         ereport(ERROR,
-                 (errcode(ERRCODE_DATA_CORRUPTED),
-                  errmsg("corrupted item lengths: total %u, available space %u",
-                         (unsigned int) totallen, pd_special - pd_lower)));
-
      /*
!      * Looks good. Overwrite the line pointers with the copy, from which we've
!      * removed all the unused items.
       */
!     memcpy(phdr->pd_linp, newitemids, nused * sizeof(ItemIdData));
!     phdr->pd_lower = SizeOfPageHeaderData + nused * sizeof(ItemIdData);

!     /* and compactify the tuple data */
!     compactify_tuples(itemidbase, nused, page);
  }


--- 886,906 ----
      if (nextitm != nitems)
          elog(ERROR, "incorrect index offsets supplied");

      /*
!      * Okay, copy lower and upper workspace areas back to the real page.
       */
!     lpalen = (nextnew - FirstOffsetNumber) * sizeof(ItemIdData);
!     if (lpalen > 0)
!         memcpy((char *) page + SizeOfPageHeaderData,
!                workspace.page + SizeOfPageHeaderData,
!                lpalen);
!     if (new_pd_upper < pd_special)
!         memcpy((char *) page + new_pd_upper,
!                workspace.page + new_pd_upper,
!                pd_special - new_pd_upper);

!     ((PageHeader) page)->pd_lower = SizeOfPageHeaderData + lpalen;
!     ((PageHeader) page)->pd_upper = new_pd_upper;
  }




Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:

On Tue, Nov 7, 2017 at 1:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> So I think we should seriously consider the attached, but it'd be a
> good idea to benchmark it on a wider variety of platforms and test
> cases.

> create unlogged table test3 (
>          id integer PRIMARY KEY with (fillfactor=85),
>          val text
>      ) WITH (fillfactor=85);

Passing observation:  Unlogged table B-Tree indexes have a much
greater tendency for LP_DEAD setting/kill_prior_tuple() working out
following commit 2ed5b87f9 [1], because unlogged tables were
unaffected by that commit. (I've been meaning to follow up with my
analysis of that regression, actually.)

The same is true of unique indexes vs. non-unique. There are workloads
where the opportunistic LP_DEAD setting performed by
_bt_check_unique() is really important (it calls ItemIdMarkDead()).
Think high contention workloads, like when Postgres is used to
implement a queue table.

My point is only that it's worth considering that this factor affects
how representative your sympathetic case is. It's not clear how many
PageIndexMultiDelete() calls are from opportunistic calls to
_bt_vacuum_one_page(), how important that subset of calls is, and so
on. Maybe it doesn't matter at all.

[1] https://postgr.es/m/CAH2-WzmYry7MNJf0Gw5wTk3cSZh3gQfHHoXVSYUNO5pk8Cu7AA@mail.gmail.com
-- 
Peter Geoghegan



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
Peter Geoghegan <pg@bowt.ie> writes:
> My point is only that it's worth considering that this factor affects
> how representative your sympathetic case is. It's not clear how many
> PageIndexMultiDelete() calls are from opportunistic calls to
> _bt_vacuum_one_page(), how important that subset of calls is, and so
> on. Maybe it doesn't matter at all.

According to the perf measurements I took earlier, essentially all the
compactify_tuples calls in this test case are from PageRepairFragmentation
(from heap_page_prune), not PageIndexMultiDelete.

I'd be the first to agree that I doubt that test case is really
representative.  I'd been whacking around Yura's original case to
try to get PageRepairFragmentation's runtime up to some measurable
fraction of the total, and while I eventually succeeded, I'm not
sure that too many real workloads will look like that.  However,
if we can make it smaller as well as faster, that seems like a win
even if it's not a measurable fraction of most workloads.
        regards, tom lane



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:


2017-11-08 1:11 GMT+03:00 Peter Geoghegan <pg@bowt.ie>:
>
> The same is true of unique indexes vs. non-unique.

Off topic: recently I had a look at setting LP_DEAD in indexes.
I didn't find a huge difference between unique and non-unique indexes.
There is a code path that works only for unique indexes, but it is called
less frequently than the common code path that also sets LP_DEAD.

Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:
On Tue, Nov 7, 2017 at 2:36 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Peter Geoghegan <pg@bowt.ie> writes:
>> My point is only that it's worth considering that this factor affects
>> how representative your sympathetic case is. It's not clear how many
>> PageIndexMultiDelete() calls are from opportunistic calls to
>> _bt_vacuum_one_page(), how important that subset of calls is, and so
>> on. Maybe it doesn't matter at all.
>
> According to the perf measurements I took earlier, essentially all the
> compactify_tuple calls in this test case are from PageRepairFragmentation
> (from heap_page_prune), not PageIndexMultiDelete.

For a workload with high contention (e.g., lots of updates that follow
a Zipfian distribution) lots of important cleanup has to occur within
_bt_vacuum_one_page(), and with an exclusive buffer lock held. It may
be that making PageIndexMultiDelete() faster pays off
disproportionately well there, but I'd only expect to see that at
higher client count workloads with lots of contention -- workloads
that we still do quite badly on (note that we have never done well
here, even prior to commit 2ed5b87f9 -- Yura showed this at one
point).

It's possible that this work influenced Yura in some way.

When Postgres Pro did some benchmarking of this at my request, we saw
that the bloat got really bad past a certain client count. IIRC there
was a clear point at around 32 or 64 clients where TPS nosedived,
presumably because cleanup could not keep up. This was a 128 core box,
or something like that, so you'll probably have difficulty recreating
it with what's at hand.

-- 
Peter Geoghegan



Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:
On Tue, Nov 7, 2017 at 2:40 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
>> The same is true of unique indexes vs. non-unique.
>
> Off topic: recently I had a look at setting LP_DEAD in indexes.
> I didn't find a huge difference between unique and non-unique indexes.
> There is a code path that works only for unique indexes, but it is called
> less frequently than the common code path that also sets LP_DEAD.

I meant to say that this is only important with UPDATEs + contention.
The extra LP_DEAD setting within _bt_check_unique() makes quite a
noticeable difference, at least in terms of index bloat (though less
so in terms of raw TPS).

--
Peter Geoghegan



Re: [HACKERS] Small improvement to compactify_tuples

From
Robert Haas
Date:
On Tue, Nov 7, 2017 at 4:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> What I'm getting from the standard pgbench measurements, on both machines,
> is that this patch might be a couple percent slower than HEAD, but that is
> barely above the noise floor so I'm not too sure about it.

Hmm.  It seems like slowing down single-client performance by a couple
of percent is something that we really don't want to do.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Tue, Nov 7, 2017 at 4:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> What I'm getting from the standard pgbench measurements, on both machines,
>> is that this patch might be a couple percent slower than HEAD, but that is
>> barely above the noise floor so I'm not too sure about it.

> Hmm.  It seems like slowing down single client performance by a couple
> of percent is something that we really don't want to do.

I do not think there is any change here that can be proven to always be a
win.  Certainly the original patch, which proposes to replace an O(n log n)
sort algorithm with an O(n^2) one, should not be thought to be that.
The question to focus on is what's the average case, and I'm not sure how
to decide what the average case is.  But more than two test scenarios
would be a good start.
        regards, tom lane



Re: [HACKERS] Small improvement to compactify_tuples

From
Robert Haas
Date:
On Wed, Nov 8, 2017 at 10:33 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I do not think there is any change here that can be proven to always be a
> win.  Certainly the original patch, which proposes to replace an O(n log n)
> sort algorithm with an O(n^2) one, should not be thought to be that.
> The question to focus on is what's the average case, and I'm not sure how
> to decide what the average case is.  But more than two test scenarios
> would be a good start.

I appreciate the difficulties here; I'm just urging caution.  Let's
not change things just to clear this patch off our plate.

Just to throw a random idea out here, we currently have
gen_qsort_tuple.pl producing qsort_tuple() and qsort_ssup().  Maybe it
could be modified to also produce a specialized qsort_itemids().  That
might be noticeably faster than our general-purpose qsort() for the
reasons mentioned in the comments in gen_qsort_tuple.pl, viz:

# The major effects are (1) inlining simple tuple comparators is much faster
# than jumping through a function pointer and (2) swap and vecswap operations
# specialized to the particular data type of interest (in this case, SortTuple)
# are faster than the generic routines.

I don't remember any more just how much faster qsort_tuple() and
qsort_ssup() are than plain qsort(), but it was significant enough to
convince me to commit 337b6f5ecf05b21b5e997986884d097d60e4e3d0...
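
To illustrate what such a specialization buys (a hypothetical sketch, not
anything gen_qsort_tuple.pl currently emits; it reuses the itemIdSortData
struct from bufpage.c, and the function names are made up): the comparator
and swap collapse into code the compiler can fully inline, instead of an
indirect call plus generic byte-swapping:

/* hypothetical inlined comparator: decreasing itemoff order, as
 * compactify_tuples requires */
static inline int
itemoff_cmp_inline(const itemIdSortData *a, const itemIdSortData *b)
{
    return (int) b->itemoff - (int) a->itemoff;
}

/* hypothetical specialized swap: copies the 6-byte struct directly */
static inline void
itemid_swap_inline(itemIdSortData *a, itemIdSortData *b)
{
    itemIdSortData tmp = *a;

    *a = *b;
    *b = tmp;
}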

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> Just to throw a random idea out here, we currently have
> gen_qsort_tuple.pl producing qsort_tuple() and qsort_ssup().  Maybe it
> could be modified to also produce a specialized qsort_itemids().  That
> might be noticeably faster than our general-purpose qsort() for the
> reasons mentioned in the comments in gen_qsort_tuple.pl, viz:

+1 for somebody trying that (I'm not volunteering, though).
        regards, tom lane



Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:
On Wed, Nov 8, 2017 at 8:19 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> I don't remember any more just how much faster qsort_tuple() and
> qsort_ssup() are than plain qsort(), but it was significant enough to
> convince me to commit 337b6f5ecf05b21b5e997986884d097d60e4e3d0...

IIRC, qsort_ssup() was about 20% faster at the time, while
qsort_tuple() was 5% - 10% faster.

-- 
Peter Geoghegan



Re: [HACKERS] Small improvement to compactify_tuples

From
Claudio Freire
Date:
On Wed, Nov 8, 2017 at 12:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Tue, Nov 7, 2017 at 4:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> What I'm getting from the standard pgbench measurements, on both machines,
>>> is that this patch might be a couple percent slower than HEAD, but that is
>>> barely above the noise floor so I'm not too sure about it.
>
>> Hmm.  It seems like slowing down single client performance by a couple
>> of percent is something that we really don't want to do.
>
> I do not think there is any change here that can be proven to always be a
> win.  Certainly the original patch, which proposes to replace an O(n log n)
> sort algorithm with an O(n^2) one, should not be thought to be that.
> The question to focus on is what's the average case, and I'm not sure how
> to decide what the average case is.  But more than two test scenarios
> would be a good start.
>
>                         regards, tom lane

Making no change to the overall algorithm and just replacing qsort with an
inlineable type-specific one should be a net win in all cases.

Doing bucket sort with a qsort of large buckets (or small tuple
arrays) should also be a net win in all cases.

Using shell sort might not seem as clear-cut, but let's not forget that the
original patch only uses it on very small arrays, and very infrequently
at that.

What's perhaps not clear is whether there are better ideas, like
rebuilding the page as Tom proposes, which doesn't seem like a bad
idea. Bucket sort already is O(bytes), just as memcpy, only it has a
lower constant factor (it's bytes/256 in the original patch), which
might make copying the whole page an extra time lose against bucket
sort in a few cases.

Deciding that last point does need more benchmarking. That doesn't
mean the other improvements can't be pursued in the meanwhile, right?



Re: [HACKERS] Small improvement to compactify_tuples

From
Tom Lane
Date:
Claudio Freire <klaussfreire@gmail.com> writes:
> What's perhaps not clear is whether there are better ideas. Like
> rebuilding the page as Tom proposes, which doesn't seem like a bad
> idea. Bucket sort already is O(bytes), just as memcpy, only it has a
> lower constant factor (it's bytes/256 in the original patch), which
> might make copying the whole page an extra time lose against bucket
> sort in a few cases.

> Deciding that last point does need more benchmarking. That doesn't
> mean the other improvements can't be pursued in the meanwhile, right?

Well, I doubt we're going to end up committing more than one of these
ideas.  The question is which way is best.  If people are willing to
put in the work to test all of them, let's do it.

BTW, it strikes me that in considering the rebuild-the-page approach,
we should not have blinders on and just measure the speed of
PageRepairFragmentation.  Rather, we should take a look at what happens
subsequently given a physically-ordered set of tuples.  I can recall
Andres or someone moaning awhile ago about lack of locality of access in
index page searches --- maybe applying that approach while vacuuming
indexes will help?
        regards, tom lane



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:
2017-11-08 20:02 GMT+03:00 Tom Lane <tgl@sss.pgh.pa.us>:
>
> Claudio Freire <klaussfreire@gmail.com> writes:
> > What's perhaps not clear is whether there are better ideas. Like
> > rebuilding the page as Tom proposes, which doesn't seem like a bad
> > idea. Bucket sort already is O(bytes), just as memcpy, only it has a
> > lower constant factor (it's bytes/256 in the original patch), which
> > might make copying the whole page an extra time lose against bucket
> > sort in a few cases.
>
> > Deciding that last point does need more benchmarking. That doesn't
> > mean the other improvements can't be pursued in the meanwhile, right?
>
> Well, I doubt we're going to end up committing more than one of these
> ideas.  The question is which way is best.  If people are willing to
> put in the work to test all of them, let's do it.
>
> BTW, it strikes me that in considering the rebuild-the-page approach,
> we should not have blinders on and just measure the speed of
> PageRepairFragmentation.  Rather, we should take a look at what happens
> subsequently given a physically-ordered set of tuples.  I can recall
> Andres or someone moaning awhile ago about lack of locality of access in
> index page searches --- maybe applying that approach while vacuuming
> indexes will help?
>
>                         regards, tom lane

I'd like to add qsort_template.h as Claudio suggested, i.e. in a way close to
simplehash.h. With such a template header, there will be no need for
gen_qsort_tuple.pl.
With regards,
Sokolov Yura

Re: [HACKERS] Small improvement to compactify_tuples

From
Andres Freund
Date:
On 2017-11-08 12:02:40 -0500, Tom Lane wrote:
> BTW, it strikes me that in considering the rebuild-the-page approach,
> we should not have blinders on and just measure the speed of
> PageRepairFragmentation.  Rather, we should take a look at what happens
> subsequently given a physically-ordered set of tuples.  I can recall
> Andres or someone moaning awhile ago about lack of locality of access in
> index page searches --- maybe applying that approach while vacuuming
> indexes will help?

I complained about multiple related things, so I'm not exactly sure which
one you're referring to here:

- The fact that HeapTupleHeaderData's are commonly iterated over in
  reverse order is bad for performance. For shared buffers resident
  workloads involving seqscans that yields 15-25% slowdowns for me. It's
  trivial to fix that by just changing iteration order, but that
  obviously changes results. But we could reorder the page during heap
  pruning.

  But that's fairly independent of indexes, so I'm not sure whether
  that's what you're referring to.

- The layout of items in index pages is suboptimal. We regularly do
  binary searches over the linearly ordered items, which is cache
  inefficient. So instead we should sort items as [1/2, 1/4, 3/4, ...]
  elements, which will access items in a close-ish to linear manner.

  But that's fairly independent of pruning, so I'm not sure whether
  that's what you're referring to, either.
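
To sketch that [1/2, 1/4, 3/4, ...] ordering concretely (an illustration
of the idea, not existing code; the function name is made up): it can be
built from a sorted array by an in-order walk of the implicit binary tree
in which slot k has children 2k and 2k+1:

#include <stdint.h>

/* Fill out[1..n] from sorted[0..n-1] so that out[1] is the median,
 * out[2] and out[3] the quartiles, and so on.  A binary search that
 * descends k -> 2k or 2k+1 then moves through memory monotonically
 * forward, close to linearly. */
static int
layout_fill(const uint16_t *sorted, uint16_t *out, int i, int k, int n)
{
    if (k <= n)
    {
        i = layout_fill(sorted, out, i, 2 * k, n);
        out[k] = sorted[i++];
        i = layout_fill(sorted, out, i, 2 * k + 1, n);
    }
    return i;
}

/* usage: layout_fill(sorted, out, 0, 1, n);  out[0] stays unused */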

Greetings,

Andres Freund



Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:
On Wed, Nov 8, 2017 at 12:59 PM, Andres Freund <andres@anarazel.de> wrote:
> I complained about multiple related things, I'm not exactly sure what
> exactly you're referring to here:
> - The fact that HeapTupleHeaderData's are commonly iterated over in
>   reverse order is bad for performance. For shared buffers resident
>   workloads involving seqscans that yields 15-25% slowdowns for me. It's
>   trivial to fix that by just changing iteration order, but that
>   obviously changes results. But we could reorder the page during heap
>   pruning.

FWIW, the classic page layout (the one that appears in Gray's
Transaction Processing Systems, at any rate) has the ItemId array at
the end of the page and the tuples at the start (immediately after a
generic page header) -- it's the other way around.

I think that that has its pros and cons.

> - The layout of items in index pages is suboptimal. We regularly do
>   binary searches over the linearly ordered items, which is cache
>   inefficient. So instead we should sort items as [1/2, 1/4, 3/4, ...]
>   elements, which will access items in a close-ish to linear manner.

I still think that we can repurpose each ItemId's lp_len as an
abbreviated key in internal index pages [1], and always get IndexTuple
size through the index tuple header. I actually got as far as writing a
very rough prototype of that. That's obviously a significant project,
but it seems doable.

[1] https://www.postgresql.org/message-id/CAH2-Wz=mV4dmOaPFicRSyNtv2KinxEOtBwUY5R7fXXOC-OearA@mail.gmail.com
-- 
Peter Geoghegan



Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:
2017-11-08 23:44 GMT+03:00 Юрий Соколов <funny.falcon@gmail.com>:
>
> 2017-11-08 20:02 GMT+03:00 Tom Lane <tgl@sss.pgh.pa.us>:
> >
> > Claudio Freire <klaussfreire@gmail.com> writes:
> > > What's perhaps not clear is whether there are better ideas. Like
> > > rebuilding the page as Tom proposes, which doesn't seem like a bad
> > > idea. Bucket sort already is O(bytes), just as memcpy, only it has a
> > > lower constant factor (it's bytes/256 in the original patch), which
> > > might make copying the whole page an extra time lose against bucket
> > > sort in a few cases.
> >
> > > Deciding that last point does need more benchmarking. That doesn't
> > > mean the other improvements can't be pursued in the meanwhile, right?
> >
> > Well, I doubt we're going to end up committing more than one of these
> > ideas.  The question is which way is best.  If people are willing to
> > put in the work to test all of them, let's do it.
> >
> > BTW, it strikes me that in considering the rebuild-the-page approach,
> > we should not have blinders on and just measure the speed of
> > PageRepairFragmentation.  Rather, we should take a look at what happens
> > subsequently given a physically-ordered set of tuples.  I can recall
> > Andres or someone moaning awhile ago about lack of locality of access in
> > index page searches --- maybe applying that approach while vacuuming
> > indexes will help?
> >
> >                         regards, tom lane
>
> I'd like to add qsort_template.h as Claudio suggested, ie in a way close to
> simplehash.h. With such template header, there will be no need for
> gen_qsort_tuple.pl.

Attached patch replaces gen_qsort_tuple.pl with qsort_template.h - a generic
qsort template header.
Some tests do not specify an exact order (i.e. their output depends on the
order of equal elements). Such tests' output was fixed.

I didn't apply this qsort to compactify_tuples yet. Will do soon.

With regards,
Sokolov Yura aka funny_falcon.

Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:
On Mon, Nov 27, 2017 at 11:46 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
> Attached patch replaces gen_qsort_tuple.pl with qsort_template.h - a generic
> qsort template header.
> Some tests do not specify an exact order (ie their output depends on the
> order of equal elements). Such tests' output was fixed.
>
> I didn't apply this qsort to compactify_tuples yet. Will do soon.

Seems you forgot to attach the patch.

--
Peter Geoghegan


Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:
2017-11-28 11:49 GMT+03:00 Peter Geoghegan <pg@bowt.ie>:
>
> On Mon, Nov 27, 2017 at 11:46 PM, Юрий Соколов <funny.falcon@gmail.com> wrote:
> > Attached patch replaces gen_qsort_tuple.pl with qsort_template.h - a generic
> > qsort template header.
> > Some tests do not specify an exact order (ie their output depends on the
> > order of equal elements). Such tests' output was fixed.
> >
> > I didn't apply this qsort to compactify_tuples yet. Will do soon.
>
> Seems you forgot to attach the patch.

Oh, you are right. Thank you for reminding.

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Andres Freund
Date:
Hi,

On 2017-11-28 23:30:53 +0300, Юрий Соколов wrote:
> index 1a8ee08c8b..607ed6a781 100644
> --- a/src/port/qsort.c
> +++ b/src/port/qsort.c
> @@ -8,7 +8,7 @@
>   *      in favor of a simple check for presorted input.
>   *      Take care to recurse on the smaller partition, to bound stack usage.
>   *
> - *    CAUTION: if you change this file, see also qsort_arg.c, gen_qsort_tuple.pl
> + *    CAUTION: if you change this file, see also qsort_arg.c, qsort_template.h
>   *
>   *    src/port/qsort.c
>   */

Maybe it's a stupid question. But would we still want to have this after
the change? These should be just specializations of the template version
imo.

Greetings,

Andres Freund


Re: [HACKERS] Small improvement to compactify_tuples

From
Peter Geoghegan
Date:
On Tue, Nov 28, 2017 at 2:41 PM, Andres Freund <andres@anarazel.de> wrote:
> Maybe it's a stupid question. But would we still want to have this after
> the change? These should be just specializations of the template version
> imo.

I also wonder why regression test output has changed. Wasn't this
supposed to be a mechanical change in how the templating is
implemented? Why would the behavior of the algorithm change, even if
the change is only in the output order among equal elements?

Also, is that one last raw CHECK_FOR_INTERRUPTS() in the template
definition supposed to be there?

-- 
Peter Geoghegan


Re: [HACKERS] Small improvement to compactify_tuples

From
Michael Paquier
Date:
On Wed, Nov 29, 2017 at 8:00 AM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Tue, Nov 28, 2017 at 2:41 PM, Andres Freund <andres@anarazel.de> wrote:
>> Maybe it's a stupid question. But would we still want to have this after
>> the change? These should be just specializations of the template version
>> imo.
>
> I also wonder why regression test output has changed. Wasn't this
> supposed to be a mechanical change in how the templating is
> implemented? Why would the behavior of the algorithm change, even if
> the change is only in the output order among equal elements?
>
> Also, is that one last raw CHECK_FOR_INTERRUPTS() in the template
> definition supposed to be there?

As work is still going on here, I am moving the patch to the next CF.
-- 
Michael


Re: [HACKERS] Small improvement to compactify_tuples

From
Юрий Соколов
Date:
hi,

On Wed, Nov 29, 2017 at 8:00 AM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Tue, Nov 28, 2017 at 2:41 PM, Andres Freund <andres@anarazel.de> wrote:
>> Maybe it's a stupid question. But would we still want to have this after
>> the change? These should be just specializations of the template version
>> imo.

"generic" version operates on bytes, and it will be a bit hard to combine it with
templated version. Not impossible, but it will look ugly.
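
For reference, the templated header is used roughly the way simplehash.h
is; a simplified sketch (the QS_* macro names and the include path are only
indicative, not the patch's exact interface):

    /* instantiate a qsort specialized for itemIdSortData */
    #define QS_TYPE          itemIdSortData
    #define QS_NAME          itemIds
    #define QS_COMPARE(a, b) ((b)->itemoff - (a)->itemoff)  /* descending */
    #include "qsort_template.h"

    /* ... which would emit something like:
     *     static void qsort_itemIds(itemIdSortData *data, size_t n);
     * with the comparison inlined instead of called through a pointer.
     */

Since the element type and comparator are fixed at compile time, the
compiler can inline the comparison and move elements with plain
assignments - that is where the win over the function-pointer-based
pg_qsort comes from.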

> I also wonder why regression test output has changed. Wasn't this
> supposed to be a mechanical change in how the templating is
> implemented? Why would the behavior of the algorithm change, even if
> the change is only in the output order among equal elements?

I made some changes to the algorithm at the time. But I reverted them, and
the test fixes are no longer needed.

> Also, is that one last raw CHECK_FOR_INTERRUPTS() in the template
> definition supposed to be there?

There was an error. Fixed.

Attached is the fixed qsort_template version,
and a version of compactify_tuples with bucket sort and templated qsort.

With regards,
Sokolov Yura.
Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Stephen Frost
Date:
Greetings,

* Юрий Соколов (funny.falcon@gmail.com) wrote:
> On Wed, Nov 29, 2017 at 8:00 AM, Peter Geoghegan <pg@bowt.ie> wrote:
> > On Tue, Nov 28, 2017 at 2:41 PM, Andres Freund <andres@anarazel.de> wrote:
> >> Maybe it's a stupid question. But would we still want to have this after
> >> the change? These should be just specializations of the template version
> >> imo.
>
> "generic" version operates on bytes, and it will be a bit hard to combine
> it with
> templated version. Not impossible, but it will look ugly.

If that's the case then does it really make sense to make this change..?

> Attached is the fixed qsort_template version,
> and a version of compactify_tuples with bucket sort and templated qsort.

While having the patch is handy, I'm not seeing any performance numbers
on this version, and I imagine others watching this thread are also
wondering about things like a test run that just uses the specialized
qsort_itemIds() without the bucketsort.

Are you planning to post some updated numbers and/or an updated test
case that hopefully shows best/worst case with this change?  Would be
good to get that on a couple of platforms too, if possible, since we've
seen that the original benchmarks weren't able to be consistently
repeated across different platforms.  Without someone doing that
leg-work, this doesn't seem like it'll be moving forward.

Marking as Waiting on Author.

Thanks!

Stephen

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Yura Sokolov
Date:
23.01.2018 06:34, Stephen Frost writes:
> Greetings,
>
> * Юрий Соколов (funny.falcon@gmail.com) wrote:
>> On Wed, Nov 29, 2017 at 8:00 AM, Peter Geoghegan <pg@bowt.ie> wrote:
>>> On Tue, Nov 28, 2017 at 2:41 PM, Andres Freund <andres@anarazel.de> wrote:
>>>> Maybe it's a stupid question. But would we still want to have this after
>>>> the change? These should be just specializations of the template version
>>>> imo.
>>
>> "generic" version operates on bytes, and it will be a bit hard to combine
>> it with
>> templated version. Not impossible, but it will look ugly.
>
> If that's the case then does it really make sense to make this change..?

I don't think it is really necessary to implement the generic version
through the templated one. It is much better to replace the generic version
with the templated one in the places where it matters for performance.

>
>> Attached is the fixed qsort_template version,
>> and a version of compactify_tuples with bucket sort and templated qsort.
>
> While having the patch is handy, I'm not seeing any performance numbers
> on this version, and I imagine others watching this thread are also
> wondering about things like a test run that just uses the specialized
> qsort_itemIds() without the bucketsort.
>
> Are you planning to post some updated numbers and/or an updated test
> case that hopefully shows best/worst case with this change?  Would be
> good to get that on a couple of platforms too, if possible, since we've
> seen that the original benchmarks weren't able to be consistently
> repeated across different platforms.  Without someone doing that
> leg-work, this doesn't seem like it'll be moving forward.

Updated numbers (same benchmark on the same notebook, but with a new
master, new Ubuntu, and a later patch version; averaged over 6 runs):

master               - 16135tps
with templated qsort - 16199tps
with bucket sort     - 16956tps

The difference is still measurable, but less significant. I don't know why.

A rebased version of the first patch (qsort template) is attached.

With regards,
Sokolov Yura.

Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Andres Freund
Date:
Hi,

On 2018-02-25 21:39:46 +0300, Yura Sokolov wrote:
> > If that's the case then does it really make sense to make this change..?
> 
> I don't think it is really necessary to implement the generic version
> through the templated one.

Why?

> Updated numbers (same benchmark on the same notebook, but with a new
> master, new Ubuntu, and a later patch version; averaged over 6 runs):
> 
> master               - 16135tps
> with templated qsort - 16199tps
> with bucket sort     - 16956tps
> 
> The difference is still measurable, but less significant. I don't know why.
> 
> A rebased version of the first patch (qsort template) is attached.

Hm, that's a bit underwhelming. It's nice to deduplicate, but 16135tps
-> 16199tps is barely statistically significant?

- Andres


Re: [HACKERS] Small improvement to compactify_tuples

From
Yura Sokolov
Date:
01.03.2018 22:22, Andres Freund writes:
> Hi,
>
> On 2018-02-25 21:39:46 +0300, Yura Sokolov wrote:
>>> If that's the case then does it really make sense to make this change..?
>>
>> I don't think it is really necessary to implement the generic version
>> through the templated one.
>
> Why?

It is better to replace uses of the generic version with the templated one
in the appropriate places.
The generic version uses a variable element size, which would be difficult
to express through a template.
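
An illustrative contrast (sketch only): the generic version must move
elements byte-by-byte with a run-time size, while the templated one can use
a typed assignment:

    /* generic qsort: element size "es" is only known at run time */
    static void
    swap_generic(char *a, char *b, size_t es)
    {
        while (es-- > 0)
        {
            char    t = *a;

            *a++ = *b;
            *b++ = t;
        }
    }

    /* templated qsort: QS_TYPE is fixed at compile time */
    #define QS_SWAP(a, b) \
        do { QS_TYPE t_ = *(a); *(a) = *(b); *(b) = t_; } while (0)

Expressing a run-time element size through such a template would need an
extra size parameter plus memcpy-based moves, which defeats most of the
point.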

>
>> Updated numbers (same benchmark on the same notebook, but with a new
>> master, new Ubuntu, and a later patch version; averaged over 6 runs):
>>
>> master               - 16135tps
>> with templated qsort - 16199tps
>> with bucket sort     - 16956tps
>>
>> The difference is still measurable, but less significant. I don't know why.
>>
>> A rebased version of the first patch (qsort template) is attached.
>
> Hm, that's a bit underwhelming. It's nice to deduplicate, but 16135tps
> -> 16199tps is barely statistically significant?

I mean that bucket sort is measurably faster than both the generic and the
templated sort (16956 tps vs. 16199 and 16135). So the initial goal remains:
to add bucket sort in this place.
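
For concreteness, the bucket-sort pass looks roughly like this (a sketch of
the general approach, not the patch's exact code; insertion_sort_desc is a
stand-in for the final pass):

    /*
     * Distribute line pointers into BLCKSZ/256 buckets by the high bits
     * of itemoff, then finish with insertion sort on the almost-sorted
     * result.  Assumes itemIdSortData and MaxHeapTuplesPerPage from
     * bufpage.c / htup_details.h.
     */
    #define N_BUCKETS   (BLCKSZ / 256)

    static void
    bucket_sort_itemoff_desc(itemIdSortData *items, int nitems)
    {
        itemIdSortData  tmp[MaxHeapTuplesPerPage];
        int             count[N_BUCKETS] = {0};
        int             i,
                        pos = 0;

        for (i = 0; i < nitems; i++)
            count[items[i].itemoff / 256]++;

        /* running offsets, highest bucket first => descending order */
        for (i = N_BUCKETS - 1; i >= 0; i--)
        {
            int     c = count[i];

            count[i] = pos;
            pos += c;
        }

        for (i = 0; i < nitems; i++)
            tmp[count[items[i].itemoff / 256]++] = items[i];
        memcpy(items, tmp, nitems * sizeof(itemIdSortData));

        insertion_sort_desc(items, nitems);     /* nearly-sorted input */
    }

The distribution is O(nitems + N_BUCKETS), and the insertion sort only has
to fix up ordering within each 256-byte bucket.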

BTW, I have a small change to the templated version that improves sorting
of random tuples a bit (1-1.5%). I will post it a bit later with a test.

> - Andres
>

With regards,
Yura.


Attachment

Re: [HACKERS] Small improvement to compactify_tuples

From
Andrew Dunstan
Date:

On 03/04/2018 04:57 AM, Yura Sokolov wrote:
>
> BTW, I have a small change to the templated version that improves sorting
> of random tuples a bit (1-1.5%). I will post it a bit later with a test.
>
>


There doesn't seem to have been any progress since this email.

cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Small improvement to compactify_tuples

From
Michael Paquier
Date:
On Mon, Jul 02, 2018 at 05:49:06PM -0400, Andrew Dunstan wrote:
> There doesn't seem to have been any progress since this email.

Indeed, none.  I am marking it as returned with feedback...  The patch
rotted quite some time ago as well.
--
Michael

Attachment