On Sat, May 13, 2017 at 8:19 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, May 11, 2017 at 6:09 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>> This work would be helpful not only for existing workloads but also
>> for future work such as parallel utility commands, which are discussed
>> in other threads[1]. At least for parallel vacuum, this feature helps
>> solve an issue that the implementation of parallel vacuum has.
>>
>> I ran pgbench three times, 10 minutes each (scale factor 5000). Here
>> are the performance measurement results.
>>
>> clients  TPS(HEAD)  TPS(Patched)
>> 4        2092.612   2031.277
>> 8        3153.732   3046.789
>> 16       4562.072   4625.419
>> 32       6439.391   6479.526
>> 64       7767.364   7779.636
>> 100      7917.173   7906.567
>>
>> * 16 core Xeon E5620 2.4GHz
>> * 32 GB RAM
>> * ioDrive
>>
>> With the current implementation, there seems to be no performance degradation so far.
>>
>
> I think it is good to check pgbench, but we should also run bulk-load
> tests, since this lock is stressed under such a workload. Some of the
> tests we did when we improved bulk-load performance can be found in an
> e-mail [1].
>
Thank you for sharing.
I've measured performance using the two test scripts attached to that
thread. Here are the results.
* Copy test script
Client  HEAD    Patched
4       452.60  455.53
8       561.74  561.09
16      592.50  592.21
32      602.53  599.53
64      605.01  606.42

* Insert test script
Client  HEAD    Patched
4       159.04  158.44
8       169.41  169.69
16      177.11  178.14
32      182.14  181.99
64      182.11  182.73
It seems there is no performance degradation so far.
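For anyone wanting to reproduce runs like the pgbench numbers above, a
rough sketch of the commands follows. Only the scale factor (5000) and
the 10-minute duration come from the description above; the database
name, thread count (-j), and everything else are my assumptions, not
details from the original runs.

```shell
# Hypothetical sketch of the pgbench runs; -s 5000 and -T 600 (10 min)
# match the description above, the rest (database name, -j) is assumed.
SCALE=5000        # pgbench scale factor used in the test above
DURATION=600      # 10 minutes per run, as in the test above
DB=bench          # assumed database name

# One-time initialization (assumed form):
init_cmd="pgbench -i -s $SCALE $DB"

# Build the benchmark command for a given client count
# (-c clients, -j worker threads assumed equal to clients, -T seconds):
run_cmd() {
    echo "pgbench -c $1 -j $1 -T $DURATION $DB"
}

# The client counts measured above:
for clients in 4 8 16 32 64 100; do
    run_cmd "$clients"
done
```

Each run would then report a TPS figure comparable to the HEAD/Patched
columns above.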
Regards,
--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center