Re: Parallel copy - Mailing list pgsql-hackers

From: vignesh C
Subject: Re: Parallel copy
Msg-id: CALDaNm2EYd67r3NwScaFh9_onbX_vpKJVS9p-=+TX22q47m+Zg@mail.gmail.com
In response to: Re: Parallel copy (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
List: pgsql-hackers
The patches were not applying because of recent commits, so I have rebased them over HEAD and attached the updated version.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com

On Thu, Jul 23, 2020 at 6:07 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:
On Thu, Jul 23, 2020 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
>>
>> I ran tests for partitioned use cases - results are similar to that of non partitioned cases[1].
>
>
> I could see the gain up to 10-11 times for non-partitioned cases [1], can we use similar test case here as well (with one of the indexes on text column or having gist index) to see its impact?
>
> [1] - https://www.postgresql.org/message-id/CALj2ACVR4WE98Per1H7ajosW8vafN16548O2UV8bG3p4D3XnPg%40mail.gmail.com
>

Thanks, Amit! Please find the results of the detailed testing done for the partitioned use cases:

Range Partitions: consecutive rows go into the same partitions.
test case 1: copy from csv file, 2 indexes on integer columns and 1 index on text column, 4 range partitions
test case 2: copy from csv file, 1 gist index on text column, 4 range partitions
test case 3: copy from csv file, 3 indexes on integer columns, 4 range partitions

parallel workers | test case 1 (exec time in sec) | test case 2 (exec time in sec) | test case 3 (exec time in sec)
0                | 1051.924 (1X)                  | 785.052 (1X)                   | 205.403 (1X)
2                | 589.576 (1.78X)                | 421.974 (1.86X)                | 114.724 (1.79X)
4                | 321.960 (3.27X)                | 230.997 (3.4X)                 | 99.017 (2.07X)
8                | 199.245 (5.23X)                | 156.132 (5.02X)                | 99.722 (2.06X)
16               | 127.343 (8.26X)                | 173.696 (4.52X)                | 98.147 (2.09X)
20               | 122.029 (8.62X)                | 186.418 (4.21X)                | 97.723 (2.1X)
30               | 142.876 (7.36X)                | 214.598 (3.66X)                | 97.048 (2.11X)
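For reference, a test setup along the lines of test case 1 above could be sketched as follows. All table, column, and partition names, the range bounds, and the file path are illustrative only, and the PARALLEL option is the syntax proposed by the patch under discussion, not a released PostgreSQL feature:

```sql
-- Hypothetical range-partitioned setup; names and bounds are illustrative.
CREATE TABLE orders (
    id   integer,
    qty  integer,
    note text
) PARTITION BY RANGE (id);

CREATE TABLE orders_p1 PARTITION OF orders FOR VALUES FROM (0)        TO (25000000);
CREATE TABLE orders_p2 PARTITION OF orders FOR VALUES FROM (25000000) TO (50000000);
CREATE TABLE orders_p3 PARTITION OF orders FOR VALUES FROM (50000000) TO (75000000);
CREATE TABLE orders_p4 PARTITION OF orders FOR VALUES FROM (75000000) TO (100000000);

-- test case 1: two indexes on integer columns and one on the text column
CREATE INDEX ON orders (id);
CREATE INDEX ON orders (qty);
CREATE INDEX ON orders (note);

-- Parallel copy with 4 workers, using the PARALLEL option proposed by the
-- patch in this thread (requires a server built with the patch applied).
COPY orders FROM '/path/to/data.csv' WITH (FORMAT csv, PARALLEL 4);
```

With range partitioning, a sorted input file sends long runs of consecutive rows to the same partition, which is the "consecutive rows go into the same partitions" distribution measured above.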

On Thu, Jul 23, 2020 at 10:21 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:
>
> I think, when doing the performance testing for partitioned table, it would be good to also mention about the distribution of data in the input file. One possible data distribution could be that we have let's say 100 tuples in the input file, and every consecutive tuple belongs to a different partition.
>

To address Ashutosh's point about data distribution, I repeated the tests with hash partitioning. Hope this helps clear the doubt.

Hash Partitions: where there are high chances that consecutive rows may go into different partitions.
test case 1: copy from csv file, 2 indexes on integer columns and 1 index on text column, 4 hash partitions
test case 2: copy from csv file, 1 gist index on text column, 4 hash partitions
test case 3: copy from csv file, 3 indexes on integer columns, 4 hash partitions

parallel workers | test case 1 (exec time in sec) | test case 2 (exec time in sec) | test case 3 (exec time in sec)
0                | 1060.884 (1X)                  | 812.283 (1X)                   | 207.745 (1X)
2                | 572.542 (1.85X)                | 418.454 (1.94X)                | 107.850 (1.93X)
4                | 298.132 (3.56X)                | 227.367 (3.57X)                | 83.895 (2.48X)
8                | 169.449 (6.26X)                | 137.993 (5.89X)                | 85.411 (2.43X)
16               | 112.297 (9.45X)                | 95.167 (8.53X)                 | 96.136 (2.16X)
20               | 101.546 (10.45X)               | 90.552 (8.97X)                 | 97.066 (2.14X)
30               | 113.877 (9.32X)                | 127.17 (6.38X)                 | 96.819 (2.14X)
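A corresponding hash-partitioned setup can be sketched as below; table and partition names are illustrative only, and hash partitioning with MODULUS/REMAINDER is standard PostgreSQL DDL:

```sql
-- Hypothetical hash-partitioned setup; names are illustrative. Rows are
-- routed by a hash of "id", so consecutive input rows are likely to land
-- in different partitions.
CREATE TABLE orders (
    id   integer,
    qty  integer,
    note text
) PARTITION BY HASH (id);

CREATE TABLE orders_h0 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_h1 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE orders_h2 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE orders_h3 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```

Indexes and the COPY invocation would match the range-partitioned test cases; only the partitioning strategy, and hence the row distribution, changes.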


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
