
From Greg Nancarrow
Subject Re: Parallel copy
Msg-id CAJcOf-dUchi35jTZu7Qdjs9P6=u3t73oLsLXSiW6EqK0=eY6dg@mail.gmail.com
In response to Re: Parallel copy  (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
Responses Re: Parallel copy  (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
List pgsql-hackers
Hi Bharath,

On Tue, Sep 15, 2020 at 11:49 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> Few questions:
>  1. Was the run performed with default postgresql.conf file? If not,
> what are the changed configurations?
Yes, just default settings.

>  2. Are the readings for normal copy(190.891sec, mentioned by you
> above) taken on HEAD or with patch, 0 workers?
With patch

>How much is the runtime
> with your test case on HEAD(Without patch) and 0 workers(With patch)?
To be honest, I didn't test that. Looking at the changes, I wouldn't
expect any performance degradation for normal copy (you have tested
that, right?).

>  3. Was the run performed on release build?
For generating the perf data I sent (normal copy vs parallel copy with
1 worker), I used a debug build (-g -O0), as that is needed for
generating all the relevant perf data for Postgres code. Previously I
ran with a release build (-O2).

>  4. Were the readings taken on multiple runs(say 3 or 4 times)?
The readings I sent were from just one run (not averaged), but I did
run the tests several times to verify the readings were representative
of the pattern I was seeing.


Fortunately I have been given permission to share the exact table
definition and data I used, so you can check the behaviour and timings
on your own test machine.
Please see the attachment.
You can create the table using the table.sql and index_4.sql
definitions in the "sql" directory.
The data.csv file (to be loaded by COPY) can be created with the
included "dupdata" tool in the "input" directory; build it, then run
it, specifying a suitable number of records and the path of the
template record (see the README). Obviously, the larger the number of
records, the larger the file ...
The table can then be loaded using COPY with "format csv" (and
"parallel N" if testing parallel copy).

Regards,
Greg Nancarrow
Fujitsu Australia

