Re: Creating table with data from a join - Mailing list pgsql-general

From Igor Stassiy
Subject Re: Creating table with data from a join
Date
Msg-id CAKVOjewjdfVRpRjiKs3qZGM7rwfuH=QJ__0jX-=4XkMUe7zTEA@mail.gmail.com
In response to Re: Creating table with data from a join  (David Rowley <david.rowley@2ndquadrant.com>)
List pgsql-general
Thank you David. I tried measuring the time of COPY (SELECT * FROM a JOIN b ON a.id = b.id) TO '/tmp/dump.sql' alone, and it took about an order of magnitude (~10x) less than the complete command (including the INSERT side), so conversion is probably not the main factor in the slowdown (unless the text -> internal conversion is significantly slower than the internal -> text one).
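For concreteness, the two legs can be timed separately from psql along these lines (a minimal sketch; it assumes table c already exists, and the dump path is just an example):

\timing on
COPY (SELECT * FROM a JOIN b ON a.id = b.id) TO '/tmp/dump.sql';
COPY c FROM '/tmp/dump.sql';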

I will also try your suggestion of limiting the id ranges.

On Tue, Jul 14, 2015 at 1:42 PM David Rowley <david.rowley@2ndquadrant.com> wrote:
On 14 July 2015 at 21:12, Igor Stassiy <istassiy@gmail.com> wrote:
Hello,

I am benchmarking different ways of populating a table at creation time:

1. INSERT INTO c SELECT * FROM a JOIN b on a.id = b.id;
2. CREATE TABLE c AS SELECT * FROM a JOIN b on a.id = b.id;
3. psql -c "COPY (SELECT * FROM a JOIN b on a.id = b.id) TO STDOUT" | 
parallel --block 128M --jobs 4 --pipe psql -c "COPY c FROM STDIN";

(the parallel command is available as part of the parallel deb package on Ubuntu, for example; it splits stdin on newline characters and feeds the chunks to the corresponding command)

Both tables a and b have ~16M records, and one of the columns in a is a geometry (ranging from several KB to several MB in size). The columns in b are mostly integers.

The machine that I am running these commands on has the following parameters:

default_statistics_target = 50 # pgtune wizard 2012-06-06
maintenance_work_mem = 1GB # pgtune wizard 2012-06-06
constraint_exclusion = on # pgtune wizard 2012-06-06
checkpoint_completion_target = 0.9 # pgtune wizard 2012-06-06
effective_cache_size = 48GB # pgtune wizard 2012-06-06
work_mem = 80MB # pgtune wizard 2012-06-06 
wal_buffers = 8MB # pgtune wizard 2012-06-06 
checkpoint_segments = 16 # pgtune wizard 2012-06-06 
shared_buffers = 16GB # pgtune wizard 2012-06-06 
max_connections = 400 # pgtune wizard 2012-06-06

One would expect the 3rd option to be faster than 1 and 2; however, 2 outperforms both by a large margin (sometimes 2x). This is especially surprising given that COPY doesn't acquire a global lock on the table, only a RowExclusiveLock.

So is option 2 the winner by design? Could you please suggest other alternatives to try (if there are any)? And what might be the reason that 3 doesn't outperform the other two?

I would imagine that calling the output function to translate each value's internal representation to its user-visible/external representation, plus all the overhead of sending results to the client, is a likely candidate for the slowdown. Either way, 3 can only be as fast as the query generating the output. With 1 and 2, the tuples stay in the internal format throughout.
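If that conversion cost does turn out to matter, binary COPY might be worth a try, since the binary send/recv functions are usually cheaper than the text ones. A sketch (note that a binary stream can't be split on newlines, so the parallel trick from option 3 wouldn't apply):

psql -c "COPY (SELECT * FROM a JOIN b ON a.id = b.id) TO STDOUT (FORMAT binary)" \
  | psql -c "COPY c FROM STDIN (FORMAT binary)"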

If you have some logical way to break the query down into parts, then maybe that would be a place to look.
For example:

INSERT INTO c SELECT * FROM a JOIN b on a.id = b.id WHERE a.id < 8000000;
INSERT INTO c SELECT * FROM a JOIN b on a.id = b.id WHERE a.id >= 8000000;

Of course, you'd need to be very careful to ensure that the results of each SELECT never overlap. It would be nice to have some better way of dividing the workload evenly, one that keeps working as the tables grow (one rough idea is sketched below).

Then you could run these concurrently.
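One rough way to derive even boundaries is to bucket the ids with a window function. A sketch (it costs a full sort of a.id, so it only pays off when the inserts dominate):

SELECT bucket, min(id) AS lo, max(id) AS hi
FROM (SELECT id, ntile(4) OVER (ORDER BY id) AS bucket FROM a) s
GROUP BY bucket
ORDER BY bucket;

Each resulting [lo, hi] range then becomes the WHERE clause of one concurrent INSERT.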

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 
