Re: Performance on Bulk Insert to Partitioned Table - Mailing list pgsql-performance

From Stephen Frost
Subject Re: Performance on Bulk Insert to Partitioned Table
Date
Msg-id 20121220200234.GJ12354@tamriel.snowman.net
In response to Performance on Bulk Insert to Partitioned Table  (Charles Gomes <charlesrg@outlook.com>)
Responses Re: Performance on Bulk Insert to Partitioned Table
List pgsql-performance
Charles,

* Charles Gomes (charlesrg@outlook.com) wrote:
> I'm doing 1.2 billion inserts into a table partitioned into
> 15.

Do you end up having multiple threads writing to the same underlying
tables?  If so, I've seen that problem before.  Look at pg_locks while
things are running and see if there are 'extend' locks that aren't being
granted immediately.
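
For instance, something along these lines against the standard pg_locks
view (a rough sketch, not from the original thread) will show the
extension locks currently held or waited on:

    SELECT locktype, relation::regclass AS rel, pid, granted
      FROM pg_locks
     WHERE locktype = 'extend';

If rows keep showing up with granted = false while the load is running,
that's the contention in question.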

Basically, PG takes a per-relation lock whenever it extends a relation
(by a mere 8K at a time), and that lock blocks other writers.  If
there's a lot of contention around that lock, you'll get poor
performance, and it'll be faster to have independent threads writing
directly to the underlying tables.  I doubt rewriting the trigger in C
will help if the problem is the extension lock.
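
As a sketch of what "writing directly to the underlying tables" means
here (table, column, and file names are made up for illustration), each
loader thread targets its own child table instead of going through the
parent and its routing trigger:

    -- thread 1
    COPY parent_tbl_part01 (id, ts, payload)
      FROM '/data/chunk01.csv' WITH (FORMAT csv);

    -- thread 2
    COPY parent_tbl_part02 (id, ts, payload)
      FROM '/data/chunk02.csv' WITH (FORMAT csv);

Each thread has to pick the right partition itself, of course, but the
threads no longer fight over a single relation's extension lock.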

If you do get this working well, I'd love to hear what you did to
accomplish it.  Note also that you can get bottlenecked on WAL traffic
unless you've taken steps to avoid writing that WAL.
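
On the WAL side, a couple of the usual options (a sketch only, and it
assumes the data can simply be reloaded if the server crashes mid-load;
table names are again illustrative):

    -- Option 1: unlogged child tables are not WAL-logged at all
    CREATE UNLOGGED TABLE parent_tbl_part01
      (LIKE parent_tbl INCLUDING ALL);

    -- Option 2: with wal_level = minimal (and no archiving or
    -- replication), creating or truncating the table in the same
    -- transaction as the COPY lets PG skip WAL for the bulk load
    BEGIN;
    TRUNCATE parent_tbl_part01;
    COPY parent_tbl_part01 FROM '/data/chunk01.csv' WITH (FORMAT csv);
    COMMIT;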

    Thanks,

        Stephen
