Re: Performance on Bulk Insert to Partitioned Table - Mailing list pgsql-performance

From: Jeff Janes
Subject: Re: Performance on Bulk Insert to Partitioned Table
Date:
Msg-id: CAMkU=1xbdrp-HsPQ5kJZDxDTWa8NnvLiYB9kXiCYN8X9WSGv8Q@mail.gmail.com
In response to: Re: Performance on Bulk Insert to Partitioned Table (Charles Gomes <charlesrg@outlook.com>)
Responses: Re: Performance on Bulk Insert to Partitioned Table (Charles Gomes <charlesrg@outlook.com>)
List: pgsql-performance
On Thursday, December 20, 2012, Charles Gomes wrote:
> True, that's how I feel as well.  I will be looking to translate the trigger to C if I can find good examples; that should accelerate things.

I think your performance bottleneck is almost certainly the dynamic SQL.  Using C to generate that dynamic SQL isn't going to help much, because it is still the SQL engine that has to parse, plan, and execute it.
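For reference, this kind of routing trigger typically looks something like the sketch below; the table name "hist", the "ins_date" column, and the partition naming are illustrative assumptions, not your actual schema.  The point is that the INSERT text is rebuilt, parsed, and planned for every single row, and moving the string-building into C changes none of that.

CREATE OR REPLACE FUNCTION hist_insert_dynamic() RETURNS trigger AS $$
BEGIN
    -- The generated INSERT is re-parsed and re-planned for every row
    EXECUTE format('INSERT INTO hist_%s SELECT ($1).*',
                   to_char(NEW.ins_date, 'YYYY_MM_DD'))
    USING NEW;
    RETURN NULL;  -- skip the insert into the parent table
END;
$$ LANGUAGE plpgsql;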

Are the vast majority of your inserts on any given day for records from that same day or the day before, or are they evenly spread over the preceding year?  If the former, you could use static SQL in IF and ELSIF branches for those days, and fall back on dynamic SQL for the exceptions in the ELSE block.  Of course, that means you have to update the trigger every day.
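A minimal sketch of that arrangement, assuming a parent table "hist" partitioned daily on an "ins_date" column with children named hist_YYYY_MM_DD (all names and dates here are assumptions):

CREATE OR REPLACE FUNCTION hist_insert_static() RETURNS trigger AS $$
BEGIN
    IF NEW.ins_date >= date '2012-12-20' AND NEW.ins_date < date '2012-12-21' THEN
        INSERT INTO hist_2012_12_20 VALUES (NEW.*);   -- static SQL: plan is cached per session
    ELSIF NEW.ins_date >= date '2012-12-19' AND NEW.ins_date < date '2012-12-20' THEN
        INSERT INTO hist_2012_12_19 VALUES (NEW.*);
    ELSE
        -- the rare stragglers still go through dynamic SQL
        EXECUTE format('INSERT INTO hist_%s SELECT ($1).*',
                       to_char(NEW.ins_date, 'YYYY_MM_DD'))
        USING NEW;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER hist_insert_before
    BEFORE INSERT ON hist
    FOR EACH ROW EXECUTE PROCEDURE hist_insert_static();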

 
> Using rules would be totally bad as I'm partitioning daily and after one year having 365 lines of IF won't be fun to maintain.

Maintaining 365 lines of IF is what Perl was invented for.  That goes for triggers w/ static SQL as well as for rules.
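If Perl isn't handy, the branch generation can just as well be sketched in SQL itself; the names and the 2012 date range below are assumptions.  The query emits one ELSIF/INSERT pair per day; paste the output into the trigger body (turning the first ELSIF into IF) and re-run it whenever the partition set changes.

SELECT string_agg(
         format('ELSIF NEW.ins_date >= %L AND NEW.ins_date < %L THEN'
                || E'\n    INSERT INTO hist_%s VALUES (NEW.*);',
                d::date, d::date + 1, to_char(d, 'YYYY_MM_DD')),
         E'\n' ORDER BY d)
FROM generate_series(date '2012-01-01', date '2012-12-31', interval '1 day') AS t(d);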

If you do the static SQL in a trigger and the dates of the records are evenly scattered over the preceding year, make sure your IFs are nested like a binary search, not a linear search.  And if they are mostly for "today's" date, then make sure you search backwards.
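To make the shape concrete, here is a sketch with only four quarterly leaves (names and dates assumed); with 365 daily branches the same halving simply continues down to the individual day, so each row costs roughly nine comparisons instead of a walk through the whole year.

CREATE OR REPLACE FUNCTION hist_insert_bsearch() RETURNS trigger AS $$
BEGIN
    IF NEW.ins_date < date '2012-07-01' THEN
        IF NEW.ins_date < date '2012-04-01' THEN
            INSERT INTO hist_2012_q1 VALUES (NEW.*);
        ELSE
            INSERT INTO hist_2012_q2 VALUES (NEW.*);
        END IF;
    ELSE
        IF NEW.ins_date < date '2012-10-01' THEN
            INSERT INTO hist_2012_q3 VALUES (NEW.*);
        ELSE
            INSERT INTO hist_2012_q4 VALUES (NEW.*);
        END IF;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;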

Cheers,

Jeff
