Re: Improving performance of merging data between tables - Mailing list pgsql-general

From Pawel Veselov
Subject Re: Improving performance of merging data between tables
Date
Msg-id CAMnJ+BdCaH8_2Cpyq8bxKdVMFim-LB7M76ROK+kzsXcbknpkQQ@mail.gmail.com
In response to Re: Improving performance of merging data between tables  (Pawel Veselov <pawel.veselov@gmail.com>)
Responses Re: Improving performance of merging data between tables  (Maxim Boguk <maxim.boguk@gmail.com>)
List pgsql-general


On Mon, Dec 29, 2014 at 9:29 PM, Pawel Veselov <pawel.veselov@gmail.com> wrote:

[skipped]


1) How do I find out what exactly is consuming the CPU in a PL/pgSQL
function? All I see is that the calls to the merge_all() function take a
long time, and the CPU is high while this is going on.

 
[skipped] 

2) try pg_stat_statements, setting "pg_stat_statements.track = all".  see:
http://www.postgresql.org/docs/9.4/static/pgstatstatements.html

I have used this to profile some functions, and it worked pretty well. Mostly I use it on a test box, but I once ran it on the live server, which was scary, but it worked great.
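
For reference, a minimal setup sketch, assuming PostgreSQL 9.4 and superuser access (the library has to be preloaded, which is why nothing shows up until after a server restart):

    # postgresql.conf -- takes effect only after a restart
    shared_preload_libraries = 'pg_stat_statements'
    pg_stat_statements.track = all    # also track statements executed inside functions

    -- then, once per database:
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

With track = all, the individual statements run inside merge_all() get their own rows in the pg_stat_statements view, instead of only the top-level call being tracked.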

That looks promising. Turned it on, waiting for when I can restart the server at the next "quiet time".

I have to say this turned out to be a bit of a disappointment for this use case. It only measures the total time spent in a call, so it also turns up operations that simply waited a long time on some lock. It's useful, but it would be great if total_time came along with a wait_time (and perhaps an io_time as well, since I also see operations that just naturally have to fetch a lot of data).
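
A rough query sketch for pulling the most expensive entries out of the view (column names as in the 9.4 docs; total_time is in milliseconds, and blk_read_time/blk_write_time are only populated when track_io_timing is on):

    SELECT query,
           calls,
           total_time,
           total_time / calls              AS avg_time_ms,
           blk_read_time + blk_write_time  AS io_time_ms
      FROM pg_stat_statements
     ORDER BY total_time DESC
     LIMIT 20;

It still can't separate lock waits from real work, but the block I/O timing columns at least cover part of the io_time wish.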

[skipped]
