Thread: Profiling PostgreSQL

Profiling PostgreSQL

From: Dimitris Karampinas
Is there any way to get the call stack of a function when profiling PostgreSQL with perf?
I configured with --enable-debug, ran a benchmark against the system, and was able to identify a bottleneck:
40% of the time is spent on a spinlock, yet I cannot find the code path that gets me there.
Using --call-graph with perf record didn't seem to help.
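
(For reference, what I run is roughly the following; the PID and the duration are just placeholders:)

    # roughly the invocation I mean: attach to one backend and record call graphs
    perf record -g -p <backend_pid> -- sleep 30
    perf report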

Any ideas?

Cheers,
Dimitris

Re: Profiling PostgreSQL

From: David Boreham
On 5/22/2014 7:27 AM, Dimitris Karampinas wrote:
> Is there any way to get the call stack of a function when profiling
> PostgreSQL with perf?
> I configured with --enable-debug, ran a benchmark against the system,
> and was able to identify a bottleneck:
> 40% of the time is spent on a spinlock, yet I cannot find the code path
> that gets me there.
> Using --call-graph with perf record didn't seem to help.
>
> Any ideas?
>
Can you arrange to run 'pstack' a few times on the target process
(either manually or with a shell script)?
If the probability of the process being in the spinning state is high,
then this approach should snag you at least one call stack.
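
Something along these lines should be enough (a rough sketch; the PID, sample count, and interval are placeholders):

    # sample the backend's stack repeatedly; if the spinlock is hot,
    # some samples should land inside the offending code path
    PID=<backend_pid>
    for i in $(seq 1 50); do
        pstack "$PID"
        sleep 0.2
    done > stacks.txt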




Re: Profiling PostgreSQL

From: Tom Lane
Dimitris Karampinas <dkarampin@gmail.com> writes:
> Is there any way to get the call stack of a function when profiling
> PostgreSQL with perf?
> I configured with --enable-debug, ran a benchmark against the system, and
> was able to identify a bottleneck:
> 40% of the time is spent on a spinlock, yet I cannot find the code path
> that gets me there.
> Using --call-graph with perf record didn't seem to help.

Call graph data usually isn't trustworthy unless you built the program
with -fno-omit-frame-pointer ...
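
For instance, something like this (an illustrative sketch; adjust paths and options to your build):

    # rebuild keeping frame pointers, then record with call graphs
    ./configure --enable-debug CFLAGS="-O2 -fno-omit-frame-pointer"
    make && make install
    perf record -g -p <backend_pid> -- sleep 30
    perf report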

            regards, tom lane


Re: Profiling PostgreSQL

From: Michael Paquier
On Thu, May 22, 2014 at 10:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Call graph data usually isn't trustworthy unless you built the program
> with -fno-omit-frame-pointer ...
This page is full of ideas as well:
https://wiki.postgresql.org/wiki/Profiling_with_perf
--
Michael


Re: Profiling PostgreSQL

From: Dimitris Karampinas
Thanks for your answers. A script around pstack worked for me.

(I'm not sure if I should open a new thread, I hope it's OK to ask another question here)

For the workload I run, PostgreSQL seems to scale with the number of concurrent clients up to the point where the clients (more or less) reach the number of cores.
Increasing the number of clients further leads to dramatic performance degradation. pstack and perf show that backends block on LWLockAcquire calls, so one could assume that the system slows down because multiple concurrent transactions access the same data.
However, I did the following two experiments:
1) I completely removed the UPDATE transactions from my workload. The throughput was better, yet the trend was the same: increasing the number of clients still has a very negative performance impact.
2) I deployed PostgreSQL on more cores. The throughput improved a lot. If the problem were due to concurrency control, the throughput should remain the same no matter the number of hardware contexts.
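
(For concreteness, the client sweep follows roughly the pattern below; pgbench is used here only as a stand-in for my actual benchmark, and "bench" is a placeholder database name:)

    # illustrative client sweep: double the client count and record throughput
    for c in 1 2 4 8 16 32 64; do
        pgbench -c $c -j $c -T 60 bench
    done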

Any insight into why the system behaves like this?

Cheers,
Dimitris



Re: Profiling PostgreSQL

From: Pavel Stehule


On 23.5.2014 16:41, "Dimitris Karampinas" <dkarampin@gmail.com> wrote:
>
> Thanks for your answers. A script around pstack worked for me.
>
> (I'm not sure if I should open a new thread, I hope it's OK to ask another question here)
>
> For the workload I run, PostgreSQL seems to scale with the number of concurrent clients up to the point where the clients (more or less) reach the number of cores.
> Increasing the number of clients further leads to dramatic performance degradation. pstack and perf show that backends block on LWLockAcquire calls, so one could assume that the system slows down because multiple concurrent transactions access the same data.
> However, I did the following two experiments:
> 1) I completely removed the UPDATE transactions from my workload. The throughput was better, yet the trend was the same: increasing the number of clients still has a very negative performance impact.
> 2) I deployed PostgreSQL on more cores. The throughput improved a lot. If the problem were due to concurrency control, the throughput should remain the same no matter the number of hardware contexts.
>
> Any insight into why the system behaves like this?

Physical limits: there are two possible bottlenecks, CPU or I/O. Postgres uses one CPU per session, so with a CPU-intensive benchmark the maximum should be reached around the number of CPU-bound workers. Beyond that point the workers share CPUs, but total throughput should stay roughly the same up to about 10x the number of CPUs (depending on the test).


Re: Profiling PostgreSQL

From: Jeff Janes
On Fri, May 23, 2014 at 7:40 AM, Dimitris Karampinas <dkarampin@gmail.com> wrote:
> Thanks for your answers. A script around pstack worked for me.
>
> (I'm not sure if I should open a new thread, I hope it's OK to ask another question here)
>
> For the workload I run, PostgreSQL seems to scale with the number of concurrent clients up to the point where the clients (more or less) reach the number of cores.
> Increasing the number of clients further leads to dramatic performance degradation. pstack and perf show that backends block on LWLockAcquire calls, so one could assume that the system slows down because multiple concurrent transactions access the same data.
> However, I did the following two experiments:
> 1) I completely removed the UPDATE transactions from my workload. The throughput was better, yet the trend was the same: increasing the number of clients still has a very negative performance impact.

Currently, acquisition and release of all LWLocks, even in shared mode, are protected by spinlocks, which are exclusive. So they cause a lot of contention even on read-only workloads. Also, if the working set fits in RAM but not in shared_buffers, you will have a lot of exclusive locks on the buffer freelist and the buffer mapping tables.
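
A quick way to sanity-check that (just a sketch) is to compare shared_buffers against the size of the data being hit:

    # compare shared_buffers with the size of the database under test
    psql -c "SHOW shared_buffers;"
    psql -c "SELECT pg_size_pretty(pg_database_size(current_database()));"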

> 2) I deployed PostgreSQL on more cores. The throughput improved a lot. If the problem were due to concurrency control, the throughput should remain the same no matter the number of hardware contexts.

Hardware matters!  How did you change the number of cores?

Cheers,

Jeff 

Re: Profiling PostgreSQL

From: Dimitris Karampinas
I want to bypass any disk bottleneck, so I store all the data in ramfs (the purpose of the project is to profile PostgreSQL, so I don't care about data loss if anything goes wrong).
Since my data are memory resident, I thought the size of the shared buffers wouldn't play much of a role, yet I have to admit that I saw a difference in performance when modifying the shared_buffers parameter.

I use taskset to control the number of cores that PostgreSQL is deployed on.

Is there any parameter/variable in the system that is set dynamically and depends on the number of cores?

Cheers,
Dimitris



Re: Profiling PostgreSQL

From: Jeff Janes
On Fri, May 23, 2014 at 10:25 AM, Dimitris Karampinas <dkarampin@gmail.com> wrote:
> I want to bypass any disk bottleneck, so I store all the data in ramfs (the purpose of the project is to profile PostgreSQL, so I don't care about data loss if anything goes wrong).
> Since my data are memory resident, I thought the size of the shared buffers wouldn't play much of a role, yet I have to admit that I saw a difference in performance when modifying the shared_buffers parameter.

In which direction?  If making shared_buffers larger improves things, that suggests that you have contention on the BufFreelistLock.  Increasing shared_buffers reduces buffer churn (assuming you increase it by enough) and so decreases that contention.
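
If you want to test that theory, raise it well past the working-set size and re-run, e.g. (the value is only a placeholder):

    # raise shared_buffers in postgresql.conf and restart (placeholder value)
    echo "shared_buffers = 4GB" >> $PGDATA/postgresql.conf
    pg_ctl -D $PGDATA restart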

> I use taskset to control the number of cores that PostgreSQL is deployed on.

It can be important what bits you set.  For example if you have 4 sockets, each one with a quadcore, you would probably maximize the consequences of spinlock contention by putting one process on each socket, rather than putting them all on the same socket.
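
For example (core numbers are purely illustrative; check how cores map to sockets on your machine first):

    # see which logical CPUs belong to which socket/NUMA node
    lscpu --extended
    # pin the whole cluster to four cores on one socket:
    taskset -c 0,1,2,3 pg_ctl -D $PGDATA start
    # ...or spread it across sockets, one core per socket:
    # taskset -c 0,4,8,12 pg_ctl -D $PGDATA start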

> Is there any parameter/variable in the system that is set dynamically and depends on the number of cores?

The number of spins a spinlock goes through before sleeping, spins_per_delay, is determined dynamically based on how often a tight loop "pays off".  But I don't think this is very sensitive to the exact number of processors, just the difference between 1 and more than 1.


Re: Profiling PostgreSQL

From: Dimitris Karampinas
Increasing the shared_buffers size improved the performance by 15%. The trend remains the same though: steep drop in performance after a certain number of clients.

My deployment is "NUMA-aware". I allocate cores that reside on the same socket. Once I reach the maximum number of cores, I start allocating cores from a neighbouring socket.

I'll try to print the number of spins_per_delay for each experiment... just in case I get something interesting.



Re: Profiling PostgreSQL

From: Matheus de Oliveira

On Sun, May 25, 2014 at 1:26 PM, Dimitris Karampinas <dkarampin@gmail.com> wrote:
> My deployment is "NUMA-aware". I allocate cores that reside on the same socket. Once I reach the maximum number of cores, I start allocating cores from a neighbouring socket.

I'm not sure if it solves your issue, but in a NUMA environment with a recent Linux kernel you should try disabling vm.zone_reclaim_mode, as it seems to cause performance degradation for database workloads; see [1] and [2].
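
Something like this (a quick sketch; needs root):

    # check the current value (non-zero means zone reclaim is enabled)
    cat /proc/sys/vm/zone_reclaim_mode
    # disable it on the running kernel
    sysctl -w vm.zone_reclaim_mode=0
    # and persist it across reboots
    echo "vm.zone_reclaim_mode = 0" >> /etc/sysctl.conf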
Best regards,
--
Matheus de Oliveira
Analista de Banco de Dados
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres