Re: to many locks held - Mailing list pgsql-performance

From Michael Paquier
Subject Re: to many locks held
Date
Msg-id CAB7nPqQaZ9TaMn6cSMc9GwCWDNcBS=pv=6=r715+vN9YPNO+xw@mail.gmail.com
In response to Re: to many locks held  (bricklen <bricklen@gmail.com>)
List pgsql-performance



On Tue, Jul 30, 2013 at 11:48 PM, bricklen <bricklen@gmail.com> wrote:
> On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <jeisonb@audifarma.com.co> wrote:
>> memory ram: 128 GB
>> cores: 32
>>
>> max_connections: 900
>
> I would say you might be better off using a connection pooler if you need this many connections.

Yeah, that's a lot. pgbouncer might be a good option in your case.
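
A minimal pgbouncer setup for this kind of workload might look like the sketch below. The database name, pool sizes, and file paths are illustrative assumptions, not values from the original report:

```ini
; pgbouncer.ini -- illustrative sketch, not tuned for the reported server
[databases]
; route the application database through the pooler (name is an assumption)
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets ~900 client connections share far fewer backends
pool_mode = transaction
max_client_conn = 900
default_pool_size = 40
```

With transaction pooling, the 900 application connections are multiplexed onto a much smaller number of real backends, so per-backend memory settings like work_mem stop multiplying by 900.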

>> work_mem = 1024MB
>
> work_mem is pretty high. It would make sense in a data warehouse-type environment, but with a max of 900 connections, that can get used up in a hurry. Do you find your queries regularly spilling sorts to disk (something like "external merge Disk" in your EXPLAIN ANALYZE plans)?
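
One way to check for spills without hunting through individual plans is the temp-file counters in pg_stat_database (a sketch; `some_table` and `some_column` are placeholders):

```sql
-- Per-database temp-file activity: nonzero, growing values mean
-- sorts/hashes are spilling past work_mem to disk.
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
ORDER BY temp_bytes DESC;

-- Or inspect a single query's plan for "Sort Method: external merge  Disk: ..."
EXPLAIN ANALYZE SELECT * FROM some_table ORDER BY some_column;
```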
work_mem is allocated per sort or hash operation, not per connection, so a single query can use several multiples of it. In your case, even if each of the 900 sessions ran only one 1GB sort at a time, the server could try to allocate 900GB of memory, far beyond your 128GB of RAM. Reduce work_mem to a value your server can actually sustain and the OOM problems should go away.
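
The back-of-envelope math can be checked directly in psql (assuming the most optimistic case of one sort per session):

```sql
-- 900 sessions x work_mem of 1024MB, with only ONE sort/hash each:
SELECT pg_size_pretty(900 * 1024 * 1024 * 1024::bigint) AS worst_case_sort_memory;
-- → 900 GB
```

Real queries often run several sort/hash nodes at once, so the true peak can be considerably higher than this figure.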
--
Michael
