Re: connections not getting closed on a replica - Mailing list pgsql-general

From FarjadFarid (ChkNet)
Subject Re: connections not getting closed on a replica
Date
Msg-id 00a801d13512$d39bc6a0$7ad353e0$@checknetworks.com
In response to Re: connections not getting closed on a replica  (Kevin Grittner <kgrittn@gmail.com>)
List pgsql-general
Assuming you have at least 16GB of memory, these numbers are not a real problem on a good hardware server. On a bad
server motherboard you might as well use a standard PC. With 32GB I have tested ten times more connections, though not
to PostgreSQL.

I would investigate everything from bottom up.

Also, under TCP/IP the flow and validity of the transaction is guaranteed, so I would look for other issues that are
locking the system.

For a good motherboard design, check out Intel's motherboards.

-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Kevin Grittner
Sent: 11 December 2015 22:13
To: Carlo Cabanilla
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] connections not getting closed on a replica

On Fri, Dec 11, 2015 at 3:37 PM, Carlo Cabanilla <carlo@datadoghq.com> wrote:

> 16 cores

> a default pool size of 650, steady state of 500-600 server connections

With so many more connections than resources to serve them, one thing that can happen is that just by happenstance
enough processes become busy at one time that they start context switching a lot before they finish, leaving spinlocks
blocked and causing other contention that slows all query run times.  This causes bloat to increase because some
database transactions are left active for longer times.  If the client software and/or pooler don't queue requests at
that point, there will be more connections made because connections have not been freed due to the contention causing
slowness -- which exacerbates the problem and leads to a downward spiral.  That can become so bad that there is no
recovery until either the client software is stopped or the database is restarted.
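
For what it's worth, the queueing behavior described above is exactly what a pooler like PgBouncer can provide when the pool is sized to the hardware rather than the client count. The fragment below is a minimal sketch with illustrative values (the host, database name, and specific numbers are assumptions, not recommendations for this system):

```ini
; pgbouncer.ini sketch -- cap server connections near what 16 cores
; can actually run, and let excess clients wait in the queue instead
; of piling onto the database.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction       ; server connection released at commit/rollback
default_pool_size = 35        ; ~2x cores plus spindles, not 500-650
max_client_conn = 1000        ; extra clients queue here
query_wait_timeout = 120      ; give up after 2 minutes rather than spiral
```

With a cap like this, a burst of client activity lengthens the queue rather than the context-switch contention, so query times stay predictable.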

>> I don't suppose you have vmstat 1 output from the incident?  If it
>> happens again, try to capture that.
>
> Are you looking for a stat in particular?

Not really; what I like about `vmstat 1` is how many useful pieces of information are on each line, allowing me to get
a good overview of what's going on.  For example, if system CPU time is high, it is very likely to be a problem with
transparent huge pages, which is one thing that can cause these symptoms.  A "write glut" can also do so, which can be
controlled by adjusting checkpoint and background writer settings, plus the OS vm.dirty_* settings (and maybe keeping
shared_buffers smaller than you otherwise might).
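
To make that concrete, the knobs mentioned above map to settings along these lines. The values are illustrative assumptions for a write-glut scenario, not tuned recommendations for any particular workload:

```ini
# postgresql.conf -- spread checkpoint writes over most of the interval
# so they don't land as one burst
checkpoint_completion_target = 0.9

# /etc/sysctl.conf -- cap the OS dirty-page cache in bytes so the kernel
# starts writing back sooner, in smaller chunks (64MB / 512MB here)
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 536870912
```

Watching the `wa` (I/O wait) and `bo` (blocks out) columns of `vmstat 1` before and after such a change is one way to see whether write gluts were actually the problem.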
NUMA problems are not at issue, since there is only one memory node.

Without more evidence of what is causing the problem, suggestions for a solution are shots in the dark.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


