Re: max_connections limit violation not showing in pg_stat_activity - Mailing list pgsql-general

From Charles Clavadetscher
Subject Re: max_connections limit violation not showing in pg_stat_activity
Msg-id 04df01d24576$753af020$5fb0d060$@swisspug.org
In response to Re: max_connections limit violation not showing in pg_stat_activity  (Kevin Grittner <kgrittn@gmail.com>)
List pgsql-general
Hello Kevin

Thank you very much for your input. I really appreciate it.

> -----Original Message-----
> From: Kevin Grittner [mailto:kgrittn@gmail.com]
> Sent: Tuesday, 22 November 2016 22:37
> To: Charles Clavadetscher <clavadetscher@swisspug.org>
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] max_connections limit violation not showing in pg_stat_activity
>
> On Tue, Nov 22, 2016 at 12:48 PM, Charles Clavadetscher <clavadetscher@swisspug.org> wrote:
>
> > We are using PostgreSQL 9.3.10 on RedHat (probably 6.x).
>
> Is it possible to upgrade?  You are missing over a year's worth of fixes for serious bugs and security
> vulnerabilities.

Yes. A move to 9.6 (and RedHat 7.x) is already planned.

> https://www.postgresql.org/support/versioning/
>
> > Among other things, the database is the backend for a web application
> > that expects a load of some hundred users at a time (those are
> > participants in online surveys that we use for computing economic
> > indicators, who access the system every month). The total number of
> > people expected is above 5000, but we don't expect very high
> > concurrent access to the database. As mentioned, a few hundred at the beginning of the surveys.
> >
> > To be sure that we won't have problems at peak times we created
> > a load test using gatling that ramps up to 1000 users in 5 minutes in
> > bunches of 10. At the beginning we had problems with the web server
> > response that we were able to correct. Now we face a problem with the
> > max_connections limit of PostgreSQL. Currently it is set to the
> > default of 100. We are going to look into it and either increase that
> > limit or consider connection pooling.
>
> On a web site with about 3000 active users, I found (through adjusting the connection pool size on the production
> database and monitoring performance) that we got best performance with a pool of about 40 connections.  This was on
> a machine with 16 cores (never count HT "threads" as cores), 512GB RAM, and a RAID with 40 drives of spinning rust.

OK. I will have to check with our hosting people how many cores we have, or can have, on the new environment.
I have seen that there are pgBouncer and pgPool. Would you recommend one of those? AFAICS both are installed on the
client side, so we should be able to use them.
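
To make the idea concrete, a minimal pgbouncer.ini on the web server could look roughly like the sketch below; the
database name, host and pool size are only placeholders/assumptions for our setup, not tested values. The application
would then connect to port 6432 instead of 5432.

  [databases]
  surveys = host=db.example.org port=5432 dbname=surveys

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction
  default_pool_size = 40
  max_client_conn = 1000

With something like this, max_connections on the server could stay small while the 1000 clients from the load test
queue in the pooler.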

> http://tbeitr.blogspot.com/2015/11/for-better-service-please-take-number.html
>
> > What bothers me however is that running a query on pg_stat_activity
> > with a \watch of 1 second never shows any value higher than 37
> > concurrent active connections.
> >
> > SELECT count(*) FROM pg_stat_activity \watch 1
>
> At the times when the resources are overloaded by more connections than the resources can efficiently service --
> well that's precisely the time that a sleeping "monitoring" process is least likely to be given a time slice to run.
> If you can manage to get pgbadger to run on your environment, and you turn on logging of connections and
> disconnections, you will be able to get far more accurate information.

Yes, that sounds reasonable. I had assumed that this kind of measurement gets a higher priority or has slots reserved
for it, since those occasions are exactly when it is most needed.
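
In the meantime I will probably keep polling pg_stat_activity from psql, but grouped by state so that idle connections
do not hide the active ones. A rough sketch (the one-second interval is just a guess at a useful resolution):

  SELECT state, count(*)
  FROM pg_stat_activity
  GROUP BY state
  ORDER BY count(*) DESC
  \watch 1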

And thank you for the hint about pgbadger. I will take a look at it, but an installation on the server depends
completely on our hosting service people. I am not sure that is feasible, but I can imagine an alternative scenario:
using a local installation for tuning and then passing the parameters to the hosters for implementation. That might help.
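
If the hosters agree, as far as I understand pgbadger mainly needs connection and duration logging switched on, so
something along these lines in postgresql.conf; the exact line prefix and thresholds are my assumption and would have
to be checked against the pgbadger documentation, and logging every statement duration is probably only acceptable
during the load tests:

  log_connections = on
  log_disconnections = on
  log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
  log_min_duration_statement = 0
  log_lock_waits = on
  log_temp_files = 0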

Regards
Charles

>
> > Increasing max_connections has repercussions on the configuration of
> > work_mem (if I remember well)
>
> Each connection can allocate one work_mem allocation per node which requires a sort, hash, CTE, etc.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
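
(A back-of-the-envelope illustration of that last point, with numbers that are only assumptions: with work_mem at,
say, 4MB, 100 connections each running a query with 4 sort or hash nodes could in the worst case allocate about
100 x 4 x 4MB = 1.6GB; with max_connections raised to 1000 the same query mix could reach roughly 16GB. That is the
main argument for a pooler in front of a small max_connections rather than simply raising the limit.)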


