
From Kevin Grittner
Subject Re: max_connections limit violation not showing in pg_stat_activity
Date
Msg-id CACjxUsNpbptqWn8gjk=OkF1VzM9ziutmCkEbih1DT=VxBZatzQ@mail.gmail.com
In response to max_connections limit violation not showing in pg_stat_activity  (Charles Clavadetscher <clavadetscher@swisspug.org>)
List pgsql-general
On Tue, Nov 22, 2016 at 12:48 PM, Charles Clavadetscher
<clavadetscher@swisspug.org> wrote:

> We are using PostgreSQL 9.3.10 on RedHat (probably 6.x).

Is it possible to upgrade?  You are missing over a year's worth of
fixes for serious bugs and security vulnerabilities.

https://www.postgresql.org/support/versioning/

> Among other things, the database is the backend for a web application
> that expects a load of some hundred users at a time (these are
> participants in online surveys that we use for computing economic
> indicators, and they access the system every month). The total number
> of people expected is above 5000, but we don't expect very high
> concurrent access to the database. As mentioned, a few hundred at the
> beginning of the surveys.
>
> To be sure that we won't have problems at peak times we created a
> load test using Gatling that ramps up to 1000 users in 5 minutes in
> bunches of 10. At the beginning we had problems with the web server
> response, which we were able to correct. Now we face a problem with
> the max_connections limit of PostgreSQL. Currently it is set to the
> default of 100. We are going to look into it and either increase that
> limit or consider connection pooling.

On a web site with about 3000 active users, I found (through
adjusting the connection pool size on the production database and
monitoring performance) that we got best performance with a pool of
about 40 connections.  This was on a machine with 16 cores (never
count HT "threads" as cores), 512GB RAM, and a RAID with 40 drives
of spinning rust.

http://tbeitr.blogspot.com/2015/11/for-better-service-please-take-number.html
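
If you go the pooling route, a transaction-mode PgBouncer in front of
the database lets you keep the client-facing limit high while the
server only ever sees the pool.  A minimal sketch (the database name,
paths, and sizes below are illustrative, not a recommendation):

    ; pgbouncer.ini -- illustrative values only
    [databases]
    ; hypothetical database name
    surveys = host=127.0.0.1 port=5432 dbname=surveys

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; server connections actually opened against PostgreSQL
    pool_mode = transaction
    default_pool_size = 40
    ; clients allowed to connect and queue for a server connection
    max_client_conn = 1000

With something like that in place the application connects to port
6432, and max_connections on the server can stay close to the pool
size.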

> What bothers me, however, is that running a query on pg_stat_activity
> with a watch of 1 second never shows more than 37 concurrent active
> connections.
>
> SELECT count(*) FROM pg_stat_activity; \watch 1

When the server is overloaded by more connections than its resources
can efficiently service -- well, that's precisely the time that a
sleeping "monitoring" process is least likely to be given a time
slice to run.  If you can manage to get pgbadger to run in your
environment, and you turn on logging of connections and
disconnections, you will get far more accurate information.
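
For example, something along these lines in postgresql.conf (the log
directory below is a guess at the RedHat packaging layout; adjust to
your setup), followed by a pgbadger run over the resulting files:

    # postgresql.conf -- log enough detail for pgbadger
    log_connections = on
    log_disconnections = on
    # a prefix of the shape pgbadger's documentation suggests for stderr logs
    log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
    # optional: also capture statements slower than 250ms
    log_min_duration_statement = 250

    # then, against the accumulated logs:
    pgbadger /var/lib/pgsql/9.3/data/pg_log/postgresql-*.log -o /tmp/report.html

The connections report will show the real peak, and any attempts
rejected at the max_connections limit are logged as FATAL errors, so
they show up as well.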

> Increasing max_connections has repercussions on the configuration
> of work_mem (if I remember well)

Each connection can use up to one work_mem allocation per plan node
that requires a sort, hash, CTE materialization, etc.
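
So the worst case is not max_connections * work_mem but potentially
several times that.  A rough back-of-the-envelope, with a hypothetical
query shape (table names made up for illustration):

    -- with work_mem = 4MB, a plan containing a hash join, a hash
    -- aggregate, and a sort can use up to ~3 x 4MB = 12MB in one
    -- backend; 100 such backends could reach ~1.2GB of work_mem
    SET work_mem = '4MB';
    EXPLAIN
    SELECT s.survey_id, count(*)
    FROM   responses r                      -- hypothetical tables
    JOIN   surveys  s USING (survey_id)     -- hash join: one work_mem
    GROUP  BY s.survey_id                   -- hash aggregate: another
    ORDER  BY count(*) DESC;                -- sort: another

Raise either max_connections or work_mem with that multiplication in
mind.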

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

