Re: Big number of connections - Mailing list pgsql-performance

From Pavel Stehule
Subject Re: Big number of connections
Date
Msg-id CAFj8pRDqLJZ8M9NRKUywoQj72RUrNbsJ2nbCx=TutqxB_0rgOg@mail.gmail.com
In response to Re: Big number of connections  ("Mike Sofen" <msofen@runbox.com>)
Responses Re: Big number of connections  (Moreno Andreo <moreno.andreo@evolu-s.it>)
List pgsql-performance
Hi

2016-04-04 15:14 GMT+02:00 Mike Sofen <msofen@runbox.com>:
From: Jim Nasby Sent: Sunday, April 03, 2016 10:19 AM

>>On 4/1/16 2:54 AM, jarek wrote:
>> I'll be happy to hear from users of big PostgreSQL installations: how
>> many users do you have, and what kind of problems may we expect?
>> Is there any risk that a huge number of roles will slow down overall
>> performance?

>Assuming you're on decent sized hardware though, 3000-4000 open connections shouldn't be much of an
>issue *as long as very few are active at once*. If you get into a situation where there's a surge of activity
>and you suddenly have 2x more active connections than cores, you won't be happy. I've seen that push
>servers into a state where the only way to recover was to disconnect everyone.
>--
>Jim Nasby

Jim - I don't quite understand the math here: on a server with 20 cores, can it really only support 40 active users?

I come from the SQL Server world, where a single 20-core server could support hundreds or thousands of active users and/or many dozens of background/foreground data processes.  Is there something fundamentally different between the two platforms with respect to active user loads?  How would we be able to use Postgres for larger web apps?

PostgreSQL doesn't have an integrated connection pooler, so every connection to Postgres requires its own PostgreSQL backend process. Performance benchmarks show that throughput peaks at roughly 10x the number of cores. With a high number of connections you also have to keep work_mem small, which can hurt performance too. And too many active PostgreSQL processes increase the risk of performance problems with spinlocks, etc.
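
To make the work_mem trade-off concrete, here is a minimal postgresql.conf sketch; the values are illustrative assumptions for a roughly 20-core, 64 GB server, not recommendations from this thread:

    # postgresql.conf -- illustrative sizing only (assumed 20 cores / 64 GB RAM)
    max_connections = 100        # keep close to what the hardware can really run concurrently
    shared_buffers = 16GB        # roughly 25% of RAM is a common starting point
    work_mem = 32MB              # allocated per sort/hash operation per backend, so
                                 # many connections x large work_mem can exhaust memory

The point is that work_mem is effectively multiplied by the number of concurrent operations, so allowing thousands of connections forces it down.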

Web frameworks usually have their own pooling solution, so just use it. If you need more logical connections than the optimum for your number of cores, you should use an external pooler such as pgpool-II or pgbouncer (a minimal pgbouncer sketch follows the links below).

http://www.pgpool.net/mediawiki/index.php/Main_Page
http://pgbouncer.github.io/
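
As a rough illustration, a transaction-pooling pgbouncer setup might look like the sketch below; the database name, port, and pool sizes are assumptions, not values from this thread:

    ; pgbouncer.ini -- minimal transaction-pooling sketch (names and sizes are assumed)
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction      ; server connections are shared between transactions
    max_client_conn = 4000       ; thousands of mostly idle client connections are cheap here
    default_pool_size = 20       ; actual PostgreSQL backends per database/user pair

The application then connects to port 6432 instead of 5432, and only default_pool_size real backends ever run on the database server.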

Pgbouncer is lightweight, with only the essential functions; pgpool is a bit heavier, with many more features.
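
For the framework/application-side option mentioned above, here is a minimal sketch of a pool inside the application using psycopg2's ThreadedConnectionPool; the driver choice and connection parameters are assumptions for illustration:

    # Application-side pooling sketch (psycopg2 assumed; credentials are placeholders).
    from psycopg2 import pool

    # Keep a small, fixed set of server connections and share them among
    # application threads instead of opening one connection per request.
    db_pool = pool.ThreadedConnectionPool(
        minconn=2,
        maxconn=20,              # a few times the core count, not thousands
        host="localhost",
        dbname="appdb",
        user="appuser",
        password="secret",
    )

    conn = db_pool.getconn()     # borrow a connection
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            cur.fetchone()
        conn.commit()
    finally:
        db_pool.putconn(conn)    # return it to the pool instead of closing it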

Regards

Pavel





