
From: Thomas Swan
Subject: Re: 100 simultaneous connections, critical limit?
Msg-id: 400783AD.4090101@idigx.com
In response to: Re: 100 simultaneous connections, critical limit? ("scott.marlowe" <scott.marlowe@ihs.com>)
List: pgsql-performance

scott.marlowe wrote:

>On Wed, 14 Jan 2004, Adam Alkins wrote:
>
>>scott.marlowe wrote:
>>
>>>A few tips from an old PHP/Apache/Postgresql developer.
>>>
>>>1: Avoid pg_pconnect unless you are certain you have load tested the
>>>system and it will behave properly.  pg_pconnect often creates as many
>>>issues as it solves.
>>>
>>I share the above view. I've had little success with persistent
>>connections. The cost of pg_connect is minimal; pg_pconnect is not a
>>viable solution IMHO. Connections are rarely actually reused.
>
>I've found that for best performance with pg_pconnect, you need to
>restrict the apache server to a small number of backends, say 40 or 50,
>extend keep-alive to 60 or so seconds, and use the same exact connection
>string all over the place.  Also, set max persistent connections (or
>whatever it is called in php.ini) to 1 or 2.  Note that this setting is
>PER BACKEND, not total, so 1 or 2 should be enough for most types of
>apps.  3 tops.  Then set up postgresql for 200 connections, so you'll
>never run out.  It's better to waste a little shared memory and be safe
>than it is to get the dreaded out-of-connections error from postgresql.
>
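
For concreteness, the setup described above maps roughly onto settings
like these (pgsql.max_persistent is the actual php.ini name; the other
directive names are standard, and the values are only illustrative):

    # httpd.conf
    MaxClients        50     # small, fixed pool of apache backends
    KeepAlive         On
    KeepAliveTimeout  60     # the long keep-alive described above

    ; php.ini
    pgsql.max_persistent = 2 ; limit is per apache process, not total

    # postgresql.conf
    max_connections = 200    # headroom: 50 backends x 2-3 links each
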
I disagree.  On the server I have been running for the last two years,
we found that the pconnect settings with long keep-alives in apache
consumed far more resources than you would imagine, because some
clients (older IE versions) did not support keep-alive correctly.
They would hammer the server with 20-30 individual requests, and
apache would hold those processes in keep-alive mode.  When the number
of apache processes was restricted, this effectively became a denial
of service.  A short keep-alive works best for serving a single page's
related requests efficiently.  In practice, the best performance and
the greatest capacity came from a 3-second keep-alive timeout.  A
modem connection normally won't have enough lag to time out on related
loads, and a broadband connection certainly won't.
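
The short pattern, roughly (standard apache directives; the request
cap is an illustrative number, not something we tuned):

    # httpd.conf -- short keep-alive
    KeepAlive            On
    KeepAliveTimeout     3    # long enough for one page's assets
    MaxKeepAliveRequests 30   # cap requests per connection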

Also, depending on your machine, you should time how long it takes to
connect to the db.  This server averaged about 3-4 milliseconds per
connection without pconnect, and it was better to conserve memory so
that non-postgresql scripts and applications didn't carry the extra
memory footprint of a postgresql connection, which would have risked
memory exhaustion and excessive swapping.
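
A quick way to measure that yourself (a sketch; the connection string
is a placeholder, and microtime(true) needs PHP 5 -- on PHP 4 you'd
parse the string that microtime() returns):

    <?php
    // Time one fresh, non-persistent connection.
    $t0 = microtime(true);
    $conn = pg_connect("host=localhost dbname=test user=web");
    $ms = (microtime(true) - $t0) * 1000.0;
    echo "pg_connect took " . round($ms, 2) . " ms\n";
    pg_close($conn);
    ?>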

Please keep in mind that this was a dedicated server with apache,
postgresql, and a slew of other processes all running on the same
machine.  The results may differ for setups where the processes are
split across separate machines.

>If you do all of the above, pg_pconnect can work pretty well on things
>like dedicated app servers, where only one thing is being done and it's
>being done a lot.  On general purpose servers with 60 databases and 120
>applications, it adds little, although extending the keep-alive timeout
>helps.
>
>But if you just start using pg_pconnect without reconfiguring and then
>testing, it's quite likely your site will topple over under load with
>out-of-connections errors.