Re: postgresql + apache under heavy load - Mailing list pgsql-general

From Alex Madon
Subject Re: postgresql + apache under heavy load
Msg-id 400ED6D4.5070206@bestlinuxjobs.com
In response to Re: postgresql + apache under heavy load  (Ericson Smith <eric@did-it.com>)
List pgsql-general
Hello Ericson,
Thank you for your reply.
Ericson Smith wrote:

> Could the problem be that PHP is not using connections efficiently?
> Apache KeepAlive with PHP is a double-edged sword, with you holding
> the blade :-)

I turned off the KeepAlive option in httpd.conf

[
I think keepalive is not used by default by "ab", and that Apache uses
it only on static content; see the last paragraph of:
http://httpd.apache.org/docs/keepalive.html
]
and set
pgsql.allow_persistent = Off
in php.ini, but it didn't work for me.
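For reference, the exact lines (assuming stock config file locations)
are:

  # httpd.conf
  KeepAlive Off

  ; php.ini
  pgsql.allow_persistent = Off

(Apache needs a restart, e.g. "apachectl restart", before either change
takes effect, since mod_php reads php.ini at server startup.)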
thanks
Alex

>
> If I am not mistaken, what happens is that a connection is kept alive
> because Apache believes that other requests will come in from the
> client who made the initial connection. So 10 concurrent connections
> are fine, but they are not released quickly enough with 100 concurrent
> connections. The system ends up waiting for other KeepAlive
> connections to time out before Apache allows new ones in. We had
> this exact problem in an environment with millions of impressions per
> day going to the database. Because of the nature of our business, we
> were able to disable KeepAlive, and the load immediately dropped
> (concurrent connections on the PostgreSQL database also dropped
> sharply). We also turned off PHP persistent connections to the database.
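[A thought: rather than disabling KeepAlive outright, it can apparently
also just be made less patient, e.g. in httpd.conf:

  KeepAlive On
  KeepAliveTimeout 2       # default is 15 seconds
  MaxKeepAliveRequests 100

so that workers are released sooner; the values above are only
illustrative.]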
>
> The drawback is that connections are built up and torn down all the
> time, which with PostgreSQL is somewhat expensive. But that's a
> fraction of the expense of having KeepAlive on.
>
> Warmest regards, Ericson Smith
> Tracking Specialist/DBA
> +-----------------------+--------------------------------------+
> | http://www.did-it.com | "Crush my enemies, see them driven   |
> | eric@did-it.com       | before me, and hear the lamentations |
> | 516-255-0500          | of their women." - Conan             |
> +-----------------------+--------------------------------------+
>
>
> Alex Madon wrote:
>
>> Hello,
>> I am testing a web application (using the DBX PHP function to call a
>> PostgreSQL backend).
>> I have 375Mb RAM on my test home box.
>> I ran ab (apache benchmark) to test the behaviour of the application
>> under heavy load.
>> When I increase the number of requests, all my memory fills up, the
>> Linux server starts to swap, and it ends up frozen.
>>
>> ab -n 100 -c 10 http://localsite/testscript
>> behaves OK.
>>
>> If I increase to
>> ab -n 1000 -c 100 http://localsite/testscript
>> I get this memory problem.
>>
>> If I eliminate the connection to the (UNIX) socket of PostgreSQL, the
>> script behaves well even under very high load (and of course with
>> much less time spent per request).
>>
>> I tried to change some parameters in postgresql.conf
>> max_connections = 32
>> to max_connections = 8
>>
>> and
>>
>> shared_buffers = 64
>> to shared_buffers = 16
>>
>> without success.
>>
>> I tried running pmap on the httpd and postmaster process IDs, but it
>> didn't help much.
>>
>> Does anybody have any ideas to help debug/understand/solve this
>> issue? Any feedback is appreciated.
>> It would not be a problem for me if the box just became very slow
>> under heavy load (DoS-like), but I really dislike having it out of
>> service after such a DoS attack.
>> I am looking for a way to limit the memory used by postgres.
>>
>> Thanks
>> Alex
>>
>>
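PS: about the shared_buffers values quoted above: if I read the 7.4
docs correctly, shared_buffers is counted in 8KB pages, so

  shared_buffers = 64  ->  64 x 8KB = 512KB
  shared_buffers = 16  ->  16 x 8KB = 128KB

i.e. it was tiny to begin with, and lowering it cannot free much
memory. The per-backend sort_mem setting (in KB, default 1024) and the
sheer number of Apache/PHP processes at -c 100 look like better
suspects.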


