postgresql + apache under heavy load - Mailing list pgsql-general

From: Alex Madon
Subject: postgresql + apache under heavy load
Msg-id: 400E889A.6040306@bestlinuxjobs.com
List: pgsql-general
Responses: Re: postgresql + apache under heavy load  ("scott.marlowe" <scott.marlowe@ihs.com>)
           Re: postgresql + apache under heavy load  ("Joshua D. Drake" <jd@commandprompt.com>)
           Re: postgresql + apache under heavy load  (Richard Huxton <dev@archonet.com>)
           Re: postgresql + apache under heavy load  (Ericson Smith <eric@did-it.com>)
Hello,
I am testing a web application that uses PHP's dbx functions to talk to
a PostgreSQL backend.
My test box at home has 375 MB of RAM.
I ran ab (the Apache HTTP benchmarking tool) to see how the application
behaves under heavy load.
When I increase the number of requests, all of the memory fills up, the
Linux server starts to swap, and the box remains frozen.

ab -n 100 -c 10 http://localsite/testscript
behaves OK.

If I increase it to
ab -n 1000 -c 100 http://localsite/testscript
I run into this memory problem.

If I remove the connection to the PostgreSQL UNIX socket, the script
behaves well even under very high load (and, of course, with much less
time spent per request).

I tried changing some parameters in postgresql.conf:

max_connections = 32  ->  max_connections = 8
shared_buffers = 64   ->  shared_buffers = 16

without success.

I tried running pmap on the httpd and postmaster process IDs, but that
did not tell me much.
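
For example, something along these lines (a sketch; the exact output
format of pmap varies between versions):

pmap `pidof postmaster` | tail -1    # last line is the total mapped size
pmap `pidof -s httpd` | tail -1      # -s picks just one of the httpd PIDs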

Does anybody have an idea how to debug/understand/solve this issue? Any
feedback is appreciated.
It would not be a problem for me if the box just became very slow under
heavy load (DoS-like), but I really dislike having it out of service
after such a DoS attack.
I am looking for a way to limit the memory used by PostgreSQL.
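
For instance, are per-backend knobs like these in postgresql.conf the
right place to look? (A sketch assuming 7.x-era parameter names, since I
have not given my version; the values are only guesses.)

sort_mem = 512       # kB per sort/hash operation, per backend
vacuum_mem = 1024    # kB used by VACUUM

Or should I instead cap Apache itself in httpd.conf, so it can never open
more concurrent database connections than the box has RAM for?

MaxClients 30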

Thanks
Alex

