Hello,
I am testing a web application (using the dbx PHP functions to talk to a
PostgreSQL backend).
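A stripped-down sketch of the kind of script involved (one
connect/query/disconnect cycle per request; database name, user, and
password are placeholders, not my real settings):

<?php
// Empty host string: the intent is to make the pgsql driver use
// the local UNIX socket rather than TCP.
$link = dbx_connect(DBX_PGSQL, '', 'testdb', 'testuser', 'secret');
if (!$link) {
    die('connection failed');
}
// One trivial query per request.
$result = dbx_query($link, 'SELECT 1');
dbx_close($link);
echo 'ok';
?>

Commenting out the dbx_* calls gives the "no database connection"
variant I mention below.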
My test box at home has 375 MB of RAM.
I ran ab (the Apache benchmark tool) to test the behaviour of the
application under heavy load.
When I increase the number of requests, all the memory fills up, the
Linux server starts swapping heavily, and the box ends up frozen.
ab -n 100 -c 10 http://localsite/testscript
behaves OK.
If I increase the load to
ab -n 1000 -c 100 http://localsite/testscript
I run into the memory problem.
If I eliminate the connection to the PostgreSQL UNIX socket, the script
behaves well even under very heavy load (and, of course, with much less
time spent per request).
I tried changing some parameters in postgresql.conf:
max_connections: 32 -> 8
shared_buffers:  64 -> 16
without success.
I tried running pmap on the httpd and postmaster process IDs, but the
output did not help me much.
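(For reference, the invocation was essentially the following, with
<PID> standing for whichever process ID ps reports; the last line of
plain pmap output is the total mapped size.)

ps ax | grep postmaster
pmap <PID> | tail -1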
Does anybody have an idea that could help me debug, understand, or
solve this issue? Any feedback is appreciated.
It would not be a problem for me if the box were merely very slow under
heavy load (DoS-like conditions), but I really dislike having it stay
out of service after such an attack.
I am looking for a way to limit the memory used by PostgreSQL.
Thanks
Alex