Re: postgresql + apache under heavy load - Mailing list pgsql-general

From scott.marlowe
Subject Re: postgresql + apache under heavy load
Date
Msg-id Pine.LNX.4.33.0401210846500.17367-100000@css120.ihs.com
In response to postgresql + apache under heavy load  (Alex Madon <alex.madon@bestlinuxjobs.com>)
Responses Re: postgresql + apache under heavy load  (Alex Madon <alex.madon@bestlinuxjobs.com>)
List pgsql-general
On Wed, 21 Jan 2004, Alex Madon wrote:

> Hello,
> I am testing a web application (using the DBX PHP function to call a
> Postgresql backend).

I'm not familiar with DBX.  Is that connection pooling or what?

> I have 375Mb RAM on my test home box.
> I ran ab (apache benchmark) to test the behaviour of the application
> under heavy load.
> When increasing the number of requests, all my memory is filled, and the
> Linux server begins to cache and remains frozen.

Are you SURE all your memory is in use?  What exactly does top say about
things like cached and buff memory?  (I'm assuming you're on Linux; any
differences in top on another OS would be minor.)  If the kernel still
shows a fair bit of cached and buff memory, your memory is not actually
all used up.
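A quick way to check, sketched here for Linux (exact column names vary a
bit between procps versions):

```shell
# Overall memory picture; "buffers" and "cached" are reclaimable,
# so they don't count as memory that is really used up.
free -m

# The same numbers straight from the kernel:
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
```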

> ab -n 100 -c 10 http://localsite/testscript
> behaves OK.

Keep in mind, this is 10 simultaneous users beating on the machine
continuously.  That's functionally equivalent to about 100 to 200 real
people clicking through pages as fast as they can.

> If I increases to
> ab -n 1000 -c 100 http://localsite/testscript
> I get this memory problem.

Where's the break point?  Just wondering.  Does it show up at 20, 40, 60,
80, or only at 100?  If it only shows up at 100, that's really not bad.
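One way to find the knee, assuming ab is installed (the URL and request
count here are placeholders; adjust them to your setup):

```shell
# Sweep concurrency levels and watch where throughput collapses
# or failed requests start appearing.
for c in 10 20 40 60 80 100; do
  echo "== concurrency $c =="
  ab -n 500 -c "$c" http://localsite/testscript 2>/dev/null \
    | grep -E 'Requests per second|Failed requests'
done
```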

> If I eliminate the connection to the (UNIX) socket of Postgresql, the
> script behaves well even under very high load (and of course with much
> less time spent per request).

Of course; the database is the most expensive part, CPU- and memory-wise,
of an apache/php application.

> I tried to change some parameters in postgresql.conf
> max_connections = 32
> to max_connections = 8

Wrong direction.  The number of connections postgresql CAN create costs
very little.  Even the connections it actually does create cost very
little while they sit idle.  Have you checked whether ab is getting valid
pages back, and not "connection failed, too many connections already
open" error pages?
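Two quick sanity checks, assuming psql can reach the database (the
database name and log path below are placeholders):

```shell
# How many backends are actually connected while ab is running?
psql -d yourdb -t -c "SELECT count(*) FROM pg_stat_activity;"

# Did any PHP requests fail to connect?  Postgres reports
# "sorry, too many clients already" when max_connections is hit.
grep -i "too many clients" /var/log/apache/error_log
```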

> shared_buffers = 64
> to shared_buffers = 16

Way the wrong way.  shared_buffers is the TOTAL shared memory pool all the
backends share, measured in 8 kB pages.  The old setting was 512 kB of
RAM; now you're down to 128 kB.  While 128 kB would be a lot of memory for
a Commodore 128, on a machine with 384 meg of RAM it's nothing.  Since
this is a total shared memory setting, not a per-process one, you can
hand it a good chunk of RAM and not usually worry about it.  Set it to
512 and just leave it.  That's only 4 megs of shared memory; if your
machine is running that low, other things have gone wrong.
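In postgresql.conf terms (shared_buffers counts 8 kB pages, so the
arithmetic is just buffers x 8 kB):

```
# postgresql.conf
shared_buffers = 512        # 512 x 8 kB = 4 MB total, shared by ALL backends
max_connections = 32        # allowed-but-unused connections are cheap
```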

> without success.
>
> I tried to use pmap on httpd and postmaster Process ID but don't get
> much help.
>
> Does anybody have some idea to help to debug/understand/solve this
> issue? Any feedback is appreciated.
> To me, it would not be a problem if the box is very slow under heavy
> load (DoS like), but I really dislike having my box out of service after
> such a DoS attack.

Does it not come back?  That's bad.


> I am looking for a way to limit the memory used by postgres.

Don't.  It's likely not using too much.

What does top say is the highest memory user?
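ps can answer that directly (GNU procps syntax assumed):

```shell
# Processes sorted by resident memory, biggest first.
ps aux --sort=-rss | head -5
```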

