In response to Jessica Richard <rjessil@yahoo.com>:
> On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I know it is bad, but how bad can it be?
> Just trying to understand the impact the "shmmax" parameter can have on Postgres and the entire system after Postgres
> comes up on this number.
It's not bad by definition. shmmax is only a cap on the largest shared
memory segment that can be allocated; setting it to 4G doesn't mean any
application is going to use that much. With PostgreSQL, the amount of
shared memory it actually allocates is governed mainly by the
shared_buffers setting in postgresql.conf.
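You can see the difference for yourself: shmmax is just the ceiling,
while the segments PostgreSQL has actually created show up in ipcs. A
quick check might look something like this (output will obviously vary
per system):

    # current shmmax cap, in bytes
    cat /proc/sys/kernel/shmmax
    # shared memory segments actually allocated; PostgreSQL's will be
    # roughly shared_buffers plus some overhead
    ipcs -m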
It _is_ a good idea to set shmmax to a reasonable size to prevent
a misbehaving application from eating up all the memory on a system,
but I've yet to see PostgreSQL misbehave in this manner. Perhaps I'm
too trusting.
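If you do want to pull shmmax down to a saner cap, the usual Linux
mechanism is sysctl (the 2G value below is just an illustration, not a
recommendation for your box):

    # change it for the running kernel
    sysctl -w kernel.shmmax=2147483648
    # and make it survive a reboot by putting this line in /etc/sysctl.conf
    kernel.shmmax = 2147483648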
> What is the reasonable setting for shmmax on a 4G total machine?
If you mean what's a reasonable setting for shared_buffers, conventional
wisdom says to start with 25% of the available RAM and increase or
decrease it as you discover your workload benefits from more or less.
By "available RAM" I mean the RAM left over after all other applications
are running, which will be close to 4G if this machine only runs
PostgreSQL, but could be less if it runs other things like a web server.
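On a box dedicated to PostgreSQL, that rule of thumb works out to
roughly 1G, so a starting point might be (assuming a version recent
enough to accept memory units in postgresql.conf):

    # postgresql.conf -- starting point only, tune from here
    shared_buffers = 1GB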
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
wmoran@collaborativefusion.com
Phone: 412-422-3463x4023