Thread: shared_buffers and shmmax: what are the max recommended values?

shared_buffers and shmmax: what are the max recommended values?

From: "Anton Melser"
Date: Fri, 7 Mar 2008
Hi all,
We have a web app that is using a 32-bit PostgreSQL 8.1.4 (I know, but
upgrading is not an option for another couple of months...) running on
SUSE 10. We just went from 3GB to 6GB of RAM, and I increased the
various memory-related values...

First, some info: the server ONLY runs one pg database (that is its
purpose in life; it runs pretty much nothing else, and certainly nothing
we care about, since all the apps that access it are on separate servers).

I did, however, realise that I don't (didn't?) understand what shmmax
REALLY is. It was at the default value (:-)), so even for 3GB it was
ridiculously low. I saw some recommendations on the list mentioning
that shared_buffers (for an 8GB machine) should be set to 250000 or
something like that. So I merrily increased shmmax to 128MB and tried
to start pg. Ouch! I needed to put it much higher than that...
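
(In hindsight the arithmetic makes it obvious, assuming the standard
8kB block size:

    50000 buffers * 8kB/buffer = 400MB

for shared_buffers alone, before the server's other shared structures,
so 128MB never stood a chance.)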

So the main question: what is the maximum recommended shmmax setting?
I currently have it set to 1GB, but I think it probably needs to go
higher, no?

Here are the values that aren't at their defaults:

shared_buffers = 50000
work_mem = 8192
effective_cache_size = 525000
max_prepared_transactions = 100
maintenance_work_mem = 262144

max_fsm_pages = 300000
max_fsm_relations = 10000

Any suggestions most welcome.
Cheers
Anton

Re: shared_buffers and shmmax: what are the max recommended values?

From: Greg Smith
Date: Fri, 7 Mar 2008
On Fri, 7 Mar 2008, Anton Melser wrote:

> We have a web app that is using a 32-bit PostgreSQL 8.1.4 (I know, but
> upgrading is not an option for another couple of months...)

You do know that a minor-version upgrade takes a second if you do it
right?  You might want to avoid VACUUM FULL until you can upgrade to >=8.1.9.
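
A minimal sketch of what that looks like, assuming packaged binaries
(the package and script names are illustrative):

    # stop the server, swap in the new 8.1.x binaries, restart
    /etc/init.d/postgresql stop
    rpm -Uvh postgresql-8.1.*.rpm    # or via YaST/zypper on SUSE
    /etc/init.d/postgresql start

Minor releases share the same on-disk format, so the data directory is
untouched; only a major upgrade (8.1 -> 8.2) needs a dump and reload.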

> I did, however, realise that I don't (didn't?) understand what shmmax
> REALLY is.

It's just a kernel limit on how much shared memory a process can allocate
in a single segment. The database server will ask for what its
configuration requires regardless, and all SHMMAX can do is make that
allocation fail, in which case the server refuses to start. If your
primary thing running here is PostgreSQL, you might as well set it to the
maximum you can so it gets out of the way. A popular setting is 2GB:

kernel.shmmax=2147483648
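
To apply that without a reboot (standard sysctl usage):

    sysctl -w kernel.shmmax=2147483648

and keep the kernel.shmmax= line in /etc/sysctl.conf so it survives the
next reboot.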

With 6GB of RAM, after that you could merrily increase shared_buffers to
200000 or so and possibly increase performance.  Just watch your
checkpoints--they'll have more activity as you increase the buffer size,
and from your description you've still got checkpoint_segments at its tiny
default value.  You say this is a web app; those typically aren't
write-heavy, so you'll probably be OK.
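
For reference, the back-of-the-envelope check, assuming the standard 8kB
block size:

    200000 buffers * 8kB/buffer = 1.6GB

which fits under a 2GB SHMMAX with room to spare for the rest of the
shared segment.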

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: shared_buffers and shmmax: what are the max recommended values?

From: "Anton Melser"
Date: Fri, 7 Mar 2008
>  With 6GB of RAM, after that you could merrily increase shared_buffers to
>  200000 or so and possibly increase performance.  Just watch your
>  checkpoints--they'll have more activity as you increase the buffer size,
>  and from your description you've still got checkpoint_segments at its tiny
>  default value.  You say this is a web app; those typically aren't
>  write-heavy, so you'll probably be OK.

Thanks, I'll get onto that...
There is actually quite a bit of write activity (at least the dump is
growing far faster than what the content writers are adding manually...
and I'm not even certain where it is coming from, but that is another
story!), and yes, checkpoint_segments is at the default... if I increase
it to 10 or so, will that be better?
btw, we have a warm standby via WAL copies, if that makes a difference...
Cheers
Anton

Re: shared_buffers and shmmax: what are the max recommended values?

From: Greg Smith
Date: Fri, 7 Mar 2008
On Fri, 7 Mar 2008, Anton Melser wrote:

> There is actually quite a bit of write activity (at least the dump is
> growing far faster than what the content writers are adding manually...
> and I'm not even certain where it is coming from, but that is another
> story!)

If you look at pg_stat_user_tables regularly, that should give you an
idea of what's being added/updated/deleted.
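
Something along these lines shows the busiest tables (a sketch; the
column names come from the standard statistics views):

    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
      FROM pg_stat_user_tables
     ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC
     LIMIT 10;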

> yes, checkpoint_segments is at the default... if I increase it to 10 or
> so, will that be better?

There will be fewer checkpoints, which may be better for you.  But each
checkpoint could be doing more work, so each one will be more disruptive,
which can be worse.  It's not a parameter where an increase is always an
improvement.

Normally people doing write-heavy work set it somewhere in the 30-100
range.  You will use more disk space for the WAL files kept by the server,
and recovery from a crash will take longer as well.  The default of 3 keeps
WAL files at a total of about 112MB; increasing to 10 raises that to
336MB, and at 30 you can expect to have about 1GB of WAL files around on
the primary server.
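
Those totals follow from the usual rule of thumb, with the standard 16MB
segment size:

    peak WAL segments ~= 2 * checkpoint_segments + 1

    checkpoint_segments = 3   ->  7 segments  = 112MB
    checkpoint_segments = 10  -> 21 segments  = 336MB
    checkpoint_segments = 30  -> 61 segments ~= 976MB (about 1GB)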

> btw, we have a warm standby via WAL copies, if that makes a difference...

Changing checkpoint_segments doesn't alter how often WAL files are moved
over, but it will increase how many of them you need to keep around on the
secondary in order to rebuild the server after a crash.  You should be
careful making changes here until you understand how all that fits
together.
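
For context, WAL shipping in 8.1 is normally driven by archive_command on
the primary, something like this (the destination path is purely
illustrative):

    # postgresql.conf on the primary
    archive_command = 'cp %p /mnt/standby_archive/%f'

Each completed 16MB segment is handed to that command as it fills,
regardless of the checkpoint_segments setting.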

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: shared_buffers and shmmax: what are the max recommended values?

From: "Anton Melser"
Date: Fri, 7 Mar 2008
That's crystal clear. Thanks for your advice!
Cheers
Anton

--
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...