Re: Would my postgresql 8.4.12 profit from doubling RAM? - Mailing list pgsql-general

From Scott Marlowe
Subject Re: Would my postgresql 8.4.12 profit from doubling RAM?
Date
Msg-id CAOR=d=1Ox+A04oNr5BicdGgnghBs0vzgVDAi6Ay9i3KfH+-aDQ@mail.gmail.com
In response to Would my postgresql 8.4.12 profit from doubling RAM?  (Alexander Farber <alexander.farber@gmail.com>)
Responses Re: Would my postgresql 8.4.12 profit from doubling RAM?  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-general
On Thu, Aug 30, 2012 at 5:42 AM, Alexander Farber
<alexander.farber@gmail.com> wrote:
> Hello,
>
> I run CentOS 6.3 server with 16 GB RAM and:
>     postgresql-8.4.12-1.el6_2.x86_64
>     pgbouncer-1.3.4-1.rhel6.x86_64
>
> The modified params in postgresql.conf are:
>     max_connections = 100
>     shared_buffers = 4096MB

That's probably plenty.  Remember that PostgreSQL uses memory in other
ways too, so handing it all over to shared_buffers is not a good idea
anyway.  You might consider upping work_mem a bit, depending on your
workload.
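
If you want to experiment, you can try a bigger value in one session
first and only make it permanent once you've seen it help (the 16MB
here is just an example value, not something tuned for your box):

    -- in psql, for the current session only:
    SHOW work_mem;              -- 1MB is the default
    SET work_mem = '16MB';

    -- once you're happy, set it in postgresql.conf:
    --   work_mem = 16MB
    -- and reload (work_mem doesn't need a restart):
    SELECT pg_reload_conf();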

> Below is a typical top output, the pref.pl is my game daemon:
>
> Mem:  16243640k total, 14091172k used,  2152468k free,   621072k buffers
> Swap:  2096056k total,        0k used,  2096056k free,  8929900k cached

This is the important stuff right here.  Note you've got about 2G of
free memory and almost 9G of OS cache.  That's good; there's no memory
pressure on your server right now.  Say you average about 100 active
users, and work_mem is currently 1MB (the default).  Raising it to
16MB would be roughly 1.6GB of extra memory (100 x 16MB), which you
have free right now anyway.  I've found that small increases of
work_mem, into the 8 to 16MB range, often improve performance because
they let bigger queries fit their intermediate results into hash_*
operations, which are a pretty fast way of joining larger data sets.
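
An easy way to see whether a bump like that actually buys you anything
is to run the same slow query under EXPLAIN ANALYZE at both settings
(the query below is a made-up placeholder; substitute one of your
game's real joins):

    SET work_mem = '1MB';
    EXPLAIN ANALYZE
      SELECT * FROM scores s JOIN players p ON p.id = s.player_id;
    -- look for things like "Sort Method: external merge  Disk: ..."

    SET work_mem = '16MB';
    EXPLAIN ANALYZE
      SELECT * FROM scores s JOIN players p ON p.id = s.player_id;
    -- ideally the sort now says "quicksort  Memory: ...", or the
    -- planner switches to a hash join it wouldn't risk before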

If pgbouncer keeps your actual connection count down to a few dozen at
a time, then you can look at going higher on work_mem, but with a
workload like the one you're describing you often won't see any
further gain.  Once the data set used for the hash_* operations fits
in memory, more work_mem won't help.
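
If you're not sure what that connection count really is, pgbouncer's
admin console will tell you (this assumes the default admin database
name "pgbouncer" and listen port 6432, and a user listed in
admin_users in your pgbouncer.ini), or you can just count backends on
the postgres side:

    -- from the shell, against pgbouncer itself:
    --   psql -p 6432 -U youradminuser pgbouncer -c 'SHOW POOLS;'

    -- or from inside any database on the server:
    SELECT count(*) FROM pg_stat_activity;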

Note that work_mem is per sort or hash operation, not per connection,
per user, or per database, so it can add up REAL fast; that's why it's
so small (1MB) to start with.  Overdoing it can result in a server
that falls to its knees under heavy load.
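
If it's really only a handful of big reporting-type queries that need
the extra room, you can also scope the setting instead of raising it
globally (the role name and the 64MB / 32MB values below are just
examples):

    -- for a single transaction:
    BEGIN;
    SET LOCAL work_mem = '64MB';
    -- run the big query here
    COMMIT;

    -- or pin it to one role:
    ALTER ROLE reports SET work_mem = '32MB';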

