Thread: How much RAM is too much?

How much RAM is too much?

From: A J
Cost aside, are there any technical factors to consider before increasing RAM (assuming the system can support it)? Is there a sweet spot beyond which more RAM does not help Postgres, or actually harms it?

If my database is only a couple of hundred gigs, can I not just have that much RAM and read from the buffer all the time?

Thank you!
-AJ

Re: How much RAM is too much?

From: "Kevin Grittner"
A J <s5aly@yahoo.com> wrote:

> Cost aside, are there any technical factors to consider before
> increasing RAM (assuming the system can support it)? Is there a
> sweet spot beyond which more RAM does not help Postgres, or
> actually harms it?

I've never heard of putting more RAM in a machine hurting
performance, as long as it's the right type of RAM for that machine.
On the other hand, you can easily configure PostgreSQL to allocate
more RAM for specific purposes than is good for performance.
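
As a rough sketch (the numbers here are purely illustrative, not
recommendations), these are the postgresql.conf knobs people most
often over-allocate:

  shared_buffers = 8GB          # dedicated buffer cache
  work_mem = 64MB               # per sort/hash step, per query, so
                                # this multiplies under concurrency
  maintenance_work_mem = 1GB    # VACUUM, CREATE INDEX, etc.

work_mem in particular can be granted several times over by a single
complex query, which is how a safe-looking value can eat far more
RAM than you intended.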

> If my database is only a couple of hundred gigs, can I not just
> have that much RAM and read from the buffer all the time?

Sure, as long as you allow enough extra for the OS and all other
processes.  If you're looking at hundreds of GB of RAM on a box,
plan on letting the OS use the vast majority of that for caching.
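
If you do go that route, tell the planner about it.  The
effective_cache_size setting is only a hint to the optimizer, not an
allocation, so it's safe to set it to roughly the size of the OS
cache plus shared_buffers.  For a box with a couple hundred GB of
RAM, that might look like (illustrative value only):

  effective_cache_size = 200GB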

This doesn't obviate the need for a RAID controller with its own
battery-backed (BBU) write cache if you want write performance to be
good.

-Kevin

Re: How much RAM is too much?

From: Scott Marlowe
On Thu, Jul 22, 2010 at 2:45 PM, A J <s5aly@yahoo.com> wrote:
> Cost aside, are there any technical factors to consider before increasing
> RAM (assuming the system can support it)? Is there a sweet spot beyond
> which more RAM does not help Postgres, or actually harms it?
> If my database is only a couple of hundred gigs, can I not just have that
> much RAM and read from the buffer all the time?

On some architectures, as you add memory the memory itself becomes
slower.  In the old days it was Intel's brain-dead Pentium VX
chipset, which could address 256 MB of RAM but only cache the first
64 MB, that had issues.  Since Windows tended to load things from
the top of memory down, having more than 64 MB could make a machine
noticeably slower.

Nowadays it's nothing so drastic.  Some hardware can only run, say,
800 MHz memory with 2 or 4 banks of DRAM populated, then drops to
667 and then 500 as you fill the banks up.  I avoid hardware like
that, since database servers are often memory-hungry machines.

Most modern machines are actually faster as you add memory, because
the controller can then address more banks at the same time, and
with AMD's and Intel's latest creations more banks filled is almost
always a win, to the point that populating the same amount of memory
with more, smaller DIMMs makes the machine faster than using fewer,
larger DIMMs.

Generally speaking, unless your hardware is the kind that acts up
with more memory, no, adding memory will almost never hurt.  But
once you can cache your entire db in memory with some room to spare,
adding more won't usually help.

So, what CPU / Memory architecture are you running?
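
On Linux, one quick way to see what speed your DIMMs are actually
running at (assuming dmidecode is installed and you have root) is
something like:

  sudo dmidecode --type memory | egrep 'Locator|Size|Speed'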

-- To understand recursion, one must first understand recursion.

Re: How much RAM is too much?

From: Greg Smith
Scott Marlowe wrote:
> On some architectures, as you add memory the memory itself becomes
> slower....Nowadays it's nothing so drastic.

Last week I saw an Intel system where the measured memory speed dropped
about 30% when going from half filled (36GB) to completely filled
(72GB).  So it can still be pretty drastic.  That only turned into about
a 10% drop in actual throughput running the database though, which was
more than made up for by reduced reading from the disk due to the extra
caching.
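
(If you want to measure that sort of thing yourself, one rough way
is sysbench's memory test, run with the banks half filled and then
completely filled; a sketch only:

  sysbench --test=memory --memory-block-size=1M \
      --memory-total-size=10G run
)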

Stepping back from that trivia, generally adding memory helps only
when the working set of the data you access frequently doesn't fit
in there already.  The working set and the total database size are
loosely related, but not directly.  Other than these weird cases
where memory slows down when there's too much of it due to
motherboard limitations, there are few situations where adding more
RAM hurts performance.

However, note that the amount of memory you dedicate to the
database--what the shared_buffers tunable controls--has been
reported by multiple people I've talked with to top out at somewhere
around 8GB.  If you increase that parameter beyond that size, it
could end up hurting performance rather than helping.  Since
PostgreSQL also uses the operating system cache, this isn't a real
limit on the amount of memory the database can use; you just
shouldn't give much more than this amount to shared_buffers
directly.
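
One way to sanity-check whether the buffer cache is pulling its
weight is the hit ratio in the statistics views.  A sketch of such a
query (note that reads satisfied by the OS cache still count as
"reads" here, so this only tells you about shared_buffers):

  SELECT datname,
         round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 1)
           AS buffer_hit_pct
    FROM pg_stat_database;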

A second concern to be aware of is that if you're using Linux in
particular, it's easy to get the operating system in a state where it
has cached way too many writes, which causes upticks in latency when the
database issues its periodic checkpoints.  I can easily make a Linux
system with 72GB of RAM pause regularly for >30 seconds if I try to
trigger this behavior.  If you write heavily to the database, it's
something to be concerned about with that OS and large memory systems.
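
The usual mitigation is to clamp how much dirty data the kernel will
sit on before it starts writing it out.  Via /etc/sysctl.conf, with
illustrative values only (tune for your own workload):

  vm.dirty_background_ratio = 1   # start background writeback early
  vm.dirty_ratio = 5              # block writers before the backlog
                                  # gets enormous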

In news I don't consider unrelated, FreeBSD is now working on adding
DTrace support:
http://freebsdfoundation.blogspot.com/2010/06/dtrace-userland-project.html
which will give me yet another reason to consider deploying on that OS
instead of Linux.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: How much RAM is too much?

From: Bryan Hinton
FreeBSD has good kernel-level support for DTrace.  This is what it
looks like on an amd64 FreeBSD 8-STABLE server with a custom kernel
and a few extra options; the kernel build is straightforward.

Kernel modules loaded on this box:
 5    0xfff... ...   dtraceall.ko
 6    0xfff... ...   profile.ko
 7    0xfff... ...   opensolaris.ko
 8    0xfff... ...   cyclic.ko
 9    0xfff... ...   dtrace.ko
10    0xfff... ...   systrace.ko
11    0xfff... ...   sdt.ko
12    0xfff... ...   lockstat.ko
13    0xfff... ...   fbt.ko
14    0xfff... ...   dtnfsclient.ko
15    0xfff... ...   dtmalloc.ko

DTrace shows 26218 probes and support for the following providers:
dtrace dtmalloc nfsclient fbt lockstat mac mac_framework proc priv
callout_execute vfs syscall profile

Many of the syscall probes are very useful for figuring out what is
going on with, say, a userland daemon that forks, execs, and context
switches back and forth; dtrace publishes entry/return probes for
read, write, and the other syscalls.  There are also function
boundary tracing (fbt) probes for fsync in the kernel.
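
As a quick example, a one-liner along these lines counts fsync calls
per process (a sketch; exact probe availability varies by release):

  dtrace -n 'syscall::fsync:entry { @[execname, pid] = count(); }'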

Thought you might find that interesting.

Bryan



On 7/22/10 6:48 PM, Greg Smith wrote:
> [...]
> In news I don't consider unrelated, FreeBSD is now working on adding
> DTrace support:
> http://freebsdfoundation.blogspot.com/2010/06/dtrace-userland-project.html
> which will give me yet another reason to consider deploying on that OS
> instead of Linux.

Re: How much RAM is too much?

From: Dimitri Fontaine
Greg Smith <greg@2ndquadrant.com> writes:
> In news I don't consider unrelated, FreeBSD is now working on adding DTrace
> support:
> http://freebsdfoundation.blogspot.com/2010/06/dtrace-userland-project.html
> which will give me yet another reason to consider deploying on that OS
> instead of Linux.

Well, there's Debian GNU/kFreeBSD too, which I think will be
supported in the next Debian stable release.

  http://release.debian.org/squeeze/arch_qualify.html

Regards,
--
dim