Enough RAM for entire Database.. cost aside, is this going to be fastest? - Mailing list pgsql-general

From Andy B
Subject Enough RAM for entire Database.. cost aside, is this going to be fastest?
Msg-id zlAEc.3401$vC4.613@fe1.news.blueyonder.co.uk
Responses Re: Enough RAM for entire Database.. cost aside, is this (Shridhar Daithankar <shridhar@frodo.hserus.net>)
Re: Enough RAM for entire Database.. cost aside, is this going to be fastest?  (Soeren Gerlach <soeren@all-about-shift.com>)
List pgsql-general
Hello,

Sorry for this newbish question.

Briefly, my problem:
------------------
I expect the database I'm working on to reach something in the order of
12-16 Gigabytes, and I am interested in understanding as much as I can about
how I can make this go as fast as possible on a linux system. I haven't run
such a large database before. The nature of the database is such that
successive queries are very likely to lead to poor cache performance.

I have lots of experience with disks, operating systems, caching, virtual
memory, RAM etc. - just none running gargantuan databases very quickly!
-------------------

I've read all the performance tuning and configuration stuff I can, but
there is one basic question I can't get an answer to:

My question:
--------------------
If I can afford to have the entire database resident in RAM (within the
PostgreSQL shared buffer cache, backed by physical memory rather than swap)
without impacting other services on the machine, is there any reason why I
shouldn't do it, other than cost? (!)
--------------------


Basically, I'm finding it almost impossible to predict 'how much RAM is
right'. I know I don't need the *entire* database to sit in RAM, and the
answer depends on many things - the speed of the I/O subsystem, the nature of
the queries and so on. But once you reach a certain amount of RAM
(specifically, the amount at which nothing needs to be swapped out), surely
things become a bit more predictable... or do they?

So, could I, for example, set up PostgreSQL with a 16 GB shared buffer cache
and expect the backend processes to fly like the wind (CPU, RAM and disk
write speed permitting)?
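
For concreteness, here's roughly the sort of configuration I have in mind.
The figures are purely illustrative, and I'm assuming the style of settings
where shared_buffers and effective_cache_size are counted in 8 kB pages and
sort_mem in kB - please correct me if I've misread the docs:

    # postgresql.conf (illustrative values only)
    # 16 GB / 8 kB per buffer = 2,097,152 buffers
    shared_buffers = 2097152          # ~16 GB of shared buffer cache
    effective_cache_size = 2621440    # ~20 GB planner hint (PG + OS cache)
    sort_mem = 65536                  # 64 MB per sort, per backend, in kB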

I understand that writes can delay the progression of updates if set up in a
certain way, and that's OK - I'm really just wondering whether there are some
*other* boundaries that will get in the way. I've read that I should be able
to configure a Linux box (with the right processors) to address up to 64 GB
of RAM, and I'm aware of more esoteric boxes like the SGI Altix 3000 which
can go far higher, but maybe that's overkill...
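
My assumption is that the kernel's SysV shared memory limits would also need
raising before the postmaster could allocate a segment that large (presumably
a little more than the buffer cache itself, to allow for overhead) -
something along these lines, values again only illustrative:

    # /etc/sysctl.conf (illustrative values only)
    kernel.shmmax = 17179869184    # largest single shared memory segment, in bytes (~16 GB)
    kernel.shmall = 4194304        # total shared memory allowed, in 4 kB pages (~16 GB)

If I've got that wrong, that's exactly the kind of boundary I'm hoping
someone will point out.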

If there are any resources out there describing other people's experiences of
trying to coerce a huge database to run largely from RAM, I'd be grateful for
the links.

Many thanks
Andy