On 11 Dec 2002 at 9:08, Ricardo Ryoiti S. Junior wrote:
> > Initially, upping the shared buffers helps, but at some point the advantage
> > starts to disappear. Decide that figure by trial and error, but it will
> > certainly be around 100-200MB for most cases..
>
> Are there any studies around this? I remember that there were
Well, you should be able to test it yourself if you have a big enough setup,
but anyway, I don't have one right now either.
> other people saying the same thing. But at least logically, it seems that
> the database server should know better than the OS what it needs cached or
> not. Also, doesn't the db buffer cache require a bit lower "overhead" for
> data to be accessed?
Well, the thing is, postgresql caches data tuples and not the data files. So
only those tuples which are actually required will be in memory, but the OS can
cache the data files themselves and serve the data faster.
And for any database, file caching is just as important as data caching.
However postgresql does not try to outsmart the OS the way, say, Oracle does.
It relies on the OS being smart enough to handle caching properly.
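
If the stats collector is running (block-level stats need stats_block_level =
true in postgresql.conf), you can actually see this split. A minimal sketch;
the database name "mydb" is made up:

    # heap_blks_hit  = blocks found in postgresql's own shared buffers
    # heap_blks_read = blocks asked from the OS -- many of these will still
    #                  be served from the OS file cache, not from disk
    psql -c "SELECT relname, heap_blks_read, heap_blks_hit \
             FROM pg_statio_user_tables" mydb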
A test of this is to load a database and generate continuous heavy activity on
it, as in real life. To start with, set shared buffers very high. Monitor the
output of ipcs to see how much shared memory postgresql actually uses. That is
all it needs, and typically it is much less than half the RAM on multi-GB
machines. Beyond that point, the rest of the shared buffers are actually a
waste, because shared memory pages are locked in RAM and cannot be reused by
the OS.
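
For instance, a rough sketch of that test (all figures are invented; in 7.x
shared_buffers is a count of buffers, 8KB each by default):

    # 1. Start deliberately oversized, e.g. in postgresql.conf:
    #        shared_buffers = 25600    # 25600 * 8KB = 200MB
    #    then restart the postmaster and run the real-life workload.
    # 2. While the load runs, check actual shared memory usage:
    ipcs -m    # note segment sizes and nattch for postgres' segments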
HTH
Bye
Shridhar
--
Sometimes a man will tell his bartender things he'll never tell his doctor. --
Dr. Phillip Boyce, "The Menagerie" ("The Cage"), stardate unknown.