Re: DB cache size strategies - Mailing list pgsql-general

From scott.marlowe
Subject Re: DB cache size strategies
Date
Msg-id Pine.LNX.4.33.0402101547470.29897-100000@css120.ihs.com
In response to Re: DB cache size strategies  ("Ed L." <pgsql@bluepolka.net>)
Responses Re: DB cache size strategies  ("Ed L." <pgsql@bluepolka.net>)
List pgsql-general
On Tue, 10 Feb 2004, Ed L. wrote:

> On Tuesday February 10 2004 1:42, Martijn van Oosterhout wrote:
> > I generally give Postgresql about 64-128MB of shared memory, which covers
> > all of the system tables and the most commonly used small tables. The
> > rest of the memory (this is a 1GB machine) I leave for the kernel to
> > manage for the very large tables.
>
> Interesting.  Why leave very large tables to the kernel instead of the db
> cache?  Assuming a dedicated DB server and a DB smaller than available RAM,
> why not give the DB enough RAM to get the entire DB into the DB cache?
> (Assuming you have the RAM).

Because the kernel is more efficient (right now) at caching large data
sets.

With the ARC cache manager that will likely wend its way into 7.5, it's
quite likely that postgresql will be able to handle a larger cache
efficiently, but it will still be a shared memory cache, and those are
still usually much slower than the kernel's cache.
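For reference, here is a minimal sketch of the kind of configuration Martijn describes (64-128MB of shared memory on a 1GB box). This assumes a 7.x-era postgresql.conf, where shared_buffers is specified as a count of 8 kB buffers rather than in memory units; the exact values are illustrative, not a recommendation:

```
# postgresql.conf (PostgreSQL 7.x) -- illustrative values only
# 16384 buffers * 8 kB = 128 MB of shared memory for postgres;
# the remaining ~900 MB is left to the kernel's page cache.
shared_buffers = 16384

# per-sort memory, in kB (kept modest so the kernel cache stays large)
sort_mem = 8192
```

Note that a shared_buffers setting this large may exceed the kernel's default shared memory limit, in which case SHMMAX has to be raised (e.g. via sysctl on Linux) before the postmaster will start.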

