Re: per table random-page-cost? - Mailing list pgsql-hackers

From Greg Stark
Subject Re: per table random-page-cost?
Date
Msg-id 407d949e0910191639k6bc9d71bu2c5638a260ce13a3@mail.gmail.com
In response to Re: per table random-page-cost?  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses Re: per table random-page-cost?  (Jeff Davis <pgsql@j-davis.com>)
List pgsql-hackers
On Mon, Oct 19, 2009 at 2:54 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> How about calculating an effective percentage based on other
> information.  effective_cache_size, along with relation and database
> size, come to mind.

I think previous proposals along these lines have fallen down as soon
as you actually try to work out the formula. The problem is that you
could have a table which is much smaller than effective_cache_size but
is never in cache because it is one of many such tables competing for
the same memory.

I think it would still be good to have some kind of naive heuristic
here, as long as it's fairly predictable for DBAs.
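
To make that concrete, the naive version I have in mind looks
something like this. It's a sketch with invented names, not actual
planner code: interpolate between seq_page_cost and random_page_cost
by the fraction of the table we guess is cached.

    #include <math.h>

    /*
     * Hypothetical sketch, not actual planner code: guess a per-table
     * random page cost by assuming the table competes for
     * effective_cache_size all by itself.
     */
    static double
    guessed_random_page_cost(double table_pages,
                             double effective_cache_pages,
                             double seq_page_cost,
                             double random_page_cost)
    {
        /* naive guess at the fraction of the table that stays cached */
        double cached_frac = fmin(1.0, effective_cache_pages / table_pages);

        /* cached pages cost about a sequential fetch; the rest pay
         * the full random penalty */
        return cached_frac * seq_page_cost +
               (1.0 - cached_frac) * random_page_cost;
    }

Note that this is exactly the formula that falls over in the
many-tables case above: each of a hundred such tables would claim the
whole cache for itself.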

But the long-term strategy here, I think, is to actually have some way
to measure the real cache hit rate on a per-table basis. Whether it's
by timing I/O operations, programmatic access to DTrace, or some other
kind of OS interface, if we could know the real cache hit rate it
would be very helpful.
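
For the timing approach I'm picturing something very simple along
these lines; the threshold is invented for illustration and would need
tuning per machine:

    #include <time.h>

    /*
     * Hypothetical sketch: classify a completed read as a cache hit
     * if it returned faster than a disk could plausibly seek.
     */
    #define CACHE_HIT_THRESHOLD_NS 100000LL     /* 100us, assumed */

    static long long reads_total = 0;
    static long long reads_cached = 0;

    static void
    record_read_time(struct timespec start, struct timespec end)
    {
        long long ns = (long long) (end.tv_sec - start.tv_sec) * 1000000000LL
                     + (end.tv_nsec - start.tv_nsec);

        reads_total++;
        if (ns < CACHE_HIT_THRESHOLD_NS)
            reads_cached++;
    }

    /* observed per-table hit rate once enough samples accumulate */
    static double
    observed_hit_rate(void)
    {
        return reads_total > 0 ? (double) reads_cached / reads_total : 0.0;
    }

Anything that comes back faster than a plausible seek counts as a
cache hit; over enough samples the ratio should converge on the real
hit rate for that table.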

Perhaps we could extrapolate from the shared buffer cache percentage.
If there's a moderately high percentage of a table in shared buffers,
it seems reasonable to assume the filesystem cache would have a
similar distribution.
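
A sketch of that extrapolation, assuming we can count a table's pages
in shared buffers (contrib/pg_buffercache can already do that):

    /*
     * Hypothetical sketch: take the fraction of a table's pages found
     * in shared buffers and assume the filesystem cache holds the
     * table at a similar rate.
     */
    static double
    extrapolated_cached_frac(double pages_in_shared_buffers,
                             double table_pages)
    {
        double shared_frac = pages_in_shared_buffers / table_pages;

        /* crude assumption: the OS cache mirrors the shared-buffers
         * distribution, so use shared_frac directly, capped at 1.0 */
        return shared_frac > 1.0 ? 1.0 : shared_frac;
    }

That fraction could then feed the same interpolation as the naive
heuristic above, replacing the effective_cache_size guess with
something actually observed.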

--
greg

