On Wed, Dec 21, 2011 at 1:09 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> It strikes me that one simple thing we could do is extend the current
> heuristic that says "pin the latest page". That is, pin the last K
> pages into SLRU, and apply LRU or some other method across the rest.
> If K is large enough, that should get us down to where the differential
> in access probability among the older pages is small enough to neglect,
> and then we could apply associative bucketing or other methods to the
> rest without fear of getting burnt by the common usage pattern. I don't
> know what K would need to be, though. Maybe it's worth instrumenting
> a benchmark run or two so we can get some facts rather than guesses
> about the access frequencies?
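
The heuristic described above — exempt the newest K pages from replacement and run plain LRU over everything else — might look roughly like this toy sketch (the `choose_victim` name and the dict-based bookkeeping are mine for illustration, not the actual slru.c logic, which tracks this with per-buffer usage counts):

```python
def choose_victim(pages, k):
    """Pick an eviction victim from an SLRU-like buffer set.

    `pages` maps page number -> last-use counter (higher = more recent).
    The k highest-numbered pages are exempt ("pinned" as the newest);
    among the remainder we evict the least recently used.  Toy sketch,
    not the real slru.c algorithm.
    """
    newest = sorted(pages)[-k:] if k else []
    candidates = {p: lru for p, lru in pages.items() if p not in newest}
    if not candidates:
        return None  # everything is exempt; caller must wait or shrink k
    return min(candidates, key=candidates.get)

# With k=2, pages 12 and 13 are exempt; of the rest, page 10 (last used
# at tick 5) is older than page 11 (tick 9), so page 10 is evicted.
print(choose_victim({10: 5, 11: 9, 12: 2, 13: 7}, 2))
```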

I guess the point is that the right value of K seems to depend rather
heavily on which benchmark you run. For something like pgbench, we
initialize the
cluster with one or a few big transactions, so the page containing
those XIDs figures to stay hot for a very long time. Then after that
we choose rows to update randomly, which will produce the sort of
newer-pages-are-hotter-than-older-pages effect that you're talking
about. But the slope of the curve depends heavily on the scale
factor. If we have scale factor 1 (= 100,000 rows) then chances are
that when we randomly pick a row to update, we'll hit one that's been
touched within the last few hundred thousand updates - i.e. the last
couple of CLOG pages. But if we have scale factor 100 (= 10,000,000
rows) we might easily hit a row that hasn't been updated for many
millions of transactions, so there's going to be a much longer tail
there. And some other test could yield very different results - e.g.
something that uses lots of subtransactions might well have a much
longer tail, while something that does more than one update per
transaction would presumably have a shorter one.
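
To put rough numbers on that slope: a CLOG page holds 32,768 transaction
statuses (2 bits each on an 8 KB page), and under uniform random
single-row updates the age of a probed row's xmin is roughly geometric
with mean nrows, i.e. P(age > t) ≈ exp(-t/nrows). A back-of-the-envelope
sketch (the `clog_page_tail` helper and the exponential approximation
are mine, not anything in the server; it also ignores the initial-load
page, which stays hot regardless, and pgbench's teller/branch updates):

```python
import math

XIDS_PER_CLOG_PAGE = 32768  # 8 KB CLOG page, 2 status bits per xact

def clog_page_tail(nrows, coverage=0.99):
    """How many trailing CLOG pages are needed so that `coverage` of
    random-row xmin lookups land on them, assuming a probed row's xmin
    age is ~Geometric(1/nrows): P(age > t) ~= exp(-t / nrows).
    Back-of-the-envelope helper, not actual server code."""
    # Solve exp(-k * XIDS_PER_CLOG_PAGE / nrows) = 1 - coverage for k
    k = -nrows * math.log(1 - coverage) / XIDS_PER_CLOG_PAGE
    return math.ceil(k)

for scale in (1, 100):
    rows = scale * 100_000
    print(f"scale {scale:>3}: half of lookups within "
          f"{clog_page_tail(rows, 0.50)} pages, "
          f"99% within {clog_page_tail(rows, 0.99)}")
```

Under these assumptions, scale factor 1 puts half the lookups on the
last ~3 CLOG pages but needs ~15 pages for 99% coverage, while at scale
factor 100 both figures grow by roughly two orders of magnitude — which
is the longer tail described above.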
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company