Re: reducing random_page_cost from 4 to 2 to force index scan - Mailing list pgsql-performance

From Greg Smith
Subject Re: reducing random_page_cost from 4 to 2 to force index scan
Date
Msg-id 4DD487CF.8010003@2ndquadrant.com
In response to Re: reducing random_page_cost from 4 to 2 to force index scan  (Jim Nasby <jim@nasby.net>)
Responses Re: reducing random_page_cost from 4 to 2 to force index scan
List pgsql-performance
Jim Nasby wrote:
> I think the challenge there would be how to define the scope of the
> hot-spot. Is it the last X pages? Last X serial values? Something like
> correlation?
>
> Hmm... it would be interesting if we had average relation access times
> for each stats bucket on a per-column basis; that would give the
> planner a better idea of how much IO overhead there would be for a
> given WHERE clause.

You've already given one reasonable first answer to your question here.
If you defined a usage counter for each histogram bucket, and
incremented it each time something from that bucket was touched, that
would give you a very rough picture of the access distribution.
Compute each bucket's share of the total counts, take an estimate of
the overall cached percentage, and multiply the two to get an idea of
how much of that specific bucket might be in memory.  It's not perfect,
and you'd need to incorporate some sort of aging method (probably
weighted-average based), but the basic idea could work.
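
As a back-of-the-envelope illustration, here's a minimal sketch of that
arithmetic in Python (all names are hypothetical, this isn't PostgreSQL
code, and the exact scaling is just one reading of the idea):

    # Hypothetical sketch of the per-bucket cache estimate described above.
    def age_counters(bucket_hits, decay=0.9):
        # Weighted-average style aging: exponentially decay the old counts
        # so the distribution can track shifting hot spots over time.
        return [hits * decay for hits in bucket_hits]

    def estimate_cached_fraction(bucket_hits, total_cached_fraction):
        # Assumes an equi-depth histogram, so every bucket covers roughly
        # the same number of rows.
        total = sum(bucket_hits)
        if total == 0:
            # No usage data yet; fall back to the relation-wide estimate.
            return [total_cached_fraction] * len(bucket_hits)
        n = len(bucket_hits)
        # A bucket touched at the average rate gets the relation-wide
        # cached fraction; hotter buckets get proportionally more, capped
        # at 100%.
        return [min(1.0, (hits * n / total) * total_cached_fraction)
                for hits in bucket_hits]

With ten buckets, a 50% cached relation, and one bucket drawing half of
all the hits, that bucket comes out fully cached while the cold ones come
out around 28% cached each.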

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books

