Thread: Re: [WIP] cache estimates, cache access cost

Re: [WIP] cache estimates, cache access cost

From: "Kevin Grittner"
Greg Smith  wrote:
> I'm not too concerned about the specific case you warned about
> because I don't see how sequential scan vs. index costing will be
> any different on a fresh system than it is now.
I think the point is that if, on a fresh system, the first access to
a table is something which uses a table scan -- like select count(*)
-- then all indexed access would tend to be suppressed for that
table.  After all, each individual query, selfishly looking at its
own needs in isolation, likely *would* be faster using the cached
heap data.
I see two ways out of that -- one hard and one easy.  One way would
be to somehow look at the impact on the cache of potential plans and
the resulting impact on overall throughput of the queries being run
with various cache contents.  That's the hard one, in case anyone
wasn't clear.  ;-)  The other way would be to run some percentage of
the queries *without* considering current cache contents, so that the
cache can eventually adapt to the demands.
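A minimal sketch of what that second option might look like, assuming
the cache-aware costing ever ended up behind a planner setting -- the
GUC name here is purely hypothetical, nothing like it exists today:

    -- Hypothetical GUC, used only to illustrate the idea; it does not exist.
    -- At connection time, in roughly 1 session out of 10, plan as if the
    -- cache were cold, so plan choice isn't locked in by current contents:
    SET enable_cache_costing = off;

    -- All other sessions keep the cache-aware costing:
    SET enable_cache_costing = on;

The fraction of held-back sessions would control how quickly the cache
could shift toward whatever the uncached plans favor.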
-Kevin


Re: [WIP] cache estimates, cache access cost

From: Greg Smith
On 06/19/2011 06:15 PM, Kevin Grittner wrote:
> I think the point is that if, on a fresh system, the first access to
> a table is something which uses a table scan -- like select count(*)
> -- then all indexed access would tend to be suppressed for that
> table.  After all, each individual query, selfishly looking at its
> own needs in isolation, likely *would* be faster using the cached
> heap data.

If those accesses can compete with other activity, such that the data 
really does stay in the cache rather than being evicted, then what's 
wrong with that?  We regularly have people stop by asking how to pin 
particular relations to the cache, to support exactly this sort of scenario.

What I would expect on any mixed workload is that the table would 
slowly get holes shot in it, as individual sections were evicted for 
more popular index data.  And eventually there'd be little enough of it 
left in cache that a sequential scan would no longer win over an index 
scan.  But if people keep using the copy of the table in memory 
instead, enough so that it never really falls out of cache, well that's 
not necessarily even a problem--it could be considered a solution for 
some.
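For what it's worth, whether a table really is staying resident in
shared_buffers can already be checked with the pg_buffercache contrib
module; a rough query along these lines (it only sees PostgreSQL's own
buffer pool, not the OS cache):

    -- Requires the pg_buffercache contrib module.
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Relations with the most pages currently in shared_buffers.
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;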

Entire tables fitting into RAM and never leaving there is going from 
possible to downright probable in some use cases now.  A good example 
is cloud instances on EC2, where people often architect their systems 
such that the data set put onto any one node fits into RAM.  As soon as 
that's not true you suffer too much from disk issues, so breaking the 
database into RAM-sized pieces turns out to be very good practice.  
It's possible to tune fairly well for this case right now--just make 
the page costs all low.  The harder case that I see a lot is where all 
the hot data fits into cache, but there's a table or two of 
history/archives that don't.  That's where this bit of "what's in the 
cache?" percentages would make it easier to do the right thing.
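For reference, the "make the page costs all low" tuning is just a few
planner settings; the numbers below are illustrative rather than
recommendations:

    -- Session-level here; the same settings can go in postgresql.conf.
    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;        -- about equal to seq_page_cost when nothing hits disk
    SET effective_cache_size = '6GB';  -- approximate RAM available for caching on the node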

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [WIP] cache estimates, cache access cost

From: "Kevin Grittner"
Greg Smith <greg@2ndQuadrant.com> wrote:
> On 06/19/2011 06:15 PM, Kevin Grittner wrote:
>> I think the point is that if, on a fresh system, the first access
>> to a table is something which uses a table scan -- like select
>> count(*) -- then all indexed access would tend to be suppressed
>> for that table.  After all, each individual query, selfishly
>> looking at its own needs in isolation, likely *would* be faster
>> using the cached heap data.
> 
> If those accesses can compete with other activity, such that the
> data really does stay in the cache rather than being evicted, then
> what's wrong with that?
The problem is that if somehow the index *does* find its way into
cache, the queries might all run an order of magnitude faster by
using it.  The *first* query to bite the bullet and read through the
index wouldn't, of course, since it would have all that random disk
access.  But it's not hard to imagine an application mix where this
feature could cause a surprising ten-fold performance drop, persisting
indefinitely, after someone does a table scan.  I'm not risking that
in production without a clear mechanism to automatically recover from
that sort of cache skew.
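One way to see whether a given table has fallen into that trap, as a
manual experiment rather than anything this patch provides (table and
column names are made up for the example):

    -- Planner's current choice, presumably the cached sequential scan:
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

    -- Temporarily discourage sequential scans to time the index path:
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    RESET enable_seqscan;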
-Kevin


Re: [WIP] cache estimates, cache access cost

From: Greg Smith
Kevin Grittner wrote:
> But it's not hard to imagine an application mix where this
> feature could cause a surprising ten-fold performance drop,
> persisting indefinitely, after someone does a table scan.  I'm not
> risking that in production without a clear mechanism to
> automatically recover from that sort of cache skew

The idea that any of this will run automatically is a dream at this 
point, so saying you want to automatically recover from problems with 
a mechanism that doesn't even exist yet is a bit premature.  Some of 
the implementation ideas here might eventually lead to real-time cache 
information being used, and that is where the really scary feedback 
loops you are right to be worried about come into play.  The idea for 
now is that you'll run this new type of ANALYZE CACHE operation 
manually, supervised and at a time when recent activity reflects the 
sort of workload you want to optimize for.  And then you should review 
its results to make sure the conclusions it drew about your cache 
population aren't really strange.

To help with that, I was thinking of writing a sanity check tool that 
shows how the cached percentages this discovers compare against the 
historical block hit percentages for the relation.  Showing how the 
values changed from what they were previously set to after a second 
ANALYZE CACHE would probably be useful too.
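A crude stand-in for one half of that comparison is already available
from the statistics collector -- historical heap hit percentages per
table since the last stats reset:

    -- Not the same thing as what ANALYZE CACHE would measure (actual
    -- residency), but large disagreements between the two numbers
    -- would be worth a second look.
    SELECT relname,
           heap_blks_hit,
           heap_blks_read,
           round(100.0 * heap_blks_hit
                 / nullif(heap_blks_hit + heap_blks_read, 0), 1) AS heap_hit_pct
    FROM pg_statio_user_tables
    ORDER BY heap_blks_hit + heap_blks_read DESC;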

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [WIP] cache estimates, cache access cost

From: "Kevin Grittner"
Greg Smith <greg@2ndquadrant.com> wrote:
> The idea that any of this will run automatically is a dream at
> this point, so saying you want to automatically recover from
> problems with the mechanism that doesn't even exist yet is a bit
> premature.
Well, I certainly didn't mean it to be a reason not to move forward
with development -- I wouldn't have raised the issue had you not
said this upthread:
> I don't see how sequential scan vs. index costing will be any
> different on a fresh system than it is now.
All I was saying is: I do; here's how...
Carry on.
-Kevin