Re: Performance - Mailing list pgsql-performance

From tv@fuzzy.cz
Subject Re: Performance
Date
Msg-id 5c6c67e9f0c4abed2b7ac84e83fe1f32.squirrel@sq.gransy.com
In response to Re: Performance  (Claudio Freire <klaussfreire@gmail.com>)
Responses Re: Performance
List pgsql-performance
> On Thu, Apr 14, 2011 at 1:26 AM, Tomas Vondra <tv@fuzzy.cz> wrote:
>> Workload A: Touches just a very small portion of the database, so the
>> 'active' part actually fits into memory. In this case the cache hit
>> ratio can easily be close to 99%.
>>
>> Workload B: Touches a large portion of the database, so it hits the drive
>> very often. In this case the cache hit ratio is usually around RAM/(size
>> of the database).
>
> You've answered it yourself without even realizing it.
>
> This particular factor is not about an abstract and opaque "Workload"
> the server can't know about. It's about cache hit rate, and the server
> can indeed measure that.

OK, so it's not a matter of tuning random_page_cost/seq_page_cost? Tuning
based on the cache hit ratio is something completely different (IMHO).
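Just to illustrate what I mean by cache hit ratio - a rough sketch of how
to read it from pg_stat_database (this only sees shared_buffers hits, not
the OS page cache, so treat it as an approximation):

  -- approximate buffer-cache hit ratio per database (shared_buffers only;
  -- pages served from the OS page cache still count as blks_read)
  SELECT datname,
         blks_hit,
         blks_read,
         round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
           AS hit_ratio
  FROM pg_stat_database
  ORDER BY blks_hit + blks_read DESC;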

Anyway, I'm not an expert in this field, but AFAIK something like this
already happens - that's the purpose of effective_cache_size, btw. But I'm
afraid there may be serious failure cases where the current model works
better, e.g. when you ask for data that is completely uncached (it has been
inactive for a long time). But if you have an idea of how to improve this,
great - start a discussion on the hackers list and let's see.
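For reference, these are the knobs the current model already gives you -
all static, set by the DBA based on what you expect to be cached rather
than measured by the server (the values below are made-up examples, not
recommendations):

  -- hypothetical example values only, tune to your own hardware/workload
  -- (set per session here; the same lines can go into postgresql.conf)
  SET effective_cache_size = '12GB';  -- planner's estimate of memory available for caching
  SET seq_page_cost = 1.0;            -- baseline cost of a sequentially fetched page
  SET random_page_cost = 2.0;         -- below the 4.0 default, for a mostly-cached workload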

regards
Tomas

