On 01/17/2012 09:00 PM, Jim Nasby wrote:
> Could we expose both?
>
> On our systems writes are extremely cheap... we don't do a ton of them
> (relatively speaking), so they tend to just fit into BBU cache. Reads on
> the other hand are a lot more expensive, at least if they end up actually
> hitting disk. So we actually set page_dirty and page_hit the same.
My thinking had been that you set a single value as the rate tunable, and
then the rates of the others can be adjusted by advanced users using the
ratio between the primary cost and the other ones. So at the defaults:
vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20
Setting a read rate cap will imply a write rate cap at 1/2 the value,
since dirtying a page (cost 20) is charged twice as much as a miss (cost 10).
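To make that arithmetic explicit, here is a minimal sketch (an illustration with made-up function names, not anything from the patch or the PostgreSQL source) of how a read rate cap would map to the implied write rate cap, assuming rates at a fixed cost budget are inversely proportional to the per-page cost:

```python
def implied_write_cap(read_cap_mb_s, page_miss_cost=10, page_dirty_cost=20):
    """Return the write rate implied by a read rate cap.

    A read that misses shared buffers is charged page_miss_cost per page;
    dirtying a page is charged page_dirty_cost. Spending the same cost
    budget on writes instead of reads gives:
        write_rate = read_cap * (page_miss_cost / page_dirty_cost)
    """
    return read_cap_mb_s * page_miss_cost / page_dirty_cost

# With the default costs (miss=10, dirty=20), an 8 MB/s read cap
# implies a 4 MB/s write cap, i.e. half the value.
print(implied_write_cap(8))  # -> 4.0
```

With Jim's settings (page_dirty = 1) the same formula gives writes running far faster than reads, which is exactly why the direction of the cap matters below.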
Your setup would then be:
vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 1
Which would still work fine if the new tunable was a read cap. If the
cap is a write one, though, this won't make any sense. It would allow
reads to happen at 10X the speed of writes, which is weird.
I need to go back and consider each of the corner cases here, where
someone wants one of [hit,miss,dirty] to be an unusual value relative to
the rest. If I can't come up with a way to make that work as it does
now in the new code, that's a problem. I don't think it really is, it's
just that people in that situation will need to scale all three upwards. It's
still a simpler thing to work out than the current situation, and this
is an unusual edge case.
--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com