On Jan 18, 2012, at 3:49 AM, Greg Smith wrote:
> On 01/17/2012 09:00 PM, Jim Nasby wrote:
>> Could we expose both?
>>
>> On our systems writes are extremely cheap... we don't do a ton of them
>> (relatively speaking), so they tend to just fit into BBU cache. Reads, on
>> the other hand, are a lot more expensive, at least if they end up actually
>> hitting disk. So we actually set page_dirty and page_hit the same.
>
> My thinking had been that you set the rate tunable, and then the rates of
> the others can be adjusted by advanced users using the ratio between the
> primary and the other ones. So at the defaults:
>
> vacuum_cost_page_hit = 1
> vacuum_cost_page_miss = 10
> vacuum_cost_page_dirty = 20
>
> Setting a read rate cap will imply a write rate cap at 1/2 the value. Your setup would then be:
>
> vacuum_cost_page_hit = 1
> vacuum_cost_page_miss = 10
> vacuum_cost_page_dirty = 1
>
> Which would still work fine if the new tunable was a read cap. If the cap
> is a write one, though, this won't make any sense. It would allow reads to
> happen at 10X the speed of writes, which is weird.
>
> I need to go back and consider each of the corner cases here, where
> someone wants one of [hit,miss,dirty] to be an unusual value relative to
> the rest. If I can't come up with a way to make that work as it does now
> in the new code, that's a problem. I don't think it really is, it's just
> that people in that situation will need to adjust all three upwards. It's
> still a simpler thing to work out than the current situation, and this is
> an unusual edge case.
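To make that arithmetic concrete, here's a standalone sketch (not
PostgreSQL code; the cap value is made up) of how a read cap implies a
write cap under the default cost ratios:

#include <stdio.h>

int
main(void)
{
    double  read_cap = 8.0;     /* hypothetical read rate cap, MB/s */
    int     page_miss = 10;     /* vacuum_cost_page_miss */
    int     page_dirty = 20;    /* vacuum_cost_page_dirty */

    /* A dirtied page costs dirty/miss times what a missed read does, so
     * the same cost budget sustains proportionally fewer writes/sec. */
    double  write_cap = read_cap * page_miss / page_dirty;

    printf("read cap %.1f MB/s implies write cap %.1f MB/s\n",
           read_cap, write_cap);
    return 0;
}

With the defaults that prints a write cap of exactly half the read cap,
which is where the 1/2 above comes from.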
What about doing away with all the arbitrary numbers completely, and just
stating data rate limits for hit/miss/dirty?
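Something like this, say (hypothetical GUC names, purely to illustrate the
idea):

vacuum_hit_rate_limit = 100MB
vacuum_miss_rate_limit = 8MB
vacuum_dirty_rate_limit = 8MB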
BTW, this is a case where it would be damn handy to know if the miss was
really a miss or not... in the case where we're already rate limiting
vacuum, could we afford the cost of gettimeofday() to see if a miss
actually did have to come from disk?
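Roughly this kind of check, as a standalone sketch (not backend code; the
50 usec threshold is a guess that would need tuning per system):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

/* Guess: reads slower than this probably came from disk. */
#define DISK_READ_THRESHOLD_USEC 50

static long
elapsed_usec(struct timeval *start, struct timeval *end)
{
    return (end->tv_sec - start->tv_sec) * 1000000L +
        (end->tv_usec - start->tv_usec);
}

int
main(int argc, char **argv)
{
    char            buf[8192];  /* one 8K page, as in the backend */
    struct timeval  start, end;
    long            usec;
    int             fd;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0)
    {
        perror("open");
        return 1;
    }

    /* Time a single page read and classify it by how long it took. */
    gettimeofday(&start, NULL);
    if (read(fd, buf, sizeof(buf)) < 0)
        perror("read");
    gettimeofday(&end, NULL);
    close(fd);

    usec = elapsed_usec(&start, &end);
    printf("read took %ld usec: probably %s\n", usec,
           usec > DISK_READ_THRESHOLD_USEC ? "from disk" : "cached");
    return 0;
}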
--
Jim C. Nasby, Database Architect jim@nasby.net
512.569.9461 (cell) http://jim.nasby.net