Re: Vacuum rate limit in KBps - Mailing list pgsql-hackers
From: Greg Smith
Subject: Re: Vacuum rate limit in KBps
Msg-id: 4F189BAD.60300@2ndQuadrant.com
In response to: Re: Vacuum rate limit in KBps (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Vacuum rate limit in KBps
List: pgsql-hackers
On 1/19/12 1:10 PM, Robert Haas wrote:
> I have to say that I find that intensely counterintuitive. The
> current settings are not entirely easy to tune correctly, but at least
> they're easy to explain.

I attempt to explain those settings to people in training classes about once a month. It's never been anything but a complete disaster. I am barely concerned about preserving the current UI because, as far as I've been able to tell, there are only a handful of PostgreSQL installations on the planet that have managed to use it happily. Even the ones that do have a non-default setup that works usually flailed about for a few frustrating months until they got something that works. And the result is settings few dare touch for fear of breaking them.

It's also worth pointing out that VACUUM problems are very close to the top of the list of problems larger sites run into. So right now we have an inscrutable UI around an often essential part of the database to tune, one that any production site that gets over a few hundred GB of data in it will run into problems with. I wouldn't care about this area if it weren't for people screaming about how bad it is every time the topic comes up. If there's anyone out there who has run a larger PostgreSQL database and not at some point been extremely frustrated with how the current VACUUM settings are controlled, please speak up and say I'm wrong about this. I thought it was well understood that the UI was near unusably bad; it just wasn't obvious what to do about it.

> What does that 8MB mean and how does it
> relate to vacuum_cost_page_miss? If I double vacuum_cost_page_miss,
> does that effectively also double the cost limit, so that dirty pages
> and hits become relatively cheaper? If so, then I think what that
> really means is that the limit is 8MB only if there are no hits and no
> dirtied pages - otherwise it's less, and the amount by which it is
> less is the result of some arcane calculation. Ugh!
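[Editor's note: a back-of-the-envelope sketch of where the quoted 8MB figure comes from, assuming the default cost settings of this era (vacuum_cost_limit = 200, a 20ms cost delay as autovacuum used by default, vacuum_cost_page_miss = 10, 8KB pages). The helper name is hypothetical, not PostgreSQL code.]

```python
# Hypothetical helper, not PostgreSQL source: the worst-case read rate
# implied by the cost-based vacuum settings, when every page is a miss
# and nothing is hit or dirtied.
def max_read_rate_kbps(cost_limit=200, cost_delay_ms=20, cost_page_miss=10):
    pages_per_round = cost_limit / cost_page_miss   # pages processed before sleeping
    rounds_per_sec = 1000 / cost_delay_ms           # sleeps per second
    return pages_per_round * rounds_per_sec * 8     # 8KB pages -> KB/s

print(max_read_rate_kbps())  # 8000.0 KB/s, i.e. the "8MB" figure in question
```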
Saying what I suggested is an arcane calculation strikes me as pretty weird--we'd be hard pressed to design a more arcane calculation than the one that's already happening. The feedback here so far seems to lead toward making independent read and write knobs. I'm going to chew on the scenarios Robert described and the ones Jim has been commenting on and see if I can refactor this into something friendlier that addresses them.

As for the suggestion that I'm bringing this up a bit late in the release cycle, I've been trying. My first submission pushing in this direction--improving the logging first, which is needed before you can usefully measure a behavior change--happened back in September. I've been moving this area along as fast as I can get it to budge.

I'm concerned now that much will be made of improved performance in 9.2, leading people to convert even larger systems than they used to. And it's not hard at all to find a large system where the inability to tune vacuum easily is the top limiting factor on overall performance.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.com
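[Editor's note: a sketch of the shared-budget interaction behind the "otherwise it's less" complaint quoted earlier, and why independent read and write knobs are attractive. Assumptions as above (default cost settings, 8KB pages); the helper and its workload-mix parameters are hypothetical, not PostgreSQL code. Under one shared cost budget, dirtied pages spend budget that would otherwise go to reads, so the effective read rate falls as the write mix grows.]

```python
# Hypothetical helper: effective disk-read rate under the single shared
# cost budget, for a given mix of buffer hits and dirtied pages.
def shared_budget_read_kbps(hit_frac, dirty_frac,
                            cost_limit=200, cost_delay_ms=20,
                            cost_hit=1, cost_miss=10, cost_dirty=20):
    miss_frac = 1.0 - hit_frac
    # Average cost charged per page: hits are cheap, misses expensive,
    # and dirtying a page adds a surcharge on top of the hit/miss cost.
    avg_cost = hit_frac * cost_hit + miss_frac * cost_miss + dirty_frac * cost_dirty
    pages_per_sec = (cost_limit / avg_cost) * (1000 / cost_delay_ms)
    return pages_per_sec * miss_frac * 8    # only misses read from disk; KB/s

print(shared_budget_read_kbps(0.0, 0.0))  # 8000.0 -- all misses, nothing dirtied
print(shared_budget_read_kbps(0.0, 0.5))  # 4000.0 -- dirtying half the pages halves reads
```

With independent knobs, the write mix would instead be charged against its own limit, leaving the read rate where the administrator set it.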