Thread: Autotuning Group Commit
Currently, we have group commit functionality via the GUC parameters commit_delay and commit_siblings. Group commit is either off or on. Since we do not have a log writer daemon, there is no way to know whether that is optimal. There is research to show that setting group commit on when it is not useful actually causes a performance degradation. Clearly, this means that on a server that is sometimes busy and sometimes not, you will be unsure of how to set these parameters.

ISTM that we can autotune the group commit functionality:

Each transaction commit gets the current time(NULL) immediately before it commits. If we store that value in shared memory behind XLogInsertLock, then each time we commit we would be able to tell how long it has been since the last commit. We could thus make a true/false judgement as to whether it would have gained us anything to wait for the commit_delay time before committing.

If we store the results of the last 10 commits (various ways...), then if 9+ of the last 10 commits would have been potentially beneficial group commits, we have a reasonable probability that commits are happening on average faster than commit_delay. As a result, we know to turn on the group commit feature by setting group_commit_recommendation = true.

Each backend would start with group commit turned off. Each time it commits, it reads the current setting of group_commit_recommendation. If this is set, it copies group_commit_recommendation to a local variable, so that the next time it commits it will wait for CommitDelay. If CommitDelay is not set, then we would avoid the calculation altogether, and this would remain the default.

With this proposal, group commit will turn on or off according to recent history, reacting within 10*commit_delay milliseconds of a heavy transaction load starting, and turning off again even more quickly. None of that would require knowledge of, or tuning by, the administrator. That is sufficient to react to even small bursts of activity.
We would also be able to remove the commit_siblings GUC. It is only a simple heuristic for determining whether commit_delay should be applied, so it is effectively superseded by this proposal.

There would be no additional memory per backend and a minor additional shared memory overhead, which could easily be optimised with some crafty code. Overall, the minor additional CPU cost per transaction commit would be worth the potential saving of 10ms on many transactions where group commit would not gain performance at all. In any case, the functionality would be optional and turned off by default.

Any comments, please?

-- 
Best Regards, Simon Riggs
1) I'm in favor of autotuning anything possible.

2) In addition to turning group_commit on and off, what about also adjusting the commit delay, based on statistics of recent commits? It might require a slightly larger sample set (maybe the last 100 commits), but it seems it would provide more flexibility (hence more usefulness) to the autotuning.

I believe you'd want to first calculate the elapsed time between each commit in the sample set, then look for groupings of elapsed time. If you have a set that looks like this:

Time (ms) Number
 2 *
 4 *
 6
 8 **
10 *
12 ******
14 ****
16 **
18
20 *

then you'd want a delay of 16ms. I think this calculation could be done fairly quickly by grouping the commits into buckets of different elapsed times, then looking for the largest elapsed time that has a number of commits greater than the mean number of commits for all the buckets. But I'm not a statistician; hopefully someone here is. :)

-- 
Jim C. Nasby, Database Consultant decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"
On Sat, 2005-01-22 at 00:18 -0600, Jim C. Nasby wrote:
> 1) I'm in favor of autotuning anything possible.
> 2) In addition to turning group_commit on and off, what about also
> adjusting the commit delay, based on statistics of recent commits? It
> might require a slightly larger sample set (maybe the last 100 commits),
> but it seems it would provide more flexibility (hence more usefulness)
> to the autotuning.
>
> I believe you'd want to first calculate the elapsed time between each
> commit in the sample set, then look for groupings of elapsed time. If
> you have a set that looks like this:
>
> Time (ms) Number
>  2 *
>  4 *
>  6
>  8 **
> 10 *
> 12 ******
> 14 ****
> 16 **
> 18
> 20 *
>
> then you'd want a delay of 16ms. I think this calculation could be done
> fairly quickly by grouping the commits into buckets of different elapsed
> times, then look for the largest elapsed time that has a number of
> commits greater than the mean number of commits for all the buckets. But
> I'm not a statistician, hopefully someone here is. :)

Yes, I considered that, but since we're talking about a frequently executed piece of code, I was hoping to keep it short and sweet. What do others think?

The other issue is that the likely time granularity on many OSes will be 10ms anyway, so you have a choice of 0, 10, 20, 30ms...

Overall, group commit isn't much use until the log disk is getting very busy. Delays don't really need to be more than a disk rotation, and even a laptop can manage 11ms between sequential writes. I'd suggest hardcoding commit_delay at 10ms, but we still need an on/off switch, so it seems sensible to keep the parameter. We may be in a better position to use fine-grained settings in the future.

-- 
Best Regards, Simon Riggs
On Sat, Jan 22, 2005 at 08:47:37AM +0000, Simon Riggs wrote:
> On Sat, 2005-01-22 at 00:18 -0600, Jim C. Nasby wrote:
> > 1) I'm in favor of autotuning anything possible.
> > 2) In addition to turning group_commit on and off, what about also
> > adjusting the commit delay, based on statistics of recent commits? It
> > might require a slightly larger sample set (maybe the last 100 commits),
> > but it seems it would provide more flexibility (hence more usefulness)
> > to the autotuning.
> >
> > I believe you'd want to first calculate the elapsed time between each
> > commit in the sample set, then look for groupings of elapsed time. If
> > you have a set that looks like this:
> >
> > Time (ms) Number
> >  2 *
> >  4 *
> >  6
> >  8 **
> > 10 *
> > 12 ******
> > 14 ****
> > 16 **
> > 18
> > 20 *
> >
> > then you'd want a delay of 16ms. I think this calculation could be done
> > fairly quickly by grouping the commits into buckets of different elapsed
> > times, then look for the largest elapsed time that has a number of
> > commits greater than the mean number of commits for all the buckets. But
> > I'm not a statistician, hopefully someone here is. :)
>
> Yes, I considered that, but since we're talking about a frequently
> executed piece of code, I was hoping to keep it short and sweet.
> What do others think?

I don't think the frequently executed code would need to differ between the two options. The remaining analysis would be done by a background process.

-- 
Jim C. Nasby, Database Consultant decibel@decibel.org
Simon Riggs <simon@2ndquadrant.com> writes:
> Each transaction commit gets the current time(NULL) immediately before
> it commits.

time() has 1 second resolution and ergo is utterly useless for this. gettimeofday() may have sufficient resolution, but the resolution is not specified anywhere.

regards, tom lane
On Fri, 21 Jan 2005 23:52:51 +0000, Simon Riggs <simon@2ndquadrant.com> wrote:
> Currently, we have group commit functionality via GUC parameters
> commit_delay and commit_siblings

And since 7.3 we have ganged WAL writes (c.f. the thread starting at http://archives.postgresql.org/pgsql-hackers/2002-10/msg00331.php), which IMHO is a better solution to the same problem. Maybe the code dealing with the commit_xxx parameters should just be removed.

Are you or is anybody else aware of benchmarks showing that group commit via commit_xxx is still useful?

Servus
Manfred
On Mon, 2005-01-24 at 12:12 +0100, Manfred Koizar wrote:
> On Fri, 21 Jan 2005 23:52:51 +0000, Simon Riggs <simon@2ndquadrant.com>
> wrote:
> > Currently, we have group commit functionality via GUC parameters
> > commit_delay and commit_siblings
>
> And since 7.3 we have ganged WAL writes (c.f. the thread starting at
> http://archives.postgresql.org/pgsql-hackers/2002-10/msg00331.php) which
> IMHO is a better solution to the same problem.

Thanks for making me aware of that explanatory link. The comments in the code say something along those lines... I've done time in xlog.c :( but I misunderstood the effect of that code.

> Maybe the code dealing
> with commit_xxx parameters should just be removed.

Based upon the description, I would be inclined to agree.

> Are you or is anybody else aware of benchmarks showing that group commit
> via commit_xxx is still useful?

Now that you mention it, no, but then I had thought other contention masked it. My understanding was that group commit could often slow performance if inappropriately applied, so seeing no benefit was not evidence that there was no benefit to be had.

Actually, the reason for raising the subject was for the very reason you suggest. I'm about to benchmark the system under heavy load and was looking at ways of deciding whether that part of the code would ever be worthwhile... the autotuning capability is a side effect of being able to dynamically measure the utility of that feature. (That thought could be applied widely.)

So: objective: measure whether commit_delay is worth keeping.

-- 
Best Regards, Simon Riggs
Simon Riggs <simon@2ndquadrant.com> writes: > So: objective: measure whether commit_delay is worth keeping. My guess is that it would only be useful in highly specialized cases, but since the code is so small and localized, it's hard to argue that there's any great value in ripping it out either. regards, tom lane