Re: Cost-Based Vacuum Delay tuning - Mailing list pgsql-performance
| From | Erik Jones |
|---|---|
| Subject | Re: Cost-Based Vacuum Delay tuning |
| Date | |
| Msg-id | E4870E01-5239-4DAD-8D8E-47D4A84D99A8@myemma.com |
| In response to | Re: Cost-Based Vacuum Delay tuning (Guillaume Cottenceau <gc@mnc.ch>) |
| List | pgsql-performance |
On Dec 7, 2007, at 10:44 AM, Guillaume Cottenceau wrote:

> Erik Jones <erik 'at' myemma.com> writes:
>
>>> vacuum_cost_delay/vacuum_cost_limit  (deactivated)  20/200  40/200  100/1000  150/1000  200/1000  300/1000
>>>
>>> VACUUM ANALYZE time                  54 s           112 s   188 s   109 s     152 s     190 s     274 s
>>> SELECT time                          50 s           28 s    26 s    24 s      22 s      20 s      19 s
>>
>> While you do mention that the table you're running your select on is
>> too big to fit in the shared_buffers, the drop in time between the
>> first run and the rest most likely still reflects the fact that when
>
> These figures don't show a difference between first run and
> subsequent runs. For each parameter tuning, a couple of runs are
> fired after database restart, and once the value is approximately
> constant, it's picked and put in this table. The "deactivated"
> column shows the (stable, from subsequent runs) figure when vacuum delay
> is disabled (vacuum_cost_delay parameter quoted), not the first
> run, if that's where the confusion came from.

It was.

> Is it on pgsql-hackers? I haven't found much stuff in
> pgsql-performance while looking for "vacuum_cost_delay tuning".
>
>> would be good to have multiple autovacuum workers that could be tuned
>> for different times or workloads. I know Alvaro was going to work
>
> Sounds interesting.

Run the initial archive search against pgsql-general over the last
year for a thread called 'Autovacuum Improvements'.

>>> I'm wondering if it would not be possible to dynamically ignore
>>> (or lower, if it makes more sense?) the Cost-Based Vacuum Delay
>>> during vacuum full, if a configurable number of queries are
>>> waiting for the lock?
>>>
>>> (please save yourself from answering "you should never run VACUUM
>>> FULL if you're vacuuming enough" - as long as VACUUM FULL is
>>> available in PostgreSQL, there's no reason not to make it as
>>> practically usable as possible, albeit with low dev priority)
>>
>> Ok, I won't say what you said not to say. But, I will say that I
>> don't agree with your conjecture that VACUUM FULL should be made
>> more lightweight; it's like using dynamite to knock a hole in a wall
>> for a window.
>
> Thanks for opening a new kind of trol^Hargument against VACUUM
> FULL, that one's more fresh (at least to me, who doesn't follow
> the list too closely anyway).
>
> Just for the record, I inherited a poorly (actually, "not" would
> be more appropriate) tuned database, containing more than 90%
> dead tuples on large tables, and I witnessed quite some
> performance improvement once I could fix that.

If you really want the VACUUM FULL effect without having to deal with
vacuum_cost_delay, use CLUSTER. It also re-writes the table and, AFAIK,
is not subject to any of the vacuum-related configuration parameters.
I'd argue that if you really need VACUUM FULL, you may as well use
CLUSTER to get a good ordering of the re-written table.

Erik Jones

Software Developer | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
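For reference, here is a minimal sketch of the two approaches discussed in this message: throttling a manual VACUUM with the cost-based delay settings Guillaume was benchmarking, and using CLUSTER in place of VACUUM FULL. The table and index names (my_table, my_table_pkey) are hypothetical, and the parameter values are just examples taken from the test matrix above, not tuning recommendations.

```sql
-- Cost-based vacuum delay: once the accumulated I/O cost of a VACUUM reaches
-- vacuum_cost_limit, the backend sleeps for vacuum_cost_delay milliseconds.
-- A delay of 0 (the default for manual VACUUM) disables the throttling,
-- which corresponds to the "(deactivated)" column in the table above.
SET vacuum_cost_delay = 100;    -- ms to sleep each time the cost limit is hit
SET vacuum_cost_limit = 1000;   -- accumulated cost that triggers the sleep
VACUUM ANALYZE my_table;        -- hypothetical table name

-- CLUSTER as a substitute for VACUUM FULL: it rewrites the whole table
-- (reclaiming dead-tuple space) ordered by the given index and, as noted
-- above, is not subject to the vacuum cost-delay settings. Like VACUUM FULL,
-- it takes an exclusive lock on the table for the duration.
CLUSTER my_table USING my_table_pkey;   -- "CLUSTER ... USING" syntax; older
                                        -- releases use "CLUSTER my_table_pkey ON my_table"
```

Note that CLUSTER builds a complete new copy of the table and its indexes, so it needs roughly that much free disk space for the rewrite.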