Re: [SPAM] Re: autovacuum disk IO - Mailing list pgsql-performance
From | Scott Marlowe |
---|---|
Subject | Re: [SPAM] Re: autovacuum disk IO |
Date | |
Msg-id | CAOR=d=0vOZGfBbYB4iR8YwLnN+MKB58kPXG45f=18yMzr5UD2w@mail.gmail.com |
In response to | Re: [SPAM] Re: autovacuum disk IO (Moreno Andreo <moreno.andreo@evolu-s.it>) |
Responses | Re: [SPAM] Re: autovacuum disk IO |
List | pgsql-performance |
On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
> On 02/03/2016 16:49, Scott Marlowe wrote:
>>
>> On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>>>
>>> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk <admin@leboutique.com> wrote:
>>>>
>>>> Hi.
>>>>
>>>> I've noticed that an autovacuum process ran for more than 10 minutes;
>>>> during this time, Zabbix logged more than 90% disk IO utilization on
>>>> the db volume....
>>>>
>>>> ===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic
>>>> vacuum of table "lb_upr.public._reference32": index scans: 1
>>>> pages: 0 removed, 263307 remain
>>>> tuples: 298 removed, 1944753 remain, 0 are dead but not yet removable
>>>> buffer usage: 67814 hits, 265465 misses, 15647 dirtied
>>>> avg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s
>>>> system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec
>>>>
>>>> Is it possible to log autovacuum's IO impact during its execution?
>>>> Is there any way to limit or "nice" the autovacuum process?
>>>
>>> I'll assume you're running a fairly recent version of PostgreSQL.
>>>
>>> There are a few settings that adjust how hard autovacuum works when
>>> it's working.
>>>
>>> autovacuum_max_workers tells autovacuum how many workers to vacuum
>>> with. Lowering this will limit the impact of autovacuum, but generally
>>> the default setting of 3 is reasonable on most machines.
>>>
>>> autovacuum_vacuum_cost_delay sets how long to wait between internal
>>> rounds. Raising this makes autovacuum take bigger pauses internally.
>>> The default of 20ms is usually large enough to keep you out of
>>> trouble, but feel free to raise it and see if your IO utilization
>>> lowers.
>>>
>>> autovacuum_vacuum_cost_limit sets a limit on how much work to do
>>> between the pauses set by the cost delay above. Lowering this will
>>> cause autovacuum to do less work between pauses.
>>>
>>> Most of the time when I'm adjusting these I'm making vacuum more
>>> aggressive, not less aggressive, because vacuum falling behind is a
>>> problem on the large, fast production systems I work on. In your case
>>> you want to watch for when autovacuum IS running and, using a tool
>>> like vmstat or iostat or iotop, watch its % utilization. You can then
>>> adjust cost delay and cost limit to make it less aggressive and see
>>> if your IO utilization goes down.
>>>
>>> Note though that 90% utilization isn't 100%, so it's not likely
>>> flooding the IO. But if you, say, raise cost delay from 20 to 40ms,
>>> it might drop to 75% or so. The primary goal here is to arrive at
>>> numbers that let autovacuum keep up with reclaiming the discarded
>>> tuples in the database without getting in the way of the workload.
>>>
>>> If your workload isn't slowing down, or isn't slowing down very much,
>>> during autovacuum, then you're OK.
>>
>> Just to add a point here: if your machine can't keep up with
>> production load AND the job of vacuuming, then your IO subsystem is
>> too slow and needs upgrading. The difference between a pair of
>> spinning 7200RPM drives and a pair of enterprise-class SSDs (always
>> with power-off-safe writes, etc.; consumer SSDs can eat your data on
>> power off) can be truly huge. I've seen improvements from a few
>> hundred transactions per second to thousands of transactions per
>> second from a simple upgrade like that.
>>
> ... or maybe add some more RAM to have more disk caching (if you're on
> *nix)... this worked for me in the past...
> even if IMHO it's more a temporary "patch" while upgrading (if it
> can't be done in a hurry) than a real solution...

Oh yeah, definitely worth looking at. But RAM can't speed up writes,
just reads, so it's very workload dependent. If your IO subsystem is
maxing out on writes, you need faster drives / IO; if it's maxing out
on reads, more memory. But if your dataset is much bigger than memory
(say 64GB RAM and a 1TB data store), then more RAM isn't going to be
the answer.

So as usual, to help out the OP we might want to know more about his
system. There are a lot of helpful tips for reporting slow queries /
performance issues here:

https://wiki.postgresql.org/wiki/SlowQueryQuestions
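For reference, a minimal postgresql.conf sketch of the three settings discussed
above; the values are illustrative starting points assuming stock 9.x defaults,
not recommendations for any particular workload:

    # Autovacuum throttling knobs (illustrative values only)
    autovacuum_max_workers = 3            # default; max concurrent autovacuum workers
                                          # (changing this requires a server restart)
    autovacuum_vacuum_cost_delay = 40ms   # default 20ms; a longer pause throttles autovacuum harder
    autovacuum_vacuum_cost_limit = 100    # default -1, i.e. falls back to vacuum_cost_limit (200);
                                          # lower = less work done between each pause

The two cost settings take effect after a config reload (pg_ctl reload or
SELECT pg_reload_conf()); on 9.4+ they can also be set with ALTER SYSTEM SET.
After changing them, watch iostat or iotop during the next autovacuum run to
confirm that IO utilization actually drops.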