On 04/11/2019 08:09 PM, Jeff Janes wrote:
On 04/11/2019 07:40 PM, Jeff Janes wrote:
The disk usage doesn't reach a steady state after one or two autovacs? Or it does, but you are just unhappy about the ratio between the steady state size and the theoretical fully packed size?
Cheers,
Jeff
Since we dump & restore the production DB daily into a staging environment, the difference in size (as reported by psql's \l+) is 11GB in a freshly restored DB as opposed to 70GB in production.
Yeah, that seems like a problem. Do you have long-lived transactions/snapshots that are preventing vacuuming from removing dead tuples? You can run a manual "vacuum verbose" and see how many dead but nonremovable tuples there were, or set log_autovacuum_min_duration to some non-negative value less than the time an autovac run takes, and check the same figures in the log.
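A minimal sketch of those checks, assuming superuser access; the table name `bar` is illustrative, and the `'10s'` threshold is an arbitrary example value:

```sql
-- Look for long-lived transactions/snapshots that can hold back vacuum
-- (backend_xmin is the oldest snapshot each backend still holds open).
SELECT pid, state, backend_xmin, xact_start, query
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xact_start;

-- Vacuum one table verbosely to see removed vs. nonremovable tuples.
VACUUM (VERBOSE) bar;

-- Or log every autovacuum run that takes at least this long.
ALTER SYSTEM SET log_autovacuum_min_duration = '10s';
SELECT pg_reload_conf();
```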
Vacuum frees tuples just fine. It's just that by the time each run finishes, many more have accumulated due to table update activity, ad nauseam. So this unused space constantly grows. Here's a sample autovacuum run:
2019-04-11 19:39:44.450841500 [] LOG: automatic vacuum of table "foo.public.bar": index scans: 1
2019-04-11 19:39:44.450843500 pages: 0 removed, 472095 remain, 4 skipped due to pins, 39075 skipped frozen
2019-04-11 19:39:44.450844500 tuples: 19150 removed, 2725811 remain, 465 are dead but not yet removable
2019-04-11 19:39:44.450845500 buffer usage: 62407557 hits, 6984769 misses, 116409 dirtied
2019-04-11 19:39:44.450846500 avg read rate: 16.263 MB/s, avg write rate: 0.271 MB/s
2019-04-11 19:39:44.450847500 system usage: CPU 59.05s/115.26u sec elapsed 3355.28 sec
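To see whether autovacuum is keeping up between runs, the standard statistics views can be polled; a sketch (all column and view names are from the stock pg_stat_user_tables view):

```sql
-- Which tables accumulate dead tuples fastest, and when each was
-- last vacuumed manually or by autovacuum.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

If one hot table dominates, per-table autovacuum settings (e.g. a lower autovacuum_vacuum_scale_factor via ALTER TABLE ... SET) can make it trigger earlier.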
(Indeed, those dumps you take daily might be the source of those long-lived snapshots. How long does a dump take?)
The dumps are taken daily from the slave server as part of stock FreeBSD postgres port activity.
I don't think it impacts the master server.
Thanks, I'll try those. But as I said, a freshly restored DB is only 11GB in size, not 70 (only the public schema is used).
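To locate where the extra ~59GB lives (table heap vs. index bloat), one way is to rank relations by size; a sketch using the built-in size functions:

```sql
-- Per-table heap and index sizes, largest relations first.
SELECT c.relname,
       pg_size_pretty(pg_table_size(c.oid))   AS table_size,
       pg_size_pretty(pg_indexes_size(c.oid)) AS index_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r' AND n.nspname = 'public'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
```

If indexes account for most of the difference, a REINDEX (or, on recent versions, REINDEX CONCURRENTLY) can reclaim that space without rewriting the heap.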