Re: Canceling a vacuum freeze - Mailing list pgsql-admin

From Alvaro Herrera
Subject Re: Canceling a vacuum freeze
Date
Msg-id 20160523224600.GA407273@alvherre.pgsql
In response to Canceling a vacuum freeze  (Natalie Wenz <nataliewenz@ebureau.com>)
Responses Re: Canceling a vacuum freeze
List pgsql-admin
Natalie Wenz wrote:
> Question:
> When you cancel a vacuum freeze (or an autovacuum that is running because the table's max xid exceeded
> autovacuum_freeze_max_age) that's running on a large table, how much work is potentially lost?
>
> I ask because I have a vacuum freeze running in single-user mode on a database that is 46 TB, 35 TB of which is one
> table. I recently migrated it to 9.5.2 (on a new host), but because so much of the data is in one ridiculous table, I
> dumped the old database, excluding that big table, with pg_dump, restored everything else using pg_restore, and then
> used copy statements to dump and then load the data from the big table in chunks, so I could dump and reload that
> table with some parallelism. I got everything migrated and started logging live data to it. Everything went well
> until a few weeks later, when the autovacuum wasn't able to keep up with our transaction load. The database shut
> itself down; I got it running in single-user mode and started a vacuum freeze. After a while we realized (you
> probably know where this is going) that the vacuum has to freeze allllllll of the rows in the 35 TB table. I'm
> wondering if it would be worth it to shut it down again and retune for a more aggressive vacuum. I'm under the
> impression that some work would probably be lost, but is there a limit to how much?

The problem with freeze vacuums is that they need to scan the whole
table to completion.  If you cancel one, the only thing you really lose
is the portion of the scan that has executed to completion thus far.  If
you put a second one to work, it will start from scratch; depending on
how slow the original one was, it might catch up pretty quickly.
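[To see how far a table still is from the wraparound threshold before or after restarting, a catalog query along these lines can help. This is an illustrative sketch, not something from the thread; it uses only the standard pg_class catalog and the age() and pg_total_relation_size() functions.]

```sql
-- Transaction age of the oldest unfrozen xid per table; tables whose
-- age approaches autovacuum_freeze_max_age get anti-wraparound vacuums.
SELECT c.oid::regclass                              AS table_name,
       age(c.relfrozenxid)                          AS xid_age,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;
```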

The second point to keep in mind is that if there are lots of dead
tuples that need to be cleaned up, scanning the indexes can take
considerable additional time.  You want to avoid this as much as
possible by having the largest maintenance_work_mem you can afford
(keeping in mind that autovacuum workers use autovacuum_work_mem
instead, if that is set); but as I recall the limit is hardcoded at
1GB, which is rather unhelpful for very large tables.
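[On the "retune for a more aggressive vacuum" question, the relevant knobs would look roughly like the fragment below. These are illustrative values for a one-off manual vacuum, not recommendations from the thread:]

```
# postgresql.conf -- illustrative settings for an aggressive manual VACUUM
maintenance_work_mem = 1GB    # vacuum's dead-tuple memory; 1GB is the effective ceiling
vacuum_cost_delay = 0         # no cost-based throttling for manually issued VACUUM
```

(In single-user mode these can also be supplied on the command line with -c; with vacuum_cost_delay at zero the vacuum runs unthrottled, at the cost of more I/O pressure.)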

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

