Thread: finding out vacuum completion %, and VACUUM vs. VACUUM FULL
Hi. We have archived and removed the majority of the data from a database; the main impact was on 4 tables, three of which lost several million rows each and one of which lost several dozen million rows. Naturally we decided to run VACUUM FULL on the database to reclaim all the space; it has been running for 22 hours already. Can VACUUM FULL get stuck and run forever, i.e. should we cancel it? Is there any way to estimate how much time remains for it to run? Or should we cancel it and run a plain VACUUM instead? The 8.2 manual says you should use FULL when the majority of the data has been deleted, but I'm concerned about the time it takes :)

Also, would increasing maintenance_work_mem while the query is running help? It's currently 2GB out of 4GB RAM; the postgres process supposedly running the vacuum is slowly eating up memory, but it's not even at 30% yet...
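For checking on a long-running vacuum: 8.2 has no progress view (that only arrived much later, as pg_stat_progress_vacuum in 9.6), so the closest thing is confirming the backend is still active. A sketch, assuming default stats settings:

```sql
-- 8.2 column names ('procpid', 'current_query'); later releases renamed them.
SELECT procpid, query_start, current_query
FROM pg_stat_activity
WHERE current_query LIKE 'VACUUM%';
```

If the row is there and the backend keeps generating disk I/O (check with vmstat/iostat at the OS level), it is working, just slowly.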
OK, here's the update: after ~30 hours we killed the VACUUM FULL and ran a plain VACUUM on the tables we freed. However, VACUUM hasn't freed any space at all 0_o We want to launch VACUUM FULL on a per-table basis, but we can't have any more downtime right now, so we will launch it tonight.

The original question still stands: is there any way to estimate VACUUM FULL's time-to-run? Or any way to speed it up besides the obvious (increasing maintenance_work_mem & max_fsm_pages, and removing all workload)? Can someone please help with this one? I wonder why people only try to help with simple questions, or when I flame 0_o
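An unchanged on-disk size after plain VACUUM is actually expected: it only marks dead-row space as reusable inside the table's files, and returns space to the OS only if whole pages at the end of the table happen to be empty. One way to watch the actual on-disk size before and after (a sketch; `big_table` is a placeholder name, and `pg_total_relation_size`/`pg_size_pretty` are available from 8.1 on):

```sql
-- Total size of the table including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('big_table'));
```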
On Aug 7, 2007, at 1:17 AM, Sergei Shelukhin wrote:

> Ok here's the update: after ~30 hours we have killed vacuum full and
> did vacuum on the tables we freed.
> However, VACUUM hasn't freed any space at all 0_o
> We want to launch vacuum full on per-table basis but we can't have any
> more downtime right now so we will launch it at night today.
>
> The original question still stands, is there any way to diagnose
> vacuum full time-to-run?

It could easily take many days. VACUUM FULL is painfully slow. Dropping indexes and suchlike can make it faster, but it's still painfully slow.

> Or any way to optimize it besides the obvious (maintenance_work_mem &
> max_fsm_pages increases and no workload)?
> Can someone please help with this one?

VACUUM FULL is about the worst thing you can do in this case.

If you have adequate disk space free (enough to hold another copy of the new table) and the table has an index on it, then CLUSTER the table. If not, dump and restore the table.

Cheers,
  Steve
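On 8.2 the CLUSTER suggestion would look something like this sketch (table and index names are placeholders; CLUSTER rewrites the table in index order, so it needs roughly one extra copy's worth of free disk while it runs):

```sql
-- Syntax used through 8.2: CLUSTER indexname ON tablename
CLUSTER big_table_pkey ON big_table;
ANALYZE big_table;  -- refresh planner statistics after the rewrite
```

(8.3 and later also accept the form `CLUSTER big_table USING big_table_pkey;`.)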
"Steve Atkins" <steve@blighty.com> writes:

> On Aug 7, 2007, at 1:17 AM, Sergei Shelukhin wrote:
>
>> Or any way to optimize it besides the obvious (maintenance_work_mem &
>> max_fsm_pages increases and no workload)?
>> Can someone please help with this one?

What does the output of "VACUUM VERBOSE" say?

> If you have adequate disk space free (enough to hold another
> copy of the new table) and the table has an index on it, then
> CLUSTER the table.

Or you can use ALTER TABLE to change the type of a column, which forces the whole table to be rewritten.

--
Gregory Stark
EnterpriseDB  http://www.enterprisedb.com
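A sketch of the ALTER TABLE trick (placeholder table and column names; the column is "changed" to the type it already has, which on 8.x still forces a full rewrite of the table, compacting it in the process):

```sql
-- 'id' is already integer; the no-op type change still rewrites the table
ALTER TABLE big_table ALTER COLUMN id TYPE integer;
```

Like CLUSTER, this needs enough free disk to hold a second copy of the table during the rewrite, and it takes an exclusive lock for the duration.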
On Tue, Aug 07, 2007 at 08:40:47AM -0700, Steve Atkins wrote:

> If you have adequate disk space free (enough to hold another
> copy of the new table) and the table has an index on it, then
> CLUSTER the table.

Be advised that there are some MVCC issues with CLUSTER in current versions, but normally you'd only run into them in a serializable transaction. If you're not using those you're probably fine, but remember that pg_dump and pg_dumpall use serializable transactions.

--
Decibel!, aka Jim Nasby   decibel@decibel.org
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
> If not, dump and restore the table.

Unfortunately we do not have adequate disk space; we wanted to reduce the database size in order to back it up, because there is no more space for backups either 0_o
Is there any way to prevent
Dump & restore - you mean pg_dump?
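For reference, the dump-and-restore suggestion means something like this sketch (placeholder names and paths; dropping the table loses dependent views and foreign keys, needs downtime, and the dump file itself still needs disk space somewhere, possibly on another machine):

```shell
pg_dump -t big_table mydb > /somewhere/big_table.sql  # dump one table (schema + data)
psql mydb -c 'DROP TABLE big_table;'
psql mydb -f /somewhere/big_table.sql                 # recreate and reload it
```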