Re: full vacuum of a very large table - Mailing list pgsql-admin

From raghu ram
Subject Re: full vacuum of a very large table
Date
Msg-id AANLkTi=CBjmnmPuKFGXx9gKR7RE=d88j=30SupN+h-LF@mail.gmail.com
Whole thread Raw
In response to full vacuum of a very large table  ("Nic Chidu" <nic@chidu.net>)
List pgsql-admin


On Tue, Mar 29, 2011 at 9:26 PM, Nic Chidu <nic@chidu.net> wrote:
Got a situation where a 130-million-row (137 GB) table needs to be brought down in size to 10 million records (the most recent) with the least amount of downtime.

Would a full vacuum be faster with:
 - 120 mil rows deleted and 10 mil active (delete most of them, then full vacuum), or
 - 10 mil deleted and 120 mil active (delete in small batches and full vacuum after each delete)?

Any other suggestions?


The recommended way is to dump the table after deleting the unneeded rows, then restore it back into the database. A dump and reload will be faster than a VACUUM FULL.
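A minimal sketch of that dump-and-reload approach, assuming a hypothetical table `big_table` with a timestamp column `created_at` and a cutoff that keeps roughly the most recent 10 million rows (adjust names and the predicate to your schema):

```sql
-- Export only the rows worth keeping (the "dump" step).
COPY (SELECT * FROM big_table WHERE created_at >= '2011-01-01')
    TO '/tmp/big_table_keep.csv' CSV;

-- TRUNCATE reclaims all the space instantly, unlike DELETE + VACUUM FULL.
TRUNCATE big_table;

-- Reload the retained rows (the "restore" step).
COPY big_table FROM '/tmp/big_table_keep.csv' CSV;

-- Rebuild planner statistics for the much smaller table.
ANALYZE big_table;
```

Note that TRUNCATE takes an exclusive lock, so the downtime is roughly the time to reload 10 million rows rather than the time to delete and compact 120 million.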

--Raghu Ram 

Thanks,

Nic

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
