Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it? - Mailing list pgsql-general

From Scott Marlowe
Subject Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?
Date
Msg-id CAOR=d=3031k1RjxZuX5LFQR518pyfSemYqThyh3RO6TgHFQNFQ@mail.gmail.com
In response to Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?  (Aleksey Tsalolikhin <atsaloli.tech@gmail.com>)
List pgsql-general
On Fri, Mar 16, 2012 at 1:20 PM, Aleksey Tsalolikhin
<atsaloli.tech@gmail.com> wrote:
> On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
> <atsaloli.tech@gmail.com> wrote:
>> Is there any way to consolidate the pages on the slave without taking
>> replication offline?
>
> Filip Rembiałkowski suggested:   maybe CLUSTER?
>
> Greg Williamson suggested: pg_reorg
>
>
> Thank you, Filip and Greg.  They would both work IF I had enough free
> space on the slave,
> which, sadly, I do not. :(
>
> CLUSTER requires free space at least equal to the sum of the table
> size and the index sizes.
>
> pg_reorg requires roughly twice the space of the target table and its indexes.
>
> Too bad I can't say "CLUSTER TABLE tablename USING_ARRAY
> /dev/temp/array" or something
> like that, using an external array for temporary storage just for the
> cluster.  I do have an external
> USB drive with more than enough free space on it.
>
> I've got a maintenance window scheduled for tomorrow night to get rid
> of the holes in the pages on the
> slave.  My plan is to shut down the application, destroy the
> replication set, re-create it, and start
> replication, which will do a full sync.  It's a little overkill but I
> have this procedure documented
> and I've done it before.
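As a side note on the free-space estimates above: the space CLUSTER needs can be gauged on the slave ahead of the window, since pg_total_relation_size() reports the table together with its indexes and TOAST data. A minimal check (the table name is a placeholder):

```sql
-- Rough free-space check before a CLUSTER: the rewrite needs about
-- the table size plus its indexes. pg_total_relation_size() returns
-- heap + indexes + TOAST in one number.
SELECT pg_size_pretty(pg_relation_size('big_table'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('big_table')) AS with_indexes;
```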

If you've got other big tables in the set, you could move that one
table into its own set, drop that set, resubscribe a fresh set
containing just that table, and then merge the sets.
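A rough slonik sketch of that approach, assuming set 1 is the existing set, node 1 the origin, node 2 the slave, and table id 42 for the bloated table (all ids and the table name here are placeholders; check them against your cluster config, and note that in practice you'd also want WAIT FOR EVENT between steps):

```
# Hypothetical ids throughout -- adjust to your cluster.

# Move the bloated table into a temporary set of its own.
create set (id = 2, origin = 1, comment = 'temp set for bloated table');
set move table (origin = 1, id = 42, new set = 2);

# Drop the temporary set so the slave unsubscribes from that table.
drop set (id = 2, origin = 1);

# Recreate the set, re-add the table, and subscribe the slave;
# the fresh subscription does a full COPY, writing a compact table.
create set (id = 2, origin = 1, comment = 'temp set for bloated table');
set add table (set id = 2, origin = 1, id = 42,
               fully qualified name = 'public.big_table');
subscribe set (id = 2, provider = 1, receiver = 2, forward = no);

# Once the slave has caught up, fold it back into the main set.
merge set (id = 1, add id = 2, origin = 1);
```

The rest of the replication set keeps flowing while the one table resyncs, which is the advantage over tearing down and rebuilding the whole set.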
