Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it? - Mailing list pgsql-general

From dennis jenkins
Subject Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?
Date
Msg-id CAAEzAp-HxQEsO-JKUP=ceEAkw1556rOWi1CGgTnORsJfw5HF0A@mail.gmail.com
In response to Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?  (Aleksey Tsalolikhin <atsaloli.tech@gmail.com>)
Responses Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?  (Mark Felder <feld@feld.me>)
List pgsql-general
On Fri, Mar 16, 2012 at 2:20 PM, Aleksey Tsalolikhin
<atsaloli.tech@gmail.com> wrote:
> On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
> <atsaloli.tech@gmail.com> wrote:

> Our database is about 200 GB. Over a WAN link, the last full sync took 8
> hours; I expect it'll be more like 9 or 10 hours this time.
>

Aleksey, a suggestion: the vast majority of the PostgreSQL wire
protocol compresses well. If your WAN link is not already compressed,
construct a compressed SSH tunnel for the PostgreSQL TCP port over the
WAN link. I've done this when rebuilding a 300 GB database (via Slony)
over a bandwidth-limited (2 MB/s) VPN link, and it cut the replication
resync time down significantly.
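
A minimal sketch of what I mean (the host name, forwarded port, and
database name below are just placeholders; adjust them for your setup):

    # On the slave, forward a local port to PostgreSQL on the master,
    # with SSH compression enabled (-C); -N means "no remote command".
    ssh -C -N -L 6432:localhost:5432 postgres@master.example.com

    # Then point the slave's slon conninfo (or psql, for a quick test)
    # at the local end of the tunnel instead of at the master directly:
    psql -h localhost -p 6432 -U postgres yourdb

SSH's built-in compression is zlib-based, so the mostly-text COPY
traffic that Slony pushes during a full sync tends to shrink a lot.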
