Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it? - Mailing list pgsql-general

From Aleksey Tsalolikhin
Subject Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?
Date
Msg-id CA+jMWodvN_9x0Snwty3Q1157mxUeTOyJ-=YYBGR-f+HvWmQrwg@mail.gmail.com
In response to Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?  (Aleksey Tsalolikhin <atsaloli.tech@gmail.com>)
Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-general
Executive summary:  Why would the TOAST table on the slave have 4x the
page count of the master?  Is there a way to compact it if I don't
have enough disk space to duplicate the table?  How do I prevent this
situation from recurring?

On Wed, Mar 14, 2012 at 7:38 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>
> The good news is that if the table is
> bloated, it should be able to just write to the free space in the
> table that's already there.

Thank you, I got it.  The table is not bloated, according to
check_postgres.pl --action=bloat.

I compared the number of pages in the large table between the Slony
origin node and the slave, using "SELECT relname, relpages FROM pg_class":

origin:  386,918
slave: 421,235

origin toast: 19,211,059
slave toast: 70,035,481

How about that?
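At the default 8 KB page size (BLCKSZ = 8192, an assumption; a custom
build could differ), those page counts translate to bytes like this:

```sql
-- Multiply relpages by the assumed 8192-byte block size
-- to estimate on-disk size of each TOAST table.
SELECT 19211059::bigint * 8192 AS origin_toast_bytes,  -- ~146 GiB
       70035481::bigint * 8192 AS slave_toast_bytes;   -- ~534 GiB
```

So the slave's TOAST table alone accounts for roughly 534 GiB of the
527 GB figure, versus roughly 146 GiB on the origin.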

reltuples on origin toast table:  6.76368e+07
reltuples on slave toast table:  7.00354e+07
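For exact on-disk sizes (rather than the planner's relpages/reltuples
estimates), something like the following can be run on both nodes; the
table name here is hypothetical:

```sql
-- pg_relation_size() counts only the main heap;
-- pg_total_relation_size() also includes TOAST and indexes.
SELECT pg_size_pretty(pg_relation_size('big_table'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('big_table')) AS with_toast_and_indexes;
```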

Here is my situation:

1. I've got an 800 GB (not TB as I wrote earlier today) filesystem.

2. 595 GB is in use by the Postgres database, and over 527 GB of that
by this large table (including TOAST).

3. 155 GB is free.

4. pg_dump of this large table comes out to 212 GB in size (so the
physical size is nearly 3x the logical size).

Why does the slave TOAST table have 4x the page count of the master?
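One way to see whether those extra pages are dead tuples or reusable
free space is the contrib pgstattuple module (assuming it is installed;
the TOAST table's OID suffix below is hypothetical):

```sql
-- pgstattuple() scans the relation and reports live vs. dead tuple
-- counts plus free space; run it against the slave's TOAST table.
SELECT table_len, tuple_percent, dead_tuple_percent, free_percent
FROM pgstattuple('pg_toast.pg_toast_12345');
```

Note that pgstattuple reads the whole relation, so on a table this size
it will take a while and generate significant I/O.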

Best,
Aleksey
