Re: big database resulting in small dump - Mailing list pgsql-general

From Lonni J Friedman
Subject Re: big database resulting in small dump
Date
Msg-id CAP=oouFU+++Qpgun8zW1s-TQ9zviC8ripEGxiyG=v=uMKLaUXA@mail.gmail.com
In response to big database resulting in small dump  (Ilya Ivanov <forn@ngs.ru>)
Responses Re: big database resulting in small dump  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
On Fri, Jul 20, 2012 at 11:05 AM, Ilya Ivanov <forn@ngs.ru> wrote:
> I have an 8.4 database (installed on Ubuntu 10.04 x86_64). It holds a Zabbix
> database. The database takes 10GB on disk, but an SQL dump takes only 2GB. I've
> gone through
> http://archives.postgresql.org/pgsql-general/2008-08/msg00316.php and got
> some hints. Naturally, the biggest table is history (the second biggest is
> history_uint; together they account for about 95% of the total size). I tried
> to run CLUSTER on it, but it seemed to be taking forever (3 hours and still not
> finished), so I cancelled it and went with a database drop and restore instead.
> That left the database taking up 6.4GB instead of 10GB. This is a good
> improvement, but still isn't quite what I expected. I would appreciate some
> clarification.
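
The per-table sizes described above can be confirmed with a catalog query
along these lines; this is only a minimal sketch, using nothing beyond the
standard size functions that ship with 8.4, run while connected to the Zabbix
database:

-- Ten largest tables by total on-disk size (heap + indexes + TOAST).
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;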

It's not entirely clear what behavior you expect here.  Assuming that
you're referring to running pg_dump, you should just about never
expect the size of the resulting dump to match the amount of disk
space the database's files consume.  The on-disk footprint includes
indexes, dead rows left behind by updates and deletes, and per-row and
per-page overhead, none of which are written to the dump; a compressed
dump format (pg_dump -Fc) shrinks things further still.  For example,
when I pg_dump a database that consumes about 290GB of disk, the
resulting dump is about 1.3GB.  This is normal and expected behavior.
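
To see how much of that on-disk footprint consists of things pg_dump never
copies out, a query along these lines (again just a minimal sketch against the
standard catalogs and size functions available in 8.4) splits each table's
total size into the heap proper and everything else:

-- Heap pages vs. everything else.  The "everything else" portion is mostly
-- index and TOAST storage; pg_dump recreates indexes from CREATE INDEX
-- statements instead of copying their pages, so they never bulk up a dump.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS heap,
       pg_size_pretty(pg_total_relation_size(oid)
                      - pg_relation_size(oid)) AS indexes_etc,
       pg_size_pretty(pg_total_relation_size(oid)) AS total
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;

Bloat (dead rows) inside the heap and indexes doesn't show up in that
breakdown either, and reclaiming it is roughly what took your database from
10GB down to 6.4GB on restore.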
