Re: How to backup a postgreSQL of 80 GByte ? - Mailing list pgsql-general

From: SZUCS Gábor
Subject: Re: How to backup a postgreSQL of 80 GByte ?
Date:
Msg-id: 004701c2aa6f$9f3cd110$0a03a8c0@fejleszt2
In response to: How to backup a postgreSQL of 80 GByte ? (Michelle Konzack <linux.mailinglists@freenet.de>)
Responses: Re: How to backup a postgreSQL of 80 GByte ? (Michelle Konzack <linux.mailinglists@freenet.de>)
List: pgsql-general
----- Original Message -----
From: "Michelle Konzack" <linux.mailinglists@freenet.de>
To: <pgsql-general@postgresql.org>
Sent: Friday, December 20, 2002 12:55 AM
Subject: [GENERAL] How to backup a postgreSQL of 80 GByte ?


> Hello,
>
> I am running a postgreSQL under Debian/GNU-Linux 3.0r0 and a Ultra-
> ATA-133 RAID-1 with three 120 GByte IBM-Drives.
>
> Now I have the Problem to backup the stuff.
>
> I was thinking, to export all tables seperately to text files, gzip
> and backup it. There will be around 830.000 millons tables !


Call me suspicious, but 830 thousand million tables in 80 GByte is a little
too much to believe: that works out to less than one bit per table on
average... Even if you also count views, which, given the size of the DB, I
can't imagine (managing such a large DB, you can't be that lame ;) ).

Is 80GB the binary on-disk size, the plain dump size, or the compressed dump
size? And is it really 830 billion tables, or rather 830 thousand, or 830
million? Just out of curiosity...
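
If you want to check the real number, something like this should do
(assuming you can reach the DB with psql; the database name is just a
placeholder, and relkind = 'r' matches ordinary tables in the pg_class
catalog):

    psql -d yourdb -t -c "SELECT count(*) FROM pg_class WHERE relkind = 'r';"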

As for the hardware, I agree with Scott regarding tapes; they are the
typical medium for backup and archiving. As for their speed... well, let's
hope you won't have to restore from one ;)
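
If you do go the tape route, a minimal sketch (the device name /dev/nst0 and
the database name are just assumptions; adjust for your drive):

    # stream a compressed dump straight to a non-rewinding tape device
    pg_dump yourdb | gzip | dd of=/dev/nst0 bs=32k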

As for compression, bzip2 is really much better on text files such as SQL
dumps. Also, don't forget the "-9" flag (though I believe that is already
the default).
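
For example, piping the dump through bzip2 so no huge intermediate file is
needed (the database name is a placeholder):

    # dump and compress in one pass
    pg_dump yourdb | bzip2 -9 > yourdb.sql.bz2
    # and to restore:
    #   bunzip2 -c yourdb.sql.bz2 | psql yourdb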

Another surprisingly good compression method is plain zip (!) applied twice
(!). The only combination I know that beats it is zip followed by rar
(zip.rar :). Try to find the most effective zip and rar programs, though; I
used a fairly old zip and the newest WinRAR (but there is also a rar 3.10
for Linux, for example). These methods have the inconvenience of awkward
decompression and browsing, which, at this raw data size, is a real
drawback. Make a test on an 80MB dump fragment first to see whether it is
worth it for you at all; IIRC the result can be more than 10% smaller than
bzip2's.
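
A rough way to run that test (file names are placeholders, and the rar
invocation assumes the Linux rar 3.x command-line syntax):

    # carve an ~80MB sample out of the full dump
    dd if=full_dump.sql of=sample.sql bs=1024k count=80

    # baseline: bzip2, keeping the original
    bzip2 -9 -k sample.sql                # -> sample.sql.bz2

    # plain zip applied twice
    zip -9 -q pass1.zip sample.sql
    zip -9 -q pass2.zip pass1.zip

    # zip, then rar on top of the zip
    rar a -m5 sample.rar pass1.zip

    # compare the resulting sizes
    ls -l sample.sql.bz2 pass2.zip sample.rar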

G.
--
while (!asleep()) sheep++;


