Thread: copying a database without dumping it
Hi list,

I know this has been asked many times on this mailing list, but no reply really satisfied me :-)

Anyway: I have a running Postgres database, about 6 GB in size. Now I want to copy this database to another host. pg_dumpall obviously comes to mind, but there are two problems:

1. The new host has no internet connection, so I have to copy the data via DVD. Doesn't sound like a problem, eh? But it is! I have to copy the database to 4 different hosts. So let's calculate: 4 hosts, about 30 GB of SQL statements (plain), therefore about 8 DVDs = lots of annoying DVD changes. Which brings me to the next problem: time.

2. All 4 hosts have to be updated within 2-3 hours, which is simply impossible with a plain-text export. If I remember correctly, the initial import of the data took about 3-4 hours on a Sun 240, so it will take much longer on a (much slower) i386 system.

Because of these problems I thought about just cp'ing the data folder. This didn't work. Is there some sort of evil trick to do so? Is Postgres binary compatible across hosts? (It wouldn't be a huge problem if it's not.)

Now I'll try a custom dump. Perhaps this will suffice, but I guess it's impossible for a dump to be as fast as a cp, so if a cp were possible I would favour it. Oh, and did I mention: everything has to be done tomorrow... *sigh*

Many thanks
Marc
This link explains a lot of useful techniques for backup and restore:

http://www.postgresql.org/docs/8.1/static/backup.html

On 12/15/05, Marc Brünink <mbruen@smartsoft.de> wrote:
> Hi list,
>
> I know this has been asked many times on this mailing list, but no reply
> really satisfied me :-)
> [...]
> Now I'll try a custom dump. Perhaps this will suffice, but I guess it's
> impossible for a dump to be as fast as a cp, so if a cp were possible I
> would favour it. Oh, and did I mention: everything has to be done
> tomorrow... *sigh*
>
> Many thanks
> Marc

--
Regards
Pandu
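[The backup chapter linked above also covers a file-system-level copy, which is what Marc's cp attempt was aiming at. A rough sketch, assuming the data directory lives at /var/lib/pgsql/data and that source and target run the same PostgreSQL major version on a compatible architecture — both assumptions, not stated in the thread:

    # Stop the cluster so the on-disk files are consistent, then archive them.
    pg_ctl -D /var/lib/pgsql/data stop -m fast
    tar czf /tmp/pgdata.tar.gz -C /var/lib/pgsql data
    pg_ctl -D /var/lib/pgsql/data start

    # On each target host (with no server running there yet):
    #   tar xzf pgdata.tar.gz -C /var/lib/pgsql
    #   chown -R postgres:postgres /var/lib/pgsql/data
    #   pg_ctl -D /var/lib/pgsql/data start

A file-level copy is only valid while the server is stopped, which is presumably why the plain cp of the running cluster failed.]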
Marc Brünink schrieb:
> Hi list,
> ...
> Now I'll try a custom dump. Perhaps this will suffice, but I guess it's
> impossible for a dump to be as fast as a cp, so if a cp were possible I
> would favour it. Oh, and did I mention: everything has to be done
> tomorrow... *sigh*

Actually it's faster. A custom dump is the way to go because it's much more flexible than a plain-text dump. I would not use pg_dumpall but pg_dump for each database in turn. Also have a look at the various compression options, or, if you're short on free space, try an uncompressed custom dump and rar. Since the dump only contains DDL and data, it's much less data than your pg_data directory currently holds.

HTH
Tino Wildenhain
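[A minimal sketch of the per-database custom dump Tino suggests, using the database name "shape" that appears later in the thread; the output file name is an assumption:

    # Compressed custom-format dump of one database ("shape");
    # repeat with the appropriate name for every other database in the cluster.
    pg_dump -F c -Z 9 -U postgres -f shape.dump shape

The custom format is already compressed, so a 6 GB cluster typically yields a dump far smaller than the plain-text SQL export would be.]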
On Thursday, Dec 15, 2005, at 12:11 Europe/Berlin, Tino Wildenhain wrote:

> Marc Brünink schrieb:
>> Hi list,
> ...
>> Now I'll try a custom dump. Perhaps this will suffice, but I guess it's
>> impossible for a dump to be as fast as a cp, so if a cp were possible I
>> would favour it. Oh, and did I mention: everything has to be done
>> tomorrow... *sigh*
>
> Actually it's faster. A custom dump is the way to go because it's much
> more flexible than a plain-text dump.
[..]
> Since the dump only contains DDL and data, it's much less data than your
> pg_data directory currently holds.

Oh yes! It's damn fast! All hail to the Postgres crew! Gosh, my dump file is only 1.3 GB. Impressive. I'm looking forward to the import; I guess I'll get another surprise...

I used:

    pg_dump -f shape.postgresql -F c -o -U postgres -Z 9 -v shape

One last thing: will I have to re-cluster my tables?

Thanks
Marc
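[For completeness, a hedged sketch of the matching import on each target host: the database name "shape" and the dump file name come from the command above, while the user and flags are assumptions:

    # Custom-format dumps are restored with pg_restore. The target database
    # must exist first (alternatively, pg_restore -C can create it).
    createdb -U postgres shape
    pg_restore -U postgres -d shape -v shape.postgresql

The -v flag just makes pg_restore report progress as it goes, which is handy when several hosts have to be loaded against a deadline.]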