Re: Large Dump Files - Mailing list pgsql-admin

From reina@nsi.edu (Tony Reina)
Subject Re: Large Dump Files
Msg-id f40d3195.0207191001.2b8f21a0@posting.google.com
In response to Large Dump Files  (Mike Baker <bakerlmike@yahoo.com>)
List pgsql-admin
Mike,

   Are you sure that 'split' won't work? It is designed specifically
to break a large file into smaller chunks:

pg_dump dbname | split -b 1000m - filename

Reload with

createdb dbname
cat filename* | psql dbname
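
(split names the chunks filenameaa, filenameab, and so on, and the
shell expands the filename* glob in that same order, so psql sees
one continuous stream.)

If the plain dump is still too bulky, split also combines with the
gzip trick on the docs page you linked (just a sketch, assuming gzip
is installed and 1000m chunks fit your filesystem's limit):

pg_dump dbname | gzip | split -b 1000m - filename.gz.

Reload with

createdb dbname
cat filename.gz.* | gunzip | psql dbname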

-Tony

bakerlmike@yahoo.com (Mike Baker) wrote in message news:<20020718173602.10572.qmail@web13808.mail.yahoo.com>...
> Hi.
>
> I am running postgresql 7.1 on Red Hat Linux, kernel
> build 2.4.2-2.  I am in the process of updating
> postgresql to the latest version.
>
> When I dump my database using all the compression
> tricks in:
> http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/backup.html#BACKUP-DUMP-LARGE
>
> my dump file is still over 2gigs and thus the dump
> fails.  We have a large amount of BLOB data in the
> database.
>
> I am wondering:
>
> will cat filename* | psql dbname work if my dump file
> has large binary objects in it?
>
> If not, does anyone have experience getting Red Hat to
> deal with large files?  I can find no documentation
> on large-file support for the kernel build I have.
>
> Thanks.
>
> Mike Baker
>
