Re: [HACKERS] tables > 1 gig - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: [HACKERS] tables > 1 gig
Date
Msg-id 376B6492.8BE29B7F@trust.ee
In response to Re: [HACKERS] tables > 1 gig  (Ole Gjerde <gjerde@icebox.org>)
List pgsql-hackers
Ole Gjerde wrote:
> 
> On Fri, 18 Jun 1999, Bruce Momjian wrote:
> [snip - mdtruncate patch]
> 
> While talking about this whole issue, there is one piece missing.
> Currently there is no way to dump a database/table over 2 GB.
> When it hits the 2GB OS limit, it just silently stops and gives no
> indication that it didn't finish.
> 
> It's not a problem for me yet, but I'm getting very close.  I have one
> database with 3 tables over 2GB (in postgres space), but they still come
> out under 2GB after a dump.  I can't do a pg_dump on the whole database
> however, which would be very nice.
> 
> I suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do
> something similar to what postgres does with segments.  I haven't looked
> at it yet however, so I can't say for sure.
> 
> Comments?

As pg_dump writes to stdout, you can just use standard *nix tools:

1. use compressed dumps

pg_dump really_big_db | gzip > really_big_db.dump.gz

reload with

gunzip -c really_big_db.dump.gz | psql newdb
or
cat really_big_db.dump.gz | gunzip | psql newdb

2. use split

pg_dump really_big_db | split -b 1m - really_big_db.dump.

reload with

cat really_big_db.dump.* | psql newdb
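
3. use both together

If a single compressed dump could still approach the 2GB file limit, the two
approaches combine (just a sketch -- the 500m chunk size is only an example,
pick whatever suits your filesystem):

pg_dump really_big_db | gzip | split -b 500m - really_big_db.dump.gz.

reload with

cat really_big_db.dump.gz.* | gunzip | psql newdb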

-----------------------
Hannu

