Re: [HACKERS] tables > 1 gig - Mailing list pgsql-hackers

From Ole Gjerde
Subject Re: [HACKERS] tables > 1 gig
Date
Msg-id Pine.LNX.4.05.9906181318570.13506-100000@snowman.icebox.org
In response to Re: [HACKERS] tables > 1 gig  (Bruce Momjian <maillist@candle.pha.pa.us>)
List pgsql-hackers
On Fri, 18 Jun 1999, Bruce Momjian wrote:
[snip - mdtruncate patch]

While we're talking about this whole issue, there is one piece missing.
Currently there is no way to dump a database/table over 2 GB.
When pg_dump hits the 2 GB OS file-size limit, it just silently stops and
gives no indication that it didn't finish.

It's not a problem for me yet, but I'm getting very close.  I have one
database with 3 tables over 2 GB (in postgres space), but they still come
out under 2 GB after a dump.  I can't do a pg_dump of the whole database,
however, which would be very nice.

I suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do
something similar to what postgres does with segments.  I haven't looked
at it yet, however, so I can't say for sure.
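In the meantime, a userland workaround is possible by segmenting the dump
outside of pg_dump itself: pipe the output through split so no single file
hits the 2 GB limit, then concatenate the pieces for restore. This is only
a sketch (the database name "mydb" and the 1000 MB segment size are made up
for illustration); the split/cat mechanics are shown with synthetic data:

```shell
# Hypothetical segmented dump: keep each piece under the 2 GB limit.
#   pg_dump mydb | split -b 1000m - mydb.dump.
# Restore by feeding the rejoined stream back to psql:
#   cat mydb.dump.* | psql mydb

# The same split/cat mechanics, demonstrated with a synthetic 1 MB file
# standing in for a large dump:
head -c 1048576 /dev/zero > big.out        # stand-in for a large dump
split -b 262144 big.out big.out.seg.       # 256 KB segments -> 4 files
cat big.out.seg.* > rejoined.out           # reassemble in name order
cmp -s big.out rejoined.out && echo "segments rejoin losslessly"
```

Since split names its output files in lexicographic order (aa, ab, ...),
the shell glob reassembles the segments in the right order automatically.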

Comments?

Ole Gjerde


