Re: pg_dump and large files - is this a problem? - Mailing list pgsql-hackers

From: Giles Lean
Subject: Re: pg_dump and large files - is this a problem?
Msg-id: 13309.1033679729@nemeton.com.au
In response to: Re: pg_dump and large files - is this a problem? (Philip Warner <pjw@rhyme.com.au>)
Responses: Re: pg_dump and large files - is this a problem? (Philip Warner <pjw@rhyme.com.au>)
List: pgsql-hackers
Philip Warner writes:

> My limited reading of off_t stuff now suggests that it would be brave to 
> assume it is even a simple 64 bit number (or even 3 32 bit numbers).

What are you reading??  If you find a platform with 64-bit file
offsets that doesn't support a 64-bit integral type, I will be not
just surprised but amazed.
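
(A compile-time check makes the point: if off_t is 64 bits wide, it
is by definition a 64-bit integral type.  A minimal sketch, assuming
the usual _FILE_OFFSET_BITS convention -- not anything from pg_dump
itself:)

    /* Sketch only: request a 64-bit off_t and verify its width at
     * compile time.  _FILE_OFFSET_BITS must be defined before any
     * system header is included. */
    #define _FILE_OFFSET_BITS 64

    #include <sys/types.h>
    #include <stdio.h>

    /* Compile-time assertion: the array size becomes -1 (an error)
     * if off_t is narrower than 64 bits. */
    typedef char off_t_is_64_bits[sizeof(off_t) >= 8 ? 1 : -1];

    int
    main(void)
    {
        printf("off_t is %d bits wide\n", (int) sizeof(off_t) * 8);
        return 0;
    }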

> One alternative, which I am not terribly fond of, is to have pg_dump
> write multiple files - when we get to 1 or 2GB, we just open another
> file, and record our file positions as a (file number, file
> position) pair. Low tech, but at least we know it would work.

That does avoid the issue completely, of course, and also avoids
problems where a platform might have large file support but a
particular filesystem might or might not.
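
For illustration, a rough sketch of the (file number, file position)
scheme you describe -- the segment naming and the 1GB limit here are
placeholders of my choosing, not anything pg_dump does today:

    /* Hypothetical sketch: write the archive in fixed-size segments
     * ("dump.000", "dump.001", ...) and address data with a
     * (segment number, offset) pair, so no single file ever nears
     * the 1-2GB limit.  Assumes any single write is well under the
     * segment size. */
    #include <stdio.h>
    #include <stdlib.h>

    #define SEGMENT_LIMIT (1024L * 1024L * 1024L)   /* 1GB per segment */

    typedef struct
    {
        int     seg_no;     /* which segment file the data starts in */
        long    seg_off;    /* offset within it; fits in a plain long */
    } ArchivePos;

    static FILE *seg_fp = NULL;
    static int   cur_seg = -1;
    static long  cur_off = 0;

    /* Close the current segment (if any) and open the next one. */
    static void
    open_segment(const char *base, int seg_no)
    {
        char    path[1024];

        if (seg_fp)
            fclose(seg_fp);
        snprintf(path, sizeof(path), "%s.%03d", base, seg_no);
        seg_fp = fopen(path, "wb");
        if (!seg_fp)
        {
            perror(path);
            exit(1);
        }
        cur_seg = seg_no;
        cur_off = 0;
    }

    /* Append a buffer, rolling over to a fresh segment when the
     * write would cross the 1GB boundary, and report where the
     * data landed as a (segment, offset) pair. */
    static ArchivePos
    archive_write(const char *base, const void *buf, size_t len)
    {
        ArchivePos  pos;

        if (cur_seg < 0 || cur_off + (long) len > SEGMENT_LIMIT)
            open_segment(base, cur_seg + 1);

        pos.seg_no = cur_seg;
        pos.seg_off = cur_off;
        fwrite(buf, 1, len, seg_fp);
        cur_off += (long) len;
        return pos;
    }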

> Unless anyone knows of a documented way to get 64 bit uint/int file 
> offsets, I don't see we have much choice.

If you're on a platform that supports large files, it will either
have a straightforward 64-bit off_t or else support the "large files
API" that is common on Unix-like operating systems.

What are you trying to do, exactly?

Regards,

Giles
