Re: pg_dump and large files - is this a problem? - Mailing list pgsql-hackers

From Philip Warner
Subject Re: pg_dump and large files - is this a problem?
Date 2002-10-03
Msg-id 5.1.0.14.0.20021003230559.032fd028@mail.rhyme.com.au
In response to pg_dump and large files - is this a problem?  (Philip Warner <pjw@rhyme.com.au>)
Responses Re: pg_dump and large files - is this a problem?  (Giles Lean <giles@nemeton.com.au>)
List pgsql-hackers
At 11:06 AM 2/10/2002 -0400, Tom Lane wrote:
>It needs to get done; AFAIK no one has stepped up to do it.  Do you want
>to?

My limited reading of off_t stuff now suggests that it would be brave to
assume it is even a simple 64-bit number (or even two 32-bit numbers). One
alternative, which I am not terribly fond of, is to have pg_dump write
multiple files: when we get to 1 or 2 GB, we just open another file and
record our file positions as a (file number, file position) pair. Low tech,
but at least we know it would work.
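
The bookkeeping for that scheme is small. Something like the sketch below
(all names hypothetical, none of this is pg_dump code): every recorded
position is a (file number, file position) pair, and the position always
fits in a plain long because no segment is allowed past 1 GB.

    #include <stdio.h>

    #define SEGMENT_LIMIT (1024L * 1024L * 1024L)  /* 1 GB, safely below 2 GB */

    typedef struct
    {
        const char *basename;   /* prefix for segment file names */
        int         segno;      /* current file number */
        long        segpos;     /* position within the current segment */
        FILE       *fh;
    } SplitFile;

    typedef struct
    {
        int  segno;             /* file number */
        long offset;            /* file position within that segment */
    } SplitPos;

    /* Open (or roll over to) the segment named basename.NNNN. */
    static int
    split_open_segment(SplitFile *sf)
    {
        char path[1024];

        if (sf->fh)
            fclose(sf->fh);
        snprintf(path, sizeof(path), "%s.%04d", sf->basename, sf->segno);
        sf->fh = fopen(path, "wb");
        sf->segpos = 0;
        return sf->fh ? 0 : -1;
    }

    /*
     * Write a buffer, starting a new segment first if this write would
     * push the current one past SEGMENT_LIMIT.  Returns the (file number,
     * file position) pair at which the buffer begins.
     */
    static SplitPos
    split_write(SplitFile *sf, const void *buf, size_t len)
    {
        SplitPos pos;

        if (sf->segpos + (long) len > SEGMENT_LIMIT)
        {
            sf->segno++;
            split_open_segment(sf);
        }
        pos.segno = sf->segno;
        pos.offset = sf->segpos;
        fwrite(buf, 1, len, sf->fh);
        sf->segpos += (long) len;
        return pos;
    }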

Unless anyone knows of a documented way to get 64-bit uint/int file
offsets, I don't see that we have much choice.
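
One dodge that sidesteps the representation question: record sizeof(off_t)
and then spill the offset out a byte at a time, so the archive format never
assumes any particular width. A rough sketch, assuming only that off_t is
an ordinary integer type (which POSIX does guarantee); write_offset and
read_offset are made-up names, and error handling is omitted:

    #include <stdio.h>
    #include <sys/types.h>

    /* Write sizeof(off_t), then the offset least significant byte first. */
    static void
    write_offset(FILE *out, off_t offset)
    {
        int i;

        fputc((int) sizeof(off_t), out);    /* record the width up front */
        for (i = 0; i < (int) sizeof(off_t); i++)
        {
            fputc((int) (offset & 0xFF), out);
            offset >>= 8;
        }
    }

    /* Read back an offset written by write_offset.  Assumes the reading
     * system's off_t is at least as wide as the writing system's. */
    static off_t
    read_offset(FILE *in)
    {
        int   width = fgetc(in);
        off_t offset = 0;
        int   i;

        for (i = 0; i < width; i++)
            offset |= ((off_t) fgetc(in)) << (i * 8);
        return offset;
    }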


----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


