Re: pg_dump and large files - is this a problem? - Mailing list pgsql-hackers

From Tom Lane
Subject Re: pg_dump and large files - is this a problem?
Date
Msg-id 16708.1035208030@sss.pgh.pa.us
In response to Re: pg_dump and large files - is this a problem?  (Philip Warner <pjw@rhyme.com.au>)
Responses Re: pg_dump and large files - is this a problem?
List pgsql-hackers
Philip Warner <pjw@rhyme.com.au> writes:
> It might be good if someone who knows a little more than me about
> endianness etc has a look at the patch - specifically this bit of code:

> #if __BYTE_ORDER == __LITTLE_ENDIAN

Well, the main problem with that is there's no such symbol as
__BYTE_ORDER ...

I'd prefer not to introduce one, either, if we can possibly avoid it.
I know that we have BYTE_ORDER defined in the port header files, but
I think it's quite untrustworthy, since there is no other place in the
main distribution that uses it anymore (AFAICS only contrib/pgcrypto
uses it at all).

The easiest way to write and reassemble an arithmetic value in a
platform-independent order is via shifting.  For instance,
    // write, LSB first
    for (i = 0; i < sizeof(off_t); i++)
    {
        writebyte(val & 0xFF);
        val >>= 8;
    }

    // read, LSB first
    val = 0;
    shift = 0;
    for (i = 0; i < sizeof(off_t); i++)
    {
        val |= ((off_t) readbyte()) << shift;
        shift += 8;
    }

(The cast to off_t matters: without it, the left shift is done in int width
and overflows once shift reaches 32 on platforms with 64-bit off_t.)

(This assumes readbyte delivers an unsigned byte, else you might need to
mask it with 0xFF before shifting.)
        regards, tom lane

