On Mon, Aug 21, 2006 at 09:16:46AM -0400, mark@mark.mielke.cc wrote:
> This is what I mean by after thought. PostgreSQL is designed for
> 32-bit processors. Which is fine. I'm not complaining. The question
> was whether there is an interest in pursuing 64-bit specific
> optimizations. In the PostgreSQL code, a quick check points me only to
> "has long int 64" as a 64-bit source code #ifdef. Of the six places
> that reference this, five of them actually slow down the code, as they
> check for overflow of the 'long int' result beyond 4 bytes of
> data. The sixth place is used to define the 64-bit type in use by
> PostgreSQL, which I suspect is infrequently used.
There are two defines, the end result being to declare an int64 type
which is used a fair bit around the place. bigint and bigserial being
the obvious ones.
The checks I see relate to strtol, where the code only wants an int4.
There's no strtoi, so on a 32-bit long the range check is built in, but
if long is 64 bits you have to do the check separately.
That's just an interface problem; there's not a lot we can do about it,
really.
> I believe the answer is no. No or few 64-bit optimization possibilities
> have been chased down, probably because some or many of these would:
>
> 1) require significant re-architecture
>
> 2) reduce the performance in a 32-bit world
Can you think of any places at all where 64-bit would make a difference
to processing? 64-bit gives you more memory, and on some x86 chips, more
registers, but that's it.
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.