Re: SPI bug. - Mailing list pgsql-hackers

From Thomas Hallgren
Subject Re: SPI bug.
Msg-id thhal-0o+FRAwk/yicy9vp4OGTMr5iXa1neg6@mailblocks.com
In response to Re: SPI bug.  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Tom Lane wrote:

>Thomas Hallgren <thhal@mailblocks.com> writes:
>>Exactly. Why should a user of the SPI API be exposed to or even 
>>concerned with this at all? As an application programmer you couldn't 
>>care less. You want your app to perform equally well on all platforms 
>>without surprises. IMHO, PostgreSQL should make a decision whether the 
>>SPI functions support 32-bit or the 64-bit sizes for result sets and the 
>>API should reflect that choice. Having the maximum number of rows 
>>dependent on platform ports is a bad design.
>The fact that 64-bit platforms can tackle bigger problems than 32-bit
>ones is not a bug to be worked around, and so I don't see any problem
>with the use of "long" for tuple counts.
>
I'm not concerned with the use of 32 or 64 bits; I would be equally 
happy with either. What concerns me is that the problem that started 
this "SPI bug" thread was caused by differences in how platforms define 
the int and long types. Instead of rectifying that problem once and for 
all, the type was simply changed to a long.

>  Furthermore, we have never
>promised ABI-level compatibility across versions inside the backend,
>and we are quite unlikely to make such a promise in the foreseeable
>future.
>
I know that no promises have been made, but PostgreSQL is improved every 
day, and this would be a very easy promise to make.

>  (Most of the time you are lucky if you get source-level
>compatibility ;-).)  So I can't get excited about avoiding platform
>dependency in this particular tiny aspect of the API.
Maybe I've misunderstood the objectives behind the SPI layer altogether, 
but since it's well documented and appears to be the "public interface" 
of the backend that extensions are supposed to use, I think it would be 
an excellent idea to make that interface as stable and platform 
independent as possible. I can't really see any disadvantages.

The use of int, long, and long long is often a source of bugs (as with 
this one), and many recommend avoiding them when possible. An int is 
meant to be a datatype whose size matches the natural word size of the 
processor. A long is defined as 'at least as big as int', and a long 
long as 'at least as big as long'. I wonder what that makes 'long long' 
on a platform where int is 64 bits. 128 bits? Also, the interpretation 
of these definitions varies between compiler vendors: on 64-bit Windows 
a long is 32 bits, while on most 64-bit Unix systems it's 64. It's a 
mess...

The 1999 revision of C (C99) declares the following fixed-width types in 
<stdint.h> for a good reason:
   int8_t, int16_t, int32_t, int64_t, uint8_t, uint16_t, uint32_t, uint64_t.

Why not use them unless you have very specific requirements? And why not 
*always* use them in a public interface like the SPI?

Regards,
Thomas Hallgren



