>> For numeric types (with no explicit scale and precision) JDBC returns 0
>> for both precision and scale (ResultSetMetaData.getPrecision and getScale
>> methods). This is breaking my app and IMO does not reflect the true state
>> of things, since the Postgres docs state: "NUMERIC without any precision
>> or scale creates a column in which numeric values of any precision and
>> scale can be stored, up to the implementation limit on precision".
>>
>> Shouldn't the PG JDBC driver return the maximum possible values for
>> precision and scale in such cases?
>>
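For reference, the behavior I'm describing can be reproduced with
something along these lines (a rough, untested sketch; the connection
URL, credentials, and class name are just placeholders):

    import java.sql.*;

    public class NumericMeta {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql:test", "user", "pass");
                 Statement st = con.createStatement()) {
                st.execute("CREATE TEMP TABLE tt (a numeric)");
                try (ResultSet rs = st.executeQuery("SELECT a FROM tt")) {
                    ResultSetMetaData md = rs.getMetaData();
                    // Both of these come back as 0 for the unconstrained
                    // numeric column, which is what breaks the app.
                    System.out.println("precision = " + md.getPrecision(1));
                    System.out.println("scale     = " + md.getScale(1));
                }
            }
        }
    }
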
>
> There are a couple of issues here:
>
> 1) While PG supports larger numeric values, the maximum precision you can
> constrain it to is 1000.
>
> jurka=# create temp table tt (a numeric (1001, 1));
> ERROR: NUMERIC precision 1001 must be between 1 and 1000
>
> So do you return 1000 as the maximum precision, because that's the maximum
> they can pass to CREATE TABLE, even though they might actually receive a
> value back with a higher precision? Or do you return something larger (the
> internal pg limit, which I don't know offhand), even though they can't
> create a table column with that limit?
> What we do now certainly isn't all that great, but I haven't heard a
> concrete suggestion on what's better.
OK, I hear ya. How about allowing the user to set the default precision
and scale values for the "numeric" type? I'm not sure how this would be
done in the JDBC world, but it should be doable.
IMO a reasonable default maxing out both precision and scale (1000, 500)
should work for most applications. The numeric type is not all that
common anyway.
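Until something like that exists in the driver, the kind of stopgap I
have in mind on the application side would look roughly like this
(hypothetical helper, not driver code; the fallback numbers are just
the defaults suggested above):

    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.sql.Types;

    final class NumericMetaDefaults {
        // Substitute application-chosen defaults when the driver reports
        // 0 for an unconstrained numeric column.
        static int precisionOf(ResultSetMetaData md, int col)
                throws SQLException {
            int p = md.getPrecision(col);
            if (p == 0 && md.getColumnType(col) == Types.NUMERIC) {
                return 1000; // max precision you can declare in CREATE TABLE
            }
            return p;
        }

        static int scaleOf(ResultSetMetaData md, int col)
                throws SQLException {
            if (md.getColumnType(col) == Types.NUMERIC
                    && md.getPrecision(col) == 0) {
                return 500; // arbitrary default the app can live with
            }
            return md.getScale(col);
        }
    }
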
Peter