> We need to define what the precision of a result should be,
> if it is not assigned to a column (where the precision can be
> the atttypmod). Is there any standard defined for that? If not,
> what about this:
>
> The internal representation holds different precisions for
> DISPLAY and CALC.
>
> On any operation, the DISPLAY precision is set to the higher
> of the two operands' DISPLAY precisions.
>
> On add/subtract, the CALC precision becomes the higher of the
> two.
>
> On multiply, the CALC precision is adjusted to hold the exact
> result up to a (variable settable?) maximum.
>
> On divide, the CALC precision is first set to the maximum and
> afterwards reduced to the number of digits actually used.
>
> If the result gets assigned to an attribute, it is rounded
> to its atttypmod and both precisions are set to that.
>
> The type's output function rounds it to the DISPLAY precision.
>
> The input function sets both precisions to the number of
> digits present after the decimal point.
>
> Needless to say, there will be special functions to round
> explicitly and to set the precisions.
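
To make the proposal above a bit more concrete, here is a rough sketch of
what such a representation and the precision rules for add and multiply
could look like. All names (NumericVal, display_prec, calc_prec,
NUMERIC_MAX_CALC_PREC) are made up for illustration and are not existing
code:

    typedef struct NumericVal           /* hypothetical representation */
    {
        double  value;                  /* the numeric value itself */
        int     display_prec;           /* digits shown by the output function */
        int     calc_prec;              /* digits carried in calculations */
    } NumericVal;

    #define NUMERIC_MAX_CALC_PREC 16    /* assumed settable maximum */

    static int
    imax(int a, int b)
    {
        return (a > b) ? a : b;
    }

    /* add/subtract: DISPLAY and CALC precision both become the higher one */
    static NumericVal
    numeric_add(NumericVal a, NumericVal b)
    {
        NumericVal  r;

        r.value = a.value + b.value;
        r.display_prec = imax(a.display_prec, b.display_prec);
        r.calc_prec = imax(a.calc_prec, b.calc_prec);
        return r;
    }

    /* multiply: CALC precision grows toward the exact result, capped at the maximum */
    static NumericVal
    numeric_mul(NumericVal a, NumericVal b)
    {
        NumericVal  r;

        r.value = a.value * b.value;
        r.display_prec = imax(a.display_prec, b.display_prec);
        r.calc_prec = a.calc_prec + b.calc_prec;
        if (r.calc_prec > NUMERIC_MAX_CALC_PREC)
            r.calc_prec = NUMERIC_MAX_CALC_PREC;
        return r;
    }

The divide and assignment cases would follow the same pattern, with
assignment rounding the value and setting both precisions to the
attribute's atttypmod.
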
I have a routine that does the necessary rounding of 8-byte floating point
values to a precision of up to 8 decimal places. It is not exactly based on
higher math, but it does the job in many financial applications for
assignments, comparisons and display.
I never use it on arithmetic operations.
The interface is as follows:
double RoundDouble(double value, int decimals);
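
Roughly, the idea is just to scale by a power of ten, round to the nearest
integer and scale back. The following is only a minimal sketch of that idea,
not the exact code:

    #include <math.h>

    /*
     * Round 'value' to 'decimals' decimal places (halves are rounded
     * away from zero, as round() does).
     */
    double
    RoundDouble(double value, int decimals)
    {
        double  scale = pow(10.0, (double) decimals);

        return round(value * scale) / scale;
    }

So RoundDouble(3.14159, 2) yields 3.14 and RoundDouble(2.5, 0) yields 3,
subject to the usual binary floating point representation issues.
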
I am happy to submit it for use in the postgres code.
Regards
Theo