On 2021-Sep-12, Dean Rasheed wrote:
> So the fix is just to remove the upper bound on this local_rscale, as
> we do for the full-precision calculation. This doesn't impact
> performance, because it's only computing the logarithm to 8
> significant digits at this stage, and when x is very close to 1 like
> this, ln_var() has very little work to do -- there is no argument
> reduction to do, and the Taylor series terminates on the second term,
> since 1-x is so small.
I came here just to opine that there should be a comment explaining why
there is no clamp to the maximum scale here. For example, log_var says
"Set the scales ... so that they each have more digits ...", which seems
clear enough; I think the new comment is a bit on the short side.
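Something along these lines, perhaps (just a sketch of possible wording;
I'm assuming the clamp being removed was the one to
NUMERIC_MAX_DISPLAY_SCALE):

    /*
     * Use a local rscale that gives at least 8 significant digits in the
     * result.  Note that we don't clamp this to the maximum scale: when
     * x is very close to 1, ln(x) is very small, with many leading
     * zeroes after the decimal point, so the rscale needed for 8
     * significant digits can legitimately exceed that limit.
     */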
> Coming up with a test case that doesn't have thousands of digits is a
> bit fiddly, so I chose one where most of the significant digits of the
> result are a long way after the decimal point and shifted them up,
> which makes the loss of precision in HEAD more obvious. The expected
> result can be verified using bc with a scale of 2000.
I couldn't get bc (version 1.07.1) to output the result; it says
Runtime warning (func=(main), adr=47): non-zero scale in exponent
Runtime error (func=(main), adr=47): exponent too large in raise
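Both messages come from bc's ^ operator, which accepts only integer (and
not too large) exponents. If the expression raises to a fractional
power, the usual workaround is to compute x^y as e(y * l(x)) with bc -l;
a generic sketch, since I don't have the exact expression at hand:

    $ bc -l
    scale = 2000
    /* bc's ^ rejects fractional exponents; compute x^y as e(y * l(x)),
       using l() and e() from the -l math library.  The operands below
       are placeholders, not the actual test value. */
    e(0.5 * l(2))    /* sqrt(2) to ~2000 digits */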
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/