Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:
> On Wed, 30 Oct 2002, Pedro M. Ferreira wrote:
>> I looked at some of these emails and it seemed to me that the problem
>> was that Tom didn't want a parameter that would force people to know
>> about printf number formatting. I think the first solution above (the
>> SHORT and LONG way) is simple, maintains usual output as default and
>> enables 'maximum' precision at request.
> That seems reasonable then; Tom'll probably raise any other objections
> he might have.
My recollection is that other people (perhaps Peter?) were the ones
objecting before. However, I'd be somewhat unhappy with the proposal
as given:
>>Option 'SHORT' would be default and produce the standard sprintf(ascii,...
>>Option 'LONG' would produce sprintf(ascii, "%25.18g", num).
since this seems to me to hardwire inappropriate assumptions about the
number of significant digits in a double. (Yes, I know practically
everyone uses IEEE floats these days. But it's inappropriate for PG
to assume that.)
AFAICT the real issue here is that binary float representations will
have a fractional decimal digit of precision beyond what DBL_DIG claims.
I think I could support adding an option that switches between the
current output format:
	sprintf(ascii, "%.*g", DBL_DIG, num);
and:
	sprintf(ascii, "%.*g", DBL_DIG+1, num);
and similarly for float4. Given carefully written float I/O routines,
reading the latter output should reproduce the originally stored value.
(And if the I/O routines are not carefully written, you probably lose
anyway.) I don't see a need for allowing more flexibility than that.
Comments?
regards, tom lane