On Tue, Mar 24, 2015 at 10:05:12AM -0400, Bruce Momjian wrote:
> On Tue, Mar 24, 2015 at 09:47:56AM -0400, Noah Misch wrote:
> > On Sun, Mar 22, 2015 at 10:53:12PM -0400, Bruce Momjian wrote:
> > > On Sun, Mar 22, 2015 at 04:41:19PM -0400, Noah Misch wrote:
> > > > On Wed, Mar 18, 2015 at 05:52:44PM -0400, Bruce Momjian wrote:
> > > > > This "junk" digit zeroing matches the Oracle behavior:
> > > > >
> > > > > SELECT to_char(1.123456789123456789123456789d, '9.9999999999999999999999999999999999999') as x from dual;
> > > > > ------
> > > > > 1.1234567891234568000000000000000000000
> > > > >
> > > > > Our output with the patch would be:
> > > > >
> > > > > SELECT to_char(float8 '1.123456789123456789123456789', '9.9999999999999999999999999999999999999');
> > > > > ------
> > > > > 1.1234567891234500000000000000000000000
> >
> > > > These outputs show Oracle treating 17 digits as significant while PostgreSQL
> > > > treats 15 digits as significant. Should we match Oracle in this respect while
> > > > we're breaking compatibility anyway? I tend to think yes.
> > >
> > > Uh, I am hesitant to adjust our precision to match Oracle as I don't
> > > know what they are using internally.
> >
> > http://sqlfiddle.com/#!4/8b4cf/5 strongly implies 17 significant digits for
> > float8 and 9 digits for float4.
>
> OK, I am fine with using those values if you can find them as compiler
> defines, but I don't see how we can grab those values from a user test
> on Oracle.
>
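
For what it's worth, C11's <float.h> does expose those round-trip digit
counts as compiler defines: FLT_DECIMAL_DIG and DBL_DECIMAL_DIG, which
come out as 9 and 17 on IEEE-754 platforms.  A quick sketch, just to
show the names; pre-C11 compilers only provide FLT_DIG/DBL_DIG, so we
would need a fallback (e.g. DBL_DIG + 2) there:

    #include <float.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* digits that survive a decimal -> binary -> decimal round trip */
        printf("FLT_DIG = %d\n", FLT_DIG);    /* 6 */
        printf("DBL_DIG = %d\n", DBL_DIG);    /* 15 */
    #ifdef FLT_DECIMAL_DIG
        /* C11 only: digits needed to reproduce the binary value exactly */
        printf("FLT_DECIMAL_DIG = %d\n", FLT_DECIMAL_DIG);   /* 9 on IEEE-754 */
        printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);   /* 17 on IEEE-754 */
    #endif
        return 0;
    }
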
> There are some "invisible" float digits that don't appear in %f but can
> be shown if desired --- I think we used to do that in the regression
> tests, but found they added too much platform-specific randomness. Do
> we want to go in that direction?

How about having to_char() honor our extra_float_digits GUC, so users
who want those digits can get them?
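
To illustrate the idea (just a plain-C sketch, not the actual
formatting.c change): float8out already chooses its precision as
DBL_DIG + extra_float_digits, and to_char() could consult the same GUC,
which is what makes the "invisible" digits appear on request:

    #include <float.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  num = 1.123456789123456789;
        int     extra_float_digits = 3;     /* GUC maximum in 9.4 */
        int     ndig = DBL_DIG + extra_float_digits;

        printf("%f\n", num);              /* 1.123457 -- plain %f hides the rest */
        printf("%.*g\n", DBL_DIG, num);   /* 1.12345678912346 -- 15 digits, our default */
        printf("%.*g\n", ndig, num);      /* 18 digits -- the "invisible" ones show up */
        return 0;
    }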

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +