Re: Reducing the overhead of NUMERIC data - Mailing list pgsql-hackers

From: Simon Riggs
Subject: Re: Reducing the overhead of NUMERIC data
Date:
Msg-id: 1130955981.8300.1753.camel@localhost.localdomain
In response to: Re: Reducing the overhead of NUMERIC data (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Reducing the overhead of NUMERIC data
List: pgsql-hackers
On Tue, 2005-11-01 at 17:55 -0500, Tom Lane wrote:
> "Jim C. Nasby" <jnasby@pervasive.com> writes:
> > FWIW, most databases I've used limit NUMERIC to 38 digits, presumably to
> > fit length info into 1 or 2 bytes. So there's something to be said for a
> > small numeric type that has less overhead and a large numeric (what we
> > have today).
>
> I don't think it'd be worth having 2 types.  Remember that the weight is
> measured in base-10k digits.  Suppose for instance
>     sign        1 bit
>     weight        7 bits (-64 .. +63)
>     dscale        8 bits (0..255)
> This gives us a dynamic range of 1e-256 to 1e255 as well as the ability
> to represent up to 255 displayable fraction digits.  Does anyone know
> any real database applications where that's not enough?
>
> (I'm neglecting NaN here in supposing we need only 1 bit for sign,
> but we could have a special encoding for NaN.  Perhaps disallow the
> weight = -64 case and use that to signal NaN.)
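
For concreteness, a minimal sketch of packing and unpacking a 16-bit
header along those lines (the names and the excess-64 weight encoding are
illustrative assumptions, not the actual NumericData layout or the
enclosed patch):

#include <stdbool.h>
#include <stdint.h>

/* 1 sign bit, 7 weight bits, 8 dscale bits */
#define HDR_SIGN_MASK    0x8000
#define HDR_WEIGHT_MASK  0x7F00   /* weight stored excess-64: -64..+63 */
#define HDR_DSCALE_MASK  0x00FF   /* dscale: 0..255 */

/* Stored weight field 0 (i.e. weight = -64) could be reserved for NaN. */
#define HDR_IS_NAN(hdr)  (((hdr) & HDR_WEIGHT_MASK) == 0)

static uint16_t
pack_header(bool negative, int weight, unsigned dscale)
{
    return (negative ? HDR_SIGN_MASK : 0) |
           (((unsigned) (weight + 64) << 8) & HDR_WEIGHT_MASK) |
           (dscale & HDR_DSCALE_MASK);
}

static int
unpack_weight(uint16_t hdr)
{
    return (int) ((hdr & HDR_WEIGHT_MASK) >> 8) - 64;
}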

I've coded a short patch to do this. It is the result of two alternative
patches and some thinking, though maybe not enough thinking yet.

The patch given here differs from the above on two counts:

First, it sets

#define NUMERIC_MAX_PRECISION        64

because

#define NUMERIC_MAX_RESULT_SCALE    (NUMERIC_MAX_PRECISION * 2)

means we cannot use all of the bits actually available to us in the new
format. Perhaps we need to decouple these two limits now? Previously the
field had room for 14 bits, i.e. values up to 16383; with
NUMERIC_MAX_PRECISION at 1000, doubling it for the result scale caused
no problems.
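
To see the coupling concretely, a sketch only (NUMERIC_DSCALE_BITS and
the compile-time check are illustrative, not part of the patch):

#define NUMERIC_DSCALE_BITS       8
#define NUMERIC_DSCALE_MAX        ((1 << NUMERIC_DSCALE_BITS) - 1)    /* 255 */

#define NUMERIC_MAX_PRECISION     64
#define NUMERIC_MAX_RESULT_SCALE  (NUMERIC_MAX_PRECISION * 2)         /* 128 */

/* The old 14-bit field (max 16383) held precision 1000 and result scale
 * 2000 easily; an 8-bit dscale cannot, unless precision drops or the
 * two limits are decoupled. */
typedef char result_scale_fits_in_dscale
        [NUMERIC_MAX_RESULT_SCALE <= NUMERIC_DSCALE_MAX ? 1 : -1];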

Second, the on-disk format above puts sign & weight together, whereas the
current code keeps sign and dscale together. Putting sign and weight
together is somewhat difficult, since weight is itself a signed value.

I did code it up that way round, which is reasonably straightforward but
harder than the patch enclosed here. AFAICS (which isn't that far
normally, I grant you) doing it that way round would require some
twos-complement work to get things correct whenever weight is negative,
and that worries me.
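
As an illustration of the extra work involved, a sketch of extracting a
7-bit twos-complement weight stored next to the sign bit (hypothetical
field layout, not the enclosed patch):

#include <stdint.h>

#define N_SIGN_MASK    0x8000
#define N_WEIGHT_MASK  0x7F00   /* 7-bit twos-complement weight */

static int
numeric_weight(uint16_t n_sign_weight)
{
    int w = (n_sign_weight & N_WEIGHT_MASK) >> 8;   /* 0..127 */

    if (w & 0x40)        /* top bit of the 7-bit field set? */
        w -= 0x80;       /* sign-extend: 64..127 -> -64..-1 */
    return w;
}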

IMHO we should accept the step down in maximum numeric precision (down
to "only" 64 digits) rather than put extra processing into every
manipulation of a NUMERIC datatype. With luck, I've misunderstood and we
can have both performance and precision.

If not, I commend 64 digits to you as sufficient for every imaginable
purpose - saving 2 bytes off every numeric column. (And still 28 decimal
places more accurate than Oracle).

Best Regards, Simon Riggs

Attachment
