Re: Application user name attribute on connection pool - Mailing list pgsql-general

From Radosław Smogura
Subject Re: Application user name attribute on connection pool
Date
Msg-id 201008022343.00122.rsmogura@softperience.eu
In response to Re: Application user name attribute on connection pool  (John R Pierce <pierce@hogranch.com>)
Responses Re: Application user name attribute on connection pool
List pgsql-general
> how would you handle scale factors?   numeric represents a BCD data
> type, with a decimal fractional component.   how would you represent,
> say,  10000.001  in your version?  how would you add 1.001 to 10000.01
> in your binary representation?

I am thinking about a data structure something like this:
[precision 16 bits][scale 15 bits][sign 1 bit][int[n]] (here n can always be
calculated as (size of datatype - 8) / 4).

In this way the number 10000.001 will be stored as the single-element array
8,3,+,{10000001}.
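
For illustration, a C sketch of that layout could look like the following (the
struct and macro names are mine, just to make the bit packing concrete; this is
not how PostgreSQL's NUMERIC is actually declared):

  #include <stdint.h>

  /* Hypothetical layout: a 32-bit header packing precision (16 bits),
   * scale (15 bits) and sign (1 bit), followed by 32-bit limbs. */
  typedef struct BinaryNumeric
  {
      uint32_t header;    /* precision:16 | scale:15 | sign:1 */
      uint32_t limbs[];   /* n limbs of the magnitude, least significant first */
  } BinaryNumeric;

  #define BN_PRECISION(h) ((h) >> 16)
  #define BN_SCALE(h)     (((h) >> 1) & 0x7FFF)
  #define BN_SIGN(h)      ((h) & 1)    /* 0 = '+', 1 = '-' */

  /* 10000.001 from the example above: precision 8, scale 3, sign '+',
   * and the single limb 10000001. */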

If the scales are the same (typically the case in aggregates), then it's just a
matter of adding the arrays of integers.

If the scales aren't the same, then one of the arguments must be multiplied by
10^(scale difference).

In this way the result of 1.001 + 10000.01 will be
1001 + 1000001*10 = 10001011 with scale 3, i.e. 10001.011.
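
As a rough sketch of that scale alignment (single-limb case only, the function
names are mine), it could look like this:

  #include <stdint.h>

  /* Multiply the argument with the smaller scale by 10^(scale difference),
   * then add; the result carries the larger scale.  A real implementation
   * would work on limb arrays and watch for overflow. */
  static uint64_t pow10u(int n)
  {
      uint64_t p = 1;
      while (n-- > 0)
          p *= 10;
      return p;
  }

  uint64_t add_scaled(uint64_t a, int scale_a,
                      uint64_t b, int scale_b, int *result_scale)
  {
      if (scale_a < scale_b)
          a *= pow10u(scale_b - scale_a);
      else
          b *= pow10u(scale_a - scale_b);
      *result_scale = (scale_a > scale_b) ? scale_a : scale_b;
      return a + b;
  }

  /* add_scaled(1001, 3, 1000001, 2, &s) returns 10001011 with s = 3,
   * i.e. 10001.011. */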

I think there is no big algorithmic difference between the NBASE encoding and
an encoding on full bytes, because in the NBASE encoding you take the carry in
addition as (a+b)/NBASE (10000 in PostgreSQL). Here the only difference is that
the carry is taken by shifting 64-bit values, e.g.:
uint64_t l = (uint64_t) a[0] + b[0];   /* widen before adding to keep the carry */
carry = l >> 32;
s[0]  = (uint32_t) (l & 0xffffffff);
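
Extended to a whole limb array (assuming both inputs already share the same
scale and length; the function name is mine), the carry simply propagates
through the loop:

  #include <stdint.h>

  /* Schoolbook addition over n 32-bit limbs, least significant limb first.
   * The carry is the high half of the 64-bit intermediate sum. */
  void add_limbs(const uint32_t *a, const uint32_t *b, uint32_t *s,
                 int n, uint32_t *carry_out)
  {
      uint64_t carry = 0;
      int i;

      for (i = 0; i < n; i++)
      {
          uint64_t l = (uint64_t) a[i] + b[i] + carry;

          s[i]  = (uint32_t) (l & 0xffffffff);
          carry = l >> 32;
      }
      *carry_out = (uint32_t) carry;
  }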

> PostgreSQL already has BIGINT aka INT8, which are 8 bytes, and can
> represent integers up to like 9 billion billion (eg, 9 * 10^18).
But I am thinking about numbers with a fractional part and a defined precision -
you can't safely use float for money and the like (rounding problems), and
scaling every value by some fixed factor in the application isn't nice either.
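
Just to show the kind of rounding problem I mean with float (a small
standalone example, not tied to PostgreSQL):

  #include <stdio.h>

  int main(void)
  {
      double total = 0.0;
      int    i;

      /* add "10 cents" ten times */
      for (i = 0; i < 10; i++)
          total += 0.10;

      /* 0.10 has no exact binary representation, so this typically
       * prints 0.99999999999999989 rather than 1.00000000000000000 */
      printf("%.17f\n", total);
      return 0;
  }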
