Decimal vs. Bigint memory usage

From: Gregor Vollmer
Dear List,

We are currently updating our application to support multiple database
backends; the main backend on our site will be Postgres, though.

Some of our columns exceed the limit of the 'integer' type, so we are
now discussing alternatives. Personally, I would use bigint, since it
should suffice in most cases, but decimal is under discussion too,
because it would let our modules specify their required column sizes
precisely.

We do not do any arithmetic on the columns, only saving and retrieval.
Is decimal as fast as bigint in that case?

How does decimal store the number internally? Is it a fixed size
throughout all rows, and how does it compare to bigint?

Thanks in advance,
Gregor

Re: Decimal vs. Bigint memory usage

From: Marti Raudsepp
On Thu, Sep 29, 2011 at 15:15, Gregor Vollmer
<vollmer@ekp.uni-karlsruhe.de> wrote:
> We do not do any arithmetic on the columns, only saving and retrieval.
> Is decimal as fast as bigint in that case?

It's slightly slower because numeric is passed around by pointer, not
by value. Bigint is passed by value in 64-bit Postgres builds. But that's
probably only noticeable if you're processing lots of values in a
single query, or running CREATE INDEX.
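
You can see the difference in the system catalogs — typlen -1 means
variable-length, and typbyval for int8 is true only on builds where a
Datum is 64 bits, so the exact output may vary by platform:

db=# select typname, typlen, typbyval from pg_type
     where typname in ('int8', 'numeric') order by typname;
 typname | typlen | typbyval
---------+--------+----------
 int8    |      8 | t
 numeric |     -1 | f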

Personally I'd choose bigint for efficiency reasons. But always keep
in mind that arithmetic works differently on integers and numerics:

db=# select 1::bigint/10 as x;
 x
---
 0
db=# select 1::numeric/10 as x;
           x
------------------------
 0.10000000000000000000

> How does decimal store the number internally? Is it a fixed size
> throughout all rows, and how does it compare to bigint?

Numeric (decimal) is always variable-length; the precision and scale
declared in the column type don't affect how it's stored. Bigint is
always 8 bytes.

For numbers with up to about 8 decimal digits, numeric is slightly
smaller than bigint. For larger numbers, bigint is smaller.

create table dec (i numeric);
insert into dec values(0), (1), (11), (101), (1001), (10001),
(100001), (1000001), (10000001), (100000001), (1000000001),
(10000000001), (100000000001), (1000000000001);

select pg_column_size(i), i::text from dec;
 pg_column_size |       i
----------------+---------------
              3 | 0
              5 | 1
              5 | 11
              5 | 101
              5 | 1001
              7 | 10001
              7 | 100001
              7 | 1000001
              7 | 10000001
              9 | 100000001
              9 | 1000000001
              9 | 10000000001
              9 | 100000000001
             11 | 1000000000001
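
For comparison, bigint reports the same 8 bytes regardless of
magnitude (a quick check on the same setup):

select pg_column_size(1::bigint), pg_column_size(1000000000001::bigint);
 pg_column_size | pg_column_size
----------------+----------------
              8 |              8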

Regards,
Marti

Re: Decimal vs. Bigint memory usage

From: planas
Gregor

On Thu, 2011-09-29 at 14:15 +0200, Gregor Vollmer wrote:
> Dear List,
>
> We are currently updating our application to support multiple database
> backends; the main backend on our site will be Postgres, though.
>
> Some of our columns exceed the limit of the 'integer' type, so we are
> now discussing alternatives. Personally, I would use bigint, since it
> should suffice in most cases, but decimal is under discussion too,
> because it would let our modules specify their required column sizes
> precisely.
>
> We do not do any arithmetic on the columns, only saving and retrieval.
> Is decimal as fast as bigint in that case?
>
> How does decimal store the number internally? Is it a fixed size
> throughout all rows, and how does it compare to bigint?
>
> Thanks in advance,
> Gregor

My preference is to use bigint, since you are already using integer.
Whether decimal could work instead depends on what the values mean and
how you use them. Are these values used in any queries?

--
Jay Lozier
jslozier@gmail.com

Re: Decimal vs. Bigint memory usage

From: Craig Ringer
On 29/09/2011 9:13 PM, Marti Raudsepp wrote:

>> How does decimal store the number internally?

Numeric is stored in a decimal-based format (an array of base-10000
digits), which is much slower to operate on than a native binary integer.

I'd say using a bigint is a no-brainer unless you think there's a chance
you might exceed the maximum size for a 64-bit integer.
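
For reference, bigint covers -9223372036854775808 through
9223372036854775807, and Postgres raises an error rather than silently
wrapping if you overflow it:

db=# select 9223372036854775807::bigint + 1;
ERROR:  bigint out of range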

--
Craig Ringer