create table testb as select 'true'::bool as b from generate_series(1,1000000) i;
SELECT 1000000
$ \dt+ testb
                                   List of relations
 Schema │ Name  │ Type  │ Owner  │ Persistence │ Access method │ Size  │ Description
────────┼───────┼───────┼────────┼─────────────┼───────────────┼───────┼─────────────
 public │ testb │ table │ depesz │ permanent   │ heap          │ 35 MB │
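The 35 MB is clearly not one byte per row. You can see that the value itself really is tiny by checking it directly (a quick sketch; pg_column_size() reports the size of a single value, while the table size also includes per-row tuple headers and page overhead):

```sql
-- a bool value is a fixed-length one-byte type
select pg_column_size('true'::bool);  -- 1
```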
Why is that, though? From what I gather, the answer is: performance. I don't know the low-level details, but as I understand it, processors work on data in architecture-dependent block sizes: a 64-bit processor operates on 64 bits at a time. This means that if you want to operate on an int4 value that is part of an 8-byte block, you need an extra operation to zero out the other 32 bits.
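Alignment also shows up at the table level. On typical 64-bit platforms, int8 columns are 8-byte aligned, so column order can change the row size (a sketch with hypothetical table names; exact padding depends on platform alignment rules):

```sql
-- int8 after int4 forces 4 bytes of padding before b,
-- and the row is padded again after c
create table pad_demo (a int4, b int8, c int4);

-- widest column first: a and c pack together, no padding needed
create table packed_demo (b int8, a int4, c int4);
```

Comparing the two with pg_relation_size() after loading the same data should show the second layout coming out smaller.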
For now, yes... I'm suggesting that the documentation be completed by adding a few extra sentences.
Troy.
On Mon, Feb 14, 2022, 12:51 Bruce Momjian <bruce@momjian.us> wrote:
On Fri, Feb 11, 2022 at 08:12:08PM +0000, PG Doc comments form wrote:
> The following documentation comment has been logged on the website:
>
> Page: https://www.postgresql.org/docs/13/datatype-numeric.html
> Description:
>
> The actual storage requirement is two bytes for each group of four decimal
> digits, plus three to eight bytes overhead.
>
> Please describe what 'overhead' means.
>
> I'd like to be able to calculate the data size of NUMBER(19,4). I can
> calculate 2 bytes per 4 digits... with 19 digits, I have 5 groups of 4
> digits,
>
> so the data length I seek is 5 bytes + overhead... then I'm left hanging.
> :(
Well, you can create it and then call pg_column_size():
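A session along these lines shows the idea (a sketch, not the original reply's output; the table and value are hypothetical, and the reported size depends on the stored value):

```sql
-- pg_column_size() reports the actual size of a concrete stored value,
-- overhead included, so you can measure instead of guessing
create table numtest (n numeric(19,4));
insert into numtest values (12345.6789);
select pg_column_size(n) from numtest;
```

Because numeric is variable-length, different values in the same NUMERIC(19,4) column can report different sizes — which is exactly why the documentation gives a range of overhead rather than a single number.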