From: Tom Lane
Subject: Re: Fixed length data types issue
Date:
Msg-id: 4133.1157994943@sss.pgh.pa.us
In response to: Re: Fixed length data types issue  (Gregory Stark <stark@enterprisedb.com>)
Responses: Re: Fixed length data types issue  (Gregory Stark <stark@enterprisedb.com>)
           Re: Fixed length data types issue  (mark@mark.mielke.cc)
List: pgsql-hackers

Gregory Stark <stark@enterprisedb.com> writes:
> In any case it seems a bit backwards to me. Wouldn't it be better to
> preserve bits in the case of short length words where they're precious
> rather than long ones? If we make 0xxxxxxx the 1-byte case it means ...

Well, I don't find that real persuasive: you're saying that it's
important to have a 1-byte rather than a 2-byte header for datums between
64 and 127 bytes long, which is by definition less than a 2% savings for
those values.  I think it's more important to pick bitpatterns that reduce
the number of cases heap_deform_tuple has to think about while decoding
the length of a field --- every "if" in that inner loop is expensive.

I realized this morning that if we are going to preserve the rule that
4-byte-header and compressed-header cases can be distinguished from the
data alone, there is no reason to be very worried about whether the
2-byte cases can represent the maximal length of an in-line datum.
If you want to do 16K inline (and your page is big enough for that)
you can just fall back to the 4-byte-header case.  So there's no real
disadvantage if the 2-byte headers can only go up to 4K or so.  This
gives us some more flexibility in the bitpattern choices.
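
To make that fallback concrete, the writer side is just a cascade.  A toy
sketch, with made-up names and assuming the 1-byte form tops out around
63 bytes and the 2-byte form around 4K:

#include <stddef.h>

#define ONE_BYTE_HDR_MAX   63           /* assumed 1-byte-header limit */
#define TWO_BYTE_HDR_MAX   (4 * 1024)   /* assumed 2-byte-header limit */

/* Pick the smallest header format able to represent an in-line datum of
 * the given total length; anything the short forms can't express simply
 * falls back to the plain 4-byte header. */
static size_t
choose_header_size(size_t total_len)
{
    if (total_len <= ONE_BYTE_HDR_MAX)
        return 1;
    if (total_len <= TWO_BYTE_HDR_MAX)
        return 2;
    return 4;
}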

Another thought that occurred to me is that if we preserve the
convention that a length word's value includes itself, then for a
1-byte header the bit pattern 10000000 is meaningless --- the count
has to be at least 1.  So one trick we could play is to take over
this value as the signal for "toast pointer follows", with the
assumption that the tuple-decoder code knows a-priori how big a
toast pointer is.  I am not real enamored of this, because it certainly
adds one case to the inner heap_deform_tuple loop and it'll give us
problems if we ever want more than one kind of toast pointer.  But
it's a possibility.
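
Spelled out (a toy sketch, assuming the 1-byte form keeps the count,
including itself, in its low 7 bits): a writer can never produce the
byte 10000000, which is what leaves it free as a marker:

#include <assert.h>
#include <stdint.h>

#define TOAST_MARKER  0x80  /* 10000000: never emitted as a real 1-byte length word */

/* Build a 1-byte length word; total_len counts the header byte itself,
 * so it is always at least 1 and the result is never TOAST_MARKER. */
static uint8_t
make_1byte_header(unsigned total_len)
{
    assert(total_len >= 1 && total_len <= 0x7F);
    return (uint8_t) (0x80 | total_len);
}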

Anyway, a couple of encodings that I'm thinking about now involve
limiting uncompressed data to 1G (same as now), so that we can play
with the first 2 bits instead of just 1:

00xxxxxx    4-byte length word, aligned, uncompressed data (up to 1G)
01xxxxxx    4-byte length word, aligned, compressed data (up to 1G)
100xxxxx    1-byte length word, unaligned, TOAST pointer
1010xxxx    2-byte length word, unaligned, uncompressed data (up to 4K)
1011xxxx    2-byte length word, unaligned, compressed data (up to 4K)
11xxxxxx    1-byte length word, unaligned, uncompressed data (up to 63 bytes)

or

00xxxxxx    4-byte length word, aligned, uncompressed data (up to 1G)
010xxxxx    2-byte length word, unaligned, uncompressed data (up to 8K)
011xxxxx    2-byte length word, unaligned, compressed data (up to 8K)
10000000    1-byte length word, unaligned, TOAST pointer
1xxxxxxx    1-byte length word, unaligned, uncompressed data (up to 127 bytes)    (xxxxxxx not all zero)

This second choice allows longer datums in both the 1-byte and 2-byte
header formats, but it hardwires the length of a TOAST pointer and
requires four cases to be distinguished in the inner loop; the first
choice only requires three cases, because TOAST pointer and 1-byte
header can be handled by the same rule "length is low 6 bits of byte".
The second choice also loses the ability to store in-line compressed
data above 8K, but that's probably an insignificant loss.
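
To make the three-versus-four comparison concrete, here is roughly what
the two inner-loop decoders would look like.  The names and bit masks are
mine, multi-byte lengths are written most-significant-byte-first purely
for readability, and alignment and byte-order details are glossed over;
this is a sketch, not proposed patch code:

#include <stdint.h>

#define TOAST_POINTER_SIZE 20   /* placeholder: decoder must know this up front */

/* First layout: only three cases in the hot path. */
static uint32_t
decode_length_option1(const uint8_t *p)
{
    uint8_t hdr = p[0];

    if ((hdr & 0x80) == 0)          /* 00xxxxxx / 01xxxxxx: 4-byte word, up to 1G */
        return ((uint32_t) (hdr & 0x3F) << 24) |
               ((uint32_t) p[1] << 16) |
               ((uint32_t) p[2] << 8) | p[3];
    if ((hdr & 0xE0) == 0xA0)       /* 1010xxxx / 1011xxxx: 2-byte word, up to 4K */
        return ((uint32_t) (hdr & 0x0F) << 8) | p[1];
    /* 100xxxxx (TOAST pointer) and 11xxxxxx (short datum) share one rule:
     * the length is the low 6 bits of the byte. */
    return hdr & 0x3F;
}

/* Second layout: four cases, plus a hardwired TOAST pointer size. */
static uint32_t
decode_length_option2(const uint8_t *p)
{
    uint8_t hdr = p[0];

    if ((hdr & 0xC0) == 0x00)       /* 00xxxxxx: 4-byte word, up to 1G */
        return ((uint32_t) (hdr & 0x3F) << 24) |
               ((uint32_t) p[1] << 16) |
               ((uint32_t) p[2] << 8) | p[3];
    if ((hdr & 0xC0) == 0x40)       /* 010xxxxx / 011xxxxx: 2-byte word, up to 8K */
        return ((uint32_t) (hdr & 0x1F) << 8) | p[1];
    if (hdr == 0x80)                /* 10000000: TOAST pointer follows */
        return TOAST_POINTER_SIZE;
    return hdr & 0x7F;              /* 1xxxxxxx, xxxxxxx != 0: up to 127 bytes */
}

The extra test for 10000000 and the hardwired pointer size in the second
version are exactly the costs mentioned above.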

There's more than one way to do it ...
        regards, tom lane

