I wrote:
> That's a fair question. I did a very, very simple hack to replace the
> item offsets with item lengths -- it turns out that mostly requires
> removing some code that changes lengths to offsets ;-). I then loaded
> up Larry's example of a noncompressible JSON value, and compared
> pg_column_size(), which is just about the right thing here since it
> reports datum size after compression. Remembering that the textual
> representation is 12353 bytes:
> json:                   382 bytes
> jsonb, using offsets: 12593 bytes
> jsonb, using lengths:   406 bytes
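
For context, each child item in a jsonb container is described by a
4-byte entry word, and the hack quoted above changes only whether that
word records the item's ending offset or its length. Here is a toy,
standalone illustration (made-up names, nothing from the actual jsonb
code) of why that one representation choice matters so much to pglz:

/*
 * Toy model: an array of N identical 6-byte strings, each described
 * by one 4-byte entry word, roughly as in a jsonb container header.
 */
#include <stdio.h>
#include <stdint.h>

#define N 100
#define ITEMLEN 6

int
main(void)
{
    uint32_t offsets[N];
    uint32_t lengths[N];
    int      i;

    for (i = 0; i < N; i++)
    {
        offsets[i] = (uint32_t) ((i + 1) * ITEMLEN); /* 6, 12, 18, ... */
        lengths[i] = ITEMLEN;                        /* 6, 6, 6, ... */
    }

    /*
     * pglz, like any LZ-family compressor, saves space by reusing byte
     * sequences seen earlier in the input.  The lengths array is one
     * 4-byte pattern repeated N times and collapses almost completely;
     * in the offsets array no entry word ever repeats, leaving little
     * for the history matcher to work with.
     */
    printf("offsets: %u %u %u ...\n",
           (unsigned) offsets[0], (unsigned) offsets[1], (unsigned) offsets[2]);
    printf("lengths: %u %u %u ...\n",
           (unsigned) lengths[0], (unsigned) lengths[1], (unsigned) lengths[2]);
    return 0;
}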
Oh, one more result: if I leave the representation alone, but change
the compression parameters to set first_success_by to INT_MAX, this
value takes up 1397 bytes. So that's better, but still more than a
3X penalty compared to using lengths. (Admittedly, this test value
probably is an outlier compared to normal practice, since it's a hundred
or so repetitions of the same two strings.)
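
For reference, first_success_by is a field of pglz's PGLZ_Strategy
struct (pg_lzcompress.c): the compressor abandons the attempt if it has
found no match within that many input bytes, 1024 by default. A sketch
of the tweak described above -- the variable name is invented here, and
the other field values mirror the default strategy, so check your
branch's sources before trusting the exact numbers:

#include <limits.h>
#include "common/pg_lzcompress.h"   /* under utils/ in older branches */

/*
 * Same as the built-in default strategy except that first_success_by
 * is raised from 1024 to INT_MAX, so pglz keeps looking for matches
 * across the whole input instead of giving up when the leading
 * kilobyte (presumably the hard-to-compress offset words, in this
 * test) yields nothing.
 */
static const PGLZ_Strategy strategy_never_give_up = {
    32,        /* min_input_size: don't try below 32 bytes */
    INT_MAX,   /* max_input_size: no upper limit */
    25,        /* min_comp_rate: demand at least 25% savings */
    INT_MAX,   /* first_success_by: never give up early */
    128,       /* match_size_good */
    10         /* match_size_drop */
};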
regards, tom lane