Hi!
Please do not use top-posting, i.e. the reply style where you quote the whole message below your response. It makes
reading the archives harder.
> On 24 June 2019, at 7:53, Binguo Bao <djydewang@gmail.com> wrote:
>
>> This is not correct: L bytes of compressed data cannot always be decoded into at least L bytes of data. At worst
>> we have one control byte per 8 literal bytes. This means at most we need (L*9 + 8) / 8 bytes with the current
>> pglz format.
>
> Good catch! I've corrected the related code in the patch.
> ...
> <0001-Optimize-partial-TOAST-decompression-2.patch>
I've taken a look at the code.
I think we should extract a function for the computation of max_compressed_size and put it somewhere along with the
pglz code, just in case someone changes something about pglz later, so that they do not forget about this assumption
of the compression format.
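Something like this, just as a sketch (the function name and placement are mine, not from your patch), encoding only
the format assumption quoted above:

/*
 * Hypothetical helper: worst-case number of compressed bytes needed to
 * decode at least rawsize bytes of data, given the current pglz format
 * assumption of at worst one control byte per 8 literal bytes.
 */
static int32
pglz_max_compressed_size(int32 rawsize)
{
	return (rawsize * 9 + 8) / 8;
}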
Also, I suggest just using 64-bit computation to avoid overflows. And I think it is worth checking whether
max_compressed_size covers the whole data, and using the minimum of (max_compressed_size, uncompressed_data_size).
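For example, the sketch above could be refined like this (again only an illustration; Min() is the macro from c.h,
and the argument names are hypothetical):

/*
 * Do the arithmetic in 64 bit so that rawsize * 9 cannot overflow int32,
 * and clamp the result to the size of the whole data, per the min()
 * suggestion above.
 */
static int32
pglz_max_compressed_size(int32 rawsize, int32 uncompressed_data_size)
{
	int64		max_compressed_size = ((int64) rawsize * 9 + 8) / 8;

	return (int32) Min(max_compressed_size, (int64) uncompressed_data_size);
}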
Also, you declared needsize and max_compressed_size too far from where they are used, but that will be solved by the
function extraction anyway.
Thanks!
Best regards, Andrey Borodin.