On Mon, Mar 22, 2021 at 8:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Dilip Kumar <dilipbalaut@gmail.com> writes:
> > On Mon, Mar 22, 2021 at 5:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> Also, after studying the documentation for LZ4_decompress_safe
> >> and LZ4_decompress_safe_partial, I realized that liblz4 is also
> >> counting on the *output* buffer size to not be a lie. So we
> >> cannot pass it a number larger than the chunk's true decompressed
> >> size. The attached patch resolves the issue I'm seeing.
>
> > Okay, the fix makes sense. In fact, IMHO, this fix is also a general
> > optimization: when slicelength >= VARRAWSIZE_4B_C(value), why should
> > we allocate extra memory even in the pglz case? So shall we put this
> > check directly in toast_decompress_datum_slice instead of handling it
> > at the lz4 level?
>
> Yeah, I thought about that too, but do we want to assume that
> VARRAWSIZE_4B_C is the correct way to get the decompressed size
> for all compression methods?
Yeah, VARRAWSIZE_4B_C is the macro that gets the raw (uncompressed) size
of the data stored in a compressed varlena.
> (If so, I think it would be better style to have a less opaque macro
> name for the purpose.)
Okay, I have added another, less opaque macro and come up with the
attached patch.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com