------- Original Message -------
On Tuesday, May 9th, 2023 at 2:54 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:
>
>
> On 5/9/23 00:10, Michael Paquier wrote:
>
> > On Mon, May 08, 2023 at 08:00:39PM +0200, Tomas Vondra wrote:
> >
> > > The LZ4Stream_write() forgot to move the pointer to the next chunk, so
> > > it was happily compressing the initial chunk over and over. A bit of an
> > > embarrassing oversight :-(
> > >
> > > The custom format calls WriteDataToArchiveLZ4(), which was correct.
> > >
> > > The attached patch fixes this for me.
> >
> > Ouch. So this was corrupting the dumps on the compression side when
> > trying to write more than two chunks at once, not in the decompression
> > steps. That addresses the issue here as well, thanks!
>
>
> Yeah. Thanks for the report; this should have been found during review.
Thank you both for looking. A small consolation is that there are now
tests for this case.
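
For anyone skimming the thread, here is a minimal standalone sketch of
the chunked-write pattern in question (this is not the pg_dump code;
stream_write(), write_chunk() and CHUNK_SIZE are made-up stand-ins for
LZ4Stream_write(), the per-chunk compress-and-write step and the real
chunk size):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE 4    /* deliberately tiny, to force several chunks */

/* Stand-in for the per-chunk compress-and-write step. */
static void
write_chunk(const char *ptr, size_t len)
{
    printf("chunk: %.*s\n", (int) len, ptr);
}

/* Chunked write loop; the bug was the missing "ptr += len" below. */
static void
stream_write(const char *ptr, size_t remaining)
{
    while (remaining > 0)
    {
        size_t  len = (remaining < CHUNK_SIZE) ? remaining : CHUNK_SIZE;

        write_chunk(ptr, len);

        ptr += len;         /* the one-line fix: advance to the next chunk */
        remaining -= len;
    }
}

int
main(void)
{
    const char *payload = "abcdefghijkl";

    stream_write(payload, strlen(payload));
    return 0;
}

Without the "ptr += len" line the loop keeps handing write_chunk() the
same initial bytes, which is exactly the corruption seen once the input
spans more than one chunk.
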
Moving on to the other open item for this, please find attached v2 of
the patch, as requested.
Cheers,
//Georgios
>
>
> regards
>
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company