Thread: Not able to restore database - error: could not decompress data: Allocation error : not enough memory

Our database consists of 408032 blobs (pg_largeobject).

The backup was made using zstd and weighs 70 GB.

While restoring, the memory used increases steadily (from 5 MB to 5 GB – I have 8 GB of RAM; after reaching 5 GB it stays at 5 GB for some time, maybe swapping) and then pg_restore crashes with the error below.

By that point it had restored about 48,000 blobs.
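(If the growth is per object, 5 GB spread over ~48,000 blobs works out to roughly 100 kB of memory retained for each restored large object.)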

 

pg_restore: restoring large object with OID 169685
pg_restore: restoring large object with OID 169686
pg_restore: restoring large object with OID 169687
pg_restore: restoring large object with OID 169688
pg_restore: restoring large object with OID 169689
pg_restore: restoring large object with OID 169690
pg_restore: restoring large object with OID 169691
pg_restore: restoring large object with OID 169692
pg_restore: restoring large object with OID 169693
pg_restore: restoring large object with OID 169694
pg_restore: restoring large object with OID 169695
pg_restore: restoring large object with OID 169696
pg_restore: restoring large object with OID 169697
pg_restore: restoring large object with OID 169698
pg_restore: restoring large object with OID 169699
pg_restore: error: could not decompress data: Allocation error : not enough memory

 

This was checked with the original pg_restore.exe and libzstd.dll, as well as with 32-bit and 64-bit builds compiled with MinGW, and with several different libzstd.dll versions – without success.
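
For reference, a minimal sketch of the libzstd streaming loop involved (only an illustration of the API, not pg_restore's actual code; decompress_one is a made-up helper): if the calling code allocates a new ZSTD_DStream for every object and never frees it, memory grows with each restored blob no matter which libzstd.dll is installed, whereas allocating once and reusing it stays flat.

#include <stdio.h>
#include <zstd.h>

/* Sketch only: decompress one zstd stream from 'in' to 'out',
 * reusing a caller-provided ZSTD_DStream. */
static int
decompress_one(ZSTD_DStream *ds, FILE *in, FILE *out)
{
    char    inbuf[1 << 16];
    char    outbuf[1 << 16];
    size_t  nread;

    ZSTD_initDStream(ds);       /* resets the stream; no new allocation */

    while ((nread = fread(inbuf, 1, sizeof inbuf, in)) > 0)
    {
        ZSTD_inBuffer zin = { inbuf, nread, 0 };

        while (zin.pos < zin.size)
        {
            ZSTD_outBuffer zout = { outbuf, sizeof outbuf, 0 };
            size_t         ret = ZSTD_decompressStream(ds, &zout, &zin);

            if (ZSTD_isError(ret))
                return -1;      /* corrupt data or allocation failure */
            fwrite(outbuf, 1, zout.pos, out);
        }
    }
    return 0;
}

int
main(void)
{
    ZSTD_DStream *ds = ZSTD_createDStream();    /* allocate once ... */
    int           rc;

    if (ds == NULL)
        return 1;
    rc = decompress_one(ds, stdin, stdout);     /* ... reuse for every object ... */
    ZSTD_freeDStream(ds);                       /* ... and free once */
    return rc ? 1 : 0;
}

(Compiles with e.g. "gcc test.c -lzstd".) If instead a fresh DStream, or its output buffer, were allocated per object and never freed, memory would grow with every restored blob.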

 

Dump: pg_dump.exe --format=custom --compress=zstd:4 --file=zstd.zstd  dbname=TEST

Restore: pg_restore -d TEST zstd.zstd

 

Best regards

Thomas

Tomasz Szypowski <tomasz.szypowski@asseco.pl> writes:
> Our database consists of 408032 blobs (pg_largeobject).
> The backup was made using zstd and weighs 70 GB.
> While restoring, the memory used increases steadily (from 5 MB to 5 GB - I have 8 GB of RAM; after reaching
> 5 GB it stays at 5 GB for some time, maybe swapping) and then pg_restore crashes with the error below.

Yeah, leak reproduced here.  Apparently it's specific to the zstd
code path, because I don't see it with the default compression
method.  Should be easy to fix (awaiting valgrind results),
but in the meantime just use default compression.
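
For example, something along these lines (with the custom format, pg_dump falls back
to its default compression, normally gzip, when no --compress option is given):

    pg_dump.exe --format=custom --file=default.dump dbname=TEST
    pg_restore -d TEST default.dump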

Thanks for the report!

            regards, tom lane



Thank you for the quick response.
Do you think it will be fixed within a few months? Next year we plan to upgrade our clients to 17, and zstd compression is about two times faster than gzip and produces backups that are about 20% smaller.

Regards
Thomas





Tomasz Szypowski <tomasz.szypowski@asseco.pl> writes:
> Do you think it will be fixed within a few months? Next year we plan to upgrade our clients to 17, and zstd
> compression is about two times faster than gzip and produces backups that are about 20% smaller.

The fix will be in February's releases.

            regards, tom lane