Thank you for the quick response.
Do you think it will be fixed within a few months? Next year we plan to upgrade our clients to PostgreSQL 17, and zstd
compression is about two times faster than gzip and produces backups that are about 20% smaller.
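For reference, we take and restore the backups with commands along these lines (the database name and compression
level here are only illustrative):

    pg_dump --format=custom --compress=zstd:3 --file=backup.dump mydb
    pg_restore --dbname=mydb backup.dump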
Regards
Thomas
-----Original Message-----
From: Tom Lane <tgl@sss.pgh.pa.us>
Sent: Wednesday, December 18, 2024 3:22 AM
To: Tomasz Szypowski <tomasz.szypowski@asseco.pl>
Cc: pgsql-bugs@lists.postgresql.org
Subject: Re: Not able to restore database - error: could not decompress data: Allocation error : not enough memory
Tomasz Szypowski <tomasz.szypowski@asseco.pl> writes:
> Our database consists of 408032 blobs (pg_largeobject).
> The backup was made using zstd and it weighs 70GB.
> While restoring, the memory used increases steadily (from 5MB to 5GB; I have 8GB of RAM). After reaching 5GB it
> stays there for some time, maybe swapping, and then it crashes with the error.
Yeah, leak reproduced here. Apparently it's specific to the zstd code path, because I don't see it with the default
compression method. Should be easy to fix (awaiting valgrind results), but in the meantime just use default
compression.
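For the archives: the classic shape of such a leak is per-object decompression state that gets allocated but never
released, so memory grows with the object count exactly as you describe. A minimal illustrative sketch against
libzstd's streaming API (this is NOT the actual pg_dump code, just the failure pattern):

    #include <stdio.h>
    #include <stdlib.h>
    #include <zstd.h>

    void
    restore_one_object(const void *src, size_t src_size)
    {
        /* Allocated fresh for every large object that is restored */
        ZSTD_DStream *ds = ZSTD_createDStream();
        char       *out = malloc(ZSTD_DStreamOutSize());
        ZSTD_inBuffer in = {src, src_size, 0};

        ZSTD_initDStream(ds);
        while (in.pos < in.size)
        {
            ZSTD_outBuffer ob = {out, ZSTD_DStreamOutSize(), 0};
            size_t      ret = ZSTD_decompressStream(ds, &ob, &in);

            if (ZSTD_isError(ret))
            {
                fprintf(stderr, "could not decompress data: %s\n",
                        ZSTD_getErrorName(ret));
                exit(1);
            }
            /* ... hand ob.pos bytes from "out" to the archive writer ... */
        }
        /* BUG: no ZSTD_freeDStream(ds) and no free(out); with 408032
         * blobs the leaked contexts and buffers add up to gigabytes. */
    }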
Thanks for the report!
regards, tom lane