Our database contains 408032 large objects (pg_largeobject).
The backup was made with zstd compression and weighs about 70 GB.
During the restore, memory usage grows steadily from about 5 MB to 5 GB (I have 8 GB of RAM; after reaching 5 GB it stays there for a while, possibly swapping) and then pg_restore crashes with the error below.
By that point it has restored about 48000 large objects.
pg_restore: restoring large object with OID 169685
pg_restore: restoring large object with OID 169686
pg_restore: restoring large object with OID 169687
pg_restore: restoring large object with OID 169688
pg_restore: restoring large object with OID 169689
pg_restore: restoring large object with OID 169690
pg_restore: restoring large object with OID 169691
pg_restore: restoring large object with OID 169692
pg_restore: restoring large object with OID 169693
pg_restore: restoring large object with OID 169694
pg_restore: restoring large object with OID 169695
pg_restore: restoring large object with OID 169696
pg_restore: restoring large object with OID 169697
pg_restore: restoring large object with OID 169698
pg_restore: restoring large object with OID 169699
pg_restore: error: could not decompress data: Allocation error : not enough memory
This was checked with the original pg_restore.exe and libzstd.dll, as well as with 32- and 64-bit builds compiled with MinGW. Several libzstd.dll versions were tried, without success.
Dump: pg_dump.exe --format=custom --compress=zstd:4 --file=zstd.zstd dbname=TEST
Restore: pg_restore -d TEST zstd.zstd
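Since pg_restore can work from an edited table of contents (-l to dump it, -L to restore from it, with lines commented out by a leading ";" skipped), one untested idea for narrowing this down is to restore the large objects in smaller batches. The file names, batch size, and the simulated TOC below are purely illustrative; only the filtering step is run here, the actual pg_restore calls are left as comments:

```shell
# 1) Dump the archive's table of contents (not run here):
#    pg_restore -l zstd.zstd > full.list
# Simulate a small TOC list so the filtering step is runnable stand-alone;
# the entry layout mimics pg_restore -l output, OIDs are illustrative:
cat > full.list <<'EOF'
;
; Archive created at (illustrative)
;
215; 1259 16386 TABLE public t postgres
3500; 2613 169685 BLOB - 169685 postgres
3501; 2613 169686 BLOB - 169686 postgres
3502; 2613 169687 BLOB - 169687 postgres
EOF
# 2) Keep at most 2 BLOB entries per pass; comment out the rest with ";"
#    so pg_restore skips them on this pass:
awk '/ BLOB /{if (++n > 2) {print ";" $0; next}} {print}' full.list > batch.list
grep -c '^;.* BLOB ' batch.list   # → 1 entry deferred to a later pass
# 3) Restore just this batch (not run here):
#    pg_restore -L batch.list -d TEST zstd.zstd
```

If memory stays bounded per batch, that would point at per-blob state accumulating inside a single pg_restore run rather than at any one oversized blob.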
Best regards
Thomas