On Fri, 2023-10-20 at 12:26 +0200, Janning Vygen wrote:
> I don't know if the PG developers are aware of this:
>
> https://serverfault.com/questions/1081642/postgresql-13-speed-up-pg-dump-to-5-minutes-instead-of-70-minutes
>
> But this question is quite famous and many users like the solution.
> So maybe you can fix it by changing the pg_dump process to not compress
> any bytea data.
Doesn't sound like a bug to me.
Compression is determined when "pg_dump" starts. How should it guess that
some table has a binary column containing already-compressed data? Even if it
could, I wouldn't feel comfortable with a "pg_dump" that has enough artificial
intelligence to decide this automatically for me (and get it wrong occasionally).
In addition, I don't think this problem is limited to compressed binary
data. In my experience, compressed dumps are always slower than uncompressed
ones. It is a speed vs. size trade-off.
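If dump speed matters more than dump size, you can simply disable
compression when you take the dump; something like this, with a
hypothetical database name "mydb":

  # custom format with compression turned off ("-Z 0")
  pg_dump -Fc -Z 0 -f mydb.dump mydb

With "-Z 0" you keep the other advantages of the custom format, like
selective and parallel restore with "pg_restore", but skip the
compression overhead entirely.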
By the way, PostgreSQL v16 introduced "lz4" and "zstd" compression
in "pg_dump", which is much faster.
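For example, on v16 or later (again with a hypothetical database "mydb"):

  # custom format compressed with zstd
  pg_dump -Fc --compress=zstd -f mydb.dump mydb

  # directory format compressed with lz4
  pg_dump -Fd --compress=lz4 -f mydb.dir mydb

Note that these "--compress" methods are only available from v16 on;
older versions support only gzip-style compression levels.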
Yours,
Laurenz Albe