On 7/3/2014 11:47 AM, Eduardo Morras wrote:
> No, there's nothing wrong. All transparently compressed objects stored
> in the database (TOAST, large objects, etc.) are transparently
> decompressed when pg_dump accesses them, and then you gzip them again.
> I don't know why it doesn't dump the compressed data directly.
That sounds odd, but if pg_dump decompresses the large objects and I then
gzip them on backup, doesn't more or less the same thing happen in reverse
when I pg_restore them? I mean, I gunzip the backup, and then the server
must recompress the large objects when pg_restore writes them back.
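For clarity, the round trip I mean is roughly the following (the database
name, split size, and exact flags here are just placeholders, not my real
invocation):

```shell
# Dump: pg_dump emits large objects decompressed; gzip recompresses
# everything on the way out, and split breaks the stream into segments.
pg_dump -Fc -Z 0 mydb | gzip | split -b 1G - mydb.dump.gz.

# Restore: reassemble and gunzip the segments, then pg_restore writes the
# large objects back; the server recompresses them (TOAST) on insert.
cat mydb.dump.gz.* | gunzip | pg_restore -d mydb
```

So both directions do one decompress and one compress; that's why the
asymmetry in speed surprises me.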
It just seems odd to me that pg_dump is slower than pg_restore. Most of
the grumbling I read suggests the opposite: that pg_restore is too slow.
I have noticed that the last split-file segment often appears to be
done (no further file modifications) while pg_dump is still running,
often for another 20 minutes or so, before some final bit is at last
written. It's as if pg_dump is computing something quite slow at the
end. There's also a delay at startup before any data is written, but
that's generally one to two minutes at most.