The images are stored in whatever format our users load them as, so we
don't have any control over their compression or lack thereof.
I ran pg_dump with the arguments you suggested, and my 4 GB test table
finished backing up in about 25 minutes, which seems great. The only
problem is that the resulting backup file was over 9 GB. Using -Z2
resulted in a 55-minute, 6 GB backup.
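For reference, the commands were roughly these (my original command,
minus -i as you suggested, with only the compression level changed):

pg_dump -h localhost -p 5432 -U postgres -F c -Z0 -v -f "backupTest.backup" -t "public"."images" db_name
pg_dump -h localhost -p 5432 -U postgres -F c -Z2 -v -f "backupTest.backup" -t "public"."images" db_name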
Here's my interpretation of those results: Postgres compresses the image
data in the TOAST tables. During the dump, pg_dump decompresses that
data, and if compression is turned on, recompresses it when writing the
backup file. Please correct me if I'm wrong there.
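To sanity-check that, I was thinking of comparing raw vs. stored sizes
for the bytea column, along these lines (image_data is a placeholder
for our actual column name):

-- octet_length = uncompressed size; pg_column_size = on-disk (possibly TOAST-compressed) size
SELECT avg(octet_length(image_data))   AS raw_bytes,
       avg(pg_column_size(image_data)) AS stored_bytes
FROM   images;

If stored_bytes comes out close to raw_bytes, then TOAST isn't gaining
much on the images in the first place.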
If we can't find a workable balance using pg_dump, then it looks like
our next best alternative may be a utility to handle filesystem backups,
which is a little scary for on-site, user-controlled servers.
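If it comes to that, I assume we'd be looking at something like a base
backup of the data directory, roughly sketched below (and presumably
requiring WAL archiving to be configured so the copy is restorable):

SELECT pg_start_backup('nightly');  -- label is arbitrary
-- copy the data directory at the filesystem level (tar, rsync, etc.)
SELECT pg_stop_backup();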
Ryan
-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Saturday, April 12, 2008 9:46 PM
To: Ryan Wells
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Slow pg_dump
"Ryan Wells" <ryan.wells@soapware.com> writes:
> We have several tables that are used to store binary data as bytea (in
> this example image files),
Precompressed image formats, no doubt?
> pg_dump -i -h localhost -p 5432 -U postgres -F c -v -f
> "backupTest.backup" -t "public"."images" db_name
Try it with -Z0, or even drop the -Fc completely, since it's certainly
not very helpful on a single-table dump. Re-compressing already
compressed data is not only useless but impressively slow ...
Also, drop the -i, that's nothing but a foot-gun.
regards, tom lane