Thanks Joshua. Even if we have a long transaction running on the database, pg_dump shouldn't be affected, right? Since it doesn't block readers or writers.
Before getting resources to set up a standby server, I just want to make sure that we won't hit this issue on the standby too.
Hi all, we tried pg_dump with the compression level set to zero on a 1 TB database. The dump rate started at 250 GB/hr and gradually dropped to 30 GB/hr over a two-hour span. We might see this behavior on a standby server too, which would be undesirable.
Any explanation for why we see this behavior?
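For reference, a custom-format dump with compression disabled looks like the sketch below; the database name and output path are placeholders, not taken from the thread:

```
# Custom-format dump with compression turned off (-Z 0).
# "mydb" and the output path are hypothetical.
pg_dump -Fc -Z 0 -d mydb -f /backups/mydb.dump
```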
Because you have a long-running transaction that is causing bloat to pile up. Using pg_dump on a production database of that size is a non-starter. You need a warm/hot standby or a snapshot to do this properly.
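One way to confirm that diagnosis is to look for old open transactions in pg_stat_activity; a sketch, run against the production database (the one-hour threshold is an assumption, adjust to taste):

```sql
-- Transactions open longer than an hour hold back VACUUM,
-- so dead rows accumulate (bloat) while pg_dump runs.
SELECT pid, usename, state,
       now() - xact_start AS xact_age,
       left(query, 60)    AS current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_start;
```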
JD
--
Command Prompt, Inc. - http://www.commandprompt.com/ - 503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
"If we send our children to Caesar for their education, we should not be surprised when they come back as Romans."