Thread: Running pg_dump from a slave server
Hi guys,

I'm using PostgreSQL 9.2 and I have one master and one slave with streaming replication.

Currently I have a backup script that runs daily on the master and generates a dump file of about 30GB. I changed the script to run against the slave instead of the master, and now I'm getting these errors:

pg_dump: Dumping the contents of table "invoices" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: canceling statement due to conflict with recovery
DETAIL: User was holding a relation lock for too long.

Isn't that possible? Can't I run pg_dump from a slave?

Cheers
Patrick
On Wed, Aug 17, 2016 at 10:34 AM Patrick B <patrickbakerbr@gmail.com> wrote:

> pg_dump: Error message from server: ERROR: canceling statement due to conflict with recovery
> DETAIL: User was holding a relation lock for too long.

Looks like while your pg_dump session was trying to fetch the data, someone fired a DDL, REINDEX or VACUUM FULL on the master database.

> Isn't that possible? Can't I run pg_dump from a slave?

Well, you can do that, but it has some limitations. If you do this quite often, it would be better to have a dedicated standby just for taking backups/pg_dumps; on that standby you can set max_standby_streaming_delay and max_standby_archive_delay to -1. I would not recommend doing this if you use your standby for other read queries or for high availability.

Another option would be to avoid the queries that take an exclusive lock on the master database while pg_dump is running.
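To illustrate, a minimal sketch of those two settings on a standby kept only for backups; the data directory path is an example and assumes postgresql.conf lives inside it:

PGDATA=/var/lib/pgsql/9.2/data

cat >> "$PGDATA/postgresql.conf" <<'EOF'
# Never cancel standby queries because of conflicting WAL replay
max_standby_streaming_delay = -1
max_standby_archive_delay   = -1
EOF

# Both settings take effect on a reload; no restart is needed.
pg_ctl reload -D "$PGDATA"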
2016-08-17 15:31 GMT+12:00 Sameer Kumar <sameer.kumar@ashnik.com>:

> Well, you can do that, but it has some limitations. If you do this quite often, it would be better to have a dedicated standby just for taking backups/pg_dumps; on that standby you can set max_standby_streaming_delay and max_standby_archive_delay to -1.

Sameer, yeah, I was just reading this thread: https://www.postgresql.org/message-id/AANLkTinLg%2BbpzcjzdndsnGGNFC%3DD1OsVh%2BhKb85A-s%3Dn%40mail.gmail.com

Well, I thought it was possible, but as the DB is big this dump takes a long time, so it won't work. I could also increase those parameters you mentioned, but I won't do that as I only have one slave.
cheers
But do you have statements that cause exclusive locks? Ignoring them in an OLTP system won't make your life any easier, even if avoiding recovery conflicts is your sole goal. If you decide to run pg_dump from the master instead, it will block those exclusive-lock statements. That can cause delays, deadlocks, livelocks, etc., and it might take you a while to figure out what is going on. I would try to find out who is creating the exclusive locks, and why.
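As an illustration of what to look for, a query along these lines (run on the master; the connection options and database name are placeholders) lists sessions currently holding an ACCESS EXCLUSIVE lock:

psql -U postgres -d mydb -c "
SELECT a.pid, a.usename, l.relation::regclass AS locked_table, a.state, a.query
FROM   pg_locks l
JOIN   pg_stat_activity a ON a.pid = l.pid
WHERE  l.mode = 'AccessExclusiveLock'
AND    l.granted;"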
On Wed, Aug 17, 2016 at 1:31 PM, Sameer Kumar <sameer.kumar@ashnik.com> wrote:

> Another option would be to avoid the queries that take an exclusive lock on the master database while pg_dump is running.

Another workaround could be to pause recovery, execute the pg_dump, and then resume the recovery process; not sure if that has been considered. You can execute "pg_xlog_replay_pause()" before starting pg_dump and then execute "pg_xlog_replay_resume()" after the pg_dump process completes.
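A minimal sketch of that approach, run on the standby; the connection options, database name and output path are placeholders:

#!/bin/sh
set -e

# Pausing and resuming replay requires a superuser on 9.2.
psql -U postgres -d postgres -Atc "SELECT pg_xlog_replay_pause();"

# Resume replay even if pg_dump fails, so the standby does not keep lagging.
trap 'psql -U postgres -d postgres -Atc "SELECT pg_xlog_replay_resume();"' EXIT

pg_dump -U postgres -Fc -f /backups/mydb.dump mydb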
Regards,
Venkata B N
Fujitsu Australia
> I would try to find out who is creating the exclusive locks, and why.

Yeah! The pg_dump was already running on the master... it's been running for months. I just wanted to change it now to use the slave, but it seems I can't, right?

Exclusive locking - I probably have statements that cause this. Is there any way I could "track" them?
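One possible way to track them over time is to log them on the master; a sketch, with an example data directory path and assuming postgresql.conf lives inside it:

PGDATA=/var/lib/pgsql/9.2/data

cat >> "$PGDATA/postgresql.conf" <<'EOF'
log_statement  = 'ddl'   # log CREATE/ALTER/DROP statements
log_lock_waits = on      # log sessions that wait on a lock longer than deadlock_timeout
EOF

pg_ctl reload -D "$PGDATA"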
On Wed, Aug 17, 2016 at 12:00 PM Venkata B Nagothi <nag1010@gmail.com> wrote:

> Another workaround could be to pause recovery, execute the pg_dump, and then resume the recovery process. You can execute "pg_xlog_replay_pause()" before starting pg_dump and then execute "pg_xlog_replay_resume()" after the pg_dump process completes.

Ideally I would not prefer that if I had only one standby. If I am right, it would increase the time the standby takes to complete recovery and become active during a promotion (if I need it on a failure of the master). It may impact high availability/uptime, wouldn't it?