Thread: pg_dump streaming fails - PostgreSQL 9.2

pg_dump streaming fails - PostgreSQL 9.2

From
"drum.lucas@gmail.com"
Date:
Hi there!

I'm running this command (trying to copy a full 2 TB DB from a hot-standby server to a master test server):

ssh postgres@myslaveserver "/usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' --format=custom master_db" | /usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp --exit-on-error --verbose

Then, after 3 GB I got this error:

pg_dump: Dumping the contents of table "invoices" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  canceling statement due to conflict with recovery
DETAIL:  User was holding a relation lock for too long.
pg_dump: The command was: COPY dm.invoices (invoice_id, format_version, ts_issue, ts_ack, customer_id, code, tag, account_data, customer_data, invoice_data, invoice_items_data) TO stdout;

I've also tried the pg_dump command with the "--no-unlogged-table-data" option, but that was unsuccessful too (http://www.postgresql.org/message-id/E1TutDO-0001Hb-2v@wrigleys.postgresql.org)

DB size: 2 TB

How can I solve the problem? What's going on? Thanks

Re: pg_dump streaming fails - PostgreSQL 9.2

From
"Joshua D. Drake"
Date:
On 01/28/2016 05:23 PM, drum.lucas@gmail.com wrote:
> Hi there!
>
> I'm running this command: *(Trying to copy a full DB (2TB) from a
> hot-standby server to a master (test) server)*
>
> ssh postgres@myslaveserver "/usr/pgsql-9.2/bin/pg_dump
> --exclude-table-data='junk.*' --format=custom
> master_db" | /usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp
> --exit-on-error --verbose
>
> Then, after 3 GB I got this error:
>
> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User was holding a relation lock for too long.
> pg_dump: The command was: COPY dm.invoices
> (invoice_id, format_version, ts_issue, ts_ack, customer_id, code, tag, account_data, customer_data, invoice_data, invoice_items_data) TO stdout;
>
>     I've tried the pg_dump command even with: "--no-unlogged-table-data"
>     option, but it was unsuccessful
>     (http://www.postgresql.org/message-id/E1TutDO-0001Hb-2v@wrigleys.postgresql.org)
>
>     DB size: 2 TB
>
> How can I solve the problem? What's going on? Thanks
>

Back up everything from the master.
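For example, the same streaming pipeline pointed at the master rather than the standby (hostname is a placeholder) might look like this sketch:

```shell
# Sketch only: run pg_dump against the master instead of the hot standby,
# so WAL replay cannot cancel the long-running COPY with a recovery conflict.
# "mymasterserver" and the database names are hypothetical.
ssh postgres@mymasterserver "/usr/pgsql-9.2/bin/pg_dump \
    --exclude-table-data='junk.*' --format=custom master_db" \
  | /usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp \
      --exit-on-error --verbose
```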

JD



--
Command Prompt, Inc.                  http://the.postgres.company/
                      +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.


Sv: Re: [GENERAL] pg_dump streaming fails - PostgreSQL 9.2

From
Andreas Joseph Krogh
Date:
On Friday, 29 January 2016 at 02:30:59, Joshua D. Drake <jd@commandprompt.com> wrote:
On 01/28/2016 05:23 PM, drum.lucas@gmail.com wrote:
> Hi there!
>
> I'm running this command: *(Trying to copy a full DB (2TB) from a
> hot-standby server to a master (test) server)*
>
> ssh postgres@myslaveserver "/usr/pgsql-9.2/bin/pg_dump
> --exclude-table-data='junk.*' --format=custom
> master_db" | /usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp
> --exit-on-error --verbose
>
> Then, after 3 GB I got this error:
>
> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User was holding a relation lock for too long.
> pg_dump: The command was: COPY dm.invoices
> (invoice_id, format_version, ts_issue, ts_ack, customer_id, code, tag, account_data, customer_data, invoice_data, invoice_items_data) TO stdout;
>
>     I've tried the pg_dump command even with: "--no-unlogged-table-data"
>     option, but it was unsuccessful
>     (http://www.postgresql.org/message-id/E1TutDO-0001Hb-2v@wrigleys.postgresql.org)
>
>     DB size: 2 TB
>
> How can I solve the problem? What's going on? Thanks
>

Back up everything from the master.

JD
 
Hi.
 
I'm still getting this on a standby running 10.1 (both primary and standby are on 10.1).
 
Isn't this supposed to work? Isn't part of the point of having a hot standby like this being able to back up from it?
 
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
 

Re: Sv: Re: [GENERAL] pg_dump streaming fails - PostgreSQL 9.2

From
Jeff Janes
Date:
On Jan 9, 2018 03:49, "Andreas Joseph Krogh" <andreas@visena.com> wrote:
On Friday, 29 January 2016 at 02:30:59, Joshua D. Drake <jd@commandprompt.com> wrote:
On 01/28/2016 05:23 PM, drum.lucas@gmail.com wrote:
> Hi there!
>
> I'm running this command: *(Trying to copy a full DB (2TB) from a
> hot-standby server to a master (test) server)*
>
> ssh postgres@myslaveserver "/usr/pgsql-9.2/bin/pg_dump
> --exclude-table-data='junk.*' --format=custom
> master_db" | /usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp
> --exit-on-error --verbose
>
> Then, after 3 GB I got this error:
>
> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User was holding a relation lock for too long.
> pg_dump: The command was: COPY dm.invoices
> (invoice_id, format_version, ts_issue, ts_ack, customer_id, code, tag, account_data, customer_data, invoice_data, invoice_items_data) TO stdout;
>

 
Hi.
 
I'm still getting this on standby running 10.1 (both on primary and standby)

Surely you are not getting recovery conflicts on the master.
 
Isn't this supposed to work? 
 

You can avoid recovery conflicts by setting

max_standby_streaming_delay=-1

on the hot standby.  This might render the hot standby less useful for other purposes, in which case you might need different hot standbys for different purposes.
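In postgresql.conf on the standby, that would be something like the sketch below. Note that -1 disables query cancellation entirely, so WAL replay waits for the query and replication lag can grow unboundedly while a long dump runs:

```
# postgresql.conf on the hot standby (sketch)
# -1 = never cancel standby queries to resolve a replication conflict;
# WAL application is delayed instead, so the standby can fall behind.
max_standby_streaming_delay = -1
```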

> Isn't some of the point of having a hot-standby like this to be able to backup from it?

Not for me.  I might use one to dump one or a few tables, but if I wanted a full backup I'd use pg_basebackup, not pg_dump.
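For a full physical copy, a pg_basebackup invocation along these lines (hostname, role, and target directory are hypothetical) avoids pg_dump, and therefore recovery conflicts, entirely:

```shell
# Sketch: take a full physical base backup over the replication protocol.
# Requires a role with REPLICATION privilege and a matching pg_hba.conf
# "replication" entry; host and paths below are placeholders.
pg_basebackup -h mymasterserver -U replication_user \
    -D /var/lib/pgsql/10/backups/base_$(date +%Y%m%d) \
    --wal-method=stream --checkpoint=fast --progress
```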

Cheers,

Jeff