Re: pg_dump fails when a table is in ACCESS SHARE MODE - Mailing list pgsql-sql

From Viral Shah
Subject Re: pg_dump fails when a table is in ACCESS SHARE MODE
Date
Msg-id CAEVFvu3N4sANuPzWuvN-UmPP--mphPM3MiXMtQhgH+WctSjq5A@mail.gmail.com
In response to Re: pg_dump fails when a table is in ACCESS SHARE MODE  (Rene Romero Benavides <rene.romero.b@gmail.com>)
Responses Re: pg_dump fails when a table is in ACCESS SHARE MODE
List pgsql-sql
pg_dump works if I increase max_locks_per_transaction from its default value of 64 to 150. However, what I am trying to figure out is why it won't work at 64.
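For reference, the change that made the dump succeed can be applied like this (a sketch; 150 is the value mentioned above, and on PostgreSQL 10 this parameter only takes effect after a server restart):

```
# postgresql.conf -- requires a server restart to take effect
max_locks_per_transaction = 150
```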

Per the Postgres documentation, the formula

    max_locks_per_transaction * (max_connections + max_prepared_transactions)

determines the maximum number of objects that can be locked at any one time. Currently my database has the following values:
max_locks_per_transaction = 64 (default)
max_connections = 100
max_prepared_transactions = 0
Plugging these values in, the Postgres server can/should handle 64 * (100 + 0) = 6400 locked objects at a time.
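The arithmetic above can be sketched in shell, using the settings quoted in this thread:

```shell
# Lock-table capacity formula from the PostgreSQL docs,
# computed from the settings quoted above.
max_locks_per_transaction=64
max_connections=100
max_prepared_transactions=0

capacity=$(( max_locks_per_transaction * (max_connections + max_prepared_transactions) ))
echo "$capacity"   # 6400
```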

What is surprising is that Postgres complains of insufficient locks per transaction when only 10 processes (parallel pg_dump jobs) are running against the database while the dump is taken.
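One possible explanation (my assumption, not something confirmed in this thread): pg_dump takes an ACCESS SHARE lock on every table it dumps, and with -j the locks held by all workers count against the same shared lock table, so it is the number of lockable objects in the database, not the number of worker processes, that drives lock consumption. A rough sketch:

```shell
# Hypothetical illustration: if the database holds more lockable objects
# than the lock table can track, the dump can fail even with few workers.
# The real count could be checked with something like (not run here):
#   psql -Atc "SELECT count(*) FROM pg_class WHERE relkind IN ('r','p','S','m')"
objects=7000      # assumed object count, for illustration only
capacity=6400     # 64 * (100 + 0), from the formula above
if [ "$objects" -gt "$capacity" ]; then
  echo "lock table can overflow"
fi
```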

And to answer your question: currently, max_locks_per_transaction is 64 and shared_buffers is 30GB.

Thanks,
Viral Shah
Senior Data Analyst, Nodal Exchange LLC



On Thu, Apr 30, 2020 at 10:58 PM Rene Romero Benavides <rene.romero.b@gmail.com> wrote:
Did you try increasing max_locks_per_transaction as the hint message says? What are your current settings for max_locks_per_transaction and shared_buffers?

On Thu, Apr 30, 2020 at 5:15 PM Viral Shah <viralshah009@gmail.com> wrote:
Hello Rene,

Apologies for the late reply. To answer your question, disabling hot_standby_feedback doesn't help. Also, both my master and standby run the same PostgreSQL minor version, 10.12. One other thing I have tried is to create a standalone server and take a backup of the same database using pg_dump with parallel jobs set to 10. I made sure no processes other than the autovacuum jobs were running on the database. In spite of this, the dump fails with the following error:

pg_dump: [archiver (db)] query failed: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: [parallel archiver] a worker process died unexpectedly

Any suggestions would be appreciated. :)

Thanks
Viral Shah
Senior Data Analyst, Nodal Exchange LLC

On Tue, Feb 25, 2020 at 1:32 PM Rene Romero Benavides <rene.romero.b@gmail.com> wrote:


On Tue, Feb 25, 2020 at 10:27 AM Rene Romero Benavides <rene.romero.b@gmail.com> wrote:

On Tue, Feb 25, 2020 at 9:45 AM Viral Shah <viralshah009@gmail.com> wrote:
Hello Rene,

Since I am using pg_wal_replay_pause before my pg_dump, it doesn't throw any error. Instead, it waits for the resource to release the lock so that it can proceed with taking the dump. hot_standby_feedback is set to on on both my replica and primary.

Thanks,
Viral Shah
Data Analyst, Nodal Exchange LLC
viralshah009@gmail.com
(240) 645 7548


On Tue, Feb 25, 2020 at 10:36 AM Rene Romero Benavides <rene.romero.b@gmail.com> wrote:

On Tue, Feb 25, 2020 at 9:11 AM Viral Shah <viralshah009@gmail.com> wrote:
Hello All,

I am facing some problems while taking a logical backup of my database using pg_dump. We have set up our infrastructure such that the logical backups are taken on our DR, with the intention of avoiding substantial load on our primary server. We also pause and resume WAL replay using pg_wal_replay_pause and pg_wal_replay_resume before and after pg_dump. However, if a table is held in ACCESS SHARE mode at the time the WAL replay is paused, pg_dump cannot complete the backup.
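The workflow described above might look roughly like this (a sketch with assumed database name and paths; pg_wal_replay_pause()/pg_wal_replay_resume() are the PostgreSQL 10 names of these functions):

```
# On the DR/standby (connection details assumed)
psql -c "SELECT pg_wal_replay_pause();"
pg_dump -Fd -j 10 -f /backups/mydb.dir mydb
psql -c "SELECT pg_wal_replay_resume();"
```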

Can anyone suggest any better solution to take a logical backup using pg_dump where the table lock doesn't result in failure of the logical backup?

PS: we are using PostgreSQL 10.12

Thanks, 
Viral Shah
Data Analyst, Nodal Exchange LLC
viralshah009@gmail.com
(240) 645 7548

Hi Viral, what's the error pg_dump is throwing at you? What's the current setting of hot_standby_feedback on your replica?
--
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison


Nothing off the top of my head could explain this behavior. If possible, try disabling hot_standby_feedback on the replica (on the master it has no effect) for a bit and see if that helps. ACCESS SHARE locks shouldn't interfere with pg_dump. Are both master and slave the exact same version, including the minor release? It might be a bug; try upgrading to the latest minor release on both ends. When did you start seeing this behavior? How long have you been on PG10 and creating backups this way? Any recent configuration changes?
Since you're using PG10, you could also try logical replication; it's more reliable for this sort of thing, since activity on the master doesn't affect the slave lock-wise at all.
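A minimal sketch of what that could look like (names and connection strings are placeholders, not from this thread; in PG 10 a subscription pulls the tables listed in a publication):

```sql
-- On the source server (publication side)
CREATE PUBLICATION backup_pub FOR ALL TABLES;

-- On the subscriber
CREATE SUBSCRIPTION backup_sub
  CONNECTION 'host=primary dbname=mydb user=repuser'
  PUBLICATION backup_pub;
```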




Also, check for long-running transactions on the master. Does the replica serve as a data source for analytic applications (reports)?




--
http://pglearn.blogspot.mx/
