I have about 7400 tables in my database. When I ran a select on pg_locks while a pg_dump was in progress, pg_locks returned about 7400 rows, all originating from one process id started by pg_dump that was currently running select pg_catalog.pg_get_statisticsobjdef(). This does explain why I need to increase max_locks_per_transaction, so that there is room for all the locks pg_dump acquires while it is in progress. What I am still trying to understand is why pg_get_statisticsobjdef() would take AccessShare locks on all the base tables at once, leading to this issue.
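The query I ran against pg_locks looked roughly like this (a sketch, not the exact statement; it only uses the standard pg_locks and pg_stat_activity columns):

-- Count relation-level locks per backend and show what that backend is running
SELECT l.pid,
       l.mode,
       count(*) AS locked_relations,
       a.query  AS current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'relation'
GROUP BY l.pid, l.mode, a.query
ORDER BY locked_relations DESC;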
Viral Shah <viralshah009@gmail.com> writes:
> Per the postgres documentation,
>     max_locks_per_transaction * (max_connections + max_prepared_transactions)
> is the formula that determines the maximum number of objects that can be
> locked on the database at one time. Currently my database has the following
> values:
>     max_locks_per_transaction = 64 (default)
>     max_connections = 100
>     max_prepared_transactions = 0
> Using these values in the above formula tells us that the postgres server
> can/should handle 6400 locked objects at a time.
Right.
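For concreteness, that capacity can be computed directly from the live settings with something along these lines (just a sketch that restates the formula using current_setting()):

SELECT current_setting('max_locks_per_transaction')::int
       * (current_setting('max_connections')::int
          + current_setting('max_prepared_transactions')::int)
       AS approx_lock_table_slots;
-- With the values quoted above: 64 * (100 + 0) = 6400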
> What is surprising is why Postgres complains of insufficient locks per
> transaction if only 10 processes (parallel jobs in pg_dump) are running on
> the database while taking the dump.
They're taking more than 6400 locks, evidently. How many tables are in your database? Have you tried looking into pg_locks while the dump is running?
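For example, something along these lines (a rough sketch against the standard catalogs) would show both the table count and the relation locks currently held:

-- Ordinary and partitioned tables; pg_dump takes an AccessShareLock on each
-- table it is going to dump, all within one transaction.
SELECT count(*) FROM pg_class WHERE relkind IN ('r', 'p');

-- Granted relation-level locks, grouped by lock mode:
SELECT mode, count(*)
FROM pg_locks
WHERE locktype = 'relation' AND granted
GROUP BY mode
ORDER BY count(*) DESC;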