pg_dump with lots and lots of tables

From: Andy Colson
pg_dump is upset that my max_locks_per_transaction is too low.  I've bumped it up several times (up to 600 so far), but
I'm not sure how many it needs.

I'm merging 90 databases into a single database with 90 schemas.  Each schema can have 500-ish tables.  Do I need to
set max_locks_per_transaction to (90*500) 45,000?  Will that even work?

Will I ever need to bump up sysctl kernel.shmmax?

Oh, I'm on Slackware 64, PG 9.3.1.  I'm trying to get my db from the test box back to the live box.  For regular backups
I think I'll be switching to streaming replication.

Thanks for your time,

-Andy


Re: pg_dump with lots and lots of tables

From: Tom Lane
Andy Colson <andy@squeakycode.net> writes:
> pg_dump is upset that my max_locks_per_transaction is too low.  I've bumped it up several times (up to 600 so far),
> but I'm not sure how many it needs.

> I'm merging 90 databases into a single database with 90 schemas.  Each schema can have 500-ish tables.  Do I need to
> set max_locks_per_transaction to (90*500) 45,000?  Will that even work?

The pg_dump run will need about 45,000 locks altogether, so anything north of
45000/max_connections should work (more if you have other sessions going
on at the same time).
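
For concreteness, a minimal postgresql.conf sketch, assuming max_connections = 100
(a hypothetical value; substitute whatever your server actually uses):

    # 45000 locks needed / 100 connections = 450; round up for headroom
    # (both settings require a server restart to take effect)
    max_connections = 100
    max_locks_per_transaction = 600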

Basically the lock table is sized at max_locks_per_transaction*max_connections,
and transactions can use as many entries as they want --- there's no
attempt to hold a session to its "fair share" of the table.  The parameter
is only defined as it is to ensure that if you bump up max_connections the
lock table will get bigger automatically, so you won't starve sessions of
locks accidentally.
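
If you want to see how close you are to the limit, you can watch pg_locks from
another session while the dump is running; a rough sketch:

    -- total entries currently in the shared lock table
    SELECT count(*) FROM pg_locks;

    -- locks held per backend, to spot the pg_dump session
    SELECT pid, count(*) AS locks_held
    FROM pg_locks
    GROUP BY pid
    ORDER BY locks_held DESC;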

> Will I ever need to bump up sysctl kernel.shmmax?

If the postmaster fails to start with the larger setting, then yes.
But lock entries aren't that large so probably it won't matter.
If it does matter, and increasing shmmax is inconvenient, you could
back off shared_buffers to make room.
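
If it does come to raising shmmax, the usual Linux procedure looks something like
the sketch below (the value shown is only an illustration; size it to your machine):

    # check the current limit, in bytes
    sysctl kernel.shmmax

    # raise it for the running kernel; add the same line to /etc/sysctl.conf
    # (or your distro's equivalent) to make it persist across reboots
    sysctl -w kernel.shmmax=17179869184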

            regards, tom lane