Thread: pg_dump out of shared memory

pg_dump out of shared memory

From: tfo@alumni.brown.edu (Thomas F. O'Connell)
Date:
In using pg_dump to dump an existing postgres database, I get the
following:

pg_dump: WARNING:  out of shared memory
pg_dump: attempt to lock table <table name> failed: ERROR:  out of shared memory
HINT:  You may need to increase max_locks_per_transaction.

postgresql.conf just has the default of 1000 shared_buffers. The
database itself has thousands of tables, some of which have rows
numbering in the millions. Am I correct in thinking that, despite the
hint, it's more likely that I need to up the shared_buffers?

Or is it that pg_dump is an example of "clients that touch many
different tables in a single transaction" [from
http://www.postgresql.org/docs/7.4/static/runtime-config.html#RUNTIME-CONFIG-LOCKS]
and I actually ought to abide by the hint?
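
For reference, the relevant lines in my postgresql.conf are still at
their defaults, something like this (the second line is the setting the
hint refers to):

    shared_buffers = 1000             # default
    max_locks_per_transaction = 64    # default; sizes the shared lock table

Both require a server restart to change.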

-tfo

Re: pg_dump out of shared memory

From: Tom Lane
Date:
tfo@alumni.brown.edu (Thomas F. O'Connell) writes:
> In using pg_dump to dump an existing postgres database, I get the
> following:

> pg_dump: WARNING:  out of shared memory
> pg_dump: attempt to lock table <table name> failed: ERROR:  out of shared memory
> HINT:  You may need to increase max_locks_per_transaction.

> Am I correct in thinking that, despite the
> hint, it's more likely that I need to up the shared_buffers?

No.

            regards, tom lane

Re: pg_dump out of shared memory

From: tfo@alumni.brown.edu (Thomas F. O'Connell)
Date:
tfo@alumni.brown.edu (Thomas F. O'Connell) wrote:
> postgresql.conf just has the default of 1000 shared_buffers. The
> database itself has thousands of tables, some of which have rows
> numbering in the millions. Am I correct in thinking that, despite the
> hint, it's more likely that I need to up the shared_buffers?

So the answer here, verified both by Tom Lane and by my own fix for the
problem, is "no". Now I'm curious: why does pg_dump require that
max_connections * max_locks_per_transaction be greater than the number
of objects in the database? Or, if that's not the right assumption about
how pg_dump works, how does pg_dump obtain its locks, and why is the
error it hits "out of shared memory"? Is there a portion of shared
memory set aside for locks? What is the shared lock table?
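
(If I'm reading the docs right, the pg_locks system view exposes that
shared table; something like

    SELECT relation::regclass, mode, granted FROM pg_locks;

run while a dump is in progress shows one AccessShareLock row per table
being dumped.)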

-tfo

Re: pg_dump out of shared memory

From: Tom Lane
Date:
tfo@alumni.brown.edu (Thomas F. O'Connell) writes:
> Now I'm curious: why does pg_dump require that
> max_connections * max_locks_per_transaction be greater than the
> number of objects in the database?

Not objects, just tables.  pg_dump takes AccessShareLock (the weakest
kind of lock) on each table it intends to dump.  This is basically
just to prevent someone from dropping the table underneath it.  (It
would actually have to take that lock anyway as a byproduct of reading
the table contents, but we grab the locks ASAP during pg_dump startup
to reduce the risks of problems from concurrent drops.)
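
Concretely, what pg_dump issues at startup is on the order of (the
table name here is just an example):

    BEGIN;
    LOCK TABLE public.some_table IN ACCESS SHARE MODE;
    -- ... one LOCK statement per table to be dumped ...

and every one of those locks occupies an entry in the shared lock table
until the transaction ends.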

On a database with thousands of tables, this could easily require more
locks than the default lock table size can hold.  Most normal apps don't
need more than a few tables locked within any one transaction, which is
why the table size is calculated as a multiple of max_connections.
There's a great deal of slop involved, because we pad the shared memory
size by 100K or so, which leaves room for quite a few more lock entries
than the nominal table size ... but eventually you'll run out of room.
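
To put rough numbers on it: the lock table is sized for about
max_locks_per_transaction * max_connections entries, so with the
default of 64 locks per transaction and, say, 100 connections:

    64 * 100 = 6400 shared lock slots

A single pg_dump transaction on a database with many thousands of
tables can eat through that (and the slop) all by itself.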

            regards, tom lane