pg_dump out of shared memory - Mailing list pgsql-general

From: tfo@alumni.brown.edu (Thomas F. O'Connell)
Subject: pg_dump out of shared memory
Date:
Msg-id: 80c38bb1.0406171334.4e0b5775@posting.google.com
Responses: Re: pg_dump out of shared memory  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
When using pg_dump to dump an existing postgres database, I get the
following:

pg_dump: WARNING:  out of shared memory
pg_dump: attempt to lock table <table name> failed: ERROR:  out of
shared memory
HINT:  You may need to increase max_locks_per_transaction.

postgresql.conf just has the default of 1000 shared_buffers. The
database itself has thousands of tables, some of which have millions of
rows. Am I correct in thinking that, despite the hint, it's more likely
that I need to increase shared_buffers?

Or is it that pg_dump is an example of "clients that touch many
different tables in a single transaction" [from
http://www.postgresql.org/docs/7.4/static/runtime-config.html#RUNTIME-CONFIG-LOCKS]
and I actually ought to abide by the hint?
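
If the latter, I assume the fix would look roughly like the following
(the 256 is just a number I picked for illustration, and the count is
only my rough way of estimating how many locks a single-transaction dump
would request, since pg_dump appears to take an AccessShareLock on each
table it dumps):

  -- estimate how many table locks one pg_dump transaction would need
  SELECT count(*) FROM pg_class WHERE relkind = 'r';

and then in postgresql.conf, followed by a restart:

  max_locks_per_transaction = 256   # default is 64; the shared lock table
                                    # holds roughly this many locks per
                                    # allowed connection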

-tfo
