Tom,
That did the trick. I had made a bad assumption that shared memory was
causing the problem, rather than the other way around. I set
max_locks_per_transaction to 256; my previous attempt was 128 and it
still failed. I'm not sure what value between 128 and 256 would have
succeeded, but it clearly needed quite a bit more.
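For the archives, the change amounted to something like the sketch below. The config path and restart command are assumptions for a stock Ubuntu install, and the sketch edits a stand-in copy of the file rather than a live one:

```shell
# Hypothetical sketch: raise max_locks_per_transaction in a copy of
# postgresql.conf. On Ubuntu the real file typically lives under
# /etc/postgresql/<version>/main/postgresql.conf (an assumption here).
CONF=/tmp/postgresql.conf
printf 'max_locks_per_transaction = 64\n' > "$CONF"   # stand-in for the real file

# pg_dump takes one ACCESS SHARE lock per table, so databases with many
# tables can exhaust the default lock table (64 per transaction slot).
sed -i 's/^max_locks_per_transaction = .*/max_locks_per_transaction = 256/' "$CONF"
grep max_locks_per_transaction "$CONF"

# This setting only takes effect after a server restart, e.g.:
#   sudo service postgresql restart
```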
Thanks for your help.
On Sun, 07 Aug 2011 12:37:22 -0400, tgl@sss.pgh.pa.us (Tom Lane)
wrote:
>jtkells@verizon.net writes:
>> I am having a problem running "pg_dump -s database "on one system
>> while it runs fine on another system.
>
>> when I run the following dump on the Ubuntu system I get :
>> pg_dump -s DB >/tmp/DB_schema_only.dmp
>> pg_dump: WARNING: out of shared memory
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR: out of shared memory
>> HINT: You might need to increase max_locks_per_transaction.
>> pg_dump: The command was: LOCK TABLE schema_x.x_table IN ACCESS SHARE
>> MODE
>
>> I don't understand what I am doing wrong since I have given a larger
>> amount of resources on the Ubuntu system and continue to fail. Am I
>> missing anything else?
>
>The HINT told you what you need to do: increase max_locks_per_transaction.
>
>The exact point at which you run out of shared memory after exceeding
>max_locks_per_transaction will vary depending on a number of
>hard-to-predict factors (in this case I'll bet 32-bitness vs 64-bitness
>has a lot to do with it), so the fact that it fails on one machine and
>not another is not that surprising. You can be sure though that if the
>databases are identical, the "working" machine has not got a lot of
>headroom; so you'd be well advised to apply the max_locks_per_transaction
>adjustment to both.
>
> regards, tom lane