Re: [ADMIN] Increasing the shared memory - Mailing list pgsql-general

From Tom Lane
Subject Re: [ADMIN] Increasing the shared memory
Date
Msg-id 18737.1176473308@sss.pgh.pa.us
In response to Re: [ADMIN] Increasing the shared memory  ("Sorin N. Ciolofan" <ciolofan@ics.forth.gr>)
List pgsql-general
"Sorin N. Ciolofan" <ciolofan@ics.forth.gr> writes:
> I will simplify things in order to describe when the error occurred:
> The application's input is data read from files on disk, processed, and
> then inserted into the database in one transaction. The total quantity of
> data is an integer number of data files, n*q, where q is a file that is
> always 60 KB and n is a positive integer.
> For n=23, shared_buffers=1000, and max_locks_per_transaction=64,
> Postgres throws the following exception:

> org.postgresql.util.PSQLException: ERROR: out of shared memory

> For n=23, I estimated that we create and manipulate about 8000 tables.

Okay, as far as I know the only user-causable way to get that message is
to run out of lock-table space, and a transaction does take a lock for
each table it touches, so I concur that raising
max_locks_per_transaction is an appropriate response.  If you saw no
change in the maximum N you could handle, then I wonder whether you
actually raised it: does "show max_locks_per_transaction" reflect the
intended new value?
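
For reference, the shared lock table has room for roughly
max_locks_per_transaction * max_connections locks across all sessions,
so with the defaults of 64 and 100 that is about 6400 slots, which is
consistent with ~8000 table locks running out of room.  A quick way to
check what the running server is actually using (note this parameter
only takes effect at server restart; the value 256 below is just an
illustration):

    -- Show the value the running postmaster was started with:
    SHOW max_locks_per_transaction;

    -- To raise it, edit postgresql.conf and restart the server, e.g.:
    --   max_locks_per_transaction = 256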

Another possibility is that there's something about your code that makes
the number of locks involved very nonlinear in N.  You could try
checking the number of rows in pg_locks immediately before commit at
some smaller values of N to confirm what the scaling is really like.
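
A minimal sketch of that check from psql (assuming the workload can be
driven in a single session; pg_locks and pg_backend_pid() are standard):

    BEGIN;
    -- ... create and populate the tables for the chosen N here ...

    -- Locks held by this backend just before commit; the count should
    -- scale with the number of tables the transaction has touched:
    SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();

    COMMIT;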

            regards, tom lane
