Re: ERROR: out of shared memory - Mailing list pgsql-general

From Sorin N. Ciolofan
Subject Re: ERROR: out of shared memory
Date
Msg-id 20070330131926.B9DC18E40FC@mailhost.ics.forth.gr
In response to Re: ERROR: out of shared memory  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: ERROR: out of shared memory  (Joseph S <jks@selectacast.net>)
List pgsql-general
    Dear Mr. Tom Lane,

  From what I've read in the postgresql.conf file, I understand that with
each unit increase of the "max_locks_per_transaction" parameter, the shared
memory used also increases.
  But the shared memory appears to be already fully consumed, according to
the error message; or is the error message misleading in this situation?
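
  (For reference, lock-table consumption can be inspected from the pg_locks
system view; this is a sketch, assuming it is run while the failing workload
is active, and the 256 in the comment is only the illustrative value from the
config sketch below, not a recommendation:)

```sql
-- Count the lock-table entries currently in use; compare this against the
-- capacity, roughly
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
SELECT count(*) AS locks_in_use FROM pg_locks;

-- Show the configured per-transaction average for comparison.
SHOW max_locks_per_transaction;
```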

With best regards,
Sorin

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Tuesday, March 27, 2007 4:59 PM
To: Sorin N. Ciolofan
Cc: pgsql-general@postgresql.org; pgsql-admin@postgresql.org;
pgsql-performance@postgresql.org
Subject: Re: [GENERAL] ERROR: out of shared memory

"Sorin N. Ciolofan" <ciolofan@ics.forth.gr> writes:
> It seems that the legacy application creates tables dynamically and the
> number of the created tables depends on the size of the input of the
> application. For the specific input which generated that error I've
> estimated a number of created tables of about 4000.
> Could be this the problem?

If you have transactions that touch many of them within one transaction,
then yup, you could be out of locktable space.  Try increasing
max_locks_per_transaction.
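
(A minimal postgresql.conf sketch of that change; the value 256 is only an
example, and the parameter takes effect only after a server restart:)

```ini
# postgresql.conf -- enlarge the shared lock table.
# The lock table is sized at server start for roughly
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# entries, so raising this value also raises shared memory usage.
max_locks_per_transaction = 256    # default is 64; requires a restart
```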

            regards, tom lane


