
From: Phil Endecott
Subject: Analyse - max_locks_per_transaction - why?
Msg-id: 41923241.9040906@chezphil.org
List: pgsql-general
Dear PostgreSQL experts,

This is with version 7.4.2.

My database has grown a bit recently, mostly in the number of tables but
also in their size, and I started to see ANALYSE failing with this message:

WARNING:  out of shared memory
ERROR:  out of shared memory
HINT:  You may need to increase max_locks_per_transaction.

So I increased max_locks_per_transaction from 64 to 200 and, after doing
a full /etc/init.d/postgresql restart rather than just a reload, it seems
to work again.

Naively I imagined that ANALYSE looks at each table in turn,
independently.  So why does it need more locks when there are more
tables?  Isn't "ANALYSE" with no parameter equivalent to

for i in all_tables_in_database {
   ANALYSE i;
}
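
(In concrete terms, the sort of thing I imagine is generating one statement
per table and feeding the output back through psql, so that each table is
dealt with on its own; the pg_class query below is untested and just my
guess at what is needed:)

-- Untested sketch: emit one ANALYSE statement per ordinary table; the output
-- could be fed back through psql so that each statement runs on its own.
-- (This also lists the system catalogues, which may or may not be wanted.)
SELECT 'ANALYSE ' || quote_ident(relname) || ';'
FROM pg_class
WHERE relkind = 'r';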

I'm working in a memory-poor environment (a User-Mode Linux virtual
machine) and I'm a bit concerned about the memory overhead if I have to
keep increasing max_locks_per_transaction just to keep ANALYSE happy.
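
For what it's worth, my reading of the documentation is that the shared lock
table is sized for roughly max_connections * max_locks_per_transaction
entries, each costing a couple of hundred bytes, so a back-of-envelope check
looks like this (the 25 connections and 270 bytes per slot are assumptions
on my part, not figures from my installation):

SHOW max_connections;            -- suppose this is 25
SHOW max_locks_per_transaction;  -- now 200
SELECT 25 * 200 * 270 AS approx_lock_table_bytes;  -- roughly 1.3 MB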

As an aside, what I really need in this particular case is to analyse
all of the tables in a particular schema.  Having "ANALYSE schemaname"
or "ANALYSE schemaname.*" would be great.  I presume that I can write a
function to get the same effect - has anyone already done that?
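
(Here is the kind of function I have in mind - untested, and written with
dollar-quoting and a named parameter, which I believe need a release newer
than 7.4, so on 7.4 the body would have to be quoted the old way.  Note too
that, as far as I understand it, everything a function does happens in one
transaction, so this would still take and hold a lock per table, much like a
database-wide ANALYSE.)

CREATE OR REPLACE FUNCTION analyse_schema(schemaname text) RETURNS integer AS $$
DECLARE
    t RECORD;
    n integer := 0;
BEGIN
    -- Loop over the ordinary tables in the given schema and ANALYSE each one.
    FOR t IN
        SELECT c.relname
        FROM pg_class c JOIN pg_namespace ns ON ns.oid = c.relnamespace
        WHERE ns.nspname = schemaname AND c.relkind = 'r'
    LOOP
        EXECUTE 'ANALYSE ' || quote_ident(schemaname) || '.' || quote_ident(t.relname);
        n := n + 1;
    END LOOP;
    RETURN n;   -- number of tables analysed
END;
$$ LANGUAGE plpgsql;

-- e.g.  SELECT analyse_schema('myschema');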

Regards,

Phil Endecott.


