Thread: Analyse - max_locks_per_transaction - why?

Analyse - max_locks_per_transaction - why?

From: Phil Endecott
Dear PostgreSQL experts,

This is with version 7.4.2.

My database has grown a bit recently, mostly in number of tables but
also their size, and I started to see ANALYSE failing with this message:

WARNING:  out of shared memory
ERROR:  out of shared memory
HINT:  You may need to increase max_locks_per_transaction.

So I increased max_locks_per_transaction from 64 to 200 and, after doing
a "/etc/init.d/postgresql restart" rather than just a
"/etc/init.d/postgresql reload", it seems to work again.  (Changing
max_locks_per_transaction requires a full server restart.)

Naively I imagined that ANALYSE looks at each table in turn,
independently.  So why does it need more locks when there are more
tables?  Isn't "ANALYSE" with no parameter equivalent to

for i in all_tables_in_database {
   ANALYSE i;
}

I'm working in a memory-poor environment (a user-mode-linux virtual
machine) and I'm a bit concerned about the memory overhead if I have to
keep increasing max_locks_per_transaction just to keep ANALYSE happy.

As an aside, what I really need in this particular case is to analyse
all of the tables in a particular schema.  Having "ANALYSE schemaname"
or "ANALYSE schemaname.*" would be great.  I presume that I can write a
function to get the same effect - has anyone already done that?
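
Something along these lines, perhaaps?  (An untested sketch: it assumes
PL/pgSQL is installed in the database, and "analyse_schema" is just a
name I have made up.)

CREATE OR REPLACE FUNCTION analyse_schema(name) RETURNS integer AS '
DECLARE
    tbl RECORD;
    n   integer := 0;
BEGIN
    -- Loop over the ordinary tables in the given schema and ANALYSE
    -- each one.  Note that in 7.4 the whole function call is still a
    -- single transaction, so the locks still accumulate across all
    -- tables in the schema; this only helps if the schema is much
    -- smaller than the database as a whole.
    FOR tbl IN
        SELECT c.relname
        FROM pg_class c
        JOIN pg_namespace ns ON ns.oid = c.relnamespace
        WHERE ns.nspname = $1 AND c.relkind = ''r''
    LOOP
        EXECUTE ''ANALYSE '' || quote_ident($1) || ''.''
                             || quote_ident(tbl.relname);
        n := n + 1;
    END LOOP;
    RETURN n;
END;
' LANGUAGE plpgsql;

-- usage:  SELECT analyse_schema('myschema');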

Regards,

Phil Endecott.



Re: Analyse - max_locks_per_transaction - why?

From: Tom Lane
Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:
> Naively I imagined that ANALYSE looks at each table in turn,
> independently.  So why does it need more locks when there are more
> tables?

7.4 runs a database-wide ANALYZE as a single transaction, so the locks
accumulate.  This was recognized to be a bad idea :-(.  8.0 is a bit
smarter.

The best bet in 7.4 is probably to use VACUUM ANALYZE rather than
analyzing separately.  That will force it to use a transaction per
table.
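
That is, just:

-- In 7.4, VACUUM processes the tables one at a time, each in its own
-- transaction, so the lock entries are released after each table
-- instead of piling up as they do for a database-wide ANALYZE:
VACUUM ANALYZE;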

            regards, tom lane

Re: Analyse - max_locks_per_transaction - why?

From: Phil Endecott
I asked:

>>Naively I imagined that ANALYSE looks at each table in turn,
>>independently.  So why does it need more locks when there are more
>>tables?

Tom replied:

> 7.4 runs a database-wide ANALYZE as a single transaction, so the locks
> accumulate.  This was recognized to be a bad idea :-(.  8.0 is a bit
> smarter.

Thanks Tom.  I will upgrade to 8.0 one day but not soon.  In the
meantime, is there a way to judge a sufficient setting for
max_locks_per_transaction so that a global ANALYZE will work?  It
doesn't seem to be one lock per table or anything as simple as that.

> The best bet in 7.4 is probably to use VACUUM ANALYZE rather than
> analyzing separately.  That will force it to use a transaction per
> table.

That's another possibility.

Thanks.

--Phil.


Re: Analyse - max_locks_per_transaction - why?

From: Tom Lane
Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:
> Thanks Tom.  I will upgrade to 8.0 one day but not soon.  In the
> meantime, is there a way to judge a sufficient setting for
> max_locks_per_transaction so that a global ANALYZE will work?  It
> doesn't seem to be one lock per table or anything as simple as that.

No.  The shared lock table has room for max_locks_per_transaction *
max_connections entries (actually rather more, but that's the supported
limit), so as soon as this exceeds the number of tables in your DB
the ANALYZE will work ... at least as long as nothing else is going on.
Any other transactions you may be running will eat some lock entries,
and you have to allow for those.

The conservative answer is to set max_locks_per_transaction to
(tables in database)/max_connections more than you were using before.
This is still probably overkill, since for most purposes the default
value is plenty.
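
To make that concrete (the numbers here are invented for illustration):

-- Count the ordinary tables in the current database:
SELECT count(*) AS tables FROM pg_class WHERE relkind = 'r';

-- Suppose that returns 4000 and max_connections is 20.  The shared
-- lock table holds max_locks_per_transaction * max_connections
-- entries, so a database-wide ANALYZE needs max_locks_per_transaction
-- of at least 4000 / 20 = 200, plus headroom for whatever else is
-- running at the same time.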

There was some discussion recently of renaming the
max_locks_per_transaction variable to make it clearer that it's not
a hard per-transaction limit but a global average.  Nobody really
came up with a better name though.

            regards, tom lane