Maciek Sakrejda <m.sakrejda@gmail.com> writes:
> We've run into a perplexing issue with a customer database. He moved
> from 9.1.5 to 9.1.6 and upgraded from an EC2 m1.medium (3.75 GB RAM,
> 1.3 GB shmmax) to an m2.xlarge (17 GB RAM, 5.7 GB shmmax), and is now
> constantly getting errors about running out of shared memory (there
> were none in the old system's logs for the couple of days before the
> upgrade):
> ERROR: out of shared memory
> HINT: You might need to increase max_pred_locks_per_transaction.

This has nothing to do with work_mem or maintenance_work_mem; rather,
it means you're running out of space in the shared predicate lock table.
You need to take the hint's advice.
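
If it helps, a rough sketch of what that looks like on 9.1 (the value
shown is only an illustration to be tuned for your workload; the
default is 64, and the setting can only be changed at server start
because it sizes a shared-memory structure):

    # postgresql.conf
    max_pred_locks_per_transaction = 256

    -- after the restart, confirm from psql:
    SHOW max_pred_locks_per_transaction;

    -- rough gauge of current predicate-lock usage:
    SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock';
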
> The query causing this has structurally identical plans on both systems:
> old: http://explain.depesz.com/s/Epzq
> new: http://explain.depesz.com/s/WZo

The query in itself doesn't seem very exceptional. I wonder whether
you recently switched your application to use serializable mode? But
anyway, a query's demand for predicate locks can depend on a lot of
not-very-visible factors, such as how many physical pages the tuples
it accesses are spread across. I don't find it too hard to credit
that yesterday you were just under the limit and today you're just
over even though "nothing changed".
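
If you want to check both of those things, something along these lines
from psql while the workload is running would show whether serializable
is in effect by default, and how the predicate locks break down by
granularity (the relation/page/tuple mix is where the physical-page
spread shows up):

    SHOW default_transaction_isolation;

    SELECT locktype, count(*)
      FROM pg_locks
     WHERE mode = 'SIReadLock'
     GROUP BY locktype
     ORDER BY locktype;

Note that the application could also be requesting SERIALIZABLE per
transaction, so a non-serializable default here doesn't rule it out.
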
regards, tom lane