Thread: table partitioning & max_locks_per_transaction
Hello All --

I have implemented table partitioning in order to increase performance in my database-backed queuing system. My queue is partitioned by job_id into separate tables that all inherit from a base "queue" table.

Things were working swimmingly until my system started managing thousands of jobs. As soon as I had ~1070 queue subtables, queries to the main queue table would fail with:

    out of shared memory
    HINT: You might need to increase max_locks_per_transaction

I found this thread in the archives:

    http://archives.postgresql.org/pgsql-general/2007-08/msg01992.php

Still, I have a few questions/problems:

1) We've already tuned postgres to use ~2GB of shared memory -- which is SHMMAX for our kernel. If I try to increase max_locks_per_transaction, postgres will not start because our shared memory would exceed SHMMAX. How can I increase max_locks_per_transaction without having my shared memory requirements increase?

2) Why do I need locks for all of my subtables, anyway? I have constraint_exclusion on. The query planner tells me that I am only using three tables for the queries that are failing. Why are all of the locks getting allocated? Is there any way to prevent this?

Many thanks in advance for any and all help anyone can provide!

Brian
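(A minimal sketch of the inheritance-based partitioning described above -- table and column names here are assumptions, not Brian's actual schema:)

    -- Parent table plus one child per job, in the pre-declarative
    -- inheritance style described above.
    CREATE TABLE queue (
        job_id     integer NOT NULL,
        payload    text,
        created_at timestamptz DEFAULT now()
    );

    -- Each child carries a CHECK constraint so that, with
    -- constraint_exclusion = on, the planner can prune it.
    CREATE TABLE queue_job_1 (
        CHECK (job_id = 1)
    ) INHERITS (queue);

    -- A query like the failing ones:
    --   SELECT * FROM queue WHERE job_id = 1;
    -- scans only queue_job_1, but (as Tom explains below) the planner
    -- still locks every child while proving the rest can be excluded.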
Brian Karlak <zenkat@metaweb.com> writes:
> "out of shared memory HINT: You might need to increase
> max_locks_per_transaction"

You want to do what it says ...

> 1) We've already tuned postgres to use ~2GB of shared memory -- which
> is SHMMAX for our kernel. If I try to increase
> max_locks_per_transaction, postgres will not start because our shared
> memory would exceed SHMMAX. How can I increase
> max_locks_per_transaction without having my shared memory requirements
> increase?

Back off shared_buffers a bit? 2GB is certainly more than enough to run Postgres in.

> 2) Why do I need locks for all of my subtables, anyway? I have
> constraint_exclusion on. The query planner tells me that I am only
> using three tables for the queries that are failing. Why are all of
> the locks getting allocated?

Because the planner has to look at all the subtables and make sure that they in fact don't match the query. So it takes AccessShareLock on each one, which is the minimum strength lock needed to be sure that the table definition isn't changing underneath you. Without *some* lock it's not really safe to examine the table at all.

			regards, tom lane
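(One concrete way to apply this advice -- the numbers below are illustrative, not tuned recommendations:)

    # postgresql.conf
    # The lock table needs roughly
    #   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    # shared-memory slots, so raising max_locks_per_transaction grows the
    # shared memory request. Shrinking shared_buffers by a comparable
    # amount keeps the total under the kernel's SHMMAX.
    # (Changing either setting requires a server restart.)
    shared_buffers = 1GB              # down from ~2GB to free shared memory
    max_locks_per_transaction = 256   # default 64; one lock is taken per
                                      # child table the planner examines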
Tom --

Thanks for the pointers and advice. We've started by doubling max_locks and halving shared_buffers; we'll see how it goes.

Brian

On Oct 10, 2009, at 7:56 PM, Tom Lane wrote:
> Back off shared_buffers a bit? 2GB is certainly more than enough to
> run Postgres in.
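(A quick way to sanity-check the new setting while the partition-spanning queries run -- pg_locks is a standard system view, but this particular check is a sketch, not something from the thread:)

    -- Count lock-table entries currently held across all backends ...
    SELECT count(*) AS locks_held FROM pg_locks;

    -- ... and compare against the lock table's rough capacity:
    --   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    SHOW max_locks_per_transaction;
    SHOW max_connections;
    SHOW max_prepared_transactions;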