Thread: 8.2 Partition lock changes and resource queuing.
I've been working on a DSS query resource management module for Bizgres [1], which is currently in the code and working (Bizgres is based on 8.1.x). ...and therein lies my new problem - updating to the 8.2 codebase.

The issue is that I have used the LockMgrLock [2] to:

1/ Manage the (resource) locks that provide the query queuing functionality.

2/ Protect shared memory structures (the queues and portal/statement hashes) that provide the modified conflict routines and data for the queues.

However, in 8.2 the partition lock changes break this design. So I'm thinking of the following:

1/ Use the partition locks for the (resource) locks - as they live in the lock hash table along with the standard and advisory locks.

2/ Protect shared memory structures (the queues and portal/statement hashes) with a new lwlock (ResLockMgrLock).

This complicates the code somewhat, with what looks like a lot more lwlock/unlock operations - but it may actually be better for concurrency, as we have less contention for a single lock (I think).

The other approach I wondered about was arranging for the resource locks and related data structures to all use an *additional* partition lock - which would mean faking a LOCKTAG that always hashed to NUM_LOCK_PARTITIONS, and using that everywhere in the resource code...

Comments welcome - as I'm scratching my head somewhat about this :-)!

Cheers

Mark

[1] http://archives.postgresql.org/pgsql-hackers/2006-07/msg00133.php

[2] The main reason that a new lwlock was not used was that we need to be able to do deadlock checking between resource and standard locks.
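To make 1/ and 2/ concrete, here is a rough, untested sketch of what the acquire path might look like against the 8.2 tree. ResLockAcquire and ResLockMgrLock are just illustrative names for the module's entry point and the proposed new lwlock (ResLockMgrLock would need an entry added to the LWLockId enum); LockTagHashCode(), LockHashPartitionLock() and the LWLock calls are the existing 8.2 primitives:

/*
 * Rough sketch only - ResLockAcquire and ResLockMgrLock are illustrative
 * names from the resource queuing module, not core PostgreSQL.
 * ResLockMgrLock is assumed to have been added to the LWLockId enum.
 */
#include "postgres.h"

#include "storage/lock.h"
#include "storage/lwlock.h"

void
ResLockAcquire(const LOCKTAG *locktag)
{
	uint32		hashcode = LockTagHashCode(locktag);
	LWLockId	partitionLock = LockHashPartitionLock(hashcode);

	/*
	 * 1/ The resource lock itself lives in the regular lock hash table, so
	 *    it is protected by whichever partition lock its tag hashes to
	 *    (which also keeps it visible to the deadlock checker).
	 */
	LWLockAcquire(partitionLock, LW_EXCLUSIVE);

	/* ... find/insert the LOCK and PROCLOCK entries for this tag ... */

	/*
	 * 2/ The queues and portal/statement hashes are protected by the new
	 *    ResLockMgrLock, always acquired after the partition lock.
	 */
	LWLockAcquire(ResLockMgrLock, LW_EXCLUSIVE);

	/* ... check queue limits and decide whether this statement must wait ... */

	LWLockRelease(ResLockMgrLock);
	LWLockRelease(partitionLock);
}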
Mark Kirkwood <markir@paradise.net.nz> writes:
> The other approach I wondered about was arranging for the resource locks
> and related data structures to all use an *additional* partition lock -
> which would mean faking a LOCKTAG that always hashed to
> NUM_LOCK_PARTITIONS, and using that everywhere in the resource code...

That seems mighty ugly, as well as defeating the purpose of spreading the
LWLock contention around evenly.  I'd go for letting the resource locks go
into their natural hash partitions, and making a separate LWLock for your
other data structures.  (Some day you might get to the point of wanting to
partition the other data structures, in which case you'd be glad you
separated the locks.)

			regards, tom lane
Tom Lane wrote:
> Mark Kirkwood <markir@paradise.net.nz> writes:
>> The other approach I wondered about was arranging for the resource locks
>> and related data structures to all use an *additional* partition lock -
>> which would mean faking a LOCKTAG that always hashed to
>> NUM_LOCK_PARTITIONS, and using that everywhere in the resource code...
>
> That seems mighty ugly, as well as defeating the purpose of spreading
> the LWLock contention around evenly.

Yes - and possibly confusing to amend later, when I (or someone else) have forgotten why it was done that way...

> I'd go for letting the resource
> locks go into their natural hash partitions, and making a separate LWLock
> for your other data structures.  (Some day you might get to the point of
> wanting to partition the other data structures, in which case you'd be
> glad you separated the locks.)

Great, thanks for the quick reply!

Cheers

Mark
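Following Tom's suggestion (resource locks in their natural hash partitions, a separate LWLock for the queue structures), the release path would be a mirror image of the earlier sketch - again untested, and using the same illustrative names. Always acquiring the partition lock before ResLockMgrLock and releasing in the opposite order keeps the lwlock ordering consistent, at the cost of the extra lwlock/unlock operations noted above:

/*
 * Sketch of the matching release path (same illustrative names as before).
 * The partition lock is taken before ResLockMgrLock and released after it,
 * mirroring the acquire path.
 */
#include "postgres.h"

#include "storage/lock.h"
#include "storage/lwlock.h"

void
ResLockRelease(const LOCKTAG *locktag)
{
	uint32		hashcode = LockTagHashCode(locktag);
	LWLockId	partitionLock = LockHashPartitionLock(hashcode);

	LWLockAcquire(partitionLock, LW_EXCLUSIVE);
	LWLockAcquire(ResLockMgrLock, LW_EXCLUSIVE);

	/* ... decrement the queue counters and wake any waiting statements ... */

	LWLockRelease(ResLockMgrLock);

	/* ... clean up the LOCK/PROCLOCK entries for this resource lock ... */

	LWLockRelease(partitionLock);
}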