From: Alvaro Herrera [mailto:alvherre@2ndquadrant.com]
> On 2019-Sep-03, Tsunakawa, Takayuki wrote:
> > I don't think it's rejected. It would be a pity (mottainai) to refuse
> > this, because it provides significant speedup despite its simple
> > modification.
>
> I don't necessarily disagree with your argumentation, but Travis is
> complaining thusly:
I tried to revise David's latest patch (v8) and address Tom's comments in his last mail. But I'm a bit at a loss.
First, to accurately count the maximum number of locks acquired in a transaction, we need to track the maximum number of entries in the hash table and make it available via a new function like hash_get_max_entries(). However, to also cover the shared partitioned hash table (which is not necessary for LockMethodLocalHash), we would have to add a spinlock to hashhdr and lock/unlock it whenever an entry is added to or removed from the hash table. That would spoil the effort to reduce contention on hashhdr->freelists[].mutex. Should we instead track the maximum number of acquired locks in a global variable in lock.c, rather than in the hash table?
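To make that alternative concrete, here is a minimal sketch of what I mean by keeping the counters in lock.c itself; the variable and function names below are only illustrative, not existing PostgreSQL symbols:

    /* Illustrative sketch only -- names are hypothetical, not real symbols. */
    static long numLockEntries = 0;    /* current number of local lock entries */
    static long maxLockEntries = 0;    /* high-water mark within the transaction */

    static void
    TrackLockEntryAdded(void)
    {
        numLockEntries++;
        if (numLockEntries > maxLockEntries)
            maxLockEntries = numLockEntries;
    }

    static void
    TrackLockEntryRemoved(void)
    {
        numLockEntries--;
    }

    /*
     * At transaction end, maxLockEntries could drive the decision to
     * shrink/rebuild LockMethodLocalHash, after which both counters
     * are reset for the next transaction.
     */

Because LockMethodLocalHash is backend-local, such counters would need no locking at all, which sidesteps the spinlock problem that arises for the shared partitioned hash table.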
Second, I couldn't quite understand the comment about the fill factor. I see that it's not correct to compare the number of hash buckets with the number of locks, but what can we do instead?
I'm sorry to repeat what I mentioned in my previous mail, but my v2 patch's approach is based on the database textbook and seems intuitive. So I attached the rebased version.
Regards
Takayuki Tsunakawa