At Mon, 14 Mar 2022 09:39:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in
> I'll examine the possibility to resolve this...
The coexistence of nfree and nalloced confused me at first, and I
found the reason.
When a partition collects many REUSE-ASSIGN-REMOVEd elements from
other partitions, its nfree can grow larger than its nalloced. That is
the odd point about the two counters. However, nalloced is only ever
read as sum(nalloced[]), so we don't need it on a per-partition basis,
and the number of used elements can be calculated as follows.
    sum(nalloced[] - nfree[])
  = <total_nalloced> - sum(nfree[])
We rarely create fresh elements in shared hash tables, so I don't
think there would be additional contention on <total_nalloced> even if
it were a single global atomic counter.
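
To make sure we are talking about the same scheme, below is a rough
standalone sketch in plain C11, not the actual dynahash code; the
names hash_counters, total_nalloced, NUM_FREELISTS and the helper
functions are placeholders of mine. It only illustrates the
accounting: each freelist keeps nfree under its own lock, the single
global atomic is bumped only when fresh elements are created, and the
number of used entries comes out as <total_nalloced> - sum(nfree[]).

#include <stdatomic.h>
#include <stdio.h>

#define NUM_FREELISTS 32

typedef struct hash_counters
{
	/* protected by the corresponding freelist (partition) lock */
	long		nfree[NUM_FREELISTS];
	/* bumped only when brand-new elements are carved out */
	atomic_long	total_nalloced;
} hash_counters;

/* fresh elements are created (the element_alloc-like path) */
static void
alloc_fresh_elements(hash_counters *hc, int idx, int nelem)
{
	hc->nfree[idx] += nelem;
	atomic_fetch_add(&hc->total_nalloced, nelem);
}

/* an entry reuses a freed element: only the local counter moves */
static void
reuse_element(hash_counters *hc, int idx)
{
	hc->nfree[idx]--;
}

/*
 * An entry is removed: the element goes back to the freelist chosen by
 * the entry's hash value, which may differ from the freelist it was
 * taken from.  This is why a per-partition nfree can exceed a
 * per-partition nalloced, and why only the global total matters.
 */
static void
remove_element(hash_counters *hc, int idx)
{
	hc->nfree[idx]++;
}

/* sum(nalloced[] - nfree[]) == total_nalloced - sum(nfree[]) */
static long
num_used_entries(hash_counters *hc)
{
	long		used = atomic_load(&hc->total_nalloced);

	for (int i = 0; i < NUM_FREELISTS; i++)
		used -= hc->nfree[i];
	return used;
}

int
main(void)
{
	static hash_counters hc;

	alloc_fresh_elements(&hc, 0, 4);	/* 4 fresh elements, all free */
	reuse_element(&hc, 0);			/* one is put to use */
	remove_element(&hc, 3);			/* later freed under partition 3 */
	printf("used = %ld\n", num_used_entries(&hc));	/* prints 0 */
	return 0;
}

The only write to the atomic is in alloc_fresh_elements(), which
corresponds to creating fresh elements, so the contention argument
above should hold.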
So the remaining issue is the possible imbalance among partitions. On
second thought, with the current code, if partition usage is badly
skewed, a heavily hit partition eventually collects elements via
get_hash_entry(). With the patch, a similar thing happens via the
REUSE-ASSIGN-REMOVE sequence. Buffers once used for something are not
freed until buffer invalidation, and bulk buffer invalidation does not
distribute the freed buffers among partitions in a skewed way. So I
conclude for now that this is a non-issue.
So my opinion on the counters is:
Could I ask you to remove nalloced from the partitions and add a
single global atomic counter for the same purpose?
There is no need to do anything about the possible deviation issue.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center