> On Feb 10, 2024, at 20:15, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:
>
> On 2/8/24 14:27, wenhui qiu wrote:
>> Hi Heikki Linnakangas
>> I think the larger the shared buffers, the higher the probability of
>> multiple backend processes accessing the same bucket slot's
>> BufMappingLock simultaneously (see InitBufTable(NBuffers +
>> NUM_BUFFER_PARTITIONS)). When I have free time, I want to run this
>> test. I have seen some tests, but the result report is in Chinese.
>>
>
> I think Heikki is right that this is unrelated to the amount of RAM. The
> partitions are meant to reduce the number of lock collisions when
> multiple processes try to map a buffer concurrently. But the machines
> got much larger in this regard too - in 2006 the common CPUs had maybe
> 2-4 cores, now it's common to have CPUs with ~100 cores, and systems
> with several of them. OTOH the time spent holding the partition lock
> should be pretty low - IIRC we pretty much just pin the buffer before
> releasing it, and the backend should do plenty of other expensive stuff.
> So who knows how many backends end up doing the locking at the same time.
>
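For reference, the partition is picked from the buffer tag's hash; as far
as I can see in src/include/storage/buf_internals.h it is roughly:

    /* hash of the buffer tag picks one of NUM_BUFFER_PARTITIONS locks */
    #define BufTableHashPartition(hashcode) \
        ((hashcode) % NUM_BUFFER_PARTITIONS)
    #define BufMappingPartitionLock(hashcode) \
        (&MainLWLockArray[BUFFER_MAPPING_LWLOCK_OFFSET + \
            BufTableHashPartition(hashcode)].lock)

so two backends only conflict when their tags hash into the same
partition, regardless of how large shared_buffers is.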
> OTOH, with 128 partitions it takes just 14 backends to have 50% chance
> of a conflict, so with enough cores ... But how many partitions would be
> enough? With 1024 partitions it still takes only 38 backends to get 50%
> chance of a collision. Better, but considering we now have hundreds of
> cores, not sure if sufficient.
>
> (Obviously, we probably want much lower probability of a collision, I
> only used 50% to illustrate the changes).
>
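Those 50% figures match the usual birthday approximation,
P = 1 - prod_{i=0}^{k-1} (1 - i/n) for k backends hashing into n
partitions. A quick standalone check (not PostgreSQL code, just
illustrating the math):

    #include <stdio.h>

    /* probability that at least two of 'backends' pick the same
     * partition, assuming each hashes uniformly at random */
    static double
    collision_prob(int backends, int partitions)
    {
        double p_none = 1.0;

        for (int i = 0; i < backends; i++)
            p_none *= 1.0 - (double) i / partitions;
        return 1.0 - p_none;
    }

    int
    main(void)
    {
        /* prints ~0.52 and ~0.50, matching the figures above */
        printf("n=128, k=14: %.2f\n", collision_prob(14, 128));
        printf("n=1024, k=38: %.2f\n", collision_prob(38, 1024));
        return 0;
    }
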
It seems we would also need to change MAX_SIMUL_LWLOCKS if we enlarge
NUM_BUFFER_PARTITIONS. I didn't find any comments describing the relation
between MAX_SIMUL_LWLOCKS and NUM_BUFFER_PARTITIONS, am I missing something?
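
For context, if I read it correctly the limit is a per-backend cap on
simultaneously held LWLocks; paraphrasing src/backend/storage/lmgr/lwlock.c:

    #define MAX_SIMUL_LWLOCKS 200

    /* per-backend bookkeeping of currently held LWLocks */
    static int  num_held_lwlocks = 0;
    static LWLockHandle held_lwlocks[MAX_SIMUL_LWLOCKS];

    /* in LWLockAcquire(): */
    if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
        elog(ERROR, "too many LWLocks taken");

So enlarging NUM_BUFFER_PARTITIONS (currently 128, defined in
src/include/storage/lwlock.h) would only hit this limit if some code path
holds many of the partition locks at the same time.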