Re: Improve LWLock tranche name visibility across backends - Mailing list pgsql-hackers
From: Sami Imseih
Subject: Re: Improve LWLock tranche name visibility across backends
Msg-id: CAA5RZ0uic7Us255mFPJSrMFH7UY-RR8YZcjBzNDzfE4gFREDGQ@mail.gmail.com
In response to: Re: Improve LWLock tranche name visibility across backends (Nathan Bossart <nathandbossart@gmail.com>)
List: pgsql-hackers
>> Attached is a proof of concept that does not alter the
>> LWLockRegisterTranche API.

> IMHO we should consider modifying the API, because right now you have to
> call LWLockRegisterTranche() in each backend. Why not accept the name as
> an argument for LWLockNewTrancheId() and only require it to be called once
> during your shared memory initialization?

Yes, we could do that, and this will simplify tranche registration from the
current two-step process of LWLockNewTrancheId() followed by
LWLockRegisterTranche() to simply LWLockNewTrancheId("my tranche"). I agree.

>> Instead, it detects when a registration is
>> performed by a normal backend and stores the tranche name in shared memory,
>> using a dshash keyed by tranche ID. Tranche name lookup now proceeds in
>> the order of built-in names, the local list, and finally the shared memory.
>> The fallback name "extension" can still be returned if an extension does
>> not register a tranche.

> Why do we need three different places for the lock names? Is there a
> reason we can't put it all in shared memory?

The real reason I felt it was better to keep three separate locations is
that it allows a clear separation between user-defined tranches registered
during postmaster startup and those registered in a normal backend. The
tranches registered during postmaster startup are inherited by the backend
via fork() (or EXEC_BACKEND), so the dshash table will only be used by a
normal backend.

Since DSM is not available during postmaster startup, if we were to create a
DSA segment in place, similar to what's done in StatsShmemInit(), we would
also need to ensure that the initial shared memory is sized appropriately.
It would need to be large enough to accommodate all user-defined tranches
registered during postmaster startup, without relying on new DSM segments.
From my experimentation, this sizing is not as straightforward as simply
calculating (# of tranches) * (size of a tranche entry).
I still think we should create the DSA during postmaster startup, as we do
in StatsShmemInit(), but it would be better if postmaster keeps its hands
off this dshash and only normal backends use it. Thoughts?

>> 2/ What is the appropriate size limit for a tranche name? The work done
>> in [0] caps the tranche name to 128 bytes for the dshash tranche, and
>> 128 bytes + the length of the " DSA" suffix for the dsa tranche. Also,
>> the existing RequestNamedLWLockTranche caps the name to NAMEDATALEN.
>> Currently, LWLockRegisterTranche does not limit the tranche name length.
>> I wonder if we also need to take care of this and implement some common
>> limit that applies to tranche names regardless of how they're created?

> Do we need to set a limit? If we're using a DSA and dshash, we could let
> folks use arbitrarily long tranche names, right? The reason for the limit
> in the DSM registry is because the name is used as the key for the dshash
> table.

Sure, that is a good point. The dshash entry could look like the below,
with no limit on tranche_name:

```
typedef struct LWLockTrancheNamesEntry
{
    int         trancheId;
    const char *tranche_name;
} LWLockTrancheNamesEntry;
```

>> Is there a concern with a custom wait event being created implicitly
>> via the GetNamed* APIs?

> I'm not sure I see any particular advantage to using custom wait events
> versus a dedicated LWLock tranche name table. If anything, the limits on
> the number of tranches and the lengths of the names give me pause.

Sure, after contemplating this a bit, I prefer separate shared memory as
well. Custom wait events, while they could work, would also be a bit of a
confusing user experience.

--
Sami