On Thu, May 09, 2024 at 01:47:45PM +0900, Michael Paquier wrote:
> That sounds fine to do that at the end.. I'm not sure how large this
> chunk area added to each InjectionPointEntry should be, though. 128B
> stored in each InjectionPointEntry would be more than enough I guess?
> Or more like 1024? The in-core module does not need much currently,
> but larger is likely better for pluggability.
Actually, this is leading to simplifications in the module, giving me
the attached:
4 files changed, 117 insertions(+), 134 deletions(-)
So I like your suggestion.  This version closes the race window
between the shmem exit detach in backend A and a concurrent backend B
running a point local to A, so that B will never run A's local point.
However, it can cause a failure in the shmem exit callback of backend
A if backend B detaches a point local to A, because A tracks its local
points with a static list in its TopMemoryContext, at least in the
attached.  The second case could be solved by tracking the list of
local points in the module's InjectionPointSharedState, but is it
really worth the complications it adds in the module, knowing that the
first case would be solid enough?  Perhaps not.
Another idea I have for the second case is to make
InjectionPointDetach() a bit "softer", returning a boolean status
rather than failing if the detach cannot be done, so that the shmem
exit callback can still loop through the entries it has in store.  It
would still be possible for a concurrent backend to do a detach
followed by an attach with the same name, causing the shmem exit
callback to drop a point it should not, but that's not really a
plausible case IMO :)
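A standalone sketch of that "softer" behavior, under the assumption
that detach reports a missing entry with false instead of raising an
error: soft_detach, attach_point and the registry array below are
hypothetical mocks for InjectionPointDetach() and the shared store of
InjectionPointEntry, not the actual implementation.

```c
/* Sketch: a detach that returns a boolean status instead of failing
 * when the entry is already gone, so a cleanup loop can keep going
 * even if another backend detached the point first. */
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_POINTS 8

/* Mock shared registry of attached point names ("" means empty slot). */
static char registry[MAX_POINTS][64];

static void
attach_point(const char *name)
{
    for (int i = 0; i < MAX_POINTS; i++)
        if (registry[i][0] == '\0')
        {
            strncpy(registry[i], name, 63);
            registry[i][63] = '\0';
            return;
        }
}

/* Soft detach: true if the point was found and removed, false if it
 * was already gone, rather than erroring out on a missing entry. */
static bool
soft_detach(const char *name)
{
    for (int i = 0; i < MAX_POINTS; i++)
        if (registry[i][0] != '\0' && strcmp(registry[i], name) == 0)
        {
            registry[i][0] = '\0';
            return true;
        }
    return false;
}
```

With this shape, the shmem exit callback of backend A can simply
ignore a false result for a point that backend B already detached.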
This stuff can be adjusted in subtle ways depending on the cases you
are most interested in. What do you think?
--
Michael