If we take the per-backend slot approach, the locking seems unnecessary, and there are essentially two options:
1) The backend puts the DSM handle in its own slot and notifies the requester to read it.
2) The backend puts the DSM handle in the slot of the requester (and notifies it).
If we go with the first option, the backend that has created the DSM will not know when it's OK to free it, so this has to be the responsibility of the requester. If the requester exits before reading and freeing the DSM, we have a leak. An even bigger problem is that the sending backend can no longer serve a number of concurrent requesters: while its slot is occupied by one DSM handle, it cannot send a reply to another backend until the slot is freed.
With the second option we have all the same problems of not knowing when to free the DSM and potentially leaking it, but at least we can handle concurrent requests.
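For concreteness, a minimal sketch of what the option-2 handoff might look like on the sending side. The ReplySlot structure and reply_slots array are assumptions invented here for illustration, not existing PostgreSQL structures; the DSM and latch calls are the real API. The lifetime question debated below is deliberately left open in the final comment.

    #include "postgres.h"
    #include "storage/dsm.h"
    #include "storage/latch.h"
    #include "storage/proc.h"

    /* Hypothetical per-requester slot; not an existing PostgreSQL structure. */
    typedef struct ReplySlot
    {
        dsm_handle  handle;     /* written by the sender, read by the requester */
        PGPROC     *requester;  /* whose latch the sender sets */
    } ReplySlot;

    extern ReplySlot *reply_slots;  /* hypothetical shared-memory array */

    /* Option 2: publish the reply in the requester's slot and wake it up. */
    static void
    send_reply(int requester_slot, const char *payload, Size len)
    {
        dsm_segment *seg = dsm_create(len, 0);

        memcpy(dsm_segment_address(seg), payload, len);

        reply_slots[requester_slot].handle = dsm_segment_handle(seg);
        SetLatch(&reply_slots[requester_slot].requester->procLatch);

        /*
         * Open question from the thread: when may the sender detach?  Detaching
         * right here would destroy the segment before the requester attaches
         * (unless it was pinned with dsm_pin_segment()), while never detaching
         * risks a leak if the requester goes away without reading the reply.
         */
    }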
That should not be true: the data sender creates the DSM and fills it, then sets the caller's slot and sends a signal to the caller. The caller can free the DSM at any time, because the data sender never touches it again.
But the requester can time out waiting for the reply and exit before it ever sees the reply DSM. Actually, I now don't even think a backend can free a DSM segment it has not created: first it would need to attach it, effectively increasing the refcount, and upon detach it would only decrease the refcount again, but not actually release the segment...
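A matching requester-side sketch, reusing the hypothetical ReplySlot/reply_slots declarations from the sender-side sketch above. It illustrates both points: the requester can time out and leave the segment behind, and dsm_detach() only releases this backend's mapping; whether the segment itself disappears depends on whether any other mapping (or a dsm_pin_segment() pin) remains. The latch flags and PG_WAIT_EXTENSION assume a roughly 9.6-or-later WaitLatch signature.

    #include "postgres.h"
    #include "miscadmin.h"
    #include "pgstat.h"
    #include "storage/dsm.h"
    #include "storage/ipc.h"
    #include "storage/latch.h"

    /* ReplySlot and reply_slots as declared in the sender-side sketch above. */

    static bool
    read_reply(int my_slot, char *buf, Size buflen, long timeout_ms)
    {
        dsm_segment *seg;
        int          rc;

        rc = WaitLatch(MyLatch,
                       WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                       timeout_ms, PG_WAIT_EXTENSION);
        ResetLatch(MyLatch);

        if (rc & WL_POSTMASTER_DEATH)
            proc_exit(1);

        if (!(rc & WL_LATCH_SET) ||
            reply_slots[my_slot].handle == DSM_HANDLE_INVALID)
            return false;       /* timed out: the reply segment is now orphaned */

        seg = dsm_attach(reply_slots[my_slot].handle);
        if (seg == NULL)
            return false;       /* segment already destroyed */

        memcpy(buf, dsm_segment_address(seg), buflen);

        /*
         * This only releases our mapping.  The segment itself is destroyed
         * when the last mapping goes away, or, if it was pinned, only after a
         * dsm_unpin_segment() call.
         */
        dsm_detach(seg);
        return true;
    }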
I am afraid there is no simple and nice solution: once the data sender has to wait for the moment when the data has been received, we have the same complexity as if we used shm_mq.
Wouldn't it be better to introduce a new background worker whose responsibility is to clean up orphaned DSM segments?
Regards
Pavel
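Pavel's cleanup-worker idea could look roughly like the sketch below. The background-worker registration is the real API; slot_is_orphaned(), max_reply_slots and the slot bookkeeping are assumptions, and the sketch presumes the sender pinned each reply segment with dsm_pin_segment(), since an unpinned segment already disappears with its last mapping and only a pinned one can be orphaned in this sense. A real worker would also install a SIGTERM handler.

    #include "postgres.h"
    #include "fmgr.h"
    #include "miscadmin.h"
    #include "pgstat.h"
    #include "postmaster/bgworker.h"
    #include "storage/dsm.h"
    #include "storage/ipc.h"
    #include "storage/latch.h"

    PG_MODULE_MAGIC;

    /* Hypothetical helpers; reply_slots as in the earlier sketches. */
    extern int  max_reply_slots;
    extern bool slot_is_orphaned(int slot);
    extern dsm_handle slot_handle(int slot);

    void dsm_cleanup_main(Datum main_arg);

    void
    dsm_cleanup_main(Datum main_arg)
    {
        BackgroundWorkerUnblockSignals();

        for (;;)
        {
            int     i;
            int     rc;

            /* Unpin (and thereby destroy) replies whose requester is gone. */
            for (i = 0; i < max_reply_slots; i++)
                if (slot_is_orphaned(i))
                    dsm_unpin_segment(slot_handle(i));

            rc = WaitLatch(MyLatch,
                           WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                           10 * 1000L, PG_WAIT_EXTENSION);
            ResetLatch(MyLatch);
            if (rc & WL_POSTMASTER_DEATH)
                proc_exit(1);
        }
    }

    void
    _PG_init(void)
    {
        BackgroundWorker worker;

        memset(&worker, 0, sizeof(worker));
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        worker.bgw_restart_time = 60;
        snprintf(worker.bgw_library_name, BGW_MAXLEN, "reply_cleanup");
        snprintf(worker.bgw_function_name, BGW_MAXLEN, "dsm_cleanup_main");
        snprintf(worker.bgw_name, BGW_MAXLEN, "orphaned DSM cleanup worker");
        RegisterBackgroundWorker(&worker);
    }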
So this has to be the responsibility of the reply-sending backend in the end: to create and release the DSM *at some point*.