On Tue, Sep 8, 2015 at 6:58 AM, Pavel Stehule <
pavel.stehule@gmail.com> wrote:
>>
>> But you will still lock on the slots list to find an unused one. How is that substantially different from what I'm doing?
>
> It is not necessary - you can use a technique similar to what PGPROC does. I am sending a "lock free" demo.
>
> I'm not afraid of locks - short locks are fine when their scope and duration are limited. But there have been a lot of bugs, and fixes along the lines of "make something interruptible", and that is why I prefer the typical design for working with shared memory.
Thanks, this is really helpful! The key difference is that every backend has a dedicated slot, so there's no need to search for a free one, which would again incur locking.
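
To double-check my understanding, here is roughly the shape I now have in mind (an untested sketch of my own, not code from your demo; ExplainSlot and the explain_* names are placeholders):

#include "postgres.h"
#include "miscadmin.h"
#include "port/atomics.h"
#include "storage/backendid.h"
#include "storage/dsm.h"
#include "storage/shmem.h"

/* One entry per backend, so a requester never searches and never locks. */
typedef struct ExplainSlot
{
	pg_atomic_uint32 state;			/* idle / requested / published */
	dsm_handle		 queue_handle;	/* segment holding the shm_mq with the plan */
} ExplainSlot;

static ExplainSlot *slots;			/* MaxBackends entries in shared memory */

/* Called once from the shared-memory startup hook. */
static void
explain_shmem_init(void)
{
	bool		found;

	slots = ShmemInitStruct("explain slots",
							mul_size(MaxBackends, sizeof(ExplainSlot)),
							&found);
	if (!found)
		MemSet(slots, 0, mul_size(MaxBackends, sizeof(ExplainSlot)));
}

/* Each backend only ever touches its own entry. */
static inline ExplainSlot *
my_explain_slot(void)
{
	return &slots[MyBackendId - 1];
}

If that matches what you meant, great.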
>> Well, we are talking about hundreds to thousands of bytes per plan in total. And if my reading of the shm_mq implementation is correct, when the message fits into the shared memory buffer, the receiver gets a direct pointer into shared memory, with no extra allocation or copy to process-local memory. So this can actually be a win.
>
> You have to account for signals and interprocess communication; the cost of memory allocation is not the whole story.
Sure. Anyway, we're talking about only kilobytes being sent in this case, so the whole performance discussion is rather moot.
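
For the archives, the receive path I was describing above looks roughly like this (error handling omitted; receive_plan_text() is a hypothetical helper of mine, and mqh is assumed to be a queue handle attached earlier with shm_mq_attach()):

#include "postgres.h"
#include "storage/shm_mq.h"

/*
 * Read one message from an already-attached queue.  When the message fits
 * in the ring buffer, shm_mq_receive() returns a pointer directly into
 * shared memory, so the only copy we pay for here is the pnstrdup() into
 * backend-local memory.
 */
static char *
receive_plan_text(shm_mq_handle *mqh)
{
	shm_mq_result res;
	Size		nbytes;
	void	   *data;

	res = shm_mq_receive(mqh, &nbytes, &data, false);	/* blocking receive */
	if (res != SHM_MQ_SUCCESS)
		return NULL;			/* sender detached */

	/* data stays valid only while we remain attached, so copy it now */
	return pnstrdup(data, nbytes);
}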
>> The real problem could be if the process that was signaled to connect to the message queue never handles the interrupt, and we keep waiting forever in shm_mq_receive(). We could add a timeout parameter or just let the user cancel the call: send a cancellation request, use pg_cancel_backend() or set statement_timeout before running this.
>
> This is a valid question - to start with we can use statement_timeout, and we don't need to design anything special (as long as you don't hold some important lock).
> My example (the code is prototype quality) is a little longer, but it works without a global lock - the requester doesn't block any other backend.
I'll update the commitfest patch to use this technique.
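
And for the record, the interruptible wait I intend to pair with statement_timeout is roughly the following (again an untested sketch; the function name is made up and the 100 ms poll interval is arbitrary):

#include "postgres.h"
#include "miscadmin.h"
#include "storage/latch.h"
#include "storage/shm_mq.h"

static char *
receive_plan_text_cancellable(shm_mq_handle *mqh)
{
	shm_mq_result res;
	Size		nbytes;
	void	   *data;

	for (;;)
	{
		/* Non-blocking receive, so we can service interrupts between tries. */
		res = shm_mq_receive(mqh, &nbytes, &data, true);
		if (res == SHM_MQ_SUCCESS)
			return pnstrdup(data, nbytes);
		if (res == SHM_MQ_DETACHED)
			return NULL;		/* sender went away */

		/* Lets statement_timeout and pg_cancel_backend() take effect. */
		CHECK_FOR_INTERRUPTS();

		/* Sleep until the sender sets our latch, or 100 ms at most. */
		WaitLatch(MyLatch, WL_LATCH_SET | WL_TIMEOUT, 100L);
		ResetLatch(MyLatch);
	}
}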
--
Alex