Re: shared memory message queues - Mailing list pgsql-hackers

From Andres Freund
Subject Re: shared memory message queues
Date
Msg-id 20131220210405.GC15323@alap2.anarazel.de
In response to Re: shared memory message queues  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: shared memory message queues
Re: shared memory message queues
List pgsql-hackers
Hi,

On 2013-12-18 15:23:23 -0500, Robert Haas wrote:
> It sounds like most people who have looked at this stuff are broadly
> happy with it, so I'd like to push on toward commit soon, but it'd be
> helpful, Andres, if you could review the comment additions to
> shm-mq-v2.patch and see whether those address your concerns.  If so,
> I'll see about improving the overall comments for shm-toc-v1.patch as
> well to clarify the points that seem to have caused a bit of
> confusion; specific thoughts on what ought to be covered there, or any
> other review, is most welcome.

Some things:

* shm_mq_minimum_size should be const

* the MAXALIGNing in shm_mq_create looks odd. We're computing data_offset using MAXALIGN and determining the ring size from it, but then set mq_ring_offset to data_offset - offsetof(shm_mq, mq_ring)?
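For illustration, the arithmetic in question can be sketched standalone (the struct layout and MAXALIGN definition here are simplified stand-ins I made up, not the patch's actual definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAXIMUM_ALIGNOF 8
#define MAXALIGN(LEN) \
    ((((uintptr_t) (LEN)) + ((MAXIMUM_ALIGNOF) - 1)) & \
     ~((uintptr_t) ((MAXIMUM_ALIGNOF) - 1)))

/* Toy stand-in for the shm_mq header; field names are illustrative. */
typedef struct shm_mq_sketch
{
    uint64_t    mq_bytes_read;
    uint64_t    mq_bytes_written;
    uint64_t    mq_ring_size;
    char        mq_ring[1];     /* ring buffer data starts here */
} shm_mq_sketch;

/*
 * data_offset is MAXALIGNed starting from the offset of mq_ring, so
 * subtracting offsetof(shm_mq, mq_ring) again leaves only the alignment
 * padding, always smaller than MAXIMUM_ALIGNOF.
 */
size_t
ring_offset_from_mq_ring(void)
{
    size_t      data_offset = MAXALIGN(offsetof(shm_mq_sketch, mq_ring));

    return data_offset - offsetof(shm_mq_sketch, mq_ring);
}
```

If that subtraction is only meant to recover the padding, it works, but the mixture of absolute and ring-relative offsets is what reads oddly.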
 

* same problem as the toc stuff with a dynamic number of spinlocks.

* you don't seem to initialize the spinlock anywhere. That's clearly not ok, independent of the spinlock implementation.
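To be concrete about why the create path needs this, here's a toy sketch using a C11 atomic_flag as a stand-in for slock_t/SpinLockInit() (illustrative only, not PostgreSQL code):

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

/* Stand-in for slock_t; real code would use SpinLockInit(). */
typedef atomic_flag sketch_slock;

typedef struct
{
    sketch_slock mq_mutex;
    int          mq_ring_size;
} sketch_mq;

/*
 * Freshly mapped shared memory can contain arbitrary bytes, which a
 * spinlock implementation may interpret as "already held" -- so the
 * create path must initialize the lock explicitly.
 */
void
sketch_mq_create(sketch_mq *mq, int ring_size)
{
    atomic_flag_clear(&mq->mq_mutex);   /* analogue of SpinLockInit() */
    mq->mq_ring_size = ring_size;
}

/* Returns 1 if the lock was acquired, 0 if it was already held. */
int
sketch_try_lock(sketch_mq *mq)
{
    return !atomic_flag_test_and_set(&mq->mq_mutex);
}
```

Without the explicit clear, garbage in the segment could leave the lock permanently "held" from the start.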

* The comments around shm_mq/shm_mq_handle are a clear improvement. I'd be pretty happy if you additionally add something akin to 'This struct describes the shared state of a queue' and 'Backend-local reference to a queue'.

* s/MQH_BUFSIZE/MQH_INITIAL_BUFSIZE/, I was already wondering whether there's a limit on the max number of bytes.

* I think shm_mq_attach()'s documentation should mention that we'll allocate memory in the current CurrentMemoryContext when attaching, both initially and later. That will likely require some care for some callers. Yes, it's documented in a bigger comment somewhere else, but still.

* I don't really understand the mqh_consume_pending stuff. If we just read a message spanning from the beginning to the end of the ringbuffer, without wrapping, we might not confirm the read in shm_mq_receive() until the next time it is called? Do I understand that correctly?
 
I am talking about the following and other similar pieces of code:
+        if (rb >= needed)
+        {
+            /*
+             * Technically, we could consume the message length information at
+             * this point, but the extra write to shared memory wouldn't be
+             * free and in most cases we would reap no benefit.
+             */
+            mqh->mqh_consume_pending = needed;
+            *nbytesp = nbytes;
+            *datap = ((char *) rawdata) + MAXALIGN64(sizeof(uint64));
+            return SHM_MQ_SUCCESS;
+        }
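If I'm reading it right, the deferred confirmation behaves roughly like this toy model (names and structure are mine, not the patch's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy reader state: bytes_read is the counter the sender watches (it
 * would live in shared memory); consume_pending is backend-local and
 * applied lazily.
 */
typedef struct
{
    uint64_t    bytes_read;
    uint64_t    consume_pending;
} sketch_reader;

/*
 * Lazily confirm the previous message, then hand out the next len bytes.
 * The just-read message is only confirmed on the *next* call.
 */
void
sketch_receive(sketch_reader *r, uint64_t len)
{
    if (r->consume_pending > 0)
    {
        r->bytes_read += r->consume_pending;    /* deferred shared write */
        r->consume_pending = 0;
    }
    r->consume_pending = len;
}
```

That would mean a sender blocked on a full ring can't make progress until the receiver calls shm_mq_receive() again, which is the behavior I'm asking about.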

* ISTM the messages around needing to use the same arguments for send/receive again after SHM_MQ_WOULD_BLOCK could stand to be more forceful. "In this case, the caller should call this function again after the process latch has been set." doesn't make it all that clear that failure to do so will corrupt the next message. Maybe there should also be an assert checking that the size is the same when retrying as when starting a partial read?
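The sort of assert I have in mind, in a self-contained toy sender (names are illustrative, not the patch's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy sender handle: on "would block" we remember how much of the message
 * was already streamed into the ring.  A retry with a different length
 * would corrupt the queue, which an assertion could catch early.
 */
typedef struct
{
    uint64_t    partial_bytes;      /* bytes of current message already sent */
    uint64_t    expected_nbytes;    /* remembered length of message in progress */
} sketch_sender;

/* Returns 0 for "would block" on first call, 1 for "done" on the retry. */
int
sketch_send(sketch_sender *s, uint64_t nbytes)
{
    if (s->partial_bytes > 0)
    {
        /* the proposed check: a retry must pass the same arguments */
        assert(nbytes == s->expected_nbytes);
        s->partial_bytes = 0;
        return 1;
    }
    s->expected_nbytes = nbytes;
    s->partial_bytes = nbytes / 2;  /* pretend half fit before blocking */
    return 0;
}
```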
 

* You have some CHECK_FOR_INTERRUPTS(), are you sure the code is safe to longjmp out from there in all the cases? E.g. I am not sure it is for shm_mq_send_bytes(), when we've first written a partial message, done a shm_mq_inc_bytes_written() and then go into the available == 0 branch and jump out.
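A toy illustration of why jumping out mid-send is worrying (setjmp/longjmp stand in for PostgreSQL's error handling, and all names are mine):

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf sketch_env;

typedef struct
{
    int     bytes_written;  /* analogue of the shared bytes-written counter */
    int     mid_message;    /* nonzero while a message is partially sent */
} sketch_q;

/*
 * Bump the shared counter for a partial message, then "error out" via
 * longjmp before finishing -- leaving the queue mid-message.
 */
void
sketch_send_bytes(sketch_q *q, int interrupt)
{
    q->bytes_written += 4;          /* analogue of shm_mq_inc_bytes_written() */
    q->mid_message = 1;
    if (interrupt)
        longjmp(sketch_env, 1);     /* analogue of CHECK_FOR_INTERRUPTS() erroring out */
    q->mid_message = 0;
}
```

The point is only that the counter has advanced while the message is unfinished; whether that's actually unsafe in shm_mq_send_bytes() is the question.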
 

* Do I understand it correctly that mqh->mqh_counterparty_attached is just sort of a cache, and we'll still detect a detached counterparty when we're actually sending/receiving?

That's it for now.

Greetings,

Andres Freund

--
Andres Freund                       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


