Thread: Race conditions in shm_mq.c
During my experiments with parallel workers I sometimes saw both the "master"
and the worker process blocked. The master uses a shm queue to send data to the
worker; both sides have nowait==false. I concluded that the following happened:

The worker process set itself as a receiver on the queue after
shm_mq_wait_internal() had completed its first check of "ptr", so this
function left the sender's procLatch in the reset state. But before the
procLatch was reset, the receiver still managed to read some data and set the
sender's procLatch to signal the reading, and eventually called its (the
receiver's) WaitLatch().

So the sender effectively missed the receiver's notification and called
WaitLatch() too. (If the receiver is already waiting on its latch, it does not
help for the sender to call shm_mq_notify_receiver(): the receiver won't do
anything because there's no new data in the queue.)

Below is my patch proposal.

-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at

diff --git a/src/backend/storage/ipc/shm_mq.c b/src/backend/storage/ipc/shm_mq.c
index 126cb07..39ea673 100644
--- a/src/backend/storage/ipc/shm_mq.c
+++ b/src/backend/storage/ipc/shm_mq.c
@@ -803,6 +808,22 @@ shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
 				return SHM_MQ_DETACHED;
 			}
 			mqh->mqh_counterparty_attached = true;
+
+			/*
+			 * Re-check if the queue is still full.
+			 *
+			 * While we were using our procLatch to detect receiver's
+			 * connection, the receiver could have connected and started
+			 * reading from it - that includes concurrent manipulation of
+			 * the latch, in order to report on reading activity. Thus we
+			 * could miss the information that some data has already been
+			 * consumed, and cause a deadlock by calling SetLatch() below.
+			 *
+			 * (If the receiver starts waiting on its latch soon enough,
+			 * our call of shm_mq_notify_receiver() will have no effect.)
+			 */
+			if (!nowait)
+				continue;
 		}

 		/* Let the receiver know that we need them to read some data. */
On Thu, Aug 6, 2015 at 10:10 AM, Antonin Houska <ah@cybertec.at> wrote:
> During my experiments with parallel workers I sometimes saw the "master" and
> worker process blocked. The master uses shm queue to send data to the worker,
> both sides nowait==false. I concluded that the following happened:
>
> The worker process set itself as a receiver on the queue after
> shm_mq_wait_internal() has completed its first check of "ptr", so this
> function left sender's procLatch in reset state. But before the procLatch was
> reset, the receiver still managed to read some data and set sender's procLatch
> to signal the reading, and eventually called its (receiver's) WaitLatch().
>
> So sender has effectively missed the receiver's notification and called
> WaitLatch() too (if the receiver already waits on its latch, it does not help
> for sender to call shm_mq_notify_receiver(): receiver won't do anything
> because there's no new data in the queue).
>
> Below is my patch proposal.

Another good catch. However, I would prefer to fix this without
introducing a "continue", as I think that will make the control flow
clearer. Therefore, I propose the attached variant of your idea.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment
On Thu, Aug 6, 2015 at 2:38 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Aug 6, 2015 at 10:10 AM, Antonin Houska <ah@cybertec.at> wrote:
>> During my experiments with parallel workers I sometimes saw the "master" and
>> worker process blocked. The master uses shm queue to send data to the worker,
>> both sides nowait==false. I concluded that the following happened:
>>
>> The worker process set itself as a receiver on the queue after
>> shm_mq_wait_internal() has completed its first check of "ptr", so this
>> function left sender's procLatch in reset state. But before the procLatch was
>> reset, the receiver still managed to read some data and set sender's procLatch
>> to signal the reading, and eventually called its (receiver's) WaitLatch().
>>
>> So sender has effectively missed the receiver's notification and called
>> WaitLatch() too (if the receiver already waits on its latch, it does not help
>> for sender to call shm_mq_notify_receiver(): receiver won't do anything
>> because there's no new data in the queue).
>>
>> Below is my patch proposal.
>
> Another good catch. However, I would prefer to fix this without
> introducing a "continue" as I think that will make the control flow
> clearer. Therefore, I propose the attached variant of your idea.

Err, that doesn't work at all. Have a look at this version instead.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment
Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Aug 6, 2015 at 2:38 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> > On Thu, Aug 6, 2015 at 10:10 AM, Antonin Houska <ah@cybertec.at> wrote:
> >> During my experiments with parallel workers I sometimes saw the "master" and
> >> worker process blocked. The master uses shm queue to send data to the worker,
> >> both sides nowait==false. I concluded that the following happened:
> >>
> >> The worker process set itself as a receiver on the queue after
> >> shm_mq_wait_internal() has completed its first check of "ptr", so this
> >> function left sender's procLatch in reset state. But before the procLatch was
> >> reset, the receiver still managed to read some data and set sender's procLatch
> >> to signal the reading, and eventually called its (receiver's) WaitLatch().
> >>
> >> So sender has effectively missed the receiver's notification and called
> >> WaitLatch() too (if the receiver already waits on its latch, it does not help
> >> for sender to call shm_mq_notify_receiver(): receiver won't do anything
> >> because there's no new data in the queue).
> >>
> >> Below is my patch proposal.
> >
> > Another good catch. However, I would prefer to fix this without
> > introducing a "continue" as I think that will make the control flow
> > clearer. Therefore, I propose the attached variant of your idea.
>
> Err, that doesn't work at all. Have a look at this version instead.

This makes sense to me.

One advantage of "continue" was that I could apply the patch to my test code
(containing the appropriate sleep() calls, to simulate the race conditions)
with no conflicts and see the difference. The restructuring you did does not
allow for such "mechanical" testing, but it's clear enough.

-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at
On Thu, Aug 6, 2015 at 5:59 PM, Antonin Houska <ah@cybertec.at> wrote:
> Robert Haas <robertmhaas@gmail.com> wrote:
>> On Thu, Aug 6, 2015 at 2:38 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> > On Thu, Aug 6, 2015 at 10:10 AM, Antonin Houska <ah@cybertec.at> wrote:
>> >> During my experiments with parallel workers I sometimes saw the "master" and
>> >> worker process blocked. The master uses shm queue to send data to the worker,
>> >> both sides nowait==false. I concluded that the following happened:
>> >>
>> >> The worker process set itself as a receiver on the queue after
>> >> shm_mq_wait_internal() has completed its first check of "ptr", so this
>> >> function left sender's procLatch in reset state. But before the procLatch was
>> >> reset, the receiver still managed to read some data and set sender's procLatch
>> >> to signal the reading, and eventually called its (receiver's) WaitLatch().
>> >>
>> >> So sender has effectively missed the receiver's notification and called
>> >> WaitLatch() too (if the receiver already waits on its latch, it does not help
>> >> for sender to call shm_mq_notify_receiver(): receiver won't do anything
>> >> because there's no new data in the queue).
>> >>
>> >> Below is my patch proposal.
>> >
>> > Another good catch. However, I would prefer to fix this without
>> > introducing a "continue" as I think that will make the control flow
>> > clearer. Therefore, I propose the attached variant of your idea.
>>
>> Err, that doesn't work at all. Have a look at this version instead.
>
> This makes sense to me.
>
> One advantage of "continue" was that I could apply the patch to my test code
> (containing the appropriate sleep() calls, to simulate the race conditions)
> with no conflicts and see the difference. The restructuring you do does not
> allow for such a "mechanical" testing, but it's clear enough.

OK, I've committed that and back-patched it to 9.5 and 9.4.
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company