New locking code

From
Bruce Momjian
Date:
I was reading my "Unix Internals: New Frontiers" book by Vahalia
yesterday.  In his discussion of multi-CPU kernels and read/write
locks, he says:


---------------------------------------------------------------------------

When the last active reader releases its shared lock, it must wake up a
single waiting writer.

When a writer releases a lock, it must choose whether to wake up another
writer or the other readers, assuming both readers and writers are
waiting.  If writers are given preference, the readers could starve
indefinitely under heavy contention.  The preferred solution is to wake
up all waiting readers when releasing an exclusive lock.  If there are
no waiting readers, wake up a single waiting writer.

The scheme can lead to writer starvation.  If there is a constant stream
of readers, they will keep the resource read-locked, and the writer
will never acquire the lock.  To avoid this situation, a lockShared()
request must block if there is any waiting writer, even though the
resource is currently only read-locked.  Such a solution, under heavy
contention, will alternate access between individual writer and batches
of readers.


---------------------------------------------------------------------------

The attached patch does something similar to what he is suggesting.
My change is to keep a single waiting writer at the front of the queue,
even if the current lock holder is a writer.  Any readers arriving
while the current holder or the new writer has the lock will be bunched
together and given the shared lock after the writer releases it.  It
just doesn't seem fair for readers to jump in front of a single waiting
writer.  If multiple writers are waiting, they will be behind the
readers, except when a late-arriving reader can't share the read lock
and sits behind the first writer, in which case access alternates.

What do people think of this patch?  Is his way better, to always put
the readers at the front if the current lock holder is a writer?

The patch basically reverses the queue order, putting the readers before
the writers, except that now, a single writer can sit at the front of
the queue.  This structure also makes it easy for us to check to see if
we should allow a new reader to share a readlock.  We can quickly check
the front of the queue to see if there is a writer waiting.

---------------------------------------------------------------------------


*** ./backend/storage/lmgr/proc.c.orig    Wed Feb 18 13:48:11 1998
--- ./backend/storage/lmgr/proc.c    Wed Feb 18 14:15:09 1998
***************
*** 451,465 ****
            int prio,
            LOCK *lock)
  {
!     int            i;
      PROC       *proc;
      struct itimerval timeval,
                  dummy;

      proc = (PROC *) MAKE_PTR(waitQueue->links.prev);
!     for (i = 0; i < waitQueue->size; i++)
      {
!         if (proc->prio >= prio)
              proc = (PROC *) MAKE_PTR(proc->links.prev);
          else
              break;
--- 451,490 ----
            int prio,
            LOCK *lock)
  {
!     int            i = 0;
      PROC       *proc;
      struct itimerval timeval,
                  dummy;

      proc = (PROC *) MAKE_PTR(waitQueue->links.prev);
!     /*
!      *    If the first entry in the waitQueue has a greater priority than
!      *    we have, we must be a reader, and he must be a writer, and we
!      *    must be here because the current holder is a writer or a
!      *    reader but we don't share shared locks if a writer is waiting.
!      *    We put ourselves after the first writer.  What this hopefully
!      *    does is to bunch up all the waiting readers first, unless there
!      *    is a waiting writer, in which case all the waiting readers are
!      *    after the first waiting writer.  This way, we alternate between
!      *    writers and bunches of readers and don't starve anyone. So the
!      *    queue becomes:
!      *
!      *    [optional writer][reader][reader][reader][writer][writer][writer]
!      *
!      *    In a full queue, we would have a writer holding a lock, then another
!      *    writer gets the lock, then a bunch of readers, it repeats.
!      *    This may seem strange, but because new readers share the lock even
!      *    if older writers are waiting, this seems only fair. - bjm
!      */
!     if (i < waitQueue->size)
!         if (proc->prio > prio)
!         {
!             proc = (PROC *) MAKE_PTR(proc->links.prev);
!             i++;
!         }
!     for (; i < waitQueue->size; i++)
      {
!         if (proc->prio <= prio)
              proc = (PROC *) MAKE_PTR(proc->links.prev);
          else
              break;
***************
*** 596,603 ****

          /*
           * ProcWakeup removes proc from the lock waiting process queue and
!          * returns the next proc in chain.    If a writer just dropped its
!          * lock and there are several waiting readers, wake them all up.
           */
          proc = ProcWakeup(proc, NO_ERROR);

--- 621,627 ----

          /*
           * ProcWakeup removes proc from the lock waiting process queue and
!          * returns the next proc in chain.
           */
          proc = ProcWakeup(proc, NO_ERROR);


--
Bruce Momjian
maillist@candle.pha.pa.us

Re: [HACKERS] New locking code

From
ocie@paracel.com
Date:
Bruce Momjian wrote:

[snip]
> What do people think of this patch?  Is his way better, to always put
> the readers at the front if the current lock holder is a writer?

This way seems to make sense to me because it provides two guarantees:

1) Neither readers nor writers can ever be starved when trying to
access an object.

2) The system implements a FIFO queue, i.e., a reader can't "cut in
line" in front of a waiting writer just because the current access
mode is shared.  The operations that have been waiting the longest get
serviced first.

Ocie Mitchell

Re: [HACKERS] New locking code

From
Bruce Momjian
Date:
> 2) The system implements a FIFO queue, i.e., a reader can't "cut in
> line" in front of a waiting writer just because the current access
> mode is shared.  The operations that have been waiting the longest get
> serviced first.

Thinking about this FIFO comment, here is a better fix.  It implements
a FIFO for the lock wait queue, except that it groups readers together
in the queue.  The major change is that I now loop past as many writers
as may be at the head of the queue, then insert the reader into the
queue.  If there are already readers in the queue after the writers, I
add the new one to the end of the existing readers.  The source code
comments are changed and provide a more detailed explanation.

I am planning to put this into 6.3, to fix a possible reader starvation
problem with the new code.

---------------------------------------------------------------------------

*** ./backend/storage/lmgr/proc.c.orig    Tue Jan 27 21:29:29 1998
--- ./backend/storage/lmgr/proc.c    Wed Feb 18 20:10:04 1998
***************
*** 451,469 ****
            int prio,
            LOCK *lock)
  {
!     int            i;
      PROC       *proc;
      struct itimerval timeval,
                  dummy;

      proc = (PROC *) MAKE_PTR(waitQueue->links.prev);
!     for (i = 0; i < waitQueue->size; i++)
!     {
!         if (proc->prio >= prio)
!             proc = (PROC *) MAKE_PTR(proc->links.prev);
!         else
!             break;
!     }

      MyProc->prio = prio;
      MyProc->token = token;
--- 451,492 ----
            int prio,
            LOCK *lock)
  {
!     int            i = 0;
      PROC       *proc;
      struct itimerval timeval,
                  dummy;

+     /*
+      *    If the first entries in the waitQueue have a greater priority than
+      *    we have, we must be a reader, and they must be writers, and we
+      *    must be here because the current holder is a writer or a
+      *    reader but we don't share shared locks if a writer is waiting.
+      *    We put ourselves after the writers.  This way, we have a FIFO, but
+      *    keep the readers together to give them decent priority, and no one
+      *    starves.  Because we group all readers together, a non-empty queue
+      *    only has a few possible configurations:
+      *
+      *    [readers]
+      *    [writers]
+      *    [readers][writers]
+      *    [writers][readers]
+      *    [writers][readers][writers]
+      *
+      *    In a full queue, we would have a reader holding a lock, then a
+      *    writer gets the lock, then a bunch of readers, made up of readers
+      *    who could not share the first readlock because a writer was waiting,
+      *    and new readers arriving while the writer had the lock.
+      *
+      */
      proc = (PROC *) MAKE_PTR(waitQueue->links.prev);
!
!     /* If we are a reader, and they are writers, skip past them */
!     while (i++ < waitQueue->size && proc->prio > prio)
!         proc = (PROC *) MAKE_PTR(proc->links.prev);
!
!     /* The rest of the queue is FIFO, with readers first, writers last */
!     while (i++ < waitQueue->size && proc->prio <= prio)
!         proc = (PROC *) MAKE_PTR(proc->links.prev);

      MyProc->prio = prio;
      MyProc->token = token;
***************
*** 596,603 ****

          /*
           * ProcWakeup removes proc from the lock waiting process queue and
!          * returns the next proc in chain.    If a writer just dropped its
!          * lock and there are several waiting readers, wake them all up.
           */
          proc = ProcWakeup(proc, NO_ERROR);

--- 619,625 ----

          /*
           * ProcWakeup removes proc from the lock waiting process queue and
!          * returns the next proc in chain.
           */
          proc = ProcWakeup(proc, NO_ERROR);



--
Bruce Momjian
maillist@candle.pha.pa.us