locking for unique hash indexes - Mailing list pgsql-hackers

From: Neil Conway
Subject: locking for unique hash indexes
Msg-id: 1063758747.24276.29.camel@tokyo
List: pgsql-hackers
I'm working on adding support for unique indexes to the hash index code.
The majority of the code is finished, with the exception of the changes
to locking. My proposed changes are below -- if anyone sees a better way
to do things, let me know.

There's lots of info on the locking scheme used by hash indexes in
src/backend/access/hash/README, but I'll present a short summary here.
(I've omitted some unimportant details.) Insertions do the following
(a rough sketch in C follows the list):
1. Look up the bucket in which to store the index item, based on its
   hash (we acquire some locks to ensure the hash-key-to-bucket mapping
   doesn't change under our feet).

2. Acquire a shared lmgr lock on the target bucket, to ensure that no
   one tries to split or compact it while we're looking at it.

3. Acquire an exclusive lwlock on the primary bucket page. This
   prevents hash scans from seeing the partially written page.

4. If the primary bucket page has enough free space to fit the new
   index item, insert it and release the lwlock; if not, release the
   lwlock, advance to the next overflow page, and repeat. If we're out
   of overflow pages but haven't found free space, create a new
   overflow page.

5. Release the shared lmgr lock on the target bucket.
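
To make that concrete, here is a toy model of the sequence in C. This
is not the actual hash AM code: the structures and helper names are
invented, a pthread rwlock stands in for the lmgr bucket lock, and the
per-page rwlocks stand in for buffer lwlocks.

#include <pthread.h>
#include <stdlib.h>

#define PAGE_CAPACITY 4

typedef struct Page
{
    pthread_rwlock_t lwlock;        /* stand-in for the buffer lwlock */
    int              nitems;
    int              items[PAGE_CAPACITY];
    struct Page     *next;          /* next overflow page, or NULL */
} Page;

typedef struct Bucket
{
    pthread_rwlock_t lmgr_lock;     /* stand-in for the lmgr bucket lock */
    Page             primary;
} Bucket;

static Page *
new_overflow_page(void)
{
    Page *p = calloc(1, sizeof(Page));

    pthread_rwlock_init(&p->lwlock, NULL);
    return p;
}

/* Steps 2-5; step 1 (finding the target bucket) is assumed already done. */
static void
hash_insert(Bucket *bucket, int item)
{
    Page *page = &bucket->primary;

    /* 2. shared lmgr lock: nobody splits or compacts the bucket meanwhile */
    pthread_rwlock_rdlock(&bucket->lmgr_lock);

    for (;;)
    {
        /* 3. exclusive lwlock on the page we're about to examine */
        pthread_rwlock_wrlock(&page->lwlock);

        /* 4. insert if there's room; otherwise move on to the next
         * overflow page, creating one if we've run out */
        if (page->nitems < PAGE_CAPACITY)
        {
            page->items[page->nitems++] = item;
            pthread_rwlock_unlock(&page->lwlock);
            break;
        }
        if (page->next == NULL)
            page->next = new_overflow_page();

        Page *next = page->next;

        pthread_rwlock_unlock(&page->lwlock);
        page = next;
    }

    /* 5. release the shared lmgr lock on the bucket */
    pthread_rwlock_unlock(&bucket->lmgr_lock);
}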
 

Scans use a similar algorithm, with the exception that they need only
acquire a shared lwlock on the bucket's pages.

The additional requirement for unique hash indexes is to be able to
prevent other backends from concurrently inserting a duplicate key into
the target bucket chain.

In order to do this, we could:

- Invent a new set of lmgr locks; call them "right of insertion" locks,
and have one for each bucket in the hash index. Only one backend will
hold the ROI lock for a given bucket at any given time.

- On insertion, grab the ROI lock in exclusive mode for the target
bucket, after we've already done #2. This serializes insertions into the
bucket (see the sketch after this list).

- Index scans don't need to acquire the ROI lock, since they can't
possibly insert a duplicate, and they're already protected from seeing
partially written pages by the lwlocks acquired by the inserting
backend. Similarly, bulk deletions don't need to acquire the ROI lock,
since they already can't proceed in parallel with an insertion on any
given bucket (they acquire #2 in exclusive mode). Insertions on other
buckets in the index can proceed in parallel with one another, since by
definition they couldn't be inserting duplicate tuples.

- Naturally, we don't bother with ROI locks if the hash index isn't a
unique index.
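
To illustrate how that fits together, here is the unique-index
insertion path added to the toy model from earlier in this mail. Again,
everything is invented for illustration, not real backend code:
bucket_contains_key() stands in for a walk of the bucket chain looking
for an equal key, and insert_into_chain() is just steps 3-4 from the
earlier sketch.

#include <stdbool.h>

typedef struct UniqueBucket
{
    pthread_rwlock_t lmgr_lock;     /* shared bucket lock, as before */
    pthread_mutex_t  roi_lock;      /* proposed ROI lock (always exclusive) */
    Page             primary;
} UniqueBucket;

static void insert_into_chain(Page *primary, int item);    /* steps 3-4 */

/*
 * Walk the bucket chain looking for 'key', taking each page's lwlock in
 * shared mode, just as an index scan would.
 */
static bool
bucket_contains_key(Page *page, int key)
{
    while (page != NULL)
    {
        bool  found = false;
        Page *next;
        int   i;

        pthread_rwlock_rdlock(&page->lwlock);
        for (i = 0; i < page->nitems; i++)
            if (page->items[i] == key)
                found = true;
        next = page->next;
        pthread_rwlock_unlock(&page->lwlock);

        if (found)
            return true;
        page = next;
    }
    return false;
}

/* Returns false, without inserting, if 'key' is already in the bucket. */
static bool
hash_insert_unique(UniqueBucket *bucket, int key)
{
    bool ok;

    /* step 2, as before */
    pthread_rwlock_rdlock(&bucket->lmgr_lock);

    /*
     * New step: serialize insertions into this bucket.  Scans and bulk
     * deletions never take this lock, so they are unaffected.
     */
    pthread_mutex_lock(&bucket->roi_lock);

    ok = !bucket_contains_key(&bucket->primary, key);
    if (ok)
        insert_into_chain(&bucket->primary, key);   /* steps 3-4, as before */

    pthread_mutex_unlock(&bucket->roi_lock);

    /* step 5 */
    pthread_rwlock_unlock(&bucket->lmgr_lock);

    return ok;
}

The point is that the duplicate check and the insertion happen under
the same ROI lock, so two backends inserting equal keys into the same
bucket can't both pass the check, while insertions into different
buckets never touch each other's ROI locks.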

Q: Is there a possibility of deadlock here? (If not, the ROI lock could
be an lwlock, perhaps.)

Any comments would be welcome.

-Neil

P.S. While we're on the subject of hash indexes and locking, ISTM that
we could get better concurrent performance in #4 by first acquiring the
lwlock on a particular bucket page in shared mode, checking if it has
free space, and only if it does, getting a write lock on it and doing
the insertion. Of course, we'd need to re-check the free space once we
got the write lock, to ensure that someone else wasn't granted a write
lock on that page while we were waiting and then decreased the free
space.
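
Against the toy structures from earlier in this mail, the per-page
logic would become something like the following (again just a sketch
with invented names, not backend code):

static bool
try_insert_on_page(Page *page, int item)
{
    bool looks_free;
    bool inserted = false;

    /* Cheap probe under a shared lock, so concurrent scans aren't blocked. */
    pthread_rwlock_rdlock(&page->lwlock);
    looks_free = (page->nitems < PAGE_CAPACITY);
    pthread_rwlock_unlock(&page->lwlock);

    if (!looks_free)
        return false;               /* caller advances to the next page */

    /*
     * Someone may have filled the page between the two lock acquisitions,
     * so the free-space check has to be repeated under the write lock.
     */
    pthread_rwlock_wrlock(&page->lwlock);
    if (page->nitems < PAGE_CAPACITY)
    {
        page->items[page->nitems++] = item;
        inserted = true;
    }
    pthread_rwlock_unlock(&page->lwlock);

    return inserted;
}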


