Re: GSoC on WAL-logging hash indexes - Mailing list pgsql-hackers

From Heikki Linnakangas
Subject Re: GSoC on WAL-logging hash indexes
Msg-id 531992AF.2080306@vmware.com
In response to Re: GSoC on WAL-logging hash indexes  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: GSoC on WAL-logging hash indexes
List pgsql-hackers
On 03/06/2014 09:34 PM, Robert Haas wrote:
> On Thu, Mar 6, 2014 at 8:11 AM, Heikki Linnakangas
> <hlinnakangas@vmware.com> wrote:
>> I don't think it's necessary to improve concurrency just to get WAL-logging.
>> Better concurrency is a worthy goal of its own, of course, but it's a
>> separate concern.
>
> To some extent, I agree, but only to some extent.  To make hash
> indexes generally usable, we really need to solve both problems.  When
> I got rid of just some of the heavyweight locking in commit
> 76837c1507cb5a5a0048046233568447729e66dd, the results were pretty
> dramatic at higher levels of concurrency:
>
> http://www.postgresql.org/message-id/CA+Tgmoaf=nOJxLyzGcbrrY+pe-0VLL0vfHi6tjdM3fFtVwsOmw@mail.gmail.com
>
> But there was still an awful lot of contention inside the heavyweight
> lock manager, and I don't think hash indexes are going to be ready for
> prime time until we solve that problem.

Hmm. You suggested ensuring that a scan always has at least a pin, and 
that a split takes a vacuum-lock. That ought to work. There's no need for 
the more complicated maneuvers you described; ISTM that you can just 
replace the heavy-weight share lock with holding a pin on the primary 
page of the bucket, and the exclusive lock with a vacuum-lock. Note that 
_hash_expandtable already takes the exclusive lock conditionally, i.e. if 
it doesn't get the lock immediately it just gives up. We could do the 
same with the cleanup lock.
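To make the idea concrete, here's a toy model of that scheme in plain C. This is not the actual buffer-manager code; `BufferState`, `scan_pin` and `try_cleanup_lock` are made-up names, and `pin_count` stands in for the shared buffer's reference count:

```c
#include <stdbool.h>

/* Toy model of the proposed locking scheme: a scan holds only a pin
 * on the bucket's primary page, and a split needs a cleanup
 * ("vacuum") lock, which is granted only when the caller's pin is
 * the sole pin on the page. */
typedef struct
{
    int pin_count;              /* backends currently pinning the page */
} BufferState;

static void
scan_pin(BufferState *buf)      /* replaces the heavyweight share lock */
{
    buf->pin_count++;
}

static void
scan_unpin(BufferState *buf)
{
    buf->pin_count--;
}

/* A split takes the lock conditionally, mirroring how
 * _hash_expandtable already just gives up if the lock isn't
 * immediately free: if any other backend still holds a pin,
 * return false instead of waiting. */
static bool
try_cleanup_lock(const BufferState *buf)
{
    return buf->pin_count == 1; /* only our own pin remains */
}
```

The key property is the same one the heavyweight locks provide today: a split can't proceed while any scan is still in the bucket, but scans never block each other.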

Vacuum could also be enhanced. It currently takes an exclusive lock on 
the bucket, then removes any dead tuples and finally "squeezes" the 
bucket by moving tuples to earlier pages. But you only really need the 
exclusive lock for the squeeze phase. You could do the dead-tuple 
removal without the bucket lock, and only grab it for the squeeze phase. 
And the squeezing is optional, so you could just skip that if you can't 
get the lock. But that's a separate patch as well.
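A sketch of that two-phase vacuum, again as an illustrative plain-C model rather than the real vacuum code (the structure and function names, and the one-page "squeeze", are all made up):

```c
#include <stdbool.h>

/* Toy model of the proposed two-phase hash VACUUM: dead-tuple
 * removal runs without the bucket-wide lock, while the optional
 * squeeze phase runs only when the cleanup lock is free. */
typedef struct
{
    int nlive;           /* live tuples in the bucket */
    int ndead;           /* dead tuples awaiting removal */
    int npages;          /* primary page plus overflow pages */
} HashBucket;

/* Phase 1: remove dead tuples.  Needs only page-at-a-time locking,
 * so it never has to wait out concurrent scans of the bucket. */
static void
vacuum_remove_dead(HashBucket *b)
{
    b->ndead = 0;
}

/* Phase 2: squeeze tuples onto earlier pages.  This moves tuples
 * between pages, so it needs the cleanup lock -- but squeezing is
 * optional, so it's simply skipped when the lock isn't free. */
static bool
vacuum_squeeze(HashBucket *b, bool got_cleanup_lock)
{
    if (!got_cleanup_lock)
        return false;            /* skipped; try again next VACUUM */
    if (b->npages > 1)
        b->npages--;             /* toy squeeze: free one overflow page */
    return true;
}
```

The point is that the expensive lock is held only for the part that actually rearranges tuples across pages, and even that part can be abandoned without losing correctness.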

One more thing we could do to make hash indexes more scalable, 
independent of the above: Cache the information in the metapage in 
backend-private memory. Then you (usually) wouldn't need to access the 
metapage at all when scanning. Store a copy of the bitmask for that 
bucket in the primary page, and when scanning, check that it matches the 
cached value. If not, refresh the cached metapage and try again.
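That cache-validation dance could look roughly like this (illustrative C only; `highmask` stands in for the bitmask, and none of these structures are PostgreSQL's real ones):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the proposed metapage cache.  The bucket's primary page
 * stores a copy of the bitmask that was current for that bucket, and
 * a scan trusts its backend-private copy of the metapage until the
 * bucket's stored bitmask disagrees with it. */
typedef struct
{
    uint32_t highmask;      /* bitmask mapping hash values to buckets */
} MetaPage;

typedef struct
{
    uint32_t highmask;      /* copy stored when the bucket last split */
} BucketPage;

static MetaPage shared_meta;          /* stands in for the on-disk metapage */
static MetaPage cached_meta;          /* backend-private cache */
static bool     cache_valid = false;

/* Returns the bitmask to scan with, touching the shared metapage
 * only when the bucket's stored copy disagrees with the cache. */
static uint32_t
scan_get_mask(const BucketPage *bucket)
{
    if (!cache_valid || cached_meta.highmask != bucket->highmask)
    {
        cached_meta = shared_meta;    /* the one metapage read */
        cache_valid = true;
    }
    return cached_meta.highmask;
}
```

In the common case (no split since the cache was filled) a scan never touches the metapage at all, which removes it as a point of contention.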


So, there seem to be a few fairly simple and independent improvements 
to be made:

1. Replace the heavy-weight lock with a pin & vacuum-lock.
2. Make it crash-safe by adding WAL-logging.
3. Only acquire the exclusive lock (vacuum-lock after step 1) in VACUUM 
for the squeeze phase.
4. Cache the metapage.

We still don't know if it's going to be any better than B-tree after all 
that's done, but the only way to find out is to go ahead and implement it.

This seems like a great GSoC project to me. We have a pretty good idea 
of what we want to accomplish. It's uncontroversial: I don't think 
anyone is going to object to improving hash indexes (one could argue 
that it's a waste of time, but that's different from objecting to the 
idea). And it consists of a few mostly independent parts, so it can be 
done incrementally, which makes it easier to track progress, and we'll 
probably have something useful at the end of the summer even if it 
doesn't all get finished.

- Heikki


