Re: Hash index use presently(?) discouraged since 2005: revive or bury it? - Mailing list pgsql-performance

On Mon, Sep 19, 2011 at 4:04 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Sun, Sep 18, 2011 at 9:59 AM, Stefan Keller <sfkeller@gmail.com> wrote:
>> Merlin and Jeff,
>>
>> General remark again: It's hard for me to imagine that btree is
>> superior for all the issues mentioned before. I still believe in hash
>> index for primary keys and certain unique constraints where you need
>> equality search and don't need ordering or range search.
>
> It is -- but please understand I'm talking about an int32 btree vs hash.
> Hashing as a technique of course is absolutely going to cream btree
> for all kinds of data because of the advantages of working with
> decomposed data -- we are all taught that in comp-sci 101 :-).  The
> debate here is not about the advantages of hashing, but the specific
> implementation of the hash index used.
>
> Postgres's hash index implementation used to be pretty horrible -- it
> stored the pre-hashed datum in the index which, while making it easier
> to do certain things, made it horribly slow, and, for all intents and
> purposes, useless.  Somewhat recently, a lot of work was put in to fix
> that -- the index now stores only the hash code, which made it
> competitive with btree and superior for larger keys.  However, certain
> technical limitations, like the lack of WAL logging and uniqueness
> enforcement, hold hash indexing back from being used like it really
> should be.  In cases
> where I really *do* need hash indexing, I do it in userland.
>
> create table foo
> (
>  a_long_field text
> );
> -- hashtext() is the built-in (internal) text hash function
> create index on foo (hashtext(a_long_field));
>
> select * from foo where hashtext(a_long_field) = hashtext(some_value)
> and a_long_field = some_value;
>
> This technique works fine -- the main disadvantage is that enforcing
> uniqueness is a PITA, but since the standard hash index doesn't support
> it either, it's no great loss.  I also have the option of getting
> 'uniqueness' and being able to skip the equality check if I
> sacrifice some performance and choose a strong digest.  Until the hash
> index issues are worked out, I submit that this remains the go-to
> method.
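
For the record, I read that digest variant as something like the
following (a sketch only; it assumes the pgcrypto contrib module, which
provides digest(), and the unique index stands in for the uniqueness
support the hash index lacks):

create unique index on foo (digest(a_long_field, 'sha256'));

select * from foo
 where digest(a_long_field, 'sha256') = digest(some_value, 'sha256');

With a strong digest the collision risk is negligible, which is why the
explicit recheck on a_long_field can arguably be dropped here.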

Is this approach (storing the hash code in a btree) really faster than
a regular btree index on "a_long_field"?  And if so, for which kinds of
data and workloads?
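
One way to measure would be something along these lines (a rough
sketch; the table name, row count and value length are made up):

create table t (a_long_field text);
insert into t
  select repeat(md5(g::text), 10)  -- ca. 320-byte values
    from generate_series(1, 100000) g;
create index t_plain on t (a_long_field);
create index t_hashed on t (hashtext(a_long_field));
select pg_size_pretty(pg_relation_size('t_plain')),
       pg_size_pretty(pg_relation_size('t_hashed'));

and then compare EXPLAIN ANALYZE for equality lookups through both
indexes under a representative load.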

> Now, my point here is that I've noticed that even with the latest
> optimizations btree still seems to be superior to hash indexing by
> most metrics, so that:
> create table foo
> (
>  an_int_field int,
>  a_long_field text
> );
>
> create index on foo(an_int_field);
> create index on foo using hash(a_long_field);
>
> On performance grounds alone, the btree index seems to be (from my
> very limited testing) a better bet.  So I'm conjecturing that the
> current hash implementation should be replaced with a formalization of
> the userland technique shown above -- when you request a hash index,
> the database will silently hash your field and weave it into a btree.
> It's a hybrid: a hashed btree.

I'd rather call it a "btreefied hash" because you are storing a hash,
but in a btree structure. :-) But that's a detail.  What I find
worrying is the level of obscurity this introduces: "create index
... using hash" would then not actually be creating a hash table.

> To truly demonstrate whether the technique
> is effective, though, it would have to be coded up -- the comparison is
> only fair if the btree-based hash also double-checks the value
> in the heap, which the standard hash index must do.

Right.

> The other way to go of course is to try and fix up the existing hash
> index code -- add wal logging, etc. In theory, a customized hash
> structure should be able to beat btree all day long which argues to
> continue in this direction.

I still haven't seen a solution to locking when a hash table needs
resizing.  All hashing algorithms I can think of at the moment would
require a lock on the whole beast during the resize, which makes this
type of index impractical for certain loads (heavy updating).

One solution would be to apply partitioning to the hash table itself
(e.g. have four partitions keyed on the two least significant bits of
the hash, or 16 on the four least significant bits) and treat them
independently.  How that would interact with WAL I have no idea though.
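
In userland terms the bucketing would amount to something like this
(hashtext() and the bitmasks are only stand-ins for whatever the index
code would do internally):

select hashtext(a_long_field) & 3  as bucket_of_4,
       hashtext(a_long_field) & 15 as bucket_of_16
  from foo;

Each bucket could then be split and locked independently of the others.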

Kind regards

robert

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
