Re: Hash indexes (was: On-disk bitmap index patch) - Mailing list pgsql-hackers

From Gregory Stark
Subject Re: Hash indexes (was: On-disk bitmap index patch)
Date
Msg-id 87ejw0y0uv.fsf@stark.xeocode.com
In response to Re: Hash indexes (was: On-disk bitmap index patch)  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Hash indexes  (Andrew Dunstan <andrew@dunslane.net>)
List pgsql-hackers
Tom Lane <tgl@sss.pgh.pa.us> writes:

> I think the problem may well be that we use hash buckets that are too
> large (ie, whole pages).  After we fetch the page, we have to grovel
> through every tuple on it to find the one(s) that really match the
> query, whereas btree has a much more intelligent strategy (viz binary
> search) to do its intrapage searches.  Smaller buckets would help make
> up for this.

Hm, you would expect hash indexes to still be a win for very large indexes,
where you're worried more about I/O than CPU resources.
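
To make the intrapage difference concrete, here's a toy sketch of a linear
bucket scan versus the binary search a sorted btree page allows. The FakePage
structure is purely hypothetical, nothing like the real page layout:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct
    {
        uint32_t keys[256];   /* hypothetical page: just an array of item keys */
        int      nitems;
    } FakePage;

    /* Hash bucket page: no useful ordering, so every item is checked. */
    static int
    bucket_scan(const FakePage *page, uint32_t key)
    {
        for (int i = 0; i < page->nitems; i++)
            if (page->keys[i] == key)
                return i;
        return -1;
    }

    /* Btree page: items kept in key order, so a binary search suffices. */
    static int
    btree_page_search(const FakePage *page, uint32_t key)
    {
        int lo = 0, hi = page->nitems - 1;

        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2;

            if (page->keys[mid] == key)
                return mid;
            else if (page->keys[mid] < key)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return -1;
    }

    int
    main(void)
    {
        FakePage page = { .nitems = 256 };

        for (int i = 0; i < page.nitems; i++)
            page.keys[i] = (uint32_t) (i * 3);      /* sorted keys */

        printf("linear: %d, binary: %d\n",
               bucket_scan(&page, 300), btree_page_search(&page, 300));
        return 0;
    }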

> Another issue is that we don't store the raw hashcode in the index
> tuples, so the only way to test a tuple is to actually invoke the
> datatype equality function.  If we stored the whole 32-bit hashcode
> we could eliminate non-matching hashcodes cheaply.  I'm not sure how
> painful it'd be to do this though ... hash uses the same index tuple
> layout as everybody else, and so there's no convenient place to put
> the hashcode.
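
Conceptually that would let the scan reject most tuples with a cheap integer
compare and only call the equality function on actual hash matches. Something
along these lines, using made-up structures rather than the actual IndexTuple
layout:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy item: the 32-bit hashcode stored next to the indexed datum, so a
     * cheap integer compare can reject non-matches before the datatype's
     * (possibly expensive) equality function is ever called. */
    typedef struct
    {
        uint32_t    hashcode;
        const void *datum;
    } HashItemSketch;

    typedef bool (*eq_fn)(const void *a, const void *b);

    static bool
    item_matches(const HashItemSketch *item, uint32_t query_hash,
                 const void *query_datum, eq_fn eq)
    {
        if (item->hashcode != query_hash)       /* cheap rejection */
            return false;
        return eq(item->datum, query_datum);    /* expensive check only on hash match */
    }

    /* Stand-in equality function for a text datum. */
    static bool
    text_eq(const void *a, const void *b)
    {
        return strcmp((const char *) a, (const char *) b) == 0;
    }

    /* Toy hash, just for the example. */
    static uint32_t
    toy_hash(const char *s)
    {
        uint32_t h = 5381;
        while (*s)
            h = h * 33 + (unsigned char) *s++;
        return h;
    }

    int
    main(void)
    {
        HashItemSketch item = { toy_hash("apple"), "apple" };

        printf("%d %d\n",
               item_matches(&item, toy_hash("apple"), "apple", text_eq),
               item_matches(&item, toy_hash("pear"),  "pear",  text_eq));
        return 0;
    }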

I looked a while back and was suspicious about the actual hash functions too.
It seemed like a lot of them were vastly suboptimal. That would mean we're
often dealing with mostly-empty and mostly-full buckets instead of a
well-distributed hash table.
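
A crude way to check that is to feed sequential keys through a hash and look
at the bucket histogram. A standalone toy (not using any of the backend's
actual hash functions) shows how badly a weak mixer clumps:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define NBUCKETS 64
    #define NKEYS    100000

    /* Deliberately weak hash: multiplying by an even constant zeroes the
     * low-order bits, so only a handful of buckets ever get used. */
    static uint32_t weak_hash(uint32_t k)   { return k * 0x10u; }

    /* A better integer mixer (Knuth-style multiplicative hashing). */
    static uint32_t better_hash(uint32_t k) { return k * 2654435761u; }

    static void
    report(const char *name, uint32_t (*hash)(uint32_t))
    {
        int counts[NBUCKETS];
        int empty = 0, maxfill = 0;

        memset(counts, 0, sizeof(counts));
        for (uint32_t k = 0; k < NKEYS; k++)
            counts[hash(k) % NBUCKETS]++;

        for (int i = 0; i < NBUCKETS; i++)
        {
            if (counts[i] == 0)
                empty++;
            if (counts[i] > maxfill)
                maxfill = counts[i];
        }
        printf("%-12s %2d empty buckets, fullest holds %d of %d keys\n",
               name, empty, maxfill, NKEYS);
    }

    int
    main(void)
    {
        report("weak_hash", weak_hash);
        report("better_hash", better_hash);
        return 0;
    }

With the weak hash nearly all buckets stay empty while a few overflow, which
is exactly the overfull-bucket behaviour I'd expect to show up in the index.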


--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com


