On Wed, Mar 26, 2008 at 10:55:44PM +0800, Xiao Meng wrote:
> Hello, Hackers:
>
> I've posted a question about GSoC before.
> [GSoC] (Is it OK to choose items without % mark in theToDoList) && (is
> it an acceptable idea to build index on Flash Disk)
> http://archives.postgresql.org/pgsql-hackers/2008-03/msg00859.php
>
> Now I'm starting a new thread, since the topic has shifted to
> improving hash indexes.
> Kenneth had started a thread about this topic before.
> http://archives.postgresql.org/pgsql-hackers/2007-09/msg00051.php
>
> I've browsed the long discussion following Kenneth's advice.
> Now I have some rough ideas, but there are still some things that
> confuse me. I'd be glad to hear your advice.
>
> 1. The Benchmark
> Under what conditions "should" a hash index be used in general, i.e.,
> when should it work better than a b-tree? I think we should focus on
> those conditions. IMHO, hash should be efficient when the cost of
> comparison is expensive and most queries are equality queries.
> In addition, maybe we can use the TPC benchmark's data.
>
Yes, the cost of comparison for large values can be much better
with a hash index. Hash indexes really only work for equality
queries. The hash index should outperform the normal Btree index when
the index probe takes O(1). This can be accomplished by making the
bucket size smaller, packing fewer items per bucket, or both.
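Ken's point can be made concrete with a bit of back-of-the-envelope arithmetic. The sketch below is purely illustrative (the fanout and bucket sizes are made-up, not measured PostgreSQL figures): a btree probe pays a binary search per level, while a hash probe with small, lightly filled buckets examines a constant number of items regardless of table size.

```python
import math

# Back-of-the-envelope probe costs; all numbers are illustrative only.

def btree_probe_cost(n_items, fanout=200):
    """Comparisons for a btree probe: a binary search at each level."""
    levels = max(1, math.ceil(math.log(n_items, fanout)))
    return levels * math.ceil(math.log2(fanout))

def hash_probe_cost(n_items, n_buckets):
    """Expected comparisons for a hash probe: scan half of one bucket.
    With enough small buckets this is O(1), independent of n_items."""
    return max(1.0, (n_items / n_buckets) / 2)
```

With, say, 10 million items and an average of 4 items per bucket, the hash probe examines ~2 items while the 4-level btree pays dozens of comparisons; the gap widens as key comparisons get more expensive.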
> 2. The management of buckets
> some idea in the todo list is about this
>
> > o Pack hash index buckets onto disk pages more efficiently
> >
> > Currently only one hash bucket can be stored on a page. Ideally
> > several hash buckets could be stored on a single page and greater
> > granularity used for the hash algorithm.
> >
> > http://archives.postgresql.org/pgsql-hackers/2004-06/msg00168.php
> >
> AIUI, one approach is to divide one big hash bucket into
> sub-buckets and re-hash within the big bucket. And IMHO we should
> use closed hashing.
>
One approach that I had been considering is to subdivide the page
into buckets that are some fraction of the page. We still need to
handle the fact that hash functions are not perfect in general, and
that for non-unique indexes there could be many, many items in a
bucket with the same value (and hash value). The current overflow-page
idea will handle that case if we extend it slightly. First, the
fractional page-size buckets should each be 1/2^n of a page. Then, do
not use all of the index tuple slots in each bucket; instead, reserve
the unused space on the page to serve as the initial overflow area, so
the first overflow lives in the spare space on the same index page.
Any overflow pages after the first would be a full page in size.
Obviously, testing can be used to determine the best trade-off between
bucket size and initial overflow space. If the buckets are relatively
empty, then simply using a subset of the hash value's bits to index
the sub-page bucket requires no additional computation.
What do you see as the advantage of using a closed hash for re-hashing
the big bucket into sub-buckets?
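To make the fractional-bucket layout concrete, here is a minimal sketch. All names and sizes are hypothetical, chosen only to illustrate the scheme: 2^n sub-buckets per page, some slots deliberately held back as the on-page overflow area, and sub-bucket selection by masking a few hash bits.

```python
# A page split into 2**n sub-buckets, with some tuple slots held back
# as an on-page overflow area. All sizes here are hypothetical.
PAGE_SLOTS = 64                  # index tuple slots on one page
SUBBUCKET_BITS = 3               # 2**3 = 8 sub-buckets, each 1/8 of a page
SUBBUCKETS = 1 << SUBBUCKET_BITS
SLOTS_PER_BUCKET = 6             # deliberately below 64/8 = 8, so the
# remainder is reserved as the initial overflow area, living in the
# spare space on the same index page:
OVERFLOW_SLOTS = PAGE_SLOTS - SUBBUCKETS * SLOTS_PER_BUCKET

def subbucket_for(hash_value):
    """Pick a sub-bucket from a subset of the hash bits -- just a
    mask, so no additional computation is needed."""
    return hash_value & (SUBBUCKETS - 1)
```

With these (made-up) numbers, 16 of the 64 slots stay free as the first overflow area; only overflow beyond that spills to a dedicated full-size overflow page.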
> 3. What about other hash implementations?
> What about static hashing? We would need to re-organize the whole
> file when it grows large. But what about the case where we can
> estimate the size of the table and it doesn't change frequently?
> Despite the high cost of re-organization, it's an attractive
> technique because we need only one I/O to fetch an item.
> Extendible hashing is also effective when memory is big enough; it
> also needs only one I/O to fetch an item. Of course, the growth of
> its memory usage is a big problem.
>
The file re-organization can be minimized by allowing some initial
size and multiplicity values to be set during the initial index
creation.
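The pre-sizing Ken describes might look like the following sketch. The function and parameter names are my own invention, not proposed syntax: given an estimated row count and a target bucket occupancy, pick a power-of-two bucket count with headroom so the index can grow for a while before any split or re-organization is needed.

```python
def initial_buckets(est_rows, items_per_bucket=4, headroom=2):
    """Choose an initial power-of-two bucket count from an estimated
    row count, with headroom so the index can absorb growth before
    any split/re-organization is needed. Names are hypothetical."""
    need = est_rows * headroom // items_per_bucket + 1
    n = 1
    while n < need:
        n <<= 1   # bucket counts stay powers of two for cheap masking
    return n
```

A table estimated at a thousand rows would start with 512 buckets under these defaults; a stable table sized this way keeps the attractive one-I/O probe without ever paying the re-organization cost.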
> 4. The page layout of the bucket
> > o In hash indexes, consider storing the hash value with or instead
> > of the key itself
> >
> It means that we would need at least two I/Os to fetch a tuple.
> For a three-level b-tree (if we keep the upper two levels in
> memory), the hash index works no better. IMHO, hash is effective
> when the index is big enough that the b-tree grows to 4 levels.
> Neil Conway posted a patch doing this against an old version of
> PostgreSQL. I think it's a good starting point for me.
>
Right, we need to get hash indexes to the point at which their
performance surpasses Btree indexes for some problem domains. If
we could include unique index support and WAL, that would be great.
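The hash-value-only index entry can be sketched as a toy model (this is not Neil Conway's actual patch, just an illustration of the idea): entries map hash values to heap tuple ids, and because two distinct keys can share a hash, every probe must recheck the real key in the heap, which is where the "at least two I/Os" comes from.

```python
# Toy model: index entries store hash values mapped to heap tids, not keys.
heap = {1: "alpha", 2: "beta", 3: "alphabet"}   # tid -> key value

def h(key):
    return hash(key) & 0xFFFF   # narrowed hash, so collisions are possible

index = {}                       # hash value -> list of heap tids
for tid, key in heap.items():
    index.setdefault(h(key), []).append(tid)

def lookup(key):
    """Probe the index by hash, then recheck the real key in the
    heap: the mandatory heap visit is the extra I/O noted above."""
    return [tid for tid in index.get(h(key), []) if heap[tid] == key]
```

The upside is that index entries become small and fixed-size regardless of key length, which is exactly the large-value case where hash should beat btree.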
>
> 5. What about user-defined hash functions?
> Sometimes a DBA may know the data distribution well, so the DBA can
> supply a better hash function in some cases.
> As far as I know, Oracle supports user-defined hash functions via a
> "HASH IS expr" clause, e.g.:
> create cluster acctclust (acctid int) size 80 single table hashkeys 100000
> hash is mod(acctid,100003)
>
I think that having a user-defined hash function is not nearly as
valuable as fixing the other warts and performance issues. One item
that could easily be changed is some of the constants in our current
hash function. This would provide a simple knob to tweak in the case
of pathological hash layouts.
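That "simple knob" could amount to nothing more than a seed parameter on the hash function. Here is a sketch using 32-bit FNV-1a (the seed parameter is the hypothetical tweak, not an existing PostgreSQL setting): changing the seed redistributes keys without changing the algorithm, which is a cheap escape hatch for a pathological key set.

```python
def fnv1a(data, seed=0x811C9DC5):
    """32-bit FNV-1a with an adjustable initial basis. The seed is
    the hypothetical 'knob': varying it redistributes keys without
    changing the algorithm itself."""
    h = seed
    for byte in data.encode():
        h = ((h ^ byte) * 0x01000193) & 0xFFFFFFFF
    return h

def bucket_spread(keys, seed, nbuckets=8):
    """Max minus min bucket occupancy: lower means a more even layout,
    so a DBA could try a few seeds and keep the best."""
    counts = [0] * nbuckets
    for k in keys:
        counts[fnv1a(k, seed) % nbuckets] += 1
    return max(counts) - min(counts)
```

This gets most of the practical benefit of a user-defined hash function at a fraction of the implementation cost.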
Just some thoughts. Good luck with your proposal.
Cheers,
Ken Marshall
>
> Any comments or advice? I hope to hear from you, thanks!
> I think I should finish my proposal as soon as possible now, since
> the application deadline is approaching.
>
> Have a good day;-)
> --
> Best Regards,
> Xiao Meng
>
> Data and Knowledge Engineering Research Center,CS&T
> Harbin Institute of Technology, Harbin, Heilongjiang, China
> Gtalk: mx.cogito@gmail.com
> MSN: cnEnder@live.com
> Blog: http://xiaomeng.yo2.cn