Thread: updated hash functions for postgresql v1

updated hash functions for postgresql v1

From: Kenneth Marshall
Date: 2007-10-27
Dear PostgreSQL Developers,

This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.
It implements the 2006 version of the hash function by Bob Jenkins. Its
features include a better and faster hash function. I have included the
versions supporting big-endian and little-endian machines that will be
selected based on the machine configuration. Currently, hash_any() is
just a stub calling hashlittle() and hashbig(). In order to allow the hash
index to support large indexes (>10^9 entries), the hash function needs
to be able to provide 64-bit hashes.

The functions hashbig2()/hashlittle2() produce two 32-bit hashes that can be
used together as a 64-bit hash value. I would like some feedback as to how best
to include 64-bit hashes within our current 32-bit hash infrastructure.
The hash merge can simply use one of the two 32-bit pieces to provide
the current 32-bit hash values needed. Then they could be pulled directly
from the hash index and not need to be recalculated at run time. What
would be the best way to implement this in a way that will work on
machines without support for 64-bit integers?
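
For illustration, the combination could work like this (a minimal
sketch against the lookup3.c interface; the hash64() wrapper name is
just for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* lookup3.c: *pc and *pb carry the seeds in and the results out */
    extern void hashlittle2(const void *key, size_t length,
                            uint32_t *pc, uint32_t *pb);

    static uint64_t
    hash64(const void *key, size_t len)
    {
        uint32_t    c = 0;          /* primary hash */
        uint32_t    b = 0;          /* secondary hash */

        hashlittle2(key, len, &c, &b);
        /* c alone can serve as the existing 32-bit hash value */
        return ((uint64_t) b << 32) | c;
    }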

The current patch passes all the regression tests, but has a few warnings
for the different variations of the new hash function. Until the design
has crystallized, I am not going to worry about them and I want testers to
have access to the different functions. I am doing the initial patches
to the hash index code based on a 32-bit hash, but I would like to add the
64-bit hash support pretty early in the development cycle in order to
allow for better testing. Any thoughts would be welcome.

Regards,
Ken


Re: updated hash functions for postgresql v1

From: Simon Riggs
Date: 2007-10-28
On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> Its features include a better and faster hash function.

Looks very promising. Do you have any performance test results to show
it really is faster, when compiled into Postgres? Better probably needs
some definition also; in what way are the hash functions better?

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: updated hash functions for postgresql v1

From: Kenneth Marshall
Date: 2007-10-28
On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > Its features include a better and faster hash function.
>
> Looks very promising. Do you have any performance test results to show
> it really is faster, when compiled into Postgres? Better probably needs
> some definition also; in what way are the hash functions better?
>
> --
>   Simon Riggs
>   2ndQuadrant  http://www.2ndQuadrant.com
>
The new hash function is roughly twice as fast as the old function in
terms of straight CPU time. It uses the same design as the current
hash but provides code paths for aligned and unaligned access as well
as separate mixing functions for different blocks in the hash run
instead of having one general purpose block. I think the speed will
not be an obvious win with smaller items, but will be very important
when hashing larger items (up to 32kb).
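
The structure of the two paths is roughly this (a sketch only;
hash_aligned() and hash_unaligned() are stand-in names for the
word-at-a-time and byte-at-a-time code paths):

    #include <stdint.h>
    #include <stddef.h>

    /* hypothetical helpers naming the two code paths */
    extern uint32_t hash_aligned(const uint32_t *k, size_t len);
    extern uint32_t hash_unaligned(const unsigned char *k, size_t len);

    uint32_t
    hash_any_sketch(const unsigned char *key, size_t len)
    {
        /* the aligned path fetches the key one 32-bit word at a time */
        if (((uintptr_t) key & 0x3) == 0)
            return hash_aligned((const uint32_t *) key, len);
        return hash_unaligned(key, len);
    }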

Better in this case means that the new hash mixes more thoroughly
which results in fewer collisions and more even bucket distribution.
There is also a 64-bit variant which is still faster since it can
take advantage of the 64-bit processor instruction set.

Ken

Re: updated hash functions for postgresql v1

From: Simon Riggs
Date: 2007-10-28
On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> > On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > > Its features include a better and faster hash function.
> >
> > Looks very promising. Do you have any performance test results to show
> > it really is faster, when compiled into Postgres? Better probably needs
> > some definition also; in what way are the hash functions better?
> >
> > --
> >   Simon Riggs
> >   2ndQuadrant  http://www.2ndQuadrant.com
> >
> The new hash function is roughly twice as fast as the old function in
> terms of straight CPU time. It uses the same design as the current
> hash but provides code paths for aligned and unaligned access as well
> as separate mixing functions for different blocks in the hash run
> instead of having one general purpose block. I think the speed will
> not be an obvious win with smaller items, but will be very important
> when hashing larger items (up to 32kb).
>
> Better in this case means that the new hash mixes more thoroughly
> which results in fewer collisions and more even bucket distribution.
> There is also a 64-bit variant which is still faster since it can
> take advantage of the 64-bit processor instruction set.

Ken, I was really looking for some tests that show both of the above
were true. We've had some trouble proving the claims of other algorithms
before, so I'm less inclined to take those things at face value.

I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
Others may have different concerns.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: updated hash functions for postgresql v1

From: Luke Lonergan
Date: 2007-10-28

We just applied this and saw a 5 percent speedup on a hash aggregation query with four columns in a 'group by' clause run against a single TPC-H table (lineitem).

CK - can you post the query?

- Luke

Msg is shrt cuz m on ma treo

 -----Original Message-----
From:   Simon Riggs [mailto:simon@2ndquadrant.com]
Sent:   Sunday, October 28, 2007 04:11 PM Eastern Standard Time
To:     Kenneth Marshall
Cc:     pgsql-patches@postgresql.org; twraney@comcast.net; neilc@samurai.com
Subject:        Re: [PATCHES] updated hash functions for postgresql v1

On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> > On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > > Its features include a better and faster hash function.
> >
> > Looks very promising. Do you have any performance test results to show
> > it really is faster, when compiled into Postgres? Better probably needs
> > some definition also; in what way are the hash functions better?
> > 
> > --
> >   Simon Riggs
> >   2ndQuadrant  http://www.2ndQuadrant.com
> >
> The new hash function is roughly twice as fast as the old function in
> terms of straight CPU time. It uses the same design as the current
> hash but provides code paths for aligned and unaligned access as well
> as separate mixing functions for different blocks in the hash run
> instead of having one general purpose block. I think the speed will
> not be an obvious win with smaller items, but will be very important
> when hashing larger items (up to 32kb).
>
> Better in this case means that the new hash mixes more thoroughly
> which results in fewer collisions and more even bucket distribution.
> There is also a 64-bit variant which is still faster since it can
> take advantage of the 64-bit processor instruction set.

Ken, I was really looking for some tests that show both of the above
were true. We've had some trouble proving the claims of other algorithms
before, so I'm less inclined to take those things at face value.

I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
Others may have different concerns.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com



Re: updated hash functions for postgresql v1

From: CK Tan
Date: 2007-10-28
Hi, this query on TPCH 1G data gets about 5% improvement.

select count (*) from (select l_orderkey, l_partkey, l_comment,
count(l_tax) from lineitem group by 1, 2, 3) tmpt;

Regards,
-cktan


On Oct 28, 2007, at 1:17 PM, Luke Lonergan wrote:

We just applied this and saw a 5 percent speedup on a hash aggregation query with four columns in a 'group by' clause run against a single TPC-H table (lineitem).

CK - can you post the query?

- Luke

Msg is shrt cuz m on ma treo

 -----Original Message-----
From:   Simon Riggs [mailto:simon@2ndquadrant.com]
Sent:   Sunday, October 28, 2007 04:11 PM Eastern Standard Time
To:     Kenneth Marshall
Cc:     pgsql-patches@postgresql.org; twraney@comcast.net; neilc@samurai.com
Subject:        Re: [PATCHES] updated hash functions for postgresql v1

On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> > On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > > Its features include a better and faster hash function.
> >
> > Looks very promising. Do you have any performance test results to show
> > it really is faster, when compiled into Postgres? Better probably needs
> > some definition also; in what way are the hash functions better?
> > 
> > --
> >   Simon Riggs
> >   2ndQuadrant  http://www.2ndQuadrant.com
> >
> The new hash function is roughly twice as fast as the old function in
> terms of straight CPU time. It uses the same design as the current
> hash but provides code paths for aligned and unaligned access as well
> as separate mixing functions for different blocks in the hash run
> instead of having one general purpose block. I think the speed will
> not be an obvious win with smaller items, but will be very important
> when hashing larger items (up to 32kb).
>
> Better in this case means that the new hash mixes more thoroughly
> which results in fewer collisions and more even bucket distribution.
> There is also a 64-bit variant which is still faster since it can
> take advantage of the 64-bit processor instruction set.

Ken, I was really looking for some tests that show both of the above
were true. We've had some trouble proving the claims of other algorithms
before, so I'm less inclined to take those things at face value.

I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
Others may have different concerns.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com



Re: updated hash functions for postgresql v1

From: Simon Riggs
Date: 2007-10-28
On Sun, 2007-10-28 at 13:19 -0700, CK Tan wrote:
> Hi, this query on TPCH 1G data gets about 5% improvement.

> select count (*) from (select l_orderkey, l_partkey, l_comment,
> count(l_tax) from lineitem group by 1, 2, 3) tmpt;

> On Oct 28, 2007, at 1:17 PM, Luke Lonergan wrote:
>
> > We just applied this and saw a 5 percent speedup on a hash
> > aggregation query with four columns in a 'group by' clause run
> > against a single TPC-H table (lineitem).
> >
> > CK - can you post the query?

Is this on Postgres or Greenplum?


That looks like quite a wide set of columns.

Sounds good though. Can we get any more measurements in?

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: updated hash functions for postgresql v1

From: Luke Lonergan
Date: 2007-10-28

That's on Greenplum latest.

We used this query to expose CPU heavy aggregation.

The 1GB overall TPCH size is chosen to fit into the RAM of a typical workstation/laptop with 2GB of RAM.  That ensures the time is spent in the CPU processing of the hashagg, which is what we'd like to measure here.

The PG performance will be different, but the measurement approach should be the same IMO.  The only suggestion to make it easier is to use a 250MB scale factor, as we use four cores against 1GB.  The principle is the same.

- Luke

Msg is shrt cuz m on ma treo

 -----Original Message-----
From:   Simon Riggs [mailto:simon@2ndquadrant.com]
Sent:   Sunday, October 28, 2007 04:48 PM Eastern Standard Time
To:     CK.Tan
Cc:     Luke Lonergan; Kenneth Marshall; pgsql-patches@postgresql.org; twraney@comcast.net; neilc@samurai.com
Subject:        Re: [PATCHES] updated hash functions for postgresql v1

On Sun, 2007-10-28 at 13:19 -0700, CK Tan wrote:
> Hi, this query on TPCH 1G data gets about 5% improvement.

> select count (*) from (select l_orderkey, l_partkey, l_comment,
> count(l_tax) from lineitem group by 1, 2, 3) tmpt;

> On Oct 28, 2007, at 1:17 PM, Luke Lonergan wrote:
>
> > We just applied this and saw a 5 percent speedup on a hash
> > aggregation query with four colums in a 'group by' clause run
> > against a single TPC-H table (lineitem).
> >
> > CK - can you post the query?

Is this on Postgres or Greenplum?


That looks like quite a wide set of columns.

Sounds good though. Can we get any more measurements in?

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com

Re: updated hash functions for postgresql v1

From: Kenneth Marshall
On Sun, Oct 28, 2007 at 08:06:58PM +0000, Simon Riggs wrote:
> On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> > On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> > > On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > > > Its features include a better and faster hash function.
> > >
> > > Looks very promising. Do you have any performance test results to show
> > > it really is faster, when compiled into Postgres? Better probably needs
> > > some definition also; in what way are the hash functions better?
> > >
> > > --
> > >   Simon Riggs
> > >   2ndQuadrant  http://www.2ndQuadrant.com
> > >
> > The new hash function is roughly twice as fast as the old function in
> > terms of straight CPU time. It uses the same design as the current
> > hash but provides code paths for aligned and unaligned access as well
> > as separate mixing functions for different blocks in the hash run
> > instead of having one general purpose block. I think the speed will
> > not be an obvious win with smaller items, but will be very important
> > when hashing larger items (up to 32kb).
> >
> > Better in this case means that the new hash mixes more thoroughly
> > which results in fewer collisions and more even bucket distribution.
> > There is also a 64-bit variant which is still faster since it can
> > take advantage of the 64-bit processor instruction set.
>
> Ken, I was really looking for some tests that show both of the above
> were true. We've had some trouble proving the claims of other algorithms
> before, so I'm less inclined to take those things at face value.
>
> I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
> Others may have different concerns.
>

Simon,

I agree that we should not take claims without testing them ourselves.
My main motivation for posting the patch was to get feedback on how to
add support for 64-bit hashes that will work with all of our supported
platforms. I am trying to avoid the "work on a feature in isolation...
and submit a giant patch with many problems" problem. I intend to do
more extensive testing, but I am trying to reach a basic implementation
level before I start the testing. I am pretty good with theory, but my
coding skills are out of practice, so it will take me longer to generate
the tests now, with no clear benefit yet to the hash index implementation.
I am willing to test further, but I would like to have my testing benefit
the hash index implementation and not just the effectiveness and efficiency
of the hashing algorithm.

Regards,
Ken
> --
>   Simon Riggs
>   2ndQuadrant  http://www.2ndQuadrant.com
>
>

Re: updated hash functions for postgresql v1

From: bob_jenkins@burtleburtle.net
On Oct 28, 11:05 am, k...@rice.edu (Kenneth Marshall) wrote:
> On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> > On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > > Its features include a better and faster hash function.
>
> > Looks very promising. Do you have any performance test results to show
> > it really is faster, when compiled into Postgres? Better probably needs
> > some definition also; in what way are the hash functions better?
>
> > --
> >   Simon Riggs
> >   2ndQuadrant  http://www.2ndQuadrant.com
>
> The new hash function is roughly twice as fast as the old function in
> terms of straight CPU time. It uses the same design as the current
> hash but provides code paths for aligned and unaligned access as well
> as separate mixing functions for different blocks in the hash run
> instead of having one general purpose block. I think the speed will
> not be an obvious win with smaller items, but will be very important
> when hashing larger items (up to 32kb).
>
> Better in this case means that the new hash mixes more thoroughly
> which results in fewer collisions and more even bucket distribution.
> There is also a 64-bit variant which is still faster since it can
> take advantage of the 64-bit processor instruction set.
>
> Ken
>

I don't make use of 64-bit arithmetic when producing the 64-bit result
in hashlittle2().  Wish I did.  The routine internally produces three
32-bit results a, b, c; the returned 64-bit result is roughly c | (b<<32).


hashlittle(), hashbig(), hashword() and endianness

From: Alex Vinokur
Date: 2007-11-15
On Oct 27, 10:15 pm, k...@rice.edu (Kenneth Marshall) wrote:
> Dear PostgreSQL Developers,
>
> This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.
> It implements the 2006 version of the hash function by Bob Jenkins. Its
> features include a better and faster hash function. I have included the
> versions supporting big-endian and little-endian machines that will be
> selected based on the machine configuration.
[snip]

I have some question concerning Bob Jenkins' functions
hashword(uint32_t*, size_t), hashlittle(uint8_t*, size_t) and
hashbig(uint8_t*, size_t) in lookup3.c.

Let k1 be a key: uint8_t* k1; strlen(k1)%sizeof(uint32_t) == 0.

1. hashlittle(k1) produces the same value on Little-Endian and Big-
Endian machines.
   Let hashlittle(k1) be == L1.

2. hashbig(k1) produces the same value on Little-Endian and Big-Endian
machines.
   Let hashbig(k1) be == B1.

  L1 != B1


3. hashword((uint32_t*)k1) produces
    * L1 on LittleEndian machine and
    * B1 on BigEndian machine.

---------------------
The question is: is it possible to change hashword() to get
    * L1 on Little-Endian machine and
    * B1 on Big-Endian machine
   ?
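
For background, the difference comes from hashword() consuming the key
as native 32-bit words, while hashlittle()/hashbig() fix the byte order
before mixing. A small sketch of the effect:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const uint8_t k1[4] = { 0x01, 0x02, 0x03, 0x04 };
        uint32_t w;

        /* hashword() reads the key as native uint32 words... */
        memcpy(&w, k1, sizeof(w));
        /* ...so w is 0x04030201 on a little-endian machine but
           0x01020304 on a big-endian one */
        printf("%08x\n", w);
        return 0;
    }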

Thanks.

Alex Vinokur
     email: alex DOT vinokur AT gmail DOT com
     http://mathforum.org/library/view/10978.html
     http://sourceforge.net/users/alexvn






Re: hashlittle(), hashbig(), hashword() and endianness

From: Alex Vinokur
Date: 2007-11-15
On Nov 15, 10:40 am, Alex Vinokur <ale...@users.sourceforge.net>
wrote:
[snip]
> I have some question concerning Bob Jenkins' functions
> hashword(uint32_t*, size_t), hashlittle(uint8_t*, size_t) and
> hashbig(uint8_t*, size_t) in lookup3.c.
>
> Let k1 be a key: uint8_t* k1; strlen(k1)%sizeof(uint32_t) == 0.
>
> 1. hashlittle(k1) produces the same value on Little-Endian and Big-
> Endian machines.
>    Let hashlittle(k1) be == L1.
>
> 2. hashbig(k1) produces the same value on Little-Endian and Big-Endian
> machines.
>    Let hashbig(k1) be == B1.
>
>   L1 != B1
>
> 3. hashword((uint32_t*)k1) produces
>     * L1 on LittleEndian machine and
>     * B1 on BigEndian machine.
>
===================================
> ---------------------
> The question is: is it possible to change hashword() to get
>     * L1 on Little-Endian machine and
>     * B1 on Big-Endian machine
>    ?

Sorry, it should be as follows:

Is it possible to create two new hash functions on basis of
hashword():
   i)  hashword_little () that produces L1 on Little-Endian and Big-
Endian machines;
   ii) hashword_big ()    that produces B1 on Little-Endian and Big-
Endian machines
   ?

====================================

Thanks.

Alex Vinokur
     email: alex DOT vinokur AT gmail DOT com
     http://mathforum.org/library/view/10978.html
     http://sourceforge.net/users/alexvn


Re: hashlittle(), hashbig(), hashword() and endianness

From: Heikki Linnakangas
Date: 2007-11-15
Alex Vinokur wrote:
> On Nov 15, 10:40 am, Alex Vinokur <ale...@users.sourceforge.net>
> wrote:
> [snip]
>> I have some question concerning Bob Jenkins' functions
>> hashword(uint32_t*, size_t), hashlittle(uint8_t*, size_t) and
>> hashbig(uint8_t*, size_t) in lookup3.c.
>>
>> Let k1 be a key: uint8_t* k1; strlen(k1)%sizeof(uint32_t) == 0.
>>
>> 1. hashlittle(k1) produces the same value on Little-Endian and Big-
>> Endian machines.
>>    Let hashlittle(k1) be == L1.
>>
>> 2. hashbig(k1) produces the same value on Little-Endian and Big-Endian
>> machines.
>>    Let hashbig(k1) be == B1.
>>
>>   L1 != B1
>>
>> 3. hashword((uint32_t*)k1) produces
>>     * L1 on LittleEndian machine and
>>     * B1 on BigEndian machine.
>>
> ===================================
>> ---------------------
>> The question is: is it possible to change hashword() to get
>>     * L1 on Little-Endian machine and
>>     * B1 on Big-Endian machine
>>    ?
>
> Sorry, it should be as follows:
>
> Is it possible to create two new hash functions on basis of
> hashword():
>    i)  hashword_little () that produces L1 on Little-Endian and Big-
> Endian machines;
>    ii) hashword_big ()    that produces B1 on Little-Endian and Big-
> Endian machines
>    ?

Why?

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: hashlittle(), hashbig(), hashword() and endianness

From: Alex Vinokur
Date: 2007-11-16
On Nov 15, 1:23 pm, hei...@enterprisedb.com (Heikki Linnakangas)
wrote:
> Alex Vinokurwrote:
> > On Nov 15, 10:40 am,Alex Vinokur<ale...@users.sourceforge.net>
> > wrote:
> > [snip]
> >> I have some question concerning Bob Jenkins' functions
> >> hashword(uint32_t*, size_t), hashlittle(uint8_t*, size_t) and
> >> hashbig(uint8_t*, size_t) in lookup3.c.
>
> >> Let k1 be a key: uint8_t* k1; strlen(k1)%sizeof(uint32_t) == 0.
>
> >> 1. hashlittle(k1) produces the same value on Little-Endian and Big-
> >> Endian machines.
> >>    Let hashlittle(k1) be == L1.
>
> >> 2. hashbig(k1) produces the same value on Little-Endian and Big-Endian
> >> machines.
> >>    Let hashbig(k1) be == B1.
>
> >>   L1 != B1
>
> >> 3. hashword((uint32_t*)k1) produces
> >>     * L1 on LittleEndian machine and
> >>     * B1 on BigEndian machine.
>
> > ===================================
> >> ---------------------
> >> The question is: is it possible to change hashword() to get
> >>     * L1 on Little-Endian machine and
> >>     * B1 on Big-Endian machine
> >>    ?
>
> > Sorry, it should be as follows:
>
> > Is it possible to create two new hash functions on basis of
> > hashword():
> >    i)  hashword_little () that produces L1 on Little-Endian and Big-
> > Endian machines;
> >    ii) hashword_big ()    that produces B1 on Little-Endian and Big-
> > Endian machines
> >    ?
>
> Why?
>
[snip]

Suppose:
uint8_t chBuf[SIZE32 * 4];  // ((size_t)&chBuf[0] & 3) == 0

Function
hashlittle(chBuf, SIZE32 * 4, 0)
produces the same hashValue (let this value be L1) on little-endian
and big-endian machines. So, hashlittle() is endianness-independent.

On the other hand, the function
hashword((uint32_t *)chBuf, SIZE32, 0)
produces hashValue == L1 on little-endian machine and hashValue != L1
on big-endian machine. So, hashword() is endianness-dependent.

I would like to use both hashlittle() and hashword() (or
hashword_little()) on little-endian and big-endian machines and to get
identical hashValues.


Alex Vinokur
     email: alex DOT vinokur AT gmail DOT com
     http://mathforum.org/library/view/10978.html
     http://sourceforge.net/users/alexvn




Re: hashlittle(), hashbig(), hashword() and endianness

From: Kenneth Marshall
On Fri, Nov 16, 2007 at 01:19:13AM -0800, Alex Vinokur wrote:
> On Nov 15, 1:23 pm, hei...@enterprisedb.com (Heikki Linnakangas)
> wrote:
> > Alex Vinokurwrote:
> > > On Nov 15, 10:40 am,Alex Vinokur<ale...@users.sourceforge.net>
> > > wrote:
> > > [snip]
> > >> I have some question concerning Bob Jenkins' functions
> > >> hashword(uint32_t*, size_t), hashlittle(uint8_t*, size_t) and
> > >> hashbig(uint8_t*, size_t) in lookup3.c.
> >
> > >> Let k1 be a key: uint8_t* k1; strlen(k1)%sizeof(uint32_t) == 0.
> >
> > >> 1. hashlittle(k1) produces the same value on Little-Endian and Big-
> > >> Endian machines.
> > >>    Let hashlittle(k1) be == L1.
> >
> > >> 2. hashbig(k1) produces the same value on Little-Endian and Big-Endian
> > >> machines.
> > >>    Let hashbig(k1) be == B1.
> >
> > >>   L1 != B1
> >
> > >> 3. hashword((uint32_t*)k1) produces
> > >>     * L1 on LittleEndian machine and
> > >>     * B1 on BigEndian machine.
> >
> > > ===================================
> > >> ---------------------
> > >> The question is: is it possible to change hashword() to get
> > >>     * L1 on Little-Endian machine and
> > >>     * B1 on Big-Endian machine
> > >>    ?
> >
> > > Sorry, it should be as follows:
> >
> > > Is it possible to create two new hash functions on basis of
> > > hashword():
> > >    i)  hashword_little () that produces L1 on Little-Endian and Big-
> > > Endian machines;
> > >    ii) hashword_big ()    that produces B1 on Little-Endian and Big-
> > > Endian machines
> > >    ?
> >
> > Why?
> >
> [snip]
>
> Suppose:
> uint8_t chBuf[SIZE32 * 4];  // ((size_t)&chBuf[0] & 3) == 0
>
> Function
> hashlittle(chBuf, SIZE32 * 4, 0)
> produces the same hashValue (let this value be L1) on little-endian
> and big-endian machines. So, hashlittle() is endianness-independent.
>
> On the other hand, the function
> hashword((uint32_t *)chBuf, SIZE32, 0)
> produces hashValue == L1 on little-endian machine and hashValue != L1
> on big-endian machine. So, hashword() is endianness-dependent.
>
> I would like to use both hashlittle() and hashword() (or
> hashword_little()) on little-endian and big-endian machines and to get
> identical hashValues.
>
>
> Alex Vinokur
>      email: alex DOT vinokur AT gmail DOT com
>      http://mathforum.org/library/view/10978.html
>      http://sourceforge.net/users/alexvn
>
>
Alex,

As I suspected, you want a hash function that is independent of the
machine endian-ness. You will need to design, develop, and test such
a function yourself. As you start to look at how overflows, rotates, and
shifts are handled at the boundaries, you may find it difficult to
get a fast hash function with those properties. Good luck.

Regards,
Ken

Re: hashlittle(), hashbig(), hashword() and endianness

From: Marko Kreen
On 11/16/07, Alex Vinokur <alexvn@users.sourceforge.net> wrote:
> I would like to use both hashlittle() and hashword() (or
> hashword_little()) on little-endian and big-endian machines and to get
> identical hashValues.

What's wrong with hashlittle()?  It uses the same optimized
reading on LE platforms that hashword() does.  Or you could wrap
the read values with some int2le() macro that is a no-op on LE CPUs,
although I suspect the performance won't be better than using
hashlittle() directly.
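
A sketch of such a wrapper (int2le() is a hypothetical name;
WORDS_BIGENDIAN is the macro PostgreSQL's configure already defines):

    #include <stdint.h>

    /* no-op on little-endian CPUs, byte swap on big-endian ones, so
       native word reads see bytes in the order hashlittle() defines */
    #ifdef WORDS_BIGENDIAN
    #define int2le(x) (((uint32_t) (x) >> 24) | \
                       (((uint32_t) (x) >> 8) & 0x0000ff00) | \
                       (((uint32_t) (x) << 8) & 0x00ff0000) | \
                       ((uint32_t) (x) << 24))
    #else
    #define int2le(x) (x)
    #endif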

--
marko

Re: updated hash functions for postgresql v1

From: Tom Lane
Date: 2008-03-16
Kenneth Marshall <ktm@rice.edu> writes:
> Dear PostgreSQL Developers,
> This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.

It's pretty obvious that this patch hasn't even been tested on a
big-endian machine:

> + #ifndef WORS_BIGENDIAN

However, why do we need two code paths anyway?  I don't think there's
any requirement for the hash values to come out the same on little-
and big-endian machines.  In common cases the byte-array data being
presented to the hash function would be different to start with, so
you could hardly expect identical hash results even if you had separate
code paths.

I don't find anything very compelling about 64-bit hashing, either.
We couldn't move to that without breaking API for hash functions
of user-defined types.  Given all the other problems with hash
indexes, the issue of whether it's useful to have more than 2^32
hash buckets seems very far off indeed.

            regards, tom lane

Re: updated hash functions for postgresql v1

From: Kenneth Marshall
On Sun, Mar 16, 2008 at 10:53:02PM -0400, Tom Lane wrote:
> Kenneth Marshall <ktm@rice.edu> writes:
> > Dear PostgreSQL Developers,
> > This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.
>
> It's pretty obvious that this patch hasn't even been tested on a
> big-endian machine:
>
> > + #ifndef WORS_BIGENDIAN
>
> However, why do we need two code paths anyway?  I don't think there's
> any requirement for the hash values to come out the same on little-
> and big-endian machines.  In common cases the byte-array data being
> presented to the hash function would be different to start with, so
> you could hardly expect identical hash results even if you had separate
> code paths.
>
> I don't find anything very compelling about 64-bit hashing, either.
> We couldn't move to that without breaking API for hash functions
> of user-defined types.  Given all the other problems with hash
> indexes, the issue of whether it's useful to have more than 2^32
> hash buckets seems very far off indeed.
>
>             regards, tom lane
>

Yes, there is that typo, but the patch has, in fact, been tested on big-
and little-endian machines, since it was a simple update to replace the
current hash function used by PostgreSQL with the new version from
Bob Jenkins. The test for the endian-ness of the system allows the
code paths to be optimized for the particular CPU. The 64-bit
hashing was included for use during my work on the hash index.
Part of that will entail testing the performance of various
permutations of previously submitted suggestions.

Regards,
Ken Marshall

Re: updated hash functions for postgresql v1

From: Tom Lane
Date: 2008-04-05
Simon Riggs <simon@2ndquadrant.com> writes:
> On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
>> The new hash function is roughly twice as fast as the old function in
>> terms of straight CPU time. It uses the same design as the current
>> hash but provides code paths for aligned and unaligned access as well
>> as separate mixing functions for different blocks in the hash run
>> instead of having one general purpose block. I think the speed will
>> not be an obvious win with smaller items, but will be very important
>> when hashing larger items (up to 32kb).
>>
>> Better in this case means that the new hash mixes more thoroughly
>> which results in fewer collisions and more even bucket distribution.
>> There is also a 64-bit variant which is still faster since it can
>> take advantage of the 64-bit processor instruction set.

> Ken, I was really looking for some tests that show both of the above
> were true. We've had some trouble proving the claims of other algorithms
> before, so I'm less inclined to take those things at face value.

I spent some time today looking at this code more closely and running
some simple speed tests.  It is faster than what we have, although 2X
is the upper limit of the speedups I saw on four different machines.
There are several things going on in comparison to our existing
hash_any:

* If the source data is word-aligned, the new code fetches it a word at
a time instead of a byte at a time; that is

        a += (k[0] + ((uint32) k[1] << 8) + ((uint32) k[2] << 16) + ((uint32) k[3] << 24));
        b += (k[4] + ((uint32) k[5] << 8) + ((uint32) k[6] << 16) + ((uint32) k[7] << 24));
        c += (k[8] + ((uint32) k[9] << 8) + ((uint32) k[10] << 16) + ((uint32) k[11] << 24));

becomes

        a += k[0];
        b += k[1];
        c += k[2];

where k is now a pointer to uint32 instead of uchar.  This accounts for
most of the speed improvement.  However, the results now vary between
big-endian and little-endian machines.  That's fine for PG's purposes.
But it means that we need two sets of code for the unaligned-input code
path, since it clearly won't do for the same bytestring to get two
different hashes depending on whether it happens to be presented aligned
or not.  The presented patch actually offers *four* code paths, so that
you can compute either little-endian-ish or big-endian-ish hashes on
either type of machine.  That's nothing but bloat for our purposes, and
should be reduced to the minimum.

* Given a word-aligned source pointer and a length that isn't a multiple
of 4, the new code fetches the last partial word as a full word fetch
and masks it off, as per the code comment:

     * "k[2]&0xffffff" actually reads beyond the end of the string, but
     * then masks off the part it's not allowed to read.  Because the
     * string is aligned, the masked-off tail is in the same word as the
     * rest of the string.  Every machine with memory protection I've seen
     * does it on word boundaries, so is OK with this.  But VALGRIND will
     * still catch it and complain.  The masking trick does make the hash
     * noticably faster for short strings (like English words).

This I think is well beyond the bounds of sanity, especially since we
have no configure support for setting #ifdef VALGRIND.  I'd lose the
"non valgrind clean" paths (which again are contributing to the patch's
impression of bloat/redundancy).
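
For reference, the pattern in question, condensed from the patch's
aligned tail (k is a pointer to uint32; the real switch covers all
residues from 12 down to 0):

    switch (length)
    {
        case 12: c += k[2]; b += k[1]; a += k[0]; break;
        case 11: c += k[2] & 0xffffff; b += k[1]; a += k[0]; break; /* over-read, masked */
        case 10: c += k[2] & 0xffff;   b += k[1]; a += k[0]; break;
        case 9:  c += k[2] & 0xff;     b += k[1]; a += k[0]; break;
        case 8:  b += k[1]; a += k[0]; break;
        /* ... and so on down to case 0 */
    }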

* Independently of the above changes, the actual hash calculation
(the mix() and final() macros) has been changed.  Ken claims that
this made the hash "better", but I'm deeply suspicious of that.
The comments in the code make it look like Jenkins actually sacrificed
hash quality in order to get a little more speed.  I don't think we
should adopt those changes unless some actual evidence is presented
that the hash is better and not merely faster.


In short: I think we should adopt the changes to use aligned word
fetches where possible, but not adopt the mix/final changes unless
more evidence is presented.

Lastly, the patch adds yet more code to provide the option of computing
a 64-bit hash rather than 32.  (AFAICS, the claim that this part is
optimized for 64-bit machines is mere fantasy.  It's simply Yet Another
duplicate of the identical code, but it gives you back two of its three
words of internal state at the end, instead of only one.)  As I said
before, this is just bloat for us.  I've got zero interest in pursuing
64-bit hashing when we still don't have a hash index implementation that
anyone would consider using in anger.  Let's see if we can make the cake
edible before worrying about putting a better grade of icing on it.

            regards, tom lane

Re: updated hash functions for postgresql v1

From: Kenneth Marshall
On Sat, Apr 05, 2008 at 03:40:35PM -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> >> The new hash function is roughly twice as fast as the old function in
> >> terms of straight CPU time. It uses the same design as the current
> >> hash but provides code paths for aligned and unaligned access as well
> >> as separate mixing functions for different blocks in the hash run
> >> instead of having one general purpose block. I think the speed will
> >> not be an obvious win with smaller items, but will be very important
> >> when hashing larger items (up to 32kb).
> >>
> >> Better in this case means that the new hash mixes more thoroughly
> >> which results in fewer collisions and more even bucket distribution.
> >> There is also a 64-bit variant which is still faster since it can
> >> take advantage of the 64-bit processor instruction set.
>
> > Ken, I was really looking for some tests that show both of the above
> > were true. We've had some trouble proving the claims of other algorithms
> > before, so I'm less inclined to take those things at face value.
>
> I spent some time today looking at this code more closely and running
> some simple speed tests.  It is faster than what we have, although 2X
> is the upper limit of the speedups I saw on four different machines.
> There are several things going on in comparison to our existing
> hash_any:
>
> * If the source data is word-aligned, the new code fetches it a word at
> a time instead of a byte at a time; that is
>
>         a += (k[0] + ((uint32) k[1] << 8) + ((uint32) k[2] << 16) + ((uint32) k[3] << 24));
>         b += (k[4] + ((uint32) k[5] << 8) + ((uint32) k[6] << 16) + ((uint32) k[7] << 24));
>         c += (k[8] + ((uint32) k[9] << 8) + ((uint32) k[10] << 16) + ((uint32) k[11] << 24));
>
> becomes
>
>         a += k[0];
>         b += k[1];
>         c += k[2];
>
> where k is now a pointer to uint32 instead of uchar.  This accounts for
> most of the speed improvement.  However, the results now vary between
> big-endian and little-endian machines.  That's fine for PG's purposes.
> But it means that we need two sets of code for the unaligned-input code
> path, since it clearly won't do for the same bytestring to get two
> different hashes depending on whether it happens to be presented aligned
> or not.  The presented patch actually offers *four* code paths, so that
> you can compute either little-endian-ish or big-endian-ish hashes on
> either type of machine.  That's nothing but bloat for our purposes, and
> should be reduced to the minimum.
>

I agree that a good portion of the speedup is due to the full-word
processing. The original code from Bob Jenkins had all of these code
paths and I just dropped them in with a minimum of changes.

> * Given a word-aligned source pointer and a length that isn't a multiple
> of 4, the new code fetches the last partial word as a full word fetch
> and masks it off, as per the code comment:
>
>      * "k[2]&0xffffff" actually reads beyond the end of the string, but
>      * then masks off the part it's not allowed to read.  Because the
>      * string is aligned, the masked-off tail is in the same word as the
>      * rest of the string.  Every machine with memory protection I've seen
>      * does it on word boundaries, so is OK with this.  But VALGRIND will
>      * still catch it and complain.  The masking trick does make the hash
>      * noticably faster for short strings (like English words).
>
> This I think is well beyond the bounds of sanity, especially since we
> have no configure support for setting #ifdef VALGRIND.  I'd lose the
> "non valgrind clean" paths (which again are contributing to the patch's
> impression of bloat/redundancy).
>

Okay, I will strip the VALGRIND paths. I did not see a real need for them
either.

> * Independently of the above changes, the actual hash calculation
> (the mix() and final() macros) has been changed.  Ken claims that
> this made the hash "better", but I'm deeply suspicious of that.
> The comments in the code make it look like Jenkins actually sacrificed
> hash quality in order to get a little more speed.  I don't think we
> should adopt those changes unless some actual evidence is presented
> that the hash is better and not merely faster.
>

I was repeating the claims made by the function's author after his own
testing. His analysis and tests were reasonable, but I do agree that
we need some testing of our own. I will start pulling some test cases
together like what was discussed earlier with Simon.

>
> In short: I think we should adopt the changes to use aligned word
> fetches where possible, but not adopt the mix/final changes unless
> more evidence is presented.
>
Okay, I agree and will work on producing evidence either way.

> Lastly, the patch adds yet more code to provide the option of computing
> a 64-bit hash rather than 32.  (AFAICS, the claim that this part is
> optimized for 64-bit machines is mere fantasy.  It's simply Yet Another
> duplicate of the identical code, but it gives you back two of its three
> words of internal state at the end, instead of only one.)  As I said
> before, this is just bloat for us.  I've got zero interest in pursuing
> 64-bit hashing when we still don't have a hash index implementation that
> anyone would consider using in anger.  Let's see if we can make the cake
> edible before worrying about putting a better grade of icing on it.
>
You are correct; my 64-bit claim was due to misinterpreting some comments
by the author. He sent a correction to the mailing list himself.

Regards,
Ken Marshall

Re: updated hash functions for postgresql v1

From: Tom Lane
Date: 2008-04-06
Kenneth Marshall <ktm@rice.edu> writes:
> Okay, I will strip the VALGRIND paths. I did not see a real need for them
> either.

I have a patch ready to commit (as soon as I fix the regression test
issues) that incorporates all the word-wide-ness stuff.  All you really
need to look at is the question of hash quality.

I did confirm that the mixing changes account for a noticeable chunk
of the runtime improvement.  For instance on a Xeon

hash_any_old(32K): 4.386922 s            (CVS HEAD)
hash_any(32K): 3.853754 s            (CVS + word-wide calcs)
hashword(32K): 3.041500 s            (from patch)
hashlittle(32K): 3.092297 s            (from patch)

hash_any_old(32K unaligned): 4.390311 s
hash_any(32K unaligned): 4.380700 s
hashlittle(32K unaligned): 3.464802 s

hash_any_old(8 bytes): 1.580008 s
hash_any(8 bytes): 1.293331 s
hashword(8 bytes): 1.137054 s
hashlittle(8 bytes): 1.112997 s

So adopting the mixing changes would make it faster yet.  What we need
to be certain of is that this doesn't expose us to poorer hashing.
We know that it is critical that all bits of the input affect all bits
of the hash fairly uniformly --- otherwise we are subject to very
serious performance hits at higher levels in hash join, for instance.
The comments in the new code led me to worry that Jenkins had
compromised on that property in search of speed.  I looked at his
website but couldn't find any real discussion of the design principles
for the new mixing code ...

            regards, tom lane

Re: updated hash functions for postgresql v1

From: Marko Kreen
On 4/6/08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>  So adopting the mixing changes would make it faster yet.  What we need
>  to be certain of is that this doesn't expose us to poorer hashing.
>  We know that it is critical that all bits of the input affect all bits
>  of the hash fairly uniformly --- otherwise we are subject to very
>  serious performance hits at higher levels in hash join, for instance.
>  The comments in the new code led me to worry that Jenkins had
>  compromised on that property in search of speed.  I looked at his
>  website but couldn't find any real discussion of the design principles
>  for the new mixing code ...

Scroll to the end of doobs.html; there is a longer discussion there.

My understanding is the following:

His design is based on two properties of the mixing function:

- reversible - each input tuple (a,b,c) corresponds to exactly one
  output tuple (a,b,c).  This property guarantees that no bits get
  lost when the mixing function is applied repeatedly.

- avalanche - any single-bit change in the input (a,b,c) affects
  half of the output bits.

His "insight" (as he called it) when creating lookup3 was that
the bulk mixing that is applied repeatedly does not need
avalanche; it only needs to be reversible, meaning all the
bits that went in are still there after repeated mixing.

Only the final mixing needs avalanche, since it produces the
final result; it does not need to be reversible, as it won't
be applied repeatedly and most of its state is dropped anyway.

IMHO his choices are reasonable.
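
For reference, the two macros under discussion, as they appear in
lookup3.c (rot() is a 32-bit rotate); mix() is the reversible bulk
step, final() the avalanching last step:

    #define rot(x,k) (((x)<<(k)) | ((x)>>(32-(k))))

    #define mix(a,b,c) \
    { \
      a -= c;  a ^= rot(c, 4);  c += b; \
      b -= a;  b ^= rot(a, 6);  a += c; \
      c -= b;  c ^= rot(b, 8);  b += a; \
      a -= c;  a ^= rot(c,16);  c += b; \
      b -= a;  b ^= rot(a,19);  a += c; \
      c -= b;  c ^= rot(b, 4);  b += a; \
    }

    #define final(a,b,c) \
    { \
      c ^= b; c -= rot(b,14); \
      a ^= c; a -= rot(c,11); \
      b ^= a; b -= rot(a,25); \
      c ^= b; c -= rot(b,16); \
      a ^= c; a -= rot(c,4); \
      b ^= a; b -= rot(a,14); \
      c ^= b; c -= rot(b,24); \
    }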

--
marko

Re: updated hash functions for postgresql v1

From: Kenneth Marshall
On Sun, Apr 06, 2008 at 12:02:25PM -0400, Tom Lane wrote:
> Kenneth Marshall <ktm@rice.edu> writes:
> > Okay, I will strip the VALGRIND paths. I did not see a real need for them
> > either.
>
> I have a patch ready to commit (as soon as I fix the regression test
> issues) that incorporates all the word-wide-ness stuff.  All you really
> need to look at is the question of hash quality.
>
> I did confirm that the mixing changes account for a noticeable chunk
> of the runtime improvement.  For instance on a Xeon
>
> hash_any_old(32K): 4.386922 s            (CVS HEAD)
> hash_any(32K): 3.853754 s            (CVS + word-wide calcs)
> hashword(32K): 3.041500 s            (from patch)
> hashlittle(32K): 3.092297 s            (from patch)
>
> hash_any_old(32K unaligned): 4.390311 s
> hash_any(32K unaligned): 4.380700 s
> hashlittle(32K unaligned): 3.464802 s
>
> hash_any_old(8 bytes): 1.580008 s
> hash_any(8 bytes): 1.293331 s
> hashword(8 bytes): 1.137054 s
> hashlittle(8 bytes): 1.112997 s
>
> So adopting the mixing changes would make it faster yet.  What we need
> to be certain of is that this doesn't expose us to poorer hashing.
> We know that it is critical that all bits of the input affect all bits
> of the hash fairly uniformly --- otherwise we are subject to very
> serious performance hits at higher levels in hash join, for instance.
> The comments in the new code led me to worry that Jenkins had
> compromised on that property in search of speed.  I looked at his
> website but couldn't find any real discussion of the design principles
> for the new mixing code ...
>
>             regards, tom lane
>
Here is a section from http://burtleburtle.net/bob/hash/doobs.html
describing some testing that Bob Jenkins did concerning the mixing
properties of both his original hash function (our current hash_any)
and the new version (the patch)--the last paragraph in particular.

>lookup3.c
>
> A hash I wrote nine years later designed along the same lines as
> "My Hash", see http://burtleburtle.net/bob/c/lookup3.c. It takes
> 2n instructions per byte for mixing instead of 3n. When fitting
> bytes into registers (the other 3n instructions), it takes advantage
> of alignment when it can (a trick learned from Paul Hsieh's hash).
> It doesn't bother to reserve a byte for the length. That allows
> zero-length strings to require no mixing. More generally, the
> length that requires additional mixes is now 13-25-37 instead of
> 12-24-36.
>
> One theoretical insight was that the last mix doesn't need to do
> well in reverse (though it has to affect all output bits). And the
> middle mixing steps don't have to affect all output bits (affecting
> some 32 bits is enough), though it does have to do well in reverse.
> So it uses different mixes for those two cases. "My Hash" (lookup2.c)
> had a single mixing operation that had to satisfy both sets of
> requirements, which is why it was slower.
>
> On a Pentium 4 with gcc 3.4.?, Paul's hash was usually faster than
> lookup3.c. On a Pentium 4 with gcc 3.2.?, they were about the same
> speed. On a Pentium 4 with icc -O2, lookup3.c was a little faster
> than Paul's hash. I don't know how it would play out on other chips
> and other compilers. lookup3.c is slower than the additive hash
> pretty much forever, but it's faster than the rotating hash for
> keys longer than 5 bytes.
>
> lookup3.c does a much more thorough job of mixing than any of my
> previous hashes (lookup2.c, lookup.c, One-at-a-time). All my
> previous hashes did a more thorough job of mixing than Paul Hsieh's
> hash. Paul's hash does a good enough job of mixing for most
> practical purposes.
>
> The most evil set of keys I know of are sets of keys that are all
> the same length, with all bytes zero, except with a few bits set.
> This is tested by frog.c.. To be even more evil, I had my hashes
> return b and c instead of just c, yielding a 64-bit hash value.
> Both lookup.c and lookup2.c start seeing collisions after 2^53
> frog.c keypairs. Paul Hsieh's hash sees collisions after 2^17
> keypairs, even if we take two hashes with different seeds.
> lookup3.c is the only one of the batch that passes this test. It
> gets its first collision somewhere beyond 2^63 keypairs, which is
> exactly what you'd expect from a completely random mapping to
> 64-bit values.

I am ready to do some comparison runs between the old hash function
and the new hash function to validate its mixing ability versus our
current function, although the results will seem almost anecdotal.
Do you happen to have particular hashing problems in mind that I
could use for testing? Depending upon the problem sizes you are
interested in gathering empirical data it may take many hours of
CPU time. If I will need more than a single CPU to perform the
testing in a timely fashion, I will need to gain access to our
local cluster resources.

Cheers,
Ken Marshall

Re: updated hash functions for postgresql v1

From: Kenneth Marshall
On Sun, Oct 28, 2007 at 08:06:58PM +0000, Simon Riggs wrote:
> On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> > On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> > > On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > > > Its features include a better and faster hash function.
> > >
> > > Looks very promising. Do you have any performance test results to show
> > > it really is faster, when compiled into Postgres? Better probably needs
> > > some definition also; in what way are the hash functions better?
> > >
> > > --
> > >   Simon Riggs
> > >   2ndQuadrant  http://www.2ndQuadrant.com
> > >
> > The new hash function is roughly twice as fast as the old function in
> > terms of straight CPU time. It uses the same design as the current
> > hash but provides code paths for aligned and unaligned access as well
> > as separate mixing functions for different blocks in the hash run
> > instead of having one general purpose block. I think the speed will
> > not be an obvious win with smaller items, but will be very important
> > when hashing larger items (up to 32kb).
> >
> > Better in this case means that the new hash mixes more thoroughly
> > which results in fewer collisions and more even bucket distribution.
> > There is also a 64-bit variant which is still faster since it can
> > take advantage of the 64-bit processor instruction set.
>
> Ken, I was really looking for some tests that show both of the above
> were true. We've had some trouble proving the claims of other algorithms
> before, so I'm less inclined to take those things at face value.
>
> I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
> Others may have different concerns.
>
> --
>   Simon Riggs
>   2ndQuadrant  http://www.2ndQuadrant.com
>
Hi,

I have finally had a chance to do some investigation on
the performance of the old hash mix() function versus
the updated mix()/final() in the new hash function. Here
is a table of my current results for both the old and the
new hash function. In this case cracklib refers to the
cracklib-dict containing 1648379 unique words massaged
in various ways to generate input strings for the hash
functions. The result is the number of collisions in the
hash values generated.

hash input                            old    new
----------                            ---    ---
cracklib                              338    316
cracklib x 2 (i.e. clibclib)          305    319
cracklib x 3 (clibclibclib)           323    329
cracklib x 10                         302    310
cracklib x 100                        350    335
cracklib x 1000                       314    309
cracklib x 100 truncated to char(100) 311    327

uint32 from 1-1648379                 309    319
(uint32 1-1648379)*256                309    314
(uint32 1-1648379)*16                 310    314
"a"uint32 (i.e. a00001,a0002...)      320    321

uint32uint32 (i.e. uint64)            321    287
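
(The counts above come from hashing each input set and counting
duplicate 32-bit hash values. The counting step amounts to the sketch
below; this is not the actual Inline::C driver mentioned at the end of
this message, just the idea:)

    #include <stdint.h>
    #include <stdlib.h>

    static int
    cmp_u32(const void *a, const void *b)
    {
        uint32_t x = *(const uint32_t *) a;
        uint32_t y = *(const uint32_t *) b;

        return (x > y) - (x < y);
    }

    /* sort the n hash values, then count adjacent duplicates */
    static long
    count_collisions(uint32_t *h, size_t n)
    {
        long    dups = 0;
        size_t  i;

        qsort(h, n, sizeof(uint32_t), cmp_u32);
        for (i = 1; i < n; i++)
            if (h[i] == h[i - 1])
                dups++;
        return dups;
    }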

In addition to these tests, the new mixing functions allow
the hash to pass the frog2.c test by Bob Jenkins. Here is
his comment from http://burtleburtle.net/bob/hash/doobs.html:

  lookup3.c does a much more thorough job of mixing than any
  of my previous hashes (lookup2.c, lookup.c, One-at-a-time).
  All my previous hashes did a more thorough job of mixing
  than Paul Hsieh's hash. Paul's hash does a good enough job
  of mixing for most practical purposes.

  The most evil set of keys I know of are sets of keys that are
  all the same length, with all bytes zero, except with a few
  bits set. This is tested by frog.c.. To be even more evil, I
  had my hashes return b and c instead of just c, yielding a
  64-bit hash value. Both lookup.c and lookup2.c start seeing
  collisions after 2^53 frog.c keypairs. Paul Hsieh's hash sees
  collisions after 2^17 keypairs, even if we take two hashes with
  different seeds. lookup3.c is the only one of the batch that
  passes this test. It gets its first collision somewhere beyond
  2^63 keypairs, which is exactly what you'd expect from a completely
  random mapping to 64-bit values.

If anyone has any other data for me to test with, please let me
know. I think this is a reasonable justification for including the
new mixing process (mix() and final()) as well as the word-at-a-time
processing in our hash function. I will be putting a small patch
together to add the new mixing process back in to the updated hash
function this weekend in time for the September commit-fest unless
there are objections. Both the old and the new hash functions meet
the strict avalanche conditions as well.
(http://home.comcast.net/~bretm/hash/3.html)

I have used an Inline::C perl driver for these tests and can post
it if others would like to use it as a testbed.

Regards,
Ken