Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
> On 12/18/14, 5:00 PM, Jim Nasby wrote:
>> 2201582 20 -- Mostly LOCALLOCK and Shared Buffer
> Started looking into this; perhaps https://code.google.com/p/fast-hash/
> would be worth looking at, though it requires uint64.
> It also occurs to me that we're needlessly shoving a lot of 0's into the
> hash input by using RelFileNode and ForkNumber. RelFileNode includes the
> tablespace Oid, which is pointless here because relid is unique
> per-database. We also have very few forks and typically care about very
> few databases. If we crammed dbid and ForkNum together that gets us down
> to 12 bytes, which at minimum saves us the trip through the case logic.
> I suspect it also means we could eliminate one of the mix() calls.
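
For concreteness, I read that as replacing the current 20-byte buffer tag
with something on the order of this (the packed field's name and layout
are purely illustrative, not a worked-out proposal):

/* what gets hashed today for a shared-buffer lookup: 20 bytes */
typedef struct buftag
{
    RelFileNode rnode;          /* spcNode, dbNode, relNode: 12 bytes */
    ForkNumber  forkNum;        /* 4 bytes, only a handful of values */
    BlockNumber blockNum;       /* 4 bytes */
} BufferTag;

/* the 12-byte idea: drop the tablespace, cram dbid and fork together */
typedef struct PackedBufferTag
{
    Oid         relNode;        /* relation, taken as unique per database */
    uint32      dbAndFork;      /* dbid and forkNum crammed into one word */
    BlockNumber blockNum;
} PackedBufferTag;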
I don't see this working. The lock table in shared memory can surely take
no such shortcuts. We could make a backend's locallock table omit fields
that are predictable within the set of objects that backend could ever
lock, but (1) this doesn't help unless we can reduce the tag size for all
LockTagTypes, which we probably can't, and (2) having the locallock's tag
be different from the corresponding shared tag would be a mess too.
I think dealing with (2) might easily eat all the cycles we could hope to
save from a smaller hash tag ... and that's not even considering the added
logical complexity and potential for bugs.
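
(For context: the shared table keys on the full LOCKTAG, which has to be
able to describe every LockTagType within the same fixed-size struct.
From memory, lock.h defines it essentially as

typedef struct LOCKTAG
{
    uint32      locktag_field1;     /* a 32-bit ID field */
    uint32      locktag_field2;     /* a 32-bit ID field */
    uint32      locktag_field3;     /* a 32-bit ID field */
    uint16      locktag_field4;     /* a 16-bit ID field */
    uint8       locktag_type;       /* see enum LockTagType */
    uint8       locktag_lockmethodid;   /* lockmethod indicator */
} LOCKTAG;

and the SET_LOCKTAG_xxx macros assign per-type meanings to those generic
fields, so there's no room to squeeze out of one tag type without
disturbing all the others.)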
Switching to a different hash algorithm could be feasible, perhaps.
I think we're likely stuck with Jenkins hashing for hashes that go to
disk, but hashes for dynahash tables don't do that.
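
The locallock table in particular is a plain dynahash table, and
hash_create() already lets the caller supply its own hash function via
HASH_FUNCTION, so experimenting with a different algorithm there would be
a fairly contained change. Roughly (the replacement hasher below is just
a placeholder that still calls hash_any(); the point is only where it
would plug in):

#include "access/hash.h"
#include "utils/hsearch.h"

static uint32
locallock_hash(const void *key, Size keysize)
{
    /* substitute fast-hash or whatever we end up preferring */
    return DatumGetUInt32(hash_any((const unsigned char *) key,
                                   (int) keysize));
}

...

    HASHCTL     info;

    MemSet(&info, 0, sizeof(info));
    info.keysize = sizeof(LOCALLOCKTAG);
    info.entrysize = sizeof(LOCALLOCK);
    info.hash = locallock_hash;     /* instead of the current tag_hash */

    LockMethodLocalHash = hash_create("LOCALLOCK hash",
                                      16,
                                      &info,
                                      HASH_ELEM | HASH_FUNCTION);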
regards, tom lane