Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

From Stephen Frost
Subject Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date 2013-07-06 16:54:24
Msg-id 20130706165424.GD3286@tamriel.snowman.net
In response to Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize  (Jeff Janes <jeff.janes@gmail.com>)
List pgsql-hackers
Jeff,

* Jeff Janes (jeff.janes@gmail.com) wrote:
> I was going to add another item to make nodeHash.c use the new huge
> allocator, but after looking at it just now it was not clear to me that it
> even has such a limitation.  nbatch is limited by MaxAllocSize, but
> nbuckets doesn't seem to be.

nodeHash.c:ExecHashTableCreate() allocates ->buckets using:

palloc(nbuckets * sizeof(HashJoinTuple))

(where HashJoinTuple is actually just a pointer), and reallocates same
in ExecHashTableReset().  That limits the current implementation to only
about 134M buckets, no?
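
Just to put numbers on it, here's a standalone back-of-the-envelope
check (not actual nodeHash.c code), assuming 8-byte pointers and the
MaxAllocSize value from memutils.h:

    #include <stdio.h>

    #define MaxAllocSize ((size_t) 0x3fffffff)  /* 1 gigabyte - 1, per memutils.h */

    int
    main(void)
    {
        /* HashJoinTuple is a pointer, so each bucket slot costs sizeof(void *) */
        printf("max nbuckets = %zu\n", MaxAllocSize / sizeof(void *));
        /* on a 64-bit box this prints 134217727, i.e. the ~134M above */
        return 0;
    }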

Now, what I was really suggesting wasn't so much changing those specific
calls; my point was really that there's a ton of stuff in the HashJoin
code that uses 32-bit integers for things which, these days, might be too
small (nbuckets being one example, imv).  There's a lot of code there,
though, and you'd have to really consider which things make sense to have
as int64s.
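
If we did decide to let the bucket array itself go past 1GB, my guess is
the change would look roughly like the untested sketch below, assuming
the MemoryContextAllocHuge(context, size) signature from this thread and
casting to Size so the multiply doesn't overflow while nbuckets is still
a 32-bit int:

    hashtable->buckets = (HashJoinTuple *)
        MemoryContextAllocHuge(hashtable->batchCxt,
                               (Size) nbuckets * sizeof(HashJoinTuple));
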
Thanks,
    Stephen
