Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize - Mailing list pgsql-hackers

From: Stephen Frost
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date:
Msg-id: 20130622131231.GF7093@tamriel.snowman.net
In response to: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize (Simon Riggs <simon@2ndQuadrant.com>)
List: pgsql-hackers
* Simon Riggs (simon@2ndQuadrant.com) wrote:
> On 22 June 2013 08:46, Stephen Frost <sfrost@snowman.net> wrote:
> >> The next limit faced by sorts is
> >> INT_MAX concurrent tuples in memory, which limits helpful work_mem to about
> >> 150 GiB when sorting int4.
> >
> > That's frustratingly small. :(
>
> But that has nothing to do with this patch, right? And is easily fixed, yes?

I don't know about 'easily fixed' (consider supporting a HashJoin of >2B
records), but I do agree that dealing with the places in the code where we
use an int4 to keep track of the number of objects in memory is outside
the scope of this patch.
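
To put a number on Noah's figure: 150 GiB spread over INT_MAX tuples works
out to roughly 75 bytes of per-tuple memory, so the binding limit really is
the int4 count rather than the bytes.  A standalone illustration of that
ceiling (plain C, not actual backend code):

    #include <inttypes.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int         ntuples = INT_MAX;  /* an int4 counter at its ceiling */

        /* Widen to int8 *before* adding, to avoid signed overflow. */
        int64_t     next = (int64_t) ntuples + 1;

        printf("int4 ceiling: %d tuples\n", ntuples);
        printf("one more tuple needs an int8 counter: %" PRId64 "\n", next);
        return 0;
    }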

Hopefully we are properly range-checking and limiting ourselves to only
what a given node can support, rather than depending solely on MaxAllocSize
to keep us from overflowing some int4 that we're using as an array index
or as a count of how many objects we currently have in memory.  We'll want
to consider carefully what happens with such large sets as we add support
for these Huge allocations into the various nodes (along with the recent
change to allow 1TB work_mem, which may encourage users with systems large
enough to actually try setting it that high... :)
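
To make that concrete, here's a sketch of the sort of explicit check I
have in mind: capping the element count itself instead of trusting
MaxAllocSize to do it.  Standalone C with illustrative names (the stub
struct and error handling are stand-ins, not the real tuplesort code):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the per-tuple bookkeeping a node keeps in memory. */
    typedef struct TupleSlotStub
    {
        int64_t     datum;
        void       *tuple;
    } TupleSlotStub;

    /*
     * Double the array, but range-check the element count itself: the
     * array is indexed by an int4, so never allow more than INT_MAX
     * entries, no matter how large an allocation would now be legal.
     */
    static TupleSlotStub *
    grow_tuple_array(TupleSlotStub *tuples, int *capacity)
    {
        int64_t     newcap = (int64_t) *capacity * 2;

        if (newcap > INT_MAX)
            newcap = INT_MAX;
        if (newcap <= *capacity)
        {
            fprintf(stderr, "cannot hold more than %d tuples in memory\n",
                    INT_MAX);
            exit(1);
        }

        tuples = realloc(tuples, (size_t) newcap * sizeof(TupleSlotStub));
        if (tuples == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(1);
        }
        *capacity = (int) newcap;
        return tuples;
    }

    int
    main(void)
    {
        int             cap = 1024;
        TupleSlotStub  *tuples = calloc(cap, sizeof(TupleSlotStub));

        tuples = grow_tuple_array(tuples, &cap);
        printf("grew to %d slots\n", cap);
        free(tuples);
        return 0;
    }

The check lives on the count, where the int4 actually is, rather than on
the byte total.
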
Thanks,
    Stephen
