Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date:
Msg-id: CA+Tgmobs4hWd51877WY4kfs+R4+GPSh8icTdW5j6YO+Ez0p6Hw@mail.gmail.com
In response to: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize (Stephen Frost <sfrost@snowman.net>)
List: pgsql-hackers
On Sat, Jun 22, 2013 at 3:46 AM, Stephen Frost <sfrost@snowman.net> wrote:
> I'm not a huge fan of moving directly to INT_MAX.  Are we confident that
> everything can handle that cleanly..?  I feel like it might be a bit
> safer to shy a bit short of INT_MAX (say, by 1K).

Maybe it would be better to stick with INT_MAX and fix any bugs we
find.  If there are magic numbers short of INT_MAX that cause
problems, it would likely be better to find out about those problems
and adjust the relevant code, rather than trying to dodge them.  We'll
have to confront all of those problems eventually as we come to
support larger and larger sorts; I don't see much value in putting it
off.

Especially since we're early in the release cycle.
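
To make the point concrete, here is a minimal, self-contained sketch in
plain C of the kind of clamp being discussed (this is not the actual
tuplesort.c code; Elem and grow_elems are made-up names for
illustration): when growing an array whose indexes are plain "int", cap
the element count at INT_MAX itself rather than at some arbitrary value
short of it, and let any code that can't cope with that be found and
fixed.

/*
 * Illustrative sketch only, not the actual PostgreSQL code.  Grow an
 * array of elements whose indexes are plain "int", capping the element
 * count at INT_MAX rather than some magic number short of it.
 */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct Elem
{
    long        key;
    void       *datum;
} Elem;

static Elem *
grow_elems(Elem *elems, int *nalloc)
{
    /* Compute the doubled size in a wider type to avoid overflow. */
    long long   newalloc = (long long) *nalloc * 2;

    /* Indexes are "int", so INT_MAX is the hard ceiling on element count. */
    if (newalloc > INT_MAX)
        newalloc = INT_MAX;
    if (newalloc <= *nalloc)
        return elems;           /* already at the limit; caller must spill */

    elems = realloc(elems, (size_t) newalloc * sizeof(Elem));
    if (elems == NULL)
    {
        fprintf(stderr, "out of memory\n");
        exit(1);
    }
    *nalloc = (int) newalloc;
    return elems;
}

int
main(void)
{
    int         nalloc = 1024;
    Elem       *elems = malloc((size_t) nalloc * sizeof(Elem));

    if (elems == NULL)
        return 1;
    elems = grow_elems(elems, &nalloc);
    printf("grown to %d elements\n", nalloc);
    free(elems);
    return 0;
}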

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


