On Mon, May 2, 2011 at 11:09 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Merlin Moncure <mmoncure@gmail.com> writes:
>> On Tue, Apr 26, 2011 at 3:19 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>>> On Tue, Apr 26, 2011 at 1:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>>> After chewing on that thought for a bit, it seems like an easy fix is to
>>>> modify AllocSetContextCreate (around line 390 in HEAD's aset.c) so that
>>>> allocChunkLimit is not just constrained to be less than maxBlockSize,
>>>> but significantly less than maxBlockSize --- say an eighth or so.
>
>>> Well, +1 on any solution that doesn't force callers to make
>>> assumptions about the allocator from the outside. Your fix seems to
>>> nail it without having to tinker with the API, which is nice. (Plus
>>> you could just remove the comment.)
>>>
>>> Some perfunctory probing didn't turn up any other cases like this.
>
>> Patch attached -- I did no testing beyond make check, though. I
>> suppose changes to the allocator are not to be taken lightly, and
>> this should really be tested in some allocation-heavy scenarios.
>
> I did a bit of testing of this and committed it with minor adjustments.
Thanks for the attribution -- I hardly deserved it. One question,
though: ALLOC_CHUNK_FRACTION was set to four, with the comment 'We
allow chunks to be at most 1/4 of maxBlockSize'.
Further down we have:
"+ * too. For the typical case of maxBlockSize a power of 2, the chunk size
+ * limit will be at most 1/8th maxBlockSize, so that given a stream of
+ * requests that are all the maximum chunk size we will waste at most
+ * 1/8th of the allocated space."
Is this because allocChunkLimit is reduced by right shifts, so it stays
a power of 2, and for a power-of-2 maxBlockSize the loop overshoots the
1/4 target and lands on 1/8 -- making the maximum waste half the stated
fraction?
merlin