At 01:21 12/07/00 -0400, Tom Lane wrote:
>Philip Warner <pjw@rhyme.com.au> writes:
>> Can you maintain one free list for each power of 2 (which it might already
>> be doing by the look of it), and always allocate the max size for the list.
>> Then when you want a 10k chunk, you get a 16k chunk, but you know from the
>> request size which list to go to, and anything on the list will satisfy the
>> requirement.
>
>Maybe the right answer is to eliminate the gap between small chunks
>(which basically work as Philip sketches above) and huge chunks (for
>which we fall back on malloc). The problem is with the stuff in
>between, for which we have a kind of half-baked approach...
That sounds good to me.
You *might* want to enable some kind of memory statistics in shared memory
(for a mythical future reporting tool) so you can see how many memory
allocations fall into the 'big chunk' range, and adjust your definition of
'big chunk' appropriately.
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.C.N. 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \| | --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/