Re: rethinking dense_alloc (HashJoin) as a memory context

From Greg Stark
Subject Re: rethinking dense_alloc (HashJoin) as a memory context
Date
Msg-id CAM-w4HPhHCTA==pNYBGtVnX8QXEmfPrsKqcJWivDdOBZd+fZcA@mail.gmail.com
In response to Re: rethinking dense_alloc (HashJoin) as a memory context  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: rethinking dense_alloc (HashJoin) as a memory context  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: rethinking dense_alloc (HashJoin) as a memory context  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Sun, Jul 17, 2016 at 1:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Wed, Jul 13, 2016 at 4:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>> I wonder whether we could compromise by reducing the minimum "standard
>> chunk header" to be just a pointer to owning context, with the other
>> fields becoming specific to particular mcxt implementations.
>
> I think that would be worth doing.  It's not perfect, and the extra 8
> (or 4) bytes per chunk certainly do matter.

I wonder if we could go further. If we don't imagine having a very
large number of allocators, we could just ask each one in turn whether
a given chunk is one of theirs and, if so, which context it came from.
That would let an allocator that hands out everything from a
contiguous block recognize pointers, and return the owning memory
context, purely from the range the pointer lies in.
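
Something along these lines, say (every name here is made up just to
show the shape of it; none of this is an existing API):

#include "postgres.h"
#include "utils/memutils.h"

/*
 * Hypothetical interface: each allocator registers a callback that
 * returns the owning context if it recognizes the pointer, else NULL.
 */
typedef MemoryContext (*recognize_chunk_fn) (void *ptr);

#define MAX_ALLOCATOR_KINDS 8

static recognize_chunk_fn registered_allocators[MAX_ALLOCATOR_KINDS];
static int  n_registered_allocators = 0;

static MemoryContext
GetOwningContext(void *ptr)
{
    int     i;

    for (i = 0; i < n_registered_allocators; i++)
    {
        MemoryContext cxt = registered_allocators[i] (ptr);

        if (cxt != NULL)
            return cxt;
    }
    elog(ERROR, "pointer %p not recognized by any allocator", ptr);
    return NULL;                /* keep compiler quiet */
}

/*
 * A dense allocator that hands out memory from one contiguous block
 * (start/end/owner are imaginary per-allocator state here) could
 * recognize its chunks with a plain range test, no per-chunk header:
 */
static char *dense_block_start;
static char *dense_block_end;
static MemoryContext dense_owner;

static MemoryContext
dense_recognize_chunk(void *ptr)
{
    if ((char *) ptr >= dense_block_start && (char *) ptr < dense_block_end)
        return dense_owner;
    return NULL;
}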

There could be optimizations: for instance, if the leading pointer
points to a structure with a decently large magic number, assume it's
a valid chunk header, to avoid the cost of checking with lots of
allocators. But I'm imagining that the list of allocators in use
concurrently will be fairly small, so that might not even be
necessary.
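
Roughly like this (again with invented names and a made-up magic
value; the point is just the fast-path check before falling back to
polling the allocators from the previous sketch):

/* Made-up magic value stored at the front of every "real" context. */
#define CONTEXT_MAGIC 0x4D435458

typedef struct HypotheticalContextHeader
{
    uint32      magic;          /* CONTEXT_MAGIC if this is a context */
    /* ... rest of the context struct ... */
} HypotheticalContextHeader;

static MemoryContext
GetOwningContextFast(void *ptr)
{
    /*
     * Assume, as with today's standard chunk header, that the word
     * just before the chunk is a pointer to the owning context.
     */
    MemoryContext cand = *((MemoryContext *) ptr - 1);

    /* If it points at something carrying the magic number, trust it. */
    if (cand != NULL &&
        ((HypotheticalContextHeader *) cand)->magic == CONTEXT_MAGIC)
        return cand;

    /* Otherwise fall back to asking each allocator in turn. */
    return GetOwningContext(ptr);
}
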
greg


