On Tue, Nov 13, 2012 at 3:21 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Tue, Nov 13, 2012 at 12:18 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> I wonder though if we ought to think about running output functions in
>>> a short-lived memory context instead of the executor's main context.
>>> We've considered that before, I think, and it's always been the path
>>> of least resistance to fix the output functions instead --- but there
>>> will always be another leak I'm afraid.
>
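
To make that concrete: the pattern being proposed would look roughly
like the following. This is an untested sketch, not code from the
tree; the function and variable names here are invented.

#include "postgres.h"
#include "fmgr.h"
#include "utils/memutils.h"

/*
 * Sketch: run a type output function inside a short-lived scratch
 * context, so anything it leaks is reclaimed by a single reset
 * instead of accumulating in the executor's per-query context.
 */
static char *
datum_to_cstring(FmgrInfo *outfunc, Datum value, MemoryContext scratch_cxt)
{
    MemoryContext oldcxt;
    char       *str;

    oldcxt = MemoryContextSwitchTo(scratch_cxt);
    str = OutputFunctionCall(outfunc, value);   /* may leak freely here */
    MemoryContextSwitchTo(oldcxt);

    /* copy the result into the caller's context, then wipe the scratch
     * context, reclaiming the original string and any leak in one go */
    str = pstrdup(str);
    MemoryContextReset(scratch_cxt);
    return str;
}
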
>> Such is the lot of people who code in C. I worry that the number of
>> memory contexts we're kicking around already imposes a significant
>> distributed overhead on the system, hard to measure but nevertheless
>> real, and that this will add to it.
>
> Yeah, perhaps. I'd like to think that a MemoryContextReset is cheaper
> than a bunch of retail pfree's, but it's hard to prove anything without
> actually coding and testing it --- and on modern machines, effects like
> cache locality could swamp pure instruction-count gains anyway.
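
Spelled out, the hoped-for trade is one bulk operation in place of
per-chunk bookkeeping, roughly (hypothetical variables, untested):

int         i;

/* retail: one pfree per chunk, each touching that chunk's header */
for (i = 0; i < nchunks; i++)
    pfree(chunks[i]);

/* versus bulk: a single call releases everything in the context */
MemoryContextReset(scratch_cxt);
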
Yeah. The thing that concerns me is that we have a pretty decent
number of memory contexts where the expected number of allocations is
very small ... and we keep the context around *just in case* we do
more than that in some cases. I've seen profiles where the
setup/teardown costs of memory contexts are significant ... which
doesn't mean those examples would perform better with fewer memory
contexts, but it's enough to give me pause.
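
The shape I mean is roughly this (again with invented names, and
usually only one or two allocations ever happen between create and
delete):

MemoryContext tmpcxt;
MemoryContext oldcxt;
char       *buf;

/* a scratch context created "just in case" */
tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                               "tiny scratch",
                               ALLOCSET_SMALL_MINSIZE,
                               ALLOCSET_SMALL_INITSIZE,
                               ALLOCSET_SMALL_MAXSIZE);
oldcxt = MemoryContextSwitchTo(tmpcxt);

buf = palloc(64);            /* often the only allocation ever made */
/* ... do a little work with buf ... */

MemoryContextSwitchTo(oldcxt);
MemoryContextDelete(tmpcxt); /* create+delete cost can swamp the work */
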
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company