Thread: Memory leak in vac_update_relstats ?
<br clear="all" />Are we leaking memory in vac_update_relstats ?<br /><br /> /* Fetch a copy of the tuple to scribble on*/<br /> ctup = SearchSysCacheCopy(RELOID,<br /> ObjectIdGetDatum(relid),<br /> 0, 0, 0); <br /><br />This copy is not subsequently freed in the function.<br /><br />Thanks,<br/>Pavan<br /><br /><br />-- <br />Pavan Deolasee<br />EnterpriseDB <a href="http://www.enterprisedb.com">http://www.enterprisedb.com</a>
Pavan Deolasee wrote:
> Are we leaking memory in vac_update_relstats ?
>
>     /* Fetch a copy of the tuple to scribble on */
>     ctup = SearchSysCacheCopy(RELOID,
>                               ObjectIdGetDatum(relid),
>                               0, 0, 0);
>
> This copy is not subsequently freed in the function.

It's palloc'd in the current memory context, so it's not serious. It'll
be freed at the end of the transaction, if not before that. That's the
beauty of memory contexts; no need to worry about small allocations like
that.

--
Heikki Linnakangas
EnterpriseDB     http://www.enterprisedb.com
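To make the lifecycle Heikki describes concrete, here is a minimal sketch of the usual pattern, using the backend's memory-context API (AllocSetContextCreate, MemoryContextSwitchTo, MemoryContextDelete); the helper function and context name are invented for illustration:

#include "postgres.h"
#include "utils/memutils.h"

/*
 * Hypothetical helper: everything palloc'd while the temporary context is
 * current goes away in a single MemoryContextDelete, whether or not the
 * individual chunks were ever pfree'd.
 */
static void
do_scratch_work(void)
{
    MemoryContext scratch_cxt;
    MemoryContext old_cxt;
    char       *buf;

    scratch_cxt = AllocSetContextCreate(CurrentMemoryContext,
                                        "scratch context",
                                        ALLOCSET_DEFAULT_MINSIZE,
                                        ALLOCSET_DEFAULT_INITSIZE,
                                        ALLOCSET_DEFAULT_MAXSIZE);
    old_cxt = MemoryContextSwitchTo(scratch_cxt);

    buf = palloc(1024);             /* never explicitly pfree'd */
    snprintf(buf, 1024, "temporary data");

    MemoryContextSwitchTo(old_cxt);
    MemoryContextDelete(scratch_cxt);   /* frees buf and everything else */
}

In vac_update_relstats the copied tuple simply lives in whatever context was current when VACUUM called it, and that context is cleaned up by the end of the transaction at the latest.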
Hi,

> It's palloc'd in the current memory context, so it's not serious. It'll
> be freed at the end of the transaction, if not before that. That's the
> beauty of memory contexts; no need to worry about small allocations like
> that.

That's the beauty of memory contexts for small allocations. But because of the 'convenience' of memory contexts we sometimes tend not to pay attention to doing explicit pfrees. As a general rule, I think allocations in TopMemoryContext should be critically examined. I was bitten by undue bloat recently while developing some code, and valgrind is not of much help in such cases because of this very beauty of memory contexts :).

Regards,
Nikhils
--
EnterpriseDB     http://www.enterprisedb.com
Hi,
One specific case I want to mention here is hash_create(). For local hash tables if HASH_CONTEXT is not specified, they get created in a context which becomes a direct child of TopMemoryContext. Wouldn't it be a better idea to create the table in CurrentMemoryContext?
If hash_destroy() is not explicitly invoked, this can cause a lot of bloat especially if the intention was to use the hash table only for a while.
Regards,
Nikhils
--
EnterpriseDB http://www.enterprisedb.com
NikhilS <nikkhils@gmail.com> writes:
> One specific case I want to mention here is hash_create(). For local hash
> tables if HASH_CONTEXT is not specified, they get created in a context
> which becomes a direct child of TopMemoryContext. Wouldn't it be a better
> idea to create the table in CurrentMemoryContext?

It works that way partly for historical reasons and mostly because the
vast majority of dynahash uses are for long-lived hash tables (a quick
search says that a default of CurrentMemoryContext would be correct for
only about one in ten of the current callers). I don't see any value in
changing it.

			regards, tom lane
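For reference, here is a hedged sketch of what opting out of the default looks like by passing HASH_CONTEXT, so the table lives and dies with the caller's context instead of a child of TopMemoryContext. It assumes the dynahash HASHCTL interface of this era; the entry struct, table name, and sizes are invented for illustration:

#include "postgres.h"
#include "utils/hsearch.h"

/* illustrative entry type: the hash key must be the first field */
typedef struct MyEntry
{
    Oid         key;
    int         count;
} MyEntry;

static HTAB *
create_local_table(void)
{
    HASHCTL     ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(MyEntry);
    ctl.hash = tag_hash;
    ctl.hcxt = CurrentMemoryContext;    /* table goes away with this context */

    /*
     * Without HASH_CONTEXT (and hcxt), dynahash creates the table in its own
     * context directly under TopMemoryContext, so it sticks around until an
     * explicit hash_destroy().
     */
    return hash_create("my local table", 128, &ctl,
                       HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
}

With HASH_CONTEXT set this way, hash_destroy() becomes optional: deleting the enclosing context reclaims the table along with everything else in it.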
On 7/20/07, Heikki Linnakangas <heikki@enterprisedb.com> wrote:
> Pavan Deolasee wrote:
>> Are we leaking memory in vac_update_relstats ?
>>
>>     /* Fetch a copy of the tuple to scribble on */
>>     ctup = SearchSysCacheCopy(RELOID,
>>                               ObjectIdGetDatum(relid),
>>                               0, 0, 0);
>>
>> This copy is not subsequently freed in the function.
>
> It's palloc'd in the current memory context, so it's not serious. It'll
> be freed at the end of the transaction, if not before that. That's the
> beauty of memory contexts; no need to worry about small allocations like
> that.
Right. But maybe for code completeness, we should add the
missing heap_freetuple().
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
"Pavan Deolasee" <pavan.deolasee@gmail.com> writes: > On 7/20/07, Heikki Linnakangas <heikki@enterprisedb.com> wrote: >> It's palloc'd in the current memory context, so it's not serious. > Right. But may be for code completeness, we should add that > missing heap_freetuple. Personally I've been thinking of mounting an effort to get rid of unnecessary pfree's wherever possible. Particularly in user-defined functions, "cleaning up" at the end is a waste of code space and cycles too, because they're typically called in contexts that are going to be reset immediately afterward. In the case of vac_update_relstats, it's called only once per transaction, so there's certainly no point in being a neatnik. Stuff you need to worry about is functions that might be called many times in the same memory context. regards, tom lane
"Tom Lane" <tgl@sss.pgh.pa.us> writes: > Personally I've been thinking of mounting an effort to get rid of > unnecessary pfree's wherever possible. Particularly in user-defined > functions, "cleaning up" at the end is a waste of code space and > cycles too, because they're typically called in contexts that are > going to be reset immediately afterward. It seems like the impact of this is self-limiting though. The worst-case is going to be something which executes an extra pfree for every tuple. Or perhaps one for every expression in a complex query involving lots of expressions. Saving a few extra pfrees per tuple isn't really going to buy many cpu cycles. > In the case of vac_update_relstats, it's called only once per > transaction, so there's certainly no point in being a neatnik. > Stuff you need to worry about is functions that might be called > many times in the same memory context. Fwiw there are user-space functions that can't leak memory. I'm sure everyone knows about btree operators, but pgsql also assumes that type input and output functions don't leak memory too (or else assignment leaks memory in the function scope memory context and in a loop...). -- Gregory Stark EnterpriseDB http://www.enterprisedb.com
Gregory Stark <stark@enterprisedb.com> writes:
> It seems like the impact of this is self-limiting though. The worst-case is
> going to be something which executes an extra pfree for every tuple. Or
> perhaps one for every expression in a complex query involving lots of
> expressions. Saving a few extra pfrees per tuple isn't really going to buy
> many cpu cycles.

I can't tell you how many profiles I've looked at in which palloc/pfree
were *the* dominant consumers of CPU cycles. I'm not sure how much could
be saved this particular way, but I wouldn't dismiss it as uninteresting.
I've actually thought about making short-term memory contexts use a
variant MemoryContext type in which pfree was a no-op and palloc was
simplified by not worrying at all about recycling space.

			regards, tom lane
Tom Lane wrote:
> I've actually thought about making short-term memory contexts use a
> variant MemoryContext type in which pfree was a no-op and palloc was
> simplified by not worrying at all about recycling space.

That sounds like a good idea to me.

cheers

andrew
Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
>> It seems like the impact of this is self-limiting though. The worst-case is
>> going to be something which executes an extra pfree for every tuple. Or
>> perhaps one for every expression in a complex query involving lots of
>> expressions. Saving a few extra pfrees per tuple isn't really going to buy
>> many cpu cycles.
>
> I can't tell you how many profiles I've looked at in which palloc/pfree
> were *the* dominant consumers of CPU cycles. I'm not sure how much could
> be saved this particular way, but I wouldn't dismiss it as uninteresting.
> I've actually thought about making short-term memory contexts use a
> variant MemoryContext type in which pfree was a no-op and palloc was
> simplified by not worrying at all about recycling space.

I played with such a beast last winter. It suits the parser particularly
well: it does a lot of tiny allocations that are all freed together, and
palloc is at the top of the CPU profile. My implementation was a simple
stack-like allocator, with a no-op pfree.

I got carried away with other stuff, but I remember that one difficulty
was using a different memory context for the parser because of the hack
in PreventTransactionChain that checked if a piece of memory was
allocated in QueryContext. I'm glad it's gone.

--
Heikki Linnakangas
EnterpriseDB     http://www.enterprisedb.com
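To make the idea concrete, here is a standalone sketch in plain C (not PostgreSQL's MemoryContext machinery, and with all names invented) of the kind of reset-only allocator Tom and Heikki describe: allocation just bumps a pointer, free is a no-op, and a reset releases everything at once.

#include <stdlib.h>

/* A fixed-size region; a real implementation would grow by adding blocks. */
typedef struct BumpRegion
{
    char   *base;
    size_t  used;
    size_t  size;
} BumpRegion;

BumpRegion *
region_create(size_t size)
{
    BumpRegion *r = malloc(sizeof(BumpRegion));

    r->base = malloc(size);
    r->used = 0;
    r->size = size;
    return r;
}

/* "palloc": round up for alignment and bump the high-water mark. */
void *
region_alloc(BumpRegion *r, size_t nbytes)
{
    void   *chunk;

    nbytes = (nbytes + 7) & ~(size_t) 7;    /* keep 8-byte alignment */
    if (r->used + nbytes > r->size)
        return NULL;                        /* out of space in this sketch */
    chunk = r->base + r->used;
    r->used += nbytes;
    return chunk;
}

/* "pfree": intentionally a no-op; individual chunks are never recycled. */
void
region_free(void *chunk)
{
    (void) chunk;
}

/* "context reset": everything allocated so far disappears in one step. */
void
region_reset(BumpRegion *r)
{
    r->used = 0;
}

void
region_destroy(BumpRegion *r)
{
    free(r->base);
    free(r);
}

The attraction is that both the allocate and free paths become a handful of instructions, at the cost of never reusing space until the whole region is reset.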