"cache reference leak" and "problem in alloc set" warnings - Mailing list pgsql-hackers

From Volkan YAZICI
Subject "cache reference leak" and "problem in alloc set" warnings
Date
Msg-id 20060816120913.GA1320@alamut.tdm.local
Responses Re: "cache reference leak" and "problem in alloc set" warnings
List pgsql-hackers
Hi,

I've been trying to implement INOUT/OUT parameter support in PL/scheme.
When I return a record-type tuple, the backend emits the warnings below:

WARNING:  problem in alloc set ExprContext: detected write past chunk end in block 0x8462f00, chunk 0x84634c8
WARNING:  cache reference leak: cache pg_type (34), tuple 2/7 has count 1

I found a related thread in the mailing list archives where Joe Conway
fixed a similar problem in one of his patches, but I couldn't figure out
how he did it. Can somebody help me figure out the causes of the above
warnings and how to fix them?
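For what it's worth, here is a hypothetical sketch of the usual culprits behind these two warnings in C-language backend code (identifiers like typoid, buf, and len are illustrative, not taken from the PL/scheme source):

```
/* 1. "cache reference leak": every SearchSysCache() lookup must be
 *    paired with a ReleaseSysCache() once the tuple is done with. */
HeapTuple   typeTup;

typeTup = SearchSysCache(TYPEOID,
                         ObjectIdGetDatum(typoid),
                         0, 0, 0);
if (!HeapTupleIsValid(typeTup))
    elog(ERROR, "cache lookup failed for type %u", typoid);
/* ... inspect GETSTRUCT(typeTup) ... */
ReleaseSysCache(typeTup);   /* omitting this produces the
                             * "cache reference leak" warning */

/* 2. "write past chunk end": writing beyond the size passed to
 *    palloc(), e.g. forgetting the varlena header when building
 *    a text datum. */
text   *result = (text *) palloc(VARHDRSZ + len);   /* not just len */
VARATT_SIZEP(result) = VARHDRSZ + len;
memcpy(VARDATA(result), buf, len);
```

If this guess is right, the second warning would fire when memory-context checking finds the bytes past the chunk clobbered at context reset, which would match the AllocSetCheck/AllocSetReset frames in the first backtrace below.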


Regards.

P.S. Here are the stack backtraces captured just before the warnings are
dumped. They may be of limited use, since there's nearly only one way to
reach these errors, but I thought they might give an overview to hackers
taking a quick look.

Breakpoint 2, AllocSetCheck (context=0x845ff58) at aset.c:1155
1155                                    elog(WARNING, "problem in alloc set %s: detected write past c
(gdb) where
#0  AllocSetCheck (context=0x845ff58) at aset.c:1155
#1  0x0829b728 in AllocSetReset (context=0x845ff58) at aset.c:407
#2  0x0829c958 in MemoryContextReset (context=0x845ff58) at mcxt.c:129
#3  0x0817dce5 in ExecResult (node=0x84a0754) at nodeResult.c:113
#4  0x0816b423 in ExecProcNode (node=0x84a0754) at execProcnode.c:334
#5  0x081698fb in ExecutePlan (estate=0x84a05bc, planstate=0x84a0754, operation=CMD_SELECT, numberTuples=0, direction=138818820, dest=0x84102ec) at execMain.c:1145
#6  0x0816888b in ExecutorRun (queryDesc=0x842c680, direction=ForwardScanDirection, count=138818820) at execMain.c:223
#7  0x08204a08 in PortalRunSelect (portal=0x842eae4, forward=1 '\001', count=0, dest=0x84102ec) at pquery.c:803
#8  0x08204762 in PortalRun (portal=0x842eae4, count=2147483647, dest=0x84102ec, altdest=0x84102ec, completionTag=0xbfc23cb0 "") at pquery.c:655
#9  0x082001e5 in exec_simple_query (query_string=0x840f91c "SELECT in_out_t_2(13, true);")   at postgres.c:1004
#10 0x08202de5 in PostgresMain (argc=4, argv=0x83bd7fc, username=0x83bd7d4 "vy") at postgres.c:3184
#11 0x081d6b54 in BackendRun (port=0x83d21a8) at postmaster.c:2853
#12 0x081d636f in BackendStartup (port=0x83d21a8) at postmaster.c:2490
#13 0x081d455e in ServerLoop () at postmaster.c:1203
#14 0x081d39ca in PostmasterMain (argc=3, argv=0x83bb888) at postmaster.c:955
#15 0x0818d404 in main (argc=3, argv=0x83bb888) at main.c:187

Breakpoint 1, PrintCatCacheLeakWarning (tuple=0xb5ef7dbc) at catcache.c:1808
1808            Assert(ct->ct_magic == CT_MAGIC);
(gdb) where
#0  PrintCatCacheLeakWarning (tuple=0xb5ef7dbc) at catcache.c:1808
#1  0x0829e927 in ResourceOwnerReleaseInternal (owner=0x83da800, phase=RESOURCE_RELEASE_AFTER_LOCKS, isCommit=1 '\001', isTopLevel=0 '\0') at resowner.c:273
#2  0x0829e64c in ResourceOwnerRelease (owner=0x83da800, phase=RESOURCE_RELEASE_AFTER_LOCKS, isCommit=1 '\001', isTopLevel=0 '\0') at resowner.c:165
#3  0x0829dd8e in PortalDrop (portal=0x842eae4, isTopCommit=0 '\0') at portalmem.c:358
#4  0x082001f9 in exec_simple_query (query_string=0x840f91c "SELECT in_out_t_2(13, true);")   at postgres.c:1012
#5  0x08202de5 in PostgresMain (argc=4, argv=0x83bd7fc, username=0x83bd7d4 "vy") at postgres.c:3184
#6  0x081d6b54 in BackendRun (port=0x83d21a8) at postmaster.c:2853
#7  0x081d636f in BackendStartup (port=0x83d21a8) at postmaster.c:2490
#8  0x081d455e in ServerLoop () at postmaster.c:1203
#9  0x081d39ca in PostmasterMain (argc=3, argv=0x83bb888) at postmaster.c:955       
#10 0x0818d404 in main (argc=3, argv=0x83bb888) at main.c:187

