Re: [HACKERS] Memory leaks for large objects - Mailing list pgsql-hackers

From Maurice Gittens
Subject Re: [HACKERS] Memory leaks for large objects
Date
Msg-id 00d801bd3ca3$9711bda0$fcf3b2c2@caleb..gits.nl
Responses Re: [HACKERS] Memory leaks for large objects  (Bruce Momjian <maillist@candle.pha.pa.us>)
List pgsql-hackers
>
>Large object have been broken for quite some time.  I say remove the
>memory context stuff and see what breaks.  Can't be worse than earlier
>releases, and if there is a problem, it will show up for us and we can
>issue a patch.
>
>--


I ensured that all memory allocations in be-fsstubs.c used the
current memory context for their allocations.
The system now encounters errors when opening large objects which
were just created, with messages like: "ERROR cannot open xinv<number>".
This happens even though all large-object operations are performed
in a transaction.

I'm now wondering whether, in the approach above, the files associated
with the large object will ever be freed (or will the virtual file
descriptor stuff handle this?).

Might it be that, because large objects are implemented using
relations/indexes, information about these must persist until they
are properly closed by the postgres system?

How about not changing anything except adding a lo_garbage_collect function,
which frees the MemoryContext used by large objects and does any other
work needed (like closing indexes/relations)?

Thanks,
Maurice


