Added to TODO list.
>
> >
> >Large objects have been broken for quite some time. I say remove the
> >memory context stuff and see what breaks. Can't be worse than earlier
> >releases, and if there is a problem, it will show up for us and we can
> >issue a patch.
> >
> >--
>
>
> I ensured that all memory allocations in be-fsstubs.c use the
> current memory context.
> The system now encounters errors when opening large objects that
> were just created, with a message like: "ERROR cannot open xinv<number>".
> This happens even though all large object operations are performed
> in a transaction.
>
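To make the allocation-lifetime question concrete, here is a minimal
sketch, not the actual be-fsstubs.c code, written against the backend's
memory-context primitives as they exist in current sources (palloc,
MemoryContextSwitchTo, AllocSetContextCreate); lo_cxt and both function
names are invented for illustration.

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Hypothetical long-lived context for large object bookkeeping. */
    static MemoryContext lo_cxt = NULL;

    /*
     * Allocate in whatever CurrentMemoryContext happens to be.  If that
     * is a per-query context, the chunk (and with it the large object
     * descriptor) silently disappears when the query ends, even though
     * the surrounding transaction is still open.
     */
    static void *
    lo_alloc_current(Size size)
    {
        return palloc(size);
    }

    /*
     * Allocate in a private context that survives until it is explicitly
     * reset or deleted, independent of the current query.
     */
    static void *
    lo_alloc_longlived(Size size)
    {
        MemoryContext oldcxt;
        void       *ptr;

        if (lo_cxt == NULL)
            lo_cxt = AllocSetContextCreate(TopMemoryContext,
                                           "large object sketch",
                                           ALLOCSET_DEFAULT_SIZES);

        oldcxt = MemoryContextSwitchTo(lo_cxt);
        ptr = palloc(size);
        MemoryContextSwitchTo(oldcxt);
        return ptr;
    }
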
> I'm now wondering whether, in the approach above, the files associated
> with the large object will ever be freed (or will the virtual file
> descriptor stuff handle this?).
>
> Could it be that, because large objects are implemented using
> relations/indexes, information about them must persist until they
> are properly closed by the postgres system?
>
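On the relation/index question, the sketch below is purely illustrative:
the struct, field, and function names are invented, and it is written
loosely against a later backend API (index_close/heap_close with a lock
argument). It shows why discarding the memory alone is not enough: an
open large object holds references to a heap relation and its index that
have to be released explicitly.

    #include "postgres.h"
    #include "access/genam.h"       /* index_close() */
    #include "access/heapam.h"      /* heap_close() */

    /* Invented names; the real descriptor lives in the inversion API. */
    typedef struct HypotheticalLODesc
    {
        Relation    heap_r;         /* the xinv<oid> heap relation */
        Relation    index_r;        /* its index */
        uint32      offset;         /* current seek position */
    } HypotheticalLODesc;

    static void
    hypothetical_lo_close(HypotheticalLODesc *desc)
    {
        /*
         * Release the relation and index references first.  Freeing the
         * memory (or resetting its context) without doing this would
         * leave them open inside the backend.
         */
        index_close(desc->index_r, AccessShareLock);
        heap_close(desc->heap_r, AccessShareLock);
        pfree(desc);
    }
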
> How about not changing anything except adding a lo_garbage_collect
> function, which frees the MemoryContext used by large objects and does
> any other work needed (like closing indexes/relations)?
>
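If something like that were added, it might look roughly like the sketch
below; the cookies[] table, MAX_LOBJ_FDS, lo_cxt, and the helper from the
previous sketch are all assumptions for illustration, not existing code.

    /* Hypothetical table of open large object descriptors. */
    #define MAX_LOBJ_FDS 256
    static HypotheticalLODesc *cookies[MAX_LOBJ_FDS];

    /*
     * Proposed lo_garbage_collect(): close anything still open, which
     * releases the underlying relation/index, then drop every remaining
     * large object allocation in one go.
     */
    void
    lo_garbage_collect(void)
    {
        int         i;

        for (i = 0; i < MAX_LOBJ_FDS; i++)
        {
            if (cookies[i] != NULL)
            {
                hypothetical_lo_close(cookies[i]);
                cookies[i] = NULL;
            }
        }

        /* Everything else the large object code palloc'd goes away here. */
        if (lo_cxt != NULL)
            MemoryContextReset(lo_cxt);
    }
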
> Thanks,
> Maurice
>
--
Bruce Momjian | 830 Blythe Avenue
maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)