Re: [HACKERS] How To free resources used by large object Relations? - Mailing list pgsql-hackers

From Maurice Gittens
Subject Re: [HACKERS] How To free resources used by large object Relations?
Date
Msg-id 002d01bd3f88$462872c0$fcf3b2c2@caleb..gits.nl
List pgsql-hackers
-----Original Message-----
From: Vadim B. Mikheev <vadim@sable.krasnoyarsk.su>
To: Maurice Gittens <mgittens@gits.nl>
Cc: pgsql-hackers@postgreSQL.org <pgsql-hackers@postgreSQL.org>
Date: Sunday, 22 February 1998 17:47
Subject: Re: [HACKERS] How To free resources used by large object Relations?


>>
>> Somehow I have to free the relation from the cache in the following
>> situations:
>> 1. In a transaction I must free the stuff when the transaction is
>> commited/aborted.
>
>Backend does it, don't worry.
I don't really understand all of the code, so please bear with me.
Could it be that large objects don't use the right memory context/portals, so
that memory isn't freed automagically?
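
For what it's worth, here is roughly the allocation idiom I have in mind.
This is only a sketch; "TransactionScopeContext" is a made-up name for
whatever the per-transaction context really is:

/* Sketch: allocate in a context that the transaction machinery resets
 * at commit/abort.  If the large object code palloc's into a
 * longer-lived context instead, the memory survives the transaction --
 * which would explain the leak.
 */
MemoryContext oldcxt;
LargeObjectDesc *desc;

oldcxt = MemoryContextSwitchTo(TransactionScopeContext);  /* assumed name */
desc = (LargeObjectDesc *) palloc(sizeof(LargeObjectDesc));
MemoryContextSwitchTo(oldcxt);
/* desc now goes away automatically when that context is reset */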
>
>> 2. Otherwise it must happen when lo_close is called.
>
>It seems that you can't remove a relation from the cache until
>commit/abort, currently: backend uses local cache to unlink
>files of relations created in transaction if abort...
>We could change relcache.c:RelationPurgeLocalRelation()
>to read from pg_class directly...
Is there a way to tell the cache manager to free resources?
The relations concerned are known; how to free them properly is not,
however.
>
>But how many LO do you create in single xact ?
Only one (in my real application).
>Is memory allocated for cache so big ?
Not really, except that the leak accumulates for as long as the connection
to the backend stays open.
>
>Vadim

I have a simple test program which goes like this:

(this is C-like pseudocode, close to the actual libpq calls)

#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

int
main()
{
    PGconn *connection = PQconnectdb("");   /* default connection options */

    for (;;)
    {
        /* each pass through the loop leaks memory in the backend */
        lo_creat(connection, INV_READ | INV_WRITE);
    }

    PQfinish(connection);
    return 0;
}

This program leaks memory on every pass through the for loop.
It doesn't matter whether the statements in the loop are wrapped in a
transaction or not.
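
To be concrete, the transaction case I tried looks roughly like this
(again close to the real libpq calls; only the BEGIN/END statements differ):

for (;;)
{
    PQclear(PQexec(connection, "BEGIN"));
    lo_creat(connection, INV_READ | INV_WRITE);
    PQclear(PQexec(connection, "END"));   /* commit -- the backend still leaks */
}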

When I give each large object its own memory context (so that memory
is freed per large object), it seems to leak memory more slowly, but it
still leaks.
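
What I did is roughly the following (from memory, so treat the names of
the context-creation calls as an approximation of what is in mcxt.c):

/* Per-large-object context experiment, sketched from memory. */
GlobalMemory locxt = CreateGlobalMemory("large object");
MemoryContext oldcxt = MemoryContextSwitchTo((MemoryContext) locxt);

/* ... the lo_open/inv_open allocations happen here ... */

MemoryContextSwitchTo(oldcxt);

/* and later, in lo_close: */
GlobalMemoryDestroy(locxt);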

I've tried calling a number of the functions in relcache.c (like
RelationPurgeLocalRelation) to try to free up the memory myself, but the
backend doesn't like this (== closed connection).

It looks like there is some assumption about which memory context/portal is
used during transactions, and that large objects don't obey this
assumption.

Can you make these assumptions explicit? Maybe I can then make the
large object code respect these rules.

Now I have the following understanding of these matters:
1. In transactions
All memory should be freed automatically at commit/abort.
How do I tell the system to do this for me?

2. In autocommit mode
All resources used by large objects should be freed at lo_close.
Can I have this delayed and done automatically in the CommitTransaction
function? (See the sketch after this list.)

3. Atomic functions like lo_create should not leak memory either.
Currently they do, however.
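
For point 2, the kind of thing I am imagining is sketched below; everything
except inv_close() is a name I made up for illustration:

/* Hypothetical helper: close any large objects still open at commit.
 * open_los/n_open_los would be a registry filled in by lo_open();
 * inv_close() is the existing inversion call that closes the heap and
 * index and pfree's the descriptor.
 */
static LargeObjectDesc *open_los[64];   /* placeholder size */
static int n_open_los = 0;

void
AtCommit_LargeObjects(void)
{
    int i;

    for (i = 0; i < n_open_los; i++)
        inv_close(open_los[i]);
    n_open_los = 0;
}

/* CommitTransaction()/AbortTransaction() would then call
 * AtCommit_LargeObjects() before the transaction's memory is released. */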

Thanks for any help,
Maurice

