
From: David McWherter
Subject: Re: Re: Large Objects
Date:
Msg-id: 14795.27353.638551.140778@tangent.mcs.drexel.edu
In response to: Re: Re: Large Objects  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Re: Large Objects  (Denis Perchine <dyp@perchine.com>)
List: pgsql-general
 > currently PG stores each BLOB as an independent table --- and just to
 > add insult to injury, creates an index for it :-(.  So you actually have
 > *two* added files in the DB directory per BLOB.  Needless to say, this
 > does not scale real well to large numbers of BLOBs.
 >
 > Denis Perchine has done the legwork to store BLOBs in a more reasonable
 > fashion, ie all together in one big (but indexed!) table.  His patch
 > needs review by a core person before we'll consider it safe to commit,
 > but I have that as a personal "must do for 7.1".  So those MySQL results
 > should also apply to Postgres in 7.1 and later.

I was experimenting with the patches in question after applying
them to postgres-7.0.2.  I'm encountering a nasty problem that I
can't quite figure out when inserting my own huge (several-megabyte)
files into the relation.

Apparently, what's going on is that the inserts/updates on the
pg_largeobject table result in buffers getting leaked (as
reported by BufferPoolCheckLeak()).  Eventually, importing
a file exhausts the buffer free list.  I think there were a
couple of places where a ReleaseBuffer was needed but not
called... I've modified his code for inv_write_existing
to look more like:

------------
   buffer = InvalidBuffer;
   while ( index_getnext(...) ) {
      /* drop the pin still held from the previous iteration */
      if ( buffer != InvalidBuffer ) {
         ReleaseBuffer( buffer );
         buffer = InvalidBuffer;
      }
      heap_fetch( ..., &buffer );     /* pins a new buffer */
      index_endscan(...);

      {
         /* build a block of data */
         ...
         ntup = heap_modifytuple( ... );
         heap_update( ... );
         heap_freetuple( ntup );
      }
   }
   /* write out (and unpin) the buffer left pinned by the last pass */
   if ( buffer != InvalidBuffer ) {
      WriteBuffer( buffer );
   }
------------
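
(My understanding of the bufmgr code in this vintage is that
WriteBuffer both marks the buffer dirty and drops the pin, making it
the dirty-buffer counterpart of ReleaseBuffer, so every buffer pinned
by heap_fetch should get exactly one ReleaseBuffer or WriteBuffer
call against it.)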

The leak check seems to indicate that the pages getting leaked are
all dirty pages, which seems slightly odd to me... if I call it at
the end of this loop function (which gets called over and over again
during an import), I end up getting something that looks like this:
-------
NOTICE:  Buffer Leak: [057] (freeNext=-3, freePrev=-3, relname=pg_largeobject, blockNum=56, flags=0x85, refcount=1 1)
NOTICE:  Buffer Leak: [058] (freeNext=-3, freePrev=-3, relname=pg_largeobject, blockNum=9, flags=0x85, refcount=1 1)
NOTICE:  Buffer Leak: [059] (freeNext=-3, freePrev=-3, relname=pg_largeobject, blockNum=58, flags=0x85, refcount=1 3)
NOTICE:  Buffer Leak: [060] (freeNext=-3, freePrev=-3, relname=pg_largeobject, blockNum=49, flags=0x85, refcount=1 1)
NOTICE:  Buffer Leak: [061] (freeNext=-3, freePrev=-3, relname=pg_largeobject, blockNum=51, flags=0x85, refcount=1 3)
NOTICE:  Buffer Leak: [062] (freeNext=-3, freePrev=-3, relname=pg_largeobject, blockNum=37, flags=0x85, refcount=1 3)
ERROR:  out of free buffers: time to abort !
-------
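
For reference, something like this (the check_pins name is just mine
for illustration) is what I'm calling at the end of the loop;
BufferPoolCheckLeak() emits one of the NOTICEs above per still-pinned
buffer and returns nonzero if it found any:

------------
#include "postgres.h"
#include "storage/bufmgr.h"

/* debug hook: complain if any buffers are still pinned here */
static void
check_pins(const char *where)
{
    if (BufferPoolCheckLeak())
        elog(NOTICE, "pinned buffers remain after %s", where);
}
------------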

The number of buffers leaked increases monotonically as the
import proceeds... I can't quite figure out where the
buffers in question are being pinned...
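
Short of staring at the code more, my next step is to trace every
pin and unpin with its call site and pair them up afterwards.  A
rough sketch of what I have in mind, assuming the 7.0 calling
conventions (the *Traced names are mine, not part of the patch):

------------
#include "postgres.h"
#include "access/heapam.h"
#include "storage/bufmgr.h"

/*
 * Hypothetical tracing wrappers: log every pin/unpin in inv_api.c
 * with its call site, then match up the output to find the leak.
 */
#define HeapFetchTraced(rel, snap, tup, bufp) \
   do { \
      heap_fetch((rel), (snap), (tup), (bufp)); \
      elog(NOTICE, "pin   buf=%d at %s:%d", *(bufp), __FILE__, __LINE__); \
   } while (0)

#define ReleaseBufferTraced(buf) \
   (elog(NOTICE, "unpin buf=%d at %s:%d", (buf), __FILE__, __LINE__), \
    ReleaseBuffer(buf))

#define WriteBufferTraced(buf) \
   (elog(NOTICE, "unpin buf=%d at %s:%d (write)", (buf), __FILE__, __LINE__), \
    WriteBuffer(buf))
------------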

-David

----------------------[=========]------------------------
David T. McWherter                udmcwher@mcs.drexel.edu

Many Myths are based on truth
        -- Spock, "The Way to Eden",  stardate 5832.3
