Re: Inefficient handling of LO-restore + Patch - Mailing list pgsql-hackers

From Mario Weilguni
Subject Re: Inefficient handling of LO-restore + Patch
Date
Msg-id D143FBF049570C4BB99D962DC25FC2D21780F8@freedom.icomedias.com
In response to Inefficient handling of LO-restore + Patch  ("Mario Weilguni" <mario.weilguni@icomedias.com>)
Responses Re: Inefficient handling of LO-restore + Patch  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
>"Mario Weilguni" <mario.weilguni@icomedias.com> writes:
>> And I did not find out how I can detect the large object
>> chunksize, either from getting it from the headers (include
>> "storage/large_object.h" did not work)
>
>Why not?
>
>Still, it might make sense to move the LOBLKSIZE definition into
>pg_config.h, since as you say it's of some interest to clients like
>pg_dump.

I tried another approach to detect the LOBLKSIZE of the destination server:
* at restore time, create a LO large enough to be split into two chunks (e.g. BLCKSZ+1 bytes)
* select octet_length(data) from pg_largeobject where loid=OIDOFOBJECT and pageno=0
* select lo_unlink(OIDOFOBJECT)

IMO this has the advantage that LOBLKSIZE is taken from the database I'm restoring to, rather than from a constant
defined at compile time. The drawback is that it wastes an OID.
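The reasoning behind the probe can be sketched as follows. This is only an illustration, assuming a default build where LOBLKSIZE = BLCKSZ/4 (2048 bytes with the default 8K block size); the constant names mirror the server's, but the values here are assumptions, not read from any server.

```python
# Sketch of why the probe works: pg_largeobject stores LO data in pages of
# LOBLKSIZE bytes each. The values below assume a default build and are
# NOT read from a live server.
BLCKSZ = 8192            # assumed default block size
LOBLKSIZE = BLCKSZ // 4  # assumed default LO chunk size (2048)

def split_into_pages(data: bytes) -> list[bytes]:
    """Split large-object data into pg_largeobject pages of LOBLKSIZE bytes."""
    return [data[i:i + LOBLKSIZE] for i in range(0, len(data), LOBLKSIZE)]

# A probe object of BLCKSZ+1 bytes must span at least two pages, because
# LOBLKSIZE <= BLCKSZ. Therefore page 0 is completely full, and its length
# (what octet_length(data) returns for pageno = 0) is exactly LOBLKSIZE.
probe = b"x" * (BLCKSZ + 1)
pages = split_into_pages(probe)
print(len(pages[0]))  # → 2048
```

This is why selecting octet_length(data) for pageno=0 of the probe object reveals the destination server's chunk size regardless of what the client was compiled with.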

Is there a way to get compile-time settings (such as BLCKSZ, LOBLKSIZE and the like) via functions, e.g.
select pginternal('BLCKSZ') or something similar?


I tested with and without my patch against 2 gigabytes of large objects using MD5, and got exactly the same result on
all 25000 large objects, so I think my patch is safe. If there's interest in integrating it into pg_dump, I'll prepare
a patch for the current CVS version.



