AW: Re: pg_dump and LOs (another proposal) - Mailing list pgsql-hackers

From Zeugswetter Andreas SB
Subject AW: Re: pg_dump and LOs (another proposal)
Date
Msg-id 219F68D65015D011A8E000006F8590C605BA59B5@sdexcsrv1.f000.d0188.sd.spardat.at
List pgsql-hackers
 
> At 11:09 5/07/00 -0400, Tom Lane wrote:
> >Philip Warner <pjw@rhyme.com.au> writes:
> >> Having now flirted with recreating BLOBs (and even DBs) with matching
> >> OIDs, I find myself thinking it's a waste of effort for the moment. A
> >> modified version of the system used by Pavel Janik in pg_dumplo may be
> >> substantially more reliable than my previous proposal:
> >
> >I like this a lot better than trying to restore the original OIDs.  For
> >one thing, the restore-original-OIDs idea cannot be made to work if what
> >we want to do is load additional tables into an existing database.
> >
> 
> The thing that bugs me about this is that for 30,000 rows, I do 30,000
> updates after the restore. It seems *really* inefficient, not to mention
> slow.
> 
> I'll also have to modify pg_restore to talk to the database directly (for
> lo import). As a result I will probably send the entire script directly
> from within pg_restore. Do you know if comment parsing ('--') is done in
> the backend, or psql?
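
(A sketch, not from the original thread, of how the 30,000 per-row updates
could be collapsed: if pg_restore records each old/new OID pair in a scratch
table while re-importing the blobs, one UPDATE per referencing column fixes
every row at once. The names lo_xref, mytable and lo_col below are invented.)

    -- Hypothetical cross-reference table, filled by pg_restore with one
    -- (OID recorded in the dump, OID assigned by lo_import) pair per blob.
    CREATE TEMP TABLE lo_xref (old_oid oid, new_oid oid);

    -- ... blobs are re-imported and the pairs inserted here ...

    -- One bulk UPDATE per referencing column instead of one per row.
    UPDATE mytable
       SET lo_col = lo_xref.new_oid
      FROM lo_xref
     WHERE mytable.lo_col = lo_xref.old_oid;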

Strictly speaking, you are absolutely safe if you do only one update with
the max OID from the 30,000 rows before you start creating the LOs.
I don't know whether you know that value beforehand, though.

If you only know it afterwards, then you have to guarantee that no other
connection to this db (actually to the postmaster, if you need the OIDs to
be site-unique) does anything while you insert the LOs and then update to
the max OID.
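
(Again only a sketch, under one reading of the paragraph above: the highest
blob OID referenced by the already-restored rows can be found with an
ordinary query; mytable and lo_col are invented names, and advancing the OID
generator past that value is a backend-internal step that is not shown.)

    -- Largest large-object OID referenced by the restored rows.
    SELECT max(lo_col) AS max_referenced_oid FROM mytable;
    -- The reply suggests bumping the OID counter past this value once,
    -- before the lo_import calls, so newly assigned OIDs cannot collide
    -- with old references; that step is internal to the backend.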

Andreas

