Thread: HELP - need way to restore only blobs
I recently posted a similar message but left out some key info: I migrated my database from 7.3 to 7.4 this weekend using the pg_dumpall tool, which I now realize does not capture blobs. I now need to move only the blob data to the 7.4 database.

The problem with redoing the dump with pg_dump -b is that the database is now in production, and writing over the top of changes to the database is not acceptable. The blob data is very static, so if there were some way to copy the physical files from disk and modify some records in the system tables to properly locate the blob records, that would be best. Another option I was looking at was to restore the archived database with the blobs intact and then restore the production version over the top without destroying the blob data.

All suggestions welcome, I'm dying here.

--
Warren Little
Senior Vice President, Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763  Fax: 866-849-8082
At some point in time, wlittle@securitylending.com (Warren Little) wrote:

> I migrated my database from 7.3 to 7.4 this weekend using the pg_dumpall
> tool which I now realize does not capture blobs.
>
> I now need to move only the blob data to the 7.4 database.
> The problem with redoing the dump with pg_dump -b is the database is now
> in production and writing over the top of changes to the database is not
> exceptable. The blob data is very static so if there was some way to
> copy the physical files from disk and modify some records in the system
> tables to properly locate the blob records that would be best. Another
> option I was looking at was to restore the archived database with the
> blobs intact and then restore the production version over the top
> without destroying the blob data.

In a similar situation, we wrote a script that constructs "\lo_export" commands to dump each large object to a file. The file name for each contains the information needed to reconstruct the object at the other end. For instance, if you have a table like this:

    CREATE TABLE foobar (primary_id INTEGER PRIMARY KEY, large_obj OID, ...)

you'd want a series of commands that look like this:

    \lo_export <large_obj> <primary_id>.dmp

Then, given a directory full of such files, you construct another series of commands that look like this:

    \lo_import <primary_id>.dmp

    UPDATE foobar SET large_obj = <new_value> WHERE primary_id = <primary_id>;

The trick is capturing the output of the \lo_import command and parsing it to get the large object OID after it is created.

I don't know if I have permission to post or email the script, but if you contact me offline I should know by then.

jboes at n/e/x/c/e/r/p/t/d/o/t/c/o/m

--
~~~~~~~~~~~~~~~~| Genius may have its limitations, but stupidity is not
Jeff Boes       | thus handicapped.
jboes@qtm.net   | --Elbert Hubbard (1856-1915), American author
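The export-side step Jeff describes can be sketched as a small generator. This is a minimal illustration, not his actual script; the `foobar`/`primary_id`/`large_obj` names simply follow the example table in his reply, and fetching the (primary_id, OID) pairs from the live database (e.g. via a DB driver) is omitted so the sketch stays self-contained.

```python
# Sketch of the export-side generator: given (primary_id, large_obj)
# pairs from the foobar table, emit one \lo_export command per row.
# The resulting lines can be fed to psql to dump each large object
# into a file whose name encodes the row it belongs to.

def export_commands(rows):
    """rows: iterable of (primary_id, large_obj_oid) tuples."""
    return [f"\\lo_export {oid} {pid}.dmp" for pid, oid in rows]

if __name__ == "__main__":
    # Pairs as they might come from a query like:
    #   SELECT primary_id, large_obj FROM foobar WHERE large_obj IS NOT NULL;
    sample = [(1, 16401), (2, 16402)]
    for cmd in export_commands(sample):
        print(cmd)
```

Redirecting the printed lines into a file and running it with `psql -f` would perform the actual export.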
Thanks to all who responded. Found the pg_dumplo tool in contrib, which did exactly what I needed.

On Wed, 2004-02-18 at 05:54, Jeff Boes wrote:

> In a similar situation, we wrote a script that constructs "\lo_export"
> commands to dump each large object to a file. The file name for each
> contains the information needed to reconstruct the object at the other
> end. [snip]
--
Warren Little
Senior Vice President, Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763  Fax: 866-849-8082
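For completeness, the OID-parsing "trick" from Jeff's reply can be sketched like this. It assumes psql of that era echoed a line of the form `lo_import <oid>` after a successful `\lo_import` (check the output of your psql version), and it reuses the `foobar` example table, so both the output format and the names are assumptions, not a definitive implementation.

```python
import re

# Sketch of the import-side trick: run
#   \lo_import <primary_id>.dmp
# capture psql's output, parse the new large-object OID out of it,
# and emit the matching UPDATE that points the row at the new OID.

def update_for(primary_id, lo_import_output):
    """Build the UPDATE for one restored object from psql's echo line."""
    m = re.search(r"lo_import\s+(\d+)", lo_import_output)
    if not m:
        raise ValueError(f"could not find OID in: {lo_import_output!r}")
    new_oid = m.group(1)
    return (f"UPDATE foobar SET large_obj = {new_oid} "
            f"WHERE primary_id = {primary_id};")

if __name__ == "__main__":
    # Example: psql reported the new object's OID after an import.
    print(update_for(1, "lo_import 152801"))
```

In practice a wrapper script would loop over the `<primary_id>.dmp` files, run each `\lo_import` through psql, and feed the captured output to a function like this.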