Philip Warner <pjw@rhyme.com.au> writes:
> The thing that bugs me about this is that for 30,000 rows, I do 30,000
> updates after the restore. It seems *really* inefficient, not to mention slow.
Shouldn't be a problem. For one thing, I can assure you there are no
databases with 30,000 LOs in them ;-) --- the existing two-tables-per-LO
infrastructure won't support it. (I think Denis Perchine has started
to work on a replacement one-table-for-all-LOs solution, btw.) Possibly
more to the point, there's no reason for pg_restore to grovel through
the individual rows for itself. Having identified a column that
contains (or might contain) LO OIDs, you can do something like
	UPDATE userTable SET oidcolumn = tmptable.newLOoid
		FROM tmptable WHERE oidcolumn = tmptable.oldLOoid;
which should be quick enough, especially given indexes.
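To illustrate the idea, here is a small sketch of that mapping-table remap using SQLite through Python's sqlite3 module (standing in for PostgreSQL; the table and column names userTable, oidcolumn, tmptable, oldLOoid, newLOoid follow the message above, and the data is invented). A correlated subquery is used instead of PostgreSQL's UPDATE ... FROM so the statement stays portable:

```python
import sqlite3

# In-memory stand-in for the restored database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE userTable (id INTEGER, oidcolumn INTEGER)")
con.execute("CREATE TABLE tmptable (oldLOoid INTEGER, newLOoid INTEGER)")

# Invented sample data: two rows reference old LO OIDs, one does not.
con.executemany("INSERT INTO userTable VALUES (?, ?)",
                [(1, 1001), (2, 1002), (3, 9999)])
# The old-OID -> new-OID map that pg_restore would build while
# re-importing the large objects.
con.executemany("INSERT INTO tmptable VALUES (?, ?)",
                [(1001, 2001), (1002, 2002)])

# One set-oriented UPDATE remaps every matching OID at once; rows with
# no entry in the map are left untouched.
con.execute("""
    UPDATE userTable
       SET oidcolumn = (SELECT newLOoid FROM tmptable
                         WHERE tmptable.oldLOoid = userTable.oidcolumn)
     WHERE oidcolumn IN (SELECT oldLOoid FROM tmptable)
""")

print(con.execute("SELECT id, oidcolumn FROM userTable ORDER BY id").fetchall())
# → [(1, 2001), (2, 2002), (3, 9999)]
```

With indexes on oidcolumn and oldLOoid, one such statement per LO-bearing column replaces the per-row update loop entirely.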
> I'll also have to modify pg_restore to talk to the database directly (for
> lo import). As a result I will probably send the entire script directly
> from within pg_restore. Do you know if comment parsing ('--') is done in
> the backend, or psql?
Both, I believe --- psql discards comments, but so will the backend.
Not sure you really need to abandon use of psql, though.
regards, tom lane