On Sun, 2003-01-26 at 17:10, Curt Sampson wrote:
> On Sun, 25 Jan 2003, Ron Johnson wrote:
>
> > > Oh, and you're using COPY right?
> >
> > No. Too much data manipulation to do first. Also, by committing
> > every X thousand rows, if the process must be aborted there's no
> > huge rollback, and the script can skip to the last committed row
> > and pick up from there.
>
> I don't see how the amount of data manipulation makes a difference.
> Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,
> just print the data for the columns, separated by tabs. Where you now
> issue a COMMIT, end the copy.
Yes, create an input file for COPY. Great idea.
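Something along those lines should be easy enough to script. A rough
sketch (the column names, the raw file layout, and the NULL handling
are all just made up for illustration):

    # Sketch: massage the raw data and emit tab-delimited lines that
    # COPY can read.  Table/column layout here is invented; \N is
    # COPY's default NULL marker for text format.
    import csv

    with open("raw_input.csv") as src, open("copy_input.tsv", "w") as out:
        for rec in csv.reader(src):
            acct, amount, posted = rec[0], rec[1], rec[2]  # whatever manipulation is needed
            amount = amount or r"\N"                       # empty field -> NULL
            out.write("\t".join([acct.strip(), amount, posted]) + "\n")

    # then:  COPY mytable (acct, amount, posted) FROM '/path/to/copy_input.tsv';
    # or:    psql -c "\copy mytable from copy_input.tsv"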
However, if I understand you correctly, then to avoid having to roll
back and re-run a complete COPY (which may entail millions of rows),
I'd have to have thousands of separate input files (which would get
processed sequentially).
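If I did go that route, the driver would probably look something like
this (the chunk file names, the "done" log, and the psycopg2 driver
are all just my own placeholders):

    # Sketch: load pre-split chunk files, one COPY + one commit per
    # chunk, remembering which chunks finished so a re-run skips them.
    # (psycopg2 assumed; file names invented.)
    import glob, os
    import psycopg2

    done = set()
    if os.path.exists("chunks.done"):
        done = set(open("chunks.done").read().split())

    conn = psycopg2.connect("dbname=mydb")
    for chunk in sorted(glob.glob("chunk_*.tsv")):
        if chunk in done:
            continue                              # committed on a previous run
        with open(chunk) as f:
            conn.cursor().copy_from(f, "mytable", sep="\t")
        conn.commit()                             # per-chunk commit, not per 25M rows
        open("chunks.done", "a").write(chunk + "\n")
    conn.close()

Workable, but it's a lot of external scaffolding.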
Here's what I'd like to see:
COPY table [ ( column [, ...] ) ]
FROM { 'filename' | stdin }
[ [ WITH ]
[ BINARY ]
[ OIDS ]
[ DELIMITER [ AS ] 'delimiter' ]
[ NULL [ AS ] 'null string' ] ]
[COMMIT EVERY ... ROWS WITH LOGGING] <<<<<<<<<<<<<
[SKIP ... ROWS] <<<<<<<<<<<<<
This way, if I'm loading 25M rows, I can have it commit every, say,
1000 rows, and if it pukes halfway through, then when I restart the
COPY it can SKIP past what's already been loaded and proceed apace.
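Until something like that exists, the closest I can get is to fake it
from the client side. Rough sketch (again, psycopg2, the 1000-row
batch, and the progress file are my own inventions, not anything COPY
gives you today):

    # Sketch: emulate "COMMIT EVERY 1000 ROWS" and "SKIP ... ROWS" from
    # the client.  Feed COPY 1000 rows at a time, commit each batch, and
    # record how far we got so a restart skips past the loaded rows.
    # (psycopg2 assumed; file names and batch size invented.)
    import io, os
    import psycopg2

    BATCH = 1000
    skip = int(open("rows.loaded").read()) if os.path.exists("rows.loaded") else 0

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()

    def load(batch, upto):
        cur.copy_from(io.StringIO("".join(batch)), "mytable", sep="\t")
        conn.commit()                                  # COMMIT EVERY ... ROWS
        open("rows.loaded", "w").write(str(upto))      # ... WITH LOGGING, sort of

    with open("copy_input.tsv") as src:
        batch = []
        lineno = 0
        for lineno, line in enumerate(src, 1):
            if lineno <= skip:                         # SKIP ... ROWS
                continue
            batch.append(line)
            if len(batch) == BATCH:
                load(batch, lineno)
                batch = []
        if batch:                                      # trailing partial batch
            load(batch, lineno)
    conn.close()

But that's exactly the kind of bookkeeping I'd rather COPY did for me.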
--
+---------------------------------------------------------------+
| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |
| Jefferson, LA USA http://members.cox.net/ron.l.johnson |
| |
| "Fear the Penguin!!" |
+---------------------------------------------------------------+