Re: LOCK TABLE & speeding up mass data loads - Mailing list pgsql-performance

From Ron Johnson
Subject Re: LOCK TABLE & speeding up mass data loads
Date
Msg-id 1043658500.818.398.camel@haggis
In response to Re: LOCK TABLE & speeding up mass data loads  (Curt Sampson <cjs@cynic.net>)
Responses Re: LOCK TABLE & speeding up mass data loads  ("Shridhar Daithankar" <shridhar_daithankar@persistent.co.in>)
Re: LOCK TABLE & speeding up mass data loads  (Curt Sampson <cjs@cynic.net>)
List pgsql-performance
On Sun, 2003-01-26 at 17:10, Curt Sampson wrote:
> On Sun, 25 Jan 2003, Ron Johnson wrote:
>
> > > Oh, and you're using COPY right?
> >
> > No.  Too much data manipulation to do 1st.  Also, by committing every
> > X thousand rows, then if the process must be aborted, then there's
> > no huge rollback, and the script can then skip to the last committed
> > row and pick up from there.
>
> I don't see how the amount of data manipulation makes a difference.
> Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,
> just print the data for the columns, separated by tabs. Where you now
> issue a COMMIT, end the copy.

Yes, create an input file for COPY.  Great idea.
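Just to make sure I follow, something like this is what I picture (file,
table, and column names below are made up, and the real per-row
manipulation is a lot messier than this):

    # write the manipulated rows as tab-separated lines instead of INSERTs
    def transform(raw_line):
        # stand-in for the real per-row data manipulation
        fields = raw_line.rstrip("\n").split(",")
        return "\t".join(fields)

    with open("raw_extract.csv") as src, open("copy_input.tsv", "w") as out:
        for line in src:
            out.write(transform(line) + "\n")

    # then a single statement loads the whole file, e.g. from psql:
    #   \copy target_table (col1, col2, col3) from 'copy_input.tsv'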

However, if I understand you correctly, then if I want to avoid
rolling back and re-running a complete COPY (which may entail
millions of rows), I'd have to have thousands of separate input
files (which would get processed sequentially).

Here's what I'd like to see:
COPY table [ ( column [, ...] ) ]
    FROM { 'filename' | stdin }
    [ [ WITH ]
          [ BINARY ]
          [ OIDS ]
          [ DELIMITER [ AS ] 'delimiter' ]
          [ NULL [ AS ] 'null string' ] ]
    [COMMIT EVERY ... ROWS WITH LOGGING]  <<<<<<<<<<<<<
    [SKIP ... ROWS]          <<<<<<<<<<<<<

This way, if I'm loading 25M rows, I can have it commit every, say,
1000 rows, and if it pukes 1/2 way thru, then when I restart the
COPY, it can SKIP past what's already been loaded, and proceed apace.
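Until something like that exists, the closest thing I can see is doing
the chunking client-side: feed the prepared file to \copy in fixed-size
batches, one transaction per batch, and record how many rows have already
been committed so a restart can skip past them.  A rough sketch, with the
chunk size, database, file, and table names all made up:

    import os
    import subprocess

    CHUNK_ROWS = 1000
    STATE_FILE = "rows_committed.txt"   # remembers progress across restarts

    def rows_already_committed():
        if os.path.exists(STATE_FILE):
            return int(open(STATE_FILE).read().strip())
        return 0

    def flush(batch, total_done):
        if not batch:
            return
        with open("chunk.tsv", "w") as f:
            f.writelines(batch)
        # each chunk is its own \copy, hence its own transaction; a later
        # failure never rolls back rows that are already loaded
        subprocess.run(
            ["psql", "-d", "mydb", "-c",
             "\\copy target_table (col1, col2, col3) from 'chunk.tsv'"],
            check=True)
        with open(STATE_FILE, "w") as f:
            f.write(str(total_done))

    skip = rows_already_committed()
    batch, done = [], 0
    with open("copy_input.tsv") as src:
        for line in src:
            done += 1
            if done <= skip:
                continue               # poor man's SKIP ... ROWS
            batch.append(line)
            if len(batch) == CHUNK_ROWS:
                flush(batch, done)     # poor man's COMMIT EVERY ... ROWS
                batch = []
    flush(batch, done)

It works, but it pushes all the bookkeeping onto every loading script,
which is why I'd rather see COPY grow the options above.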

--
+---------------------------------------------------------------+
| Ron Johnson, Jr.        mailto:ron.l.johnson@cox.net          |
| Jefferson, LA  USA      http://members.cox.net/ron.l.johnson  |
|                                                               |
| "Fear the Penguin!!"                                          |
+---------------------------------------------------------------+

