Re: LOCK TABLE & speeding up mass data loads - Mailing list pgsql-performance

From Shridhar Daithankar
Subject Re: LOCK TABLE & speeding up mass data loads
Date
Msg-id 3E354CFB.32732.A38120C@localhost
In response to Re: LOCK TABLE & speeding up mass data loads  (Ron Johnson <ron.l.johnson@cox.net>)
Responses Re: LOCK TABLE & speeding up mass data loads
Re: LOCK TABLE & speeding up mass data loads
List pgsql-performance
On 27 Jan 2003 at 3:08, Ron Johnson wrote:

> Here's what I'd like to see:
> COPY table [ ( column [, ...] ) ]
>     FROM { 'filename' | stdin }
>     [ [ WITH ]
>           [ BINARY ]
>           [ OIDS ]
>           [ DELIMITER [ AS ] 'delimiter' ]
>           [ NULL [ AS ] 'null string' ] ]
>     [COMMIT EVERY ... ROWS WITH LOGGING]  <<<<<<<<<<<<<
>     [SKIP ... ROWS]          <<<<<<<<<<<<<
>
> This way, if I'm loading 25M rows, I can have it commit every, say,
> 1000 rows, and if it pukes 1/2 way thru, then when I restart the
> COPY, it can SKIP past what's already been loaded, and proceed apace.

IIRC, there is a hook in \copy (the psql client-side command, not the server-side
COPY) for how many rows you would like per transaction. I remember having
benchmarked that and concluding that doing the COPY in a single transaction is
the fastest way to do it.

Don't have a PostgreSQL installation handy (I am on Linux at the moment), but
this is definitely possible.
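
Of course, nothing stops you from emulating the COMMIT EVERY/SKIP behaviour on
the client side today. A minimal sketch of the idea, in Python with psycopg2;
the table name, file name and chunk size here are made up for illustration,
and I have not run this against a real installation:

    import io
    import psycopg2

    CHUNK = 1000   # commit every CHUNK rows
    SKIP = 0       # rows already committed by a previous, interrupted run

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    with open("data.tsv") as f:
        # Skip past rows that an earlier run already committed.
        for _ in range(SKIP):
            if not f.readline():
                break

        while True:
            # Read up to CHUNK lines from the input file.
            lines = [line for _, line in zip(range(CHUNK), f)]
            if not lines:
                break
            # COPY ... FROM STDIN runs inside the current transaction,
            # so every chunk gets committed on its own.
            cur.copy_expert("COPY mytable FROM STDIN",
                            io.StringIO("".join(lines)))
            conn.commit()

    cur.close()
    conn.close()

Each chunk is its own transaction, so a crash mid-load loses at most CHUNK
uncommitted rows, and a restart only has to set SKIP to the number of rows
already in the table.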

Bye
 Shridhar

--
I still maintain the point that designing a monolithic kernel in 1991 is
a fundamental error.  Be thankful you are not my student.  You would not get
a high grade for such a design :-)
(Andrew Tanenbaum to Linus Torvalds)

