Re: psql copy errors - Mailing list pgsql-admin

From Vladimir Yevdokimov
Subject Re: psql copy errors
Date
Msg-id 200506231748.19051.vladimir@givex.com
Whole thread Raw
In response to psql copy errors  (David Bear <David.Bear@asu.edu>)
List pgsql-admin
On June 23, 2005 03:27 pm, David Bear wrote:
> I'm finding the \copy is very brittle. It seems to stop for every
> little reason. Is there a way to tell it to be more forgiving -- for
> example, to ignore extra data fields that might exist on a line?
>
> Or, to have it just skip that offending record but continue on to the
> next.
>
> I've got a tab delimited file, but if \copy sees any extra tabs in the
> file it just stops at that record. I want to be able to control what
> pg does when it hits an exception.
>
> I'm curious what others do for bulk data migration. Since copy seems
> so brittle, there must be a better way...
>

You may use the '-d' option of pg_dump, which dumps the data as INSERT statements.
When you load the dumped data, tabs are then processed properly; any invalid record will fail,
but the load itself will run to completion.
If you redirect the output to a separate file, you can analyze afterwards how many records failed.
Maybe that's what you need in your case.
The only drawback of this method that I know of is that loading takes longer, since each
record is fully validated.
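
A minimal sketch of the workflow described above; "mydb" and "mytable" are placeholder names, not anything from the original thread:

```shell
# Dump one table as INSERT statements rather than COPY data.
# (-d was the spelling in pg_dump of that era; in later PostgreSQL
# versions the option is --inserts, since -d now names the database.)
pg_dump -d -t mytable mydb > mytable.sql

# Reload the dump. Each INSERT is validated individually, so a bad
# row fails on its own while the rest of the load continues.
# Redirect stderr to a file so the failures can be analyzed later.
psql mydb -f mytable.sql 2> load_errors.log

# Count how many records failed.
grep -c '^ERROR' load_errors.log
```

Since every row goes through a full INSERT, this is noticeably slower than COPY, which is the trade-off mentioned above.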
--
Vladimir Yevdokimov <vladimir@givex.com>
