Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From Vadim Mikheev
Subject Re: Bulkloading using COPY - ignore duplicates?
Date
Msg-id 000001c194f4$37c84f50$ed2db841@home
Whole thread Raw
In response to Re: Bulkloading using COPY - ignore duplicates?  (Daniel Kalchev <daniel@digsys.bg>)
Responses Re: Bulkloading using COPY - ignore duplicates?  (Daniel Kalchev <daniel@digsys.bg>)
Re: Bulkloading using COPY - ignore duplicates?  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
> Now, how about the same functionality for
>
> INSERT into table1 SELECT * from table2 ... WITH ERRORS;
>
> Should allow the insert to complete, even if table1 has unique indexes
> and we try to insert duplicate rows. Might save LOTS of time in
> bulkloading scripts not having to do single INSERTs.

1. I prefer Oracle's way (and others', I believe): put the statement(s) in a
PL block and define what action should be taken for which exceptions (errors)
(i.e. IGNORE for a NON_UNIQ_KEY error, etc).
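A sketch of the Oracle-style approach described above, with hypothetical table and column names; in Oracle PL/SQL the predefined exception for a unique-key violation is spelled DUP_VAL_ON_INDEX:

```sql
-- Hypothetical Oracle PL/SQL sketch: load rows one by one and ignore
-- duplicate-key errors. Table and column names are made up for illustration.
BEGIN
    FOR r IN (SELECT id, val FROM staging) LOOP
        BEGIN
            INSERT INTO target (id, val) VALUES (r.id, r.val);
        EXCEPTION
            WHEN DUP_VAL_ON_INDEX THEN
                NULL;  -- silently skip rows that violate the unique index
        END;
    END LOOP;
END;
```

The per-row inner block is what lets the load continue past individual failures, at the cost of row-at-a-time execution.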

2. For an INSERT ... SELECT statement, one can put DISTINCT in the SELECT's
target list.
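Note that DISTINCT only removes duplicates within the SELECT's own result set, not collisions with rows already present in table1. A hedged sketch, reusing the table names from the quoted message and assuming a hypothetical key column "id":

```sql
-- Deduplicate within the source:
INSERT INTO table1
SELECT DISTINCT * FROM table2;

-- Rows already present in table1 would still raise a unique-key error;
-- a NOT EXISTS guard (assuming the unique index is on "id") covers
-- that case as well:
INSERT INTO table1
SELECT DISTINCT t2.* FROM table2 t2
WHERE NOT EXISTS (SELECT 1 FROM table1 t1 WHERE t1.id = t2.id);
```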

> Guess all this will be available in 7.3?

We'll see.

Vadim
