Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From Daniel Kalchev
Subject Re: Bulkloading using COPY - ignore duplicates?
Date
Msg-id 200201040736.JAA29349@dcave.digsys.bg
In response to Re: Bulkloading using COPY - ignore duplicates?  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
>>>Bruce Momjian said:
> Mikheev, Vadim wrote:
> > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > > > > Seems nested transactions are not required if we load
> > > > > each COPY line in its own transaction, like we do with
> > > > > INSERT from pg_dump.
> > > >
> > > > I don't think that's an acceptable answer.  Consider
> > >
> > > Oh, very good point.  "Requires nested transactions" added to TODO.
> >
> > Also add performance issue with per-line-commit...
> > Also-II - there is more common name for required feature - savepoints.
>
> OK, updated TODO to prefer savepoints term.
 

Now, how about the same functionality for

INSERT into table1 SELECT * from table2 ... WITH ERRORS;

This should allow the insert to complete even if table1 has unique indexes and
we try to insert duplicate rows. It might save lots of time in bulkloading
scripts, which would no longer have to fall back to single-row INSERTs.
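The WITH ERRORS clause above is only proposed syntax. As a sketch of the same effect with plain SQL, a set-based load can skip rows whose key already exists by anti-joining against the target; the schema here (table1/table2 with a unique index on table1.id) is hypothetical, purely for illustration:

```sql
-- Hypothetical schema: table1 and table2 share columns (id, val),
-- and table1 has a unique index on id.
INSERT INTO table1 (id, val)
SELECT t2.id, t2.val
FROM table2 t2
WHERE NOT EXISTS (
    SELECT 1 FROM table1 t1 WHERE t1.id = t2.id
);
```

Note this only skips rows already present in table1; duplicates within table2 itself would still need to be eliminated (e.g. with DISTINCT) before the insert.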

Guess all this will be available in 7.3?

Daniel


