Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From Hiroshi Inoue
Subject Re: Bulkloading using COPY - ignore duplicates?
Date
Msg-id EKEJJICOHDIEMGPNIFIJKEADFIAA.Inoue@tpf.co.jp
In response to Re: Bulkloading using COPY - ignore duplicates?  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
> -----Original Message-----
> From: Tom Lane
>
> I said:
> > "Zeugswetter Andreas SB SD" <ZeugswetterA@spardat.at> writes:
> >> I thought that the problem was, that you cannot simply skip the
> >> insert, because at that time the tuple (pointer) might have already
> been successfully inserted into another index/heap, and thus this was
> >> only sanely possible with savepoints/undo.
>
> > Hmm, good point.  If we don't error out the transaction then that tuple
> > would become good when we commit.  This is nastier than it appears.
>
> On further thought, I think it *would* be possible to do this without
> savepoints,

It's a well-known issue that partial rollback functionality lies at the
root of this kind of problem, and it's the reason I've said that UNDO
functionality has the highest priority. IMHO we shouldn't implement
partial rollback functionality that is specific to one individual
problem.
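
For reference, a workaround that needs neither savepoints nor UNDO is to
COPY into an unconstrained staging table and then filter out the
duplicate keys while moving the rows into the real table. A rough
sketch, assuming a hypothetical target table items(id integer PRIMARY
KEY, val text) and a data file /tmp/items.dat:

  BEGIN;

  -- Stage the file in a table with no unique constraints, so COPY itself
  -- can never hit a duplicate-key error.
  CREATE TEMP TABLE items_stage (id integer, val text);
  COPY items_stage FROM '/tmp/items.dat';

  -- Insert only the keys not already present; DISTINCT ON also collapses
  -- duplicates that occur within the file itself.
  INSERT INTO items (id, val)
  SELECT DISTINCT ON (id) id, val
    FROM items_stage s
   WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.id = s.id);

  DROP TABLE items_stage;
  COMMIT;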

regards,
Hiroshi Inoue


