Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From Jim Buttafuoco
Subject Re: Bulkloading using COPY - ignore duplicates?
Date
Msg-id 200110022259.f92MxgV26809@dual.buttafuoco.net
In response to Bulkloading using COPY - ignore duplicates?  (Lee Kindness <lkindness@csl.co.uk>)
List pgsql-hackers
I have used Oracle SQL*Loader for many years now.  It has the ability to
write rejected/discarded/bad records to an output file and keep on going;
maybe this should be added to the COPY command.


COPY [ BINARY ] table [ WITH OIDS ]
    FROM { 'filename' | stdin }
    [ [ USING ] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'nullstring' ]
    [ DISCARDS 'filename' ]

What do you think?
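For example, a load using the proposed DISCARDS clause might look like this (hypothetical syntax; the table and file names are illustrative, and rejected rows would go to the discard file instead of aborting the load):

    -- Hypothetical: a row that fails a constraint (e.g. a duplicate
    -- key) would be written to the discard file and the load would
    -- continue with the next row.
    COPY customers FROM '/data/customers.dat'
        USING DELIMITERS '|'
        WITH NULL AS ''
        DISCARDS '/data/customers.bad';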


> Tom Lane writes:
> 
> > It occurs to me that skip-the-insert might be a useful option for
> > INSERTs that detect a unique-key conflict, not only for COPY.  (Cf.
> > the regular discussions we see on whether to do INSERT first or
> > UPDATE first when the key might already exist.)  Maybe a SET variable
> > that applies to all forms of insertion would be appropriate.
> 
> What we need is:
> 
> 1. Make errors not abort the transaction.
> 
> 2. Error codes
> 
> Then you can make your client deal with this in whichever way you
> want, at least for single-value inserts.
> 
> However, it seems to me that COPY ignoring duplicates can easily be
> done by preprocessing the input file.
> 
> -- 
> Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter
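One in-database equivalent of the preprocessing Peter suggests is to load into a staging table first and let SQL drop the duplicates (a sketch; the table and column names are illustrative, and DISTINCT ON is a PostgreSQL extension):

    -- Sketch: COPY into a temporary staging table, then insert only
    -- rows whose key is not already present, keeping one row per key.
    CREATE TEMP TABLE customers_stage (id integer, name text);

    COPY customers_stage FROM '/data/customers.dat';

    INSERT INTO customers (id, name)
        SELECT DISTINCT ON (s.id) s.id, s.name
        FROM customers_stage s
        WHERE NOT EXISTS
            (SELECT 1 FROM customers c WHERE c.id = s.id);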



