Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From Lee Kindness
Subject Re: Bulkloading using COPY - ignore duplicates?
Date
Msg-id 15385.58057.662013.530035@elsick.csl.co.uk
Whole thread Raw
In response to Re: Bulkloading using COPY - ignore duplicates?  (Peter Eisentraut <peter_e@gmx.net>)
List pgsql-hackers
Peter Eisentraut writes:
> I think allowing this feature would open up a world of new
> dangerous ideas, such as ignoring check constraints or foreign keys
> or magically massaging other tables so that the foreign keys are
> satisfied, or ignoring default values, or whatever. The next step
> would then be allowing the same optimizations in INSERT. I feel
> COPY should load the data and that's it. If you don't like the
> data you have then you have to fix it first.

I agree that PostgreSQL's checks during COPY are a bonus and I
wouldn't dream of not having them. Many database systems provide a
fast bulkload precisely by skipping these constraints and
cross-reference checks - a tricky, horrid situation.

However, I suppose the question is whether such 'invalid data' should
abort the whole transaction - that seems a bit drastic...

I suppose I'm not really after an IGNORE DUPLICATES option, but rather
a CONTINUE ON ERROR kind of thing.
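For what it's worth, a workaround I can sketch today (table and column
names here are hypothetical, and the file path is just a placeholder)
is to COPY into a scratch table first, then move across only the rows
whose key is not already present - duplicates get left behind instead
of aborting the load:

```sql
-- Hypothetical schema: 'target' is the real table with unique key 'id';
-- 'staging' is an empty same-shaped scratch table.
CREATE TEMP TABLE staging AS SELECT * FROM target LIMIT 0;

-- COPY into the unconstrained staging table; nothing can abort here
-- on a duplicate key.
COPY staging FROM '/path/to/data.txt';

-- Insert only rows whose key does not already exist in target.
INSERT INTO target
SELECT s.*
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
```

It is two extra passes over the data, of course, which is exactly why a
built-in option on COPY itself would be nicer for genuine bulkloads.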

Regards, Lee.

