Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From: Peter Eisentraut
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date:
Msg-id: Pine.LNX.4.30.0112131724100.647-100000@peter.localdomain
In response to: Re: Bulkloading using COPY - ignore duplicates? (Lee Kindness <lkindness@csl.co.uk>)
Responses: Re: Bulkloading using COPY - ignore duplicates?
List: pgsql-hackers
Lee Kindness writes:

> Yes, in an ideal world the input to COPY should be clean and
> consistent with defined indexes. However, this is only really the
> case when COPY is used for database/table backup and restore. It
> misses the point that a major use of COPY is in speed optimisation
> on bulk inserts...

I think allowing this feature would open up a world of new dangerous
ideas, such as ignoring check constraints or foreign keys, magically
massaging other tables so that the foreign keys are satisfied, or
ignoring default values, or whatever.  The next step would then be
allowing the same optimizations in INSERT.  I feel COPY should load the
data and that's it.  If you don't like the data you have, then you have
to fix it first.
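For illustration, one way to "fix it first" is to COPY into a staging
table and then insert only the rows that are not already present. The
names used here (target, staging, id, /path/to/bulk.dat) are purely
hypothetical, and target is assumed to have a unique index on id:

    -- hypothetical names; target is assumed to have a unique index on id
    CREATE TEMP TABLE staging AS SELECT * FROM target WHERE false;
    COPY staging FROM '/path/to/bulk.dat';
    INSERT INTO target
      SELECT DISTINCT ON (s.id) s.*
        FROM staging s
       WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);

The duplicates are weeded out in plain SQL before the data ever reaches
the indexed table, so COPY itself stays a straightforward loader.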

-- 
Peter Eisentraut   peter_e@gmx.net


