Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From: Jim Buttafuoco
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date:
Msg-id: 200112161412.fBGECER20364@dual.buttafuoco.net
In response to: Bulkloading using COPY - ignore duplicates?  (Lee Kindness <lkindness@csl.co.uk>)
Responses: Re: Bulkloading using COPY - ignore duplicates?
List: pgsql-hackers
I agree with Lee. I also like Oracle's option of a discard file, so
you can look at what was rejected, fix the problem, and reload just
the rejects if necessary.
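
For what it's worth, something like a discard file can be faked by
staging the load and splitting out the collisions before the real
insert. This is only a sketch - the table and column names (target_t,
staging_t, id) are made up, and the COPY-from-a-query form needs a
much newer server than the one under discussion:

    -- Load everything into a staging table first; COPY still applies
    -- data-type checks here, but no target-table constraints yet.
    CREATE TEMP TABLE staging_t (LIKE target_t);
    COPY staging_t FROM '/tmp/data.csv';

    -- Write out the would-be duplicates first; this plays the role of
    -- Oracle's discard file (duplicates within the input file itself
    -- are not handled here).
    COPY (SELECT s.* FROM staging_t s JOIN target_t t USING (id))
        TO '/tmp/rejects.csv';

    -- Then load only the rows that don't collide with existing keys.
    INSERT INTO target_t
    SELECT s.* FROM staging_t s
    WHERE NOT EXISTS (SELECT 1 FROM target_t t WHERE t.id = s.id);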

Jim


> Peter Eisentraut writes:
>  > I think allowing this feature would open up a world of new
>  > dangerous ideas, such as ignoring check constraints or foreign keys
>  > or magically massaging other tables so that the foreign keys are
>  > satisfied, or ignoring default values, or whatever.  The next step
>  > would then be allowing the same optimizations in INSERT.  I feel
>  > COPY should load the data and that's it.  If you don't like the
>  > data you have then you have to fix it first.
> 
> I agree that PostgreSQL's checks during COPY are a bonus and I
> wouldn't dream of not having them. Many database systems provide a
> fast bulkload by ignoring these constraints and cross references -
> that's a tricky/horrid situation.
> 
> However, I suppose the question is whether such 'invalid data' should
> abort the transaction - that seems a bit drastic...
> 
> I suppose I'm not really after an IGNORE DUPLICATES option, but rather
> a CONTINUE ON ERROR kind of thing.
> 
> Regards, Lee.

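Lee's CONTINUE ON ERROR idea can also be approximated per row, at some
cost in speed. Again just a sketch, not anything COPY does itself: the
tables (staging_t, target_t, rejects_t, with columns id/val) are
hypothetical, DO blocks need a much newer server, and the per-row
subtransactions make this far slower than a plain COPY:

    DO $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN SELECT id, val FROM staging_t LOOP
            BEGIN
                -- Each BEGIN/EXCEPTION block acts as a savepoint, so
                -- a bad row no longer aborts the whole load.
                INSERT INTO target_t (id, val) VALUES (r.id, r.val);
            EXCEPTION WHEN unique_violation THEN
                -- Keep the reject around for later inspection, much
                -- like a discard file.
                INSERT INTO rejects_t (id, val) VALUES (r.id, r.val);
            END;
        END LOOP;
    END;
    $$;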