Re: Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From Patrick Welche
Subject Re: Bulkloading using COPY - ignore duplicates?
Date 2001-12-13 15:29:57
Msg-id 20011213152957.C12426@quartz.newn.cam.ac.uk
In response to Re: Bulkloading using COPY - ignore duplicates?  (Lee Kindness <lkindness@csl.co.uk>)
Responses Re: Bulkloading using COPY - ignore duplicates?  (Lee Kindness <lkindness@csl.co.uk>)
List pgsql-hackers
On Thu, Dec 13, 2001 at 01:25:11PM +0000, Lee Kindness wrote:
> That's what I'm currently doing as a workaround - a SELECT DISTINCT
> from a temporary table into the real table with the unique index on
> it. However this takes absolute ages - say 5 seconds for the copy
> (which is the ballpark figure I'm aiming toward and can achieve with
> Ingres) plus another 30ish seconds for the SELECT DISTINCT.

Then your column really isn't unique, so how about dropping the unique index,
importing the data, fixing the duplicates, and recreating the unique index -
just as another possible workaround ;)
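Something along these lines, for instance - the table name, column name and
file path are purely illustrative, and the DELETE uses the old oid-based
trick for removing duplicates, which assumes the extra rows keyed on "id"
really are redundant copies:

    BEGIN;
    DROP INDEX mytable_id_key;

    COPY mytable FROM '/path/to/bulk.dat';

    -- keep one row per id, drop the rest
    DELETE FROM mytable
     WHERE oid NOT IN (SELECT min(oid) FROM mytable GROUP BY id);

    CREATE UNIQUE INDEX mytable_id_key ON mytable (id);
    COMMIT;

Whether that beats the temporary-table-plus-SELECT-DISTINCT route will
depend on how many duplicates you actually have, but the COPY itself should
run at full speed without the index in place.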

Patrick

