Bulkloading using COPY - ignore duplicates? - Mailing list pgsql-hackers

From Lee Kindness
Subject Bulkloading using COPY - ignore duplicates?
Msg-id 15288.19913.741567.344714@elsick.csl.co.uk
List pgsql-hackers
Hello,

I'm in the process of porting a large application from Ingres to
PostgreSQL. We make heavy use of bulkloading using the 'COPY'
statement in ESQL/C. Consider the SQL statements below (in a psql
session on an arbitrary database):
CREATE TABLE copytest (f1 INTEGER, f2 INTEGER);
CREATE UNIQUE INDEX copytest_idx ON copytest USING BTREE (f1, f2);
COPY copytest FROM '/tmp/copytest';

Given the file /tmp/copytest:
1	1
2	2
3	3
4	4
4	4
5	5
6	6

will result in the following output:
ERROR:  copy: line 5, Cannot insert a duplicate key into unique index copytest_idx

However, my application code assumes that duplicate rows will simply be
ignored (this is the behaviour in Ingres, and I believe in Oracle's
bulkloader too). I propose modifying _bt_check_unique() in
src/backend/access/nbtree/nbtinsert.c to emit a NOTICE-level elog()
(rather than ERROR) and return NULL (or another appropriate value) to
the calling function when a duplicate key is detected and a COPY FROM
is in progress (a new parameter would flag this); a rough sketch of the
idea follows.
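
To make the intended control flow concrete, here is a toy,
self-contained C mock-up, not actual backend code: check_unique() and
the in_copy_from flag are illustrative stand-ins for _bt_check_unique()
and the proposed new parameter, and the real function's signature and
return convention differ.

#include <stdio.h>
#include <stdbool.h>

/* Returns true if the row should be inserted, false if it should be
 * skipped.  in_copy_from stands in for the proposed new flag. */
static bool
check_unique(int key, bool duplicate, bool in_copy_from)
{
    if (!duplicate)
        return true;

    if (in_copy_from)
    {
        /* Proposed behaviour: notice the duplicate and keep loading. */
        fprintf(stderr, "NOTICE:  copy: ignoring duplicate key %d\n", key);
        return false;
    }

    /* Current behaviour: a hard error aborts the whole statement
     * (in the real backend, elog(ERROR) never returns). */
    fprintf(stderr, "ERROR:  Cannot insert a duplicate key %d\n", key);
    return false;
}

int
main(void)
{
    /* The f1 column of the /tmp/copytest example above. */
    int rows[] = {1, 2, 3, 4, 4, 5, 6};
    bool seen[8] = {false};

    for (int i = 0; i < 7; i++)
    {
        if (check_unique(rows[i], seen[rows[i]], true))
        {
            seen[rows[i]] = true;
            printf("inserted %d\n", rows[i]);
        }
    }
    return 0;
}

With in_copy_from true, all six distinct rows load and the duplicate is
merely reported; with it false, the load stops at line 5, matching the
ERROR shown above.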

Would this seem a reasonable thing to do? Does anyone rely on COPY
FROM raising an ERROR on duplicate input? Or would something like:

WITH ON_DUPLICATE = CONTINUE | TERMINATE (or similar)

need to be added to the COPY command (I hope not)?

Thanks,

--
Lee Kindness, Senior Software Engineer
Concept Systems Limited.

