On Sun, 2015-05-24 at 16:56 +0630, Arup Rakshit wrote:
> Hi,
>
> I am copying the data from a CSV file to a table using the COPY
> command. But one thing I got stuck on is how to skip duplicate
> records while copying from the CSV into the table. Looking at the
> documentation, it seems PostgreSQL doesn't have any built-in tool to
> handle this with the COPY command. From a Google search I found the
> idea below of using a temp table:
>
> http://stackoverflow.com/questions/13947327/to-ignore-duplicate-keys-during-copy-from-in-postgresql
>
> I am also considering letting the records get inserted and then
> deleting the duplicate records from the table afterwards, as this
> post suggested -
> http://www.postgresql.org/message-id/37013500.DFF0A64A@manhattanproject.com.
>
> Both solutions look like they do the work twice, and I am not sure
> which is best here. Can anybody suggest which approach I should
> adopt? Or, if you have any better ideas for this task, please share.
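
Either of the approaches you found will work. For the temp-table
approach from the Stack Overflow link, a minimal sketch might look
like the following (the table, column and file names are placeholders,
and "key" stands for whichever column defines a duplicate):

  CREATE TEMP TABLE tmp_import (LIKE mytable INCLUDING DEFAULTS);

  COPY tmp_import FROM '/path/to/data.csv' WITH (FORMAT csv);

  -- insert only rows whose key is not already present, keeping
  -- one row per key from the import itself
  INSERT INTO mytable
  SELECT DISTINCT ON (key) *
  FROM tmp_import
  WHERE key NOT IN (SELECT key FROM mytable);

  DROP TABLE tmp_import;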
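The delete-afterwards approach from the second link can be done in a
single statement using the system column ctid (again, "mytable" and
"key" are placeholders):

  -- keep one arbitrary row per key and delete the rest
  DELETE FROM mytable a
  USING mytable b
  WHERE a.ctid < b.ctid
    AND a.key = b.key;

But the simplest fix may be to remove the duplicates before the data
ever reaches the database.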
Assuming you are using Unix, or can install Unix tools, run the input
files through
sort -u
before passing them to COPY.
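For example (the file, database and table names are placeholders):

  sort -u input.csv > deduped.csv
  psql -d mydb -c "\copy mytable FROM 'deduped.csv' WITH (FORMAT csv)"

Note that sort -u only drops lines that are byte-for-byte identical;
rows that share a key but differ in other columns will get through,
and a CSV header line, if there is one, should be stripped off first.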
Oliver Elphick