On Mon, Oct 06, 2003 at 19:28:29 +0200,
papapep <papapep@gmx.net> wrote:
>
> I've got, on the other hand, text files prepared to be loaded into this
> table with the \copy command, but we are not sure that they contain no
> repeated rows (we have found duplicated rows several times).
>
> I'm trying to create a function that handles these duplicated rows to
> keep the table "clean" of them. In fact, I don't mind whether the
> duplicated rows are inserted into a "duplicated rows" table (which might
> be a good way to detect where they are generated) or whether they simply
> go "missing in action".

And what do you want to happen when you run across a duplicate row?
Do you just want to discard tuples with a duplicate primary key?
If you are discarding duplicates, do you care which of the duplicates
is discarded?
If you want to combine data from the duplicates, do you have a precise
description of what you want to happen?
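
If it turns out that you just want to keep one row per key and drop the
rest, one approach that is usually good enough is to \copy into a scratch
table first and only then move the rows over. What follows is only a rough
sketch, not tested against your data, and all the names in it are invented
("mytable" with primary key "id", a load file "datafile.txt"); adjust them
to your schema:

    -- 'mytable', 'id' and 'datafile.txt' are placeholders for your own
    -- table, key column and data file.

    -- empty scratch table with the same columns as the real one
    CREATE TEMP TABLE mytable_load AS SELECT * FROM mytable LIMIT 0;

    \copy mytable_load from 'datafile.txt'

    -- keep one row per key and skip keys already present in the table;
    -- which duplicate survives is arbitrary here (add ORDER BY id, ...
    -- after the FROM clause if you care which one is kept)
    INSERT INTO mytable
    SELECT DISTINCT ON (id) *
    FROM mytable_load
    WHERE id NOT IN (SELECT id FROM mytable);

If you would rather set the duplicates aside in a "duplicated rows" table,
as you suggested, something along these lines (same invented names) could
capture them before the INSERT above runs:

    -- rows whose key collides within the file or with existing data
    CREATE TABLE mytable_dups AS
    SELECT * FROM mytable_load
    WHERE id IN (SELECT id FROM mytable_load
                 GROUP BY id HAVING count(*) > 1)
       OR id IN (SELECT id FROM mytable);

Whether any of that is appropriate really depends on your answers to the
questions above.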