Robert Treat wrote:
>>> I have a table that contains some duplicate records, and the table was
>>> created without OIDs, for example:
>>> id | temp_id
>>> ----+---------
>>> 10 | 1
>>> 10 | 1
>>> 10 | 1
>>> 20 | 4
>>> 20 | 4
>>> 30 | 5
>>> 30 | 5
>>> I want the duplicated records removed with only one of each kept, so
>>> the result is:
>>> 10 1
>>> 20 4
>>> 30 5
>>>
>>> I know creating a temp table would resolve this problem, but I don't
>>> want to do it that way :)
>>
>> DELETE FROM t t1 USING t t2
>> WHERE t1.id = t2.id AND t1.temp_id = t2.temp_id AND t1.ctid > t2.ctid;
>
> note that one problem the DELETE FROM approaches have that the temp table
> solutions don't is that you can end up with a lot of dead tuples if there
> were a lot of duplicates... so if you can afford the locks, it's not a bad
> idea to do
>   begin;
>   lock table t1 in access exclusive mode;
>   create temp table x as select ... from t1;
>   truncate t1;
>   insert into t1 select * from x;
>   create unique index ui1 on t1(...);
>   commit;
> this way your now-unique table will be nice and compacted, and won't get
> any more duplicate rows.
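The rebuild recipe quoted above can be sketched as a runnable stand-in. Since
only PostgreSQL has TRUNCATE and LOCK TABLE ... IN ACCESS EXCLUSIVE MODE, this
illustration (my substitution, not from the thread) uses SQLite via Python's
sqlite3 module, with a plain DELETE and an ordinary transaction standing in:

```python
import sqlite3

# In-memory SQLite stand-in for the example table from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER, temp_id INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [(10, 1), (10, 1), (20, 4), (20, 4), (30, 5)])

# The rebuild recipe inside one transaction; DELETE stands in for
# TRUNCATE, and the transaction for the access-exclusive lock.
with conn:  # BEGIN ... COMMIT
    conn.execute("CREATE TEMP TABLE x AS SELECT DISTINCT id, temp_id FROM t1")
    conn.execute("DELETE FROM t1")                 # stand-in for TRUNCATE t1
    conn.execute("INSERT INTO t1 SELECT * FROM x")
    # The unique index keeps the table duplicate-free from here on.
    conn.execute("CREATE UNIQUE INDEX ui1 ON t1 (id, temp_id)")

print(conn.execute("SELECT id, temp_id FROM t1 ORDER BY id").fetchall())
# → [(10, 1), (20, 4), (30, 5)]
```

After this, inserting another (10, 1) row fails with a uniqueness violation,
which is the "won't get any more duplicate rows" part of the recipe.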
Very true; an alternative way to achieve that is to
VACUUM FULL t
after deleting the duplicate rows.
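For reference, the ctid-based DELETE quoted earlier can also be sketched with
a runnable stand-in. Since ctid is PostgreSQL-specific, this illustration (my
assumption, not from the thread) uses SQLite's implicit rowid, via Python's
sqlite3 module, to play the same role of a physical row identifier:

```python
import sqlite3

# In-memory SQLite stand-in; rowid plays the role of PostgreSQL's ctid.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, temp_id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(10, 1), (10, 1), (10, 1), (20, 4), (20, 4), (30, 5), (30, 5)])

# Delete every duplicate except the copy with the smallest rowid,
# mirroring DELETE ... USING ... AND t1.ctid > t2.ctid.
conn.execute("""
    DELETE FROM t
    WHERE rowid > (SELECT min(rowid) FROM t AS t2
                   WHERE t2.id = t.id AND t2.temp_id = t.temp_id)
""")

print(conn.execute("SELECT id, temp_id FROM t ORDER BY id").fetchall())
# → [(10, 1), (20, 4), (30, 5)]
```

In PostgreSQL the deleted copies linger as dead tuples, which is exactly what
the VACUUM FULL above reclaims.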
As for the UNIQUE index, that's of course the right thing to do, but
I wasn't sure if Yi Zhao wanted to change the database "design".
At any rate, I had thought that a unique constraint was preferable to
a unique index because - while doing the same thing - the former will
also show up in pg_catalog.pg_constraint.
Yours,
Laurenz Albe