Dunno about quickly, but I usually do something like this (before slapping myself in the face for getting into that
state):
CREATE TABLE tn_backup AS SELECT DISTINCT * FROM tn;
TRUNCATE TABLE tn;
INSERT INTO tn SELECT * FROM tn_backup;
(Where "tn" is the table name)
May not be the best way, but it keeps the indexes and the rest on the original table if you don't want to set them all up again.
Me, lazy?
That said, if you've got foreign keys pointing at it, the TRUNCATE ain't going to work.
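If TRUNCATE is off the table, another sketch is to delete the duplicates in place using PostgreSQL's ctid system column, keeping one row per group. The column names here are just taken from the example in the quoted message; substitute the real ones:

```sql
-- Keep the row at the lowest physical position (ctid) in each
-- duplicate group and delete the rest. ctid is a PostgreSQL
-- system column identifying a row's physical location.
DELETE FROM tn a
USING tn b
WHERE a.ctid > b.ctid
  AND a.colname1 = b.colname1
  AND a.colname2 = b.colname2;
```

On a very large table this can be slow without an index on the compared columns, but it avoids rebuilding anything.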
Or, if you have your data exported as tab- or comma-separated values, then use sort | uniq on it and
shove it back in...
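The export route above can be sketched like this (dump.tsv is a hypothetical filename for a tab-separated export):

```shell
# Sorting groups identical rows together so uniq can drop the repeats.
sort dump.tsv | uniq > dump_dedup.tsv

# Equivalent one-step form:
sort -u dump.tsv > dump_dedup.tsv
```

The deduplicated file can then be loaded back into the truncated table, e.g. with psql's \copy.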
-----Original Message-----
From: Junkone [mailto:junkone1@gmail.com]
Sent: 13 September 2006 23:47
To: pgsql-general@postgresql.org
Subject: [GENERAL] remove duplicate rows
Hi,
I have a bad situation in that I did not have a primary key, so I have a
table like this:
colname1 colname2
1 apple
1 apple
2 orange
2 orange
It is a very large table. How do I remove the duplicates quickly and
without much change?
Regards
Seede