Thread: How to find double entries
Hi,

how can I find double entries in varchar columns where the content is not 100% identical because of a spelling error or because the person considered it "looked nicer" that way?

I'd like to identify and then merge records of e.g. 'google', 'gogle', 'guugle'.

Then I want to match abbreviations like 'A-Company Ltd.', 'a company ltd.', 'A-Company Limited'.

Is there a way to do this? It would be OK just to list candidates to be manually checked afterwards.

Regards
Andreas
Andreas <maps.on@gmx.net> writes:

> I'd like to identify and then merge records of e.g. 'google', 'gogle', 'guugle'
> Then I want to match abbreviations like 'A-Company Ltd.', 'a company ltd.', 'A-Company Limited'
> Is there a way to do this?
> It would be OK just to list candidates to be manually checked afterwards.

There are some functions in contrib/fuzzystrmatch that seem like they'd help you find candidate duplicates. contrib/pg_trgm and text search might also offer promising tools. What's really a duplicate sounds like a judgment call here, so you probably shouldn't even think of automating it completely.

regards, tom lane
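[Editor's sketch of the approach Tom suggests. The table and column names (companies, name) are hypothetical, and CREATE EXTENSION is the modern installation syntax; in the 8.x era of this thread the contrib modules were loaded by running their SQL scripts instead.]

```sql
-- Requires PostgreSQL with the fuzzystrmatch and pg_trgm contrib modules.
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- List candidate duplicate pairs for manual review: names that sound
-- alike (soundex) or share many trigrams (pg_trgm similarity).
SELECT a.name, b.name, similarity(a.name, b.name) AS sim
FROM   companies a
JOIN   companies b ON a.name < b.name      -- report each pair only once
WHERE  soundex(a.name) = soundex(b.name)
   OR  similarity(a.name, b.name) > 0.4    -- threshold needs tuning per data set
ORDER BY sim DESC;
```

The 0.4 similarity threshold is a guess; in practice you would tune it against your data and eyeball the candidate list, as Tom suggests, rather than merge automatically.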
Andreas wrote:

> Hi,
>
> how can I find double entries in varchar columns where the content is not 100% identical because of a spelling error or the person considered it "looked nicer" that way?

When doing some near-duplicate elimination as part of converting a legacy data set to PostgreSQL I found the `fuzzystrmatch' contrib module immensely helpful.

http://www.postgresql.org/docs/current/static/fuzzystrmatch.html

-- Craig Ringer
Hi,

In a recent Linux Magazine article (http://www.linux-mag.com/id/5679) there was a mention of full-text search integration, which I know nothing about, but it sounded interesting to me. You might want to check it out.

Regards,

Tena Sakai
tsakai@gallo.ucsf.edu

-----Original Message-----
From: pgsql-sql-owner@postgresql.org on behalf of Andreas
Sent: Tue 4/15/2008 8:15 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] How to find double entries
On Wed, 16 Apr 2008, Andreas <maps.on@gmx.net> writes:

> how can I find double entries in varchar columns where the content is not 100% identical because of a spelling error or the person considered it "looked nicer" that way?
>
> I'd like to identify and then merge records of e.g. 'google', 'gogle', 'guugle'
>
> Then I want to match abbreviations like 'A-Company Ltd.', 'a company ltd.', 'A-Company Limited'
>
> Is there a way to do this?
> It would be OK just to list candidates to be manually checked afterwards.

You can try something similar to the example below. (The levenshtein(text, text) function is supplied by the fuzzystrmatch module.)

SELECT T1.col, T2.col
FROM tbl AS T1
INNER JOIN tbl AS T2
   ON T1.col <> T2.col
  AND levenshtein(T1.col, T2.col) < (length(T1.col) * 0.5);

Regards.
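[Editor's note: a variation on the query above, under the same assumptions (a hypothetical table tbl(col) and the fuzzystrmatch module). Joining on `T1.col <> T2.col` reports every candidate pair twice, once in each order; joining on `<` reports each pair once, and taking the longer of the two lengths makes the relative threshold symmetric.]

```sql
-- Each candidate pair appears once; closest matches listed first.
SELECT T1.col, T2.col,
       levenshtein(T1.col, T2.col) AS distance
FROM tbl AS T1
INNER JOIN tbl AS T2
   ON T1.col < T2.col
WHERE levenshtein(T1.col, T2.col) <= greatest(length(T1.col), length(T2.col)) * 0.5
ORDER BY distance;
```

Note that this is an O(n^2) self-join with no index support, so it is only practical for modest table sizes; pg_trgm's similarity operators can use trigram indexes if the table is large.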
On Apr 15, 2008, at 11:23 PM, Tom Lane wrote:

> What's really a duplicate sounds like a judgment call here, so you probably shouldn't even think of automating it completely.

I did a consulting gig about 10 years ago for a company that made software to normalize street addresses and names. Literally dozens of people worked there, and that was their primary software product. It is definitely not a trivial task, as the rules can be extremely complex.
Vivek Khera wrote:

> On Apr 15, 2008, at 11:23 PM, Tom Lane wrote:
>> What's really a duplicate sounds like a judgment call here, so you probably shouldn't even think of automating it completely.
>
> I did a consulting gig about 10 years ago for a company that made software to normalize street addresses and names. Literally dozens of people worked there, and that was their primary software product. It is definitely not a trivial task, as the rules can be extremely complex.

From what little I've personally seen of others' address handling, some (many/most?) people who blindly advocate full normalisation of addresses either:

(a) only care about a rather restricted set of address types ("ordinary residential addresses in <my country>", though that can be bad enough); or

(b) don't know how horrible addressing is ... yet ... and are going to find out soon, when their highly normalized addressing schema proves incapable of representing some address they've just been presented with,

with most probably falling into the second category. Overly strict addressing, without the associated fairly extreme development effort to get it even vaguely right, seems to lead to users working around the broken addressing schema by entering bogus data.

Personally I'm content to provide lots of space for user-formatted addresses, only breaking out separate fields for the post code (Australian only), the city/suburb, the state, and the country, all stored as strings. The only DB-level validation is a rule preventing the entry of invalid and undefined postcodes for Australian addresses, and preventing the entry of invalid Australian states. The app is used almost entirely with Australian addresses, and there's a definitive, up-to-date list of Australian post codes available from the postal service, so it's worth a little more checking to protect against basic typos and misunderstandings.
The app provides some more help at the UI level for users, such as automatically filling in the state and suburb if an Australian post code is entered. It'll warn you if you enter an unknown Australian suburb/city for an entry in Australia. For everything else I leave it to the user and to possible later validation and reporting. I've had good results with this policy when working with other apps that need to handle addressing information, and I've had some truly horrible experiences with apps that try to be too strict in their address checking. -- Craig Ringer
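[Editor's sketch of the kind of minimal DB-level validation Craig describes: free-form address body, a few broken-out string fields, and checks only where a definitive reference list exists. All table and column names here are hypothetical; the real schema is not shown in the thread.]

```sql
-- Reference table loaded from the postal service's definitive postcode list.
CREATE TABLE au_postcode (
    postcode char(4) NOT NULL,
    suburb   text    NOT NULL,
    state    text    NOT NULL CHECK (state IN
        ('ACT','NSW','NT','QLD','SA','TAS','VIC','WA')),
    PRIMARY KEY (postcode, suburb)
);

CREATE TABLE address (
    id       serial PRIMARY KEY,
    body     text,               -- free-form, user-formatted address lines
    suburb   text,
    state    text,
    postcode char(4),
    country  text NOT NULL DEFAULT 'Australia',
    -- Only constrain the fields we can actually verify, and only for
    -- Australian addresses; everything else is left to the user.
    CONSTRAINT au_postcode_shape CHECK (
        country <> 'Australia' OR postcode IS NULL OR postcode ~ '^[0-9]{4}$'
    )
);
```

The existence check against au_postcode (and the state/suburb auto-fill in the UI) would be done with a trigger or in the application, since a CHECK constraint cannot reference another table.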
Andreas wrote:

> Hi,
>
> how can I find double entries in varchar columns where the content is not 100% identical because of a spelling error or the person considered it "looked nicer" that way?
>
> I'd like to identify and then merge records of e.g. 'google', 'gogle', 'guugle'. Then I want to match abbreviations like 'A-Company Ltd.', 'a company ltd.', 'A-Company Limited'
>
> Is there a way to do this? It would be OK just to list candidates to be manually checked afterwards.

This is really tough, whether you use PostgreSQL or not. I once worked for a large regulated monopoly that had to look up stuff in telephone books a lot. Several of us got a magnetic tape with all the business, professional, and government listings for a county on Long Island on it; we thought residential would be too easy, and we did not have the disk space for it (in those days, hard drives cost $40,000 and held 40 megabytes).

One of the things we did was leading-substring partial matching. I.e., we could look for "J Smith" and find "Smith, John Robert"; we could find him with "Rob Smit" as well. This helped because people did not put their names in the right order: sometimes they said "Smith, John" and other times they said "John Smith" or "J Smith" and meant the same guy. Sometimes they said "White St" or "White Street" when they meant "White Road". And so it went. Sometimes they spelled her name "Jeannine" when she spelled it "Genine". So the question always ended up being what they really meant.

To make matters worse, someone had run a program over the data to spell out abbreviations, but that generated "42 Saint" instead of "42 Street" and problems like that. Also, if a value was too long for a field, the data-entry clerks just kept on typing into the next field, so a lot of entries had, as an address, "If no entry call".

I stuck in a phonetic matcher (similar to Soundex coding) so that a query for "Ristorante Italiano" would find "Mom's Pizza Italian Restaurant".
It would also find "Genine" when you were looking for "Jeannine".

People often got the towns wrong. Around here, there is a town on the map, but the telephone company had it in another town, and the tax collector had it in yet another town. So towns were weighted lower than names.

For government listings, there was a separate record for each line in the telephone book, so you would get entries like this, each line a separate record:

U S Government
  Federal Aviation Administration
    Kennedy Airport
      Pilot Information
        Arrivals
        Departures

We had to make it find "Pilot Arrivals", so indexing was not trivial until you figured out how to do it. But when all was said and done, we put a program on the output that displayed answers in terms of decreasing goodness of match and stuck the users with deciding what they wanted. A big trick was to do all this without doing a sequential search of the database.

--
Jean-David Beyer          Registered Linux User 85642.
PGP-Key: 9A2FC99A         Registered Machine 241939.
Shrewsbury, New Jersey    http://counter.li.org
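[Editor's note: the two techniques described above map fairly directly onto PostgreSQL features. A hedged sketch, assuming a hypothetical table listing(name varchar) and the fuzzystrmatch module; whether two particular spellings collide under a phonetic code depends on the algorithm chosen.]

```sql
-- Leading-substring partial matching: each word the user typed must
-- appear as a word prefix somewhere in the listing, so "Rob Smit"
-- matches "Smith, John Robert". This is a crude ILIKE approximation;
-- a word-level index would be needed to avoid sequential scans.
SELECT name FROM listing
WHERE  name ILIKE '%Rob%' AND name ILIKE '%Smit%';

-- Phonetic matching in the spirit of the Soundex-like matcher above:
-- dmetaphone (from fuzzystrmatch) groups many alternate spellings
-- under the same code.
SELECT name FROM listing
WHERE  dmetaphone(name) = dmetaphone('Jeannine');
```

Comparing a phonetic code column (precomputed and indexed) rather than calling the function per row is what makes this feasible without the sequential search the author mentions avoiding.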