FUZZY MATCHING ALGORITHMS EXPLAINED
What is Fuzzy Matching?
Fuzzy matching is a technique for finding strings in one dataset that approximately, rather than exactly, match strings in another. The discipline is typically subdivided into two problems:
Finding approximate substring matches inside any given text entry.
Finding dictionary text entries that approximately match a specific pattern.
Fuzzy matching is known by several names, including fuzzy string matching and approximate string matching. Most fuzzy matching algorithms return similarity scores as percentages to help users gauge how similar the compared text entries are, with a typical scale ranging from 0% for no similarity to 100% for an exact match.
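Many libraries expose such a score directly. As a minimal sketch (using Python's standard-library difflib as one possible scorer; the function name here is illustrative), a 0–100 percentage can be computed like this:

```python
from difflib import SequenceMatcher

def similarity_percent(a: str, b: str) -> float:
    """Return a 0-100 similarity score between two strings.

    SequenceMatcher.ratio() returns a value in [0, 1], which we
    scale to a percentage as described above.
    """
    return SequenceMatcher(None, a, b).ratio() * 100

print(similarity_percent("apple", "apple"))   # exact match: 100.0
print(similarity_percent("colour", "color"))  # near match: a high score
```

Identical strings score 100%, while near matches such as regional spelling variants score high but below 100%.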
Why Use Fuzzy Matching Software?
Real-world data is rarely stored in standardised formats because it is collected and processed in different ways.
This usually leads to differences in spelling, formatting and other common data entry inconsistencies. Despite these text-based differences, the process of data cleaning can be vastly improved by making use of fuzzy matching software.
A well-designed fuzzy matching tool removes the expensive need to write code or design algorithms from scratch, allowing business users and technical teams to focus on data processing challenges without this extra burden.
What Can Fuzzy Matching Software Do?
Fuzzy matching algorithms have been successfully applied in areas like spell checking, spam filtering and record linkage. Here is a brief look at these and other applications of fuzzy matching:
Record linkage: Seamlessly link related records across multiple data sources, creating a unified identity.
Data deduplication: Merge duplicate records within extensive datasets with ease.
Spelling variation analysis: Detect and correct spelling errors, typos, or variations in customer data for precise search and analysis.
Data standardization: Link records with abbreviations and acronyms, for instance, matching ‘Limited’ with ‘Ltd’ for a uniform format.
Data integration: Consolidate data from diverse sources into a single on-premises platform for straightforward data sanitation.
Name variation matching: Manage variations in names, titles, or prefixes to ensure accurate customer profiling and personalised communication.
Minimising the Impact of False Positives and False Negatives
Set a fuzzy match threshold: Establish a fuzzy match threshold for your particular dataset, a level where anything below will not be considered a match. Values that are too low will increase the likelihood of false positives, while values that are too high increase the likelihood of false negatives.
Quality over quantity: Make sure your main dataset is clean, comprehensive and current. Compromised datasets will always lead to corrupted match results.
Refine your lookup criteria: Do not just rely on one data point for matching. Consider including other factors like addresses and social security numbers for a more robust fuzzy matching operation.
Expert Review: Have a domain expert review the results of the match operation. An expert, with their in-depth knowledge of your data, can be instrumental in developing and fine-tuning the data-matching algorithm, as well as reviewing the results. For instance, if you are matching a school database, consulting someone who understands why certain information might be missing or unrecorded could be beneficial.
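The threshold idea above can be sketched in a few lines. This is a minimal illustration, not a production matcher: it uses Python's standard-library difflib as a stand-in scorer, and the function and dataset names are hypothetical.

```python
from difflib import SequenceMatcher

def best_matches(query: str, candidates: list[str],
                 threshold: float = 0.8) -> list[tuple[str, float]]:
    """Score every candidate against the query and keep only those at
    or above the threshold, best match first.

    Anything scoring below the threshold is not considered a match,
    as described above.
    """
    scored = [(c, SequenceMatcher(None, query.lower(), c.lower()).ratio())
              for c in candidates]
    kept = [pair for pair in scored if pair[1] >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

names = ["Jon Smith", "John Smith", "Jane Smyth", "Bob Jones"]
print(best_matches("Jhon Smith", names, threshold=0.8))
```

Lowering the threshold admits more borderline candidates (more false positives); raising it discards more genuine matches (more false negatives).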
Fuzzy Matching in Action: A Real-World Example
Record linkage techniques can be used to detect fraud, resource wastage or abuse. In this example, two databases were merged and compared for inconsistencies, leading to a discovery that helped the U.S. government put a stop to fraudulent behaviour by some government employees:
In the 18 months leading up to the summer of 2005, a database of 40,000 pilots licensed by the U.S. Federal Aviation Administration and residing in Northern California was matched against a database of individuals receiving disability payments from the Social Security Administration. It was discovered that the names of some pilots appeared in both databases.
In a report by the Associated Press, a prosecutor from the U.S. Attorney’s Office in Fresno, CA stated the following:
There was probably criminal wrongdoing. The pilots were either lying to the FAA or wrongfully receiving benefits. The pilots claimed to be medically fit to fly airplanes. However, they may have been flying with debilitating illnesses that should have kept them grounded, ranging from schizophrenia and bipolar disorder to drug and alcohol addiction and heart conditions.
In the end, at least 40 pilots were charged with "Making false statements to a government agency" and "Making and delivering a false official writing". The FAA also suspended the licenses of 14 pilots, while others were put on notice pending further investigations.
Popular Fuzzy Matching Algorithms
Peregrine: This is our own fuzzy matching algorithm, developed by Andrew Apell. It calculates the percentage similarity between the unique substrings contained in any two text entries.
Cosine Similarity: This is used to measure the similarity between any two strings by representing them as vectors in an n-dimensional vector space. The cosine of the angle between these two vectors is calculated, with a score ranging from 0 to 1.
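One common way to realise this is to represent each string as a frequency vector of character bigrams (the choice of bigrams here is an illustrative assumption; words or other n-grams work too) and take the cosine of the angle between the two vectors:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine of the angle between character-bigram frequency vectors.

    Returns a score in [0, 1]: 1 for identical bigram distributions,
    0 for strings sharing no bigrams at all.
    """
    va = Counter(a[i:i + 2] for i in range(len(a) - 1))
    vb = Counter(b[i:i + 2] for i in range(len(b) - 1))
    dot = sum(va[g] * vb[g] for g in va)          # dot product
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("night", "night"))  # identical: 1.0
print(cosine_similarity("night", "nacht"))  # partial overlap
```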
Levenshtein Distance: This calculates the minimum number of single-character edits that are required to transform one word into another. Valid edits are insertions, deletions or substitutions.
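The standard way to compute this is dynamic programming over a row of edit costs; a compact sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    needed to transform a into b."""
    prev = list(range(len(b) + 1))          # distance from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]                          # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

"kitten" becomes "sitting" in three edits: substitute k→s, substitute e→i, insert g.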
Damerau–Levenshtein Distance: This calculates the minimum number of edits that are required to transform one word into the other. Valid edits are insertions, deletions, substitutions or transpositions of adjacent characters.
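A common way to add the transposition edit is the optimal string alignment variant, a widely used simplification of the full Damerau–Levenshtein distance (it does not allow a substring to be edited again after a transposition):

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal string alignment distance: Levenshtein edits plus
    transpositions of adjacent characters."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = a[i - 1] != b[j - 1]
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if (i > 1 and j > 1
                    and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(osa_distance("ca", "ac"))  # 1 (one transposition; plain Levenshtein needs 2)
```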
Soundex: This algorithm indexes words by sound, as pronounced in English. The goal is for similar sounding words to be encoded to the same representation so that they can be compared, despite minor differences in spelling. Flookup uses a refined version of Soundex for matching text by sound similarity.
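Flookup's refined variant is not public, but the classic American Soundex can be sketched as follows: keep the first letter, encode the rest as digits by consonant group, collapse adjacent duplicates, drop vowels, and pad to four characters.

```python
def soundex(word: str) -> str:
    """Classic American Soundex: first letter plus three digits."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}

    def code(ch: str) -> str:
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and h, w, y carry no code

    word = word.lower()
    result = word[0].upper()
    prev = code(word[0])
    for ch in word[1:]:
        digit = code(ch)
        if digit and digit != prev:   # collapse adjacent duplicates
            result += digit
        if ch not in "hw":            # h and w do not separate equal codes
            prev = digit
    return (result + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # R163 R163
```

"Robert" and "Rupert" encode identically, so they match by sound despite the spelling difference.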
n-gram: This is a contiguous sequence of n items from any given text entry. It can be a sequence of syllables, letters, phonemes, words or base pairs, depending on the application.
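As a sketch using character n-grams, one common approach (an assumption here; Dice and other overlap coefficients are equally common) scores two strings by the Jaccard similarity of their n-gram sets:

```python
def ngrams(text: str, n: int = 2) -> list[str]:
    """All contiguous character sequences of length n in the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def ngram_similarity(a: str, b: str, n: int = 2) -> float:
    """Jaccard similarity over the sets of character n-grams:
    |intersection| / |union|, in [0, 1]."""
    sa, sb = set(ngrams(a, n)), set(ngrams(b, n))
    union = sa | sb
    return len(sa & sb) / len(union) if union else 1.0

print(ngrams("fuzzy"))                  # ['fu', 'uz', 'zz', 'zy']
print(ngram_similarity("night", "nacht"))
```

Strings that share many overlapping fragments score close to 1, while unrelated strings score close to 0.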