Title:
METHOD FOR IDENTIFYING UNIQUE ENTITIES IN DISPARATE DATA FILES
Document Type and Number:
WIPO Patent Application WO/2001/037097
Kind Code:
A1
Abstract:
This invention relates to a method of matching computer-based records (301) for identifying unique entities (303) both within and between disparate data files. This method of record-linkage has particular utility in the fields of epidemiology and health services research.

Inventors:
VICTOR TIMOTHY W (US)
Application Number:
PCT/US2000/031399
Publication Date:
May 25, 2001
Filing Date:
November 15, 2000
Assignee:
SMITHKLINE BEECHAM CORP (US)
VICTOR TIMOTHY W (US)
International Classes:
G06F17/30; G16H10/60; G16H30/20; (IPC1-7): G06F12/00; G06F7/36
Foreign References:
US4821184A (1989-04-11)
US5594889A (1997-01-14)
US5487164A (1996-01-23)
US5668897A (1997-09-16)
Attorney, Agent or Firm:
Kanagy, James M. (UW2220 709 Swedeland Road P.O. Box 1539 King of Prussia, PA, US)
Claims:
What is claimed is:
1. A computer-implemented system and method for creating a universal identifier for more than one record in one or more data files, the process comprising: standardizing one or more data elements in each record; estimating the agreement and disagreement weights employed in the probabilistic function; and assigning a randomly generated unique identifier to each record.
2. A computer-implemented system and method for concatenating records belonging to the same source within a database or between databases, the process comprising: (A) creating a universal identifier for each record in one or more data files, by: a) standardizing one or more data elements in each record; b) estimating the agreement and disagreement weights employed in the probabilistic function; and c) assigning a randomly generated unique identifier to each record; and (B) concatenating records having the same unique identifier.
3. A computer-implemented system and method for concatenating records belonging to the same source where some records have a unique identifier and new records are created, the process comprising: (A) creating a universal identifier for each new record in one or more data files, by: a) standardizing one or more data elements in each record; b) estimating the agreement and disagreement weights employed in the probabilistic function; and c) assigning a randomly generated unique identifier to each record; and (B) concatenating records newly assigned a unique identifier with existing records having the same unique identifier.
4. A method for assigning a unique identification number to a source or owner of data as described herein.
Description:
Method for Identifying Unique Entities in Disparate Data Files

Field of the Invention

This invention relates to a method of matching computer-based records for identifying unique entities both within and between disparate data files. This method of record-linkage has particular utility in the fields of epidemiology and health services research.

Background of the Invention

A custom universal identifier methodology was developed in response to the limitations of exact matching techniques. The methodology was designed to incorporate a combination of exact and probabilistic matching techniques. The term record linkage has been used to indicate the bringing together of two or more separately recorded pieces of information concerning a particular entity. Integrating patient information from various sources is essential for multivariate research. The various facts concerning an individual, if brought together, form an extensive history of that individual.

There are many purposes for linking records. Examples range from obtaining more data elements about an individual by merging data from different data sources, to creating a more comprehensive name and address list by merging the names and addresses from several data sources. In the first case, it is important to ensure that the matching is done accurately so that the matched data truly represent a multivariate observation from a single individual.

In the second, the merging is intended to ensure as complete a list as possible while eliminating duplication.

The idea of linking records in the interest of science has a long pedigree. Fisher (Box, 1979, p. 237) lectured at a Zurich public health congress in 1929, arguing the usefulness of public records, supplemented by (and presumably linked with) family data, in human genetics research. Earlier, Alexander Graham Bell exploited genealogical records and administrative records on marriages, census results and the like, apparently linking some sources, to sustain his familial studies of deafness (Bruce, 1973; Bell, 1906).

For many applications involving multiple databases, enough information is present to allow an accurate human judgement about whether a record from one source refers to the same case as a record from other sources. However, this is an extremely time-consuming, error-prone and unreliable method except for small data sets. Computer methods are necessary to perform this task if a record matching exercise is to be cost-effective.

Summary of the Invention

The present invention is a computer-implemented system and method for creating a universal identifier for more than one record in one or more data files, the process comprising: standardizing one or more data elements in each record; estimating the agreement and disagreement weights employed in the probabilistic function; and assigning a randomly generated unique identifier to each record.

In a second aspect, this invention relates to a computer-implemented system and method for concatenating records belonging to the same source within a database or between databases, the process comprising: (1) creating a universal identifier for each record in one or more data files, by: a) standardizing one or more data elements in each record; b) estimating the agreement and disagreement weights employed in the probabilistic function; and c) assigning a randomly generated unique identifier to each record; and (2) concatenating records having the same unique identifier.

In yet a third aspect, this invention relates to a computer-implemented system and method for concatenating records belonging to the same source where some records have a unique identifier and new records are created, the process comprising: (1) creating a universal identifier for each new record in one or more data files, by: a) standardizing one or more data elements in each record; b) estimating the agreement and disagreement weights employed in the probabilistic function; and c) assigning a randomly generated unique identifier to each record; and (2) concatenating records newly assigned a unique identifier with existing records having the same unique identifier.

Description of the Figures

Figure 1 is a block diagram of illustrative input record components and atomic components.

Figure 2 is a flowchart of the process by which weights are calculated, based on chance agreement, using an iterative bootstrap technique.

Figure 3 is a flowchart of the process for generating randomly assigned unique identifiers.

Description of the Invention

General Overview

This invention provides a means for generating a unique identifier for records that ultimately relate back to a single source. It is particularly useful where the characterizing data identifying that source expands or changes over time. Specific examples are financial data and patient data. In both instances, however, data can normally be stored in a centralized data file such as a central server only if it is adequately secured and anonymized. One way to effect this security interest is to use a trusted third-party environment. This invention has its greatest use in the trusted third-party environment.

A Trusted Third Party (TTP) service is a current way of anonymizing patient data.

The data is sent to a TTP, which replaces all patient identifiers with a new code. The TTP matches codes against the patients; it therefore knows all the codes and patients.

Working within the purview of a TTP, or elsewhere, this invention addresses the step of creating and assigning a unique identifier to a record, after which records are concatenated based on the unique identifier. The creation and assignment steps have three major components: i) data standardization, ii) weight estimation, and iii) the assignment of a unique identifier, in that order.

Definitions

For the purposes of this invention, the following definitions and abbreviations are used:

µ-Probability: The probability that any random element pair will match by chance, µ = n_match / n_A×B.

p-Probability: The reliability of the data element. If the Element Error Rate is > .99 then p = 1 - EER; else p = .99 - EER.

Agreement: A condition such that a given element pair matches exactly and both elements are known, A_en = B_en.

Agreement Weight: The weight assigned to an element pair when they agree during the record matching process.

Cartesian Product: The set of ordered pairs A×B = {(a, b) | a ∈ A and b ∈ B}.

Disagreement: A condition such that a given element pair does not exactly match and both elements are known, A_en ≠ B_en.

Disagreement Weight: The weight assigned to an element pair when they disagree during the record matching process.

Element Error Rate: The proportion of element pairs where at least one element is unknown, e.g., null, ε = n_null / n_A×B.

Frequency Table: Summary of the number of times, and the percentage of the total, that the different values of a variable occur.

Mean: Arithmetic average.

No Decision: A condition such that, for a given element pair, either one or both of the elements is unknown.

Random Number Assignment: Every row in the data set is assigned a random number such that v blocks of approximately 1500 rows are created, ρ = int[(v × P) + 1], where ρ = Random Number, v = Upper Bound and P = Random Function.

Threshold: The threshold utilized in probabilistic matching is a binit odds ratio with a range of -∞ < x < ∞.

Upper Bound: The number of strata such that the data set is divided into approximately equal blocks of 1500 rows, v = int(Number of Records in Data Set / 1500).

As regards the computer and machine language used in this process, just about any piece of hardware capable of executing a fairly large number of calculations in short order will fill the bill. Any current state-of-the-art PC or server could be used. As for the operating system, UNIX is preferred, but Windows 98, Windows NT or the like could be used. The source code can be written in any language, though Java is preferred.
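Since the specification names Java as preferred, the illustrations in this document use Java. The following is a minimal sketch of the Upper Bound and Random Number Assignment definitions above; the class and method names are hypothetical, not part of the specification.

    import java.util.Random;

    public class Blocking {
        // Upper Bound: v = int(numberOfRecords / 1500), the number of strata
        // needed so the data set divides into blocks of roughly 1500 rows.
        // The max(1, ...) guard is our addition for very small files.
        static int upperBound(int numberOfRecords) {
            return Math.max(1, numberOfRecords / 1500);
        }

        // Random Number Assignment: rho = int((v * P) + 1), where P is a
        // uniform random value in [0, 1), so each row lands in a block 1..v.
        static int randomBlock(int v, Random rng) {
            return (int) ((v * rng.nextDouble()) + 1);
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            int v = upperBound(45_000);  // e.g., 45,000 records -> 30 strata
            System.out.println("v = " + v);
            System.out.println("block = " + randomBlock(v, rng));
        }
    }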

Data Standardization

The first step of this process involves the standardization of data in an input file.

This standardization is required for increased precision and reliability. The input file can contain any number of variables, of which one or more are or may be unique to a particular data source such as an individual. Examples of useful variables are: member identifier, driver's license number, social security number, insurance company code number, name, gender, date of birth, street address, city, state, postal code and citizenship. In addition, some identifiers can be further distilled down into their basic, or atomic, components. Figure 1 illustrates the use of selected input record components and atomic components of some records that are amenable to such further distillation. Referring to Figure 1, Input Record 100 illustrates data which can be used as the basis for assigning a unique identifier, and how that data can be broken out into its atomic and subatomic components, exemplified by Street Address 110, Date of Birth 120 and Name 130.

During the standardization process, all character data is preferably transformed to a single case, for example uppercase. Likewise, common variants of first names may be standardized to a canonical form, e.g., {BOB, ROB, ROBBY} = ROBERT. Common names for cities and streets may be transformed to a postal standard, e.g., in the U.S. to the United States Postal Service standard. In the latter instance this can be done using industry-standard CASS-certified software.
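As a small illustration of this step, the sketch below uppercases character data and maps a few first-name variants to a canonical form; the nickname table is a toy stand-in for the dictionaries (and CASS-certified address software) a real implementation would use.

    import java.util.Map;

    public class Standardizer {
        // Illustrative nickname table; a production system would use a much
        // larger dictionary.
        private static final Map<String, String> CANONICAL_FIRST_NAMES = Map.of(
                "BOB", "ROBERT",
                "ROB", "ROBERT",
                "ROBBY", "ROBERT");

        // Transform character data to a single case, then map common name
        // variants to a canonical form, e.g., {BOB, ROB, ROBBY} -> ROBERT.
        static String standardizeFirstName(String raw) {
            String upper = raw.trim().toUpperCase();
            return CANONICAL_FIRST_NAMES.getOrDefault(upper, upper);
        }

        public static void main(String[] args) {
            System.out.println(standardizeFirstName("  Robby "));  // ROBERT
        }
    }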

Weight Estimation

A fundamental component of this algorithm is the process of estimating the agreement and disagreement weights necessary for the probabilistic function. Weights are calculated based on probabilities of chance agreement using an iterative bootstrap technique. Figure 2 provides a flowchart of the process.

The first step in the weight estimation process is to determine the number of strata required such that the data set can be divided into approximately equal blocks of 1500 rows (Fig. 2, 201-219), see equation 1:

v = int(Number of Records in Data Set / 1500) (1)

The source file is then scanned and the records are assigned a random number between 1 and v. A data matrix is created containing a Cartesian product of the records assigned a random number of 1. The resulting matrix is then scanned. Each element pair within each record pair is assessed and assigned a value in the following manner:

e_n = 1 if A_en = B_en (Agreement)
e_n = 0 if A_en = Null and/or B_en = Null (No Decision)
e_n = -1 if A_en ≠ B_en (Disagreement) (2)

where A_en is the nth element from record A. Once the matrix has been fully assessed, percentages for each e_n are tabulated and stored.

This process is repeated for 15 iterations.
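The sketch below illustrates the element-pair assessment of equation 2 and the tabulation of agreement and no-decision percentages over a toy Cartesian product; the record values are invented, and the +1/0/-1 scores follow the reconstruction of equation 2 given above.

    public class PairScorer {
        // Equation 2: score an element pair as agreement (+1), no decision (0)
        // when either value is null, or disagreement (-1).
        static int scoreElementPair(String a, String b) {
            if (a == null || b == null) return 0;   // No Decision
            return a.equals(b) ? 1 : -1;            // Agreement / Disagreement
        }

        public static void main(String[] args) {
            // Tally percentages of agreement and no-decision for one element
            // across a toy Cartesian product of record values.
            String[] recordsA = {"SMITH", "JONES", null};
            String[] recordsB = {"SMITH", "BROWN", "JONES"};
            int agree = 0, noDecision = 0, total = 0;
            for (String a : recordsA) {
                for (String b : recordsB) {
                    int s = scoreElementPair(a, b);
                    if (s == 1) agree++;
                    if (s == 0) noDecision++;
                    total++;
                }
            }
            System.out.printf("Percent Agreement = %.3f%n", (double) agree / total);
            System.out.printf("Percent No Decision = %.3f%n", (double) noDecision / total);
        }
    }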

Mean percentages of Agreements and No Decisions are calculated for each data element (Fig. 2, 221). The p probability, or the reliability, for each data element is then calculated, see equation 3:

let ε = Percent No Decision;
if ε > .99 then p = 1 - ε, else p = .99 - ε (3)

The µ probability, or the probability that element n for any given record pair will match by chance, is calculated (Fig. 2, 223), see equation 4.

µ = ε × Percent Agreement (4)

From the p and µ probabilities, the disagreement and agreement weights are calculated (Fig. 2, 225) employing equations 5 and 6, respectively:

Disagreement Weight = log2((1 - p) / (1 - µ)) (5)

Agreement Weight = log2(p / µ) (6)
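A minimal sketch of equations 3, 5 and 6, assuming the reconstructed formulas above; the sample error rate and chance-agreement rate in main are invented for the example.

    public class WeightEstimator {
        // Equation 3: p-probability (element reliability) from the element
        // error rate epsilon (the mean Percent No Decision for the element).
        static double pProbability(double epsilon) {
            return (epsilon > 0.99) ? 1.0 - epsilon : 0.99 - epsilon;
        }

        // Equation 6: Agreement Weight = log2(p / mu).
        static double agreementWeight(double p, double mu) {
            return Math.log(p / mu) / Math.log(2.0);
        }

        // Equation 5: Disagreement Weight = log2((1 - p) / (1 - mu)).
        static double disagreementWeight(double p, double mu) {
            return Math.log((1.0 - p) / (1.0 - mu)) / Math.log(2.0);
        }

        public static void main(String[] args) {
            double p = pProbability(0.05);  // e.g., a 5% no-decision rate
            double mu = 0.02;               // e.g., a 2% chance-agreement rate
            System.out.printf("agreement weight    = %.3f%n", agreementWeight(p, mu));
            System.out.printf("disagreement weight = %.3f%n", disagreementWeight(p, mu));
        }
    }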
Unique Identifier Assignment

The final stage of this process is the action of uniquely identifying entities within the input data set. Figure 3 provides an overview of this process.
Each record from the input file is evaluated against a reference file, using a combination of deterministic and probabilistic matching techniques, to determine if the entity represented by the data has been previously identified. If it is judged that the entity is already represented in the reference set, the input record is assigned the unique identifier (UID) from the reference record that it has matched against. If it is judged that the entity represented by the data is not yet in the reference set, a new UID is randomly generated and assigned. Random numbers are generated in whatever language the process is implemented in.

After the UID assignment occurs, the input record is evaluated, in its entirety, to determine if the record is a unique representation of the entity not already contained in the reference table. If it is a new record, then it is inserted into the reference table for future use.
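For brevity, the sketch below compresses this assignment loop into an exact-key lookup: a record key already present in the reference table reuses its UID, otherwise a new UID is randomly generated and the record is remembered. The key format and class names are hypothetical, and the full deterministic and probabilistic match step described next is elided.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    public class UidAssigner {
        // Hypothetical reference table: standardized record key -> UID.
        private final Map<String, String> referenceTable = new HashMap<>();

        // Reuse the UID of a previously identified entity; otherwise
        // randomly generate a new UID and insert the record for future use.
        String assignUid(String standardizedKey) {
            return referenceTable.computeIfAbsent(
                    standardizedKey, k -> UUID.randomUUID().toString());
        }

        public static void main(String[] args) {
            UidAssigner assigner = new UidAssigner();
            String uid1 = assigner.assignUid("ROBERT|SMITH|1970-01-01");
            String uid2 = assigner.assignUid("ROBERT|SMITH|1970-01-01");
            System.out.println(uid1.equals(uid2)); // true: same entity, same UID
        }
    }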

Deterministic Matching Technique

The deterministic matching technique employs simple Boolean logic. Two records are judged to match if certain criteria are met, such as the following:

First Name Matches Exactly
Last Name Matches Exactly
Date of Birth Matches Exactly
Social Security Number OR Member Identifier Matches Exactly

If two records satisfy the criteria for deterministic matching, no probabilistic processing occurs. However, if no deterministic match occurs, the input record is presented for a probabilistic match.
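A direct Java rendering of these Boolean criteria might look as follows; the Person record and its field names are assumptions for the example (Java 16+ for the record syntax).

    public class DeterministicMatcher {
        // Hypothetical record holder; field names are illustrative.
        record Person(String first, String last, String dob,
                      String ssn, String memberId) {}

        // Boolean criteria from the text: first name, last name and date of
        // birth match exactly, and either SSN or member identifier matches.
        static boolean deterministicMatch(Person a, Person b) {
            return a.first().equals(b.first())
                    && a.last().equals(b.last())
                    && a.dob().equals(b.dob())
                    && (a.ssn().equals(b.ssn())
                        || a.memberId().equals(b.memberId()));
        }

        public static void main(String[] args) {
            Person x = new Person("ROBERT", "SMITH", "1970-01-01", "123-45-6789", "M1");
            Person y = new Person("ROBERT", "SMITH", "1970-01-01", "123-45-6789", "M2");
            System.out.println(deterministicMatch(x, y)); // true
        }
    }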

Probabilistic Matching Technique

The first step in the probabilistic matching process is to build a set of candidate records from the reference table based on characteristics of specific elements of the input record. This process is referred to as blocking; the set of candidate records is referred to as the blocking table. Not all data sets use the same characteristics; the elements used in this process are determined through data analysis. However, it is suggested that blocking variables consist of those elements that are somewhat unique to an entity, e.g., social security number, or a combination of date of birth and last name.

Upon completion of the construction of the blocking table, each element for each candidate record is compared against its corresponding element from the input record. See equation 7 for the scoring mechanism:

w_n = Agreement Weight if A_en = B_en
w_n = 0 if A_en = Null and/or B_en = Null (No Decision)
w_n = Disagreement Weight if A_en ≠ B_en (7)

where A_en is the nth element from record A. A composite weight is then calculated for all candidate records, see equation 8:

W = Σ(i=1..N) w_i (8)

The candidate record with the highest composite weight is then evaluated against a predefined threshold. If the weight meets or exceeds the threshold, the candidate record is judged to match the input record. If the weight does not exceed the threshold, it is assumed that the input record represents an entity not yet included in the reference set.
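To tie equations 7 and 8 together, here is a sketch that scores each element pair, sums the composite weight, and compares it against a threshold; all weights and the threshold value are invented for illustration.

    public class ProbabilisticMatcher {
        // Equation 7: per-element weight, the agreement weight if the pair
        // agrees, 0 if either element is null (no decision), and the
        // disagreement weight otherwise.
        static double elementWeight(String a, String b,
                                    double wAgree, double wDisagree) {
            if (a == null || b == null) return 0.0;
            return a.equals(b) ? wAgree : wDisagree;
        }

        // Equation 8: composite weight W = sum of w_i over the N elements.
        static double compositeWeight(String[] input, String[] candidate,
                                      double[] wAgree, double[] wDisagree) {
            double w = 0.0;
            for (int i = 0; i < input.length; i++) {
                w += elementWeight(input[i], candidate[i], wAgree[i], wDisagree[i]);
            }
            return w;
        }

        public static void main(String[] args) {
            String[] input = {"ROBERT", "SMITH", "1970-01-01"};
            String[] candidate = {"ROBERT", "SMITH", null};   // one no-decision
            double[] wAgree = {4.2, 5.1, 6.0};                 // illustrative
            double[] wDisagree = {-3.0, -4.5, -5.0};           // illustrative
            double threshold = 7.0;                            // illustrative
            double w = compositeWeight(input, candidate, wAgree, wDisagree);
            System.out.println(w >= threshold ? "match" : "new entity");
        }
    }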