Title:
BIAS REDUCTION IN MACHINE LEARNING MODEL TRAINING AND INFERENCE
Document Type and Number:
WIPO Patent Application WO/2023/239506
Kind Code:
A1
Abstract:
One or more default protected attribute values may be determined for a prediction model trained based on training data including a plurality of training observations. Each of the plurality of training observations may include a respective plurality of training data values corresponding with a plurality of features. Each of the plurality of training observations may also include a respective target value. Each of the training observations may include a respective protected attribute value corresponding with a protected attribute feature. A request to determine a designated predicted target value for a designated inference observation may be received after determining the one or more default protected attribute values. The predicted target value may be selected from one or more target values determined by applying the prediction model to an inference observation and potentially one or more default protected attribute values.

Inventors:
LAM CHRISTOPHER (US)
Application Number:
PCT/US2023/021166
Publication Date:
December 14, 2023
Filing Date:
May 05, 2023
Assignee:
EPISTAMAI LLC (US)
International Classes:
G06F16/35; G06N7/00; G06N20/00; G06F21/53; G06F21/60; G06F21/62; G06N20/20; G06Q30/02
Foreign References:
US20210192280A12021-06-24
US20200311300A12020-10-01
US20200065710A12020-02-27
US20190102693A12019-04-04
US20150066593A12015-03-05
Attorney, Agent or Firm:
KUHN, Jeffrey M. (US)
Claims:
CLAIMS

1. A method comprising: determining one or more default protected attribute values for a prediction model trained based on training data including a plurality of training observations, each of the plurality of training observations including a respective plurality of training data values corresponding with a plurality of features, each of the plurality of training observations also including a respective target value, each of the plurality of training observations including a respective protected attribute value corresponding with a protected attribute feature; receiving via a communication interface a request to determine a designated predicted target value for a designated inference observation after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features and a designated protected attribute value corresponding with a protected attribute feature; determining a first predicted target value via a processor by applying the prediction model to the designated plurality of inference data values and the designated protected attribute value; determining a second one or more predicted target values via the processor by applying the prediction model to the designated plurality of inference data values and one or more designated default protected attribute values of the one or more default protected attribute values; selecting a designated predicted target value of the first predicted target value and the second one or more predicted target values by comparing the first predicted target value and the second one or more predicted target values; and storing the designated predicted target value on a storage device.

2. The method recited in claim 1, wherein the second one or more predicted target values include a designated first predicted target value corresponding to a first default protected attribute value and a designated second predicted target value corresponding to a second default protected attribute value.

3. The method recited in one of claim 1 or claim 2, wherein selecting the designated predicted target value comprises selecting a smallest value of the first predicted target value and the second one or more predicted target values.

4. The method recited in any of claims 1-3, wherein selecting the designated predicted target value comprises selecting a largest value of the first predicted target value and the second one or more predicted target values.

5. The method recited in any of claims 1-4, wherein the designated predicted target value corresponds to an outcome having an ordinal ranking, and wherein selecting the designated predicted target value comprises selecting a value of the first predicted target value and the second one or more predicted target values having a most positive ordinal ranking for the designated inference observation.

6. The method recited in any of claims 1-5, the method further comprising: determining a plurality of evaluation metric values indicating performance of the prediction model for each of a plurality of candidate default protected attribute values, wherein the one or more default protected attribute values are determined at least in part based on the plurality of evaluation metric values.

7. The method recited in any of claims 1-6, wherein determining the one or more default protected attribute values involves determining an overlap profile between the protected attribute feature and a designated feature of the plurality of features, the overlap profile indicating a respective degree of overlap among the plurality of training observations between first selected values corresponding to the protected attribute feature and second selected values corresponding to the designated feature, the method further comprising: determining based on the overlap profile that a designated one of the respective degrees of overlap indicates a positivity violation; identifying one or more value replacement rules for correcting the positivity violation by replacing a feature value or a protected attribute value; determining a replacement data value based on the one or more value replacement rules; and replacing an original feature value or a protected attribute value in the designated inference observation with the replacement data value.

8. The method recited in any of claims 1-7, wherein the prediction model is a regression model that includes a plurality of regression coefficients corresponding with the plurality of features, a designated one or more of the plurality of regression coefficients corresponding with the protected attribute feature, wherein determining a designated one of the second one or more predicted target values involves determining a constant term based on a first one of the designated one or more default protected attribute values and the designated one or more regression coefficients.

9. The method recited in any of claims 1-8, wherein the prediction model is a neural network that includes a plurality of neurons corresponding with the plurality of features, a designated neuron of the plurality of neurons corresponding with the protected attribute feature, wherein determining a designated one of the second one or more predicted target values involves determining a constant value for the designated neuron based on a first one of the designated one or more default protected attribute values.

10. The method recited in any of claims 1-9, wherein the prediction model is selected from the group consisting of: a tree-based model, a neural network model, and a gradient boosting model.

11. The method recited in any of claims 1-10, wherein each of the training observations corresponds to a respective individual, and wherein the protected attribute feature is selected from the group consisting of: race, ethnicity, sex, gender, national origin, religion, disability status, age, genetic information, marital status, and receipt of public assistance.

12. A system comprising: means for determining one or more default protected attribute values for a prediction model trained based on training data including a plurality of training observations, each of the plurality of training observations including a respective plurality of training data values corresponding with a plurality of features, each of the plurality of training observations also including a respective target value, each of the plurality of training observations including a respective protected attribute value corresponding with a protected attribute feature; means for receiving a request to determine a designated predicted target value for a designated inference observation after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features and a designated protected attribute value corresponding with a protected attribute feature; means for determining a first predicted target value by applying the prediction model to the designated plurality of inference data values and the designated protected attribute value; means for determining a second one or more predicted target values by applying the prediction model to the designated plurality of inference data values and one or more designated default protected attribute values of the one or more default protected attribute values; means for selecting a designated predicted target value of the first predicted target value and the second one or more predicted target values by comparing the first predicted target value and the second one or more predicted target values; and means for storing the designated predicted target value.

13. A system comprising: a processor configured to determine one or more default protected attribute values for a prediction model trained based on training data including a plurality of training observations, each of the plurality of training observations including a respective plurality of training data values corresponding with a plurality of features, each of the plurality of training observations also including a respective target value, each of the plurality of training observations including a respective protected attribute value corresponding with a protected attribute feature; a communication interface operable to receive a request to determine a designated predicted target value for a designated inference observation after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features and a designated protected attribute value corresponding with a protected attribute feature, wherein the processor is further configured to: determine a first predicted target value via a processor by applying the prediction model to the designated plurality of inference data values and the designated protected attribute value, determine a second one or more predicted target values via the processor by applying the prediction model to the designated plurality of inference data values and one or more designated default protected attribute values of the one or more default protected attribute values, and select a designated predicted target value of the first predicted target value and the second one or more predicted target values by comparing the first predicted target value and the second one or more predicted target values; and a storage device configured to store the designated predicted target value.

14. A method comprising: determining one or more default protected attribute values for a prediction model trained based on training data including a plurality of training observations, each of the plurality of training observations including a respective plurality of training data values corresponding with a plurality of features, each of the plurality of training observations also including a respective target value, each of the plurality of training observations including a respective protected attribute value corresponding with a protected attribute feature; receiving via a communication interface a request to determine a designated predicted target value for a designated inference observation after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features; determining the designated predicted target value via a processor by applying the prediction model to the designated inference observation and a designated default protected attribute value of the one or more default protected attribute values; and storing the predicted target value on a storage device.

15. The method recited in claim 14, the method further comprising: determining via the processor a plurality of predicted target values including the designated predicted target value by applying the prediction model to the designated default protected attribute value and a plurality of inference observations including the designated inference observation, each of the plurality of inference observations including a respective plurality of inference data values corresponding with the plurality of features.

16. The method recited in one of claim 14 or claim 15, the method further comprising: determining a plurality of evaluation metric values indicating performance of the prediction model for each of a plurality of candidate default protected attribute values, wherein the one or more default protected attribute values are determined at least in part based on the plurality of evaluation metrics.

17. The method recited in any of claims 14-16, wherein determining the one or more default protected attribute values involves determining an overlap profile between the protected attribute feature and a designated feature of the plurality of features, the overlap profile indicating a respective degree of overlap among the plurality of training observations between first selected values corresponding to the protected attribute feature and second selected values corresponding to the designated feature.

18. The method recited in claim 17, the method further comprising: determining based on the overlap profile that a designated one of the respective degrees of overlap indicates a positivity violation; and identifying one or more value replacement rules for correcting the positivity violation by replacing a feature value or a protected attribute value.

19. The method recited in claim 18, the method further comprising: determining a replacement data value based on the one or more value replacement rules; and replacing an original feature value or a protected attribute value in the inference observation with the replacement data value.

20. The method recited in any of claims 14-19, wherein the prediction model is a regression model that includes a plurality of regression coefficients corresponding with the plurality of features, a designated one or more of the plurality of regression coefficients corresponding with the protected attribute feature, wherein applying the prediction model to the inference observation involves determining a constant term based on the designated default protected attribute value and the designated one or more regression coefficients.

21. The method recited in any of claims 14-20, wherein the prediction model is a neural network that includes a plurality of neurons corresponding with the plurality of features, a designated one of the plurality of neurons corresponding with the protected attribute feature, wherein applying the prediction model to the inference observation involves determining a constant value for the designated neuron based on the designated default protected attribute value.

22. The method recited in any of claims 14-21, wherein the prediction model is selected from the group consisting of: a tree-based model, a neural network model, and a gradient boosting model.

23. The method recited in any of claims 14-22, wherein each of the training observations corresponds to a respective individual, and wherein the protected attribute is selected from the group consisting of: race, ethnicity, sex, gender, national origin, religion, disability status, age, genetic information, marital status, and receipt of public assistance.

24. A system comprising: means for determining one or more default protected attribute values for a prediction model trained based on training data including a plurality of training observations, each of the plurality of training observations including a respective plurality of training data values corresponding with a plurality of features, each of the plurality of training observations also including a respective target value, each of the plurality of training observations including a respective protected attribute value corresponding with a protected attribute feature; means for receiving a request to determine a designated predicted target value for a designated inference observation after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features; means for determining the designated predicted target value by applying the prediction model to the designated inference observation and a designated default protected attribute value of the one or more default protected attribute values; and means for storing the predicted target value.

25. A method comprising: determining via a processor a prediction model based on training data including a plurality of training observations, each of the plurality of training observations including a respective plurality of training data values corresponding with a first plurality of features, each of the plurality of training observations also including a respective target value, wherein the first plurality of features includes a protected attribute; receiving via a communication interface a request to determine a designated predicted target value for a designated inference observation after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features; determining via a processor a designated predicted target value by applying the prediction model to a designated inference observation including a plurality of inference data values corresponding with a second plurality of features, wherein the second plurality of features excludes the protected attribute; and storing the predicted target value on a storage device.

26. The method recited in claim 25, the method further comprising: determining via the processor a plurality of predicted target values including the designated predicted target value by applying the prediction model to a plurality of inference observations including the designated inference observation, each of the plurality of inference observations including a respective plurality of inference data values corresponding with the second plurality of features.

27. The method recited in one of claim 25 or claim 26, wherein the prediction model is a regression model that includes a plurality of regression coefficients, some or all of the plurality of regression coefficients corresponding with the first plurality of features, wherein the plurality of regression coefficients includes a designated one or more coefficients corresponding with the protected attribute, and wherein the designated one or more coefficients are omitted from the regression model when determining the predicted target value.

28. The method recited in any of claims 25-27, wherein each of the training observations corresponds to a respective individual, and wherein the protected attribute is selected from the group consisting of: race, ethnicity, sex, gender, sexual orientation, national origin, religion, disability status, age, genetic information, marital status, and receipt of public assistance.

29. A system comprising: means for determining a prediction model based on training data including a plurality of training observations, each of the plurality of training observations including a respective plurality of training data values corresponding with a first plurality of features, each of the plurality of training observations also including a respective target value, wherein the first plurality of features includes a protected attribute; means for receiving a request to determine a designated predicted target value for a designated inference observation after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features; means for determining a designated predicted target value by applying the prediction model to a designated inference observation including a plurality of inference data values corresponding with a second plurality of features, wherein the second plurality of features excludes the protected attribute; and means for storing the predicted target value.

Description:
BIAS REDUCTION IN MACHINE LEARNING MODEL TRAINING AND INFERENCE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Patent Application No. 18/309,190 by Christopher Lam, titled "BIAS REDUCTION IN MACHINE LEARNING MODEL TRAINING AND EXECUTION," filed April 28, 2023, which claims priority to provisional U.S. Patent Application No. 63/482,361 by Christopher Lam, titled "BIAS REDUCTION IN MACHINE LEARNING MODEL TRAINING AND EXECUTION," filed January 31, 2023, and to U.S. Patent Application No. 18/051,134 by Christopher Lam, titled "BIAS REDUCTION IN MACHINE LEARNING MODEL TRAINING AND EXECUTION," filed October 31, 2022, which claims priority to provisional U.S. Patent Application No. 63/365,905 by Christopher Lam, titled "BIAS REDUCTION IN MACHINE LEARNING MODEL TRAINING AND EXECUTION," filed June 6, 2022, all of which are hereby incorporated by reference in their entirety and for all purposes.

FIELD OF TECHNOLOGY

[0002] This patent document relates generally to machine learning and more specifically to bias reduction in machine learning.

BACKGROUND

[0003] Machine learning algorithms are applied to solve prediction problems in a variety of contexts. In a conventional machine learning approach, data is used to train a prediction model in a training phase. The trained prediction model may then be used to predict unobserved outcomes in an inference phase. A significant problem in machine learning is algorithmic bias, a topic that has recently received enormous attention, with high-profile examples of discrimination in criminal justice, facial recognition, employment screening, and advertising.

[0004] Algorithmic bias refers to situations in which an algorithm is trained in a way that biases the algorithm against individuals based on protected characteristics. For example, a class of people who have historically and/or currently faced discrimination in a society may be treated differently and hence obtain worse outcomes in areas such as credit, employment, and the like through structural discrimination alone, irrespective of personal choices and characteristics. A machine learning model trained to predict these outcomes based on data that includes information that could identify an individual as belonging to such a class may therefore inadvertently reinforce discrimination by effectively predicting negative outcomes based on membership in the class. Accordingly, improved techniques for training and executing accurate prediction models while reducing algorithmic bias are desired.

SUMMARY

[0005] Various embodiments of techniques and mechanisms described herein provide for methods, systems, and computer-readable media having instructions stored thereon for performing methods for determining and applying prediction models. In some embodiments, one or more default protected attribute values for a prediction model trained based on training data including a plurality of training observations may be determined. Each of the plurality of training observations may include a respective plurality of training data values corresponding with a plurality of features. Each of the plurality of training observations may also include a respective target value and a respective protected attribute value corresponding with a protected attribute feature.

[0006] In some embodiments, a request to determine a designated predicted target value for a designated inference observation may be received via a communication interface after determining the one or more default protected attribute values. The designated inference observation may include a designated plurality of inference data values corresponding with the plurality of features and a designated protected attribute value corresponding with a protected attribute feature.

[0007] In some embodiments, a first predicted target value may be determined by applying the prediction model to the designated plurality of inference data values and the designated protected attribute value. A second one or more predicted target values may be determined by applying the prediction model to the designated plurality of inference data values and one or more designated default protected attribute values of the one or more default protected attribute values.

[0008] In some embodiments, a designated predicted target value of the first predicted target value and the second one or more predicted target values may be determined by comparing the first predicted target value and the second one or more predicted target values. The designated predicted target value may be stored on a storage device.
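
The flow described in paragraphs [0007] and [0008] can be summarized in a short sketch. This is a minimal illustration and not the claimed implementation: it assumes a scikit-learn-style binary classifier exposing predict_proba, and the helper name and the min-based selection rule are hypothetical choices.

```python
import numpy as np

def predict_with_defaults(model, x_features, actual_attr, default_attrs, attr_index):
    """Return a selected prediction for one inference observation.

    model         -- any fitted estimator exposing predict_proba (an assumption)
    x_features    -- 1-D array of encoded feature values, with a slot reserved
                     for the protected attribute at position attr_index
    actual_attr   -- the observation's actual encoded protected attribute value
    default_attrs -- iterable of candidate default encoded attribute values
    """
    def score(attr_value):
        row = np.asarray(x_features, dtype=float).copy()
        row[attr_index] = attr_value
        return model.predict_proba(row.reshape(1, -1))[0, 1]  # P(adverse outcome)

    first = score(actual_attr)                   # prediction using the actual value
    seconds = [score(a) for a in default_attrs]  # predictions using default values
    # One possible selection rule: keep the most favorable (lowest adverse) score.
    return min([first] + seconds)
```

Which comparison counts as "favorable" is application-specific, as reflected in the selection variants discussed below.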

[0009] In some embodiments, the second one or more predicted target values may include a designated first predicted target value corresponding to a first default protected attribute value and/or a designated second predicted target value corresponding to a second default protected attribute value.

[0010] In some embodiments, selecting the designated predicted target value may involve selecting a smallest value of the first predicted target value and the second one or more predicted target values.

[0011] In some embodiments, selecting the designated predicted target value may involve selecting a largest value of the first predicted target value and the second one or more predicted target values.

[0012] In some embodiments, the designated predicted target value may correspond to an outcome having an ordinal ranking. Selecting the designated predicted target value may involve selecting a value of the first predicted target value and the second one or more predicted target values having a most positive ordinal ranking for the designated inference observation.
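
The three selection variants above (smallest value, largest value, most positive ordinal ranking) might be expressed as interchangeable policies. The sketch below is illustrative only; the assumption that a lower ordinal position is more favorable to the applicant is ours, not the document's.

```python
def select_predicted_value(candidates, policy="smallest", ranking=None):
    """candidates: the first predicted value plus the second one or more values."""
    if policy == "smallest":
        return min(candidates)
    if policy == "largest":
        return max(candidates)
    if policy == "ordinal":
        # ranking maps a predicted outcome to its ordinal position; here a lower
        # position is assumed to be more favorable to the applicant.
        return min(candidates, key=ranking)
    raise ValueError(f"unknown policy: {policy}")
```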

[0013] In some embodiments, a plurality of evaluation metric values indicating performance of the prediction model for each of a plurality of candidate default protected attribute values may be determined. The one or more default protected attribute values may be determined at least in part based on the plurality of evaluation metric values.
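
One way to realize the evaluation described in paragraph [0013] is to score each candidate default value on held-out data. The sketch below assumes a scikit-learn-style classifier and uses ROC AUC purely as an example metric; the helper name is hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def choose_default_value(model, X_valid, y_valid, attr_index, candidate_values):
    """Score each candidate default protected attribute value on validation data."""
    scores = {}
    for candidate in candidate_values:
        X_sub = np.array(X_valid, dtype=float, copy=True)
        X_sub[:, attr_index] = candidate     # substitute the candidate default value
        scores[candidate] = roc_auc_score(y_valid, model.predict_proba(X_sub)[:, 1])
    best = max(scores, key=scores.get)       # candidate with the best metric value
    return best, scores
```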

[0014] In some embodiments, determining the one or more default protected attribute values may involve determining an overlap profile between the protected attribute feature and a designated feature of the plurality of features. The overlap profile may indicate a respective degree of overlap among the plurality of training observations between first selected values corresponding to the protected attribute feature and second selected values corresponding to the designated feature. A determination may be made based on the overlap profile that a designated one of the respective degrees of overlap indicates a positivity violation. One or more value replacement rules may be identified for correcting the positivity violation by replacing a feature value or a protected attribute value. A replacement data value may be determined based on the one or more value replacement rules. An original feature value or a protected attribute value in the designated inference observation may be replaced with the replacement data value.
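
The overlap-profile and replacement-rule steps in paragraph [0014] might look like the following sketch, assuming the training data is held in a pandas DataFrame; the zero-cell test and the dictionary-based replacement rule are illustrative simplifications rather than the claimed procedure.

```python
import pandas as pd

def overlap_profile(train_df, protected_col, feature_col):
    """Count training observations for each (protected value, feature value) pair."""
    return pd.crosstab(train_df[protected_col], train_df[feature_col])

def positivity_violations(profile):
    # Empty cells indicate no overlap for that combination of values.
    return [(a, f) for a in profile.index for f in profile.columns
            if profile.loc[a, f] == 0]

def apply_replacement_rule(observation, feature_col, replacements):
    # replacements: dict mapping a violating feature value to its replacement value
    value = observation[feature_col]
    observation[feature_col] = replacements.get(value, value)
    return observation
```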

[0015] In some embodiments, the prediction model may be a regression model that includes a plurality of regression coefficients corresponding with the plurality of features. A designated one or more of the plurality of regression coefficients may correspond with the protected attribute feature. Determining a designated one of the second one or more predicted target values may involve determining a constant term based on a first one of the designated one or more default protected attribute values and the designated one or more regression coefficients.
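
For the regression case in paragraph [0015], substituting a default protected attribute value amounts to folding the corresponding coefficient times the default value into the model's constant term. A minimal sketch, assuming a single encoded protected attribute column:

```python
import numpy as np

def fold_default_into_intercept(coefs, intercept, attr_index, default_value):
    """coefs: 1-D array of regression coefficients aligned with the feature order."""
    constant_term = coefs[attr_index] * default_value   # contribution of the default
    reduced_coefs = np.delete(coefs, attr_index)         # drop the protected column
    return reduced_coefs, intercept + constant_term
```

For a one-hot encoded protected attribute, the same folding would presumably be applied to each of its coefficients.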

[0016] In some embodiments, the prediction model may be a neural network that includes a plurality of neurons corresponding with the plurality of features. A designated neuron of the plurality of neurons may correspond with the protected attribute feature. Determining a designated one of the second one or more predicted target values may involve determining a constant value for the designated neuron based on a first one of the designated one or more default protected attribute values.

[0017] In some embodiments, the prediction model may include one or more of a tree-based model, a neural network model, and a gradient boosting model.
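
For the neural-network variant in paragraph [0016], holding the protected attribute's input neuron at a constant default value during inference could look like the following PyTorch-flavored sketch; the network interface and function name are assumptions.

```python
import torch

def predict_with_clamped_attribute(net, x, attr_index, default_value):
    """x: a dense feature tensor of shape (batch, num_features)."""
    x = x.clone()
    x[:, attr_index] = default_value   # the protected-attribute neuron sees a constant
    with torch.no_grad():
        return net(x)
```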

[0018] In some embodiments, each of the training observations may correspond to a respective individual. The protected attribute feature may be one of race, ethnicity, sex, gender, national origin, religion, disability status, age, genetic information, marital status, and receipt of public assistance.

[0019] In some embodiments, one or more default protected attribute values may be determined for a prediction model trained based on training data including a plurality of training observations. Each of the plurality of training observations may include a respective plurality of training data values corresponding with a plurality of features, a respective target value, and a respective protected attribute value corresponding with a protected attribute feature. A request to determine a designated predicted target value for a designated inference observation may be received via a communication interface after determining the one or more default protected attribute values. The designated inference observation may include a designated plurality of inference data values corresponding with the plurality of features. The designated predicted target value may be determined via a processor by applying the prediction model to the designated inference observation and a designated default protected attribute value of the one or more default protected attribute values. The predicted target value may be stored on a storage device.

[0020] In some embodiments, a plurality of predicted target values including the designated predicted target value may be determined by applying the prediction model to the designated default protected attribute value and a plurality of inference observations including the designated inference observation. Each of the plurality of inference observations may include a respective plurality of inference data values corresponding with the plurality of features.

[0021] In some embodiments, a plurality of evaluation metric values indicating performance of the prediction model for each of a plurality of candidate default protected attribute values may be determined. The one or more default protected attribute values may be determined at least in part based on the plurality of evaluation metrics.

[0022] In some embodiments, determining the one or more default protected attribute values may involve determining an overlap profile between the protected attribute feature and a designated feature of the plurality of features. The overlap profile may indicate a respective degree of overlap among the plurality of training observations between first selected values corresponding to the protected attribute feature and second selected values corresponding to the designated feature.

[0023] In some embodiments, a determination may be made based on the overlap profile that a designated one of the respective degrees of overlap indicates a positivity violation. One or more value replacement rules may be identified for correcting the positivity violation by replacing a feature value or a protected attribute value.

[0024] In some embodiments, a replacement data value may be determined based on the one or more value replacement rules. An original feature value or a protected attribute value in the inference observation may be replaced with the replacement data value.

[0025] In some embodiments, a prediction model may be determined based on training data including a plurality of training observations. Each of the plurality of training observations may include a respective plurality of training data values corresponding with a first plurality of features and a respective target value. The first plurality of features may include a protected attribute. A request to determine a designated predicted target value for a designated inference observation may be received via a communication interface after determining the one or more default protected attribute values, the designated inference observation including a designated plurality of inference data values corresponding with the plurality of features. A designated predicted target value may be determined via a processor by applying the prediction model to a designated inference observation including a plurality of inference data values corresponding with a second plurality of features. The second plurality of features excludes the protected attribute. The predicted target value may be stored on a storage device.

[0026] In some embodiments, a plurality of predicted target values including the designated predicted target value may be determined by applying the prediction model to a plurality of inference observations including the designated inference observation. Each of the plurality of inference observations may include a respective plurality of inference data values corresponding with the second plurality of features.

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] The included drawings are for illustrative purposes and serve only to provide examples of possible structures and operations for the disclosed inventive systems, apparatus, methods and computer program products for bias reduction in machine learning model training and execution. These drawings in no way limit any changes in form and detail that may be made by one skilled in the art without departing from the spirit and scope of the disclosed implementations.

[0028] Figure 1 illustrates an example of a machine learning model overview method, performed in accordance with one or more embodiments.

[0029] Figure 2 illustrates an example of a model representation, configured in accordance with one or more embodiments.

[0030] Figure 3A, Figure 3B, Figure 3C, Figure 3D, Figure 3E, and Figure 3F are diagrams that illustrate various types of causal relationships, generated in accordance with one or more embodiments.

[0031] Figure 4 illustrates an example of a method for training a prediction model, performed in accordance with one or more embodiments.

[0032] Figure 5A, Figure 5B, Figure 5C, Figure 5D, and Figure 5E represent diagrams illustrating how prediction models may be used to predict a target variable, generated in accordance with one or more embodiments.

[0033] Figure 6 illustrates an example of a method for preprocessing supervised machine learning data, performed in accordance with one or more embodiments.

[0034] Figure 7 illustrates an example of a method for evaluating a supervised machine learning model, performed in accordance with one or more embodiments.

[0035] Figure 8 illustrates an example of a method for applying a prediction model, performed in accordance with one or more embodiments.

[0036] Figure 9 illustrates one example of a computing device, configured in accordance with one or more embodiments.

DETAILED DESCRIPTION

[0037] Various personal attributes may be used to predict future success in higher education, employment, or credit. Such prediction models may then be used to assist in making determinations such as whether to admit a person to an educational institution, whether to extend an offer of employment, or whether to extend an offer of credit. However, some personal attributes such as race or gender may be considered inappropriate, as their use in prediction models may result in impermissible or undesired bias against particular classes of people. Accordingly, many models omit such attributes.

[0038] Even when protected attributes are omitted from prediction models, non-protected attributes such as educational institution or personal residence postal zip code may strongly predict one or more protected attributes such as race or gender. For example, having knowledge that an applicant previously attended a historically Black college or university (HBCU) could inform the model that the applicant is most likely Black. Then, despite possessing characteristics otherwise comparable to creditworthy individuals of other races, the applicant having attended an HBCU may be discriminated against, to the extent that being Black correlates with credit default. For this reason, the use of some otherwise non-protected data in a prediction model can create a source of confounder bias along a protected attribute inside the model. That is, models using certain types of otherwise non-protected data may cause overt, intentional discrimination (i.e., disparate treatment), even though the protected attribute is not directly used in the model. Accordingly, even non-protected attributes are omitted from many prediction models. However, such omissions may weaken the predictive power of the model. For instance, an applicant's zip code may be omitted from a model because it can be highly predictive of race (due to redlining), despite the fact that zip code can also provide powerfully predictive information about an applicant's likelihood of default that is entirely unrelated to race (for example, living in a Rust Belt zip code versus a Sun Belt zip code).

[0039] Techniques and mechanisms described herein provide for the reduction or elimination of some types of bias in machine learning models. A supervised machine learning process may be modeled as a causal Bayesian network. A set of training data that includes observed outcome values, observed predictor values, and observed protected attribute values is used to train the prediction model in a training phase. One or more performance metrics may be determined and used to evaluate the trained model and improve the training process. Then, to predict one or more unobserved outcome values, the trained prediction model may be applied to inference data. The inference data may either omit the protected attribute entirely or may include default values substituted for actual values associated with the protected attribute.

[0040] In some implementations, techniques and mechanisms described herein may significantly improve the predictive quality of some prediction models by allowing for the use of new sources of data while reducing or eliminating unacceptable or impermissible bias that would result from the use of such new data sources in connection with conventional techniques.

[0041] According to various embodiments, techniques and mechanisms described herein may be used to address disparate treatment, a particular type of discrimination that is conceptually and often legally distinct from disparate impact. That is, a difference in decisions across groups (i.e., disparate impact) does not necessarily imply that a prediction model treats people in the groups differently (i.e., disparate treatment). For instance, a model in which women on average are predicted to be somewhat less creditworthy than men may not be deemed discriminatory when the result merely reflects differences between the two groups in characteristics such as income. However, a model in which a woman is deemed less creditworthy than a man despite the two observations being generally comparable in characteristics other than gender may be deemed unfairly discriminatory.

[0042] It should be noted that some concepts used herein may be referred to differently in different technical disciplines and/or geographic regions. For example, for clarity and consistency the terms "disparate treatment" and "disparate impact" are used throughout this application, although in some technical disciplines and/or geographic regions these concepts may be more commonly referred to as "direct discrimination" and "indirect discrimination." Nevertheless, the techniques and mechanisms described herein are generally applicable regardless of the particular nomenclature.

[0043] According to various embodiments, techniques and mechanisms described herein may also be used to address disparate impact, which refers to practices that adversely affect one group of people sharing a protected characteristic more than another, even though the practices do not explicitly take membership in a protected group into account. For example, a model that does not explicitly take race into account may nevertheless tend to overpredict the probability of default for members of an unprivileged group compared to members of a privileged group. In this case, the model may need to explicitly take race into account to provide a less discriminatory outcome for the unprivileged group.

[0044] Many conventional approaches to addressing bias in machine learning have relied on statistical or correlational approaches that measure disparities in outcomes across groups. However, such approaches have significant limitations because discrimination is based on causation, not correlation. Accordingly, such conventional approaches often result in models with relatively limited predictive power. In contrast, some techniques and mechanisms described herein are based on a causal, Bayesian analysis rather than a statistical or correlational approach, thus avoiding these problems.

[0045] Other conventional approaches to addressing bias in machine learning have attempted to adopt a causal approach. However, such approaches typically provide only a partial model of fairness and discrimination that does not entirely address the problem. Moreover, such approaches typically require complex modeling of causal relationships, making it unclear whether or not disparate treatment has been eliminated. In contrast, techniques and mechanisms described herein model the entire supervised machine learning process itself as a causal Bayesian network, thus providing a way to build a complete model of fairness and discrimination. Thus, techniques and mechanisms described herein provide for the reduction or elimination of confounder bias from a supervised machine learning model.

[0046] According to various embodiments, techniques and mechanisms described herein may be used to address discrimination across a variety of dimensions. Examples of such dimensions may include, but are not limited to: race, ethnicity, sex, gender, sexual orientation, transgender status, national origin, religion, disability status, age, skin color, genetic information, marital status, and receipt of public assistance.

[0047] Figure 1 illustrates an example of a supervised machine learning model overview method 100, performed in accordance with one or more embodiments. According to various embodiments, the method 100 may be performed on one or more computing devices to train a machine learning model and then use the trained model to predict one or more unobserved outcome values in a way that reduces bias based on one or more protected attributes.

[0048] Training data for training a supervised machine learning model is determined at 102. According to various embodiments, the training data includes a set of observations that each corresponds with a unit of analysis, such as an individual. Each observation includes a number of data values that correspond with features, including one or more protected attributes.

[0049] Feature overlap within the training data for the protected attribute or attributes is determined at 104. According to various embodiments, the feature overlap may identify a degree to which particular values of a protected attribute overlap with particular values or combinations of values of other features. For example, some colleges have historically restricted admission to either men or women. A college to which only women have been admitted would therefore overlap entirely with a value of "female" for the feature "gender", which in some models may be considered a protected attribute. The lack of overlap for "gender" with "male" for students who attend that college could also be called a positivity violation.
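
As a toy illustration of the overlap check at 104 (with made-up data and a hypothetical women-only "College B"), an empty cell in a simple cross-tabulation signals the positivity violation described above:

```python
import pandas as pd

train = pd.DataFrame({
    "gender":  ["female", "male", "female", "female", "male"],
    "college": ["B",      "A",    "B",      "A",      "A"],
})
print(pd.crosstab(train["gender"], train["college"]))
# college  A  B
# gender
# female   1  2
# male     2  0   <- zero overlap: positivity violation for (male, College B)
```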

[0050] According to various embodiments, a protected attribute may be any feature for which bias is to be removed. Values corresponding with the protected attribute may be included when training the supervised machine learning model. Values corresponding with the protected attribute may then be omitted or replaced with default values during the inference phase. It should be noted that an attribute considered as protected in one model may not be considered as protected in a different model.

[0051] A machine learning model is trained at 106 using the training data. According to various embodiments, the particular operations performed to train the model may depend in part on the type of model being trained. For instance, the model may be a neural network model, regression model, gradient boosting machine, tree-based model, ensemble model, or other type of model.

[0052] First inference data including one or more observed predictor values is determined at 108. According to various embodiments, the first inference data may include observations similar to those included in the training phase except that the target values have not yet been observed. In some implementations, the first inference data may include observed protected attribute values.

[0053] Second inference data including one or more observed predictor values is determined at 110. According to various embodiments, the second inference data may be substantially similar to the first inference data except that the second inference data may simply omit the protected attribute values entirely. Alternatively, the second inference data may include substituted protected attribute values.

[0054] A first one or more predicted target values and a second one or more predicted target values are determined at 112 by applying the machine learning model to the first and second inference data, respectively. Additional details regarding the determination of inference data and the application of the trained machine learning model to inference data are described with respect to the method 800 shown in Figure 8.

[0055] A third one or more predicted target values are determined at 114 based on the first and second one or more predicted target values. In some implementations, an observation may have only a single predicted target value. For instance, if the observation has an actual protected attribute value that is identical to the default protected attribute value substituted at operation 110 when constructing the second inference data, then the observation would be the same in the first and second inference data, leading to the same predicted target value for the observation in both the first and second one or more predicted target values. In such a situation, the third predicted target value for the observation would be the same as the first and second predicted target values for the observation.

[0056] In some embodiments, an observation may have a different predicted target value in the first inference data and the second inference data. For instance, if the actual protected attribute value for the observation in the first inference data was substituted at 110 for a different, default protected attribute value when determining the second inference data, then different predicted target values for the observation may be determined by applying the prediction model to the first and second inference data. In such a situation, the third predicted target value may be determined based on one or more predetermined rules. For example, the target value for an observation that is least discriminatory may be selected. For instance, in a loan application or college admission context, the target value that is most favorable to the applicant represented by the observation may be selected. As another example, the different target values for an observation may be combined in some fashion, for instance by averaging them.
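
Operation 114 might be sketched as follows for the two rules mentioned above, assuming lower predicted scores are more favorable to the applicant (e.g., a lower predicted default probability); the helper name is hypothetical.

```python
import numpy as np

def combine_predictions(first_preds, second_preds, rule="most_favorable"):
    """Combine the first and second predicted target values for each observation."""
    first_preds = np.asarray(first_preds, dtype=float)
    second_preds = np.asarray(second_preds, dtype=float)
    if rule == "most_favorable":
        return np.minimum(first_preds, second_preds)   # least discriminatory outcome
    if rule == "average":
        return (first_preds + second_preds) / 2.0      # simple combination
    raise ValueError(f"unknown rule: {rule}")
```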

[0057] Figure 2 illustrates an example of a model representation 200, configured in accordance with one or more embodiments. Figure 2 illustrates causal relationships represented by arrows between observable data values represented by dots. The model representation 200 illustrates an example of how a protected attribute such as race could be causally related to a target variable such as credit default or criminal recidivism.

[0058] In Figure 2, a person's Race can influence the Zip Code that the person lives in, which can then influence the person's access to Education, which can then influence their Employment opportunities, which can then influence their Income, which can then influence their probability of Default on a loan. In addition, a person's race may directly affect their education opportunities (e.g., via discrimination, or a decision to attend a particular university). Also, Employment opportunities 214 may affect a person's Zip Code 210, for instance if the person decides to move to pursue a new job. Finally, a direct connection may exist between Race 208 and Default 218. Such a connection does not indicate that a person's Race directly causes Default, since it could be completely mediated or explained away by other creditworthiness factors. Rather, it instead indicates only that a different causal pathway exists apart from the path from Zip Code 210 to Default 218 that is not represented in the model, for instance via family wealth. For this reason, in many contexts an attribute such as Race is treated as protected and excluded from the model, since its inclusion could lead the model to generate discriminatory predictions.

[0059] According to various embodiments, one challenge in machine learning is that models such as that shown in Figure 2 are difficult to analyze because they contain loops such as the loop from Zip Code 210 to Education 212 to Employment 214 and back to Zip Code 210. Another challenge may be that non-protected attributes such as Zip Code may act as a proxy for a protected attribute such as Race. For example, consider a model in which a protected attribute 202 is omitted from both training and inference but the other predictors 204 are retained. Such an approach may not address the bias problem since a person's Zip Code 210 may act as a proxy for Race 208, leading to similar bias.

[0060] Figure 3A, Figure 3B, Figure 3C, Figure 3D, Figure 3E, and Figure 3F are diagrams that illustrate various types of causal relationships, generated in accordance with one or more embodiments.

[0061] In Figure 3A, A 302 represents a protected attribute, such as race or gender, while Y 306 represents a target variable, such as default on a loan. Although the protected attribute 302 may have some correlation with the target variable Y 306, we assume that there is a mediating construct M 304, such as creditworthiness, that explains away this relationship in a causal sense. That is, conditional on the mediating construct M 304, the protected attribute A 302 and the target variable Y 306 are independent. Put another way, the mediating construct M 304 d-separates A 302 from Y 306.

[0062] In Figure 3B, protected attribute pure proxies A' 312 purely proxy for the protected attribute A 302 and are removed from the set of features used in both training and inference. For instance, hair length may proxy for gender but may have no relevance to predicting creditworthiness. In some implementations, protected attribute pure proxies A' 312 may be removed manually. Alternatively, or additionally, one or more protected attribute pure proxies A' 312 may be removed during the training phase. For instance, features that have low predictive power but that are highly correlated with values corresponding with the protected attribute A 302 may be automatically removed.

[0063] In addition, Figure 3B also introduces traditional features X* 310 and alternative features X'* 308. The traditional features X* 310 include any features that measure or proxy for the mediating construct M 304. For instance, income may be considered a proxy for creditworthiness. The alternative features X'* 308 include any features that may affect the mediating construct M 304 but that may also be seen as proxying for the protected attribute A 302 due to confounder bias. For instance, zip code may partially predict creditworthiness but may also proxy for a protected attribute such as race.

[0064] According to various embodiments, the traditional features X* 310 represent data traditionally used to predict the target variable Y 306. As shown in Figure 3C, the target variable Y 306 may be predicted directly by the traditional features X* 310, by the mediating construct M 304, and spuriously by the traditional features X* 310 through the mediating construct M 304. For instance, higher income may be directly indicative of an ability to pay off a loan. At the same time, higher income may be spuriously indicative of creditworthiness (such as having more financial assets), which may also in turn lead to higher income. However, in Figure 3C, the protected attribute A 302 only affects the target variable Y 306 via the mediating construct M 304. Therefore, X* 310 does not act as a proxy for A 302, even if X* 310 is correlated with A 302.

[0065] According to various embodiments, the alternative features X'* 308 represent data that is not traditionally used to predict the target variable Y 306. As shown in Figure 3D, the mediating construct M 304 may be predicted directly by the alternative features X'* 308, directly by the protected attribute A 302, and spuriously by the alternative features X'* 308 through the protected attribute A 302. Further, the protected attribute A 302 may also directly predict the alternative features X'* 308. For example, race may directly predict zip code and creditworthiness. Zip code may also be directly indicative of creditworthiness (such as having more financial assets). However, zip code may also spuriously predict creditworthiness through race.

[0066] As shown in Figure 3E, the mediating construct M 304 is assumed to completely explain any relationship between the protected attribute A 302 and the target variable Y 306, as well as between the alternative features X'* 308 and the target variable Y 306. The alternative features X'* 308 may directly predict the mediating construct M 304. However, the alternative features X'* 308 may also spuriously predict the mediating construct M 304 by proxying for the protected attribute A 302. For example, a particular zip code may be indicative of wealth and thereby predict creditworthiness. However, a particular zip code may also be a proxy for race. For this reason, models predicting creditworthiness traditionally exclude zip code as a feature to avoid inadvertently discriminating against people of a particular race (e.g., via redlining).

[0067] Figure 3F introduces traditional feature data X 314 and alternative feature data X' 316. As discussed with respect to protected attribute A 302 in Figure 3B, the traditional features X* 310 and the alternative features X'* 308 are imperfectly measured by traditional feature data X 314 and alternative feature data X' 316. For example, in a prediction model a person's income may be self-reported or proxied based on an estimate or range. Accordingly, in Figure 3F, although the traditional feature data X 314 and alternative feature data X' 316 do not directly cause the mediating construct M 304 and the target variable Y 306, they may nevertheless be used in training and inference data sets.

[0068] According to various embodiments, techniques and mechanisms described herein may be applied to textual data. For instance, text sources such as a loan application, resume, voice interview recording, or other such source of textual data may be analyzed to identify textual data. The textual data may then be cleaned by applying operations such as parsing, tokenization, removal of stop words, and the like.

[0069] In some embodiments, a bag-of-words or n-gram approach may be used to tokenize the textual data into individual words and phrases. Some or all of these words and phrases may then be used to predict an outcome such as job performance or loan default. However, some words and phrases, such as "women" and "God bless", act as pure proxies A' 312 and would be removed from the model. Other words like "executed" and "captured" (which are more frequently used by men on their resumes) may be highly correlated with protected attributes such as sex but still have a direct effect on the target variable Y 306. Accordingly, some words and phrases may be treated in the model in a manner similar to zip code or other such features that are correlated with protected classes.
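
The following is a minimal sketch of this kind of text preprocessing, assuming a hypothetical list of pure-proxy terms and a simple unigram/bigram tokenizer; it is illustrative only and not a definitive implementation of the techniques described above.

```python
# Minimal sketch of cleaning textual data and dropping pure-proxy terms.
# The stop word and proxy term lists are hypothetical illustrations.
import re

STOP_WORDS = {"the", "a", "an", "and", "of", "to"}
PURE_PROXY_TERMS = {"women", "god bless"}  # terms treated as pure proxies A' 312

def tokenize(text: str, max_n: int = 2) -> list[str]:
    """Lowercase, strip punctuation, drop stop words, and build unigrams/bigrams."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]
    ngrams = [" ".join(words[i:i + k])
              for k in range(1, max_n + 1)
              for i in range(len(words) - k + 1)]
    return [g for g in ngrams if g not in PURE_PROXY_TERMS]

print(tokenize("Executed the project and captured new accounts. God bless."))
```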

[0070] Figure 4 illustrates an example of a method 400 for training a supervised machine learning model, performed in accordance with one or more embodiments. According to various embodiments, the method 400 may be implemented on any suitable computing device.

[0071] A request to train a supervised machine learning model, which is also referred to herein as a prediction model, is received at 402. According to various embodiments, the request may be generated manually or automatically. The request may include some or all of the information identified in Figure 4, such as target value data and training data. Alternatively, or additionally, the request may identify or refer to such information.

[0072] A supervised machine learning model is identified for training at 404. According to various embodiments, any of a variety of supervised machine learning models may be employed. Examples of suitable machine learning models include, but are not limited to: decision trees, tree-based models, gradient boosting models, deep learning models, neural networks, and regression models.

[0073] Training data for the prediction model are identified at 406. The training data may include data identifying target values to predict, protected attribute data values, data values corresponding to traditional features used to predict the target values, and data values corresponding to alternative features used to predict the target values.

[0074] According to various embodiments, the training data may be divided into a plurality of observations. For example, an observation may correspond to an individual, an organization, or any other suitable unit of analysis. Each observation may in turn be associated with one or more protected attribute values, one or more values corresponding with traditional features, one or more values corresponding with alternative features, and one or more target values.
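
As one hypothetical illustration of this layout (the column names and values are assumptions, not part of the disclosure), each row below is a training observation containing a protected attribute value, a traditional feature value, an alternative feature value, and a target value.

```python
# Hypothetical training observations; each row is one observation.
import pandas as pd

training_data = pd.DataFrame({
    "race": ["White", "Black", "White", "Asian"],      # protected attribute A
    "income": [85000, 62000, 40000, 97000],            # traditional feature X
    "zip_code": ["47906", "60619", "47906", "98052"],  # alternative feature X'
    "defaulted": [0, 0, 1, 0],                         # observed target value Y
})
```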

[0075] In some embodiments, target values may correspond to any values a supervised machine learning model may be trained to predict. For example, outcome values may include, but are not limited to, criminal recidivism, professional performance, educational performance, and credit default. In general, target values may be observable for historical data, to aid in training the supervised machine learning model. However, target values are typically unobserved at the time of inference.

[0076] According to various embodiments, target values may include discrete or continuous variables. For example, a discrete target value may be whether a loan applicant will default on a loan, while a continuous target variable may be an interest rate for a loan or a purchase price for an asset such as a house.

[0077] In some embodiments, traditional and alternative feature data values may correspond to any values not identified as an outcome value or a protected attribute value that are observable before the corresponding outcome value. Feature data values may indicate or measure characteristics such as education level, education performance, professional experience, income, age, location of residence, and/or any other relevant information used for the purpose of training and applying a machine learning model. It should be noted that the status of a variable as a feature or a protected attribute might differ, for instance depending on the application. For example, age may be considered a feature in some applications but a protected attribute in other applications. Additional details for determining training data for the supervised machine learning model are discussed with respect to the method 600 shown in Figure 6.

[0078] The supervised machine learning model is trained at 408 using the training data. According to various embodiments, the particular operations employed to train the supervised machine learning model may depend in significant part on the prediction model employed. In some configurations, for example, the scikit-learn Python package may be used to train the supervised machine learning model.
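
A minimal sketch of such a training step is shown below, continuing the hypothetical training_data frame sketched earlier; the column names and the choice of a gradient boosting classifier are illustrative assumptions rather than the disclosed method.

```python
# Sketch of operation 408: train a gradient boosting model with scikit-learn.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

categorical = ["race", "zip_code"]   # protected attribute and alternative feature
numeric = ["income"]                 # traditional feature

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),   # numeric columns pass through unchanged
    ("clf", GradientBoostingClassifier()),
])

# The protected attribute is included during training (a backdoor adjustment);
# y holds the observed target values.
X = training_data[categorical + numeric]
y = training_data["defaulted"]
model.fit(X, y)
```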

[0079] One or more default protected attribute values are determined at 410. According to various embodiments, the default protected attribute values may be used during the test and inference phases to replace actual protected attribute values. Various approaches may be used to determine default protected attribute values. For example, protected attribute values may be dropped completely and treated as missing. As another example, protected attribute values may be replaced with a single value for all observations. For instance, in a data set in which each observation corresponds to a person, the race of each individual may be set to a default value (e.g., Black, White, etc.), while the gender of each individual may be set to a default value (e.g., female, male, etc.). In this way, the actual race and gender of an individual may be masked during the test and inference phases so that it may not generate disparate treatment bias.
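
One possible sketch of such masking is shown below; the specific default values and column names are illustrative assumptions.

```python
# Sketch of operation 410: replace actual protected attribute values with defaults
# during the test and inference phases. The default values are hypothetical.
DEFAULT_PROTECTED_VALUES = {"race": "White", "sex": "Male"}

def mask_protected(df, defaults=DEFAULT_PROTECTED_VALUES):
    """Return a copy of df with each protected attribute set to its default value."""
    masked = df.copy()
    for column, default in defaults.items():
        if column in masked.columns:
            masked[column] = default
    return masked
```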

[0080] One or more model performance parameters are determined at 412. According to various embodiments, the model performance parameters may include one or more parameters related to the predictive performance of the supervised machine learning model. For instance, the model performance parameters may include one or more of accuracy, lift, precision, recall, or area under a receiver operating characteristic curve (AUC).

[0081] In some implementations, the model performance parameters may include one or more parameters related to bias. For example, the model performance parameters may compare a predicted outcome rate for members of a protected attribute value class under one or more variations of the model. As another example, the model performance parameters may compare a predictive performance of the model for particular values of a protected attribute. Additional details regarding the determination of model performance parameters are discussed with respect to the method 700 shown in Figure 7.
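
A simple sketch of how such performance and bias parameters might be computed is shown below; the helper names, the 0.5 decision threshold, and the use of a predicted-outcome-rate ratio are illustrative assumptions.

```python
# Sketch of operation 412: predictive metrics plus a simple outcome-rate comparison.
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

def performance_parameters(y_true, y_score, threshold=0.5):
    """Standard predictive performance metrics for a binary classifier."""
    y_pred = (y_score >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, zero_division=0),
        "recall": recall_score(y_true, y_pred, zero_division=0),
        "auc": roc_auc_score(y_true, y_score),
    }

def outcome_rate_ratio(y_pred, protected, group_a, group_b):
    """Compare predicted positive-outcome rates for two protected classes."""
    rate_a = y_pred[protected == group_a].mean()
    rate_b = y_pred[protected == group_b].mean()
    return rate_a / rate_b
```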

[0082] A determination is made at 414 as to whether to update the supervised machine learning model. In some implementations, the supervised machine learning model may continue to be updated until one or more termination criteria are met. Such criteria may include, but are not limited to: a designated number of iterations, a designated level of predictive performance, a designated level of increase in predictive performance.

[0083] The supervised machine learning model is stored on a storage device at 416. In some implementations, storing the supervised machine learning model may involve storing one or more weights or values suitable for use in applying the supervised machine learning model to novel data. For example, in a regression model, storing the supervised machine learning model may involve storing regression coefficients. As another example, in a neural network model, storing the supervised machine learning model may involve storing weights associated with various neurons in the neural network.
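
For example, with the hypothetical scikit-learn pipeline sketched earlier, persisting the fitted model might look like the following; the file name is an assumption.

```python
# Sketch of operation 416: persist the fitted model, including its learned
# weights/coefficients, so it can be reloaded for the test and inference phases.
import joblib

joblib.dump(model, "prediction_model.joblib")

# Later, e.g., in the inference service:
restored_model = joblib.load("prediction_model.joblib")
```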

[0084] Figure 5A, Figure 5B, Figure 5C, Figure 5D, and Figure 5E represent diagrams illustrating how prediction models may be used to predict a target variable, generated in accordance with one or more embodiments.

[0085] Figure 5A represents a training phase using traditional feature data X 314. In Figure 5A, traditional feature data X 314 is used as a measure for the mediating construct M 304 to train a prediction model to produce a score R 502. The score R 502 is then used to reach a decision Y 504. The training is performed by using observed target outcome values Y 306 to determine the model's parameters. For instance, one or more metrics of model performance may be generated based on a comparison of Y 504 with Y 306. However, alternative feature data X' 316 is omitted to avoid training the prediction model to be biased.

[0086] Figure 5B represents an inference phase using traditional data X 314. In Figure 5B, the prediction model trained in Figure 5A is applied to the traditional feature data X 314 as a measure for the mediating construct M 304 to produce a score R 502. The score R 502 is then used to reach a decision Y 504. Of course, in some configurations Y 504 may or may not affect Y 306. For instance, making a determination to extend a loan enables the possibility of loan default, whereas making a determination not to extend a loan removes the possibility of loan default.

[0087] Figure 5C represents a training phase using traditional feature data X 314 and alternative feature data X' 316. In Figure 5C, both traditional feature data X 314 and alternative feature data X' 316 are used to train a prediction model to produce a score R 502, even though the alternative feature data X' 316 may proxy for the protected attribute A 302. To correct for this, and in contrast to Figure 5A, data representing the protected attribute A 302 is also used to train the model (this is known in causal terminology as a backdoor adjustment). The score R 502 is then used to reach a decision Y 504. The training is performed by using observed target outcome values Y 306 to determine the model's parameters. For instance, one or more metrics of model performance may be generated based on a comparison of Y 504 with Y 306.

[0088] Figure 5D represents an inference phase using traditional feature data X 314 and alternative feature data X' 316. In Figure 5D, the prediction model trained in Figure 5C is applied to the traditional feature data X 314 and alternative feature data X' 316 to produce a score R_a 510. The score R_a 510 is then used to reach a decision Y 504. Of course, in some configurations Y 504 may or may not affect Y 306. For instance, making a determination to extend a loan enables the possibility of loan default, whereas making a determination not to extend a loan removes the possibility of loan default. In Figure 5D, data representing the protected attribute A 302 is omitted. The omitted data may be dropped entirely or may be replaced with a default protected attribute value (in causal terminology this could be considered an intervention with a "do" operator, such as do(A = White) for race or do(A = Male) for gender). For instance, all values for race may be set to a single default value.

[0089] Figure 5E represents an alternative approach to an inference phase using traditional feature data X 314 and alternative feature data X' 316. In Figure 5E, the prediction model trained in Figure 5C is applied to the traditional feature data X 314 and alternative feature data X' 316 to produce a score R_a 510, using a default protected attribute value similar to Figure 5D. The prediction model trained in Figure 5C is also applied to the traditional feature data X 314 and alternative feature data X' 316, using the actual protected attribute value to produce a score R 502. The score R 502 and the score R_a 510 are then compared against each other to reach a decision Y 504. For example, a particular observation may be evaluated to determine whether the score R 502 or the score R_a 510 is less discriminatory. The less discriminatory score may then be used to reach the decision Y 504. The less discriminatory score may be selected as, for instance, the better score from the perspective of the applicant in terms of the effect on the decision Y 504.

[0090] For clarification, in Figure 5E, data representing the protected attribute A 302 is omitted when computing the score R_a 510, but not when computing the score R 502. The omitted data may be dropped entirely or may be replaced with default data (in causal terminology this could be considered an intervention with a "do" operator, such as do(A = White) for race or do(A = Male) for gender). For instance, all values for race may be set to a single default value.
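
A compact sketch of the Figure 5E comparison is shown below, assuming the fitted model and the mask_protected helper from the earlier sketches, and assuming that a higher score is more favorable to the applicant; these assumptions are illustrative only.

```python
# Sketch of the Figure 5E approach: score each observation with the actual
# protected attribute values (R 502) and with default values (R_a 510), then
# keep the score that is better from the applicant's perspective.
import numpy as np

def least_discriminatory_scores(model, inference_data):
    r = model.predict_proba(inference_data)[:, 1]                    # actual A
    r_a = model.predict_proba(mask_protected(inference_data))[:, 1]  # do(A = default)
    return np.maximum(r, r_a)
```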

[0091] Figure 6 illustrates an example of a method 600 for preprocessing supervised machine learning data, performed in accordance with one or more embodiments. According to various embodiments, the method 600 may be performed at any suitable computing device.

[0092] A request is received at 602 to prepare training data for supervised machine learning. According to various embodiments, the request may be generated as discussed with respect to operation 406 shown in Figure 4.

[0093] A protected attribute is identified in the training data at 604. According to various embodiments, protected attributes in the training data may be identified based on membership in a set of protected attributes. As discussed herein, which attributes are deemed as protected may be specific to a particular context.

[0094] A feature is selected for analysis at 606. According to various embodiments, features may be analyzed in any suitable order, in sequence or in parallel. In some embodiments, all features in the training data may be analyzed. Alternatively, only features that meet some suitable criteria may be analyzed. For instance, features may not be selected for analysis when they are considered traditional data, but may be selected for analysis when they are considered alternative data.

[0095] A determination is made at 608 as to whether the feature purely proxies for the protected attribute (in causal terminology, this means that the feature has no causal relationship to the target variable). This would correspond to A' 312 in Figure 3F. If it is determined that the feature purely proxies for the protected attribute, then the selected feature is removed from the model and training data at 610.

[0096] In some implementations, the determination made at 608 may involve determining one or more characteristics related to the feature and the protected attribute. For example, the determination may involve determining one or more correlations or truth tables between values of the feature and values of the protected attribute. As another example, the determination may involve determining some measure of predictive power, alone or in combination with other features, that the selected feature has in predicting target outcome values.

[0097] As one example, an attribute such as hair length may be highly correlated with gender but have very little predictive power in predicting a target outcome such as job performance, and hence be deemed a pure proxy for gender. In contrast, an attribute such as education may be somewhat highly correlated with race but nevertheless may provide significant predictive power in models predicting credit default, and hence be deemed not a pure proxy for race. That is, education may have both a causal effect in predicting credit default, as well as a spurious backdoor effect through race.

[0098] According to various embodiments, whether a particular combination of characteristics is deemed to indicate that a feature is a proxy for protected attribute A 302 may be determined by comparing one or more of the characteristics or combinations of characteristics against one or more threshold values. Moreover, different threshold values may be used in different contexts. For example, the predictive power of the model may be generally enhanced by including more features, while bias may be reduced by removing features that more purely proxy for a protected attribute. When evaluating predictive power, operation 608 may be performed in conjunction with one or more operations discussed with respect to the method 700 shown in Figure 7.
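
The sketch below illustrates one way such a determination might be made, flagging a feature as a pure proxy when it is strongly associated with the protected attribute yet contributes little predictive lift; the encoding helper, the model choice, and the thresholds are hypothetical assumptions rather than the disclosed method.

```python
# Sketch of operation 608: flag a candidate feature as a pure proxy A' 312.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def _encode(df):
    """Integer-encode object columns so they can be fed to a tree model."""
    return df.apply(lambda c: c.astype("category").cat.codes if c.dtype == object else c)

def is_pure_proxy(X, y, feature, protected,
                  assoc_threshold=0.95, lift_threshold=0.005):
    # Association: how well does the candidate feature alone predict the protected attribute?
    clf = GradientBoostingClassifier().fit(_encode(X[[feature]]), protected)
    association = clf.score(_encode(X[[feature]]), protected)

    # Predictive lift: in-sample AUC on the target with and without the feature.
    auc_with = roc_auc_score(
        y, GradientBoostingClassifier().fit(_encode(X), y).predict_proba(_encode(X))[:, 1])
    X_wo = X.drop(columns=[feature])
    auc_without = roc_auc_score(
        y, GradientBoostingClassifier().fit(_encode(X_wo), y).predict_proba(_encode(X_wo))[:, 1])

    # High association with A plus negligible lift on Y suggests a pure proxy.
    return association > assoc_threshold and (auc_with - auc_without) < lift_threshold
```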

[0099] An overlap profile including one or more overlap values between the feature and the protected attribute A 302 is determined at 612. According to various embodiments, the overlap profile may identify instances in which a combination of values occurs. In causal language, this allows positivity violations to be identified.

[0100] As a specific example, consider an overlap profile comparing values of gender with values of university institution among a training data set to identify a number and/or percentage of attendees of each institution who are classified as men or women. For many institutions, the overlap values may be high. For example, a training data set may include instances of both men and women who attended Purdue University, with the percentage of men attending the institution being relatively close to 50%. However, for other institutions, the overlap values may be low. For instance, a training data set may include relatively few if any observations in which a man attended a college historically attended only by women, such as Smith College, with the percentage of men attending the institution being relatively close to 0%. Therefore, an algorithm would not be able to isolate the direct treatment effect of one's attendance at Smith College for causing credit default versus its spurious backdoor proxy effect through gender, even if the algorithm had access to the protected attribute.

[0101] In some embodiments, a prediction model trained on data values having insufficient overlap may risk creating bias due to overfitting on rare events. For instance, a handful of men who attended a historically women-only college may have an outsized effect on the predictions produced by a model for such individuals. Accordingly, the data may need to be adjusted to preemptively reduce such bias. Insufficient overlap may be referred to as a positivity violation.

[0102] At 614, a determination is made as to whether the overlap values exceed a designated threshold. According to various embodiments, the designated threshold may be determined so as to avoid or prevent positivity violations, and may depend on the goals or context associated with the prediction model. For example, higher threshold levels may improve model prediction at the expense of increasing potential bias, while lower threshold levels may reduce potential bias but also reduce model predictive power. In particular embodiments, the designated threshold may depend on any of a variety of characteristics, such as the rarity of other combinations of features in the training data, the number of features included in the training data, and the like.

[0103] If one or more of the overlap values fail to exceed a designated threshold, then at 616 the feature values having insufficient overlap are replaced with default feature values. According to various embodiments, various approaches may be used to determine default feature values. For example, feature values with insufficient overlap may be dropped completely and treated as missing. As another example, feature values with insufficient overlap may be replaced with comparable feature values that have sufficient overlap. For instance, a particular educational institution (e.g., Smith College) in an observation may be replaced with a different educational institution of comparable quality and characteristics (e.g., New York University). As yet another example, feature values with insufficient overlap may be replaced with more generalized feature values. For instance, a zip code may be replaced with a city and state, or a particular education institution (e.g., Smith College) may be replaced with a general descriptor (e.g., 4-year college in Massachusetts). The feature value replacement rules used to determine the default values may depend on the particular empirical context. However, any rules applied at 616 to replace feature values may be stored so that the same rules can be applied during the inference phase.
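
The following sketch shows one possible way to build an overlap profile and apply feature value replacement rules; the threshold, the "OTHER" fallback, and the replacement map are hypothetical assumptions.

```python
# Sketch of operations 612-616: overlap profile and replacement of low-overlap values.
import pandas as pd

def overlap_profile(df, feature, protected):
    """Share of each protected class within each feature value (rows sum to 1)."""
    return pd.crosstab(df[feature], df[protected], normalize="index")

def replace_low_overlap(df, feature, protected, threshold=0.05, replacements=None):
    profile = overlap_profile(df, feature, protected)
    low_overlap = profile[(profile < threshold).any(axis=1)].index
    rules = {value: (replacements or {}).get(value, "OTHER") for value in low_overlap}
    out = df.copy()
    out[feature] = out[feature].replace(rules)
    return out, rules  # the rules are stored so they can be reapplied at inference
```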

[0104] Alternatively, or additionally, a protected attribute feature value may be replaced to eliminate the positivity violation. For instance, a protected attribute feature value may be replaced with an aggregate value. For example, a zip code may contain Whites, Blacks, and Latinos but few if any Asians. In this example, rather than treating each of the racial groups separately, Whites and Asians may be aggregated as one group and Blacks and Latinos aggregated as another group. Such a replacement would then provide the overlap needed to avoid positivity violations. For clarity, a positivity violation may be corrected using one or more feature value replacement rules, one or more protected attribute replacement rules, or a combination thereof.

[0105] A determination is made at 618 as to whether to select an additional feature for analysis. In some implementations, as discussed with respect to operation 606, all features in the training data may be analyzed. Alternatively, only features that meet some suitable criteria may be analyzed. For instance, features may not be selected for analysis when they are considered traditional data, but may be selected for analysis when they are considered alternative data.

[0106] If no additional feature is selected for analysis, then a determination is made at 620 as to whether to select an additional protected attribute for analysis. According to various embodiments, each protected attribute included in the training data may be analyzed to determine whether to remove proxies and/or determine default data values.

[0107] Figure 7 illustrates an example of a method 700 for evaluating a supervised machine learning model, performed in accordance with one or more embodiments. According to various embodiments, the method 700 may be performed on any suitable computing device.

[0108] A request is received at 702 to evaluate an instance of a supervised machine learning model. According to various embodiments, the request may be generated as discussed with respect to operation 406 shown in Figure 4.

[0109] Test data for analysis is determined at 704. According to various embodiments, a training data set may be divided into data used to actively train the model and data used to test the performance of the training. For example, K-fold validation is one such technique. Accordingly, the test data may include any data not used to train the model.
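
For example, a K-fold split might look like the sketch below; training_data is assumed to be the full preprocessed training set from Figure 4 with enough rows for five folds.

```python
# Sketch of operation 704: split the data into training and test folds.
from sklearn.model_selection import KFold

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for train_index, test_index in kfold.split(training_data):
    train_fold = training_data.iloc[train_index]
    test_fold = training_data.iloc[test_index]
    # Fit the model on train_fold, then score it on test_fold.
```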

[0110] In some implementations, a test data set may be preprocessed using some or all of the techniques discussed with respect to operation 406 and the method 800 shown in Figure 8. That is, the same rules used to determine default data values for feature values exhibiting insufficient overlap or positivity violations may be applied to the test data set.

[0111] One or more model-level model performance metrics are determined at 706. According to various embodiments, any of a variety of suitable model-level model performance metrics may be determined. Examples of such metrics may include, but are not limited to: accuracy, lift, precision, recall, and area under a receiver operating characteristic curve (AUC). Performance metrics may include fairness measures, such as demographic/statistical parity. Such fairness measures may or may not involve controlling for other features, like income.

[0112] A protected attribute is selected for analysis at 708. According to various embodiments, each protected attribute included in the test data may be selected for analysis. Attributes may be analyzed in sequence, in parallel, or in any suitable order.

[0113] A protected attribute value is selected for analysis at 710. According to various embodiments, protected attribute values selected for analysis may include any values that may be assigned to the protected attribute within the test data set.

[0114] One or more attribute-level model performance metrics are determined at 712. According to various embodiments, the attribute-level performance metrics may include any or all of the model-level performance metrics discussed with respect to the operation 706. In this way, the predictive performance of the model for particular subsets of the data may be determined independently. Moreover, the predictive performance of the model for particular classes may be compared across different instances of the model. For example, the predictive performance of a model for women may be compared before and after adding a particular feature to the model. In such a configuration, the feature may be retained if it generally improves or at least does not harm the predictive performance of the model for values of a protected attribute. In addition, the feature may also be retained if it improves fairness, such as reducing relative denial rates between groups, without a significant drop in a performance metric like accuracy.
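
One possible sketch of attribute-level metrics, reusing the hypothetical performance_parameters helper from the earlier sketch, is shown below; it assumes y_true, y_score, and protected are aligned pandas Series and that each group contains both outcome classes.

```python
# Sketch of operation 712: compute the same metrics separately for each
# protected attribute value so per-group performance can be compared.
def attribute_level_metrics(y_true, y_score, protected):
    return {
        value: performance_parameters(y_true[protected == value],
                                      y_score[protected == value])
        for value in protected.unique()
    }
```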

[0115] A determination is made at 714 as to whether to select an additional protected attribute value for analysis. If no additional protected attribute value is selected for analysis, then at 716 a determination is made as to whether to select an additional protected attribute for analysis. As discussed with respect to the operations 708 and 710, any or all of the protected attributes and associated values may be analyzed to determine their contributions to model performance and/or any indications of bias related to the values.

[0116] Figure 8 illustrates an example of a method 800 for applying a supervised machine learning model, performed in accordance with one or more embodiments. According to various embodiments, the method 800 may be used to determine one or more predicted outcome values based on a prediction model trained as described with respect to the method 400 shown in Figure 4.

[0117] A request to apply a supervised machine learning model is received at 802. In some implementations, the request may be received at a computing device. The request may include or reference any or all of the information discussed in Figure 8.

[0118] First inference data for the supervised machine learning model is identified at 804. In some implementations, the first inference data may include one or more observations similar to those discussed with respect to the training data in Figure 4. However, an inference observation may include a target value that is not observable.

[0119] A determination is made at 806 as to whether the first inference data includes protected attribute values. The determination may be made by analyzing the features reflected in the observations within the inference data. If the first inference data includes protected attribute values, then at 808 the first inference data is used to determine second inference data by updating the first inference data to remove the protected attribute values.

[0120] In some embodiments, the operation 808 may involve entirely removing data values corresponding with the protected attribute. For instance, a sex or race parameter in a supervised machine learning model may be dropped entirely.

[0121] In some embodiments, the operation 808 may involve replacing data values corresponding with the protected attribute value with default values. For instance, all data values corresponding with gender may be set to either male or female.

[0122] A determination is made at 810 as to whether the second inference data includes feature data values with insufficient overlap or a positivity violation. According to various embodiments, the determination may involve analyzing the second inference data to determine whether it includes combinations of feature values identified in Figure 6 as having insufficient overlap or a positivity violation.

[0123] If it is determined that the inference data includes feature data values with insufficient overlap or a positivity violation, then at 812, the inference data is updated to remove the feature data values. According to various embodiments, the operation 812 may involve replacing feature data values having insufficient overlap with substitute values. In some embodiments, the substitute values may be determined as discussed with respect to the method 600 shown in Figure 6.

[0124] At 814, one or more predicted target values are determined for each observation. In some embodiments, a first one or more predicted target values may be determined by applying the prediction model determined in Figure 4 to the first inference data identified at 804. In such an approach, when determining the first one or more predicted target values, the original protected attribute values are used to generate the predicted outcome values.

[0125] In some embodiments, another predicted target value may be determined by applying the prediction model determined in Figure 4 to the second inference data determined as discussed with respect to the operations 808. In such an approach, and in contrast to the approach described in the previous paragraph, the original protected attribute values are not used to generate the predicted outcome values. If no substitute protected attribute values are determined, then the protected attribute values may be omitted entirely. Alternatively, if substitute protected attribute values are determined, then the substitute protected attribute values may be supplied to the supervised machine learning model instead of the original protected attribute values when determining the second one or more predicted target values.

[0126] In particular embodiments, more than two predicted target values may be determined. For example, an observation corresponding to a person may be tested as (1) a black female, (2) a black male, (3) a white female, and (4) a white male. Each of these different observations may lead to a different predicted outcome value.

[0127] A third one or more predicted target values are selected from the two or more predicted target values at 816. In some embodiments, selecting the predicted target value for a given observation may involve selecting the predicted target value that is least discriminatory. Thus, the selected target values may include one target value for each observation. In the event that the inference data includes multiple observations, the predicted target value for some observations may potentially be drawn from the first inference data, while the predicted target value for other observations may potentially be drawn from the second inference data.

[0128] For example, consider the use of gender in a predictive model used for determining whether to grant a loan application. In such a model, the actual gender of an individual represented by an inference observation may be used in the first inference data, while all individuals may be artificially set to "male" in the second inference data. For a person who is actually male, the same prediction would be generated, and indeed only one prediction may need to be generated for such a person. For a person who is actually female, the prediction produced when using the first inference data (i.e., when the data correctly identifies the person as female) may differ from the prediction produced when using the second inference data (i.e., when the data is artificially set to identify the person as male). In the event of such a disparity, the prediction having the best score from the person's perspective may be selected, since that score may be considered as the least discriminatory.

[0129] As another example, consider the use of race in a predictive model used for determining whether to grant admission to a college. In such a model, the actual race of an individual represented by an inference observation may be used in the first inference data, while all individuals may be artificially set to "white" in the second inference data. For a person who is actually white, the same prediction would be generated, and indeed only one prediction may need to be generated for such a person. For a person who is actually non-white, the prediction produced when using the first inference data (i.e., when the data correctly identifies the person as non-white) may differ from the prediction produced when using the second inference data (i.e., when the data is artificially set to identify the person as white). In the event of such a disparity, the prediction having the best score from the person's perspective may be selected, since that score may be considered as the least discriminatory.

[0130] According to various embodiments, selecting a third predicted target value from the two or more predicted target values may involve determining a maximum or minimum. For instance, selecting the third predicted target value may involve selecting the largest predicted target value or the smallest predicted target value. As another example, the designated predicted target value may correspond to an outcome having an ordinal ranking, and selecting the designated predicted target value may involve selecting a value of the first predicted target value and the second one or more predicted target values having a most positive ordinal ranking for the designated inference observation. For instance, the most positive ordinal ranking may be the ordinal ranking most likely to lead to a positive decision on a loan application, admissions decision, or the like.
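
A compact sketch of this selection step, extended to the several default combinations described in paragraph [0126], is shown below; it assumes a fitted model, that a higher score favors the applicant, and hypothetical column names and default combinations.

```python
# Sketch of operations 814-816: score with the actual protected attribute values and
# with each default combination, then keep the most favorable score per observation.
import numpy as np

DEFAULT_COMBINATIONS = [
    {"race": "Black", "sex": "Female"},
    {"race": "Black", "sex": "Male"},
    {"race": "White", "sex": "Female"},
    {"race": "White", "sex": "Male"},
]

def select_predicted_targets(model, first_inference_data):
    candidate_scores = [model.predict_proba(first_inference_data)[:, 1]]  # actual A values
    for combo in DEFAULT_COMBINATIONS:
        masked = first_inference_data.assign(
            **{col: val for col, val in combo.items()
               if col in first_inference_data.columns})
        candidate_scores.append(model.predict_proba(masked)[:, 1])
    # Per observation, select the least discriminatory (most favorable) score.
    return np.maximum.reduce(candidate_scores)
```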

[0131] The third one or more predicted target values are stored at 820. According to various embodiments, the predicted outcome values may be stored on any suitable storage device.

[0132] Figure 9 illustrates one example of a computing device. According to various embodiments, a system 900 suitable for implementing embodiments described herein includes a processor 901, a memory module 903, a storage device 905, an interface 911, and a bus 915 (e.g., a PCI bus or other interconnection fabric). System 900 may operate as a variety of devices, such as a computing device configured to perform data analysis, a cloud computing system configured to perform data analysis, or any other device or service described herein. Although a particular configuration is described, a variety of alternative configurations are possible. The processor 901 may perform operations such as those described herein (and may include CPUs, GPUs, TPUs, or some combination thereof, for example). Instructions for performing such operations may be embodied in the memory 903, on one or more non-transitory computer readable media, or on some other storage device. Various specially configured devices can also be used in place of or in addition to the processor 901. The interface 911 may be configured to send and receive data packets over a network. Examples of supported interfaces include, but are not limited to: Ethernet, fast Ethernet, Gigabit Ethernet, frame relay, cable, digital subscriber line (DSL), token ring, Asynchronous Transfer Mode (ATM), High-Speed Serial Interface (HSSI), and Fiber Distributed Data Interface (FDDI). These interfaces may include ports appropriate for communication with the appropriate media. They may also include an independent processor and/or volatile RAM. A computer system or computing device may include or communicate with a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.

[0133] Any of the disclosed implementations may be embodied in various types of hardware, software, firmware, computer readable media, and combinations thereof. For example, some techniques disclosed herein may be implemented, at least in part, by computer-readable media that include program instructions, state information, etc., for configuring a computing system to perform various services and operations described herein. Examples of program instructions include both machine code, such as produced by a compiler, and higher-level code that may be executed via an interpreter. Instructions may be embodied in any suitable language such as, for example, Java, Python, C++, C, HTML, any other markup language, JavaScript, ActiveX, VBScript, or Perl. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks and magnetic tape; optical media such as flash memory, compact disk (CD) or digital versatile disk (DVD); magneto-optical media; and other hardware devices such as read-only memory ("ROM") devices and random-access memory ("RAM") devices. A computer-readable medium may be any combination of such storage devices.

[0134] In the foregoing specification, various techniques and mechanisms may have been described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless otherwise noted. For example, a system uses a processor in a variety of contexts but can use multiple processors while remaining within the scope of the present disclosure unless otherwise noted. Similarly, various techniques and mechanisms may have been described as including a connection between two entities. However, a connection does not necessarily mean a direct, unimpeded connection, as a variety of other entities (e.g., bridges, controllers, gateways, etc.) may reside between the two entities.

[0135] In the foregoing specification, reference was made in detail to specific embodiments including one or more of the best modes contemplated by the inventors. While various implementations have been described herein, it should be understood that they have been presented by way of example only, and not limitation. Particular embodiments may be implemented without some or all of the specific details described herein. In other instances, well known process operations have not been described in detail in order to avoid unnecessarily obscuring the disclosed techniques. Accordingly, the breadth and scope of the present application should not be limited by any of the implementations described herein, but should be defined only in accordance with the claims and their equivalents.