Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection

Avoiding Bias in AI

Government agencies that deliver public benefits are frequent targets of fraud. In some states, for instance, more than 40% of unemployment benefits have been paid out improperly. Artificial intelligence (AI) can identify potential fraud with greater accuracy and speed than manual review. However, AI also has a well-documented potential to introduce bias: proxy variables or historically biased training data can produce unfair outcomes. This is especially concerning for government agencies because the datasets they handle contain legally protected attributes such as gender, age, and race. By understanding how machine learning (ML) models work, organizations can apply four approaches to mitigating bias: Unawareness, Demographic Parity, Equalized Odds, and Predictive Rate Parity. Each of these methods has tradeoffs that must be accepted. For example, Demographic Parity, Equalized Odds, and Predictive Rate Parity all involve disparate treatment, meaning that they apply decisions to demographic groups in dissimilar ways.


Avoiding Bias in AI Tip #1: Unawareness. A model using this concept omits sensitive attributes such as age. However, it doesn't account for proxies, that is, other features that correlate with those sensitive attributes.

In our hypothetical example, a tax agency's ML model selects 50 taxpayers for review, and the number of selected taxpayers who later call the contact center indicates how well the selection worked (fewer calls is better). Applying the Unawareness concept, the model selects taxpayers based on their frequency of calls to the contact center. It doesn't use age directly, but because age correlates with phone usage, it favors younger taxpayers. As a result, the model selects 35 taxpayers under age 45 and 15 taxpayers over age 45. Ten of the selected taxpayers end up calling the contact center, not a bad result, but perhaps not ideal.
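
To make the mechanism concrete, here is a minimal Python sketch with fabricated data. The record fields, the direction of the correlation between age and call frequency, and the selection rule are all hypothetical illustrations, not the agency's actual model:

```python
import random

random.seed(0)

# Fabricated taxpayer records. By construction, younger taxpayers call
# less often here, so call frequency acts as a proxy for age.
taxpayers = []
for i in range(500):
    age = random.randint(20, 70)
    base = 2 if age < 45 else 5
    taxpayers.append({"id": i, "age": age,
                      "calls": max(0, base + random.randint(-2, 2))})

def select_unaware(records, k=50):
    """Rank on call frequency only; the sensitive attribute is omitted."""
    return sorted(records, key=lambda r: r["calls"])[:k]

selected = select_unaware(taxpayers)
under = sum(1 for r in selected if r["age"] < 45)
print(f"under 45: {under}, over 45: {len(selected) - under}")
# Even though age never enters the model, the proxy skews selection
# toward younger taxpayers.
```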

Avoiding Bias in AI Tip #2: Demographic Parity. With this concept, the model's probability of predicting a specific outcome is the same for one individual as for another individual with different sensitive attributes. In other words, each demographic group is selected at the same rate.

Applying the Demographic Parity concept, the tax agency's ML model uses age directly to ensure an equal split of taxpayers above and below age 45. As a result, the model selects 25 taxpayers under age 45 and 25 over age 45. Fourteen of the selected taxpayers call the contact center, a less favorable result than with the Unawareness concept.
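
A minimal sketch of the same idea, again with fabricated data: the model uses age explicitly to take an equal number of taxpayers from each group, regardless of what the proxy feature would suggest.

```python
import random

random.seed(0)

# Same fabricated records as in the Unawareness sketch.
taxpayers = []
for i in range(500):
    age = random.randint(20, 70)
    base = 2 if age < 45 else 5
    taxpayers.append({"id": i, "age": age,
                      "calls": max(0, base + random.randint(-2, 2))})

def select_demographic_parity(records, k=50):
    """Use age directly to take the same number from each group,
    so selection rates are balanced across the sensitive attribute."""
    young = sorted((r for r in records if r["age"] < 45),
                   key=lambda r: r["calls"])
    older = sorted((r for r in records if r["age"] >= 45),
                   key=lambda r: r["calls"])
    return young[:k // 2] + older[:k // 2]

selected = select_demographic_parity(taxpayers)
under = sum(1 for r in selected if r["age"] < 45)
print(f"under 45: {under}, over 45: {len(selected) - under}")  # 25 and 25
```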

Avoiding Bias in AI Tip #3: Equalized Odds. With Equalized Odds, individuals with the same actual outcome have the same probability of being selected, regardless of their sensitive attributes. In practice, this means the model's true-positive and false-positive rates are equal across groups.

Applying this concept, the agency's ML model uses age to equalize true-positive and false-positive rates for taxpayers above and below age 45. As a result, it selects 30 taxpayers under age 45 and 20 over age 45. Eight taxpayers call the contact center, the best outcome so far.
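
The sketch below shows how an agency might check Equalized Odds on a handful of fabricated records. The `is_fraud` field is a hypothetical stand-in for whatever ground-truth outcome the model is trying to predict:

```python
# Fabricated records: (age, is_fraud, selected_by_model).
records = [
    (30, True,  True), (35, False, False), (40, True,  True),
    (28, False, True), (50, True,  True),  (55, False, False),
    (60, True,  False), (48, False, False),
]

def group_rates(rows):
    """Return (true-positive rate, false-positive rate) for a group."""
    tp = sum(1 for _, fraud, sel in rows if fraud and sel)
    fp = sum(1 for _, fraud, sel in rows if not fraud and sel)
    pos = sum(1 for _, fraud, _ in rows if fraud)
    neg = len(rows) - pos
    return (tp / pos if pos else 0.0, fp / neg if neg else 0.0)

under_45 = [r for r in records if r[0] < 45]
over_45 = [r for r in records if r[0] >= 45]

# Equalized Odds holds when both rates match across the two groups.
print("under 45 (TPR, FPR):", group_rates(under_45))
print("over 45  (TPR, FPR):", group_rates(over_45))
```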

Avoiding Bias in AI Tip #4: Predictive Rate Parity. With this concept, among individuals the model predicts to have a specific outcome, the probability that the outcome actually occurs is the same regardless of sensitive attributes. In other words, the model's precision is the same across groups.

Applying this concept, the agency's ML model uses age to ensure that, among the selected taxpayers who call the contact center, equal numbers are under and over age 45. As a result, it selects 40 taxpayers under age 45 and 10 over age 45. Eight taxpayers call the contact center, the same outcome as with Equalized Odds.
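
Checking Predictive Rate Parity amounts to comparing precision across groups. In this fabricated example, the `is_fraud` label is again a hypothetical stand-in for the model's target outcome:

```python
# Fabricated records: (age, is_fraud, selected_by_model).
records = [
    (25, True,  True), (33, False, True), (41, True, True), (38, True, True),
    (52, True,  True), (47, False, True), (63, True, True), (58, True, True),
]

def ppv(rows):
    """Positive predictive value: fraction of selected rows that are fraud."""
    chosen = [fraud for _, fraud, sel in rows if sel]
    return sum(chosen) / len(chosen) if chosen else 0.0

under_45 = [r for r in records if r[0] < 45]
over_45 = [r for r in records if r[0] >= 45]

# Predictive Rate Parity holds when these two values are equal.
print("PPV under 45:", ppv(under_45))  # 3 of 4 selected are fraud -> 0.75
print("PPV over 45: ", ppv(over_45))   # 3 of 4 selected are fraud -> 0.75
```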

To summarize the results of this hypothetical scenario: the two models that achieve the most desirable outcome, Equalized Odds and Predictive Rate Parity, rely on sensitive data. The one model that doesn't rely on sensitive data, Unawareness, achieves a fairly desirable outcome, but the data it uses is a proxy for sensitive data.

Balancing Accuracy and Fairness

One challenge for ML modelers is that the four fairness concepts are mutually exclusive: in general, a model cannot satisfy them all at once. Modelers have to select one fairness definition to apply to an algorithm, and then accept the tradeoffs.

Demographic Parity, Equalized Odds, and Predictive Rate Parity all involve disparate treatment. Unawareness doesn’t involve disparate treatment, but it can result in disparate impacts. Each concept has its pros and cons, and there’s no correct or incorrect choice.

Another challenge is that there’s often a tradeoff between accuracy and fairness. A highly accurate model might not be equitable. But improving the model’s fairness can make it less accurate. For fraud detection, an agency might choose to run a less accurate model to make fraud detection more equitable.
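
One way to see this tradeoff is to sweep a model's decision threshold and track accuracy alongside a fairness gap. The sketch below does so with fabricated scores; the 20% fraud rate, the score formula, and the thresholds are all invented for illustration:

```python
import random

random.seed(1)

# Fabricated records with a hypothetical risk score and fraud label.
records = []
for i in range(1000):
    age = random.randint(20, 70)
    fraud = random.random() < 0.2
    # The score tracks fraud but, by construction, also drifts with age.
    score = 0.6 * fraud + 0.3 * (age < 45) + 0.4 * random.random()
    records.append({"age": age, "fraud": fraud, "score": score})

for threshold in (0.4, 0.6, 0.8):
    sel = [r for r in records if r["score"] >= threshold]
    acc = sum((r["score"] >= threshold) == r["fraud"]
              for r in records) / len(records)
    rate = lambda grp: (sum(1 for r in sel if grp(r))
                        / max(1, sum(1 for r in records if grp(r))))
    gap = abs(rate(lambda r: r["age"] < 45) - rate(lambda r: r["age"] >= 45))
    print(f"threshold {threshold}: accuracy {acc:.2f}, parity gap {gap:.2f}")
# Moving the threshold changes accuracy and the fairness gap together;
# the agency must choose which operating point it can accept.
```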

AI is helping governments more efficiently and effectively identify and prevent fraud. What’s important is that they understand how ML concepts can affect treatment and outcomes, and that they be transparent about how they’re using AI. By leveraging strategies to avoid bias and inequity in AI-enabled fraud detection, they can serve the public fairly.
