Bias in AI Decision-Making

Historically, humans have been the decision-makers in areas such as hiring, loan eligibility, and medical diagnosis. However, artificial intelligence (AI) has advanced to the point where it can perform certain tasks more skillfully and reliably than humans, and that accuracy has led to its adoption in hiring, loan assessment, housing, medicine, and other sectors. The potential benefits of AI for society are evident.

Before AI became prevalent, biased decision-making by humans was widespread. A significant example is the redlining practice of the Federal Housing Administration, which discriminated against neighborhoods based on race and caused long-lasting harm, including depressed house prices. Redlining demonstrates how biased decision-making can have severe, long-term societal consequences and entrench discrimination.

However, the use of computers in decision-making does not automatically eliminate bias. AI-driven decisions can still introduce and even amplify certain biases, affecting human lives through prejudice against race, gender, age, or other traits.

Bias is defined as an inclination of temperament or outlook; when we make biased decisions, they are all skewed in the same direction. Bias is closely related to discrimination, which involves prejudiced actions or treatment: discrimination arises from bias, and the resulting prejudicial actions can carry historical consequences, as redlining did. This article explores bias in decisions made by AI systems.

AI decisions can become biased in several ways. One is by confusing correlation with causation. Correlation means that two variables change together; causation means that one actually produces the other. Assuming one factor causes another merely because the two vary together is a fallacy. For example, a model might treat an applicant’s zip code as a driver of job performance simply because it finds a correlation between the two.
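
A minimal sketch of this fallacy, using synthetic data and hypothetical variable names: zip code has no causal effect on performance here, yet a naive model would see a strong association, because both depend on a hidden confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden confounder (hypothetical): e.g., access to training resources.
resources = rng.normal(size=n)

# Zip-code group and job performance both depend on the confounder,
# but not on each other.
zip_group = (resources + rng.normal(scale=0.5, size=n) > 0).astype(float)
performance = 2.0 * resources + rng.normal(scale=0.5, size=n)

# A naive screening model would see a strong association:
print(np.corrcoef(zip_group, performance)[0, 1])  # clearly positive

# Conditioning on the confounder makes the association vanish,
# revealing that zip code itself explains nothing.
X = np.column_stack([np.ones(n), zip_group, resources])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(coef[1])  # near zero once resources are held fixed
```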

Bias can also produce discrimination when an algorithm weighs irrelevant factors. While relevant parameters can improve accuracy, irrelevant ones can reinforce inequalities: personal attributes such as nationality and ethnicity do not affect a person’s skills, yet a model that uses them will treat otherwise identical candidates differently.

Another source of bias is skewed data sets that do not accurately represent the target population. This can lead to algorithms making more errors for under-represented demographics. For instance, facial recognition algorithms have shown higher error rates for darker-skinned females compared to lighter-skinned males because their data sets under-represent people with darker skin. Similarly, facial recognition algorithms developed in East Asia tend to perform better on Asian subjects, while those developed in the Western hemisphere perform better on white subjects; this discrepancy reflects the different racial distributions of their training sets.
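
A minimal sketch of this mechanism, again on synthetic data: when one group supplies 95% of the training examples, a decision threshold tuned on the pooled data is effectively tuned for that group, and the under-represented group bears a higher error rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_group(n, shift):
    """Scores for negatives ~ N(0,1), positives ~ N(2,1), offset by `shift`."""
    y = rng.integers(0, 2, size=n)
    scores = rng.normal(loc=2.0 * y + shift)
    return scores, y

# 95% of the data comes from group A; 5% from group B, whose score
# distribution is shifted (e.g., a data-collection artifact).
s_a, y_a = sample_group(9_500, shift=0.0)
s_b, y_b = sample_group(500, shift=-1.0)
scores = np.concatenate([s_a, s_b])
labels = np.concatenate([y_a, y_b])

# Pick the threshold that minimizes overall error on the pooled data.
thresholds = np.linspace(-3, 5, 200)
errors = [np.mean((scores > t) != labels) for t in thresholds]
t_best = thresholds[int(np.argmin(errors))]

# The pooled-optimal threshold serves group A far better than group B.
print("group A error:", np.mean((s_a > t_best) != y_a))
print("group B error:", np.mean((s_b > t_best) != y_b))
```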

Biased humans can also contribute to biases in AI algorithms. For example, studies have found that resumes with “white names” receive more callbacks than those with “African-American names,” indicating discrimination in the labor market. Since AI systems involve a human component, these biases can manifest in algorithms and lead to discrimination.

An example of AI bias is Amazon’s AI hiring program, begun in 2014 and later canceled after bias against women was discovered. The algorithm favored resumes from men and penalized resumes containing the word “women’s” because the training data reflected male dominance in the tech industry. This example highlights how AI decision-making can harm potential workers and contribute to broader societal discrimination. Such biased hiring practices also raise legal issues under Federal Equal Opportunity Laws, which prohibit employment discrimination based on race, color, religion, sex, or national origin.

AI has the potential to benefit science, well-being, economics, and environmental solutions. However, public trust is essential for the widespread adoption of AI. AI must align with human values and explain its reasoning. Trust is especially important in avoiding discrimination, particularly in hiring, where biased AI can erode faith in technology.

Addressing AI bias requires tackling each of these sources: mistaking correlation for causation, relying on irrelevant factors, using skewed data sets, and human contributions to bias. Eliminating discriminatory variables is complex, because bias can persist even when explicit personal factors are removed; correlated proxies can reintroduce it, as the sketch below shows. Skewed data sets, whether they stem from real-world biases or incomplete representation, must be improved; Google’s AI department suggests augmenting public training data sets to better reflect the real-world frequencies of people. Diversifying the AI field and implementing governance and testing methodologies can help combat human bias.
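
A minimal sketch, on synthetic data with hypothetical variable names, of why dropping the protected attribute is not enough: a model fitted without it can still reconstruct it from a correlated proxy, so historically biased labels keep influencing predictions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

protected = rng.integers(0, 2, size=n)             # e.g., a demographic group
proxy = protected + rng.normal(scale=0.3, size=n)  # strongly correlated proxy
skill = rng.normal(size=n)                         # the factor that should matter

# Historical labels are biased against the protected group.
label = skill - 0.8 * protected + rng.normal(scale=0.3, size=n)

# Fit a linear model WITHOUT the protected attribute (proxy + skill only).
X = np.column_stack([np.ones(n), proxy, skill])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)
pred = X @ coef

# Predictions still differ sharply by group: bias re-enters via the proxy.
print(pred[protected == 1].mean() - pred[protected == 0].mean())
```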

Measuring fairness in algorithms is essential if we want decisions to treat individuals equitably. Group fairness aims for statistical parity among different groups, meaning each group receives favorable outcomes at the same rate; individual fairness seeks similar outcomes for similar individuals, regardless of group membership.
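
As a concrete illustration, here is a minimal sketch (with hypothetical decision and group arrays) of one common group-fairness metric, the statistical parity gap: the difference in favorable-outcome rates between groups.

```python
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = favorable outcome
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])      # protected-attribute value

rate_0 = decisions[group == 0].mean()  # favorable rate for group 0
rate_1 = decisions[group == 1].mean()  # favorable rate for group 1

# Statistical parity holds when the gap is (near) zero.
print("parity gap:", abs(rate_0 - rate_1))  # 0.75 - 0.25 = 0.5 here
```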

These fairness concerns also have legal weight, particularly in employment discrimination: the Equal Employment Opportunity Commission (EEOC) enforces Title VII of the Civil Rights Act of 1964, the statute behind the protections noted above.