Fairness and Bias in Machine Learning

In today’s world of big data, machines increasingly make decisions about people. Unlike a human decision-maker, a computer algorithm is supposed to be objective and unbiased.

In recent years, though, biases in algorithms have become apparent. Some have hurt both demographic groups and company reputations:

  • Amazon scrapped a computer program that rated resumes for technical jobs after finding out that it gave lower scores to women.
  • Microsoft suspended a chatbot on Twitter after malicious users taught it to spout racist language.

“Just because it’s a computer doesn’t mean that it’s going to be fair and unbiased,” says David Anderson, PhD, Villanova University assistant professor of analytics and Master of Science in Analytics faculty director.

The good news, he says, is that overcoming bias in machine learning is possible. By uncovering its sources, programmers can prevent harm and help artificial intelligence live up to its promise.

3 Aspects of Bias in Machine Learning

The most common sources of bias in machine learning are the datasets used to train the model, Anderson says. “We build machine-learning models based on the data that we have. Lots of times, what we end up doing is just automating the status quo.”

But data are just one potential source. To root out bias, Anderson looks at three different aspects of machine-learning fairness:

Input Accountability

Input accountability means making sure that all relevant groups are well represented in the data. “You have to have a good distribution by age, by race and by gender of the people enrolled in a study to make sure that it works well for everybody,” Anderson says.

The input data for Amazon’s algorithm, for example, consisted of 10 years’ worth of resumes. Because those resumes were predominantly from men, the algorithm discriminated against women.
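
A representation check like this can be written in a few lines. The sketch below assumes a hypothetical pandas DataFrame of training examples with a gender column and compares each group’s share of the data against a benchmark, such as the applicant population:

    import pandas as pd

    def representation_report(df, column, benchmark):
        """Compare each group's share of the data with a benchmark share."""
        observed = df[column].value_counts(normalize=True)
        report = pd.DataFrame({
            "share_in_data": observed,
            "share_in_benchmark": pd.Series(benchmark),
        })
        report["gap"] = report["share_in_data"] - report["share_in_benchmark"]
        return report

    # A resume pool skewed toward men, as in the Amazon example,
    # audited against a 50/50 benchmark population.
    resumes = pd.DataFrame({"gender": ["M"] * 85 + ["F"] * 15})
    print(representation_report(resumes, "gender", {"M": 0.5, "F": 0.5}))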

Output Fairness

Analysts can also find bias in machine learning by auditing a model’s outputs and comparing them with real-world outcomes. “If you have an 80% chance of paying back your loan, the model should give you an 80% chance,” Anderson says.

Models should be fair for both demographic groups and individuals, he says. While no model can be perfectly accurate, fairness means that accuracy levels are roughly equal for each group.

“When you can quantitatively measure performance and say your model is 82% accurate for Black people and 84% for white people and 83% for both men and women, you can feel pretty good that your model is performing equally well for all demographic groups,” Anderson says.
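
A basic version of that audit needs only a table of outcomes, model scores and group labels. The sketch below uses made-up data and generic groups A and B to compute accuracy per group, plus a rough calibration check in the spirit of Anderson’s 80% example:

    import pandas as pd

    # Hypothetical audit table: outcomes, model scores, thresholded
    # predictions and a demographic group label for each person.
    audit = pd.DataFrame({
        "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
        "score":  [0.9, 0.2, 0.7, 0.4, 0.3, 0.8, 0.6, 0.1],
        "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    })
    audit["y_pred"] = (audit["score"] >= 0.5).astype(int)

    # Accuracy per group: roughly equal values suggest the model
    # performs equally well for each demographic group.
    accuracy = (audit["y_true"] == audit["y_pred"]).groupby(audit["group"]).mean()
    print(accuracy)

    # Calibration per group: people the model scores around 80% should
    # repay around 80% of the time. Here we just compare group means.
    calibration = audit.groupby("group").agg(mean_score=("score", "mean"),
                                             observed_rate=("y_true", "mean"))
    print(calibration)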

Transparency

A third way to discourage machine-learning bias is to illustrate how an algorithm arrives at its decisions. An applicant who’s denied a loan should be informed of the key factors in the decision, such as income or credit score.

“You want to have a transparent model so that when you put stuff in, you know why it labeled you the way it did,” Anderson says.
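
One route to that transparency is an inherently interpretable model. The sketch below is a hypothetical loan example, not any real lender’s system: with a logistic regression, multiplying each coefficient by an applicant’s standardized feature value shows how strongly income and credit score pushed the decision either way:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    # Toy training data: income and credit score for past applicants,
    # with 1 = loan approved. Entirely made up for illustration.
    X = np.array([[45_000, 620], [80_000, 710], [30_000, 580],
                  [95_000, 760], [52_000, 640], [70_000, 690]], dtype=float)
    y = np.array([0, 1, 0, 1, 0, 1])

    scaler = StandardScaler()
    model = LogisticRegression().fit(scaler.fit_transform(X), y)

    # For a denied applicant, each coefficient-times-feature term shows
    # how much that (standardized) factor pushed the score up or down.
    applicant = scaler.transform([[40_000, 600]])[0]
    contributions = dict(zip(["income", "credit_score"], model.coef_[0] * applicant))
    print(contributions)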

Gender Pay Gaps and Machine-Learning Fairness

To show how biases in machine learning can be corrected, Anderson points to several companies in Iceland. After the country passed a law requiring equal pay for equal work, he helped design a model that measured gender gaps in pay and recommended how to close them. The model operates in three stages:

  • It estimates what each worker should be paid, based on gender-neutral determinants such as education, job role and experience.
  • It calculates the influence of gender bias on what workers are actually paid.
  • It determines what raises each worker should receive to close the gaps.

“You have to explicitly test for bias,” Anderson says. “If it’s there, you use that evidence to inform your decisions to correct for it.”
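
The sketch below is not Anderson’s actual model; it is a simplified illustration of the three stages on synthetic data, using ordinary least-squares regression and hypothetical pay determinants:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "education_years": rng.integers(12, 21, n),
        "experience_years": rng.integers(0, 30, n),
        "female": rng.integers(0, 2, n),
    })
    # Synthetic pay with a built-in penalty for the model to detect.
    df["pay"] = (30_000 + 2_000 * df["education_years"]
                 + 1_000 * df["experience_years"]
                 - 4_000 * df["female"] + rng.normal(0, 3_000, n))

    # Stage 1: expected pay from gender-neutral determinants only.
    neutral = sm.add_constant(df[["education_years", "experience_years"]])
    df["expected_pay"] = sm.OLS(df["pay"], neutral).fit().predict(neutral)

    # Stage 2: test whether gender explains the gap between actual and
    # expected pay; the coefficient estimates the bias in currency units.
    gap_model = sm.OLS(df["pay"] - df["expected_pay"],
                       sm.add_constant(df[["female"]])).fit()
    penalty = gap_model.params["female"]
    print(f"estimated gender penalty: {penalty:,.0f}")

    # Stage 3: recommend raises that close the measured gap.
    df["recommended_raise"] = np.where(df["female"] == 1, -penalty, 0.0)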

Tools for Identifying Bias in Machine Learning

Fortunately, Anderson says, machine-learning fairness is getting easier to achieve. A variety of free software tools exist to analyze algorithms for hidden biases and suggest improvements.

AI Fairness 360

IBM created AI Fairness 360 and then invited other developers to add to it. The toolkit contains more than 70 fairness metrics for detecting bias and about 10 algorithms for mitigating it. It includes tutorials for applying the tools to fix age bias in credit scores and racial bias in medical expenditures.
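
As a taste of the toolkit, here is a minimal sketch that wraps a small made-up credit dataset and computes two of AI Fairness 360’s standard metrics, assuming the aif360 Python package is installed:

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Made-up credit decisions: label 1 = favorable outcome,
    # age_group 1 = older applicants (designated privileged here).
    df = pd.DataFrame({
        "age_group": [1, 1, 1, 0, 0, 0],
        "label":     [1, 1, 0, 1, 0, 0],
    })

    dataset = BinaryLabelDataset(df=df, label_names=["label"],
                                 protected_attribute_names=["age_group"])
    metric = BinaryLabelDatasetMetric(dataset,
                                      privileged_groups=[{"age_group": 1}],
                                      unprivileged_groups=[{"age_group": 0}])

    # Ratios far from 1 (or differences far from 0) flag a disparity in
    # favorable-outcome rates between the two age groups.
    print("disparate impact:", metric.disparate_impact())
    print("statistical parity difference:", metric.statistical_parity_difference())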

Fairlearn

An open-source software package, Fairlearn includes a dashboard that lets users choose which aspects of fairness to measure and which groups to measure it for. It graphically displays results and then offers tools for mitigating the biases it has detected.
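
The dashboard’s per-group view is built on the same metrics the package exposes in code. Below is a minimal sketch with made-up data, using Fairlearn’s MetricFrame:

    from fairlearn.metrics import MetricFrame, demographic_parity_difference
    from sklearn.metrics import accuracy_score

    # Made-up labels, predictions and a sensitive feature per person.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

    mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                     sensitive_features=sex)
    print(mf.by_group)       # accuracy for each group
    print(mf.difference())   # largest accuracy gap between groups

    # Difference in rates of favorable predictions between groups.
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))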

Google AI

TensorFlow, Google’s machine-learning platform, features Fairness Indicators, a tool that allows programmers to continually test models for bias while developing them. Its website also includes practical exercises for learning to use Fairness Indicators.
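
Fairness Indicators runs on top of TensorFlow Model Analysis. The sketch below, with hypothetical column names and made-up data, configures it to slice metrics by gender; exact configuration details may vary across library versions:

    import pandas as pd
    import tensorflow_model_analysis as tfma
    from google.protobuf import text_format

    # Made-up labels, model scores and a slicing column.
    df = pd.DataFrame({
        "label":      [1, 0, 1, 0, 1, 0],
        "prediction": [0.9, 0.4, 0.7, 0.6, 0.8, 0.2],
        "gender":     ["F", "F", "F", "M", "M", "M"],
    })

    eval_config = text_format.Parse("""
      model_specs { label_key: "label" prediction_key: "prediction" }
      metrics_specs {
        metrics {
          class_name: "FairnessIndicators"
          config: '{"thresholds": [0.5]}'
        }
      }
      slicing_specs {}                          # overall metrics
      slicing_specs { feature_keys: "gender" }  # per-gender slices
    """, tfma.EvalConfig())

    eval_result = tfma.analyze_raw_data(df, eval_config)
    # In a notebook, the results render as an interactive widget:
    # tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)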

Machine-Learning Fairness and the Law

As concerns have grown about bias in machine learning, so have efforts to regulate it. In 2019, the European Union issued voluntary guidelines for artificial intelligence systems, recommending that businesses routinely assess the fairness of their algorithms and make them understandable to consumers.

A US Senate bill goes further. If signed into law, the Algorithmic Accountability Act, first introduced in 2019, would require companies to produce annual impact statements, evaluating fairness and discrimination as well as privacy and security. The rules would apply to firms with data on more than 1 million people.

Anderson believes both measures strike a reasonable balance, enhancing fairness in machine learning without micromanaging its content. “They’re not saying what you can and can’t use,” he says. “They’re saying you have to measure fairness and disclose it. I think that kind of transparency is a reasonable compromise.”

Making Machine Learning Fair to Everyone

For businesses to fully enjoy the benefits of artificial intelligence, they have to learn to manage the ethical risks. In recent years, major companies have become more proactive, creating tools to combat bias in machine learning and promote fairness to all demographic groups.

A degree program such as the online Master of Science in Analytics at Villanova University can help to equip students with the skills necessary to improve machine-learning fairness. Courses on data mining and data models teach students what to include and what to omit in models, as well as how to evaluate a model’s performance. A practicum provides them with the opportunity to put those skills to use. Explore how such a program can prepare you for careers on the technological frontier of artificial intelligence.

Sources:

Copenhagen Business School, “How I Did It”

European Parliament, “EU Guidelines on Ethics in Artificial Intelligence”

Fairlearn, Improve Fairness of AI Systems

Google AI, Tools for Everyone

IBM Research, AI Fairness 360

IEEE Spectrum, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation”

Reuters, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women”

US Senator Ron Wyden, “Wyden, Booker, Clarke Introduce Bill Requiring Companies To Target Bias In Corporate Algorithms”