
4 Ways to Reduce Bias in AI

Ever since the advent of the first computer, programmers have known that garbage in equals garbage out. In other words, the quality of the data you feed into a machine determines the quality of what you get out of it.

The rise of artificial intelligence (AI) has put a twist on this old maxim. Today, when it comes to AI, we might just as accurately say, “Bias In, Bias Out.” Biased data leads to biased outcomes.

AI bias, also called algorithmic bias, happens when an algorithm generates answers, recommendations, or other results that are prejudiced, discriminatory or unfair. Sometimes this bias is statistical, as when sampled values systematically differ from the true values of the underlying population. Other times it is social, as when the people who create the algorithm are prejudiced (knowingly or unknowingly) against a person or group of people.

Twitter, for example, has admitted that its photo-cropping algorithm was once racist, cropping out Black faces in favor of white faces. Ironically, the most popular virtual assistants in North America (Siri, Alexa, Cortana and Google Assistant) are all driven by AI, and all have female voices. Gender bias?

Because AI is based on math, it is not intrinsically biased. But humans and human behavior are. As AI increasingly augments human judgment and decision making, we must create AI algorithms that are free from human bias. Here are the top four ways to reduce bias in AI.

1. Measure for Bias

If you want to discover whether an AI algorithm is perpetuating bias, look long and hard for that bias. Bias, after all, is typically based on preconceived notions and unexamined judgment, which means it is not always obvious. Measure for bias in two places: the input and the output.

Look for bias in the data used to build the initial algorithm, and then look for bias in the model output. Use common metrics, such as group representation across activity types.
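As a concrete illustration, here is a minimal sketch of both checks in Python with pandas. The column names (“group”, “approved”, “predicted”) and the toy data are hypothetical placeholders; the pattern is what matters: compare group representation in the inputs, then compare favorable-outcome rates in the outputs.

```python
# A minimal sketch of measuring bias in both the input data and the model
# output. The column names and the toy data are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":  [1, 1, 0, 1, 0, 0, 1, 0],   # historical (input) labels
    "predicted": [1, 1, 1, 1, 0, 0, 1, 0],   # model (output) labels
})

# Input-side check: is each group adequately represented in the data?
print(df["group"].value_counts(normalize=True))

# Output-side check: does the favorable-outcome rate differ by group?
rates = df.groupby("group")[["approved", "predicted"]].mean()
print(rates)

# A wide gap between groups' predicted rates is a red flag worth auditing.
print("Predicted-rate gap:", rates["predicted"].max() - rates["predicted"].min())
```

If the gap in the model output is wider than the gap already present in the historical labels, the model may be amplifying the bias rather than merely reflecting it.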

2. Keep Humans in Your ML Feedback Loop

Building an effective and accurate AI tool requires learning. Data science is iterative by nature, and often involves a continuous feedback loop that helps a machine learn from its mistakes and successes and get better over time. This is especially true during the training and testing stages of building an AI algorithm.

One vital way to recognize and prevent AI bias is to include humans in this feedback loop. When humans are immersed in the process, they can detect bias early and take steps to mitigate unintended consequences.
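What might that look like in code? Here is a minimal sketch of one common human-in-the-loop pattern, assuming a fitted scikit-learn-style classifier: predictions the model is unsure about are routed to a human reviewer instead of being acted on automatically. The 0.8 confidence threshold and the review queue are hypothetical choices, not a prescribed design.

```python
# A minimal human-in-the-loop sketch, assuming a fitted scikit-learn-style
# classifier with a predict_proba method. The threshold is hypothetical.
import numpy as np

def route_predictions(model, X, confidence_threshold=0.8):
    """Auto-accept confident predictions; queue the rest for human review."""
    proba = model.predict_proba(X)        # shape: (n_samples, n_classes)
    confidence = proba.max(axis=1)
    decisions = proba.argmax(axis=1)
    # Low-confidence cases go to a human before the decision is acted on;
    # reviewer corrections can be fed back into training as new labels.
    review_queue = np.where(confidence < confidence_threshold)[0]
    return decisions, review_queue
```

The same routing idea works for bias checks: instead of (or in addition to) low confidence, you can flag cases from underrepresented groups for review, so a human sees exactly the decisions where the model’s training data was thinnest.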

3. Check for Bias Using Third-Party Tools

If you feel that looking for bias in your own work is a bit like putting the fox in charge of the henhouse, use third-party tools to check for fairness and bias in your AI models. One powerful option is IBM’s AI Fairness 360 Toolkit, an extensible, open-source toolkit that helps you examine, report and mitigate discrimination and bias in machine learning models throughout your AI application lifecycle.
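To give a flavor of how that works, here is a minimal sketch using the toolkit’s Python package, aif360. The column names and toy data are hypothetical; disparate impact and statistical parity difference are two of the many fairness metrics the library provides.

```python
# A minimal sketch using IBM's AI Fairness 360 (aif360) package.
# The column names, group encodings and toy data are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical data: one protected attribute and a binary outcome label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged, 1 = privileged
    "score": [0.2, 0.5, 0.7, 0.3, 0.6, 0.8, 0.9, 0.4],
    "hired": [0, 0, 1, 0, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 = parity).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: gap in favorable-outcome rates (0.0 = parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

When the metrics reveal a problem, AI Fairness 360 also ships bias-mitigation algorithms, such as reweighing the training data, that you can apply before retraining.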

4. Solicit Feedback

When in doubt, ask the people who are most likely to be at the sharp end of any AI bias. For example, if you think your AI algorithm might be sexist, solicit feedback from the affected genders. If you believe your AI solution might be favoring one race over another, or one ethnicity over another, solicit feedback from people of those races and those ethnicities.

By soliciting and recording stakeholders’ thoughts on their goals around inclusion, you capture nuances in your data and your AI model design that no algorithm can surface on its own.

Reducing human bias in your AI algorithms takes human ingenuity, insight and effort. From the outset, assume that you have a problem with bias and start measuring for it. Keep humans in your ML feedback loop, monitor for bias using third-party tools and solicit feedback from the people most likely to be affected by any bias.

Get Actionable Insights on Reducing AI Bias

Learn more ways to prevent bias in AI and stay up-to-date on the latest in trusted AI and data science by subscribing to our Voices of Trusted AI monthly digest. It’s a once-per-month email that contains helpful trusted AI resources, reputable information and actionable tips straight from data science experts themselves.

Bob Wood is a Data Scientist at Pandata.