Pandata Blog

AI design and development for high risk industries


A Marketer’s Journey Through the Land of AI – Entry 4, Bias in AI: A Very Human Issue


In the last entry, we covered a few of the unintended consequences of AI, bias being one. For this entry, I’d like to delve a little deeper into bias and the ways humans can both help and hinder the process of removing it from a machine learning model. As a marketer and AI Translator at an AI & ML design and development think tank, the ethics concerning AI and ML are always top of mind for me. We evangelize the need for regulation in this space, having seen so many unintended consequences of AI and ML pilots. Often, organizations rush to AI to solve business problems without considering the implications of algorithmic bias…or even realizing it is there.

In his article, Three Notable Examples of AI Bias, Michael McKenna points to the ways that humans can help AND harm the process of machine learning. First, let’s look at the harm. McKenna says, “all models are made by humans and reflect human biases.” We see this in all three of his examples; the key takeaway is that we should strive for better when building models. Often the bias lives in the data itself, or it is introduced (usually unintentionally) by the human doing the modeling. Humans are inherently preferential, something a computer on its own isn’t. So, preference in, preference out. This is why day after day we read more real-life examples of ML gone wrong, with implications like racism, sexism, and brand reputation damage (to name just a few).

Now for the good stuff! While we have our flaws, humans also do a lot to help the process of creating fair and ethical AI. Human-in-the-loop ML happens when the machine or computer system is unable to solve a problem and needs human intervention, including during the training and testing stages of building an algorithm. This creates a continuous feedback loop that allows the algorithm to give better results over time. Having a human immersed in the process can help in myriad ways, most importantly by detecting bias early and mitigating unintended consequences. At the end of the day, AI can’t happen in a vacuum; it takes humans, with ethics top of mind, to make it a success.
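For readers who like to see the idea in concrete terms, the feedback loop above can be sketched in a few lines of Python. This is purely illustrative, not Pandata's actual process: the classifier, the reviewer, and the confidence cutoff are all stand-in assumptions. The point is the shape of the loop, where the machine handles what it is confident about and routes uncertain cases to a human, whose corrections then grow the training set for the next round.

```python
import random

random.seed(0)  # for a repeatable illustration

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for "the machine is unsure"

def model_predict(item):
    """Stand-in for a real classifier: returns (label, confidence)."""
    return ("approve", random.random())

def human_review(item):
    """Stand-in for a human expert supplying the correct label."""
    return "approve"

def run_loop(items, training_data):
    """One pass of a human-in-the-loop cycle."""
    decisions = []
    for item in items:
        label, confidence = model_predict(item)
        if confidence < CONFIDENCE_THRESHOLD:
            # The machine can't decide reliably, so a human intervenes,
            # and the corrected example improves the next model.
            label = human_review(item)
            training_data.append((item, label))
        decisions.append(label)
    return decisions, training_data
```

In a real project the retraining step would run on the augmented `training_data`, closing the loop; that continuous correction is where early bias detection happens.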

Nicole Ponstingle McCaffrey is the COO and resident AI Translator at Pandata.