Journal of an AI Translator: A Marketer’s Journey Through the Land of AI
Entry #3 – Unintended Consequences
I recently got married, which got me thinking about unintended consequences. A wedding is a great example of how things you never meant to happen can happen anyway. I am thankful that we did not experience this, but I know many have.
So let’s set the scene. You are innocently planning your wedding, trying to include as many of your friends and family as possible, but you also have to think about a budget and/or venue capacity, right? You will never be able to include everyone, so what happens? People end up feeling slighted, with the unintended consequence of straining those relationships. All you were trying to do was plan an amazing day to celebrate something special; your intentions were good! So how in the world does this connect to AI?
As technology ramps up at a dizzying pace, we need to take a step back and consider the implications it has for our world. For the purpose of this entry, I will focus on some of the ways the unintended consequences of AI can single-handedly cripple your brand and organization.
We all understand the potential risks of things like self-driving cars, but we also need to think about the unintended consequences – things we didn’t mean to happen or didn’t take into account when planning our solution. This is in no way an exhaustive list, but here are some key things to consider as a marketer when starting out on, or moving further down, the path with AI.
Institutionalization of bias – as we talked about in my first entry, an algorithm trained on historical or limited data will carry those limitations forward. For example, it will favor the preferences of existing customers and fail to account for those of potential new customers. That can lead to unintended discrimination based on race, gender, and/or ethnicity – something that can very quickly tarnish or destroy a brand’s reputation.
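To make this concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the spend levels, segments, and response rules are hypothetical – but it shows the pattern: a model trained only on existing customers looks accurate on them and misfires on a new audience whose behavior differs.

```python
# A minimal, hypothetical sketch: a model trained only on historical customers
# (who respond when spend > 50) is applied to a new audience that behaves
# differently (responds when spend > 70), and its accuracy drops sharply.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical training data: existing customers only.
spend_existing = rng.normal(50, 10, 1000)
responded_existing = (spend_existing > 50).astype(int)
model = LogisticRegression().fit(spend_existing.reshape(-1, 1), responded_existing)

# New audience the model has never seen, with a different response pattern.
spend_new = rng.normal(70, 10, 1000)
responded_new = (spend_new > 70).astype(int)

print("accuracy on existing customers:", model.score(spend_existing.reshape(-1, 1), responded_existing))
print("accuracy on new audience:      ", model.score(spend_new.reshape(-1, 1), responded_new))
```

The point is not the specific numbers; it is that nothing in the training data told the model the new audience exists, so its confident-looking predictions quietly encode yesterday’s customer base.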
Loss of control – human judgement must be a part of the process; removing that element is simply reckless. It’s the reason human-centered AI is such a huge movement and why we hear so much about the regulation and governance of AI. Humans need to be a part of this iterative process – reviewing and testing the algorithm, as well as assessing its performance.
In addition, you may have heard the term “explainable AI,” which refers to people being able to understand how a model arrives at its decisions – something that is key for trust and transparency.
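Here is a minimal Python sketch of both ideas together. The features, the data, and the 0.8 confidence threshold are all made-up assumptions; the shape of the workflow is what matters: confident decisions get automated and come with a simple explanation, while borderline cases are escalated to a person.

```python
# A minimal sketch, with made-up features and thresholds, of two ideas from above:
# keep a human in the loop (low-confidence predictions go to a reviewer) and make
# automated decisions explainable (show which features drove the score).
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["email_opens", "days_since_last_purchase", "support_tickets"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
# Made-up ground truth: opens help, inactivity and support tickets hurt.
y = ((1.5 * X[:, 0] - 1.0 * X[:, 1] - 0.5 * X[:, 2]
      + rng.normal(scale=0.5, size=500)) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def decide(customer, confidence_threshold=0.8):
    """Automate only confident decisions, with reasons; escalate the rest to a human."""
    proba = model.predict_proba(customer.reshape(1, -1))[0]
    label = int(proba.argmax())
    if proba[label] < confidence_threshold:
        return "human_review", None, {}
    # For a linear model, coefficient * feature value is that feature's
    # contribution to the log-odds score – a simple, built-in explanation.
    contributions = dict(zip(feature_names, model.coef_[0] * customer))
    return "auto", label, contributions

print(decide(np.array([2.0, -1.0, 0.0])))   # clear-cut case: automated, with reasons
print(decide(np.array([0.1, 0.1, 0.0])))    # borderline case: routed to a person
```

Real systems use richer explanation methods than inspecting coefficients, but the principle is the same: a person can see why the model decided what it did, and gets to overrule it when the model isn’t sure.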
Disregard for Privacy – this one should go without saying, but using data without a consumer’s consent is a no-go. With data protection regulations like GDPR and CCPA leading the way, the days of “this is just for marketing purposes” are long gone. Being clear on how you plan to use the personal data you collect – and giving consumers the right to opt out – isn’t just a best practice, it’s a must. Even more important is being certain to de-identify personally identifiable information (PII) when dealing with regulated data.
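As a small example of what de-identification can look like in practice – the column names and the salt here are hypothetical – the Python sketch below drops a direct identifier and replaces another with a salted one-way hash before the data goes anywhere near a model.

```python
# A minimal, hypothetical sketch of de-identifying a dataset before modeling:
# direct identifiers are dropped or replaced with a one-way hash, while the
# non-identifying fields needed for analysis are kept.
import hashlib
import pandas as pd

customers = pd.DataFrame({
    "email": ["ana@example.com", "ben@example.com"],
    "full_name": ["Ana Lopez", "Ben Ng"],
    "zip_code": ["44114", "44115"],
    "lifetime_value": [1200.0, 430.0],
})

def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    """Replace an identifier with a salted SHA-256 hash so records can still be joined."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

deidentified = (
    customers
    .drop(columns=["full_name"])                               # drop a direct identifier
    .assign(customer_key=customers["email"].map(pseudonymize)) # keep a join key, not the email
    .drop(columns=["email"])
)

print(deidentified)
```

Keep in mind that quasi-identifiers such as zip code can still re-identify people when combined with other fields, so depending on the regulation they may need to be aggregated or removed as well.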
In my next entry, I want to delve into some real-world examples of these unintended consequences and what we can learn from them. Until then…
Nicole Ponstingle McCaffrey is the Chief People and Marketing Officer & AI Translator at Pandata.