At Pandata, we advocate for Trusted AI that is approachable and ethical. This means acknowledging that sometimes there is no single right answer; because we have a duty to safeguard others from unintended consequences, we have to ask difficult questions to arrive at the best one. To put Cultivating Trust into practice, we needed to set out a conceptual framework for what it means to create Trusted AI.
After educating ourselves, brainstorming, and compiling, we established a framework to structure our practice of Trusted AI. This isn’t a rigid checklist that treats every situation the same. Instead, it is a flexible process to scaffold your approach. The central tenet of Pandata’s Trusted AI is a proactive mindset.
This proactive mindset asks you to first consider every case as its own unique set of circumstances and requirements. Risk profiles and solutions are situation-dependent, so in each case educate yourself and others about the potential ethical concerns. Next, consider the impact of your decisions both at scale and at the individual level. Direct effects may (or may not) be obvious, but second- and third-order effects can have substantial impacts over time. Finally, build principles of trust into your project from the beginning: it is much harder to change course mid-project than to build on a Trusted AI framework from the start.
There are three principles that bolster our proactive mindset: Be Transparent, Protect Privacy, and Focus on Fairness. These three principles and consistent effort towards being proactive create the scaffold of Trusted AI.
Principle 1 – Be Transparent
Transparency is about acknowledging and being honest about the process. Every step along the path to an end-product is composed of decisions with ethical consequences. To understand the end-product, we need to be able to monitor and validate these decisions, not have them live in a black box. We must also design models and use algorithms with transparency in mind. If we know all our inputs, our results will be reproducible and auditable. If our process is both explainable by us as developers and understandable by users, we create trust and buy-in. When AI is transparent, we don't just have to hope it is making good decisions; we can trust that it is.
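As one illustrative sketch of "know all our inputs, and results are reproducible and auditable" (the function and record fields here are our own invention, not Pandata's process), a model run can log every input alongside a fingerprint of its configuration and a fixed random seed, so the same run can be repeated and audited later:

```python
import hashlib
import json
import random

def audited_run(config: dict, seed: int = 42) -> dict:
    """Run a toy 'model' while recording everything needed to reproduce it."""
    random.seed(seed)  # fixed seed: the same inputs always give the same output
    record = {
        "inputs": config,  # every input is logged, not hidden in a black box
        "seed": seed,
        "config_hash": hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()
        ).hexdigest(),     # stable fingerprint for later audits
    }
    record["output"] = random.random()  # stand-in for a real prediction
    return record

run1 = audited_run({"lr": 0.01, "epochs": 5})
run2 = audited_run({"lr": 0.01, "epochs": 5})
assert run1["output"] == run2["output"]  # identical inputs, identical result
```

The point is not this particular logging scheme but the habit: if every decision that shaped a result is recorded, the result can be validated by someone other than its author.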
Principle 2 – Protect Privacy
It is easy to fall into the trap of thinking of AI solutions only at scale. The truth is that AI is built on data generated by individuals, and we must design with the end user in mind. We must protect privacy from the beginning to the end of the AI pipeline. Be clear and upfront about what data is being collected and how it is being used. Build protections for the individuals behind the data points, especially end users. Consider opt-out frameworks and deidentification. Finally, consider the risk profile of the data. Data that could cause greater harm, such as Personally Identifiable Information and Protected Health Information, must be identified as potentially harmful and its use restricted. AI that puts privacy first satisfies current secure data practices and will comply with future regulations.
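To make "deidentification" concrete, here is a minimal sketch, assuming a record with a `user_id` and a hand-picked list of directly identifying fields (both our own assumptions, not a prescribed Pandata schema): direct identifiers are dropped, and the user id is replaced with a salted hash so records can still be linked without exposing the person behind them.

```python
import hashlib

# Fields treated as directly identifying in this toy example.
PII_FIELDS = {"name", "email", "ssn"}

def deidentify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the user id with a salted hash."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    clean["user_id"] = hashlib.sha256(
        (salt + str(record["user_id"])).encode()
    ).hexdigest()  # stable pseudonym; the salt stays secret, outside the dataset
    return clean

raw = {"user_id": 123, "name": "Ada", "email": "ada@example.com", "age": 36}
safe = deidentify(raw, salt="keep-me-secret")
assert "name" not in safe and "email" not in safe
```

Salted hashing is only one technique; depending on the risk profile of the data, stronger measures (aggregation, suppression, or formal approaches such as differential privacy) may be warranted.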
Principle 3 – Focus on Fairness
Technology, especially AI, is not neutral; it is shaped by the society and individuals that create it. Biases inevitably exist in data sources, models, and outcomes, and we must deliberately design all facets of an AI project with fairness in mind. There are myriad ways to conceptualize fairness, so we need to take the extra step of defining what is fair in a given situation. Fairness promotes equity and inclusion, looking at multiple axes of social identity. It considers proxy measures for characteristics that are protected or are otherwise targets of discrimination. Like transparency, fairness requires monitoring, as biases can develop over time. When AI is fair, we can promote equality and equity to mitigate the harm from disparate treatment.
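Because "fair" must be defined per situation, one common (but not universal) definition worth illustrating is demographic parity: positive-outcome rates should be similar across groups. The sketch below, with invented group labels and decisions, computes per-group rates and the gap between them, the kind of number a fairness monitor might track over time:

```python
from collections import defaultdict

def positive_rates(outcomes):
    """Positive-outcome rate per group, for a demographic-parity check."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, approved in outcomes:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Toy (group, decision) pairs; in practice these come from model outputs.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = positive_rates(decisions)           # {"A": 0.667, "B": 0.333}
gap = max(rates.values()) - min(rates.values())
```

Demographic parity is only one lens; other definitions (equalized odds, calibration) can conflict with it, which is exactly why the fairness criterion has to be chosen deliberately for each project.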
AI is at the point where it is no longer a question of whether the technology is advanced enough to be useful. Instead, the question is whether we can trust AI. AI will not become truly widespread until people believe that the AI process is transparent, privacy-oriented, and fair. Trusted AI is not easy. We use our scaffold to proactively ensure that each step is performed with due consideration and deliberation. Trusted AI means that Pandata can offer what is becoming an essential business competency in an approachable and ethical way.
Merilys Huhn is an Associate Data Scientist at Pandata.