
Skim the Latest in Trusted AI

September 30th, 2021

Welcome to the Voices of Trusted AI! It's hard to stay up to date with all the changes and emerging trends in Trusted AI. I know, because I subscribe to dozens of publications and spend hours trying to keep my head above water.

With this digest, the work is done for you. Each month, you'll receive some of the most impactful articles in Trusted AI, highlights on movers and shakers in the space, and insight from trained data science professionals.

You'll also have the opportunity to ask a data scientist any question on your mind, and we'll feature the response in an upcoming edition.

Thank you for your commitment to adopting and building more responsible AI systems. I am so glad that you're joining us on this journey.

Cal Al-Dhubaib, CEO

What Do We Mean by Trusted AI?

Trusted AI is the discipline of designing AI-powered solutions that maximize the value of humans while being more fair, transparent, and privacy-preserving.


The Latest in Trusted AI

Getting to Know—and Manage—Your Biggest AI Risks

What it’s about: This article from McKinsey highlights six types of AI risk and the importance of organizations recognizing and safeguarding against them.

Why it matters: This is a great example of how the definition of ‘Trusted AI’ is still in flux. Industry experts increasingly agree that these risks represent a growing pain point in AI, and trust sits at the core of all of them.

Read More

A Guide to Building Trustworthy and Ethical AI Systems

What it’s about: This guide from DataRobot takes a deep dive into the practical concerns many people have when developing and deploying a trusted AI solution, along with tools to address these concerns.

Why it matters: There’s a lot to unpack in this guide, but I appreciate that DataRobot converts this complex issue of trust in AI into something that’s easy to understand. I also like how this guide includes humility and fairness alongside more quantitative aspects of trust like performance and accuracy.

View the Guide

AI Ethics Experts React to Google Doubling Embattled Ethics Team

What it’s about: This article touches on the industry’s response to Google’s latest announcement: the company will double its AI ethics research staff to 200 members and boost the group’s operating budget.

Why it matters: As the risks and unintended consequences of AI have become more apparent, tech giants like Google have come under increased scrutiny. But is doubling your AI ethics research team the best way to build trusted AI models?

Read More


Connect With Trusted AI Experts

Meet Carol Smith, Senior Research Scientist

Carol Smith is a senior research scientist in human-machine interaction at the Carnegie Mellon University Software Engineering Institute and an adjunct instructor for the CMU Human-Computer Interaction Institute. Carol has been conducting research to improve the human experience across industries for over 20 years and has been working to integrate ethics and improve experiences with AI systems, autonomous vehicles, and other emerging technologies since 2015. Carol is recognized globally as a leader in user experience, is an ACM Distinguished Speaker, and is an editor for the Journal of Usability Studies. She holds an M.S. in Human-Computer Interaction from DePaul University.

Your Trusted AI Questions, Answered

What is an acceptable level of bias to strive for in your data?

In an ideal world, all data and subsequent AI solutions would be 100% bias free.

Unfortunately, humans are biased creatures, and data reflects patterns in human behavior. Fortunately, there are ways to identify potential biases in underlying data and mitigate them, both at the data level and at stages throughout the solution.

It ultimately comes down to determining the impact of your solution. Early on, ask key questions such as: Who will be affected by this solution and how? What kind of disparate impact may exist? What processes will be put in place to monitor impact over time? How might we mitigate any biases?

These questions will help you establish an understanding of what biases may exist in your data and how they can influence downstream decisions, which in turn informs any interventions.

Resources such as IBM's AI Fairness 360 toolkit, along with capabilities built into data science platforms such as H2O.ai, provide accessible ways to measure and mitigate bias.
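To make that concrete, here's a minimal sketch of measuring bias at the data level and mitigating it by reweighing, using AI Fairness 360. The tiny DataFrame, the "hired" label, and the "sex" protected attribute are illustrative assumptions, not data from a real project:

```python
# A minimal sketch of measuring and mitigating data-level bias with IBM's
# AI Fairness 360. The DataFrame, "hired" outcome, and "sex" protected
# attribute below are hypothetical, purely for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = privileged group (assumed)
    "score": [0.9, 0.7, 0.8, 0.6, 0.4, 0.5, 0.3, 0.2],
    "hired": [1, 1, 1, 1, 0, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias in the data itself, before any model is trained.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=privileged,
    unprivileged_groups=unprivileged,
)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

# Mitigate at the data level: reweigh examples to balance the groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_balanced = rw.fit_transform(dataset)
```

A disparate impact of 1.0 indicates parity between the two groups; values well below 1.0 flag a potential problem worth investigating before the data ever reaches a model.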

Using explainable and interpretable models over black-box models will also provide insight into how your solution is making decisions.
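As a small illustration of that point, an interpretable model such as a logistic regression exposes exactly how each feature pushes its predictions, which a black-box model won't. The feature names below are hypothetical:

```python
# A minimal sketch of favoring an interpretable model: a logistic regression
# whose coefficients can be read directly. Feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
feature_names = ["income", "tenure", "age", "num_accounts"]  # illustrative only

model = LogisticRegression().fit(X, y)

# Each coefficient shows how strongly, and in which direction, a feature
# pushes the predicted outcome.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>12}: {coef:+.3f}")
```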

While we can and should strive to eliminate bias in data, it is crucial to understand the effect a solution's output will have, and to measure and mitigate any potential disparate impacts. Advances and tools in the field have made doing so both standard practice and accessible.

– Hannah Arnson, PhD // Director of Data Science