
Your Guide to Trends Impacting Trustworthy AI [Podcast Takeaways]

At this point, AI should be a familiar term, but what does trustworthy AI mean? At Pandata, our team understands the urgent need for trustworthy AI, and we have made it our mission to educate industry leaders on its importance.

In a recent podcast, Pandata AI Strategist and Founder/CEO Cal Al-Dhubaib discussed how the AI industry continues to evolve, along with the four pillars of trustworthy AI—security, fairness, transparency, and auditability.

Below are some key takeaways from Cal’s conversation with Justin Grammens on his podcast, Conversations on Applied AI – Stories from Experts in Artificial Intelligence.

The Rise of Foundation Models

In the past, building truly intelligent systems was the exclusive domain of Big Tech companies with massive budgets. There were teams dedicated to individual problems, and teams that specialized in language models, exploring grammar, syntax, and phrasing.

Today, there are preexisting models and tools that companies can use as building blocks. It’s possible to take foundation models that have already been trained and refine them for your own use case, which completely changes the AI landscape.
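
As a simple illustration, here is a minimal sketch of the building-block approach using the Hugging Face transformers library; the model name and task below are illustrative choices, not recommendations from the podcast:

```python
from transformers import pipeline

# Load a foundation model that someone else already trained (the model
# name below is illustrative) and use it as a building block.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# No training required: the pretrained model is immediately usable.
print(classifier("The new release fixed our login issue."))

# Refining (fine-tuning) this model on your own labeled data is the next
# step when the off-the-shelf behavior isn't a close enough fit.
```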

Future of AI Use

For many organizations, advanced machine learning is often overkill. Prototyping a complex model can take months before it can even be tested.

Overcomplicating the initial design process can hurt companies. An organization should weigh the feasibility of developing a powerful model that requires specialized experts to build and expert care to retrain. When exploring AI options, ask: at what point does the overhead of maintaining a tool become more costly than the results it delivers?

Cal agrees with Jennifer Strong, who says, “For artificial intelligence to be more useful it needs to become boring,” and it has never been a more exciting time to make AI boring. Shifting from deterministic programming to probabilistic programming means asking not whether models can go wrong, but what you will do when they do.
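
One concrete way to answer “what will you do when the model goes wrong?” is to design the fallback path up front. Here is a minimal sketch, assuming a hypothetical model that returns a prediction with a confidence score; the 0.85 threshold is an arbitrary illustration:

```python
# A minimal sketch of planning for failure: treat every prediction as
# probabilistic and route low-confidence cases to a human instead of
# assuming the model is right.
def triage(prediction: str, confidence: float, threshold: float = 0.85) -> str:
    if confidence >= threshold:
        return f"automated: {prediction}"
    return "escalated: send to human review"

print(triage("approve claim", 0.97))  # automated: approve claim
print(triage("approve claim", 0.62))  # escalated: send to human review
```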

The 4 Pillars of Trustworthy AI

Pandata is founded on human-centered, explainable, trusted AI, which means AI that is more secure, fair, transparent, and auditable. The following pillars of trustworthy AI are critical to consider during the entire design process.

1. Secure

We often build applications and models on large volumes of data; in some cases, millions or even billions of records. This data may contain personally identifiable information (PII), so it’s extremely important to think about how to build models that preserve user privacy. In other words, how do you handle the data, at the most fundamental level, in a way that is ethical and fair to the people it represents?
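
As one small example of handling data ethically at the pipeline level, here is a minimal sketch of scrubbing obvious PII before records reach a model. The regular expressions are deliberately simple illustrations; production systems rely on dedicated de-identification tooling with much broader coverage:

```python
import re

# Illustrative patterns only: real de-identification needs far more
# coverage (names, addresses, IDs) than these two regexes provide.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace simple email and phone patterns with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Reach Jane at jane@example.com or 216-555-0145."))
# Reach Jane at [EMAIL] or [PHONE].
```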

2. Fair

Does your model behave differently given different cultural contexts? Does the model have any associated harms, or exclude potential users from benefiting from the application? These questions of fairness must be considered when building models and applications. 

Transcription technology, for example, often struggles with users who have accents. Beyond making the application itself inclusive, there is also the risk of a model underperforming for a particular group and causing unintended consequences.
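
A basic fairness check along these lines is to slice evaluation results by group and compare. A minimal sketch follows, where the groups and outcomes are made-up stand-ins for, say, transcription accuracy across accent groups:

```python
from collections import defaultdict

# Hypothetical evaluation records: (user group, was the model correct?).
records = [
    ("accent_a", True), ("accent_a", True), ("accent_a", True),
    ("accent_b", True), ("accent_b", False), ("accent_b", False),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, correct in records:
    totals[group] += 1
    hits[group] += int(correct)

# A large gap between groups is a signal of potential harm to investigate.
for group in sorted(totals):
    print(f"{group}: {hits[group] / totals[group]:.0%} accuracy")
```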

3. Transparent

As more complex AI systems and models are built, transparency is absolutely essential. Sometimes you can ask a model why it predicted what it did, but the answers might not be comprehensible, and you might not be able to pinpoint the issue to a specific feature or attribute. In these instances, the data your model is using and how it is being used must be more transparent—to both the builder of the model and the user.
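
One widely used transparency technique in this spirit is permutation importance, which estimates how much each input feature drives a model’s predictions. Here is a minimal sketch with scikit-learn, using a public demo dataset purely for illustration:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Train a model on a public demo dataset (illustrative only).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Shuffle each feature and measure how much performance drops: features
# that matter most to the model show the largest drop.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```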

4. Auditable

Auditability means being able to reproduce or recreate a model’s prediction. If something goes wrong, for example a patient-treatment recommendation that someone claims was biased and harmed them, you need to be able to reproduce the model’s original result and understand why it did what it did.
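
In practice, auditability starts with logging enough metadata at prediction time to rerun the exact same model on the exact same input later. A minimal sketch; the field names here are illustrative, not a standard:

```python
import hashlib
import json
import time

# Record everything needed to reproduce this prediction later: which
# model version ran, on what input, and what it returned.
def audit_record(model_version: str, features: dict, prediction) -> dict:
    payload = json.dumps(features, sort_keys=True)
    return {
        "timestamp": time.time(),
        "model_version": model_version,  # pin the exact model
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "input": features,               # the raw features used
        "prediction": prediction,
    }

record = audit_record("risk-model-2.3.1", {"age": 54, "bp": 130}, "low risk")
print(json.dumps(record, indent=2))
```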

How Do We Handle Bias in AI?

Bias isn’t necessarily bad, but harmful bias is. What we define as harmful bias shifts with context over time. Things that we considered appropriate 20 years ago may no longer be considered appropriate today.

Bias is rooted in cultural context, and the crux of it is subjectivity. Whenever multiple humans can’t agree on the right answer, or look at the same problem and see different solutions, the data will carry bias.

The more subjective the end result is, and the more complex your data is, the harder it is to measure and define “objective” and the easier it is to encounter unintended consequences.
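
One simple way to gauge that subjectivity is to measure how often multiple human labelers agree on the same item. A minimal sketch with made-up labels; real projects typically use formal agreement statistics such as Cohen’s kappa:

```python
# Hypothetical labels from three annotators per item.
labels = {
    "item_1": ["positive", "positive", "positive"],
    "item_2": ["positive", "negative", "neutral"],
}

# Fraction of annotators who chose the majority label: low agreement
# means the "right answer" is subjective, so the data will carry bias.
for item, votes in labels.items():
    agreement = max(votes.count(v) for v in set(votes)) / len(votes)
    print(f"{item}: {agreement:.0%} agreement")
```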

The biggest challenge used to be building the models themselves; now the hard part is doing it safely and responsibly. What we do know is that organizations must keep humans involved. The care and monitoring of AI systems will take significant work and highly skilled people who know how to ask the right questions.

Interested in learning more about AI design and development? Listen to the full podcast episode with Cal and Justin.

Eager To Learn More About Trustworthy AI?

Our expert team of data scientists and AI strategists regularly contributes insights to Pandata’s Voices of Trusted AI email digest. Subscribe now to receive a monthly email with helpful trusted AI resources, reputable information, and actionable tips.

Contributor: Cal Al-Dhubaib is the CEO and AI Strategist at Pandata.