
Is Lack of Diversity in AI Costing Your Business?

One day in 2019, the legal team at Facebook woke up to a lawsuit in their inboxes, filed by the U.S. Department of Housing and Urban Development.

The charge? Facebook was violating the Fair Housing Act.

The cause? Bias in Facebook’s AI algorithms.

The resulting court case found that Facebook was violating not only the Fair Housing Act but also the U.S. Constitution by allowing its advertisers to target ads at individuals according to their religion, gender and race, all classes protected against discrimination.

Facebook’s algorithm had learned that real estate ads for house sales and apartment rentals received the most engagement when they were shown to white people. So the algorithm showed those ads to white people and stopped showing them to people of color and other minority groups. The government noticed this bias and took Facebook to court.

Bias in AI occurs for a number of reasons. Sometimes the data is not representative (it is skewed toward a particular gender or ethnicity, for example). Sometimes algorithms amplify biased data. And at other times, the AI is biased because the people who create the algorithms are biased, either knowingly or unknowingly.
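To make the data-representativeness point concrete, here is a minimal sketch of two quick checks, assuming a hypothetical lending dataset in a pandas DataFrame with “gender” and “approved” columns (the data and column names are illustrative, not drawn from any real system). It looks at how well each group is represented and compares positive-outcome rates across groups, a rough version of the commonly cited four-fifths rule.

```python
# Minimal, illustrative bias checks; the DataFrame and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "F", "M"],
    "approved": [0,   1,   1,   0,   1,   0,   1,   1],
})

# 1. Representation: is any group badly under-represented in the training data?
print(df["gender"].value_counts(normalize=True))

# 2. Outcome rates: compare the share of positive outcomes across groups.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# A ratio well below ~0.8 (the "four-fifths rule") is a common warning sign.
print("Disparate impact ratio:", round(rates.min() / rates.max(), 2))
```

Checks like these do not remove bias on their own, but they can surface skewed data before a model learns from it.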

The Link Between Lack of Diversity and Biased AI

One of the leading causes of bias in AI might very well be the lack of diversity found in the data science and technology industry. 

STEM professions, particularly those in data science, are notorious for lacking gender and ethnic diversity. One study found that a whopping 85% of data scientists are men. Another study revealed that data science has the lowest diversity of all tech fields. A study published by the AI Now Institute found that “More than 80% of AI professors are men, and only 15% of AI researchers at Facebook and 10% of AI researchers at Google are women.”

This lack of diversity can be attributed largely to unequal access to STEM education programs and exclusionary workplace cultures. And any field that is overwhelmingly white and male is likely to replicate or perpetuate historical biases and power imbalances in its work.

The Cost of Lack of Diversity

Bias in AI caused by lack of diversity comes at a cost. It hurts the people who are discriminated against, of course. But it also hurts society at large by hindering full participation in the economy.

Lack of diversity also breeds mistrust among consumers: biased algorithms produce distorted search results and product recommendations and hinder balanced, accurate decision making.

Lack of diversity in AI also skews hiring, perpetuating bias in who is selected for job interviews and who is rejected.

Finally, as we saw with Facebook, persistent and uncorrected bias in AI leads to lawsuits, regulatory sanctions, fines and damage to brand reputation.

How Pandata Is Working Towards Diverse and Trusted AI

Pandata is defying industry norms by designing trusted AI solutions with a diverse team of data scientists. We have prioritized diversity in the workplace by building a team of women and men, many with multicultural and multi-ethnic backgrounds.

“A lot of organizations in data science focus most of their attention on developing better models or finding better data to combat unethical AI solutions,” says Cal Al-Dhubaib, CEO of Pandata. “But at Pandata, we’ve found that cultivating a diverse, inclusive workforce organically allows us to solve complex problems in a more meaningful way.”

If you want to avoid bias in AI, improve the diversity of your data science team. Or hire Pandata for your next AI project. Pandata is improving diversity in STEM and data science by welcoming, employing and teaching individuals of all backgrounds. That leads to human-centered, ethical and unbiased AI.

Gain More Expert Insight

Stay up-to-date on the latest in trusted AI and data science by subscribing to our Voices of Trusted AI monthly digest. It’s a monthly email containing helpful trusted AI resources, reputable information and actionable tips straight from data science experts.

Hannah Arnson is the Director of Data Science at Pandata.