
3 Horrifying Examples of Artificial Intelligence and Machine Learning Gone Wrong

When your company deploys AI, things can go wrong. Very, very wrong.

How wrong?

You could be on the hook for nightmares like:

  • Machines that discriminate against customers or employees.
  • Machines that violate consumer rights.
  • Or machines that make bad, costly decisions without telling you why.

The problem is, it’s not always easy for business leaders to understand how AI can go wrong, because they’re often not trained data scientists. They don’t know what they don’t know, so they can’t take steps to prevent AI and machine learning tools from causing damage to people and brands.

Not to mention, many AI failures are the unintended consequences of how the technology was implemented, not malice or wrongdoing.

We’re here to help.

At Pandata, our data scientists understand the unintended consequences that can result when you build, adopt, and deploy AI. We help our clients make AI and machine learning both approachable and ethical.

One great way to do that is to show you horrifying examples of how AI can go wrong, precisely so we can all make better decisions about how to make AI go right.

That’s because when businesses get AI right, everyone wins. Your customers get a better experience. You make more money and become more efficient. Your offerings improve over time. But when businesses get AI wrong, the opposite happens, and they pay the price.

1. AI That’s Racist 

There are plenty of examples of AI accidentally learning to use racist language. The most notable is Microsoft’s chatbot Tay.

Tay was trained on conversations happening on Twitter, so that she could automatically post and converse on the platform. She started posting messages that were innocent enough. But things took a turn when Tay began to learn from the wrong types of Twitter conversations.

Tay’s creators didn’t anticipate the bot would learn from every conversation happening on Twitter, including the ones containing bigoted language about certain races. 

But that’s what she did.

In short order, Tay started posting racist content, prompting Microsoft to quickly shut down the experiment. In the process, Tay became a cautionary tale.
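Why did this happen? Tay learned continuously from whatever users sent her, with no effective filter on what counted as training data. Here’s a minimal sketch of that failure mode. It’s a toy illustration, not Microsoft’s actual system, and the ParrotBot class and its blocklist are invented for this example:

```python
# A toy illustration (not Microsoft's actual system): a bot that
# "learns" by storing phrases from incoming messages and replaying
# them. All names here are invented for this sketch.
import random

class ParrotBot:
    def __init__(self, blocklist=None):
        self.phrases = []
        self.blocklist = blocklist or set()

    def learn(self, message: str) -> None:
        # Tay effectively skipped a check like this: every incoming
        # message became training data, including abusive ones.
        if not any(word in message.lower() for word in self.blocklist):
            self.phrases.append(message)

    def reply(self) -> str:
        # The bot can only say what it has been taught.
        return random.choice(self.phrases) if self.phrases else "..."

bot = ParrotBot(blocklist={"slur"})       # hypothetical filter terms
bot.learn("Hello, Twitter!")
bot.learn("something containing a slur")  # rejected by the filter
print(bot.reply())  # "Hello, Twitter!" -- the only learned phrase
```

The lesson: a bot that learns from unfiltered user input is only as well-behaved as its worst users.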

2. AI That’s Biased Against Women

AI can do real damage behind closed doors, too. That’s what happened when Amazon started using an AI-powered recruiting tool to vet new job candidates.

On paper, the tool made perfect sense. It scanned the resumes submitted to Amazon over the previous 10 years, then looked for patterns that would help identify the very best candidates at scale.

The only problem was that the resumes were heavily skewed towards men, which tracks with gender imbalances in the technology industry.

Because the data contained uncorrected bias, the AI system drew the wrong conclusion about job candidates: that men made better hires than women.

As a result, the system began to penalize resumes from women. Amazon promptly shut down the system, but not before it became national news.
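To make the mechanism concrete, here’s a minimal sketch of how a model trained on skewed historical data picks up bias. It’s a toy illustration with entirely synthetic data, not Amazon’s actual system, and every feature and number is invented:

```python
# A toy illustration with entirely synthetic data (not Amazon's
# actual system): a classifier trained on historically skewed
# hiring labels learns to penalize a gender proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: candidate skill (what we actually want to hire on).
# Feature 1: a proxy for gender -- irrelevant to job performance.
skill = rng.normal(size=n)
is_woman = rng.random(n) < 0.2  # historical applicant pool skews male

# Historical labels: hiring tracked skill, but women were under-hired
# regardless of skill. This is the "uncorrected bias" in the data.
hired = (skill + rng.normal(scale=0.5, size=n) > 0) & ~(is_woman & (rng.random(n) < 0.5))

X = np.column_stack([skill, is_woman.astype(float)])
model = LogisticRegression().fit(X, hired)

# The model assigns a negative weight to the gender proxy: it has
# "learned" to penalize women, reproducing the bias in its training data.
print(model.coef_)
```

Notice that the model never sees the word “gender.” It learns a negative weight on the proxy feature anyway, because the historical labels encode the bias.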

3. AI That Pretends to Be CEOs

Not all horrifying examples of AI gone wrong are mistakes. Certain AI technologies are now so cheap and powerful that any criminal with malicious intent can use them to wreak havoc on companies.

“Deepfake” is the term for super-realistic AI-generated video or audio of a person. It’s not real, but it looks and sounds a lot like a real individual, and you can make it say whatever you want. Deepfakes have been used to impersonate political figures and celebrities in both malicious and humorous ways.

In at least one scenario, they’ve also been used to defraud a company.

Scammers used an audio deepfake to impersonate the CEO of an energy company, tricking an executive into wiring hundreds of thousands of dollars to the scammers’ account. The deepfake CEO ordered his subordinate to make the transfer, and he did.

Interested in more AI horror stories? These examples came from a fantastic curated list of “awful AI” on GitHub.

Do You Need AI You Can Trust?

Here at Pandata, we use a proprietary framework to make sure companies implement Trusted AI, or AI that is approachable and ethical. It’s AI that’s transparent and fair. It’s also AI that won’t hurt your customers, your reputation, or your brand.

If you’re looking to design and implement an AI solution, schedule a free AI Exploration Session with our data science experts. We’ll discuss your needs and how you can implement Trusted AI that achieves your business goals.

Cal Al-Dhubaib is the CEO/AI Evangelist at Pandata.