How To Cultivate a Responsible Machine Learning Culture in Your Organization

Even though we’ve come a long way in improving the accuracy and integrity of AI, it’s still far from perfect. If no one is responsible for your AI, how can you guarantee it will still be accurate and trustworthy a month from now, or a few years down the line? Developing a responsible machine learning culture is key to preventing problems before they arise.

The following key takeaways were inspired by the recent ebook “Responsible Machine Learning: Actionable Strategies for Mitigating Risks & Driving Adoption,” written by H2O.ai’s Patrick Hall, Navdeep Gill, and Benjamin Cox. To read more, click here.

Identifying the Gaps in Your Organization

How do I know if my organization’s culture is “responsible”?

According to Hall, Gill, and Cox, responsible machine learning culture starts with asking the right questions.

  • How does our organization track the use of artificial intelligence and machine learning? 
  • Who monitors our technology systems?
  • What’s our plan in the case of an AI incident? 

If you’re unable to confidently answer any of the above, your organization has some work to do. The good news, however, is that you’ve identified a starting point.

How you solve these problems depends on your organization and its structure, but we suggest posing a series of follow-up questions to help break down the issues you’ve uncovered.

  • Who can take on the responsibility of AI monitoring? Should it be a team or an individual? 
  • Do we have the right software in place to track our AI systems automatically, or does it still require human intervention?
  • Are we able to accurately identify anomalies and recognize when an incident has occurred? (A minimal monitoring sketch follows this list.)
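What “tracking automatically” looks like will vary by team, but as a rough illustration, here is a minimal Python sketch that compares recent model scores against a baseline using the population stability index (PSI). Everything in it is an assumption for illustration: the synthetic scores, the window sizes, and the rule-of-thumb threshold of 0.25 all stand in for whatever your own systems and risk tolerance dictate.

import numpy as np

def population_stability_index(baseline, current, bins=10):
    # Compare two score distributions; larger values indicate more drift.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) on empty bins
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

def check_for_incident(baseline_scores, recent_scores, threshold=0.25):
    # Flag a potential AI incident when drift in model scores crosses the threshold.
    psi = population_stability_index(baseline_scores, recent_scores)
    return psi, psi > threshold

# Illustrative usage with synthetic scores standing in for real model output.
rng = np.random.default_rng(42)
baseline_scores = rng.beta(2, 5, size=5000)  # scores logged when the model shipped
recent_scores = rng.beta(5, 2, size=1000)    # scores from the most recent week
psi, incident = check_for_incident(baseline_scores, recent_scores)
print(f"PSI: {psi:.3f}, incident flagged: {incident}")

A real deployment would run a check like this on a schedule, alert the accountable owner, and track input features and fairness metrics alongside raw scores, but the underlying idea is the same: someone, or something, has to be watching.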

Traits and Benefits of a Responsible Machine Learning Culture

Once you’ve identified the weak spots in your organization, it’s important to set goals for your vision of a responsible company culture.

While there’s no set definition of a responsible machine learning culture, companies that are best at using AI have the following traits.

Accountability

Defining individual roles for each member of your team is key, especially when AI is involved. Whether an individual is directly involved in the data engineering process or is less hands-on as an engagement leader, they should understand their role in the company’s processes. 

“If an organization assumes everyone is accountable for ML risk and AI incidents, the reality is that no one is accountable.” — “Responsible Machine Learning”

The benefit? If and when a problem occurs, your organization will know exactly where to look: not necessarily to assign blame, but to move forward with a solution. 

Preparedness

Preparedness builds on accountability. You now know who to turn to when something happens, but then what?

Establishing processes for resolving problems and reporting incidents will remove a huge weight from the shoulders of your team. In the midst of what will inevitably be a stressful situation, having the next steps readily available makes for a quicker, calmer resolution.
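As a hypothetical illustration (the structure and field names below are ours, not a standard from the ebook), a team might capture incidents in a consistent format so the agreed next steps are always at hand:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    model_name: str    # which system was affected
    reported_by: str   # the accountable person or team
    description: str   # what went wrong, in plain language
    severity: str      # e.g. "low", "medium", or "high"
    next_steps: list   # pre-agreed playbook steps for this kind of incident
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False

# Illustrative usage: log an incident and hand the playbook to its accountable owner.
report = AIIncidentReport(
    model_name="loan_approval_v3",
    reported_by="ml-monitoring-team",
    description="Approval rate dropped sharply for one applicant segment.",
    severity="high",
    next_steps=["Pause automated decisions", "Notify the engagement leader",
                "Run a bias audit", "Document findings and remediation"],
)
print(report)

However you record it, the point is that the reporting path and the next steps are decided before the incident, not during it.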

Dogfooding

“Dogfooding” is slang for “eating your own dog food”—or, in other words, using your own technology internally. 

“Responsible Machine Learning” introduces dogfooding in the context of the Golden Rule: “if you wouldn’t use a machine learning system on yourself, you probably should not use it on others.” 

Dogfooding is the ultimate way to mitigate risk in your AI solutions and reinforce trust in your technology. Think: If it’s not good enough for you, why would it be good enough for your clients?

3 Steps to Implement a Responsible Machine Learning Culture

You’ve identified the gaps in your organization. You understand what your culture should be. Now, how do you get there? 

A responsible machine learning company culture doesn’t develop overnight. Transitioning your machine learning culture from nonexistent to responsible happens through a series of gradual changes.

Here are three steps you can take to make significant progress. 

1. Focus on Diversity 

The general lack of diversity is an ongoing issue in the data science industry. Focus on creating a diverse team of individuals from all backgrounds who can contribute a wide range of perspectives to your artificial intelligence systems.

2. Partner With an AI Ethics Expert 

Artificial intelligence is complex. Whether you’ve created your own technology solution or received external help, it’s important to partner with a trusted advisor to proactively address considerations and concerns around the ethics of AI.

All too often we hear about the unintended consequences of AI. Mitigating the risks associated with AI can be very cumbersome without the right team culture and expertise to build an ethical data science and AI practice.

Having real and hard conversations about bias and risk, knowing what tools to adopt, and moving from ethics as a value to ethics as a virtue are just a few of the considerations you’ll need to have a firm grasp on. Whether you’re a data science veteran or an AI novice, there’s no shame in asking for help. 

3. Slow it Down

“Responsible Machine Learning” mentions the engineering adage “move fast and break things.” While it may work for lower-risk environments like advertising, it’s not necessarily great advice for the data science industry. 

When your AI solutions apply to fields like healthcare or criminal justice, it’s important to understand the consequences of technology going wrong. Slow down when designing and implementing new solutions—mistakes shouldn’t be a result of careless, impulsive behavior. 

Before you implement artificial intelligence, consider your organization’s readiness. Are you prepared to manage responsible machine learning?

Gain More Expert Insight

Creating and implementing a responsible machine learning culture is only one facet of building AI solutions you can trust. For other helpful Trusted AI and data science resources similar to this article, subscribe to our Voices of Trusted AI monthly digest. It’s a once-per-month email that contains helpful Trusted AI resources, reputable information, and actionable tips straight from data science experts themselves.

Merilys Huhn is an Associate Data Science Consultant and Nicole McCaffrey is the COO at Pandata.