Even though we’ve come a long way in improving the accuracy and integrity of AI, it’s still far from perfect. If no one is responsible for your AI, how can you guarantee it will remain accurate and trustworthy a month from now, or a few years down the line? Developing a responsible machine learning culture is key to preventing problems before they arise.
The following key takeaways are inspired by the recent ebook “Responsible Machine Learning: Actionable Strategies for Mitigating Risks & Driving Adoption,” written by H2O.ai’s Patrick Hall, Navdeep Gill, and Benjamin Cox. To read more, click here.
How do I know if my organization’s culture is “responsible”?
According to Hall, Gill, and Cox, responsible machine learning culture starts with asking the right questions.
If you’re unable to confidently answer any of the above, your organization has some work to do. The good news, however, is that you’ve identified a starting point.
Solving these problems depends on your organization and its structure, but we suggest posing a series of follow-up questions to help break down the issues you’ve uncovered.
Once you’ve identified the weak spots in your organization, it’s important to set goals for your vision of a responsible company culture.
While there’s no set definition of a responsible machine learning culture, companies that are best at using AI have the following traits.
Defining individual roles for each member of your team is key, especially when AI is involved. Whether an individual is directly involved in the data engineering process or is less hands-on as an engagement leader, they should understand their role in the company’s processes.
“If an organization assumes everyone is accountable for ML risk and AI incidents, the reality is that no one is accountable.” — “Responsible Machine Learning”
The benefit? If and when a problem occurs, your organization will know exactly where to look: not necessarily to assign blame, but to move toward a solution.
Preparedness builds on accountability. You now know who to turn to when something happens, but then what?
Establishing processes for resolving problems and reporting incidents will remove a huge weight from the shoulders of your team. In the midst of what will inevitably be a stressful situation, having the next steps readily available will ensure a quicker, calmer solution.
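As a purely hypothetical illustration (the class, field names, and example values below are our own assumptions, not something from the ebook), an incident-reporting process can start with something as simple as a structured record that every team fills out the same way, so the accountable owner and the pre-agreed next steps are never an afterthought:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a minimal, uniform record for ML incident reports.
# Field names are illustrative, not an established standard.
@dataclass
class MLIncidentReport:
    system: str        # which model or pipeline misbehaved
    owner: str         # the accountable person to contact first
    severity: str      # e.g. "low", "medium", "high"
    description: str   # what happened, in plain language
    next_steps: list = field(default_factory=list)  # pre-agreed remediation steps
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def summary(self) -> str:
        # One-line view for dashboards or on-call channels.
        return f"[{self.severity.upper()}] {self.system} (owner: {self.owner}): {self.description}"

# Usage: a hypothetical drifted model gets logged the same way every time.
report = MLIncidentReport(
    system="credit-scoring-v2",
    owner="jane.doe",
    severity="high",
    description="Approval rates shifted sharply after last week's retrain.",
    next_steps=[
        "Roll back to previous model",
        "Audit training data",
        "Notify stakeholders",
    ],
)
print(report.summary())
```

The point isn’t the code itself but the consistency: when every incident names a system, an owner, and next steps up front, the stressful part of the response is already half done.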
“Dogfooding” is slang for “eating your own dog food”—or, in other words, using your own technology internally.
“Responsible Machine Learning” introduces dogfooding in the context of the Golden Rule: “if you wouldn’t use a machine learning system on yourself, you probably should not use it on others.”
Dogfooding is the ultimate way to mitigate risk in your AI solutions and reinforce trust in your technology. Think: If it’s not good enough for you, why would it be good enough for your clients?
You’ve identified the gaps in your organization. You understand what your culture should be. Now, how do you get there?
A responsible machine learning company culture doesn’t develop overnight. Transitioning your machine learning culture from nonexistent to responsible is a series of gradual changes.
Here are three steps you can take to make significant progress.
The general lack of diversity is an ongoing issue in the data science industry. Focus on building a diverse team of individuals from all backgrounds who can contribute a wide range of perspectives to your artificial intelligence systems.
Artificial intelligence is complex. Whether you’ve created your own technology solution or received external help, it’s important to partner with a trusted advisor to proactively address considerations and concerns around the ethics of AI.
All too often we hear about the unintended consequences of AI. Mitigating the risks associated with AI can be very cumbersome without the right team culture and expertise to build an ethical data science and AI practice.
Having real and hard conversations about bias and risk, knowing what tools to adopt, and moving from ethics as a value to ethics as a virtue are just a few of the considerations you’ll need to have a firm grasp on. Whether you’re a data science veteran or an AI novice, there’s no shame in asking for help.
“Responsible Machine Learning” mentions the engineering adage “move fast and break things.” While that approach may work in lower-risk environments like advertising, it’s not necessarily great advice for the data science industry.
When your AI solutions apply to fields like healthcare or criminal justice, it’s important to understand the consequences of technology going wrong. Slow down when designing and implementing new solutions—mistakes shouldn’t be a result of careless, impulsive behavior.
Before you implement artificial intelligence, consider your organization’s readiness. Are you prepared to manage responsible machine learning?
Creating and implementing a responsible machine learning culture is only one facet of building AI solutions you can trust. For other helpful Trusted AI and data science resources similar to this article, subscribe to our Voices of Trusted AI monthly digest. It’s a once-per-month email that contains helpful Trusted AI resources, reputable information, and actionable tips straight from data science experts themselves.
Merilys Huhn is an Associate Data Science Consultant and Nicole McCaffrey is the COO at Pandata.