Where To Look for Legitimate Responsible AI Standards

Until now, regulating AI design and development has largely been left up to individual interpretation. Organizations have controlled the level of risk mitigation and ethical compliance applied to their ML models, and many have seen the unintended consequences of lax frameworks.

In 2023, that’s about to change.

Between the EU AI Act, Microsoft’s guidelines for human-AI interaction, and more, a number of responsible AI frameworks and regulations are emerging. As a company trying to build trustworthy AI solutions, how can you best navigate this new landscape?

Check for Industry Guidelines

As a starting point, check whether industry leaders and other reputable groups in your field have released their own responsible AI guidelines. Not only will these come from a source you should be able to trust, but they will also likely be the most applicable to your organization. While these sources will vary from one industry to the next, the work done by the nonprofit Responsible Artificial Intelligence Institute is one great example.

Pay Attention to Changing Government Regulations

The AI regulatory landscape is quickly shifting from suggested guidelines to more rigid policies and regulations. The most successful AI builders are those proactively designing their AI to comply with this pending legislation. Here are just a few government-led AI regulations that may impact your company.

European Union’s AI Act

The EU’s AI Act is expected to have a global impact, as organizations outside the EU will also need to comply with the legislation when operating in the EU market. The Act categorizes AI systems into three levels of risk: unacceptable, high, and limited/minimal.

  • Unacceptable risk systems will be banned.
  • High-risk systems will have specific legal requirements.
  • Limited/minimal risk systems will be largely unregulated, apart from transparency requirements: customer-facing applications must disclose that AI is in use.

Canada’s Artificial Intelligence and Data Act

Canada’s Artificial Intelligence and Data Act (AIDA) is the country’s first attempt at formally regulating certain AI systems, introduced as part of the privacy reforms in Bill C-27. The act aims to mitigate the risks of harm and biased output from AI systems, with penalties for non-compliance far higher than those currently available in Canada.

Though the act is drafted at a high level, with more details to follow in forthcoming regulations, it represents a significant shift in the proposed regulation of certain AI systems in Canada.

New York City’s Automated Employment Decision Tools Law

New York City has passed a first-of-its-kind law, taking effect in 2023, that prohibits employers from using AI- and algorithm-based technologies for recruitment, hiring, or promotion unless those tools have been audited for bias.

The law requires that such tools be audited for bias no more than one year before their use, with a summary of the audit results made public on the employer’s website. Failure to comply can result in a fine of up to $500 for a first violation, and fines between $500 and $1,500 per day for each subsequent violation.
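
To make the idea of a bias audit more concrete, here is a minimal sketch of an impact-ratio calculation, the kind of disparity measure such audits commonly report: each group’s selection rate divided by the highest group’s rate. The column names and toy data below are illustrative assumptions, not the law’s prescribed methodology.

import pandas as pd

# Illustrative only: the column names ("group", "selected") and the data
# are assumptions, not requirements taken from the New York City law.
def impact_ratios(decisions: pd.DataFrame) -> pd.Series:
    # Selection rate per group, divided by the highest group's rate.
    rates = decisions.groupby("group")["selected"].mean()
    return rates / rates.max()

# Toy example: candidates screened by an automated hiring tool.
candidates = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1],
})
print(impact_ratios(candidates))
# A ratio well below 1.0 for any group (for example, under the 0.8
# "four-fifths" threshold used in US employment guidance) is a common
# flag for further review.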

Identify and Adopt Common Themes  

While you may not find a single published set of guidelines that can be directly adopted by your organization, it can be helpful to look at some of the common themes that appear in the guidelines developed and published by others.

These may include:

  • Safety
  • Fairness
  • Accountability
  • Transparency
  • Explainability
  • Privacy
  • Human control
  • Robustness
  • Team diversity
  • Continual monitoring

You may decide to emphasize some of these items more than others depending on your industry, company, and use case, but these should provide a good reference framework as you start to build your own set of responsible AI guidelines.

Avoid Ineffective AI “Frameworks”

Proceed with caution as you look for framework examples and AI regulatory best practices. As regulation and AI design frameworks become more popular, ineffective “quick fix” solutions will inevitably be offered for sale. No one-time product or service can easily remove bias from your model.

Preventing and monitoring risk associated with AI is an iterative process that must be constantly managed by your team. It’s not easy, but it’s critical to building trust with, and protecting the privacy of, those impacted by your solution.
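
As one illustration of what that iteration can look like in practice, the sketch below re-runs a simple fairness check on each new batch of model decisions and flags any group that falls below a review threshold. The threshold, column names, and toy data are all assumptions to be replaced by your own guidelines, and an alert here should trigger human review, not stand in for it.

import pandas as pd

REVIEW_THRESHOLD = 0.8  # assumed review threshold, e.g. a "four-fifths" rule

def fairness_check(batch: pd.DataFrame) -> list:
    # Re-compute per-group selection rates on the latest decision log.
    rates = batch.groupby("group")["selected"].mean()
    ratios = rates / rates.max()
    return list(ratios[ratios < REVIEW_THRESHOLD].index)

# Run on whatever cadence fits your release cycle (cron, Airflow, etc.).
# The batch below is toy data standing in for a real decision log.
batch = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B"],
    "selected": [1,   0,   1,   1,   1],
})
flagged = fairness_check(batch)
if flagged:
    print(f"Groups needing human review: {flagged}")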

Get the Latest on Responsible AI

The AI regulation landscape is changing fast—and it can be hard to keep up. To stay up-to-date on the latest responsible AI news, subscribe to our free email digest, Voices of Trusted AI. In addition, you’ll receive top insights on designing and building AI that increases trust, buy-in, and success—straight to your inbox each month.

Contributor: Cal Al-Dhubaib is the AI Strategist and CEO at Pandata.