This article is part of a series featuring big brands that are building responsible AI solutions to solve real problems—and what we can learn from them. To receive more trustworthy AI news and expert takeaways, subscribe to our Voices of Trusted AI email digest.
When designed correctly, AI can have a powerful impact on your business.
Take Estée Lauder, for instance.
The billion-dollar makeup, skin care, fragrance, and hair care brand has incorporated its own ethical machine learning practice into its business strategy to successfully drive product development and supply chain efficiency.
Of course, given the company’s worth and continually expanding customer base, it had to take a deliberate approach when integrating AI technology into its business practices. Below, we dive into several of the company’s AI-powered offerings, each driven by a thoughtful ethical AI strategy.
With the COVID-19 pandemic came significant growth in ecommerce, prompting the beauty giant to offer customers more digital, personalized experiences.
To meet this demand, the beauty company identified several areas of its product development process and supply chain that could be improved or augmented by AI. Below are some of the resulting use cases.
The company then used the data gathered from these tools to make informed decisions about new product development and supply chain adjustments.
Estée Lauder would have seen little success with these developments if it weren’t for its responsible machine learning strategy. When designing AI/ML models, especially ones that involve humans, it is critical that fairness, transparency, and privacy serve as the pillars of your AI strategy.
To ensure it uses AI with integrity, Estée Lauder focuses on two key areas: reducing data biases, and lawful data collection and privacy.
Data scientists, major organizations, and media outlets across the globe repeatedly stress the importance of avoiding bias when strategizing and designing trustworthy AI. Bias isn’t inherently bad; what truly matters is your ability to measure an AI model’s training data and outcomes for harmful bias.
Because personalization plays a significant role in several of its use cases, reducing data bias is a critical component of Estée Lauder’s AI strategy. To help avoid data bias, the company places significant emphasis on scrutinizing collected data for inclusivity, ensuring product recommendations are accurate for an individual’s skin tone, ethnicity, and preferences. Reaching this goal, and avoiding unintended consequences, requires both AI and human oversight.
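What does "measuring outcomes for harmful bias" look like in practice? The sketch below, using entirely hypothetical data and function names (not Estée Lauder's actual tooling), compares a model's favorable-outcome rate across demographic groups and applies the common "80% rule" heuristic as a disparity check:

```python
from collections import defaultdict

def audit_outcomes_by_group(records, parity_threshold=0.8):
    """Compare a model's positive-outcome rate across demographic groups.

    records: iterable of (group_label, outcome) pairs, where outcome is
    1 for a favorable result (e.g. an accurate shade match) and 0 otherwise.
    Returns each group's rate and whether the worst-to-best ratio clears
    the parity threshold.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio >= parity_threshold

# Hypothetical audit: shade-match accuracy by skin-tone group
records = ([("light", 1)] * 90 + [("light", 0)] * 10
           + [("dark", 1)] * 60 + [("dark", 0)] * 40)
rates, passes = audit_outcomes_by_group(records)
# rates: light 0.9, dark 0.6 -> ratio ~0.67, below 0.8, so the check fails
```

A failed check like this is exactly the kind of signal that sends a model back to humans for retraining or data rebalancing.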
Data privacy relates to the idea that only authorized entities have access to personal data governed by the terms of consent. Organizations ultimately need to know how they use their data to comply with both industry-specific regulations and broad privacy laws like the CCPA and GDPR.
Of course, it’s also important for maintaining customer trust.
To meet these objectives, Estée Lauder enforces its own strict data privacy and collection policies. For example, the company’s consumer data platform uses AI to help find the right products for customers. However, it only allows its AI technology to process data for customers who have opted in through a login or other form of consent.
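A consent gate like this is simple to express in code. The minimal sketch below is an illustration, not Estée Lauder's actual implementation; the field names and recommendation function are assumptions:

```python
def process_customer_data(customer, recommend):
    """Run AI recommendations only for customers who have opted in.

    customer: dict with an "opted_in" flag set at login/consent time.
    recommend: the recommendation function to call when consent exists.
    """
    if not customer.get("opted_in", False):
        return None  # no consent: the model never touches this data
    return recommend(customer)

# Hypothetical usage
opted_in = {"id": 1, "opted_in": True, "skin_type": "dry"}
opted_out = {"id": 2, "opted_in": False, "skin_type": "oily"}
rec = lambda c: f"moisturizer for {c['skin_type']} skin"
process_customer_data(opted_in, rec)   # -> "moisturizer for dry skin"
process_customer_data(opted_out, rec)  # -> None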
The company also works heavily with legal professionals and privacy teams to avoid potential issues with product launches, data privacy and collection, and more. In fact, the company worked with 12 lawyers to launch its fragrance recommendation engine in China.
AI models need to be able to handle the full range of inputs they will come across in day-to-day use. For example, models have historically produced poor results for people with darker skin if the data included in training does not prioritize examples of the full range of human skin tone. If Estée Lauder trained their makeup assistant app only on images of people with lighter skin, people with darker skin would face serious, harmful bias in how well the app was able to evaluate their makeup.
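One concrete way to catch that failure mode before deployment is to audit the training set's demographic distribution. The sketch below uses hypothetical labels and an illustrative cutoff; real skin-tone scales (such as the ten-point Monk scale) and thresholds would need to be chosen deliberately:

```python
from collections import Counter

def check_training_balance(labels, min_share=0.10):
    """Flag demographic groups underrepresented in a training set.

    labels: one group label per training image (hypothetical categories).
    min_share: smallest acceptable fraction per group (illustrative cutoff).
    Returns the groups that fall below the cutoff, with their shares.
    """
    counts = Counter(labels)
    total = len(labels)
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# A hypothetical, heavily skewed training set of 1,000 images
labels = ["light"] * 920 + ["medium"] * 60 + ["dark"] * 20
underrepresented = check_training_balance(labels)
# -> medium (6%) and dark (2%) flagged: rebalance before training
```

Flagged groups would prompt collecting more examples, not merely reweighting, since the goal is for the model to see the full range of inputs.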
Organizations that ignore model bias and data privacy don’t just open themselves up to legal consequences; they also lose their customers’ trust. Trust is the greatest currency of AI models: skeptical users will not rely on them, leading to poor decision-making and missed opportunities.
Looking for more guidance on ethical AI strategy? Subscribe to our free email digest, Voices of Trusted AI, for monthly emails that contain trusted AI resources, reputable information, and actionable tips for developing a trustworthy AI strategy.
Contributor: Merilys Huhn is a Data Science Consultant at Pandata.