Do you want an action plan for understanding what your customers want when it comes to AI safety? As a product manager, tech leader, or risk or compliance specialist, you may be facing questions about how to build responsible and trustworthy AI. But responsible AI is a big topic with many moving parts, so it can feel hard to nail down which aspects to work on first. Get it wrong, and you risk going to market with the wrong product, or worse, endangering and losing your loyal customers. Get it right, and you can differentiate your product and build what your customers want.
Two common mistakes with Responsible AI
So why do companies get responsible AI wrong? Here are two common failure patterns.
A broad definition of responsible AI that gives little actual guidance on how or what to build:
Instead of setting specific targets, companies develop a list of vague principles. The principles don’t give implementing teams the specifics they need to operationalize them. Worse, if all the vague principles carry equal weight, teams get no guidance on how to handle trade-offs. Should they collect sensitive data to measure fairness, or minimize data to protect privacy?
The communications don’t address the customers’ concerns:
The company doesn’t know what its customers want and fails to communicate whether or how the product is safe. Alternatively, it explains AI safety in overly technical terms, taking an “expert” perspective that never addresses the customers’ actual worries.
This proven 4-step guide will solve those problems. By the end, you will understand what your customers care about when it comes to responsible AI, and with the right market research you can focus and prioritize your AI efforts and communications around those concerns. With this knowledge, you will be able to build the crucial responsible AI features your customers want, and to develop communications (including marketing campaigns and privacy notices) that address their needs and concerns.
Ask your customers what they want, but don’t ask them directly.
Sadly, it isn’t as easy as asking customers directly. If your customers are non-experts (or laypeople), they may have concerns but lack the vocabulary to translate them directly to engineering features. This is especially true in the area of generative AI, where many of the risks are new and unknown to most laypeople. When it comes to abstract ideas like privacy, fairness, or transparency, customers may struggle to clearly express and rank what they need. Only experts in the field will ask for “transparent model cards” or “privacy by design.” You need to meet your customers where they are, which is the trickiest part about asking them what they want.
If we can’t ask directly, how can we learn what our customers want and don’t want from AI?
There is a proven method for understanding and communicating about technology risks. It has been used successfully for decades to understand laypeople’s concerns about new technologies, from radon to contraceptives to privacy. It’s called the “Mental Models Approach to Risk Communication.” At its core, the mental models approach recognizes that experts and laypeople may perceive a new technology differently, and that it’s crucial to understand both groups and to communicate across any gaps between them. In this article, I’ll focus on one aspect of the method: understanding the layperson.
The 4-step approach to your customers’ mental models
This is the business-friendly approach to understanding your customers’ concerns around responsible and safe AI. You need a week or two of time, some previous experience with responsible AI, and roughly 6-12 potential customers willing to spend up to an hour being interviewed.
Start with expert judgment on what Responsible AI means
You are an expert. If you aren’t, enlist people who are. Alternatively, there are plenty of existing frameworks on Trustworthy and Responsible AI (see the references below). Build your definition of Responsible AI as the expert on your company, product, and risks. List the main components of trustworthiness that might be important to your business or your customers. You can also use the list below as a starting point.
| AI Principle |
| --- |
| Danger |
| Privacy and Security |
| Fairness |
| Environmentally Sustainable |
| Explainable |
| Accurate |
Translate expert terms into less technical terms.
Now brainstorm the related words or contexts that your market might use. Add a new column to your table and fill in as many examples or related phrases as you can for each row. In the example below, I’ve brainstormed a few terms that might be relevant to a car company that uses AI for automatic braking. There are likely more, and if you work in data governance at a car manufacturer, you probably know them.
| AI Principle | Customers might worry that AI… |
| --- | --- |
| Danger | leads to crashes |
| Privacy and Security | is creepy; makes insurance rates go up |
| Fairness | won’t work on rural roads |
| Environmentally Sustainable | wastes gas |
| Explainable | can’t be controlled; brakes unexpectedly |
| Accurate | brakes unnecessarily |
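If you plan to code your interview notes against these principles later (in the theme-analysis step), it can help to capture the mapping in a small, machine-readable form. Below is a minimal Python sketch of such a coding scheme for the hypothetical automatic-braking example; the principle keys, the phrases, and the `suggest_tags` helper are all illustrative assumptions, not a validated taxonomy or a required tool.

```python
# Hypothetical coding scheme: each Responsible AI principle maps to the kinds of
# lay phrases customers might use when talking about an AI braking system.
# The phrases are illustrative examples, not an exhaustive or validated list.
CODING_SCHEME = {
    "danger": ["crash", "accident", "hit something"],
    "privacy_and_security": ["creepy", "tracking me", "insurance rates"],
    "fairness": ["rural roads", "doesn't work for people like me"],
    "sustainability": ["wastes gas", "fuel", "battery drain"],
    "explainability": ["can't control it", "brakes out of nowhere", "why did it do that"],
    "accuracy": ["brakes unnecessarily", "false alarm"],
}


def suggest_tags(note: str) -> set[str]:
    """Suggest principle tags for a raw interview note via simple keyword matching.

    This is only a first pass to speed up manual coding; a human should still
    review every note, since laypeople rarely use the exact words you expect.
    """
    note_lower = note.lower()
    return {
        principle
        for principle, phrases in CODING_SCHEME.items()
        if any(phrase in note_lower for phrase in phrases)
    }


print(suggest_tags("I'd hate it if the car brakes out of nowhere on the highway"))
# {'explainability'}
```

The point is not to automate the analysis; it simply keeps the vocabulary from this step and the later coding step consistent.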
Interview your potential customers without asking them directly about RAI.
There are several ways to understand and elicit your customers’ risks and concerns (see the scoping review of elicitation methodologies in the references). If you are a user experience professional or academic, take a deeper dive into the more structured methods. Otherwise, use the tips below to get the most bang for your buck.
Start with broad questions, then dig deeper into the risks as the interview progresses. Try not to bias the interview by explaining all the risks up front and then asking whether they are worried about them. For example, if you want to understand your customers’ concerns around AI in cars and braking systems, you might start by asking them what they know about such systems. Ask whether they see any benefits. Remember what they say here, because you want to make sure you are building a system that provides those benefits.
Then ask them if they have any concerns. Keep it open-ended. “Could anything go wrong? Would you be worried about this?” Then if they have concerns, you can ask more pointed questions, or keep asking for more ideas. “Anything else?” Some interviewees may put on a brave face: “I’m not worried about an AI in my car, but some other people are worried.” In these cases, you can dig deeper by asking them to explain more about why other people would be concerned.
In the interviews, never tell them they are wrong or try to explain the expert's viewpoint! Remember, what we are trying to understand here is the gap between laypeople and experts. Their perceptions matter, as this is what you need to build your communications and product around!
Key practical interview tips:
- Schedule interviews with 6-12 potential customers, aiming for a variety of experiences and customer types.
- Plan for 30 minutes to an hour per interview.
- Protect interviewees’ privacy: don’t record or re-share anything without their permission.
- In some cases, you will want to pay interviewees for their time. Don’t coerce anyone, and let them stop at any time.
- Be careful about delicate subjects. If you make someone cry, you are doing it wrong!
- Review the ethical considerations here.
Look for themes.
Now review all your notes to identify the different themes that came up. Count how many times each principle was mentioned, as well as how many customers mentioned it. For example, maybe every customer mentioned the danger of crashes. You then know it is a prevalent concern (regardless of whether a crash is actually likely). Not only do you need to build a product that won’t cause crashes, you also need to communicate to your customers why and how it is safe.
Maybe a few customers mentioned sustainability, with a number of different examples and concerns. You then know that, for some customers, sustainability is a really important issue. If you don’t address sustainability in your product and your communications (and a competitor does), you will lose those customers.
| AI Principle | Customers might worry that AI… | # mentions | # customers |
| --- | --- | --- | --- |
| Danger | leads to crashes | 12 | 12 |
| Environmentally Sustainable | wastes gas | 6 | 2 |
Caveat: remember that these numbers are not exact statistics; they are for eyeballing trends. You don’t have a very big pool of interviewees, they probably weren’t randomly sampled, and you would probably group the themes differently than another expert would. So avoid reporting this information with precise numbers, percentages, bar charts, and so on. For example, you don’t know that 100% of your customers are concerned about crashes, but you have a very strong signal that most are.
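If your notes are coded by principle, the tallying itself is mechanical, and per the caveat above it is only for eyeballing. Here is a minimal sketch, assuming each interview has been reduced to a mapping from principle to how many remarks touched on it; the interview data below is made up purely for illustration.

```python
from collections import Counter

# Hypothetical coded interviews: for each customer, the principles they raised
# and how many separate remarks touched on each one (illustrative data only).
interviews = [
    {"danger": 3, "sustainability": 1},
    {"danger": 1},
    {"danger": 2, "explainability": 1},
]

mentions = Counter()   # total remarks per principle across all interviews
customers = Counter()  # number of customers who raised each principle at all

for interview in interviews:
    for principle, count in interview.items():
        mentions[principle] += count
        customers[principle] += 1

for principle, n_customers in customers.most_common():
    print(f"{principle}: {mentions[principle]} mentions from {n_customers} customers")
# danger: 6 mentions from 3 customers
# sustainability: 1 mentions from 1 customers
# explainability: 1 mentions from 1 customers
```

The counts feed the trend table above; they are a prompt for prioritization, not something to report as statistics.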
One or two people can conduct the interviews in a week of full-time work and need another week to review and report the trends. Those two weeks could save your company months of developing the wrong features. By doing this analysis, you will gain so much more information about what your customers care about when it comes to responsible AI, and you can build a product that will protect them from their worst fears. For companies with larger teams or resources, I recommend investing more than a couple of weeks in the process, or considering hiring an expert.
Summary
This article explored the challenges of building Responsible AI products that meet customer needs. Many companies struggle to define actionable Responsible AI principles, communicate effectively about AI safety, and truly understand customer concerns. This article introduced a proven method to bridge that gap by “translating” expert AI principles into relatable customer terms. Through interviews and careful analysis, companies can identify and prioritize the AI features that matter most to their customers, ensuring that their AI products are not only responsible but also resonate with the market.
References and Links
Morgan, M. Granger. (2002). Risk Communication: A Mental Models Approach. Cambridge University Press.

Bostrom, A., Demuth, J. L., Wirz, C. D., Cains, M. G., Schumacher, A., Madlambayan, D., Bansal, A. S., Bearth, A., Chase, R., Crosman, K. M., Ebert-Uphoff, I., Gagne, D. J., Guikema, S., Hoffman, R., Johnson, B. B., Kumler-Bonfanti, C., Lee, J. D., Lowe, A., McGovern, A., … Williams, J. K. (2024). Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences. Risk Analysis, 44, 1498–1513. https://doi.org/10.1111/risa.14245

Eliciting mental models of science and risk for disaster communication: A scoping review of methodologies. (2022). International Journal of Disaster Risk Reduction, 77, 103084. https://doi.org/10.1016/j.ijdrr.2022.103084

Krieger, J. B., et al. (2024). A systematic literature review on risk perception of Artificial Narrow Intelligence. Journal of Risk Research, 1–19.