BusinessCrypto

The Consumer Psychology of Adopting AI

CURT NICKISCH: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Curt Nickisch.

Artificial intelligence is changing business as we know it, but the extent of those changes depends on two things. First, how good the technology gets, and second, how much companies adopt it and consumers actually buy it, and there’s a gap there. According to a Gartner survey, for instance, four out of five corporate strategists say AI will be critical to their success in the near future, but only one out of five said they actually use AI in their day-to-day work.

That was a 2023 survey, it’s probably different today. But the point remains, there is, adoption is lagging, and a key reason for that is perception. Many people view AI and automation negatively and resist using them.

Today’s guest has studied the psychological barriers to adoption and explains what managers can do to overcome them. Julian De Freitas is an assistant professor at Harvard Business School and he wrote the HBR article, Why People Resist Embracing AI. Julian, hi.

JULIAN DE FREITAS: Hi, Curt. Thanks for having me on the show.

CURT NICKISCH: Julian, the adoption of technology is an age-old experience for people. We’ve resisted technology many times in the past and have adopted it. Is AI any different from other technologies when it comes to resistance to adoption?

JULIAN DE FREITAS: I think the answer is yes, and we’re finding in many cases AI is different from a consumer perception standpoint. What we’re seeing is that in many use cases, people perceive AI as though it is more human-like, as opposed to being this sort of non-living technology. And this has profound implications for a number of marketing problems, such as overcoming barriers to adoption, but also, new ways of unlocking value from the technology that aren’t possible with previous technologies.

And then of course, there’s also interesting challenges around the risks, because it’s not actually the case that this is another human. It does fall short of humans in various ways. And so if we treat it as a full-fledged human being, that could also create challenges and risks.

CURT NICKISCH: What are the main ways that people see AI as something that they want to drag their feet with, as something they want to resist?

JULIAN DE FREITAS: We try to narrow it down to five main barriers. At a high level, I think you can summarize them as, AI is often seen as human-like, but not human enough. Or conversely, it is seen as a little bit too human, too capable. And then there’s one last barrier that’s just about how it’s really difficult to understand. So, the five barriers that myself and my colleagues have identified through our research are that AI is opaque, emotionless, rigid, autonomous, and not human.

CURT NICKISCH: So, let’s talk about these roadblocks one by one. Starting with AI being too opaque, what does that mean?

JULIAN DE FREITAS: So this is the idea that AI is black box. There are inputs that come in, let’s say email, and then outputs that come out. It tells you if the email is spam or not, but you don’t really understand how it got from the input to the output. Or there are these really sophisticated chatbots and you just can’t really predict what they’re going to do in any new situation. And admittedly, there are many products that we don’t understand, but this is particularly acute for AI, given the complexity of the models. Some of the latest models are operating using billions or even trillions of interacting parameters, making it impossible even for the makers of the technology to understand exactly how it works.

CURT NICKISCH: I remember seeing a video where somebody was talking about autopilot on a plane where the pilots said to each other, “What is it doing now?” Just that sense that it’s doing something for a reason but you can’t quite figure out why it’s doing what it’s doing. So, what do you suggest companies or product designers do in this situation?

JULIAN DE FREITAS: So one obvious intervention is to try to explain how their systems work, especially answering this question, why is the system doing what it’s doing? So for example, an automated vehicle might be stopping because there is an obstacle ahead, as opposed to just saying that the vehicle is stopping now.

Another solution, sometimes companies will ease stakeholders into the more difficult to explain forms of AI. So, one example which a colleague, Sunil Gupta, wrote a case about is Miroglio Fashion, which is an Italian woman’s apparel company. And they were dealing with this of forecasting the inventory that they would need to have on hand in their stores. Previously, this was something that the local store manager was responsible for, but they realized that they could get more accurate at doing this, and this would translate into higher revenues, if they could use some kind of AI model.

They had two options. One was to use the latest off the shelf model that really operated in a way that was hard to understand, so it could extract all sorts of features about the clothing that you and I can’t even perfectly verbalize, and use that to forecast what the store should order for the next week. But there was also a simpler model which would use easy to verbalize features, such as the colors or the shapes of the clothing, and then use that to predict what to order for next week.

And so, even though the first type of model, the more sophisticated one performed much better, they realized that if this was going to be implemented, they needed buy-in from the store managers. The store managers needed to actually use the predictions from the model. So for that reason, they initially at least, rolled out the simpler model to a subset of their stores. And the store managers did use these, the stores that had this model performed better than the ones that didn’t.

And after doing this for some time, eventually they felt ready to upgrade to the more capable model, which they did eventually do. In some ways, they ended up with a model that is still not very easy for you and I or the store managers to understand, but what they did is they trained their employees to get used to this idea of working alongside this kind of technology to make these kinds of predictions. So they kept the human factor in mind.

CURT NICKISCH: That’s really interesting. So, what about this critique of AI that it’s emotionless?

JULIAN DE FREITAS: At the heart of this barrier is this belief that AI is incapable of feeling emotions. There are many domains that are seen as depending on this ability, domains where some sort of subjective opinion is very important. If you are selling some sort of offering and introducing AI into it, well, if it’s a domain that is seen as relying on emotions, you’re going to have a hard time getting people to get comfortable using AI in that domain.

CURT NICKISCH: This also makes me think of automated voices, right? On your smartphones or smart speakers, where a lot of companies use a woman’s voice. Doesn’t make it right, but they use a woman’s voice because it’s perceived as more trustworthy, more engaging. Is that what you’re talking about here?

JULIAN DE FREITAS: Yeah, you’re absolutely right, that imbuing the technology with a gender, a voice, even other cues that we typically associate with having a body and a mind, like when Amazon’s Alexa goes, “Hm,” it’s as if it’s really pausing and thinking, or if you imagine introducing breathing cues and all these sorts of things. What they do is they subconsciously tell us that we’re interacting with an entity that is like a human. And these kinds of anthropomorphizing interventions do indeed increase how much people feel that technology is capable of experiencing emotions.

Another strategy that I’ve seen is instead of trying to convince people in some way that this AI system can indeed experience feelings, instead, you can play to what are already seen as being AI strengths. Take dating advice. Much of the experiments will show that people prefer receiving dating advice from a human then from some kind of AI system, and that gets flipped when you think about something like financial advice.

But if you tell people that actually, getting the best dating advice or getting the best match in the domain of dating really does depend on having this machinery beneath the hood that can take in as inputs your demographics and any information you might’ve provided the company and then it has to sort of sort and rank and filter various possible matches to find the perfect match to you. Now people can see how something that they would typically view as being highly subjective, independent on emotions, actually benefits from an ability that they already think AI is good at.

So, a company like OkCupid for instance, often talks about how its AI algorithms are doing this to find the perfect match for you. That kind of intervention also helps get around this emotionless-ness barrier.

CURT NICKISCH: Do you have to know as a product designer or company, whether your product is maybe better left emotionless, where there might be a mistake to introduce emotion? Are there certain products where you really want it and certain products where you really don’t?

JULIAN DE FREITAS: For sure, yeah. I think there are domains where what you’re talking about is very sensitive in nature or embarrassing. It’s tempting to make your chatbot as lively and human-like as possible as a default, but it might not make sense for your particular use case. There are examples where people are actually happy that what they’re talking to is an AI system, as opposed to a full-fledged human being judging you and analyzing you.

CURT NICKISCH: Well, related to this idea is that people are worried that AI is too autonomous, that it just has a mind of its own and is going to do what it does without taking me into account.

JULIAN DE FREITAS: That’s right, yeah. So there are some cases where AI systems seem to have too much control. You can think about a robo vacuum that can vacuum and mop and do all these things that normally you used to do, or you can think about some sort of home automation system for regulating temperature that’s running these algorithms to change the temperature throughout the day without you needing to do anything. These kinds of systems can begin to feel as though they’re taking control away from you.

Autonomous vehicles, it’s a system where you’re getting into the car and now it’s making all of these complicated decisions and adapting to various settings, and you worry that you’re not going to be able to take control at the moment that you need to. That’s an example of where in some ways it’s the reverse of what we were talking about earlier, where AI systems can at least in some cases seem too capable for our own taste.

CURT NICKISCH: And just to underline what you said about those two examples with automated thermostats. Nest, the thermostat company that lets you use sort of a learning algorithm or you can just switch to manual mode. You give people the option of choosing which one they want to go with and you give them that sense of control. And then for the Roomba vacuum, iRobot actually programmed it to move in predictable paths rather than unpredictable ones that are, might’ve been better but just to give more of a sense that it was under control and not having a mind of its own.

JULIAN DE FREITAS: I think the broader idea with this second type of intervention is to put humans in the loop. So, even if the system is doing most of the work, giving people the sense that they are still in control makes the world of difference. One thing we know from the research is you don’t need to give people too much control either, in order for them to feel like they’re in control. And in some cases this is a good thing because overall, the system might be more accurate if the AI system is doing more of the work.

CURT NICKISCH: Is this the same perception that AI is too inflexible, even though it’s theoretically built around your needs and prompts?

JULIAN DE FREITAS: Yeah, so inflexibility is in some ways the very opposite of autonomy. So, while it’s true that there are cases like automated vehicles where the system behaves in a very sort of autonomous way that seems to take control away from you, there are other domains in which we worry that this AI system is not going to be flexible enough, that it’s not going to adapt to my particular unique problem that I’m trying to solve, because we believe that it can’t learn from mistakes in the way that we see other human beings learning from mistakes. What’s going to be helpful is including cues that suggest that the system is in fact learning.

A lot of the lab experiments will show that even if you label the system differently, calling it for example, machine learning as opposed to algorithm, this change people’s belief that AI is very inflexible.

In some companies like Netflix, you see them address this by including little cues, such as, for you, or recommended because you saw X, which shows that it is continuously learning beneath the hood.

There’s also another strategy which is, if you can, to not even talk about the fact that you’re using AI at all. One very useful example that I saw was from Borusan Cat, which is a subsidiary of Caterpillar, the sort of large vehicle manufacturer. Borusan Cat is in Turkey, and they were dealing with this issue that many of their B2B customers had, where the equipment would eventually break down and then Borusan Cat would have to repair the big machinery. And often, the parts of the machine had deteriorated to the point that they weren’t salvageable. So it would take quite a while to get the machine back up and running again, and in the meantime, the customer would be left without the machine. So, this was a really bad situation for all parties.

And so they realized that if only they could predict when the machine was going to malfunction ahead of time, they could avoid all of this altogether. So, this is a perfect job for AI. They very smartly embedded sensors in all of the machines so that they could collect data on various features of the parts as they were used over time.

Now, when they tried to sell this service as a standalone because it was really high performing, I think they were able to predict with something like 97% accuracy whether the machine was about to have a failure. What often happened is the customer said, “Are you telling me that from your office there in Borusan Cat, you can tell me better than I can, who uses this machine every day and knows all of its quirks, that this machine is going to fail? I don’t buy it. This is a sales gimmick, I’m skeptical that you’re really providing a solution that’s personalized.”

What the company eventually did was it just folded this ability into the maintenance contract. So, it told the customer, “Look, we promise you that if you go with this particular maintenance contract, you will never have machine downtime.” And instead of selling it as a standalone service, they just gave the customer this promise, and they found that that worked much better. Not only that, but because it was part of this bigger ticket maintenance contract, the salespeople were also more motivated to now sell this full bundle. And also because they were able to predict when machines would break down before they actually broke down, they were able to salvage many of the parts and refurbish them and sell them to create additional revenue streams.

In this particular case, they didn’t need to talk about the fact that AI was involved, and it allowed them to completely circumvent this concern, the skepticism that the customer would have that your AI system is not going to be able to solve my particular needs.

CURT NICKISCH: So, maybe the biggest issue, the fifth roadblock, is that people prefer dealing with people. They’re obviously an important part of our jobs that aren’t just about increasing productivity. Work is a human experience and a collaborative experience. How do you tackle this concern that, I’d just rather talk to a person?

JULIAN DE FREITAS: What we do know is that people will use AI systems if they believe that they truly outperform humans trying to do the same job. But when the performance is equated between the AI system and the human, people continue to prefer to interact with the human. Of course, we’re not yet at the point where there are humanoid systems walking around that both physically and mentally resemble us perfectly.

An interesting question, maybe a bit more of a science fiction-like one is, in a near future where these types of systems are available, will we continue to interact with humans instead? I’m not sure exactly what the intervention for this will look like, just because it is in the future.

But one interesting idea is perhaps the kinds of interventions that will get people to work with these types of robot service providers would be the same types of interventions that social scientists have historically deployed in order to soothe inter-group relations of other kinds.

So for instance, when people don’t want to interact with those who are not part of their ethnicity or whatever other group markers. And the reason for this is that when you have these interactions with those that you view as other, you slowly change how you psychologically represent them from something that’s much more categorical and stereotypical, to something that’s much more nuanced and sensitive to their unique traits. And that can eventually soothe anxiety or discomfort around interacting with them.

So it could be in a similar way in the future – the more that we’ll eventually view them in a different way. By the same token, you might imagine that if these kinds of systems will be framed as helping you achieve your goals, know complementing the goals that you already are striving to achieve, then they’ll be viewed as being on your side, and so that will also ease people’s willingness to utilize them.

CURT NICKISCH: If people are listening to this and they want to work in AI and AI adoption of their company’s products, I mean, what would you recommend to somebody to build in their careers?

JULIAN DE FREITAS: A lot of this can be done, I believe, by managers of all kinds, as long as they’re sensitive to the human factor. Anyone with marketing training for instance, learns this bitter lesson that good products don’t sell themselves. And I think in a similar way here, if one is aware of these types of barriers, then you can get good at identifying for any use case, which of the particular barriers that are at play, and then what can I do to address them so that people view this technology in a way that doesn’t conflict with their existing way of viewing the world, and that will ease their concerns and lead to adoption.

CURT NICKISCH: And I’m also wondering about ethics here. It just reminds me a lot of the early web, where a lot of the marketing training was like how to create habits and how to get people to spend more time on your site and click more things, right? And there’s just a lot of psychology work that went into that, and now there’s a lot of pushback and criticism that some of these online products have just become addictive almost and not productive.

What would you recommend to managers about the ethics of this psychology work that they’re doing as they try to increase adoption of their products?

JULIAN DE FREITAS: I think the same interventions that could increase adoption of AI in the short term can create risks for consumers, firms, and society in the long-term. So I think managers need to have a very long-term view of not just, is this going to increase customer acquisition? But also, okay, once the customer starts to use this product, what are the downstream concerns that I should be thinking about? And I think adopting that long-term view will allow them to intervene in a way that’s more balanced, where they’re thinking about the full lifetime of the customer rather than just that initial acquisition phase.

So for example, this barrier of viewing AI as very rigid, one solution is just to give people the most capable, flexible systems. That also increases the chance that they will use the system in ways that you didn’t even intend for them to use it, creating potential risks. So one study we did for instance, was looking into the so-called AI companion applications, which are applications specialized for developing social relationships.

So if you’ve seen the movie Her, it’s pretty much the same idea, where this is an AI friend or romantic partner in your pocket. Now, the intended use of these apps is exactly for that, but what we found was that about 5% of users were also using the system to express pretty serious mental health problems that they had, including in a subset of these messages, be crisis messages, such as self-harm ideation.

And we found actually, when we audited the performance of these apps by sending such messages to them and classifying how they responded, that about 25% of the responses weren’t just unhelpful, but they were also deemed as risky by a clinician.

And so that’s an example where giving people that flexibility on its own is not necessarily the best approach, but you want to also think about, does the system truly need to be that flexible for me to give the customer the benefits of the technology here? If so, what are the additional guardrails I need to put into place to protect against these downstream risks that are going to harm not just the consumer, but also me as the firm and my reputation for being able to provide this kind of offering safely?

CURT NICKISCH: Well, the human mind is complex, and these business problems are complex too. So, it’s been really helpful to talk through some of these challenges and avenues for solutions through with you. Julian, thanks so much for coming on the show to share your research.

JULIAN DE FREITAS: Thanks again so much for having me, Curt. It’s been a real pleasure to share some of these ideas and think through some of the nuances with you.

CURT NICKISCH: That’s Julian De Freitas, assistant professor at Harvard Business School and the author of the HBR article, Why People Resist Embracing AI. And if you want more, we have over 1,000 episodes and more podcasts to help you manage your team, your organization, and your career. Find them at HBR.org/podcasts, or search HBR in Apple Podcasts, Spotify, or wherever you listen.

Thanks to our team, Senior Producer Mary Dooe, Associate Producer Hannah Bates, Audio Product Manager Ian Fox, and Senior Production Specialist Rob Eckhardt. Thank you for listening to the HBR IdeaCast. We’ll be back on Tuesday with our next episode. I’m Curt Nickisch.

Emma is a tech enthusiast with a passion for everything related to WiFi technology. She holds a degree in computer science and has been actively involved in exploring and writing about the latest trends in wireless connectivity. Whether it's…

What's your reaction?

Related Posts

1 of 470