You read that right: 80% of companies fail to benefit from AI tools — but it’s not for the reason you might be thinking. It’s not that the available AI tools aren’t useful; it’s that not all employees fully trust them. As a result, they change their behavior or hold back when interacting with the technology.

Here, I’ll explore the study that brought this statistic to light to learn more about how trust influences AI adoption. Then, I’ll highlight some strategies you can use to increase employee trust through responsible, ethical and transparent AI usage to drive more successful adoption.

Unpacking Low Trust in AI Tools

Natalia Vuori is an Assistant Professor and researcher at Aalto University in Espoo, Finland. In her paper, It’s Amazing – But Terrifying!: Unveiling the Combined Effect of Emotional and Cognitive Trust on Organizational Members’ Behaviours, AI Performance, and Adoption, Vuori says that AI is a “vital source of competitive advantage for companies.”

In the introduction, she points to similar research that demonstrates AI’s value for enhancing decision-making, facilitating collaboration and innovation and more — but she’s interested in something else.

Despite these benefits, the organizational failure rate of AI adoption is upwards of 80%. Vuori says this is because leaders haven’t helped employees develop cognitive and emotional trust enough to change their perception of and behavior toward using AI at work.

The new study places workers into one of four categories in terms of how they feel toward and interact with new technologies like AI:

  • Full Trust (High Cognitive/High Emotional): Employees have both a strong belief in the AI’s reliability and positive emotions toward it.
  • Full Distrust (Low Cognitive/Low Emotional): Employees neither believe in the AI’s reliability nor trust it emotionally.
  • Uncomfortable Trust (High Cognitive/Low Emotional): Employees believe in the AI’s reliability but feel uneasy or negative about it emotionally.
  • Blind Trust (Low Cognitive/High Emotional): Employees feel positive about the AI emotionally but lack confidence in its reliability.

Each of these four dispositions influences how employees act:

  • Full Trust: Employees don’t change their behavior when interacting with an AI system.
  • Full Distrust: Employees alter the data they input into an AI system or provide minimal or no data.
  • Uncomfortable Trust: Employees limit the amount or type of data they share with the AI.
  • Blind Trust: Employees provide extensive data to an AI system.
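The two lists above form a simple 2x2 matrix: cognitive trust on one axis, emotional trust on the other. As a minimal illustrative sketch (my own framing, not code from Vuori's paper), the mapping from trust levels to category and typical data behavior could look like this:

```python
# Illustrative 2x2 lookup: (cognitive trust, emotional trust) -> category and
# the typical data-sharing behavior described in Vuori's study.
TRUST_MATRIX = {
    (True, True): ("Full Trust", "shares data without changing behavior"),
    (False, False): ("Full Distrust", "alters inputs or provides minimal or no data"),
    (True, False): ("Uncomfortable Trust", "limits the amount or type of data shared"),
    (False, True): ("Blind Trust", "provides extensive data"),
}

def classify(cognitive_trust: bool, emotional_trust: bool) -> str:
    """Return the trust category and its typical behavior for an employee."""
    category, behavior = TRUST_MATRIX[(cognitive_trust, emotional_trust)]
    return f"{category}: {behavior}"

print(classify(True, False))
# prints "Uncomfortable Trust: limits the amount or type of data shared"
```

The key point the matrix makes visible: only one of the four cells leaves the AI's data stream untouched.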

Here’s what all this means in a nutshell — and what came out in Vuori’s study:

To function well, an AI model often needs a constant stream of quality data to learn from. Manipulating, confining or withdrawing data leads to biased and unbalanced inputs, which can degrade the AI’s performance over time. Poor AI performance further erodes trust among employees, creating a “vicious cycle” that hinders successful AI adoption within an organization.

What This Means for Marketers

Here’s what I took away from this study when it comes to AI adoption: It’s not enough for employees to understand that an AI tool is technically reliable — they must also feel comfortable using it.

From a leadership perspective, this finding points to a couple of strategies managers can use to get marketers on board with AI and drive more successful adoption:

  • Normalize AI with messaging and guidance that enhances emotional security, reassurance and a sense of control.
  • Emphasize AI’s abilities beyond creating efficiencies; for example, its role as a user-friendly assistant.

For AI to be fully embraced in marketing, leaders must address both cognitive and emotional trust concerns. AI should be seen as a strategic tool and partner rather than a black-box system that marketers fear or distrust.

How To Unleash AI Tools’ Full Potential By Building Trust

Successful AI adoption depends on how well leadership can nurture and maintain trust on two different levels: cognitive and emotional. In her study, Vuori outlines some strategies for each, from comprehensive training sessions to communication and even tempering expectations.

Let’s unpack those ideas while expanding them with actionable steps.

Foster Curiosity

There are plenty of reasonable things to worry about in life, but that shouldn’t stop people from being curious. No matter how you currently feel about AI, allow yourself to be curious about it. This comes before (and often leads to) education, but you have to at least be open to the idea first — or you might subconsciously refuse to learn anything.

Consider which of the four trust camps you most closely identify with, and try challenging yourself with the opposite perspective. Full distrust? Consider how you would act if you held no reservations and went all-in.

Even if you’d categorize yourself as “full trust” and are open to using AI tools at work, take the stance of full distrust just for fun. How would that perspective, if you held it, change how you approach AI at work today?

Emphasize the Importance of Responsibility and Ethics

Folks struggling to trust AI on that emotional level may fear that using it is wrong or could result in less-than-desirable outcomes, like compromised data. Leaders should embolden their AI trust-building strategies by emphasizing the importance of being responsible and ethical when using AI. If you can build these into your company culture, that’s even better.

In a recent study we conducted at Brafton, only 27% of respondents said their company had an AI policy in place. Interestingly, of that 27%, two-thirds (66.7%) indicated that having an AI policy had a positive impact on adoption, citing that it helped:

  • Increase transparency about AI’s role in business processes.
  • Establish security measures that make AI usage safer for the employees and the company.
  • Guide AI usage through structured frameworks and responsible adoption.

If your company hasn’t created an AI policy yet, this is a can’t-lose initial strategy that can get the ball rolling.

Encourage AI Literacy

Training sessions and workshops are a great way to help employees get hands-on experience with new tools, but it’s not enough to simply know how to use them. Provide easy-to-understand materials that explain how AI models work, what data they use and how results are generated. Knowing how certain models work can spark curiosity about the technology, which might encourage doubtful employees to give it a chance.

Demonstrate Results

Telling your employees that AI can help them and your business improve KPIs or [insert another benefit here] is one thing, but proving it is far better. Track performance metrics that demonstrate AI’s effectiveness in improving marketing results and share them. Here are some to consider:

  • Engagement Rate (ER): Measure how AI-generated or AI-optimized content performs compared to human-created content.
  • Organic Traffic Growth: Track increases in traffic from AI-generated content and recommendations.
  • Return on Ad Spend (ROAS): Compare AI-managed vs. manually managed ad campaigns.
  • Time Saved: Measure how quickly tasks can be accomplished after AI implementation, compared to manual processes.

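The metrics above are simple ratios and differences, so they are easy to compute and share. Here is a hedged sketch of each one; the figures in the example are illustrative assumptions, not data from the article or study:

```python
def engagement_rate(interactions: int, impressions: int) -> float:
    """Engagement Rate (ER): interactions divided by impressions."""
    return interactions / impressions

def roas(revenue: float, ad_spend: float) -> float:
    """Return on Ad Spend: revenue attributed to ads divided by ad spend."""
    return revenue / ad_spend

def organic_traffic_growth(current: int, baseline: int) -> float:
    """Organic traffic growth as a fraction of the baseline period."""
    return (current - baseline) / baseline

def time_saved(manual_hours: float, ai_hours: float) -> float:
    """Hours saved per task after AI implementation vs. the manual process."""
    return manual_hours - ai_hours

# Hypothetical comparison: an AI-managed campaign vs. a manually managed one.
print(f"ROAS (AI): {roas(12_000, 3_000):.1f}x vs manual: {roas(9_000, 3_000):.1f}x")
# prints "ROAS (AI): 4.0x vs manual: 3.0x"
```

Putting numbers like these in front of a skeptical team turns an abstract promise into something they can verify for themselves.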
When people can see how the tools you’re asking them to use are benefiting their work, it could generate some excitement that encourages them to climb aboard.

Final Thoughts

Trust operates on two levels here: employees need to trust the tools, and they need to trust their employers to deploy the strategies and resources that help them build that trust. While everyone will inevitably bring their own viewpoints and biases to the conversation initially — and fall into one of the four categories Vuori outlined in her study — positive change is possible.

A slow, steady, transparent and educational adoption approach, where leaders emphasize responsibility and ethics, is a winning strategy.

Note: This article was originally published on contentmarketing.ai.