Why do AI adoption programmes fail even when the technology works?

Quick Answer: Most organisations optimise AI adoption for the 10-20% of early adopters while neglecting the 80% who drive sustainable change. BCG research shows 74% of companies struggle to achieve AI ROI, with 70% of challenges stemming from people and process issues. The missing ingredient is psychological safety.

Key Characteristics:
  • AI transformation creates the exact emotional states (fear, anxiety, overwhelm) that prevent people from engaging with it
  • BCG research: 74% of companies struggle with AI ROI, with 70% of challenges from people and process, not technology
  • Four team archetypes: Early Adopters (5-10%), Fast Followers (20-30%), Sceptics (30-40%), and Guardians (20-30%)
  • A Sydney design team increased AI adoption from 12% to 67% in four months by addressing fear before training

Real Example:

At Telstra, Riley Coleman led 30+ initiatives during the Future Way of Working transformation. Projects that released technology with a user guide and called it done suffered dismal adoption. A Sydney design team later replicated this lesson: by addressing fear first and having a leader deliver sixty seconds of honest acknowledgment about AI's impact on careers, adoption doubled in six weeks.

Article

Your Team’s AI Adoption Is Missing An Ingredient

Riley Coleman
November 19, 2025 · 16 min read

G’day,

I learned this lesson the hard way at Telstra, and I’m watching design leaders repeat the same mistake with AI right now.

We were rolling out Future Way of Working – a dozen major software and platform releases across the organisation. Massive transformation. I was the design lead on 30+ initiatives.

Here’s what happened: the projects that released their tech with a user guide and called it done? They suffered. Adoption was dismal. People attended the training, nodded along, went back to their desks and kept working the old way.

The projects that invested in proper change management and behaviour change? Night and day difference. People actually shifted how they worked.

Here’s what I didn’t understand then but see clearly now: we were optimising for the wrong thing.

We were optimising for the 10-20% who’d read the user guide, figure it out, and champion the new tools. The early movers. The enthusiastic ones.

The other 80%? We assumed they’d “catch up eventually.” They didn’t. They went quiet. Some pushed back. Some left. And the transformation stalled.

Confession time. I championed the fast movers. I celebrated the teams who adopted quickly. I didn’t notice I was leaving everyone else behind until the adoption numbers came back and they were brutal.

Fast forward to now. I’m watching design leaders do the exact same thing with AI.

They’re championing their Early Adopters. Celebrating the designers experimenting with Claude at 11pm. Amplifying the people building CustomGPTs.

And they’re losing everyone else.

The designers who want to engage but feel overwhelmed. The sceptics whose concerns get labelled “resistance.” The experienced people whose institutional knowledge gets dismissed as “old ways of working.”

Here’s the question I wish someone had asked me at Telstra:

Who are you optimising for?

Because here’s what we learned from 130 interviews with design leaders navigating AI transformation: most of you are designing adoption for the 10-20% who are already enthusiastic.

And you’re breaking the 80% who actually make transformation sustainable.

The Paradox Nobody Tells You About

Here’s the thing: AI transformation creates the exact emotional states that prevent people from engaging with it.

AI adoption requires creativity, experimentation, willingness to fail, adaptive thinking in uncertainty. Those are the ingredients you need from your team.

But AI transformation also creates fear (will I be replaced?), anxiety (I don’t know how to use this), overwhelm (which tool? which workflow?), and uncertainty (what does this mean for my career?).

Fear and anxiety shut down creativity and experimentation. When people feel threatened, they play it safe. They protect what they know. They avoid risk.

So here’s the paradox: the transformation itself triggers the emotions that shut down the transformation.

I’ve seen this in seven cohorts now. Design leaders say, “Some of my senior designers won’t engage with AI.” When I ask what they’ve done, they list the training. The tool demos. The workshops.

They’re trying to solve an information problem. It’s not an information problem. It’s an emotional safety problem before anything else. A behaviour change problem after that.

(And yes, I know “psychological safety” sounds like corporate fluff. I used to think that too. Then I watched a Sydney team go from 12% AI tool adoption to 67% in four months by addressing fear first, training second.)

The design leader stood up and said: “Some of you are asking if AI will replace you. I’m not going to lie and say nothing will change. But we’re going to navigate this in a way that values what makes each of you irreplaceable. Your judgement. Your relationships. Your understanding of our users. That doesn’t go away. It becomes more important.”

Sixty seconds of honesty. Adoption doubled in six weeks.

You can’t train people out of anxiety. You can try, but it doesn’t work. You have to create psychological safety first.

And here’s where it gets uncomfortable: creating safety means validating responses you might be labelling as “resistance.”

Why 74% of AI Transformations Fail

BCG research from October 2024: 74% of companies are struggling to achieve any return on AI investment.

When they dug into why: 70% of the implementation challenges stem from people and process issues. 20% from technology problems. Only 10% from the AI algorithms themselves.

Most organisations spend the inverse. 70% on technology. 10% on people.

The organisations that succeed pursued, on average, half as many use cases and invested more equally in depth across all three areas: algorithms; technology and data; and people, meaning training and change management.

Your AI transformation budget should reflect where the problems are.

Not tool procurement. People.

Training on how to identify bias. How to test AI outputs. How to know when to trust the tool and when to revert to manual workflows. How to document failure so the next person doesn’t repeat it.

Governance that doesn’t strangle experimentation. Rituals that normalise sharing failures. Structures that pair your enthusiasts with your sceptics so innovation gets stress-tested before it scales.

At Telstra, the projects that invested in behaviour change (genuinely helping people understand not just WHAT was changing but HOW to work differently) were the ones that stuck. The ones that just released tech and documentation? People went through the motions and reverted to old habits within weeks.

Same pattern with AI. If you’re not spending on people and process, you’re setting yourself up to fail.

The Four Types of People on Your Team

When we interviewed 130 designers about AI transformation, four patterns emerged. These aren’t personality types. They’re intelligent responses to genuine uncertainty.

Every single one sees something real that the others miss. Every single one is essential.

You can’t change who people are. You can only help them coordinate.

Early Adopters: The 11pm Experimenter (5-10%)

You’re experimenting with Claude at 11pm.

You’ve built three CustomGPTs this month.

You’re sharing discoveries in Slack that nobody implements.

You’re frustrated your team won’t move faster.

What you’re trying to do: Discover what’s possible before it becomes obvious. Scout the territory so your team doesn’t get left behind.

What you see: Possibilities before they’re obvious. Time savings others can’t imagine. The competitive risk of standing still.

What you need: Permission to explore within boundaries. A channel to share discoveries. Protection from being labelled “reckless.”

Your risk when isolated: You burn out. You create shadow AI usage that violates governance. You move so fast you create disasters that set back the whole team.

What you should do this week:

– Time-box your experiments (trust me, I know the time-sapping vortex here).

– If an AI workflow isn’t delivering quality after three attempts, revert to manual.

– Document what worked and what didn’t. Share it with one person.

Fast Followers: Curious but Overwhelmed (20-30%)

You’re asking “Should I be learning this?” You’ve attended webinars.

You’re curious but paralysed by options. You want practical examples, not hype.

What you’re trying to do: Translate possibility into practical workflow that fits your daily work. Bridge the gap between “cool demo” and “I can use this Tuesday.”

What you see: The gap between possibility and practicality. What’s needed to make innovation actually work. Implementation barriers enthusiasts overlook.

What you need: Translation from possibility to practice. Curated options, not overwhelming choice. Safe spaces to experiment without pressure to be cutting-edge.

Your risk when overwhelmed: Paralysed by options, so you do nothing. You attend training but never implement. You become cynical when demos don’t translate to reality.

What you should do this week:

– Get your team’s Early Adopter to show you how they now get a common task done differently with AI.

– Experiment with this ONE task yourself this week. Be prepared for the result not to match exactly what they showed you (it’s different every time).

– Invest some time to refine it.

– When it works, document it in three sentences. Share it.

Sceptics: Raising Concerns Everyone Ignores (30-40%)

You’re pointing out problems.

Accuracy issues. Bias risks. Privacy concerns.

You’re asking hard questions.

You’ve been labelled “resistant to change.”

What you’re trying to do: Protect quality standards and prevent expensive disasters before they happen. Catch edge cases enthusiasts miss.

What you see: Risks before they become crises. Quality issues enthusiasm overlooks. Ethical concerns that need addressing before deployment. The gap between demos and reality.

What you need: Validation that your concerns are intelligent, not obstructive. A legitimate role in the process. Recognition that asking hard questions is valuable.

Your risk when dismissed: You become an active resister. You stop raising concerns, so risks go unidentified until they explode. You leave, taking your quality standards with you.

What you should do this week:

– Yes, raise your concerns. But also sit with an Early Adopter for 30 minutes.

– Ask them to show you their workflow. Don’t poke holes. Ask questions.

– Look for opportunities to micro-learn.

Guardians: Protecting What Matters (20-30%)

You’re defending what works.

Protecting current processes and quality standards.

Asking “What are we losing?”

You’re resisting AI or avoiding it.

What you’re trying to do: Preserve institutional knowledge and valuable practices that aren’t documented but absolutely critical. Ensure transformation doesn’t discard tacit knowledge that prevents disasters.

What you see: What actually matters in current processes. What would be lost if we move too fast. Institutional knowledge that took years to build. The value in practices that seem “old-fashioned” but prevent disasters.

What you need: Recognition you’re protecting something valuable, not just resisting. Assurance transformation won’t abandon what works. A role in determining what must be preserved.

Your risk when pressured: You become an active resister. You leave, taking institutional knowledge with you. You dig in harder when pushed.

What you should do this week:

– Document three practices your team relies on and WHY they matter.

– Share with a Fast Follower.

– Partner to explore: if we used AI for X, how do we preserve Y?

How They Work Together

Here’s where orchestration gets powerful: pair complementary profiles on the same challenge.

Early Adopter + Sceptic = Innovation Built for Safety

What each does:

Early Adopter: Scouts possibilities, tests AI, identifies time savings

Sceptic: Stress-tests for edge cases, identifies failures, ensures quality

Together: Innovation that won’t create disasters

Fast Follower + Guardian = Change That Lasts

What each does:

Fast Follower: Translates discoveries into practical daily workflow

Guardian: Identifies what must be preserved, documents tacit knowledge

Together: Transformation that’s sustainable, not just fast

The Full Orchestration

Phase 1: Discovery (Early Adopter leads, 1-2 weeks)

Scout possibilities, test the AI tool, identify potential value.

Deliverable: “Tool X could save us 10 hours/week on task Y”

Phase 2: Translation (Fast Follower leads, 1-2 weeks)

Turn the discovery into a practical workflow typical team members can use.

Deliverable: “Here’s how to integrate tool X into our Tuesday workflow”

Phase 3: Stress-Testing (Sceptic leads, 1-2 weeks)

Identify failure modes, document what can’t be lost, set quality standards.

Deliverable: “Tool X fails when [edge cases]. We must preserve [practice]. Here’s the review checklist.”

Phase 4: Implementation (Fast Follower + Guardian lead, 2-4 weeks)

Scale what works, preserve what matters, create sustainable adoption.

Deliverable: “Training complete. Quality checklist integrated. 8/12 team members using it.”

The outcome: Transform workflow to gain AI efficiency whilst preserving quality, managing risk, maintaining team cohesion.

You’re thinking: “My Early Adopter and Sceptic can’t stand each other.” I hear you. That’s why the Monday Conversation comes first. Create safety before pairings.

The Monday Conversation Script

Before you orchestrate, you create safety. The intent is to create space to raise concerns, acknowledge fears, and agree on some shared guardrails for AI use.

“Team, we need to talk about AI. I know there are lots of feelings – excitement, concern, scepticism. All valid. I’m not here to tell you what to think. I’m here to share how we’re navigating this together.

Some of you have concerns. About accuracy, privacy, what this means for careers. I want to be clear: those concerns are intelligent. You’re recognising real risks. We need your critical thinking, not blind enthusiasm.

We don’t succeed by everyone becoming enthusiasts. We succeed by combining different perspectives. Some explore possibilities. Some bridge to practice. Some catch risks. Some preserve what matters. We need all of it.

Look, I can’t promise nothing will change. That’s a lie. But we’re navigating this in a way that values what makes each of you irreplaceable.

Your creativity. Your judgement. Your relationships. Your understanding of our users. That doesn’t go away. It becomes more important.

My job isn’t to get everyone adopting AI quickly. My job is to help you coordinate different perspectives so we get innovation WITH reliability. No one needs to change who they are. So let’s talk about this….”

Where to Go From Here

We’re all building the plane whilst flying it. I’m one step ahead, but I’m still figuring this out.

Here’s what seven cohorts taught me: orchestration isn’t optional. AI isn’t slowing down. Your people can’t sustain fragmented energy. You need them coordinated, not competing.

If You Want Help Planning This Conversation…

I’m offering the next 20 leaders a FREE 30-minute prep session.

I’ve had the Monday Conversation with seven teams. I was nervous as hell, scared of what would come up. I almost made it too academic and lost the room.

If you want to have this conversation but don’t want to wing it alone, I’m offering the next 20 design leaders a 30-minute prep session. We’ll map your team, anticipate the hard questions, and you’ll walk in confident.

I’ve got 8 slots left this week: CALENDAR LINK

Remember, you don’t need everyone to become an Early Adopter.

You need to orchestrate four intelligent responses to the same challenge.

That’s innovation WITH reliability.

RC

Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.

Want more insights like this?

Join 1,000+ design leaders getting weekly insights on trustworthy AI.

Frequently Asked Questions

Why are my senior designers refusing to engage with AI tools?

It is likely not an information problem but an emotional safety problem. AI transformation triggers fear of replacement, anxiety about competence, and career uncertainty.

What are the four team archetypes in AI adoption?

Early Adopters (5-10%) scout possibilities, Fast Followers (20-30%) bridge theory to practice, Sceptics (30-40%) identify risks, and Guardians (20-30%) preserve institutional knowledge. All four are essential.

How much should organisations spend on people vs technology?

Most spend 70% on technology and 10% on people, but 70% of challenges stem from people and process. Successful organisations invest more equally and tackle fewer use cases with greater depth.

What is the Monday Conversation?

A structured team discussion where the leader acknowledges fears, validates concerns as intelligent responses, and frames different perspectives as complementary strengths rather than resistance.