Why Deliberate Friction Is Essential in AI Product Design
Deliberate friction is what separates effective AI products from frustrating ones.
Riley Coleman

Deliberate friction is the intentional placement of steps, choices, or checkpoints in an AI experience that slow the user down just enough to keep them in control. It is the opposite of automating everything. It is designing the moments where a human decision matters more than speed.
A banking app that asks a customer to confirm a transfer to a new payee and suggests starting with a smaller amount first. That is deliberate friction: giving someone a choice that reduces financial risk before they commit. A clinical decision support system that flags moderate diagnostic confidence and asks the doctor whether to order an additional test. That is deliberate friction: minimising errors by keeping a human in the loop at the moment it counts most.
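To make the banking example concrete, here is a minimal sketch of such a checkpoint. The function name, return shape, and prompt wording are hypothetical, not taken from any real banking product:

```python
# Hypothetical sketch of a new-payee checkpoint; names and wording
# are illustrative, not from a real banking product.
def transfer_checkpoint(amount: float, payee_is_new: bool) -> dict:
    """Interpose deliberate friction before a risky transfer commits."""
    if payee_is_new:
        # Pause point: the user confirms, and is offered a lower-risk option.
        return {
            "action": "confirm",
            "prompt": (f"You are sending {amount:.2f} to a new payee. "
                       "Confirm the details, or start with a smaller "
                       "trial amount first?"),
        }
    # Known payee: routine transfers flow through without interruption.
    return {"action": "proceed"}
```

The point is not the branching itself but where it sits: the pause is attached to the one moment where a human decision reduces financial risk.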
These are design decisions. Someone chose to add that step instead of removing it. And the evidence for why is mounting. Research from frontier AI organisations, major consultancies, and our own interviews with 240 designers over 12 months all point to the same pattern: the AI products users trust most are not the fastest. They are the ones that give people a deliberate choice at the right moment.
I have taught this principle across seven cohorts, working with 312+ designers. The teams that build the most trusted AI products are the ones that know where to add friction, not just where to remove it.
The Wisdom Gap: The Best AI Experiences Deliberately Slow You Down
When we design AI interactions that feel seamless and frictionless, we’re actually designing out the most valuable part of the partnership: human wisdom.
The most successful AI system you’ve probably never heard of sits quietly in hospitals across the world, saving lives by doing something our industry would consider heretical: it deliberately slows doctors down.
Epic Systems’ sepsis detection model doesn’t automatically trigger interventions when risk scores elevate. Instead, it alerts nurses, who must then call physicians to discuss what the AI has observed. This “inefficient” process – this moment of human pause – has contributed to measurable reductions in mortality rates whilst maintaining the kind of clinical accountability that comes only from human judgment.
Sepsis AI Detection Model
The AI can spot patterns in vital signs and lab results that human eyes might miss. But the physician understands that this particular patient mentioned their daughter’s wedding next week, that they have a history of presenting symptoms differently than textbooks suggest, or that they’ve been unusually anxious about housing insecurity.
This wisdom, accumulated through years of watching how human beings actually behave, suffer, and heal, cannot be captured in any dataset.
What Epic understood, and what most of our industry is still missing, is that we’re not designing for efficiency anymore. We’re designing for partnership between two fundamentally different forms of intelligence.
And genuine partnership requires something our field has been trained to eliminate: moments where human wisdom can influence the outcome.
The Collaboration Revolution
For decades, we’ve designed interfaces assuming a simple relationship: one intelligent system (the human) using a tool. Users had clear intentions, interfaces provided predictable paths, and systems delivered predetermined responses. We optimised for speed, clarity, and seamless task completion.
But AI changes everything. We’re now orchestrating collaboration between two intelligent systems – human and artificial – each with different strengths, different ways of reasoning, and different forms of understanding. Each learning and adapting through every interaction.
Two intelligent systems collaborating
The human brings irreplaceable gifts: contextual wisdom, emotional intelligence, ethical reasoning, and the deep pattern recognition that comes from lived experience. The AI brings computational power, vast memory, and the ability to spot patterns across enormous datasets without fatigue.
When we design these interactions to feel “seamless,” we’re actually designing out the moments where these different forms of intelligence can inform each other. We’re optimising for efficiency whilst eliminating wisdom.
The Trust Paradox
The data reveals our current approach isn’t working. The latest research from KPMG and the University of Melbourne, surveying over 48,000 people across 47 countries, shows that only 46% globally trust AI systems. In Australia, we rank among the lowest for AI trust – only 30% believe the benefits outweigh the risks.
Here’s the paradox: 66% of those people are using AI regularly, but as familiarity increases, trust decreases. The more people interact with AI, the more sceptical they become. This isn’t a technology problem – it’s a design problem.
We’ve been designing AI interactions using the old playbook: make it fast, make it seamless, hide the complexity. But users don’t need seamless – they need transparency. They don’t need speed – they need understanding. They don’t need to be automated – they need to be partnered with.
In our research, we’ve observed this pattern repeatedly: teams that remove all verification steps in pursuit of speed eventually hit what we call the Confidence Cliff. Initial excitement about AI tools collapses when errors compound undetected, context is lost, and team members lose confidence not just in the AI but in their own judgment. The teams that avoid this cliff are the ones that deliberately design moments of pause – not because they distrust the AI, but because they understand that human oversight is what makes AI outputs trustworthy.
Reframing Friction as Invitation
The most sophisticated AI experiences aren’t eliminating friction – they’re designing it strategically. These aren’t obstacles to efficiency; they’re invitations to wisdom.
Consider Apple Intelligence, which requires users to actively enable AI features through settings rather than having them activated by default. This initial friction establishes something crucial: agency. When someone consciously chooses to enable a feature, they’re not just accepting it – they’re entering into a partnership.
Apple’s setup flow then provides granular controls and transparency dashboards where users can view detailed logs of any processing requests. The genius isn’t in the control mechanisms themselves; it’s in how the friction feels empowering rather than obstructive. Users aren’t being asked to accept terms; they’re being invited to configure a collaboration.
Apple Intelligence setup flow (image thanks to 9to5mac)
Four Invitations to Wisdom
The most effective AI systems create four distinct types of invitation – moments where human understanding can shape outcomes:
1. Invitations to Expertise
These are pause points that specifically request human knowledge to improve AI decisions. Rather than AI making recommendations in isolation, the system explicitly asks for human insight.
The AI isn’t just accepting human input; it’s actively seeking the kind of contextual understanding that only comes from professional experience.
2. Invitations to Proportional Care
Higher-consequence decisions receive increased thoughtful process, whilst routine choices flow smoothly. The system demonstrates that it understands when stakes matter.
JPMorgan Chase’s commercial loan processing demonstrates this beautifully. Their AI reviews loan agreements that would otherwise consume an estimated 360,000 hours of legal work annually, but requires human legal experts to review all flagged contract clauses before implementation. The system creates automatic approval pathways for routine decisions whilst ensuring human wisdom guides complex ones.
This isn’t about adding bureaucracy – it’s about the system showing proportional respect for decision importance.
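As a sketch of the underlying pattern, proportional care amounts to routing each decision to a review path matched to its stakes. The categories and the confidence threshold here are assumptions chosen for illustration, not JPMorgan's actual rules:

```python
# Hypothetical sketch: routing AI outputs to review paths by stakes.
# The enum values and 0.9 threshold are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Stakes(Enum):
    ROUTINE = 1   # e.g. iteration variants, brainstorm output
    MODERATE = 2  # e.g. user-facing copy
    CRITICAL = 3  # e.g. safety-critical or financial decisions

@dataclass
class Decision:
    description: str
    stakes: Stakes
    model_confidence: float  # 0.0 to 1.0

def review_path(decision: Decision) -> str:
    """Return a friction level proportional to what is at stake."""
    if decision.stakes is Stakes.CRITICAL:
        return "human_expert_review"      # always pause for human wisdom
    if decision.stakes is Stakes.MODERATE and decision.model_confidence < 0.9:
        return "confirmation_checkpoint"  # lightweight pause point
    return "auto_approve"                 # routine choices flow smoothly
```

Notice that friction scales with consequence, not with the AI's confidence alone: a critical decision gets human review even when the model is certain.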
3. Invitations to Accountability
These processes create space for reflection whilst building defensible decision trails. They help users think through their reasoning whilst documenting choices for future learning.
The question serves multiple purposes: it respects human judgment, captures institutional knowledge, and creates opportunities for the AI to learn from human wisdom about factors it might not have considered.
4. Invitations to Institutional Memory
Moments that help organisations learn from individual decisions, turning single interactions into systematic wisdom.
These aren’t just feedback loops; they’re recognition that the most valuable insights often emerge from the intersection of AI pattern recognition and human contextual understanding.
The Temporal Dimension of Respect
Sometimes the most respectful friction is invisible. ChatGPT’s typing indicators and progressive text revelation create deliberate delays that transform what could feel like mechanical exchange into something resembling thoughtful conversation.
User research indicates this temporal friction encourages more thoughtful engagement with AI responses. The gradual appearance gives users time to process information incrementally rather than being overwhelmed by walls of instant text.
But what if we pushed further?
What if the thinking time weren’t just simulated, but genuine – the AI actually using those moments to consider multiple approaches, to genuinely reflect? The delay wouldn’t be theatre; it would be real cognitive work made visible, demonstrating the AI’s own investment in getting the decision right.
Beyond Explanation: Understanding
Duolingo Max shows how explanation friction can transform automated feedback into genuine learning opportunities. After each exercise, users can request AI-powered explanations tailored to their specific mistake or success.
Duolingo Max
This addresses a fundamental challenge in AI-assisted learning: the tendency to progress without understanding. By creating moments where learners pause and engage with reasoning, Duolingo reports increased comprehension and retention.
The best explanation friction doesn’t just show AI reasoning; it creates dialogue between different forms of intelligence.
Designing Invitations, Not Obstacles
The difference between meaningful and annoying friction follows clear principles:
Meaningful invitations:
- Leverage uniquely human capabilities – judgment, creativity, cultural context, emotional intelligence
- Arrive at natural transition points, respecting flow states
- Offer clear value that users immediately understand
- Feel culturally appropriate to each user's preferences for involvement
- Allow calibration based on user experience and trust development
Obstacle patterns to avoid:
- Arbitrary confirmations that don’t prevent actual problems
- Requesting information the system already has
- Interrupting during deep focus
- One-size-fits-all friction regardless of user expertise
- Safety theatre that adds process without adding protection
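The calibration principle above can be sketched as a simple policy. The profile fields and thresholds here are hypothetical, chosen only to show the shape of the idea:

```python
# Hypothetical sketch of friction calibrated to trust development.
# Fields and thresholds are illustrative assumptions, not research findings.
from dataclasses import dataclass

@dataclass
class UserProfile:
    sessions_completed: int  # rough proxy for familiarity with the tool
    override_rate: float     # fraction of AI suggestions the user rejects

def friction_level(profile: UserProfile) -> str:
    """More invitations while trust develops; fewer once it is earned."""
    if profile.sessions_completed < 5 or profile.override_rate > 0.3:
        return "high"    # full checkpoints: trust is still being built
    if profile.sessions_completed < 20:
        return "medium"  # checkpoints only at natural transition points
    return "low"         # friction reserved for high-stakes decisions
```

A policy like this avoids the one-size-fits-all pattern: a newcomer and a sceptic both get more invitations, while an experienced user who rarely overrides the AI is not interrupted mid-flow.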
The Implementation Challenge
For design leaders wrestling with AI adoption, this represents both an opportunity and a transformation challenge.
Your teams need new capabilities:
- Understanding partnership dynamics rather than just user flows
- Designing for trust development over time, not just task completion
- Creating transparency mechanisms that feel empowering, not exposing
- Balancing efficiency with wisdom across different decision stakes
- Measuring relationship quality alongside traditional metrics
If your team is navigating these challenges, we work with design leaders to build deliberate friction into their AI design practice.
What This Means for Your Work
Start with one high-anxiety moment in your current AI experience. Where do users feel most uncertain about AI decisions?
Then ask:
- What human wisdom could improve this outcome?
- How can we invite that wisdom without feeling obstructive?
- What would make users feel respected as partners in this decision?
- How do we demonstrate proportional care based on decision stakes?
Remember: you’re not starting from neutral trust. Only 30% of Australians believe AI’s benefits outweigh its risks. With scepticism increasing alongside familiarity, every interaction is an opportunity to demonstrate that your AI system understands its own limitations and values human partnership.
The Future of Interface Design
What we’re really discussing is a new form of honesty in interface design. For too long, we’ve created interfaces that pretend to know more than they do, that hide limitations behind the illusion of seamless automation.
This approach suggests something more profound: interfaces that are genuinely transparent about their capabilities, that invite partnership rather than demanding faith. This isn’t just better design – it’s more ethical, respectful design. It treats users as intelligent partners rather than passive consumers.
The question isn’t whether you can afford to slow down; it’s whether you can afford not to build genuine partnership. Because in a world where trust in AI is declining even as usage increases, the organisations that learn to design for wisdom, not just efficiency, will build the relationships that endure.
The most revolutionary interfaces have always felt slightly uncomfortable at first. Perhaps interfaces designed for human-AI collaboration should feel deliberately unfamiliar – not to confuse, but because familiar patterns might prevent the kind of thinking these new relationships require.
What’s one moment in your AI experience where you could test inviting human wisdom this week?
Frequently Asked Questions
What is deliberate friction in AI design?
Deliberate friction is the intentional introduction of human verification steps into AI-assisted workflows. Based on 240 interviews with design practitioners, it distinguishes between friction that serves a purpose (catching errors, preserving judgment) and friction that merely slows work without adding value. Deliberate friction includes review gates, approval checkpoints, and structured pause points where human expertise validates or corrects AI output.
How is deliberate friction different from bad UX?
Bad UX friction is accidental, confusing, and diminishes user agency. Deliberate friction is intentional, transparent, and preserves agency by creating required moments for human judgment. Research with 312+ designers found that teams that explicitly adopted deliberate friction reported higher confidence in their AI tools than teams using opaque approval workflows.
When should design teams use deliberate friction?
Deliberate friction should be calibrated to task risk and team trust levels. High-stakes outputs (user-facing copy, accessibility decisions, safety-critical features) warrant more friction. Low-stakes exploration (brainstorm generation, iteration variants) warrant less. The verification threshold varies by role, tool maturity, and team expertise.
What happens when design teams skip deliberate friction?
In research with 312+ designers across 7 cohorts, teams that removed all verification steps experienced higher error propagation, team misalignment about AI output quality, and erosion of designer confidence in AI tools. This pattern creates what is identified as the Confidence Cliff: the moment initial excitement about AI tools collapses when errors are not caught early.
How do you design deliberate friction that does not feel bureaucratic?
Deliberate friction works best when it is transparent about why it exists. Rather than generic approval gates, teams benefit from named verification steps tied to specific risks. The most effective implementations centre this transparency: designers understand not just what to verify, but why verification matters at that specific moment.