How can organizations recover trust after an AI failure?

Quick Answer: Organizations recover trust through immediate accountability (a CEO-level response within 24 hours), expectation recalibration (renaming "Make Designs" to "First Draft"), and community collaboration (inviting critics into beta testing). Figma's Trust Journey shows that decade-long trust equity enables recovery where competitors fail.

Key Characteristics:
  • Take personal accountability within 24 hours, identifying root cause rather than deflecting
  • Recalibrate expectations through honest naming and capability framing
  • Transform critics into collaborators through community involvement
  • Trust recovery is ongoing—monitor satisfaction gaps continuously
Real Example:

When Figma's Make Designs feature failed publicly, CEO Dylan Field took personal responsibility immediately, renamed the feature to "First Draft" to set appropriate expectations, and invited users into beta testing. Result: 46% revenue growth and 250% IPO first-day gain, though designer satisfaction remained 13 points below developers.

Article

How public humiliation made Figma stronger

This week, I’m sharing a deep dive into one of the most instructive case studies in AI trust I’ve seen – Figma’s Make Designs crisis and recovery.

Riley Coleman
December 01, 2025·21 min read


It matters because the frameworks they used (whether they knew it or not) are exactly what we teach in the Human-Centred Trustworthy AI course.

If you’re navigating AI adoption in your team right now, this is the playbook.

The Moment – Figma Make Launch

July 1, 2024. Andy Allen posts a tweet that will accumulate 1.5 million views.¹ Two screenshots. Side by side. On the left, Figma’s brand new “Make Designs” AI feature generating a weather app.

On the right, Apple’s iOS Weather app. They’re nearly identical.

Within hours, the design community erupts. Within days, the feature is dead.

But here’s what makes this story worth telling: Figma didn’t just survive this. Eighteen months later, their AI features are driving enterprise adoption. Their IPO delivered a 250% first-day gain.²

How did public humiliation become a trust accelerator?

The answer matters for every designer navigating AI adoption, and every leader wondering what happens when their AI inevitably stumbles.

The Wound

It wasn’t just that an AI feature produced poor output. Features fail all the time.

It was that trust was broken. Designers had believed Figma when they said AI would augment their craft, not undermine it. They’d sat through that Config keynote uncertain what this would mean for them as designers, yet genuinely excited about what was coming.

And then they used Make Designs, and it spat out Apple’s Weather app.

Suddenly, they felt foolish. Foolish for trusting it. Foolish for being excited. Foolish for not seeing this coming.

Worse – they felt exposed.


What if they’d shipped that output to a client?

What if they’d presented it in a stakeholder meeting?

The tool they trusted had nearly made them complicit in plagiarism.

That’s the anatomy of trust betrayal in AI. It’s not about the feature. It’s about what the failure makes the user feel about themselves.

Did this tool just reveal that I don’t know what I’m doing?

Did I look naive for believing the hype?

Am I the problem here?

These are the 3am questions. And they’re why trust recovery requires more than a hotfix.

The Response

Within 24 hours – by July 2 – CEO Dylan Field posted a six-tweet thread.³

His words were remarkably direct:

“Ultimately it is my fault for not insisting on a better QA process for this work and pushing our team hard to hit a deadline for Config.
I hate missing the mark, especially on something that I believe is so fundamentally important to the future of design.”⁴
No deflection. No blame-shifting to contractors or AI providers. No vague promises to “review feedback.” Personal accountability from the CEO, with specific admission of the root cause: rushing for a deadline.

The technical explanation revealed what actually happened. Make Designs used off-the-shelf AI models (GPT-4o and Amazon Titan) combined with proprietary design systems Figma had commissioned.⁵ In the week leading up to Config, new components were added without proper vetting.

As VP of Product Design Noah Levin later explained: “A few of those assets were similar to aspects of real world applications.”⁶

The technology worked. The process failed. And Field owned it.

Meanwhile, Adobe was handling their own design tool crisis very differently. Adobe XD had frustrated users for years – slow performance, ignored feedback, deteriorating trust. Their response? Silence, then discontinuation in early 2024.⁷

No accountability. No invitation to collaborate on solutions. Just abandonment.

Same industry. Same moment. Opposite approaches. The contrast would prove instructive.

The Rebuild

For nearly three months, Figma went largely quiet. No premature announcements. No timeline promises. Just work.

What emerged in September 2024 wasn’t a patch. It was structural transformation:⁸

Complete design system reconstruction. Rather than fixing problematic assets, Figma built entirely new proprietary systems. The resulting “Simple Design System” contains 372+ atomic components – all publicly available in Figma Community for anyone to inspect.⁹ Four distinct libraries giving users genuine choice.

Expectation reframing.

The feature’s name changed from “Make Designs” to “First Draft.” This wasn’t rebranding – it was repositioning. Figma’s communications consistently emphasised: “While AI can offer a jumping off point, only designers can craft a meaningful experience from a first draft. That craft is a competitive advantage.”¹⁰

In Trust Journey terms, Figma was redesigning their Inception experience – the critical moment when users first encounter an AI feature and form expectations about what it can do.

The original Make Designs had failed at Inception. It promised too much. Users expected magic and got plagiarism. The gap between expectation and reality created betrayal.

First Draft addressed this structurally. The name itself calibrates expectations. “First Draft” says: This is a starting point. You’re still the designer. The craft is still yours.

The Invitation

But structural change wasn’t enough. Figma did something rarer: they invited the community into the solution. From TechCrunch’s coverage during the crisis: “The company says it is now working with the community to address concerns and improve the tool before its wider release.”¹¹

This wasn’t just crisis PR. By July 2025, Figma’s official announcement confirmed they’d honoured that commitment: “We’ve been working closely with our community to refine Make, incorporating feedback from beta testers to ensure it meets the needs of designers at every stage of the creative process.”¹²

And crucially, users noticed.

One designer on Reddit observed: “Glad they’re listening to feedback this time.”¹³

That phrase – “this time” – contains the entire trust recovery story. It acknowledges past imperfection. It credits genuine change. And it signals the designer chose to re-engage rather than walk away.

Compare this to Adobe XD’s final months. No community involvement. No beta programmes for improvement. Just a product quietly discontinued, leaving users to migrate elsewhere without guidance. Adobe XD generated only $15-17 million annually⁷ – a fraction of what Figma was building – because they’d never invested in the trust equity that makes recovery possible.

Adobe had trust debt. When they needed user goodwill to weather the storm, the account was empty.

Figma had trust equity. When they needed grace, their community extended it – because Figma had spent a decade earning the right to ask.

The Evidence

Did it work? The answer is a nuanced ‘Yes, and…’ – which is itself instructive.

The commercial signals were strong (they’d make any CEO happy):

  • Revenue grew 46% year-over-year by Q1 2025¹⁴
  • Operating margin improved from 117% loss to 17% profit¹⁴
  • The July 2025 IPO opened at $85 (from $33) and closed at $115.50—a 250% first-day gain²
  • 30% of customers generating $100K+ ARR were using Figma Make weekly by Q3 2025¹⁵

But designer-specific trust gaps persisted:

  • Figma’s 2025 AI Report revealed 69% designer satisfaction versus 82% developer satisfaction – a 13-point gap¹⁶
  • Only 31% of designers use AI for core design work¹⁶
  • Only 32% of all respondents said they could “rely on the output of AI”¹⁶

And new challenges emerged:

  • A November 2025 class action lawsuit alleges improper data use for AI training¹⁷
  • Nielsen Norman Group testing found First Draft outputs “generic” with “poor hierarchy”¹⁸

This is the honest picture of trust recovery. It’s not a destination. It’s a journey with ongoing work, new disruptions, and continuous recalibration.

In Trust Journey terms, Figma successfully navigated Disruption (the crisis) and Restoration (the recovery). But they remain in active Cultivation – building consistent reliability through ongoing transparency and responsiveness. Trust is never “fixed.” It’s maintained.

The Stakes: Why This Matters Beyond Figma

Here’s the business reality your product managers and executives need to hear:

In an era when competitors can replicate your feature set in weeks, the only moat left is customer loyalty.

The speed of disruption has accelerated dramatically. Features that took teams of hundreds to build can now be approximated by well-funded competitors with access to foundation models. Your product differentiation has a shorter half-life than ever.

The only thing competitors can’t copy quickly? Your users’ trust.

And the research quantifies exactly what’s at stake:

The trust gap is severe:

University of Melbourne & KPMG (2025): Only 46% of people trust AI systems, despite 66% using them regularly¹⁹

Put another way – 54% of potential users distrust your AI on face value. You start in a deficit, and you need to prove why your AI should be trusted.

The perception gap is dangerous: According to PwC Consumer Intelligence Series

87% of executives believe customers trust their companies²⁰
Reality: Only 30% of customers agree²⁰

That 57-point gap is where churn hides until it’s too late.

The investment returns are asymmetric:

Deloitte’s Q2 2024 State of Generative AI research found that organisations taking trust-related actions when implementing AI were 66% more likely to report achieving their expected benefits than those who didn’t prioritise trust²¹

Put differently: trust isn’t just ethics – it’s operational effectiveness.

Making the Case: Language for Your Stakeholders

Designers often struggle to advocate for trust investment because they’re speaking a different language than their product managers and executives.

Here’s how to translate:

When they ask about adoption metrics, talk about Inception:

“Our AI onboarding needs to calibrate user expectations to actual capabilities. Right now, there’s a gap between what users expect and what the feature delivers. That gap creates disappointment, reduced usage, and negative word-of-mouth.
If we redesign our Inception experience to set appropriate expectations upfront – like Figma did with ‘First Draft’ positioning – we can increase genuine adoption by reducing the expectation-reality gap.
Projected impact: Improved expectation calibration could increase 30-day AI feature retention by 5-10% and reduce related support tickets by 15-25%.”

When they ask about retention metrics, talk about Recovery:

“We don’t have a trust recovery protocol for when our AI fails users. Currently, when something goes wrong, users feel foolish for trusting us—and they leave quietly.
If we design graceful failure states with transparent explanations and clear recovery paths – like Figma did with their 24-hour accountability response – we can retain users through the inevitable failures.
Projected impact: A trust recovery protocol could prevent a significant portion of the churn typically caused by AI trust breaches.”

When they ask about competitive differentiation:

“Our competitors can copy our features. They can’t copy how we make users feel.
Figma’s position – with 75-90% of product designers using them as their primary tool²² – wasn’t built on features alone. Adobe had features too. It was built on trust equity that took a decade to accumulate. When Figma’s AI failed publicly, that trust equity gave them grace to recover. Adobe XD had no such buffer, and they’re gone.
Trust is the only moat our competitors can’t replicate quickly. It requires consistent behaviour over time.
Deloitte shows organisations investing in trust are 66% more likely to achieve expected AI benefits.²¹ That’s not ethics. That’s arithmetic.”

Two Interventions That Build and Retain Trust

If you take nothing else from Figma’s journey, take these two design interventions:

Intervention 1: Inception Design (Builds Trust, Drives Adoption)

The Problem: Most AI features fail at first contact because they over-promise. Users expect magic, encounter limitations, feel foolish, and disengage.

The Intervention: Design your onboarding to calibrate expectations before first use.

What this looks like:

  • Honest capability framing: “First Draft” not “Make Designs”
  • Visible limitations: Show what the AI can’t do alongside what it can
  • Appropriate confidence signals: Don’t present uncertain outputs as definitive
  • User control: Let users choose fidelity levels (like Figma’s four libraries)
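
These framing and confidence-signal choices can be sketched in code. The following TypeScript is a minimal illustration only – the labels, thresholds, and field names are hypothetical assumptions, not Figma’s implementation or any real product’s API:

```typescript
// Hypothetical sketch: calibrate how an AI output is framed to the user
// based on model confidence. Labels, thresholds, and field names are
// illustrative assumptions, not any real product's API.

type Framing = {
  label: string;  // what the user sees above the output
  caveat: string; // a visible limitation, shown alongside the result
};

function frameOutput(confidence: number): Framing {
  if (confidence >= 0.8) {
    // Strong output: still framed as a draft, never as a finished design.
    return {
      label: "First draft",
      caveat: "A starting point. Review and refine before shipping.",
    };
  }
  if (confidence >= 0.5) {
    // Middling output: framed as inspiration, not a deliverable.
    return {
      label: "Rough suggestion",
      caveat: "Low-confidence output. Treat as inspiration only.",
    };
  }
  // Weak output: don't present it as a design artefact at all.
  return {
    label: "Exploration",
    caveat: "The model is unsure here. Consider starting from scratch.",
  };
}
```

The point of the sketch is structural: the honest label travels with the output, so the interface can never present a low-confidence result as definitive.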

How to Pitch It:

“If we reduce the expectation gap at Inception, we could see 5-10% improvement in 30-day AI feature retention and 15-25% reduction in related support tickets. These are wedge metrics – early indicators that we’re building the trust foundation for deeper adoption.”

Intervention 2: Recovery Design (Retains Trust, Prevents Churn)

The Problem: When AI fails – and it will, because AI systems are predictive by nature – most products offer no recovery pathway. Users feel foolish, lose confidence, and quietly leave.

The Intervention: Design graceful failure states that maintain user dignity and offer clear paths forward.

What this looks like:

  • Immediate acknowledgment: Recognise the failure quickly and clearly
  • Transparent explanation: Explain what went wrong in accessible language
  • User agency: Provide alternative pathways and next steps
  • Learning signals: Show how feedback improves future performance
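
As a sketch of how those four elements might fit together, here is a minimal TypeScript illustration – the failure kinds, field names, and copy are all hypothetical assumptions, not any real product’s API:

```typescript
// Hypothetical sketch of a graceful failure state for an AI feature.
// Failure kinds, field names, and user-facing copy are illustrative.

type FailureKind = "timeout" | "low_quality" | "policy_block";

type RecoveryState = {
  acknowledgment: string;  // immediate, plain-language recognition
  explanation: string;     // what went wrong, in accessible terms
  alternatives: string[];  // user agency: concrete next steps
  feedbackPrompt: string;  // learning signal: how feedback helps
};

function buildRecoveryState(kind: FailureKind): RecoveryState {
  // Shared elements: acknowledge fast and preserve the user's dignity.
  const base = {
    acknowledgment: "That didn't work. This is on us, not you.",
    feedbackPrompt:
      "Tell us what you expected; reports like yours shape future drafts.",
  };
  switch (kind) {
    case "timeout":
      return {
        ...base,
        explanation: "The model took too long to respond.",
        alternatives: ["Retry the generation", "Start from a blank frame"],
      };
    case "low_quality":
      return {
        ...base,
        explanation: "The draft didn't meet the quality bar, so we held it back.",
        alternatives: [
          "Regenerate with a more specific prompt",
          "Browse the template library instead",
        ],
      };
    case "policy_block":
      return {
        ...base,
        explanation: "The request matched content this tool doesn't generate.",
        alternatives: ["Rephrase the request", "Read the usage guidelines"],
      };
  }
}
```

Every failure path answers the user’s 3am question (“Am I the problem here?”) with an explicit no, then hands control back to them.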

How to Pitch It:

“We can’t eliminate AI failures, but we can design for graceful recovery. Research shows trust can actually rebound higher than baseline when recovery is handled well.²³ A recovery protocol isn’t just damage control—it’s relationship building.”

Your Trust Journey

Figma’s story maps onto a framework every AI relationship moves through:

Figma’s decade of community-building created trust equity during Inception and Cultivation. That equity provided resilience during Disruption. Their response – accountability, transparency, community involvement – enabled genuine Restoration.

But restoration isn’t the end. They’re now back in Cultivation, with new challenges (the lawsuit, lingering designer scepticism) requiring ongoing attention.

Trust isn’t a milestone. It’s a practice.

The Question You’ll Face

At some point – probably soon – your AI feature will fail publicly. Maybe not as spectacularly as Figma’s. But it will happen. AI systems are prediction machines, and predictions are sometimes wrong.

When that moment comes, you’ll face a choice:

Handle it like Adobe XD: Defensively. Slowly. Without community involvement. Watch trust erode until there’s nothing left.

Handle it like Figma: Take accountability fast. Explain transparently. Invite users into the solution. Make structural changes, not cosmetic ones. Earn back what was lost—and potentially more.

The companies that thrive in the AI era won’t be those with the most advanced models. They’ll be those who’ve mastered the trust journey – building equity before they need it, recovering gracefully when they fail, and treating trust as the competitive moat it is.

Because in a world where competitors can copy your features overnight, the only thing they can’t steal is how you make people feel.

Go Deeper: The Next Cohort Is Now Open

The Trust Journey framework in this article – Inception, Validation, Cultivation, Disruption, Restoration – is one of the foundational frameworks of how we approach AI adoption in Human-Centred AI: Designing Systems People Trust.

If you’re navigating AI in your team right now, this is where we turn frameworks into practice:

Translating abstract AI Ethics into design decisions and new design practices

  • Applied Ethics Workshops that engage peer debates and discussions using real case studies of AI failures like Figma’s
  • Design protocols for each phase of the AI Trust Journey
  • Effective Human+AI collaboration patterns
  • Enhanced UX Research & Testing protocols for validating non-deterministic AI experiences.

The next cohort starts 9th Feb 2026.

40% BLACK FRIDAY SALE

ENDS Wednesday 3rd Dec Midnight

Join the cohort →

No one navigates this alone.

P.S. If this article resonated, would you forward it to one designer or design leader who’s navigating AI adoption right now? The more people thinking carefully about trust, the better AI experiences we’ll all build.

Riley Coleman is an AI Adoption Coach helping design teams transform their relationship with AI – from anxiety to fluency. A human-centred designer with 20+ years experience who’s been in the trenches, made the mistakes, and now lights the path for others.

The insights and frameworks in this article are my own, developed through working with design teams navigating AI adoption. I acknowledge it was written in partnership with Claude AI.

¹ Andy Allen (@asallen), X/Twitter, July 1, 2024. Tweet: "Figma AI looks rather heavily trained on existing apps..." View count confirmed across multiple sources including TechCrunch and The Verge coverage.

² Multiple sources on Figma IPO July 2025, including financial news coverage. IPO priced at $33, opened at $85, closed at $115.50.

³ Sarah Perez, "Figma disables its AI design feature that appeared to be ripping off Apple's Weather app," TechCrunch, July 2, 2024.

⁴ Dylan Field (@zoink), X/Twitter thread, July 2, 2024. Direct quotes from six-tweet response.

⁵ Figma official documentation on AI approach; confirmed in TechCrunch coverage and Figma blog posts.

⁶ Noah Levin, VP Product Design, Figma. Quote from Figma retrospective blog post, September 2024.

⁷ Bloomberg, "Adobe Ends XD Effort After Figma Deal Fell Apart," January 30, 2024. Revenue figure ($15-17M annually) cited in The Register, January 31, 2024, sourcing Adobe general counsel Dana Rao.

Figma Recovery & Rebuild

⁸ Figma Blog, "Building a better First Draft for designers," September 2024.

⁹ Research compilation from Figma documentation; Simple Design System available in Figma Community.

¹⁰ Figma official communications on First Draft positioning, 2024-2025.

¹¹ TechCrunch, July 2024. Direct quote from coverage.

¹² Figma Blog, "Everything you need to know about Figma Make's general availability," July 2025. Direct quote.

¹³ Reddit r/FigmaDesign, "Mixed feelings on Figma Make as a UX Designer," July 2025. User comment.

Business & Financial Metrics

¹⁴ SQ Magazine, "Figma Statistics," August 2025. Revenue growth and margin improvement data.

¹⁵ Research compilation from Figma quarterly communications.

¹⁶ Figma, "2025 AI Report: Perspectives from Designers and Developers," 2025. Survey of 2,500 respondents.

¹⁷ Khan v. Figma Inc., Class Action Complaint, November 2025.

¹⁸ Nielsen Norman Group, Figma AI evaluation, May 2025.

Trust Research

¹⁹ University of Melbourne & KPMG, "Building Trust in AI," 2025. Survey finding that 46% trust AI systems while 66% use them regularly.

²⁰ PwC Consumer Intelligence Series, "Trust: A new currency for business," 2022. Executive vs. customer trust perception gap.

²¹ Deloitte, "Trust in AI boosts benefits," Deloitte Insights, July 2024. Analysis of Q2 2024 State of Generative AI survey finding organisations taking trust-related actions were 66% more likely to achieve expected benefits.

²² Multiple sources including ElectroIQ market research; figure represents designers using Figma as primary tool, not overall market share.

²³ Kahr et al., "Trust recovery after AI errors," International Conference on Intelligent User Interfaces, 2024. Research on trust rebound dynamics following system failures.


Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.


Want more insights like this?

Join 1,000+ design leaders getting weekly insights on trustworthy AI.

Frequently Asked Questions

What happened with Figma's Make Designs AI feature?

A designer posted screenshots showing output nearly identical to Apple's iOS Weather app. The tweet accumulated 1.5 million views. Within days, Figma pulled the feature.

How did Figma recover trust?

CEO Dylan Field took personal accountability within 24 hours. Figma rebuilt their design system from scratch with 372+ publicly inspectable components and renamed the feature to 'First Draft' to recalibrate expectations.

Why did Figma's trust recovery succeed where Adobe XD failed?

Figma had a decade of trust equity from community investment. Adobe XD had trust debt: when they needed grace, the account was empty.

What do Figma's post-crisis metrics reveal?

Revenue grew 46% year-over-year. However, designer-specific trust gaps persisted: 69% designer satisfaction versus 82% developer satisfaction, only 31% using AI for core design work.