What is the business case for investing in trustworthy AI design?

Quick Answer: 70% of AI initiatives fail due to poor user adoption and trust issues, not technical flaws. Investment of $400K-$2.5M annually prevents $2-5M revenue loss from customer churn. Top clients now demand proof of trustworthy AI implementation in procurement, especially in regulated sectors.

Key Characteristics:
  • 70% of customers are open to switching brands after a single poor AI interaction
  • Australia's AI trust dropped 16 points below global average (KPMG)
  • Trust Equation: Competence + Reliability + Transparency + Value Alignment
  • Most organizations lack comprehension, agency, and failure-mode testing protocols
Real Example:

A government contract was lost when an agency failed to demonstrate transparency guarantees. Banking executives now demand explainability and compliance proof before procurement. The Four Design Pillars (Human Agency, Trust Formation, Bias Mitigation, Comprehensive Testing) address these requirements.

Business Case for Trustworthy AI Design: ROI & Implementation Guide

Build the business case for trustworthy AI design with proven ROI frameworks.

Riley Coleman
August 31, 2025·9 min read

A business case for trustworthy AI design has become essential for modern design professionals. This guide explores how to build that case and put trustworthy AI design principles into contemporary design practice.

The Shift: Why Top Clients Now Demand Trustworthy AI

Over the past three months, I’ve spoken with design leaders across Melbourne… One truth cut across every conversation: there is now an undeniable business case for trustworthy AI design. The way we design for AI is changing, and the teams who don’t adapt will lose more than contracts – they’ll lose credibility.

While some design teams are shrinking, Melbourne’s most forward-thinking design leaders are making a quiet but critical pivot. They’re not just adding headcount; they’re strategically recruiting for roles like:

  • Human-AI Interaction Designers
  • Ethical/Responsible AI User Researchers
  • Human-Centred AI Designers

It’s not just a local trend – they are following in the footsteps of global leaders.

The Multi-Million Dollar Wake-Up Call

Why the pivot? Not because it’s trendy. Not because it’s the “right thing to do.” But because their biggest clients are making trustworthy AI a dealbreaker.

One agency CEO told me: “We lost a seven-figure government contract last month. Not on creative, not on price. But because we couldn’t demonstrate concrete proof of how we’d ensure their AI-powered service would be transparent and fair to all citizens. That was our wake-up call.”

An in-house design director at a major bank told me: “Our execs are asking how we’re ensuring it’s ethical, explainable, and compliant. We’re scrambling to answer.”

“We thought our senior UX people could just be moved onto designing AI,” another admitted. “But it’s a different paradigm. Designing for AI means we can’t just focus on functionality and ease of use. That’s table stakes for any digital product. It’s about making it trustworthy.”

The shift is clear: clients in regulated industries – finance, healthcare, and government – are no longer just asking about AI. They’re demanding proof that it’s designed responsibly. And with new government RFP requirements now mandating ethical AI governance, the pressure is only increasing.

Clients Are Demanding Proof of Trustworthiness

  • Finance: ASIC and APRA are already enforcing stricter AI governance. Clients are asking:
    • “How do you ensure users can challenge or override your AI’s decisions?”
    • “Can you show us how your system aligns with human values – not just technical specs?”
  • Healthcare: The TGA’s regulations require user-centric AI designs that prioritise safety, transparency, and recourse.
  • Government: RFPs now include ethics and oversight clauses. Agencies that can’t prove their AI is human-centred won’t make the shortlist.

Let’s be real: None of them are doing this out of the goodness of their hearts. The economics are clear: Investing in trustworthy AI isn’t just the right thing to do – it’s the only way to survive.

Design is the difference-maker: a BCG report found that 74% of companies struggle to achieve and scale value from their AI investments, and 70% of AI initiatives fail as a result of poor user adoption and trust issues – not technical flaws.

Those that succeed? They invest in human-AI collaboration design from day one.

This isn’t about explainability checkboxes. It’s about designing for human + AI collaboration in an AI-driven world. And that relies on a simple premise: For humans and AI to collaborate effectively, both parties need to understand and trust each other.


The Human-Centred AI Reality Check

Let’s break down what this shift requires from a design perspective.

1. Human Agency & Oversight: Who’s Really in Control?

AI shouldn’t just explain decisions. It should empower users to understand its decision-making and challenge, override, or opt out of them. Too many designs treat AI as an infallible oracle rather than a collaborative tool.

Questions to ask your team – do your designs give users:

  • Simple explanations of the AI’s decisions and the data it used? Example: “We denied your loan based on [X, Y, Z data points].”
  • Clear ways to dispute AI decisions? Example: “This recommendation doesn’t fit my needs, let me adjust it.”
  • Opt-out paths for sensitive use cases? Example: “I’d rather talk to a human for this.”
  • Control over their data? Example: “Here’s what the AI knows about you – and how to limit it.”
  • A degraded but functional experience for users who opt out or limit the AI’s data use – for those who prioritise privacy over full capabilities?

If a user feels railroaded by your AI, you’ve failed the human-centred test.
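The agency checklist above can be made concrete as a data model. Here is a minimal Python sketch, assuming a hypothetical `AIDecision` payload – the class and field names are illustrative, not from any specific framework:

```python
from dataclasses import dataclass, field

@dataclass
class AIDecision:
    """Hypothetical payload a human-centred AI surface might expose,
    covering the agency checklist: explanation, dispute, opt-out."""
    outcome: str                     # e.g. "loan_denied"
    explanation: str                 # plain-language reason for the decision
    data_points_used: list[str]      # what the AI actually looked at
    can_dispute: bool = True         # user can challenge the decision
    human_fallback: bool = True      # "talk to a human" path exists
    overrides: list[str] = field(default_factory=list)  # user adjustments

decision = AIDecision(
    outcome="loan_denied",
    explanation="Income-to-repayment ratio below our threshold.",
    data_points_used=["declared_income", "existing_repayments"],
)
assert decision.can_dispute and decision.human_fallback
```

The point of the structure is that dispute and fallback paths are first-class fields of every decision, not an afterthought bolted onto the UI.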

2. The Trust Equation: The Core of Human-AI Collaboration

Trust isn’t a nice-to-have in AI collaboration – it’s the operating system everything else runs on.

Research into user trust in AI reveals that users don’t trust AI because it’s explainable – they trust it because it feels useful, accountable, fair, and aligned with their needs.

TRUSTWORTHY AI = Competence + Reliability + Transparency + Value Alignment

Key Insight | Competence ≠ Capability

Competence = Actual Capabilities + Communicated Capabilities + Perceived Capabilities

So if you don’t set the right expectations at the beginning – allowing the user to form an accurate mental model – trust will likely fail. And fail fast.

Reliability = Does It Do What It Says + Can I Get the Outcome I Want?

Generative AI is inherently variable, but that doesn’t mean it can’t be reliable in the ways that matter. Can users steer the AI toward their goals? Do users understand the rules of how the AI behaves? Can they iterate quickly without frustration?

Transparency = Clarity + Accessibility + Accountability

Does the AI explain its decisions in plain, human-centred language? Do the explanations allow users to make informed decisions about their next action, and about how much they should trust the AI’s decision? And can users dispute or flag AI decisions?

Value Alignment = User Goals + Ethical Guardrails + Cultural Fit

Does the AI adapt to what users actually want (not just what the system assumes)? Does it avoid harm and respect user boundaries?

Trust formation is highly culture-dependent. What seems like a respectful tone and effective controls in Western, non-hierarchical countries like Australia may break trust in Singapore, Bangalore, or Germany.
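As an illustration only, the Trust Equation can be treated as a composite score. This Python sketch assumes survey-derived 0–1 scores per pillar; the `trust_index` weighting (an average blended with the weakest pillar, on the premise that one broken pillar sinks overall trust) is a hypothetical choice, not a published formula:

```python
from dataclasses import dataclass

@dataclass
class TrustScores:
    """Survey-derived scores (0-1) for each pillar of the Trust Equation."""
    competence: float       # actual + communicated + perceived capabilities
    reliability: float      # does it do what it says; can users steer it
    transparency: float     # clarity + accessibility + accountability
    value_alignment: float  # user goals + ethical guardrails + cultural fit

def trust_index(s: TrustScores) -> float:
    """Illustrative composite: blend the average with the weakest pillar,
    since trust tends to collapse wherever a single pillar fails."""
    pillars = [s.competence, s.reliability, s.transparency, s.value_alignment]
    return round(0.5 * (sum(pillars) / len(pillars)) + 0.5 * min(pillars), 3)

strong = TrustScores(0.9, 0.85, 0.8, 0.9)
weak_link = TrustScores(0.9, 0.85, 0.2, 0.9)  # same system, but opaque
print(trust_index(strong))     # high overall trust
print(trust_index(weak_link))  # dragged down by the weakest pillar
```

The design choice to penalise the minimum reflects the section’s claim: strong competence cannot compensate for broken transparency.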


3. Bias Isn’t Just a Technical Problem – It’s a Human One

Yes, bias in AI is driven by historical data – but when you hear “AI amplifies biases,” what that means in practice is that, just like your YouTube or Instagram feed, the system systematically optimises for those biases. Bias fundamentally shapes the system and determines who bears the cost of its failures. At scale, algorithmic bias isn’t limited the way human bias is: it can impact hundreds of thousands of people instantly.

Human-centred design means expanding testing and design processes to include edge cases – because these are the groups most likely to experience AI bias.


4. Comprehensive Testing: You Can’t Fix What You Don’t See or Understand

What’s Missing from your AI User Testing Practice

  • Comprehension testing for AI explanations: Can users accurately explain the AI’s decision-making back to you and identify the data points it used? If not, how can they determine whether it’s correct?
  • User control testing: Do your user control components reliably let users steer the AI towards their goals and produce the output they need?
  • Agency tests: Do users feel they can easily correct or override the AI?
  • Trust tests: What signals do users need to build trust and deepen it (and expand their use of AI)? How do users respond when trust is broken? Can trust be rebuilt to its previous level, and how long does recovery take?
  • Long-term impact tests: How does trust in AI change over time, and does repeated AI interaction affect user autonomy (e.g., does it encourage dependency)?
  • Failure-mode testing: Do certain groups experience higher false-positive or false-negative rates? What happens when the AI fails for vulnerable users?

If your testing doesn’t cover these, you’re designing for machines – not humans.
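Failure-mode testing in particular can start with a simple disparity check. A hedged sketch: given a toy decision log, compare false-positive rates across user groups (the `false_positive_rates` helper and the loan example are hypothetical, not from any named toolkit):

```python
from collections import defaultdict

def false_positive_rates(results):
    """results: iterable of (group, predicted_positive, actually_positive).
    Returns per-group false-positive rate: FP / all actual negatives."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted, actual in results:
        if not actual:                  # only actual negatives matter for FPR
            negatives[group] += 1
            if predicted:               # wrongly flagged
                fp[group] += 1
    return {g: fp[g] / n for g, n in negatives.items() if n}

# Toy loan-decision log: (group, ai_flagged_as_risky, truly_risky)
log = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]
rates = false_positive_rates(log)
print(rates)  # group B is wrongly flagged at twice group A's rate
```

Even a check this crude surfaces the question the section asks: who bears the cost when the AI fails?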


The Economic Cost of Inaction – It’s a Revenue Killer

The numbers don’t lie:

  • Trust in AI is collapsing: KPMG’s research shows Australia’s trust in AI has dropped 16 percentage points below the global average since tools like ChatGPT went mainstream. Users aren’t just skeptical – they’re voting with their wallets.
  • One bad experience = lost revenue: Research found that 70% of customers who have a single poor AI interaction are open to brand switching, with 53% reducing spending immediately after a bad experience.
  • Revenue growth vs. revenue loss: Companies with transparent, human-centred AI see higher revenue growth, while those with poor AI experiences face millions in churn.

The brutal math for leaders:

The choice feels pretty clear: $400K/year for specialised AI design roles vs. $2–5M in revenue loss from customer churn caused by poor AI implementations at a mid-size organisation.
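The arithmetic behind that choice is easy to check. A back-of-envelope Python sketch using the article’s figures, assuming (optimistically) that the investment avoids the full stated churn loss:

```python
# Figures from the article; the "churn fully avoided" assumption is ours.
investment = 400_000                           # per year, AI design roles
churn_low, churn_high = 2_000_000, 5_000_000   # revenue at risk from churn

roi_low = (churn_low - investment) / investment
roi_high = (churn_high - investment) / investment
print(f"Return if churn is avoided: {roi_low:.0%} to {roi_high:.0%}")
# → Return if churn is avoided: 400% to 1150%
```

Even if only a fraction of the churn is actually prevented, the asymmetry between a $400K cost and a multi-million-dollar loss is what makes the case to executives.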

Here’s the kicker: While some executives still see design as a cost centre to trim, the leaders who invest in human-centred AI capabilities now will dominate the next decade.

The rest?
They’ll spend the next decade explaining to shareholders why their competitors ate their market share while they were focused on short-term savings.

This isn’t about altruism – it’s about survival.

The leaders who act now will own the future. The rest will scramble.

Design Leaders – Start Building & Socialising Your Business Case Now

With the start of Q4 around the corner, the annual planning cycle will soon begin – you need to be ready.

Soon you will be sitting in the budget meeting when your executives ask: “Now that AI can generate designs and analyse user feedback, why do we still need such a large design team?”

1. Use the research above from some of the largest consultancies to start building your business case for investment in expanding your design training and capabilities now.

2. Start quantifying the impact in your own context by looking at adoption, engagement, and retention metrics for your AI features.

3. Conduct user testing to identify where your AI product loses user trust – and competitor testing if possible.

4. Demonstrate the potential market gain if your designs can prove they are trustworthy to a very skeptical Australian audience.

The Bottom Line

This isn’t about explaining AI better. It’s about designing AI that respects humans first.

So – what’s your move?

Got questions? Need help? Reach out.

RC

Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.

Want more insights like this?

Join 1,000+ design leaders getting weekly insights on trustworthy AI.

Frequently Asked Questions

Why are clients now demanding proof of trustworthy AI design?

Regulated industries now include ethical AI governance in procurement requirements. One agency lost a seven-figure government contract because it could not demonstrate how its AI-powered service would be transparent and fair to all citizens.

What is the financial cost of ignoring AI trust in design?

70% of customers who have a single poor AI interaction are open to brand switching, with 53% reducing spending immediately. Companies risk $2-5M in revenue loss from customer churn caused by poor AI implementations.

What is the Trust Equation for AI systems?

Trustworthy AI = Competence + Reliability + Transparency + Value Alignment. Competence includes actual, communicated, and perceived capabilities.

Why do 70% of AI initiatives fail?

They fail due to poor user adoption and trust issues, not technical flaws. BCG found 74% of companies struggle to achieve and scale value from AI investments.