What strategies help designers build transparent AI systems?

Quick Answer: Six strategies make AI decision-making understandable: progressive disclosure, confidence indicators, natural language explanations, visual representations, interactive interfaces, and explainable AI techniques such as LIME and SHAP. Balance transparency with IP protection using high-level explanations and differential privacy.

Key Characteristics:
  • AI is a sophisticated prediction machine making probabilistic guesses, not an all-knowing oracle
  • Progressive disclosure layers transparency like an onion, letting users peel back detail as needed
  • Confidence indicators build more trust than vague statements
  • Explainable AI tools like LIME and SHAP reveal which features most influenced a decision
Real Example:

The article opens with a scenario based on Amazon's 2015 AI recruiting tool that rejected female candidates because it was trained on 10 years of predominantly male resumes. The AI learned to favour male candidates with no explanation given to applicants. This illustrates the core transparency gap the article addresses.

Article

Strategies for Designing Transparent AI

Master AI design leadership with Australia’s expert guidance.

Riley Coleman
October 12, 2024·9 min read


Strategies for Building Transparent AI

Transparency vs IP, Security & Privacy

How to Test for Transparency

Demystifying AI’s Decision-Making: Transparency

Imagine you’re a talented software engineer named Sarah. You’ve applied to your dream job at a leading tech company, confident that your skills and experience make you the perfect candidate.

Weeks pass, and you receive a terse email: “We regret to inform you that your application was unsuccessful.” Unbeknownst to you, an AI-powered recruitment tool, designed to streamline hiring, has deemed you unsuitable. The reason? The AI, trained on historical data, learned to favour male candidates in tech roles. It’s as if a faceless computer program simply said “no” to your career aspirations, with no explanation or recourse.

This isn’t a hypothetical scenario or a scene from a dystopian TV show—it’s the real-world consequence of Amazon’s 2015 AI recruiting tool, a project eventually scrapped due to its inherent gender bias. The “computer says no” punchline has become an unsettling reality in our AI-driven world.

This cautionary tale illuminates a critical challenge we face today: the lack of transparency in AI decision-making. As product creators and tech professionals, we’ve become enamored with AI’s promise, often overlooking its limitations and potential for harm.

The uncomfortable truth is that there’s a significant transparency gap in AI product development. While we’ve made remarkable strides in AI capabilities, we still struggle to explain how these systems arrive at their decisions and communicate their uncertainties to the humans affected by them.

The Prediction Paradox

At its core, AI isn’t the all-knowing oracle we sometimes imagine it to be. It’s a sophisticated prediction machine, making educated guesses based on patterns in data.

When an AI identifies a “smiling face,” it’s not understanding joy or emotion. Instead, it’s recognizing a pattern of pixels and estimating, “There’s a 92% chance this arrangement matches what I’ve labeled as a smile.”

This probabilistic nature is both AI’s strength and its limitation. It can process vast amounts of data to make predictions beyond human capability, but these predictions always carry a degree of uncertainty. As product creators, it’s our ethical responsibility to communicate this uncertainty clearly to our users.
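To make that probabilistic framing concrete, here is a minimal sketch of how a classifier’s raw scores are typically turned into the kind of percentage described above, via a softmax. The logits and labels are invented for illustration, not taken from any real model:

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from an image classifier for three labels.
logits = [2.0, 4.5, 0.5]
labels = ["neutral", "smile", "frown"]
probs = softmax(logits)

for label, p in zip(labels, probs):
    print(f"{label}: {p:.0%}")
```

With these toy scores, “smile” comes out at roughly 90%: a confidence level worth surfacing to the user, rather than hiding behind a bare label.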

The Transparency Tightrope

We find ourselves walking a delicate tightrope. On one side, there’s the allure of “AI magic”—the idea that our products can effortlessly solve complex problems. On the other, there’s the pressing need for transparency—explaining how our AI works, including its limitations and potential biases.

Balancing these needs isn’t just a technical challenge. It’s a design, communication, and ethical imperative. How do we create AI systems that are both powerful and understandable? How do we build user interfaces that reveal the nuances of AI decisions without overwhelming users?

Strategies for Building Transparent AI

  1. Progressive Disclosure
    Don’t overwhelm users with information all at once. Layer it, starting with a simple overview and allowing users to drill down for more details. Think of it as a “transparency onion”—users can peel back layers of explanation as their interest or need grows.
  2. Confidence Indicators
    Clearly communicate the AI’s level of certainty in its decisions. A weather app showing an “80% chance of rain” is more informative and trustworthy than one that simply says “It might rain.”
  3. Natural Language Explanations
    Generate human-readable explanations for AI decisions. Aim for clarity and simplicity, as if you’re explaining the AI’s reasoning to a curious friend. Use tools like the Hemingway Editor to ensure your explanations are accessible to a broad audience.
  4. Visual Representations
    Leverage the power of data visualization. Charts, graphs, or heat maps can illustrate how different factors influence an AI decision. Remember, a well-designed image can convey complex information more effectively than paragraphs of text.
  5. Interactive Interfaces
    Allow users to explore and experiment with your AI. Google’s “What-If Tool” is an excellent example, enabling users to tweak inputs and observe how outputs change. This playful interaction can demystify AI decision-making and build user trust.
  6. Explainable AI (XAI) Techniques
    Implement tools like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) to provide insights into your AI’s decision-making process. These techniques can help identify which features most influenced a particular decision.
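The real LIME and SHAP libraries use more principled machinery (local surrogate models and Shapley values, respectively), but the core intuition behind strategy 6 can be sketched with a simple perturbation-based attribution: replace each feature with a baseline value and watch how the output moves. The loan model, applicant, and baseline values below are all hypothetical:

```python
def loan_model(income, debt_ratio, years_employed):
    """Toy scoring model standing in for a black-box AI (hypothetical)."""
    score = 0.5 * income / 100_000 - 0.8 * debt_ratio + 0.05 * years_employed
    return max(0.0, min(1.0, score))

def attribute(model, instance, baseline):
    """Perturbation-based attribution: swap each feature for its baseline
    value and record how much the model's output changes.
    A bigger absolute change means a more influential feature."""
    full = model(**instance)
    contributions = {}
    for name in instance:
        perturbed = dict(instance, **{name: baseline[name]})
        contributions[name] = full - model(**perturbed)
    return full, contributions

applicant = {"income": 90_000, "debt_ratio": 0.4, "years_employed": 6}
baseline = {"income": 50_000, "debt_ratio": 0.3, "years_employed": 4}

score, contribs = attribute(loan_model, applicant, baseline)
for name, delta in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {delta:+.3f}")
```

Here income dominates the decision, which is exactly the kind of insight a user-facing explanation (“your income was the biggest factor”) can be built on. For production use, reach for the actual LIME or SHAP packages rather than a hand-rolled sketch.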

Addressing Concerns: Transparency vs IP, Security & Privacy

You might be thinking, “But if we explain everything, won’t we be giving away our secret sauce?” It’s a valid concern.

Here’s how we can strike a balance:

  1. Provide high-level explanations of system logic without revealing exact algorithms.
  2. Use differential privacy techniques to share insights about your AI’s behavior without exposing the model itself.
  3. Implement robust security measures alongside transparency features.
  4. Aggregate data when providing explanations to avoid revealing individual user information.
  5. Give users control over what personal information is used in explanations.
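As a concrete sketch of point 2, the classic Laplace mechanism releases an aggregate statistic with calibrated noise so that no individual user can be singled out. The count, epsilon, and scenario are hypothetical, and a production system should use a vetted privacy library rather than hand-rolled noise:

```python
import math
import random

def dp_count(true_count, epsilon):
    """Release a count with Laplace noise calibrated to sensitivity 1:
    one user joining or leaving changes the count by at most 1, so a
    noise scale of 1/epsilon gives epsilon-differential privacy."""
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return round(true_count + noise)

# Hypothetical: report how many applicants the model declined this week,
# without letting anyone infer whether a specific individual is in the data.
random.seed(7)
print(dp_count(1_284, epsilon=0.5))
```

Lower epsilon means stronger privacy but noisier reported figures; the choice is a policy decision, not just a technical one.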


Remember, it’s not about choosing between transparency and other concerns. It’s about finding innovative ways to address them all simultaneously.


The True North of Transparency

Ultimately, we can’t be certain our transparency features are effective until we test them with real users.


Here’s how:

  1. Comprehension Testing:
    Ask users to explain back to you how they think the AI made its decision. If they can’t, it’s time to refine your explanations.
  2. Trust Metrics:
    Measure user trust before and after interacting with your transparent AI features. Are you building bridges or burning them?
  3. Decision Quality:
    Assess whether transparency features actually help users make better decisions. Transparency isn’t just about understanding—it’s about empowerment.
  4. Diverse User Groups:
    Conduct user testing with at least 20 people, and make sure they represent the widest cross-section of your users, including different levels of AI literacy and demographic backgrounds. What’s clear to a tech enthusiast might be Greek to someone else.

The Path Forward: A Culture of Transparency

Implementing these strategies isn’t just about ticking boxes or appeasing regulators. It’s about fostering a culture of openness and accountability in AI development.

When we admit our AI’s limits and work to make it more understandable, something remarkable happens: users become more engaged, more trusting, and more forgiving when things go wrong. They become partners in the process, and their insights help us build better, more ethical AI systems.

As we push the boundaries of AI capabilities, we must remember that people matter most. We have a responsibility to advance together, bridging the gap between innovation and understanding.

The path to earning and keeping user trust in our AI products lies in transparency. By shedding light on our AI’s decision-making processes, we not only create better products but also build a stronger, more ethical partnership between humans and AI.

Our choices now will determine whether AI becomes our greatest ally or our undoing. By harnessing AI’s potential wisely and transparently, we can elevate human capabilities and secure a brighter tomorrow for all.

Tools to help

Google’s What-If Tool

This interactive tool allows users to visualize and investigate machine learning models, with no coding required.

IBM AI Fairness 360

An open-source toolkit to help detect and mitigate unwanted bias in machine learning models and datasets


I put together a suggested UX testing protocol to help you plan and implement transparency testing on your AI product or service.

Thought-Provoking Questions for Your Product

As you reflect on your own AI products or services, consider these questions:

  1. Can your users easily access a plain-language explanation of how your AI makes decisions that affect them?
  2. How would you explain your AI’s decision-making process to a regulator or a journalist?
  3. If a user asked to see what data influenced an AI decision about them, could you provide it?
  4. Have you tested if users can predict how their input will change your AI’s output?
  5. Does your UI design inadvertently hide important information about AI limitations and potential biases?
  6. How are you documenting your AI’s development for future audits or explanations?
  7. How do you review and update your AI’s transparency measures as the system evolves?

Remember, every line of code we write, every model we train, and every product we launch is an opportunity to set a new standard for transparent AI. It’s not always easy, but it’s always worth it. So, let’s roll up our sleeves and get to work. The future of ethical, transparent AI is in our hands.

Feedback on this article would be greatly appreciated

Share this issue with your friends

Resources for AI Design Leadership Excellence

Continue your AI design leadership journey with these carefully curated resources:

Ready to advance your AI design leadership expertise? Our proven frameworks and community support ensure sustainable professional growth in the evolving design landscape.

This approach to AI design leadership ensures human-centered design principles remain at the forefront of technological advancement, creating meaningful impact for users and sustainable value for organizations.


RC

Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.

Share this article

Want more insights like this?

Join 1,000+ design leaders getting weekly insights on trustworthy AI.

Frequently Asked Questions

How do you balance AI transparency with protecting intellectual property?

Provide high-level explanations without revealing exact algorithms. Use differential privacy techniques. Aggregate data in explanations and give users control over what personal information is used.

What is progressive disclosure in AI design?

A transparency strategy that layers information like an onion. Users start with a simple overview and can drill down for more technical detail as needed.

How should designers test whether their AI transparency features work?

Use comprehension testing, trust metrics, decision quality assessment, and testing with diverse user groups of at least 20 people across different AI literacy levels.

What are LIME and SHAP in explainable AI?

Techniques that reveal which features most influenced a particular AI decision, making the decision-making process more transparent and auditable.