How do design leaders know when to override AI recommendations?

Quick Answer: Use the Human Wisdom Check: evaluate Human Impact (wellbeing), Context Layer (recent events), Relationship Web (trust/dynamics), Ethical Compass (conflicting principles), and Future Lens (precedent). Multiple concerns = human-led with AI input. AI has three limitations: ethics blindspots, context conundrums, and relationship amnesia.

Key Characteristics:
  • The Wisdom Paradox: AI overreliance weakens human decision-making capabilities
  • Three AI limitations: ethics blindspots, context conundrums, relationship amnesia
  • Humans process across System 1 (intuitive), System 2 (analytical), and ethical frameworks
  • Three strategic shifts: make invisible visible, preserve wisdom, build ethical literacy
Real Example:

AI flagged an employee for declining metrics. Human conversation revealed she was managing her mother's dementia diagnosis. Human judgment offered support instead of performance review, retaining talent and demonstrating organizational values.

Leadership

Human-centred AI means recognising that knowing when NOT to use AI is just as important as knowing when to use it.

Developing judgment about when human decision-making should supersede AI recommendations.

Riley Coleman
November 14, 2024·8 min read


Practical Checklist for Determining AI or Human Decision Required

Last Tuesday, our AI-powered performance management system flagged an employee for a performance review. The data was clear: missed deadlines, late arrivals, decreased output. By every quantifiable measure, it was the right call.

But as I sat across from Sarah, I found myself processing a rich tapestry of information that no AI could access. I was doing what humans do naturally. I was time travelling through data, emotions, and relationships all at once.

The signs were visible. Dark circles under her eyes. A slight tremor in her hands. Her usually immaculate desk was now scattered with post-it notes.

The emotional landscape: the way her voice caught when mentioning home, her apologetic body language, the absence of her usual composure.

The relationship context: six years of consistently good performance, her role as an informal mentor to junior staff, her tendency to take on others’ workloads without complaint.

Through gentle conversation, the full story emerged – her mother’s recent dementia diagnosis. This moment crystallised something crucial about the gap between AI and human decision-making.

The Human Time Machine

When humans make decisions, we’re not just processing present data – we’re time travellers. Sarah’s six years of excellence weren’t just data. They were lived experiences that informed my view of her current situation. Every interaction, every project successfully delivered, every team member mentored.

They weren’t just log entries. They were threads in a tapestry of trust and relationships.

This temporal intelligence allows us to:

  • Draw on emotional archives of past experiences
  • Read subtle present-moment signals
  • Anticipate future implications for relationships and team dynamics
  • Integrate learning from every interaction into our decision-making framework

The Two Systems at Play

Nobel laureate Daniel Kahneman describes two systems of human thinking. System 1 is fast and intuitive. System 2 is slow and analytical. Both were active as I spoke with Sarah:

System 1 instantly sensed something was “off.” It noticed subtle changes in her demeanour. It processed non-verbal cues. It compared her current behaviour to past patterns.

System 2 carefully evaluated the performance metrics, recalled her track record, and considered possible underlying causes.

This dual-processing system lets humans merge data with intuition, facts with feelings, and metrics with meaning. But there’s a third element at play – one that’s nearly invisible yet crucial: our ethical framework.

The AI Approach: Brilliant But Blind

AI’s capabilities are staggering. Modern systems can process more information in a minute than a human could in a lifetime. Think of it as having “mathematical intelligence.” It’s the ability to find patterns and correlations that humans might miss. This raw processing power is genuinely revolutionary.

But here’s where we face three fundamental challenges:

1. The Ethics Blindspot

Unlike data points that can be quantified and processed, ethical considerations often exist in the unspoken spaces between decisions. They live in the cultural nuances, personal values, and shared human experiences that shape our choices.

2. The Context Conundrum

AI processes information within strict parameters but struggles with the fluid, contextual understanding that humans take for granted. It also depends on the right data being collected without creating a surveillance culture, and on a clear definition of “good performance.”

3. Relationship Amnesia

AI can track interaction patterns, but it lacks an understanding of relationship currencies: trust built over time, social capital earned through support, and the unspoken agreements that form the basis of professional relationships.

The Growing Gap: The Wisdom Paradox

The gap between AI’s skills and human wisdom isn’t just theoretical. It’s creating what I call the ‘wisdom paradox.’ The more we rely on AI for decisions, the less we use our own decision-making skills. It’s like a muscle that weakens from disuse.

Each time we default to automated decision-making over human judgement, we’re not just making a single choice – we’re creating a precedent.

More concerningly, we’re losing opportunities to develop and maintain our ethical decision-making skills.

The ripple effects extend beyond individual decisions:

  • Teams’ interpersonal relationships becoming more fractured
  • Managers losing confidence in their intuitive judgement
  • Organisations losing their collective wisdom
  • Ethical decision-making skills atrophying from disuse

The Bridge We Need to Build

The solution isn’t to abandon AI – its capabilities are too valuable. Instead, we must shift our focus from “data-driven decision making” to “data-informed and ethical decision making.” This means:

Making the Invisible Visible

  • Explicitly discussing and documenting the ethical considerations in our decisions
  • Creating frameworks that help identify when human judgement should override automated recommendations
  • Training teams to articulate the ethical reasoning behind their decisions

Preserving Human Wisdom

  • Maintaining spaces for human judgement in automated processes
  • Valuing and developing emotional intelligence alongside technical skills
  • Creating systems for data-informed human decision-making

Building Collective Ethical Literacy

  • Developing a shared vocabulary for discussing ethical considerations
  • Creating communities of practice around ethical decision-making
  • Establishing feedback loops that capture both quantitative and qualitative outcomes
  • Nurturing the collective wisdom that helps organisations navigate complex human situations

The Human Wisdom Check: When to Pause Your AI Tools

Before accepting an AI-generated recommendation or output, pause and consider these reflection points:

1. The Human Impact Test

Ask yourself:

  • Will this decision significantly impact someone’s life or wellbeing?
  • Could there be personal circumstances I know about that the AI doesn’t?
  • Would a face-to-face conversation reveal important nuances?

🚩 If yes, bring human wisdom into the process.

2. The Context Layer

Consider:

  • Are there recent events or changes that provide crucial context?
  • Is there historical or cultural context the AI might miss?
  • Are there unwritten rules or norms that apply here?

🚩 Rich context requires human interpretation.

3. The Relationship Web

Reflect on:

  • Could this affect trust or relationships you’ve built?
  • Might this impact community or group dynamics?
  • Are there stakeholder relationships to consider?

🚩 Complex relationships need human understanding.

4. The Ethical Compass

Question whether:

  • Multiple ethical principles are in conflict
  • There’s a gap between what’s legal and what’s right
  • Different cultural values might lead to different conclusions

🚩 Ethical complexity demands human wisdom.

5. The Future Lens

Think about:

  • What precedent might this set?
  • Could there be long-term implications not visible in the data?
  • Might this decision affect future choices or relationships?

🚩 Long-term impact needs human foresight.

Simple Decision Framework:

Multiple 🚩 = Human-led decision with AI as input

One 🚩 = Balanced AI-Human collaboration

No 🚩 = AI-led with human oversight
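For readers who want to bake this into a review workflow, the flag-counting logic above can be sketched in a few lines of Python. This is a minimal illustration of the framework as described in this article, not a published tool; the check names and the dictionary shape are assumptions for the example.

```python
# Minimal sketch of the Human Wisdom Check decision framework.
# Each reflection point is a boolean: True means that check raised a red flag.
# The number of raised flags maps to a decision mode, as described above.

def decision_mode(flags: dict) -> str:
    """Map the count of red flags to a decision mode."""
    raised = sum(1 for raised_flag in flags.values() if raised_flag)
    if raised >= 2:
        return "Human-led decision with AI as input"
    if raised == 1:
        return "Balanced AI-Human collaboration"
    return "AI-led with human oversight"


# Example: two concerns raised, so the decision should be human-led.
checks = {
    "human_impact": True,    # affects someone's life or wellbeing
    "context": True,         # recent events the AI can't see
    "relationships": False,
    "ethics": False,
    "future": False,
}
print(decision_mode(checks))  # Human-led decision with AI as input
```

The point of the sketch is that the framework is deliberately simple: the hard work is in honestly answering the five reflection questions, not in tallying the flags.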

Remember: AI is a powerful tool, but human wisdom is what gives it meaning and direction. Use these reflection points not as rigid rules, but as prompts to engage your uniquely human perspective.

In the age of AI, always pause and ask yourself: “Just because we can, does it mean we should?”

The Path Forward

The AI system was right about Sarah. The metrics had declined. But it was blind to the human context that made those metrics meaningful. It couldn’t time-travel through her history with the organisation. It couldn’t feel the weight of her current struggles. Nor could it understand the long-term effects on team morale and trust.

As we rush to embrace AI’s efficiency, we must ensure we’re not building systems that are blind to these uniquely human elements.

The most sophisticated AI system in the world can’t understand the full impact of a mother’s dementia diagnosis on her daughter’s performance. And that’s precisely why we need to preserve and cultivate our human capabilities.

The time to act is now. Not just to develop AI literacy, but to develop ethical literacy alongside it. We can ensure that our AI-enhanced future remains fundamentally human-centered by making the invisible visible and by consciously exercising our ethical decision-making muscles.


Written by

Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.



Frequently Asked Questions

When should human judgment override AI recommendations?

Use the Human Wisdom Check: Could this cause irreversible harm? Is essential context missing? Would this break trust? Could this treat similar people differently? Does this set a precedent we would regret? Multiple concerns mean human-led decisions.

What is the Wisdom Paradox in AI decision-making?

The more we rely on AI for decisions, the less we exercise our own decision-making skills, like a muscle weakening from disuse.

What three fundamental limitations does AI have in decision-making?

The Ethics Blindspot (ethical considerations exist in unspoken spaces), the Context Conundrum (AI struggles with fluid understanding), and Relationship Amnesia (AI cannot grasp trust built over time).

How do humans make decisions differently from AI?

Humans draw on emotional archives, read subtle present-moment signals, and anticipate future implications. Nobel laureate Daniel Kahneman's dual-processing model shows System 1 and System 2 work together, merging data with intuition.