Why are smart companies slowing down hiring in the AI era?

Quick Answer: Companies are reducing hiring because AI-augmented workers deliver 30% more output, but the real issue is convergence — every candidate’s portfolio looks identical when AI does the production. The smartest organisations now hire for judgement, taste, and critical thinking rather than execution speed. Goldman Sachs reports 34% of organisations cut hiring due to AI productivity gains. The differentiator is knowing when not to use AI.

Key Characteristics:
  • Goldman Sachs data: 34% of organisations cut headcount due to AI, and adoption is accelerating
  • The convergence problem: when everyone uses the same AI tools, portfolios become indistinguishable, which is why Google, Anthropic, and McKinsey are moving in the opposite direction on hiring
  • “Extra 30%” means strategic thinking, original research, cross-domain pattern recognition, and building trust — not just faster output
  • The US justice system’s COMPAS algorithm shows AI-assisted decisions can embed systemic bias — humans rubber-stamped discriminatory outputs
  • The shift that’s coming: hiring managers will ask “Where do you not let AI near your process?” — the pause is where value lives
Real Example:

The US justice system deployed COMPAS, an algorithm that predicts recidivism risk. ProPublica's analysis found that Black defendants were nearly twice as likely as white defendants with similar histories to be falsely flagged as high risk. Judges trusted the system and rubber-stamped its outputs. The same pattern is now emerging in hiring: when humans defer to AI-generated assessments without critical evaluation, systemic biases get encoded and amplified at scale.


The One Thing AI Can't Fake (And Why Hiring Managers at Google & Anthropic Are Testing For It)

Leading AI companies aren't optimising quality away.

Riley Coleman
February 4, 2026 · 5 min read

The Smartest Companies Are Slowing Down. Here's Why.

A few weeks ago, I was deep in a rabbit hole. I'd been analysing the top design leadership subreddits, trying to understand what's actually keeping leaders up at night.

One post stopped me cold. 920 upvotes. The title: "How do I evaluate a portfolio that is 90% AI generated?"

The hiring manager wrote:

"I'm hiring for a Senior UI role. Every portfolio looks the same. Perfectly polished Bento grids, abstract 3D shapes, pristine copy. But when I ask 'Why did you choose this layout?', the answer is vague. I suspect 90% of the work is Relume/Midjourney output. How are you filtering for actual design thinking versus curation skills?"

The top responses were fascinating. One leader said whiteboard challenges are back. Another asks candidates to show "the ugly phase" - the messy sketches, the failed prompts, the bad ideas. A third stopped looking at portfolios entirely. They now pay candidates for a one-hour critique of an existing app.

I'd heard versions of this concern more than a dozen times in the past year. But seeing it laid out like that, with 920 people agreeing - something clicked.

We have a problem. And it's not the one most people think.

The Paradox Nobody's Talking About

Here's what's strange.

Google, Anthropic, McKinsey, Amazon, Goldman Sachs, Deloitte. These companies are betting billions on AI. They're building the tools that promise to make knowledge work faster.

And yet.

When it comes to their own hiring, they're going the opposite direction. Google brought back whiteboard interviews. Deloitte doubled their case study time. Goldman Sachs added more in-person rounds.

Why would companies that sell AI efficiency demand slower, more human evaluation of their own people?

Because they've figured something out that the rest of us are only starting to see.

What the Research Shows

In late 2024, MIT researchers ran an experiment. They gave knowledge workers AI assistants and tracked what happened to their thinking over time.

The finding was uncomfortable.

People using AI showed 20% less activity in the brain regions tied to critical thinking and decision-making. The researchers called it "cognitive offloading." I'd call it something simpler: our thinking muscles get weaker when we stop using them.

This isn't about whether AI is good or bad. It's about what happens when we hand over certain kinds of work without realising what we're giving up.

I noticed this in my own work in mid-2024, before the study came out. When I first started using AI tools, I went all-in. AI-first was the goal. At first I felt like I had superpowers and my productivity skyrocketed. But after a few months, I caught myself doing something worrying. I was accepting suggestions I hadn't fully thought through. Not because I was lazy. Because my brain had quietly started to trust the machine more than my own judgement.

The research gave me a name for it: automation bias. The more we use AI, the more we tend to trust it - even when we shouldn't.

The impact for me, though, was far worse than slipping while reading an article, email, or proposal. I had a sharp, very noticeable decline in memory, and my creativity - the thing I pride myself on - went from a sharp pencil to a dull one. My doctor even sent me for brain scans.

When I finally realised that neuroplasticity goes both ways - the old "if you don't use it, you lose it" is actually grounded in science - things made sense. It took me months to rebuild what I had lost, and it had only taken a few months to lose it.

The Convergence Problem

Here's where it gets interesting for hiring.

When everyone uses the same AI tools, trained on the same data, optimised for the same outcomes - the outputs start to look the same.

That Reddit post captured it perfectly. Every portfolio: Bento grids. Abstract 3D shapes. Pristine copy. Technically excellent. Completely interchangeable.

But the problem goes deeper than portfolios.

If everyone's cover letter sounds similar, and everyone's interview prep comes from the same AI coaching tools, and everyone's case study uses the same frameworks - how do you find the person who actually thinks differently?

HR leaders are starting to ask this question out loud. One told me recently: "AI can help someone perform at 90% of average. But we're not hiring for average. We're hiring for the extra 30% that only humans bring."

That extra 30% is the part that's getting harder to see.
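
Homogeneity isn't just a vibe; you can measure it. Here's a minimal sketch in Python that scores how interchangeable a set of portfolio blurbs is, using TF-IDF and cosine similarity. The sample texts and the notion of "blurbs" are illustrative assumptions, not real data:

```python
# Rough sketch: quantify how homogeneous a set of candidate texts is.
# Requires scikit-learn. The sample blurbs are invented for illustration.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

portfolio_blurbs = [
    "A polished bento grid with abstract 3D shapes and pristine copy.",
    "Pristine copy over a bento grid layout, accented by abstract 3D shapes.",
    "Hand-drawn service blueprint tracing a failed checkout flow and its fix.",
]

# Vectorise the texts and compute pairwise cosine similarity.
tfidf = TfidfVectorizer().fit_transform(portfolio_blurbs)
sims = cosine_similarity(tfidf)

# Average off-diagonal similarity: higher means more interchangeable work.
pairs = list(combinations(range(len(portfolio_blurbs)), 2))
avg_sim = sum(sims[i, j] for i, j in pairs) / len(pairs)
print(f"Average pairwise similarity: {avg_sim:.2f}")
```

TF-IDF only catches surface wording; an embedding model would catch paraphrase-level sameness too. Either way, the point holds: convergence is quantifiable, and the first two blurbs score much closer to each other than either does to the third.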

What "Extra 30%" Actually Means

It's worth being specific here.

When I talk to hiring managers, they describe it in concrete terms:

  • The ability to challenge a brief, not just execute it
  • Knowing when to break a pattern rather than follow it
  • Reading the room in ways that don't show up in data
  • Making ethical calls in grey areas where there's no clear answer
  • Building trust with difficult stakeholders

These aren't nice-to-haves. They're the things that separate competent work from work that actually moves things forward.

And here's the uncomfortable truth: these skills don't get better when you use AI more. They get better when you practice them directly.

A Lesson from the Justice System

I want to share an example that's stayed with me.

In the United States, courts use an algorithm called COMPAS to help predict whether someone is likely to reoffend. The idea was to make sentencing fairer by removing human bias.

The system was designed with human oversight. Judges were told to use the AI score as one factor among many.

What actually happened? Judges started relying on the scores more and more. When the AI said someone was high-risk, they went along with it - even when their own experience suggested otherwise.

A legal scholar called it "the illusion of objectivity." The AI looked so precise, so scientific, that it became hard to question.

The result was bias in a different form. The algorithm had been trained on historical data that reflected existing inequalities. So it learned to rate Black defendants as higher risk, and white defendants as lower risk - even when they weren't.

Human oversight was there. But the humans had stopped really thinking.
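
The mechanism is easy to demonstrate. Below is a deliberately simplified sketch on synthetic data (not COMPAS's actual model, which is proprietary) showing how a classifier trained on labels shaped by biased enforcement reproduces that bias even when the group attribute is excluded from the features, because a correlated proxy remains:

```python
# Deliberately simplified sketch of bias laundering: synthetic data only.
# COMPAS's real model is proprietary; this just shows the mechanism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying behaviour, but group 1 is
# historically over-policed, so it accumulates more recorded priors.
group = rng.integers(0, 2, n)
behaviour = rng.normal(0, 1, n)                          # true, unobserved risk
priors = behaviour + 1.5 * group + rng.normal(0, 1, n)   # biased proxy feature

# Historical labels reflect biased enforcement, not just behaviour.
label = (behaviour + 0.8 * group + rng.normal(0, 1, n) > 1).astype(int)

# Train WITHOUT the group column - only the "neutral" proxy feature.
model = LogisticRegression().fit(priors.reshape(-1, 1), label)
risk = model.predict_proba(priors.reshape(-1, 1))[:, 1]

print(f"Mean predicted risk, group 0: {risk[group == 0].mean():.2f}")
print(f"Mean predicted risk, group 1: {risk[group == 1].mean():.2f}")
```

Run it and group 1 comes out with a visibly higher mean risk score despite identical underlying behaviour. Dropping the sensitive column doesn't drop the bias.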

What the Smart Companies Understand

The organisations going slower on hiring aren't anti-AI. Many of them are building the AI.

What they understand is this: some decisions need more human thinking, not less. Hiring is one of them.

When Google brings back whiteboard interviews, they're not being nostalgic. They're creating a situation where candidates have to think in real time, without AI assistance, in front of people who are watching the process - not just the output.

When that hiring manager asks for "the ugly phase," they're looking for evidence of struggle. Struggle is a sign that someone actually wrestled with the problem rather than just generating a polished answer.

Gartner predicts that by 2027, 40% of organisations will require AI-free assessments for roles involving important decisions. That's not paranoia. That's pattern recognition.

The Shift That's Coming

I think we're going to look back at this moment as a turning point.

For the past two years, the question has been: "How do we use AI to go faster?"

The next question will be: "Where do we need to go slower on purpose?"

Not because speed is bad. But because some things only develop through friction. Critical thinking. Judgement. Ethics. The ability to sit with ambiguity.

I call it strategic friction: knowing when to build it in - into your workflows and into your customers' experiences. That is the challenge now, because it means reversing ten years of design convention that taught us to optimise, to remove friction.

These are the capabilities that separate someone who can execute from someone who can lead. That is why I teach designers that the PAUSE IS WHERE THE VALUE IS.

If it weren't, your job could be automated away.
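
What might strategic friction look like in software? One version is a gate that refuses to auto-accept an AI suggestion for high-stakes decisions until a human writes down their reasoning. This is a hypothetical sketch - the Suggestion type, the stakes scale, and the 20-word rule are all my own illustration, not a real library:

```python
# Illustrative only: a "strategic friction" gate for AI suggestions.
# The Suggestion type, stakes scale, and review rule are assumptions.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    stakes: int  # 1 (cosmetic) .. 5 (hiring, sentencing, pricing)

def accept(suggestion: Suggestion, human_rationale: str | None = None) -> str:
    # Low stakes: let the AI output through without ceremony.
    if suggestion.stakes <= 2:
        return suggestion.text
    # High stakes: force the pause. No written rationale, no acceptance.
    if not human_rationale or len(human_rationale.split()) < 20:
        raise ValueError(
            "High-stakes suggestion needs a written rationale "
            "(at least 20 words) before it can be accepted."
        )
    return suggestion.text

# Usage: this fails until a reviewer actually writes down their thinking.
draft = Suggestion(text="Reject candidate: portfolio lacks depth.", stakes=5)
try:
    accept(draft)  # no rationale supplied
except ValueError as err:
    print(err)     # the pause is enforced
```

The design choice is the point: the system makes the pause mandatory exactly where judgement matters, and invisible where it doesn't.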

What This Means for You

If you're hiring: consider where you might be screening out the signal you actually need. Perfect AI-assisted outputs might look impressive, but they don't tell you how someone thinks under pressure.

If you're a candidate: the best investment you can make isn't mastering the latest AI tool. It's building the skills that can't be automated. Show your messy thinking. Explain your trade-offs. Demonstrate that you can disagree with a brief when it matters.

If you're leading a team: start asking where you need deliberate friction. Not everywhere. But in the places where judgement matters most.

The smartest companies are already doing this. They're building moments into their processes where humans have to think - really think - without AI assistance.

Not because they don't trust AI.

Because they understand what makes humans valuable.

I'd love to hear what you're seeing in your own hiring or job search.

Are portfolios getting more homogeneous?

Are interviews changing?

What's working and what isn't?

Let me know - riley@ai-flywheel.com

Written by Riley Coleman

Founder, AI Flywheel

Riley helps design leaders build trustworthy AI experiences. They have trained 304+ designers and led 7 cohorts of the Trustworthy AI programme.


Frequently Asked Questions

How is AI changing hiring in design and knowledge work?

AI-augmented workers produce roughly 30% more output, leading 34% of organisations to reduce headcount (Goldman Sachs, 2025). But the deeper shift is qualitative: when every candidate uses the same AI tools, portfolios converge. Companies like Google and Anthropic are counter-intuitively investing in human research and interviewing because they understand that AI amplifies but doesn’t replace original thinking, judgement, and taste.

What is the AI convergence problem in hiring?

The convergence problem occurs when knowledge workers all use the same AI tools — their work starts looking identical. Portfolios, case studies, and even interview responses become indistinguishable. Companies are discovering that the real value lies not in AI-assisted production but in the human capabilities AI cannot replicate: critical thinking, cross-domain pattern recognition, ethical judgement, and knowing when NOT to use AI.

What skills should designers develop to stay hireable in the AI era?

Five capabilities AI cannot replicate: the ability to challenge a brief rather than just execute it, knowing when to take a position even when the data is ambiguous, reading a room in ways that data cannot capture, making ethical calls in grey areas where there’s no clear answer, and building trust through authentic human connection. These skills improve with deliberate practice, not AI acceleration.

What can the justice system teach us about AI-assisted decision making?

The US justice system's COMPAS algorithm, designed to predict recidivism, was found by ProPublica to falsely flag Black defendants as high risk at nearly twice the rate of white defendants with similar histories. Judges, trusting the algorithm, rubber-stamped its outputs. The lesson: when humans defer to AI without critical evaluation, systemic biases get encoded and amplified. This pattern is now repeating in hiring, performance reviews, and portfolio assessment.