AI is now part of almost every business strategy conversation.

But in the rush to automate, optimise, and scale… there are two fundamental human realities that are being consistently overlooked.

And if we ignore them, we don’t just lose value, we create new problems.


1. Automating the “easy work” often makes jobs worse, not better

On paper, this sounds like a win:

“Let’s use AI to handle the 80% of simple, repetitive tasks so our people can focus on higher-value work.”

Take a contact centre as an example.

  • 80% of calls are simple: opening hours, return policies, basic requests
  • 20% are complex: complaints, edge cases, frustrated customers

So we automate the 80%.

What’s left for the human?

Only the hardest, most emotionally draining 20%.

No quick wins. No easy conversations. No mental reset between calls. Just constant escalation, frustration, and pressure.

Over time, that role becomes:

  • More stressful
  • More cognitively demanding
  • More emotionally exhausting

And what happens next?

People slow down. They take longer breaks. They disengage.

Not because they’re lazy, but because the job has fundamentally changed in a way that humans aren’t wired for.

So instead of improving efficiency, we risk:

  • Burnout
  • Lower productivity
  • Higher attrition

All from a strategy that looked perfectly logical on a whiteboard. In reality, the project doesn’t deliver the promised efficiencies, and leaders end up pointing fingers at change managers.


2. AI handles the “normal”… but who’s ready for the abnormal?

AI models are exceptionally good at one thing:

Describing and handling how things usually happen.

But business risk doesn’t live in the “usual.”

It lives in:

  • Low-frequency
  • High-impact
  • Often unpredictable scenarios

The edge cases. The crises. The moments where things go very wrong.

The more we rely on AI to handle the majority of work, the more two things quietly happen:

  1. Human expertise erodes. If people aren’t regularly practicing judgment, problem-solving, and decision-making, they lose it.
  2. Attention drifts. If the system “usually works,” people stop actively monitoring it.

That combination is dangerous.

Because when something doesn’t follow the pattern (when a serious issue emerges) the AI doesn’t have a playbook…

…and the humans aren’t ready to step in.

This is how small gaps turn into major incidents.


The real issue: AI strategy without human strategy

Neither of these problems is technical.

They’re human.

They come from treating AI as a replacement tool, instead of redesigning work with people in mind.

Because introducing AI doesn’t just change what gets done.

It changes:

  • The shape of roles
  • The skills required
  • The energy and cognitive load on your people
  • The way expertise is built and maintained

A better way to think about it

If you’re designing an AI strategy, don’t just ask:

  • “What can we automate?”

Also ask:

  • What does this leave our people doing all day?
  • Are we concentrating stress into fewer interactions?
  • How do we preserve learning, judgment, and expertise?
  • Who is ready for the rare but critical scenarios?

Because success with AI isn’t just about efficiency.

It’s about designing systems where humans can still perform at their best.


So what should we do differently?

If AI is going to reshape work, then we need to be just as intentional about designing the human experience around it.

A few practical considerations:

1. Don’t remove all the “easy work”

Leave a mix.

Easy interactions aren’t a waste. They:

  • Give people quick wins
  • Provide mental recovery between harder tasks
  • Help build confidence and rhythm

Instead of full automation, consider partial automation or intelligent routing that still allows humans to handle a balanced workload.
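To make the routing idea concrete, here is a minimal sketch of what a “keep some easy work human” policy might look like. Everything here is hypothetical and for illustration only: the `Interaction` type, the pre-classified `difficulty` field, and the 30% keep-ratio are all assumptions, not a real contact-centre API or a recommended number.

```python
import random
from dataclasses import dataclass

@dataclass
class Interaction:
    id: int
    difficulty: str  # "easy" or "hard" -- assumed classified upstream

# Illustrative policy: rather than automating 100% of easy interactions,
# keep a fraction of them in the human queue so agents still get quick
# wins and mental recovery time between hard calls.
EASY_KEEP_RATIO = 0.3  # hypothetical figure, to be tuned per team

def route(interaction: Interaction) -> str:
    """Return "bot" or "human" for a single interaction."""
    if interaction.difficulty == "hard":
        return "human"  # complex cases always go to people
    if random.random() < EASY_KEEP_RATIO:
        return "human"  # deliberately keep some easy work human
    return "bot"        # automate the rest

# Usage: roughly 30% of easy calls stay with humans; all hard calls do.
calls = [Interaction(i, "easy" if i % 5 else "hard") for i in range(100)]
assignments = {c.id: route(c) for c in calls}
```

The design choice worth noting is that the ratio is an explicit, visible parameter. That turns “how much easy work do our people keep?” into a deliberate workload-design decision rather than an accidental by-product of automation coverage.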


2. Design roles, not just processes

When you automate a process, you’re redesigning a job, whether you mean to or not.

Ask explicitly:

  • What does a day in this role now look like?
  • Is it sustainable for a human to perform at this level all day?

If the answer is no, the design isn’t finished.


3. Protect and build expertise deliberately

If AI handles the majority of “normal” scenarios, you need a plan for how people:

  • Stay sharp
  • Practice decision-making
  • Build judgment over time

This might look like:

  • Simulation exercises
  • Rotating exposure to different case types
  • Creating space for learning, not just execution

Expertise doesn’t maintain itself.


4. Keep humans meaningfully “in the loop”

Not as passive observers, but as active participants.

Design for:

  • Ongoing oversight (not blind trust)
  • Clear escalation paths
  • Situations where humans are expected to step in

Because when something goes wrong, reaction time matters, and that only comes with engagement.


5. Measure the right things

If you only track efficiency, you’ll miss the warning signs.

Also look at:

  • Employee fatigue and engagement
  • Time spent per interaction (and why it’s increasing)
  • Quality of outcomes in edge cases

What gets measured shapes behaviour, and right now, most metrics are pushing us in the wrong direction.


This is the difference between using AI… and actually designing for it.


Final thought

AI doesn’t fail most organisations because the technology isn’t good enough.

It fails because we design for process… and forget to design for people.

And that’s a much harder problem to fix later.


