Creating the cybernetic human: Managing the human-AI interface
While reading recently about generational trends (Gen X, Gen Z, Baby Boomers, etc.), I came across a McKinsey podcast featuring Professor Andrew Scott from London Business School. One comment about AI particularly stood out:
“As machines get better at being machines, humans have to get better at being more human... So human empathy, EQ, et cetera, will all become more important for employment.”
At first, I thought, “Good point!” But then I paused: Is it really that simple? Can we really draw such a clear line between what machines do and what humans do?
With so much discussion about AI – how it works, how we can use it, whether it will replace jobs or rule the world – my own interest, as a leadership and management specialist, is in how leaders interact with AI, and how this shapes organizations and the people within them. So here, I want to unpack some thoughts prompted by Scott’s statement.
Models of human-AI interaction
The Centaur model
Scott’s point fits neatly with what’s often called the centaur model: humans and AI working side by side, each playing to their strengths. AI handles data-heavy tasks; humans bring empathy, judgment, and creativity.
This is a comforting narrative – and perhaps a bit of a hangover from our cultural wariness of technology. It suggests AI should be “kept in its place,” echoing anxieties from the Industrial Revolution through to the computerization of offices. This runs deep – we’re all familiar with dystopian fiction about the dangers of technology, from Colossus: The Forbin Project to Black Mirror. But in reality, we’ve already integrated tech deeply into our lives. Think about your smartphone – it’s an extension of you. So perhaps we need to move beyond the centaur mindset.
Loop models: Who’s in charge?
Other models, especially from military and safety-critical contexts, explore varying levels of human control:
- Human-in-the-Loop (HITL): Humans make the final call, with AI providing input. Think of doctors using AI to aid diagnoses.
- Human-on-the-Loop (HOTL): AI acts autonomously, with humans supervising. Autonomous vehicles often follow this model.
- Human-out-of-the-Loop (HOOTL): AI acts entirely on its own. We already see this in high-frequency trading.
The further we move along this spectrum, the more uncomfortable it feels – maybe we’ve all watched Terminator too often. But these models are limited in scope. They focus on control, not collaboration.
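For the more technically inclined, the difference between the three loop models comes down to where (or whether) a human checkpoint sits in the AI's action cycle. Here is a minimal, illustrative sketch in Python – the names and callbacks are hypothetical, not drawn from any real system:

```python
from enum import Enum, auto

class Mode(Enum):
    HITL = auto()   # human-in-the-loop: a human approves each action
    HOTL = auto()   # human-on-the-loop: AI acts, a human supervises
    HOOTL = auto()  # human-out-of-the-loop: AI acts unsupervised

def run(action, mode, approve=lambda: True, monitor=lambda result: None):
    """Execute an AI-proposed action under the given control model."""
    if mode is Mode.HITL:
        # Nothing happens without explicit human sign-off.
        return action() if approve() else None
    result = action()  # HOTL and HOOTL: the AI acts on its own
    if mode is Mode.HOTL:
        # The human watches the outcome and may intervene after the fact.
        monitor(result)
    return result
```

Under HITL the action simply does not execute without approval; under HOTL it executes and the supervisor sees the result; under HOOTL there is no human touchpoint at all – which is exactly why the far end of the spectrum feels uncomfortable.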
Cyborgs and teaming: A symbiotic future
What interests me more are cyborg and teaming models. These move beyond control and towards true integration.
- Cyborg model: AI enhances human capability – supporting memory, decision-making, or communication. For example, call center agents using AI prompts during live conversations.
- Teaming model: Humans and AI work together like high-performing teams – sharing goals, adapting to each other, and collaborating creatively.
A typical AI-human teaming approach in strategic planning might look like this:
- Define the challenge collaboratively.
- Curate and analyze data using AI.
- Co-create scenarios for future possibilities.
- Use AI to model outcomes and risks.
- Develop response strategies together.
Companies like Accenture and Morgan Stanley are already doing this.
This feels radically different – but is it? Think about learning to drive: it once felt unnatural, but now you and your car function as a single system. We already have experience integrating with machines. We just need to adapt again.
Developing the skills to coexist productively with AI
The more integrated our relationship with AI becomes, the more important it is to harness our distinctly human skills.
Zirar, Ali and Islam (2023) identify three critical skill areas:
- Technical skills: Fluency with tools like LLMs (e.g. ChatGPT), VR, and automation platforms.
- Human skills: Emotional intelligence, communication, collaboration, leadership.
- Conceptual skills: Creativity, judgment, critical thinking, sense-making.
This isn’t entirely new. We adapted to personal computing, the internet, and social media. We learned word processing, graphic design, desktop publishing. AI is the next step – yes, a big one, but not unmanageable.
The key is engagement. As David Emerald, author of The Power of TED, reminds us, we can choose to become creators, not victims – focusing on outcomes, taking small steps, and influencing change. Engagement builds confidence, which builds agency.
Leaders: Create the conditions for AI fluency
If humans and AI are to become teammates – not just tool users – leaders have a critical role in enabling the transition.
This process requires psychological safety. To explore AI's potential, people need space to play, experiment, and fail without fear.
Dr. Timothy Clark’s Four Stages of Psychological Safety provides a useful lens:
- Inclusion Safety: Everyone feels accepted and invited to participate.
- Learner Safety: It's safe to ask questions, admit gaps, and try new things.
- Contributor Safety: People are trusted to apply their skills and ideas.
- Challenger Safety: It's OK to question assumptions and drive innovation.
This is a journey. But it’s one that leaders must actively support – if they want their organizations and their people to fully, and enthusiastically, embrace the promise of AI.
Conclusion: More human, not less
In the end, I’m inclined to agree with Professor Scott. The key to thriving in an AI-driven world lies in being more human, not less – more empathetic, more imaginative, more collaborative. That means developing the human skills of communication, critical and creative thinking, teamwork, leadership and collaboration. AI may be fast, powerful and scalable, but it lacks human context, emotion, and values.
Trying to “stay in control” through rigid centaur or HITL models may feel safe – but it also limits potential. By embracing cyborg and teaming models, we unlock something greater: the capacity to become AI-enhanced people and organizations.
Think of it as our chance to become the 21st-century equivalent of the Six Million Dollar Man or Woman – augmented, adaptive, and extraordinary.
And yes – just in case you're wondering – AI helped me write this!