In early 2023, ChatGPT became the fastest consumer application in history to reach 100 million users (OpenAI, 2023). The industry celebrated a milestone. I saw a warning flare: we are far more obsessed with machines that look human than with machines that actually help humans.
The AI industry has trapped itself. We have tied our ambition to an outdated fantasy: machines that talk like us, feel like us, mimic our empathy and our cognition.
We celebrate when AI passes Turing tests. Yet in boardrooms, hospitals, classrooms and factories, I see the same pattern: AI is rarely moving the needle on what matters most, namely human capability, resilience and outcomes.
How we got stuck: the Turing Trap
The roots of this dysfunction are old. Alan Turing’s 1950 paper suggested that behaviour imitation was a practical test of machine intelligence (Turing, 1950).
It was never intended as a goal. It became one anyway.
The venture funds, boards and product teams I advised overwhelmingly calibrated success around "human-likeness".
Products were judged not on their contribution to user outcomes but on their capacity to feel natural, appear intelligent and simulate connection.
It is no surprise what followed. Billions spent. Demos that dazzled. Business cases that collapsed.
I lived through the internal post-mortems. I watched AI tutors that could write beautiful Socratic dialogues fail to improve test scores. I watched healthcare chatbots praised for emotional intelligence that could not navigate a simple triage protocol.
I personally advised a global platform that rolled out an AI “empathy engine” for mental health support only to find, six months later, that user loneliness and disengagement had increased, not decreased.
The problem was not the technology. It was the goal. We optimised for imitation, not for empowerment.
The hidden economic damage
The consequences are deeper than failed products. They ripple through labour markets, trust systems and social cohesion.
When AI mimics human tasks, it enters into direct competition with human workers. Erik Brynjolfsson called this the "Turing Trap" (Brynjolfsson, 2022). I saw it happen first-hand with clients who deployed AI to automate administrative roles, only to discover massive organisational distrust, skills degradation and wage polarisation.
MIT research (Acemoglu and Restrepo, 2022) confirms what I witnessed on the ground: AI that substitutes for labour exacerbates inequality.
The tech makes the rich richer, the skilled more powerful and the rest more replaceable.
One client, the CPO of a global logistics company, replaced frontline supervisors with AI-driven scheduling systems. Productivity spiked, then collapsed. Error rates soared. Why? Because what looked like a replicable scheduling task in fact depended on informal, tacit knowledge, something the system could not model.
The lesson for me is that superficial similarity is not real capability. Imitation is not understanding. Replication is not contribution.
Agentic AI could be a way out, if we are disciplined
There is an alternative.
Agentic AI systems, designed to augment rather than replace human capability, have shown promise.
I have seen it work.
At a healthcare network, deploying AI as a care coordination assistant freed up doctors' time and improved patient outcomes. Not by pretending to be doctors, but by being better at organising follow-ups and detecting gaps.
At an advanced manufacturing firm, pairing engineers with AI-driven simulation tools doubled design throughput without reducing headcount.
In pharma, AlphaFold3’s molecular predictions accelerate experimentation but humans still guide the validation and application.
In each case, success came not from mimicking human reasoning but from complementing human strengths.
But agentic AI is no silver bullet. If deployed carelessly, it risks creating a thin stratum of hyper-augmented elites while leaving the majority behind. I have already seen this dynamic: firms that pair AI augmentation with strong workforce development strategies thrive. Those that do not breed resentment and widening gaps.
The Stanford Digital Economy Lab (2025) found the same. Augmentation can reduce wage gaps but only when paired with deliberate human capital investment. If we simply layer AI augmentation on top of existing structural inequalities, we will only deepen them.
The real work ahead
Governance remains painfully behind. The EU AI Act, the C2PA standards and the corporate "AI ethics" playbooks are steps in the right direction, but too often they are designed as checklists rather than living, adaptive systems.
As Shannon Vallor (2024) argues, good governance must actively enable human flourishing, not just prevent disaster. I have been in the rooms where these decisions are made. The temptation to optimise for marketing optics, not systemic impact, is immense.
Real governance is slow, messy, multidisciplinary. It is deeply unsexy. It demands uncomfortable trade-offs between innovation velocity and societal resilience.
If we are serious about making AI a force multiplier for humanity, we need:
Deliberate system design: Prioritising capability expansion, not surface imitation
Integrated policy frameworks: That promote broad access, resilience and human-centred outcomes
Massive investment in education and upskilling: Especially for vulnerable and marginalised populations
Ethical standards that evolve dynamically: Informed by real-world performance, not speculative worst-case scenarios
It is slower. It is harder. It is the only path that works.
Choose divergence
Every technological revolution starts with imitation and spectacle. Real transformation comes only when we move beyond it.
Today, we can continue chasing digital puppets that look like us but hollow out our institutions, our work and our trust. Or we can build tools that make us more: more capable, more resilient, more empowered.
The future of AI will not be decided by technical capacity alone. It will be decided by where we choose to aim.
Later today I will publish Part 2, where I outline a practical framework for building AI systems deliberately designed to benefit humanity.