How AI almost made me useless
Triggered by Shopify CEO Tobi Lütke’s leaked AI memo, this is the story of how blind trust in AI led to failure and what it took to rebuild from it.
What follows are extracts and hard-won lessons from my failures over the last two years of AI initiatives. These experiences have fundamentally changed how I approach this technology.
I used to think I was ahead of the curve. I had the tools. I had the swagger. I had AI.
When ChatGPT hit my workflow, it felt like I'd discovered a cheat code. Content? Automated. Code? Generated. Presentations? Spun up in minutes. My productivity skyrocketed. Clients were impressed. My ego inflated. I was soaring.
But like Icarus, I didn't notice how dangerously close I was flying to the sun.
When AI became my crutch
The first warning sign flashed during a client engagement with a fintech startup. I was leading technical strategy and had become dependent on AI-generated code to meet deadlines. Under intense time pressure, I used an AI coding assistant to generate a script for backend user authentication. It worked. It passed our tests. We shipped it.
Two weeks later, a junior engineer spotted something alarming: the script lacked proper input sanitisation. It left our system wide open to injection attacks. We had unknowingly deployed a vulnerability that could have exposed thousands of user accounts. Pure luck saved us from a breach.
That engineer deserved praise. Instead, I got defensive. I blamed the AI. I blamed the timeline. I blamed everything except myself.
It wasn't the AI's fault. It was mine. I hadn't reviewed the code. I hadn't asked the right questions. I had stopped thinking critically.
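The script itself isn't mine to publish, but the class of flaw is common enough to sketch. Below is a minimal, hypothetical Python illustration (the function names and the users table are my own, not the original code) of the pattern the assistant produced next to the parameterised query that a proper review would have demanded.

```python
# Hypothetical illustration of the class of bug we shipped: untrusted input
# interpolated straight into a query, versus a parameterised query.
import sqlite3


def authenticate_user_unsafe(conn: sqlite3.Connection, username: str, password_hash: str) -> bool:
    # VULNERABLE: string formatting lets input like "' OR '1'='1" rewrite the query itself.
    query = (
        "SELECT 1 FROM users "
        f"WHERE username = '{username}' AND password_hash = '{password_hash}'"
    )
    return conn.execute(query).fetchone() is not None


def authenticate_user_safe(conn: sqlite3.Connection, username: str, password_hash: str) -> bool:
    # Parameterised query: the driver treats the input strictly as data, never as SQL.
    query = "SELECT 1 FROM users WHERE username = ? AND password_hash = ?"
    return conn.execute(query, (username, password_hash)).fetchone() is not None
```

The difference is a few characters of discipline. Spotting it only takes a reviewer who is actually reviewing.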
The myth of effortless mastery
Around the same time, I was helping a friend, a talented freelance designer, incorporate AI into her creative process. She enthusiastically tested generative design tools to accelerate her work. One of her submissions to a high-profile client received praise at first, until someone recognised that it bore a striking resemblance to an obscure indie artist's signature style.
She hadn't copied intentionally. The AI had been trained on that artist's work, and the similarity wasn't coincidental. It was algorithmic plagiarism.
The client withdrew the work. The artist threatened legal action. My friend, who'd simply trusted the tool, found herself trapped in an ethical quagmire, facing real financial and reputational damage.
This incident haunted me because I was guilty of the same shortcut. I'd used AI to generate pitch decks I presented as my own work. I never verified the logic, never sourced the data, never questioned the training background of the tools I relied on. I was moving too fast, prioritising speed over substance.
I had fallen in love with the output, not the process. I felt powerful. But it wasn't mastery; it was mimicry.
The Dunning-Kruger AI trap
The most humiliating moment came during a meeting with a scale-up client. I had used a large language model to craft a go-to-market strategy. On the surface, it looked impressive: polished, intelligent, complete with market forecasts and detailed customer personas.
Then the client asked a simple question: "What data sources did you use to validate these buyer assumptions?"
I froze. I had no answer. The LLM had generated the personas and segmentation logic, and I hadn't validated any of it. I had blindly trusted the format, tone and confident presentation without demanding evidence.
That single question shattered my credibility.
In the painful post-mortem that followed, we discovered multiple hallucinated statistics. The demographic targeting was fundamentally flawed. The buyer intent model was built on fabricated behavioural signals. It looked professional but was essentially fiction.
I had fallen squarely into the Dunning-Kruger zone of artificial intelligence. I wasn't applying strategic thinking; I was parroting AI outputs. I wasn't managing the technology; it was managing me.
The erosion you don’t see coming
As my dependency on AI deepened over these two years, I noticed something disturbing: my skills weren't improving; they were deteriorating. I was getting lazier.
My writing became formulaic. My problem-solving abilities dulled. When AI failed me, I struggled to find alternatives.
And I wasn't alone. My development team, early adopters of AI-assisted coding, began struggling with manual debugging. Junior talent bypassed crucial foundational learning because AI "helped" them code until it didn't. When systems broke down, we lacked the fundamental skills to fix them quickly.
The realisation hit hard: this wasn't augmentation. It was atrophy.
The fall is quiet, until it isn’t
A promising startup project collapsed spectacularly on my watch in the second year of our AI journey.
They had used AI to generate an entire marketing campaign. Speed trumped scrutiny. The copy seemed flawless, but hidden in the fine print were false claims about sustainability. It wasn't deliberate deception, just inaccurate information the AI had fabricated. The startup faced devastating accusations of greenwashing and was eviscerated on social media.
It happened because they trusted too completely. Because they made assumptions. Because they treated AI as an infallible partner instead of a tool requiring oversight.
Icarus had fallen and the sea was mercilessly cold.
The climb back requires humility
After nearly two years of accumulating failures, I went back to fundamentals. I rebuilt workflows around rigorous human review. I created systematic checklists to audit AI outputs. I trained my team to think like Daedalus: sceptical, methodical, grounded in reality.
We established new principles:
Every AI output is a draft, not a deliverable.
Every confident response demands verification.
Every shortcut carries a hidden cost.
We launched internal AI literacy sessions. We developed warning systems that flagged potential issues: "This recommendation may contain hallucinations" or "This output requires security review."
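Most of this lived in checklists rather than software, but the shape of a warning gate is simple enough to sketch. Here is a minimal, hypothetical Python example; the rules and wording are illustrative, not our production tooling.

```python
# Hypothetical sketch of an AI-output review gate: every draft is run through
# a handful of rules, and the resulting warnings go to a human reviewer.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Draft:
    kind: str                          # e.g. "copy", "code", "strategy"
    text: str
    sources: List[str] = field(default_factory=list)


def review_flags(draft: Draft) -> List[str]:
    flags = []
    if not draft.sources:
        flags.append("No sources cited: treat every statistic as a potential hallucination.")
    if draft.kind == "code":
        flags.append("Code output: requires security review (input validation, auth, secrets).")
    if any(term in draft.text.lower() for term in ("sustainable", "eco-friendly", "carbon neutral")):
        flags.append("Sustainability claim detected: verify before publishing.")
    return flags


if __name__ == "__main__":
    draft = Draft(kind="copy", text="Our packaging is 100% carbon neutral.")
    for flag in review_flags(draft):
        print("WARNING:", flag)   # a human acts on these; nothing ships automatically
```

The point was never automation for its own sake. It was to make the absence of human review impossible to ignore.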
Ethical grounding in a frictionless world
We implemented robust systems to properly attribute AI-generated content, not just for legal compliance but for ethical clarity. Clients deserve transparency about when a logo, copy, or code was influenced by AI. And original creators deserve credit, not just compensation.
We established mandatory human-in-the-loop protocols for sensitive AI applications in medical and financial contexts. No more blind trust. No more black-box dependencies. We learned to ask tougher questions and challenge assumptions. We made questioning AI outputs a core part of our onboarding: learn to use AI effectively but also learn to recognise when it's wrong.
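The protocol itself is organisational rather than technical, but its shape translates directly into code. A hypothetical sketch, assuming a simple release gate plus an attribution record; the context labels and field names are illustrative, not a real system of ours.

```python
# Hypothetical human-in-the-loop gate: AI output destined for a sensitive
# context is blocked until a named reviewer signs off, and the AI's
# involvement is recorded alongside the deliverable for attribution.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

SENSITIVE_CONTEXTS = {"medical", "financial"}


@dataclass
class AIOutput:
    content: str
    context: str                    # e.g. "marketing", "medical", "financial"
    model: str                      # which tool produced it, for attribution
    approved_by: Optional[str] = None


def release(output: AIOutput) -> dict:
    if output.context in SENSITIVE_CONTEXTS and output.approved_by is None:
        raise PermissionError(f"{output.context} output needs human sign-off before release")
    # The attribution record travels with the deliverable, for clients and creators alike.
    return {
        "content": output.content,
        "ai_model": output.model,
        "approved_by": output.approved_by,
        "released_at": datetime.now(timezone.utc).isoformat(),
    }
```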
By identifying flaws in AI, we gradually rebuilt confidence in our human judgment.
The myth, the machine and the mirror
Greek myths weren't mere stories. They were warnings. Icarus wasn't foolish; he was seduced. Just as we are today. By speed. By scale. By the illusion of effortless perfection.
But AI isn't flight. It's fire. Used wisely, it illuminates and warms. Used recklessly, it consumes everything.
AI didn't make me a better professional. Hitting rock bottom did. And climbing back out made me wiser.
These past two years of AI experimentation taught me more from failure than success ever could. The lessons were painful but necessary.
Don't be Icarus. Be Daedalus. Build wings that bend, not break. And never, ever stop questioning the machine.
Thanks for sharing, my friend—some really valuable lessons here.
I could add examples from our own work, but honestly, I think you’ve said it all.
My take: AI is a powerful productivity tool—like having an army of interns at your fingertips. But just like interns, it needs proper management: training, guidance, review, and course correction.
And if you're presenting work that isn't fully your own, whether it's from an intern or Claude/ChatGPT/etc., it's only fair to give credit where it's due.
(This response was polished by ChatGPT)