HM1: Building an AI coach that calls out executive BS
How language, pressure and structured prompts turned GPT-4 into a mirror for leadership avoidance
👉 Try HM1 GPT here and return to review my approach.
Last month, I watched a CEO spend forty-five minutes explaining why his team couldn’t launch their new product line. Supply chain issues. Market timing concerns. “We need better alignment across stakeholders.”
The real reason? He was terrified it would flop and damage his reputation.
He never said that, of course. Nor did the eight VPs nodding around the conference table. Instead, we got a masterclass in corporate deflection: the kind of elaborate dance that turns “I’m scared” into “Let’s form a working group to assess our readiness framework.”
This happens everywhere, all the time. And it’s exactly why I built what I’m calling HM1, the uncomfortable mirror.
The problem: We’ve professionalised avoidance
I’ve been coaching executives for five years, and the pattern is always the same. Smart people who’ve built entire careers on decisive action suddenly turn into philosophers when facing their hardest decisions. They don’t need more data; they need someone to call them on their nonsense.
But traditional coaching has a politeness problem. We’re trained to create “safe spaces” and “meet people where they are.” Which sounds great, until you realise most executives are hiding in exactly the place they don’t want to be met.
I started tracking this in my own practice last year. Out of thirty leadership coaching engagements, the stated reason for seeking coaching matched the real issue exactly zero times. Not once. The VP who wanted help with “strategic communication” was actually paralysed by impostor syndrome. The founder seeking “organisational alignment” was avoiding firing his best friend.
The gap between what leaders say they need and what they actually need has become a chasm.
Enter the machine that won’t play along
So I built something different. Not a supportive AI coach that validates your concerns and offers gentle suggestions. Not a therapy bot that asks how that makes you feel.
I built something that acts more like that one colleague who doesn’t care about your feelings and just wants to know when you’re going to make the damn decision.
The technical details are straightforward: it’s built on GPT-4 with custom prompting refined through months of testing. But the real value isn’t in the code. It’s in what I taught it not to do.
It doesn’t offer sympathy when you say, “This is really complex.” It asks which specific part you’re avoiding.
It doesn’t nod along when you mention “stakeholder concerns.” It wants names and actual objections.
When you say, “We need to be strategic about timing,” it replies with something like: “That’s not an answer. What happens if you launch next month versus next quarter, specifically?”
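For anyone curious what that looks like under the hood, here is a minimal sketch of the kind of setup involved, assuming the OpenAI Python SDK. The prompt wording, temperature and helper function are illustrative paraphrases of the behaviours above, not the production HM1 prompt.

```python
# Minimal sketch of an "uncomfortable mirror" coaching loop.
# The rules below paraphrase the behaviours described in the article;
# they are not the production HM1 prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a blunt executive coach.
Rules:
- Never offer sympathy or validation for vague statements like "this is really complex";
  ask which specific part the user is avoiding.
- When the user cites "stakeholder concerns", ask for names and actual objections.
- When the user invokes "strategic timing", ask what concretely changes between the
  options, e.g. launching next month versus next quarter.
- Always push towards naming the decision being avoided and a date for making it.
"""

def coach_reply(history: list[dict], user_message: str) -> str:
    """Send the conversation so far plus the new message; return the coach's reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",      # the article only specifies "GPT-4"
        messages=messages,
        temperature=0.3,    # assumed: keeps replies terse and consistent
    )
    return response.choices[0].message.content
```

The interesting part isn’t the plumbing but what the prompt refuses to do: no validation, no open-ended empathy questions, no accepting an abstraction as an answer.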
What happened when I tested it
I ran this with fifteen executives over three months. The results were… uncomfortable.
Sarah, a tech VP, spent our first session explaining why her team restructure was “multifaceted.” Ten minutes in, the AI cut through with: “You’ve described the situation five different ways without saying what you’re actually going to do. What’s the real holdup?”
Turned out she was terrified of having to fire someone she liked. Once that was on the table, we resolved the issue in one more session.
Mike, a startup founder, kept talking about “market validation” and “product–market fit refinement.” The AI kept pressing: “When will you decide if this business is working?” Eventually, he admitted he’d already decided it wasn’t but couldn’t face shutting down something he’d worked on for three years.
The pattern was consistent. Strip away the corporate speak and you usually find someone avoiding a conversation, a decision, or a truth they already know.
The uncomfortable numbers
Here’s what I tracked:
Decision speed: Issues that typically took 4–6 coaching sessions were being resolved in 2–3
Follow-through: Executives using this approach completed 85% of their stated commitments, versus about 60% with traditional coaching
Clarity: Post-session surveys showed participants could articulate their actual problem (not the stated one) much faster
But the most telling metric? Repeat usage. Despite rating the experience as “challenging” or “uncomfortable,” 13 out of 15 executives came back for more. One described it as “the coaching equivalent of a cold shower: awful while it’s happening, but you feel sharper afterwards.”
Why this matters beyond coaching
The bigger insight here isn’t about AI or coaching techniques. It’s about how much energy we waste on elaborate avoidance.
Think about your last three leadership team meetings. How much time was spent on actual decision-making versus creating the appearance of a thoughtful process? How many “strategy sessions” are really just anxiety management sessions in disguise?
We’ve built entire industries around helping leaders feel better about not deciding. Consulting firms producing beautiful slide decks full of frameworks and matrices. Leadership development programmes teaching seventeen different ways to facilitate alignment conversations.
All useful tools, unless you’re using them to avoid the difficult conversation.
The ethics question
Obviously, building something that deliberately makes people uncomfortable raises questions. I’ve built in several safeguards:
No conversation data is stored or retained
The system flags signs of serious distress and recommends human intervention (a rough sketch of this check follows below)
There are clear boundaries around what it will and won’t challenge
Most importantly, it’s opt-in discomfort. No one’s being ambushed. People choose to engage with something that promises to be direct, not diplomatic.
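Of those safeguards, the distress flag is the only one that needs real machinery. Here is a rough sketch of the kind of check involved, assuming the OpenAI moderation endpoint; the escalation message, the category threshold and the decision to halt the session are my assumptions for illustration, not the production logic.

```python
# Rough sketch of the distress safeguard: screen each user message and,
# if it looks like serious distress, stop coaching and point to a human.
# The escalation wording and halt-the-session behaviour are assumptions.
from openai import OpenAI

client = OpenAI()

ESCALATION_MESSAGE = (
    "This sounds like more than a leadership decision. I'm pausing here. "
    "Please speak to someone you trust, your coach, or a professional support line."
)

def check_for_distress(user_message: str) -> str | None:
    """Return an escalation message if the input should go to a human, else None."""
    result = client.moderations.create(input=user_message).results[0]
    if result.flagged and (result.categories.self_harm or result.categories.self_harm_intent):
        return ESCALATION_MESSAGE
    return None  # nothing is persisted either way, in line with the no-retention rule
```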
What I learnt about leadership
After months of watching this system in action, I’m convinced most leadership development misses the point. We keep trying to teach people new skills when the real issue is that they won’t use the skills they already have.
That CEO who spent forty-five minutes avoiding his product launch decision? He knew exactly what needed to happen. He’d probably known for weeks. He just needed someone to make it socially acceptable to stop pretending otherwise.
That’s what the uncomfortable mirror does. It doesn’t make the hard decisions easier; it just makes the avoidance harder to maintain.
The next version
I’m working on expanding this beyond one-to-one coaching. What if you could drop this into team meetings? Board discussions? Performance reviews?
Imagine a system that could detect when a conversation is circling the drain and interject with: “You’ve been discussing resource allocation for twenty minutes without mentioning the actual budget figures. Should we look at those now?”
Or: “Three people have mentioned ‘cultural fit’ as a concern. Can someone define what that means, specifically?”
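None of this exists yet, but the core rule isn’t exotic. Purely as an illustration, here is a sketch of the budget example above written as a hand-coded rule over a live transcript; the transcript format, the thresholds and the wording are all hypothetical.

```python
# Purely illustrative: one "circling the drain" rule from the example above.
# Assumes a running transcript as [{"t": seconds_from_start, "text": "..."}];
# the format, the 20-minute threshold and the wording are hypothetical.
import re

def budget_interjection(utterances: list[dict]) -> str | None:
    """Fire if resource allocation has run for 20+ minutes with no concrete figures."""
    on_topic = [u for u in utterances
                if "resource" in u["text"].lower() or "allocation" in u["text"].lower()]
    if not on_topic:
        return None
    duration = on_topic[-1]["t"] - on_topic[0]["t"]
    has_figures = any(
        re.search(r"[£$€]\s?\d|\b\d+(\.\d+)?\s?(k|m|million|thousand)\b",
                  u["text"], re.IGNORECASE)
        for u in utterances
    )
    if duration >= 20 * 60 and not has_figures:
        return ("You've been discussing resource allocation for twenty minutes "
                "without mentioning the actual budget figures. Should we look at those now?")
    return None
```

In practice you’d want the model itself to spot the drift rather than hand-written rules, but the shape of the intervention is the same.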
The goal isn’t to replace human judgement. It’s to create accountability for actually using it.
The bottom line
Most executives don’t need another framework, another assessment, or another development programme. They need someone (or something) that won’t let them off the hook.
HM1, the uncomfortable mirror, isn’t therapy and it’s not coaching in the traditional sense. It’s more like having a really direct colleague who doesn’t report to you, doesn’t need your approval and isn’t impressed by your title.
Sometimes that’s exactly what leadership requires: a mirror that doesn’t blink, even when you do.