I’m not building Artificial General Intelligence, but I’m still enabling it
A personal reflection on DeepMind’s latest AGI updates and systemic blind spots
Let me be clear: I’m not in a lab training frontier models. I’m not scaling architectures or leading alignment research. I’m not building Artificial General Intelligence (AGI). But I’m still involved.
I advise companies on how to use AI. I help teams integrate models into products and systems. I shape strategies that bring artificial intelligence closer to people, decisions and markets.
While I’m not building AGI, I’m helping pave the road to it, and lately that role has started to weigh on me. Not because I think AGI is inherently dangerous, but because I’ve seen this movie before and I know how easily good intentions get swallowed by systemic incentives.
This isn’t the first time we’ve been carried away
Social media started with a similarly ambitious promise. We were going to connect the world. Flatten hierarchies. Give everyone a voice. Create more informed societies.
We got a very different reality. Instead of connection, we got polarisation. Instead of empowerment, we got platform dependency. Instead of truth, we got engagement-optimised misinformation. We got all of it fast, before anyone could meaningfully intervene.
Social media didn’t need general intelligence to reshape the fabric of society. It just needed an incentive structure that rewarded reach over rigour, virality over veracity, optimisation over understanding. Now, as we inch toward AGI, we’re applying that same logic to something far more powerful.
DeepMind’s “Responsible Path” feels familiar
Recently, DeepMind published a blog post titled “Taking a responsible path to AGI.”
It was thoughtful. It was well-articulated. It checked all the right boxes: transparency, safety, long-term benefit, collaboration.
But it also followed a script we’ve heard before.
Acknowledgement of risks
Confidence in internal safeguards
Optimism that the benefits will outweigh the harms
Commitment to responsibility defined internally
It mirrors how social platforms once framed their impact: the belief that thoughtful actors could steer exponential systems through complexity, that if you had the right values and enough research, the outcomes would stay aligned. We’ve learned that values alone are not enough when the system rewards speed, scale and dominance… AI is following the same curve.
I’ve told myself a comfortable story
In my work, I often use terms like:
Responsible integration
Ethical AI adoption
Scalable, human-aligned AI
They sound good. They feel right. But increasingly, I’m asking: what do they really mean?
Because here’s the uncomfortable truth:
I’ve helped deploy systems I didn’t fully understand
I’ve enabled abstraction layers that obscure model behaviour
I’ve accelerated adoption without always questioning the broader consequences
That doesn’t make me reckless, but it does make me part of the feedback loop. The more we build, deploy and normalise AI, the harder it becomes to slow down, even if we later discover we should have.
What social media taught us
Social media didn’t fail because its founders were malicious. It failed because its incentives (scale, engagement, growth) overpowered intent. The same thing is happening in AI.
We’re pushing systems into healthcare, education, finance, hiring: domains where stakes are high, complexity is deep and unintended consequences compound. Now we’re talking about building AGI: a system that, by definition, can outperform humans in a broad range of cognitive tasks.
If basic recommendation engines broke reality’s consensus, what happens when cognition becomes a commodity? What happens when decision-making gets outsourced: not just to software but to agents trained to optimise in ways we can’t audit?
The question isn’t just “what could go wrong?” It’s “why would this go right, given everything we’ve already seen?”
DeepMind’s framing isn’t malicious; it’s incomplete
DeepMind’s post is earnest. I believe they care. I believe many labs do. But sincerity is not a safeguard.
What’s missing from their vision is constraint, structural constraint. The kind that doesn’t just say “we’re responsible” but actually slows things down. Introduces friction. Accepts that winning might not be the goal.
Instead, the blog outlines principles without detailing enforcement. It speaks of benefits without addressing power asymmetries. It uses words like “science-led” and “bold responsibility” but doesn’t name the forces that will pressure even the most thoughtful lab to move faster than it should.
We don’t need AGI to reach existential risk. The pre-AGI world is already full of challenges we’ve failed to contain: surveillance, bias, misinformation, disempowerment. All of these are tractable; none of them are solved. So what exactly are we building on top of?
Where this leaves me
I’m not anti-AI. I believe it has transformative potential. I’ve seen it unlock real value. I’ve helped make it useful, accessible, scalable. But I can’t pretend the direction of travel is benign by default, and I can’t hide behind “not building AGI” as if that absolves me of responsibility.
The ecosystem is interconnected. The choices we make as practitioners (what we deploy, what we abstract, what we normalise) feed into the momentum that shapes the entire field.
And that means I need a new lens: a new way of thinking about responsibility that isn’t just about good intentions or model safety, but about systems, incentives and long-term consequences.
What comes next
I’ve started writing a new playbook for myself. Not a moral essay, but a practical set of principles for how I want to operate in this space going forward. How to make decisions with integrity in a system that doesn’t make that easy. How to ask better questions. How to say no. How to slow down where it matters. How to shift from enabler to steward.
That playbook isn’t finished yet, but it’s coming.
If you’re reading this, chances are you’re somewhere in the AI value chain too. And even if you’re not building AGI directly, you’re helping shape the conditions in which it arrives. So I hope you’ll consider writing your own version of that playbook.
→ In the next part of this series: “The playbook for AI practitioners who don’t want to be bystanders”
I’m not sure when I’ll be ready to publish it but let me know if you want an early draft.