Are You Chasing the Wrong Horizon? (1 of 4)
Why we overestimate the short term, underestimate the long term, and what that means for AI.
This post is part of a four-part series: Thinking Clearly About AI. The series doesn’t try to predict the future. It looks at patterns from past disruptions and asks better questions about the present. Each post explores one idea.
We’ve always been bad at timing.
It isn’t just executives. It’s human nature. We anchor on what feels immediate and we struggle to picture what plays out slowly. This bias has tripped us up with every major technological shift.
AI will be no different. The risk for leaders today is not that you fail to act, but that you act on the wrong horizon: chasing short-term hype while ignoring long-term transformation.
The Bias We Can’t Escape
Psychologists Daniel Kahneman and Amos Tversky called it the "availability heuristic." We overweight what's in front of us now and underweight what compounds quietly in the background. Futurist Roy Amara summarised it in a line now known in technology circles as Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
The dot-com bubble is the most famous example. In 1999, analysts declared that physical stores would be dead within five years. E-commerce penetration in the US didn’t hit 10% until 2017. The short-term prediction was wildly wrong. Yet today, Amazon’s $1.8 trillion valuation reflects the reality that online commerce did indeed reshape the economy, just on a slower fuse and in a deeper way than predicted.
The same was true for cloud computing. Gartner was publishing reports about "utility IT" back in 2008. Most enterprises didn't migrate critical workloads until the last five years. By 2024, cloud accounted for nearly half of enterprise IT budgets (Flexera). The short term was slower than expected. The long term has been more systemic than anyone imagined.
Mobile followed the same pattern. Early hype in the 2000s centred on ringtones and WAP browsing. Few predicted that smartphones would become the hub of identity, commerce and social life. Today, more than 5.5 billion people carry one (GSMA, 2024), and 60% of global e-commerce flows through them.
We’re not just bad at timing. We’re consistently wrong in the same way: too much, too soon, then too little, too late.
What the Data Says About AI Now
AI is showing the same split between short-term overestimation and long-term underestimation.
Short term: McKinsey’s State of AI 2023 report found that 55% of firms had adopted AI in at least one function. But only 23% reported meaningful financial impact. Microsoft’s own data on Copilot shows 90% of users say it makes them more productive, yet adoption across enterprises is still under 15%. The near-term reality is underwhelming.
Long term: Goldman Sachs projects $7 trillion of added global GDP by 2030. PwC estimates AI could contribute 14% to global GDP by 2030. The World Economic Forum forecasts that while 83 million jobs may be displaced by automation, 69 million new ones will be created, reshaping labour markets rather than simply erasing them.
In other words: in the short run, the promise exceeds reality. In the long run, reality may exceed the promise.
When Short-Term “Failure” Plants Long-Term Seeds
IBM’s Watson Health is a case in point. A decade ago, it was marketed as a revolution in cancer care. The pitch was seductive: Watson would analyse thousands of research papers and help doctors make better treatment decisions.
Reality was harsher. Hospitals struggled to integrate it. Doctors didn’t trust its recommendations. By 2022, IBM had sold the unit for parts. On the surface, it looks like a cautionary tale of hype that went nowhere.
But talk to hospital administrators today and you’ll hear a more nuanced story. Watson forced them to digitise records, clean up data, and experiment with AI governance. Those foundations are now being used by the next wave of diagnostic systems. Watson failed as a business. But as infrastructure, it laid groundwork.
Blockbuster is another reminder. In 2000, Reed Hastings offered to sell Netflix for $50 million. Blockbuster’s CEO turned him down. The company was focused on short-term DVD revenues, not long-term streaming economics. They chased the wrong horizon. And they didn’t survive to see the long one.
Executives today risk making the same mistake with AI: dismissing what doesn’t pay back quickly, only to be caught unprepared when the deeper shifts arrive.
Why CEOs Struggle With Horizons
In practice, I see CEOs make three recurring errors when it comes to horizon thinking:
Collapsing horizons. They roll one AI strategy into a single deck, instead of separating 1–3 year productivity gains from 10–20 year structural shifts.
Overweighting ROI. They demand year-one returns on initiatives that inherently need long gestation. Projects get killed just before they start to compound.
Forgetting patience. Leaders rotate every three to five years. By the time the long horizon arrives, a new leadership team is in place, often with no memory of the seeds planted earlier.
These errors explain why so many companies miss disruptive shifts. They were looking at the wrong time horizon.
What Horizon Discipline Looks Like
The companies that have navigated disruption best weren’t clairvoyant. They were disciplined about managing horizons.
Amazon launched AWS without knowing cloud would dominate. But they invested patiently for over a decade before it became a profit engine.
Netflix didn’t predict streaming in 2000. They built a culture of adaptability that let them pivot when the horizon shifted.
Toyota didn’t forecast future demand with precision. They built a system, the Toyota Production System, that made them capable of learning and adjusting faster than competitors.
Each of these examples reflects the same principle: discipline across multiple horizons, not faith in one forecast.
The Question for You
The temptation in AI right now is to chase visible returns in 2025: cost savings, automation, productivity. Those matter. But they are not the whole picture.
The bigger horizon is slower, quieter and harder to put on a slide:
Governance models reshaping how industries operate.
Power concentrating around platform providers.
Labour markets fragmenting into new categories of work.
Entire value chains restructured by capabilities we can’t yet imagine.
So ask yourself:
Are you optimising for the next quarterly board meeting or for the organisation you want to be in 2040?
Do you have a portfolio of bets across horizons or are you collapsing them into one “AI strategy”?
Are your KPIs patient enough to let long-term seeds take root or are they built to choke them before they sprout?
The biggest risk isn't moving too slowly on AI; it's chasing the wrong horizon.
If you’re trying to figure out AI and disruption, I offer one-to-one executive coaching. I don’t claim to have all the answers (nobody does) but I help clarify thinking, avoid predictable mistakes and build the organisational fitness to adapt. If that sounds useful, reach out.