HumanWORKS


    The architecture of hype: why the AI revolution is being built on foundations of exquisitely capitalised nonsense

    Let me start with a confession that will surprise precisely nobody who has spent more than fifteen minutes in a technology strategy conversation with me: I have a visceral, almost physiological reaction to bullshit.

    Not the garden variety sort – the innocent exaggeration of a CV, the polite fiction that yes, that presentation was very insightful, thank you Derek. That kind of low-grade dishonesty is the social lubricant that keeps civilisation from grinding to a halt, and I’ve made my peace with it in the way one makes peace with the weather or the persistent existence of LinkedIn motivational posts about ‘hustle culture’. I still think these things are annoying but they aren’t likely to bring down the world’s economy in isolation.

    No, the bullshit I’m referring to operates at an altogether more impressive scale. It is the kind of bullshit that gets capitalised at astronomical valuations, the kind that attracts sovereign wealth fund investment, the kind that employs thousands of people whose job title might as well be ‘Professional Narrative Maintenance Engineer’. It is bullshit elevated to an art form, and – I say this with a degree of professional admiration for the sheer craftsmanship involved – the artificial intelligence sector has produced some of the finest examples the technology industry has ever witnessed.

    (I should note that this article is going to be long, somewhat technical in places, and deeply unfashionable in its conclusions. If you’re looking for breathless optimism about how AI will cure cancer, end poverty, and finally teach your labrador to use the washing machine, I’d recommend the nearest LinkedIn feed. What follows is closer to a systems analysis of why the current moment feels less like the dawn of a new era and more like watching someone build a cathedral on quicksand whilst insisting the foundations are ‘disrupting traditional geology’.)

    the epistemology of horseshit: a brief taxonomy of technological deception

    The technology sector has always operated within a framework where excitement about potential breakthroughs runs several laps ahead of material reality. This is not, in itself, a problem. Optimism about future capability is what attracts capital, and capital is what funds the research that occasionally – yes, occasionally – produces genuine transformation. The internet was overhyped before it was underestimated. Mobile computing was dismissed before it was everywhere. The cycle of hype and correction is as predictable as our fine British weather and approximately as amenable to control.

    What makes the current AI moment distinctive is not the presence of hype – we’ve had that since the 1960s, when researchers at Dartmouth cheerfully predicted that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’ and then spent the next six decades discovering that they had, perhaps, been a touch optimistic.

    (Forgive me as I have flashbacks to helping my then-girlfriend Becca with her Prolog homework in the early 2000s, during her time as a computer science undergraduate, before she rightly fucked the whole subject off and did biology and then genetics at Edinburgh. I appreciate the irony of a man who has spent the last twenty-five years in tech telling someone to run away from computer science at high speed, but she was far more interested in biology than in the numbers underpinning computer science, which ended up being my thing.)

    What distinguishes the current cycle is the velocity of capital deployment relative to the clarity of the value proposition.

    In the dot-com era, at least the bullshit had a certain transparency to it. ‘We’ll sell dog food on the internet’ was a comprehensible business proposition, even if Pets.com’s execution suggested otherwise. The crypto boom had the decency to be obviously speculative – anyone buying a $300,000 blockchain-validated JPEG of a bored ape knew, on some level, that they were participating in a collective hallucination about value. They did, right?

    The AI sector, by contrast, benefits from a peculiar form of what I’ll call Complexity Camouflage™ – the phenomenon whereby the genuine technical sophistication of the underlying systems provides near-perfect cover for the strategic deployment of impossible promises. When a vendor walks into your boardroom claiming their Cognitive Decision Engine® will deliver 80% efficiency gains across your operations, the sheer density of the jargon creates a kind of epistemic fog in which otherwise intelligent people find themselves nodding along because the alternative – admitting they don’t fully understand what’s being proposed – feels professionally dangerous.

    (I’ve sat in meetings where this has happened. I’ve watched the nods. I’ve felt the gravitational pull of the collective pretence. It takes a particular kind of bloody-mindedness to raise one’s hand and say ‘I’m sorry, could you explain what that actually means in terms of things that happen in reality?’ – a bloody-mindedness I’ve cultivated over twenty-five years of consulting and which has made me simultaneously valued and occasionally unwelcome, depending on whether the person running the meeting wanted truth or theatre. As my writing will attest, I offer the truth, because honesty is at least actionable whereas bullshit is not.)

    The real-world data tells a rather different story from the pitch deck. Genuine AI implementations typically deliver efficiency gains in the region of 15–30%, which is perfectly respectable and often worth the investment. Any claim significantly exceeding this threshold should be treated with the same scepticism one would apply to a man in a pub explaining that his uncle definitely knew the Queen.

    the bubble mechanics: or, how we keep doing this to ourselves

    Historical precedents for the current moment are not difficult to find, and their consistency would be almost comforting if the consequences weren’t so reliably catastrophic.

    The expert systems boom of the 1980s promised professional-level human intelligence simulation and delivered what amounted to very expensive if-then statements that couldn’t handle the messy ambiguity of actual decision-making. The resulting ‘AI winter’ set the field back by a decade and destroyed careers with the thoroughness of a controlled demolition.
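For readers who never met one of these systems, the ‘very expensive if-then statements’ can be caricatured in a few lines of modern Python – the rules, thresholds, and the `diagnose` helper below are entirely invented for illustration:

```python
# A caricature of a 1980s expert system: a brittle, hand-written rule
# base matched against facts. The moment the input doesn't fit a rule
# exactly, the 'expert' has nothing useful to say.

RULES = [
    # (condition on the facts, conclusion)
    (lambda f: f.get("temperature", 0) > 38 and f.get("cough"), "likely flu"),
    (lambda f: f.get("temperature", 0) > 38 and not f.get("cough"), "possible infection"),
    (lambda f: f.get("temperature", 0) <= 38, "probably fine"),
]

def diagnose(facts: dict) -> str:
    """Fire the first rule whose condition matches; no learning, no nuance."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no rule applies"  # the messy ambiguity of actual decision-making

print(diagnose({"temperature": 39.2, "cough": True}))  # → likely flu
print(diagnose({"temperature": 36.5}))                 # → probably fine
```

The brittleness is rather the point: anything falling between the rules defeats the ‘expert’ entirely, which is roughly how the 1980s ended for the field.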

    The dot-com crash demonstrated that ‘eyeballs’ and ‘burn rate’ are not, as it transpires, adequate substitutes for revenue. The fibre-optic backbone survived; the companies that laid it, largely, did not – some major winners aside. Amazon? Yes. Black Star? Not so much. Google? Yes. Webcrawler? Not really.

    The cryptocurrency and NFT frenzy offered decentralisation and ‘smart utility’ to a user base that, in retrospect, was primarily composed of people hoping to get rich by selling digital assets to the next person hoping to get rich by selling digital assets. A magnificent perpetual motion machine of speculation that worked right up until the moment it stopped, which is rather the nature of these things.

    Each cycle follows the same elegant choreography: genuine innovation attracts genuine interest; genuine interest attracts speculative capital; speculative capital inflates expectations beyond any reasonable connection to reality; reality eventually asserts itself with the subtlety of a freight train or a punch to the face by a heavyweight boxer; and then the infrastructure built during the mania remains, creating the actual foundation for the next wave of genuine innovation. The dot-com crash left us with e-commerce as a core pillar of the internet. The crypto crash left us with distributed ledger experimentation, albeit tainted by the paranoia that followed a banking crisis. The question for the current AI cycle is what the wreckage will leave behind that proves genuinely useful.

    (For what it’s worth, my bet is ‘quite a lot’. The underlying technology is real and materially useful in ways the previous bubbles’ technologies often weren’t. The problem isn’t the technology. The problem is what happens when you pour several hundred billion dollars of venture capital onto something before anyone has worked out what it’s actually for beyond the general-purpose answer of ‘making things better, somehow, trust us, look at this demo’.)

    the investor’s dilemma: or, FOMO as a fiduciary strategy

    The current market is defined by what some analysts have termed ‘The Great Compression’ – a phenomenon whereby the traditional venture capital stages collapse into one another like a telescope being shut, forcing investors into what amounts to a ‘Winner Takes Most’ mindset at every entry point. This creates an acute dilemma: the pressure of missing the next Google or Microsoft compels professional capital to enter at valuations that everyone privately acknowledges are disconnected from any reasonable expectation of near-term revenue, whilst simultaneously recognising that the sector exhibits characteristics that any honest observer would describe as ‘bubbly’.

    This is the investor’s version of the ancient philosophical paradox about the crocodile: you know it’s probably going to bite you, but you’ve been told there might be treasure in its mouth, and your limited partners are watching, and the fund down the road already has its arm in there, and – well, you can see how these things escalate.

    What makes the current situation particularly intriguing from a systems perspective is the emerging bifurcation between pure-play AI laboratories and the diversified incumbents. OpenAI, Anthropic, and their peers are essentially one-trick ponies – extraordinarily sophisticated one-trick ponies, to be sure, but their entire existence is predicated on the continuing escalation of AI capability. If the technology plateaus or the market corrects, they have nothing to fall back on. They are the metaphorical equivalent of a restaurant that serves only one dish: spectacular when the dish is in fashion, catastrophically exposed when tastes change.

    Microsoft, Alphabet, and Amazon occupy an entirely different structural position. These are the landlords of the AI revolution – they own the data centres, the cloud infrastructure, the distribution networks through which AI products reach users. Whether OpenAI succeeds or fails, Azure still gets paid. Whether Anthropic’s models prove transformative or merely adequate, Google Cloud still gets its cut. The platform providers win regardless, in much the same way that the people who sold shovels during the gold rush made rather more reliable returns than the people actually panning for gold.

    (As someone who has spent years advising organisations on technology strategy, this is the bit that makes me simultaneously fascinated and slightly nauseated. The structural dynamics are genuinely elegant from an analytical perspective. They’re also a perfect illustration of how capital markets reward positioning over innovation, infrastructure over invention, and being in the right place over being the right person. Which is, I suppose, a lesson that extends well beyond technology. Techno-feudalism – love it or hate it – means the older generation of technology success stories are not only diversified but are actively gaining revenue from the current bubble, leaving them shrugging whilst stuffing their pockets regardless.)

    the scaling wall: or, why more of the same stops working

    Here is where things get properly interesting, and where the narrative that has sustained several hundred billion dollars of investment starts to develop some rather inconvenient cracks.

    The entire AI revolution – or at least the current chapter of it – has been built on a single, elegant premise: scaling laws. Between 2020 and 2024, the empirical evidence seemed to demonstrate that making models bigger, feeding them more data, and throwing more compute at the training process would produce smooth, predictable improvements in capability. Bigger was better. More was more. The relationship between investment and intelligence appeared to be, if not linear, then at least logarithmically reliable.

    This was a magnificent story for fundraising purposes. If intelligence scales predictably with compute, then intelligence becomes a simple function of capital deployment. Pour in more money, get out more intelligence. The implications were intoxicating: with sufficient investment, artificial general intelligence – whatever that means, and the inconsistency of the nomenclature should itself be a warning sign – was simply a matter of when, not whether.

    The problem, which has been quietly accumulating in the empirical literature since late 2024, is that the scaling laws appear to be hitting a wall.

    Not a temporary wall. Not a ‘we need a bit more data’ wall. A ‘the fundamental architecture of these systems has inherent limitations that more compute cannot overcome’ wall. A problem that is less about how to convince investors to hand over cash and more about simple laws of mathematics that remain inconveniently in place, like the walls of a particularly well-cooled cell.

    Research into the persuasiveness of frontier language models – the very largest, most expensive systems currently in operation – found that they were only marginally more effective than models an order of magnitude smaller. For tasks requiring coherence and consistency, we appear to be approaching what might be called an ‘effective ceiling’ on the returns to simply making transformers bigger. The brute-force approach to intelligence – the one that justified all those data centre investments and all those NVIDIA GPU purchases – is encountering a mathematical reality that no amount of fundraising narrative can negotiate away.

    (This is why I often end up having conversations in the tech space about large-scale LLMs versus smaller SLM designs, which can offer far better capability on focused tasks thanks to narrower intent and a reduced impact of what can colloquially be called ‘context rot’.)

    This is, to use a technical term, a problem.

    It is a problem because the valuations of pure-play AI labs are predicated on the assumption that capability will continue to scale with investment. If that assumption fails – if we’re approaching the point where doubling the compute budget produces a 0.5% improvement in capability rather than a 10% improvement – then the entire financial architecture of the sector needs to be reassessed. The emperor may not be entirely naked, but he’s certainly rather more exposed than the pitch deck suggested.
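The diminishing-returns arithmetic is easy to sketch with an illustrative power law. To be clear, the exponent below is invented purely for the sake of the example, not an empirical fit to any real model family:

```python
# Back-of-envelope illustration of power-law scaling: if loss falls as
# C^(-alpha) with compute C, each doubling of compute buys a smaller
# absolute improvement. alpha = 0.05 is illustrative only.

alpha = 0.05

def relative_loss(compute: float) -> float:
    """Loss relative to a baseline at compute = 1.0."""
    return compute ** -alpha

for doubling in range(6):
    c = 2 ** doubling
    improvement = (1 - relative_loss(c)) * 100  # % below baseline loss
    print(f"{c:>3}x compute -> {improvement:5.2f}% improvement over baseline")
```

Run it and the pattern is the one the pitch decks would rather you didn’t dwell on: the first doubling buys more improvement than the second, the second more than the third, and the marginal return on each extra billion keeps shrinking.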

    the inference pivot: thinking harder because growing bigger stopped working

    The industry’s response to the scaling wall has been characteristically ingenious and characteristically insufficient. Rather than acknowledge that the fundamental approach might have limitations – an admission that would be financially catastrophic for approximately everyone involved – the narrative has pivoted to ‘inference-time compute’. Instead of making the models bigger during training, you give them more time to ‘think’ when answering questions.

    This is represented by systems like OpenAI’s o1 and o3 reasoning models, which are explicitly designed to spend longer processing a query before producing a response. It is, in essence, the AI equivalent of that advice your university tutor gave you about exam technique: ‘spend more time thinking before you start writing’.

    The approach works, to a point. Giving these systems more computational budget at inference time does improve their performance on problem-solving tasks. This is genuine, measurable, and not bullshit.

    What is bullshit – or at least a significant omission in the narrative – is the suggestion that this represents a fundamental breakthrough rather than a shift in where the costs accumulate. Training-time scaling pushed costs into building the model. Inference-time scaling pushes costs into running the model. The total bill doesn’t shrink; it just arrives on a different line of the invoice.

    The implications are significant:

    Higher inference budgets mean higher costs per query, which means the $20/month subscription that currently makes these tools accessible to consumers becomes progressively harder to sustain as the models are asked to do more complex work. There is a reason that enterprise AI pricing looks rather different from consumer pricing, and that reason is mathematics.

    Inference-time compute cannot be scaled exponentially without a corresponding exponential increase in chip production, which operates on manufacturing timelines measured in years rather than the software iteration cycles measured in weeks. You can write new code overnight. You cannot build a new semiconductor fabrication facility overnight, no matter how much venture capital you have or how persuasively you describe the opportunity to potential investors.

    Perhaps most fundamentally, these models do not learn whilst working. Unlike a human professional who accumulates expertise through practice, an AI system running at inference time is essentially spending computational resources to extract the maximum value from its existing training. It can think harder, but it cannot think differently. The ceiling may be higher than with pure training-time scaling, but it is still a ceiling.
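To make the subscription point concrete, here is a back-of-envelope sketch of the economics. Every figure below – token counts, blended per-token cost, usage pattern – is an assumption for illustration, not anyone’s actual price list:

```python
# Rough unit economics of a 'reasoning' model behind a flat subscription.
# All figures are invented for illustration.

subscription = 20.00          # $/month, flat consumer fee
cost_per_1k_tokens = 0.01     # $ blended inference cost (assumed)
tokens_per_query = 8_000      # reasoning models burn tokens 'thinking'
queries_per_day = 30          # a moderately heavy user (assumed)

monthly_cost = (cost_per_1k_tokens * (tokens_per_query / 1000)
                * queries_per_day * 30)
margin = subscription - monthly_cost

print(f"provider cost: ${monthly_cost:.2f}/month, margin: ${margin:+.2f}")
```

Under these (invented) numbers the provider spends $72 to collect $20 from a heavy user, which is why inference-time scaling and flat consumer pricing are on a collision course, and why enterprise pricing looks the way it does.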

    (This is one of those observations that seems obvious once stated but that I’ve found curiously absent from most mainstream technology commentary. The incentive structures of the media ecosystem reward breathless excitement rather than measured analysis, because ‘AI HITS FUNDAMENTAL LIMITS’ generates fewer clicks than ‘AI WILL TRANSFORM EVERYTHING’. I say this without bitterness – it’s simply the mathematical reality of attention economics, and it would be hypocritical of me to complain about market dynamics I’ve spent my career analysing.)

    openai and the art of being too big to fail

    OpenAI presents perhaps the most instructive case study in the dynamics of the current moment, and the leadership of Sam Altman provides a masterclass in what happens when Silicon Valley founder mythology collides with the material constraints of physics and finance.

    Altman’s strategic playbook – informed by his early-career failure with Loopt and his subsequent years running Y Combinator – can be understood as a systematic application of the principle that in venture capital, narrative is as important as reality, and potentially more so. The ‘Loopt loop’, as one might call it, involves rapid iteration, aggressive fundraising, political manoeuvring, and the cultivation of an image of visionary inevitability that makes continued investment feel less like a choice and more like a historical obligation.

    The numbers, viewed dispassionately, tell a story that the narrative works hard to contextualise:

    OpenAI exited 2025 at a $20 billion revenue run rate. This sounds impressive until you learn that the company is reportedly burning through tens of billions annually and is, by its own optimistic forecasts, years away from profitability. The gap between revenue and expenditure is being bridged by a combination of investor capital, Microsoft’s infrastructure subsidy, and what can only be described as faith.

    This faith is not entirely unreasonable – the technology is genuinely transformative in many applications, and first-mover advantages in platform markets can be decisive. The question is whether the faith is proportionate to the valuation, and at $150 billion, one begins to wonder what proportion of the global economy’s problems would need to be solved by large language models to justify the price tag.

    (Rather a lot, is the answer. Rather more than seems plausible when you consider that the core technology still struggles with basic numerical reasoning, confidently produces incorrect information with the serene assurance of a politician, and has yet to demonstrate anything approaching the kind of generalised intelligence that the ‘AGI’ branding implies. Hopes of cancer being solved through the generative discipline of AI remain as unlikely as my dog winning the Nobel Prize – this doesn’t mean my dog is useless, just that her particular skills don’t readily align with solving world conflict.)

    What is more concerning than the valuation itself is the trajectory it implies. OpenAI’s response to the scaling wall has been to expand aggressively into adjacent markets – healthcare, e-commerce, entertainment – whilst simultaneously planning for $1 trillion in data centre investment. The company has ramped its lobbying spending to $3 million in 2025, hired consultants across the political spectrum, and positioned AI as a matter of national security in competition with China.

    This is the ‘too big to fail’ playbook, executed with considerable skill. By intertwining its fate with national interests and government contracts – including a $200 million Department of Defence deal – OpenAI is building a political architecture that makes its continued funding feel like a strategic necessity rather than a commercial decision. The Loopt loop has been extended from ‘iterate and fundraise’ to ‘iterate, fundraise, and make yourself politically indispensable’.

    Brilliant. Absolutely fucking brilliant.

    Whether it’s sustainable is an entirely different question, and one that the political architecture is specifically designed to make it feel unpatriotic to ask.

    anthropic: the quiet bet on being useful rather than revolutionary

    Anthropic’s strategic positioning provides an instructive contrast to OpenAI’s maximalist approach, and – I should declare an interest here, as someone who works extensively with AI tools and has opinions about how they should be built – it appears to be the more intellectually honest of the two strategies, for whatever that’s worth in a market that rewards narrative over nuance.

    Where OpenAI pursues broad artificial general intelligence as both a technical goal and a fundraising story, Anthropic has pivoted toward enterprise-grade ‘agentic’ AI – systems designed to be governed, audited, and trusted at scale within specific industries. The ‘Claude for Healthcare’ initiative and its focus on practical cognitive partnership represent a recognition that the ‘pilot era’ is over, and that the market is shifting from ‘isn’t this impressive’ to ‘does this actually work, and can you prove it, and what happens when it doesn’t’.

    This is a narrower vision, to be sure. It lacks the messianic grandeur of ‘we’re building God’ that characterises certain Silicon Valley narratives with a fervour that would make a revivalist preacher envious. What it offers instead is the considerably less dramatic but potentially more durable proposition of ‘we’re building tools that do specific things reliably, in contexts where reliability actually matters’.

    The strategic logic is sound: by targeting vertical AI sectors where domain expertise and proprietary data create defensible positions, Anthropic may avoid what I’ll call the General-Purpose Bullshit Trap – the tendency of broad capability claims to dissolve upon contact with the messy specificity of real-world problems. Healthcare, with its stringent regulatory requirements and genuine life-or-death stakes, is a domain where ‘approximately right most of the time’ is not an acceptable performance standard, and where the ability to be governed and audited is not a nice-to-have but a fundamental requirement.

    (This reminds me of the old saying ‘operation successful, patient dead’ which, whilst having some benefit for the surgeon’s ongoing training, offers scant consolation to the relatives of the newly deceased. In matters of life or death, being right is non-negotiable and best kept away from probabilistic, technology-based roulette wheels.)

    Whether this strategy generates the kind of returns that justify Anthropic’s own substantial valuation is another matter entirely, and one that depends on assumptions about market size and willingness to pay that I suspect are rather more uncertain than the pitch materials suggest.

    microsoft: the landlord always wins

    If you want to understand why Microsoft’s position in the AI landscape is structurally almost unassailable, consider the following thought experiment.

    Imagine that tomorrow, every AI startup simultaneously discovered that large language models were a dead end. That the technology had fundamental limitations that could not be overcome. That the entire sector’s investment thesis was built on a misunderstanding of what these systems could achieve.

    Microsoft would be fine.

    Not ‘fine’ in the sense of ‘slightly disappointed’. Fine in the sense of ‘still generating $50 billion per quarter from cloud services, productivity software, and an enterprise ecosystem that is more deeply embedded in global business operations than any other technology platform in human history’.

    This is the structural advantage that pure-play labs cannot replicate and that, candidly, most commentary about the AI ‘race’ systematically underestimates. Microsoft doesn’t need AI to work for its business model to function. AI is a growth accelerator for a company that was already growing at a rate most organisations would consider spectacular.

    Yes, Copilot is anaemic in comparison to the front runners, but Azure generated over $75 billion in revenue in fiscal 2025–2026, and the AI component of that growth – whilst significant – sits within a diversified revenue base that includes productivity software, enterprise services, gaming, and cloud infrastructure.

    The ‘Foundry’ approach – offering access to over 11,000 models from multiple providers including OpenAI, Meta, and DeepSeek – is particularly instructive. Microsoft has effectively commoditised the model layer, positioning itself as the platform through which AI is consumed regardless of which model provider succeeds. If OpenAI wins, Microsoft wins through its investment and Azure integration. If Anthropic wins, Microsoft wins through Foundry access. If some as-yet-unknown competitor emerges, Microsoft wins through platform distribution.

    It is the infrastructure play in its purest form: own the pipes, and it doesn’t much matter what flows through them.

    (As someone who has spent years advising organisations on technology strategy, I find this position simultaneously admirable from an analytical perspective and slightly depressing from a ‘wouldn’t it be nice if the most innovative companies captured the most value’ perspective. The market rewards structural advantage over technical brilliance with a consistency that should trouble anyone who believes in meritocracy, but which will surprise precisely nobody who has observed capital markets for more than a calendar quarter. The world may crave equality, but reality is quick to illustrate the naïveté of believing in anything beyond equality of opportunity.)

    google: the silent predator with the hardware moat

    If Microsoft wins through diversification, Alphabet wins through something potentially more formidable: vertical integration across the entire AI stack, from custom silicon to proprietary data to fundamental research.

    The part of this story that receives insufficient attention – largely because chip design is less narratively exciting than chatbots – is Google’s Tensor Processing Unit programme. Whilst the rest of the industry pays what I’ll call the ‘NVIDIA Tax’ – the premium for hardware that NVIDIA can price aggressively due to its near-monopoly on AI training accelerators – Google has been quietly building its own custom silicon for the better part of a decade.

    The economics are significant. The TPU v6e offers roughly four times better performance per dollar compared to NVIDIA’s H100 for large language model training and high-volume inference. Midjourney’s migration from NVIDIA clusters to TPU v6e resulted in an annualised saving of $16.8 million, which is the kind of number that makes CFOs sit up rather sharply. When you’re operating at the scale of Google’s AI ambitions, the unit-cost advantage of custom silicon compared to buying NVIDIA’s margin-rich products represents a structural competitive advantage that compounds over time.
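To see what ‘four times better performance per dollar’ means in budget terms, a quick sketch helps. The 4× ratio is the claim cited above; the annual spend figure is invented for illustration:

```python
# What a 4x performance-per-dollar advantage does to a fixed training
# budget. The ratio comes from the cited TPU v6e vs H100 claim; the
# dollar figure is invented for illustration.

perf_per_dollar_gpu = 1.0     # normalise the NVIDIA H100 to 1.0
perf_per_dollar_tpu = 4.0     # claimed TPU v6e advantage

annual_compute_spend = 25_000_000  # $ per year on GPU clusters (assumed)

# Spend needed on TPUs to do the same work, and the resulting saving
tpu_spend_for_same_work = annual_compute_spend / perf_per_dollar_tpu
saving = annual_compute_spend - tpu_spend_for_same_work

print(f"same workload on TPU: ${tpu_spend_for_same_work:,.0f} "
      f"(${saving:,.0f} saved per year)")
```

On those assumptions a $25 million GPU bill becomes a $6.25 million TPU bill – savings in the same general territory as the Midjourney figure quoted above, which is why the ‘NVIDIA Tax’ framing has teeth.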

    (In a secondary context, Apple – whilst doing the equivalent of a somersault off a diving board into an empty swimming pool – is also advantaged by diversification, and has quietly started to shift from OpenAI toward Gemini.)

    The second moat – and this one is harder for competitors to replicate than anything involving silicon – is data.

    Google has spent twenty-five years indexing the world’s information, processing search queries that reveal human intent at a scale no other entity can match, and – through YouTube – accumulating the largest repository of video data on Earth. Sam Altman may have used AI to hoover up Reddit as a gargantuan and growing data source, but in an era where AI models are trained on data and where the public web has been largely exhausted as a training resource, Google’s proprietary data assets represent a differentiation that cannot be replicated through engineering talent or capital deployment alone.

    You can build a better chip with enough investment. You cannot retrospectively accumulate twenty-five years of search click-stream data and two decades of video uploads. The data moat is temporal as well as structural, and this is the kind of advantage that should keep competitors awake at night rather more than it appears to.

    Then there is Waymo, which sits in the peculiar category of ‘moonshots that are quietly becoming real businesses’. Developing autonomous driving systems requires a decade-long commitment to data collection, safety validation, and regulatory navigation that few organisations have the patience or the balance sheet to sustain. Google has both, and the resulting dataset represents a physical-world intelligence moat that extends the company’s advantages beyond the purely digital.

    the great decoupling: or, the moment reality reasserts itself

    What we’re witnessing in 2026 is the beginning of what might be called the Great Decoupling – the divergence between the narrative of universal AI transformation and the material reality of who actually captures value from these technologies.

    The initial phase of the AI hype cycle treated the sector as a monolith. ‘AI is the future’ implied that all participants in the AI ecosystem would benefit from the rising tide. Venture capital flowed with democratic enthusiasm into startups, labs, infrastructure providers, and application developers, driven by a FOMO so intense it had practically achieved sentience of its own.

    The correction, which is now underway, involves the market developing opinions about which participants will actually capture durable value and which are destined to become cautionary tales in future business school case studies. The ‘decade-high down rounds’ now appearing in startup fundraising data represent the market beginning to distinguish between genuine innovation and ‘AI-washing’ – the superficial application of AI branding to products that are, at their core, doing what they always did, just with a more fashionable vocabulary.

    (The parallels with the ‘digital transformation’ hype of the previous decade are almost painfully exact. Replace ‘AI-powered’ with ‘digitally transformed’ and you have approximately the same phenomenon: organisations spending significant sums to rebrand existing capabilities in the language of the moment, creating a magnificent illusion of progress that evaporates upon contact with the question ‘what specifically has changed?’)

    The winners emerging from this decoupling share three characteristics:

    Infrastructure ownership. Dominance in the physical layer – chips, data centres, networking – provides insulation against volatility in the model and application layers. You can swap models; you cannot easily swap data centres. Organisations citing multi-cloud resilience eventually discover, in less than ideal circumstances, that code doesn’t up sticks from Azure to AWS without massive egress charges, downtime, and a complete refactor of the code base.

    Data moats. Access to exclusive, high-quality datasets that cannot be replicated through web scraping or synthetic generation. These moats are self-reinforcing: the more users interact with your platform, the more data you accumulate, the better your models become, the more users you attract. It’s a virtuous cycle, or a monopolistic feedback loop, depending on one’s perspective and tolerance for market concentration. If you’re Google, it’s the former. If you’re OpenAI, it’s the wall that prevents you from beating Google even if you succeed.

    Integrated distribution. The ability to embed AI capabilities within existing, high-margin products that millions of people already use daily. Copilot in Office 365 doesn’t require users to adopt a new platform or change their workflow; it simply appears within the tools they’re already using, with all the gentle inevitability of moss growing on a north-facing wall. It may be the worst of all current AI tools, but we’ve seen before that Microsoft can be far from the best and still be ubiquitous.

    Microsoft and Google possess all three characteristics in abundance. OpenAI and Anthropic possess none of them independently, which is why their long-term survival depends on either building them, buying them, or maintaining partnerships with entities that have them – partnerships in which the leverage increasingly tilts toward the infrastructure owners.

    the geopolitical dimension: or, when your startup becomes a national security asset

    The global adoption data from late 2025 reveals a dimension of the AI story that most technology commentary treats as peripheral but which is, in structural terms, rather more significant than the latest model benchmark: AI is becoming a geopolitical asset, and the countries that have invested early in digital infrastructure and institutional adoption are pulling ahead with a momentum that will be extremely difficult to reverse.

    China, for example, is spending heavily on bringing AI into military contexts, with robotics innovations that could see robot soldiers and drones at the front line – not necessarily operating as combatants but, at the very least, as support for the challenge of feeding supply lines in theatre.

    The UAE leads global adoption at 64%, followed by Singapore at 61%, with South Korea experiencing an 80% increase in adoption since late 2024. These figures represent not just consumer enthusiasm but institutional commitment – government services, educational systems, healthcare infrastructure built around AI capabilities that are rapidly becoming expectations rather than novelties.

    This geopolitical reality is what makes the Sam Altman political turn both strategically rational and profoundly unsettling. By framing AI as a ‘technology race against China’ and securing government contracts and political alliances, Altman is executing a play that extends well beyond commercial strategy into the domain of national industrial policy. The $200 million Department of Defence contract is not primarily about revenue; it’s about making OpenAI’s continued funding feel like a matter of national interest rather than a commercial investment decision.

    This is sophisticated statecraft disguised as corporate strategy, and it represents a structural shift in how frontier technology companies relate to government power that deserves rather more scrutiny than it currently receives. When a private company’s commercial interests become intertwined with national security narratives, the normal mechanisms of market discipline – the possibility of failure, the requirement for profitability, the accountability to customers rather than taxpayers – tend to weaken with a speed that should concern anyone who believes that markets function best when participants can actually fail.

    Recent weeks have added a further complication: IT services organisations are seeing their prices crash as a hype-driven consensus implies AI is about to make human consulting a thing of the past. The complexity spans multiple domains – the crash of large company share prices through the indirect impact of AI hope creates market-wide damage, while the increasing concentration of revenue in hyped companies creates the potential for an economic crash should OpenAI or Anthropic implode, taking a massive amount of investor hope with it.

    the make-or-break window: 2028–2032

    For pure-play AI laboratories, the period between 2028 and 2032 represents what analysts are increasingly describing as a ‘crucial make-or-break window’. The scaling wall, the inference-cost challenge, and the structural advantages of diversified incumbents create a narrowing corridor through which companies like OpenAI and Anthropic must navigate to achieve long-term viability.

    Survival likely requires three things happening simultaneously:

    First, a successful pivot from general-purpose models to vertical AI – specialised tools with demonstrable return on investment in specific industries like healthcare, legal services, or biopharma. The era of ‘AI that does everything, sort of’ is ending; the era of ‘AI that does this specific thing very well, provably, with auditable results’ is beginning. This pivot requires domain expertise that most AI labs don’t currently possess, and acquiring it costs time and money that the scaling wall is making increasingly scarce. Narrow AI is far from a new thing – it is what most of us who have been in the technology industry have cited as an indicator of success far more often than generalised intelligence. Having tools that take care of systems through simple, deterministic machine learning models offers far more value than flawed transformer architectures – one need only look at tools as diverse as the Apple Watch or an automated observability platform to see that machine learning is a far stronger bet than generative AI.

    Second, a dramatic improvement in inference efficiency that makes the per-query economics of AI services genuinely profitable at consumer price points. This is, fundamentally, a hardware problem as much as a software problem, and it’s a hardware problem that currently has ‘NVIDIA’ written on the bottleneck in rather large letters. Obviously, the closer we get to the brick wall, the more nervous Jensen Huang is likely to be about a stock price collapse once the market realises that we can’t simply do more.

    Third, the successful execution of the political strategy – deep institutional alliances that ensure continued access to the capital, power (in the literal, electrical sense), and regulatory accommodation that frontier AI development requires. This is the Altman playbook, and its success depends on political conditions that are, by their nature, less predictable than engineering timelines.

    Any one of these challenges would be formidable. All three simultaneously, against competitors with deeper pockets, broader distribution, and more diversified revenue bases, represents a degree of difficulty that should give pause to anyone pricing these companies as though success were inevitable.

    the uncomfortable conclusion: or, what happens when the tide goes out

    The philosopher Harry Frankfurt drew a useful distinction between lying and bullshit. A liar knows the truth and deliberately misrepresents it. A bullshitter is indifferent to truth altogether – the relationship between their statements and reality is, at best, coincidental. The distinction matters because it suggests that much of what passes for AI industry discourse isn’t deliberately deceptive so much as it is fundamentally unconcerned with whether its claims correspond to anything real.

    The scaling laws will continue to improve AI capability, albeit at diminishing rates. The technology will find genuine applications that create genuine value. The infrastructure built during this era of excess will serve as the foundation for the next wave of innovation, as it always does.

    What will not survive is the narrative – the story that intelligence scales predictably with investment, that artificial general intelligence is a few more billion dollars of compute away, that the companies promising transformation are all equally likely to deliver it, and that the market dynamics of frontier technology somehow exempt these companies from the ordinary constraints of physics, finance, and human organisational capacity.

    The companies that will win are those that combine genuine technical capability with structural advantages in infrastructure, data, and distribution. Microsoft, with its diversified revenue base and commodity-model platform strategy. Google, with its vertical hardware-to-data integration and unit-cost advantages in custom silicon. Perhaps Amazon, with its AWS dominance for tools like Bedrock and Q, combined with decades of logistics data. Ultimately, the landlords, the infrastructure owners, the companies that get paid regardless of which application-layer narrative proves correct.

    The companies that face existential risk are those whose entire proposition depends on the narrative continuing to hold – on scaling laws continuing to scale, on capital markets continuing to fund unprofitable growth, on political alliances continuing to provide the cover that commercial performance cannot. With political trouble growing in the US, and its president continuing with reckless abandon to rewrite economics as something purely governed by tariffs, convincing Trump alone may be insufficient to maintain the narrative. America may end up beaten by China not through some Far East innovation but through its own companies absorbing modern narcissistic marketing and engineering their own downfall.

    This is not a prediction of failure. It is an observation about structural risk, and about the difference between a business proposition and a fundraising narrative.

    As someone who has spent twenty-five years in technology consulting – someone whose actual job involves helping organisations distinguish between what technology can genuinely do and what someone with a pitch deck says it can do – I find the current moment simultaneously fascinating and slightly terrifying. Fascinating because the underlying systems analysis is genuinely complex and intellectually rewarding – navigating this territory is literally why my day rate is what it is. Terrifying because the capital deployed against these assumptions is of a scale where being wrong has consequences that extend well beyond the venture capital ecosystem into pension funds, sovereign wealth, and the broader financial system.

    The tide will go out, as it always does. What matters is whether we’ve built something real whilst it was in, or merely arranged the deckchairs in a particularly impressive formation on an increasingly exposed beach.

    (Rather more the latter than the former, I suspect. Though I’ve been wrong before, and the recursive nature of being someone who analyses systems for a living means I’m perpetually aware that my own analysis might itself be a form of pattern-matching that mistakes correlation for insight. The snake eats its tail. Welcome to my brain.)

  • The Curse of Competence: Why Excellence Makes You a Hostage to Your Own Skills

    Let’s talk about probably the most perverse reward system ever devised outside of experimental psychology labs: the modern workplace’s response to demonstrated competence.

    It goes something like this:

    You solve a problem effectively.

    People notice.

    They bring you more similar problems.

    You solve those too. Congratulations! You’ve now been rewarded with a permanent problem-solving role that will follow you like a particularly clingy ghost through the remainder of your professional existence. I hope you enjoyed whatever it was you were doing before!

    Welcome to the Curse of Competence – that strange phenomenon whereby doing something well once guarantees you’ll be doing it repeatedly until either your skills deteriorate from soul-crushing boredom or you fake your own death and restart your career under an assumed identity in a different industry.

    The Competence Trap: Hotel California for Skills

    The competence trap functions with the elegant simplicity of a particularly well-designed Venus flytrap. The initial experience is quite pleasant – recognition! appreciation! the warm glow of being needed! – right until the moment you realise you’re now permanently stuck doing that one thing you happened to be good at during that meeting in 2019.

    “But surely,” I hear you protest, “organisations would want to develop their talented people? Move them around to leverage their abilities? Create growth paths that capitalise on demonstrated excellence?”

    Oh, my sweet summer child. That would require both forward thinking and the willingness to temporarily sacrifice immediate efficiency for long-term gain – two qualities approximately as common in corporate environments as unicorns who are also certified public accountants. (Why think of the future when you have next quarter breathing down your neck!)

    The reality operates on a principle I’ll call Organisational Path Dependence: once you become known as “the Excel person” or “the one who can calm down Client X” or “the presentation wizard,” that identity becomes fixed in the corporate hivemind with a permanence that ancient Egyptian stonemasons would envy. (The pyramids may be magnificent, but I’m sure Sarah has been doing that trick with the finance software for as long as it took the slaves – I mean aliens – to build them.)

    This phenomenon creates magnificent absurdities like:

    – The senior developer still fixing basic code because they were good at it as a junior five years ago

    – The marketing director still writing all the copy because once, in 2015, they composed a particularly effective email

    – The finance executive who can’t escape quarterly planning because they created a spectacular spreadsheet during the Obama administration

    Each trapped in their own personal Groundhog Day of competence, doomed to repeat their past excellence in perpetuity while watching less capable colleagues fail their way upward with spectacular regularity. (It’s amazing how there’s a waterline where you fall upwards once you get into the management realm, whilst the mere plebs of the world huddle around metaphorical fires worrying about the 675 metrics they have to hit just to keep doing the actual fucking work.)

    The Reward for Carrying Water: A Bigger Bucket

    The corporate response to demonstrated capability follows a pattern so predictable it should be taught in business schools under the probably-more-honest-than-most-bootcamps “How to Systematically Burn Out Your Best People 101.”

    Step 1: Identify person who executes Task X effectively

    Step 2: Give person more of Task X

    Step 3: When they handle that well, add even more Task X

    Step 4: Express confusion when person becomes increasingly desperate to never see Task X again and/or goes off sick citing mental burnout

    This system operates with the precision of a Swiss watch designed by particularly sadistic engineers. Its elegance lies in how it masquerades as recognition while functioning as punishment. “You’re so good at this!” translates directly to “You’ll never escape doing this!” – a sort of Sisyphean life where rocks and infinite hills got replaced with the mind-numbing shuffling of digital detritus in a tastefully styled office with seemingly unironic motivational quotes. It’s up to you which is worse (I’ve always liked rocks).

    In that sense, the empathy on show is rather like responding to someone who swims well by throwing them into progressively deeper bodies of water while adding increasingly heavy weights to their ankles.

    “But you’re so good at not drowning, Hannah! We’re just creating opportunities for you to further develop this clearly demonstrated capability!”

    What makes this particularly diabolical is how it’s presented as a compliment. “We keep giving you these projects because you’re so good at them!” they say, nodding earnestly, as though permanently consigning you to the same repetitive task is a recognition of your value rather than an exploitation of your reliability. Meanwhile, those who don’t have any obvious skills spend at least 75% of their time practicing their acceptance speech for the falling-upward promotion trajectory that invariably awaits. (That’s because the generally accepted way to deal with awful leaders is by throwing them somewhere else in the hope that maybe that person might have a semblance of a backbone, and the ability to have an uncomfortable conversation rather than palming them off because their current manager has neither.)

    The effective people, by comparison? Well, the reward for carrying water is, inevitably, a bigger bucket – and a PIP if they fail to carry the bucket that may or may not now contain the entirety of Earth’s water supply.

    The Competence/Growth Inversion Principle

    Behold the magnificent irony at the heart of professional development: the relationship between demonstrated competence and actual career growth typically exhibits a strong negative correlation.

    I call this the Competence/Growth Inversion Principle, and it works like this:

    – The more crucial your current contribution, the less the organisation can “afford” to move you. (In the corporate world, why would we want to move people out of roles that get stuff done, when it might mean having to think about succession planning, increased competition at the next level of the hierarchy, or pulling one’s thumb out of one’s backside?)

    – The more reliably you solve certain problems, the more tightly you become identified with those problems (so you’ve worked out how to use functions beyond =SUM? You’re the Excel “guru” now – no, there’s no pay rise).

    – The more irreplaceable you become in a specific function, the less likely you are to escape it (it’s like a black hole has appeared in space yet rather than being able to observe light falling into the abyss, it’s your career prospects disappearing over the event horizon).

    Meanwhile, observe the person who is mediocre at multiple things rather than excellent at one thing. They often advance with puzzling speed, largely because:

    1. They’re never quite good enough at any one thing to become indispensable in that role

    2. Their consistent mediocrity creates no specific attachment to any particular function, so they are always ready to go (mostly to shit, but in a way that allows them to tell management what they want to hear, abstracted from whatever reality actually is)

    3. Their broad but shallow exposure creates the illusion of versatility

    4. Nobody fights to keep them in their current role because nobody particularly values what they’re currently doing

    This creates the magnificent spectacle of organisational advancement functioning almost as natural selection for a particular type of non-excellence – not outright incompetence (though that certainly happens), but rather the careful cultivation of being just good enough at many things to avoid the curse of being excellent at one thing. (In that sense, it’s a skill – but probably not the sort of skill we should be lauding if we’re being honest).

    The competence trap thus creates a perverse incentive structure where rational career actors might deliberately avoid demonstrating too much excellence in any single domain lest they become permanently associated with it. Is that what a company should look like?

    The Specialist’s Lament

    For those caught in the competence trap, work often devolves into a peculiar form of specialised repetition that feels less like career development and more like being a particularly well-educated hamster on a wheel.

    I recently spoke with a mid-career professional – let’s call her Grace – who made the career-limiting mistake of creating an exceptional PowerPoint presentation in 2018. This singular event, which lasted approximately 45 minutes, has somehow defined her professional identity for the seven years since.

    “I have two degrees and fifteen years of experience,” she told me with the thousand-yard stare of someone who has created one too many slide transitions, “but I’m now introduced in meetings as ‘our PowerPoint person’ like I’m some sort of sentient template. I’ve debated changing my surname to PPTX-Smythe.”

    Another victim of the competence trap – we’ll call him Marcus – described being “the data guy” despite having originally been hired as a strategic planner with significant decision-making responsibility.

    “I made one particularly good PowerBI dashboard during my first month,” he explained, “and now I haven’t been invited to a strategy meeting in three years. Meanwhile, I’ve watched three consecutive bosses implement catastrophically bad strategic decisions that I could have helped prevent, but apparently, my only role now is to create colourful visualisations of the resulting disasters.”

    The specialist’s lament echoes across industries and functions: “I am so much more than this one skill, and yet this one skill has somehow become my entire professional identity.”

    The Three Deadly Career Virtues

    Particularly prone to the competence trap are those who exhibit what I’ll call the Three Deadly Career Virtues: reliability, efficiency, and conflict avoidance.

    These seemingly positive attributes combine to create the perfect victim profile:

    1. Reliability ensures you’ll get the job done without requiring management attention, making you the path of least resistance for similar future tasks

    2. Efficiency means you can handle increasing volumes of the same work, creating the illusion that this arrangement is sustainable (I mean why wouldn’t it be given companies operate under the idea of continuous, infinite growth as if that’s really a thing)

    3. Conflict avoidance makes you less likely to push back when your role becomes increasingly narrowed to your area of demonstrated competence

    Together, these virtues create what appears from the outside to be the ideal employee but is actually a person being slowly entombed in their own capabilities like a museum exhibit: “Here we have a perfectly preserved specimen of an Excel wizard in their natural habitat. Note how they continue to pivot tables despite their growing despair.”

    In short, the exploitable get exploited. It’s a tale as old as time, but without the whimsy of listening to a song about Beauty and the Beast (bite me, I’m a Disney fan).

    These qualities typically combine with a work ethic instilled since childhood that makes refusing tasks feel morally wrong, creating the perfect conditions for indefinite exploitation of specific skills at the expense of broader development. (The reward for childhood trauma that likely made you a people pleaser to mitigate anger? Some adult trauma, delivered digitally via the Microsoft office suite.)

    The Double-Bind of Demonstrated Expertise

    Those caught in the competence trap face a particularly cruel double-bind when they attempt to escape:

    Scenario 1: Do you continue demonstrating excellence in your pigeonholed role, further cementing your association with it while watching growth opportunities go to others?

    Scenario 2: Or do you deliberately perform worse in hopes of being released from your specialisation, thereby risking your professional reputation and potentially confirming the organisation’s unspoken belief that you’re only good for this one thing anyway? (PIPs are available for those below the “safe” watermark – those who have to comply with the metrics rather than those who operate them.)

    Neither option offers a particularly appealing path forward. It’s rather like being asked whether you’d prefer to be slowly suffocated by a pillow or a duvet – the instrument differs but the outcome remains distressingly similar.

    This double-bind often leads to the most reliable people in organisations quietly updating their LinkedIn profiles at 11pm while sighing heavily into their third glass of wine. (Or, in my case, manipulating my psychology by engaging hyperfocus simply by waiting till the last second before I have so much adrenaline and cortisol in my system, there’s approximately zero chance I’m going to be sleeping).

    The only apparent escape routes involve:

    1. Leaving the organisation entirely (the “corporate witness protection program” approach)

    2. Finding a sponsor powerful enough to override the organisational imperative to keep you exactly where you’re “most valuable”

    3. Developing such a spectacular new skill that it overshadows your existing competence trap (approximately as likely as teaching your cat to prepare your taxes, but if you can say AI in every other sentence, you may have a shot)

    The “Go-To Person” Paradox

    Perhaps the most insidious aspect of the competence trap is how it’s disguised as a compliment. Being the “go-to person” for anything sounds like recognition rather than the professional equivalent of being sentenced to repeat the same year of school indefinitely.

    “Sarah’s our go-to for client presentations” sounds like praise until you realise Sarah hasn’t done anything except client presentations since the iPhone 7 was cutting-edge technology.

    “We always rely on Dave for the monthly reporting” seems like an acknowledgment of Dave’s value until you notice Dave gazing longingly out the window every 30th of the month like a prisoner marking days on a cell wall, grappling with an Excel spreadsheet so large and creaky that it might masquerade as a haunted house on the weekend.

    Being the “go-to person” is less an honour and more a subtle form of organisational typecasting – one where you’re permanently cast as “Person Who Does That One Thing” in the ongoing corporate production of “Tasks Nobody Else Wants To Learn and How We Found Suckers To Do Them.”

    The Organisational Amnesia Phenomenon

    Compounding the competence trap is what I call Organisational Amnesia – the curious inability of workplaces to remember anything about you except the specific skill for which you’ve become known.

    You may have:

    – Published thought leadership in your industry

    – Successfully led cross-functional projects

    – Developed innovative approaches to longstanding problems

    – Demonstrated exceptional leadership qualities

    – Acquired three new languages and the ability to communicate telepathically with squirrels

    Please be aware that, much like that stock market advice you once received, past performance does not indicate any likelihood of similar future success.

    Instead, in planning meetings, you’ll still be referred to as “Morgan from accounting who does the thing with the spreadsheets.”

    This selective institutional memory creates situations where highly capable individuals with diverse skills and interests become one-dimensional caricatures in the organisational narrative – reduced to a single function like characters in a particularly lazy sitcom that runs for seventeen years with no sign of stopping, serving as escapism for the masses who can say “hey my life is bad, but I’m not as bad as Seymour from Uncomfortable Conclusions”.

    The Competence Escape Velocity Theory

    For those determined to break free of the competence trap, I propose the Competence Escape Velocity Theory, which states that escaping your pigeonhole requires simultaneously:

    1. Building a coalition of influential advocates who see your broader potential

    2. Secretly training replacements who can take over your current responsibilities (extra points if AI does it – management loves that stuff, i.e. less spending on people who might complain)

    3. Creating visible wins in areas unrelated to your competence trap

    4. Developing a reputation for something – anything – other than your current specialisation (perhaps not soiling oneself at the Christmas party – keep some standards)

    5. Being willing to risk the identity security of being “the person who does X well”

    This multi-pronged approach represents your best chance of achieving escape velocity from the gravitational pull of your own competence – a manoeuvre approximately as complex as launching a rocket while simultaneously convincing mission control that you’re actually still on the launchpad.

    The difficulty explains why so many choose the simpler option: updating their CV and finding an organisation where they haven’t yet revealed their particular talents, creating a brief window of opportunity before the whole cycle begins again.

    The Mediocrity Advantage

    This analysis reveals a counterintuitive truth: there are significant professional advantages to strategic mediocrity – or at least to the careful management of where and when you demonstrate exceptional capability.

    The truly savvy career operator maintains a carefully calibrated performance level:

    – Good enough to be considered valuable

    – Not so good as to become indispensable in any one function

    – Visibly competent at politically advantageous skills

    – Carefully average at career-limiting responsibilities

    This calculated approach to skill demonstration represents a sophisticated response to organisational incentive structures that routinely punish excellence with more of the same work rather than growth opportunities. The sad reality is that this is ultimately bullshit of the highest order – and something that needs to be addressed at a broader level.

    After all, it’s not that organisations consciously design systems to reward mediocrity and punish excellence – it’s simply the emergent property of prioritising short-term efficiency over long-term development, immediate needs over strategic talent deployment, and the path of least resistance over optimal resource allocation. Who’d have thought focusing solely on the next thing – be that a quarter, a task, or fixing a catastrophe – might have such a significant impact?

    Beyond the Competence Ghetto

    So is there an alternative to this dysfunctional system? Perhaps. But it requires organisations to fundamentally reconsider their approach to talent development and individuals to strategically manage their skill demonstrations.

    For organisations, escaping this trap means:

    1. Creating systematic rotation programs that prioritise development of people over short-term efficiency

    2. Rewarding knowledge transfer rather than exclusive ownership of capabilities – which creates structural problems for both the business and the poor souls who get trapped

    3. Explicitly valuing versatility alongside specialisation

    4. Building redundancy for critical skills rather than relying on individual “heroes”

    5. Measuring managers on their team members’ growth rather than merely their output

    For individuals navigating existing systems, survival strategies include:

    1. Deliberately cultivating multiple, visible areas of competence to avoid single-skill typecasting

    2. Strategically training others in your “special skills” to reduce your uniqueness

    3. Explicitly negotiating skill deployment and development pathways before demonstrating new capabilities

    4. Creating alternative identity markers in the organisation beyond your functional skills

    5. Recognising when the only escape route might be the exit door – sadly, sometimes it becomes the only option if your organisation isn’t willing or able to change.

    The Uncomfortable Conclusion

    The competence trap reveals an uncomfortable truth: organisations frequently talk about developing talent while implementing systems that systematically prevent it. The gap between rhetoric and reality creates the professional equivalent of quicksand – the harder you work to prove your value, the more firmly you become stuck in a narrowing role.

    It may not be a conscious decision handed down by some evil corporate mind, but its impact on the wellbeing of staff – and the associated headaches created by the mental gymnastics required to cope – creates problems that are both human and financial.

    Perhaps the final irony is that recognising this dynamic represents its own form of competence – one that, if demonstrated too visibly, might land you permanently in the “organisational development” role where you can spend the remainder of your career explaining this phenomenon to others without actually being able to escape it yourself. (I fear I may have fallen into this hole by writing articles, but, hey, at least I may have a future in some form of corporate stand-up).

    The true meta-skill, then, might be learning exactly when to display competence, when to conceal it, and when to decide that an environment incapable of appropriately developing talent deserves neither your excellence nor your loyalty – being good at things you don’t want to do probably isn’t the route forward if you want to do bigger and better things.

    In that sense, the most valuable skill in navigating modern organisations might not be any particular technical capability but rather the wisdom to recognise when your competence is being weaponised against your own development – and the courage to seek environments where excellence is a pathway rather than a prison.

    The original copy of this article was published on my personal LinkedIn on April 25th, 2025. You can find the original link here: https://www.linkedin.com/pulse/curse-competence-why-excellence-makes-you-hostage-your-turvey-frsa-kympe/?trackingId=XmAsaUBPQQGJHP0dYoGpKw%3D%3D

  • The Power Paradox: Why Those Most Eager to Lead Should Probably Be Locked in the Office Supplies Cupboard

    Let’s discuss a serious issue that has plagued human societies since approximately fifteen minutes after we climbed down from the trees and someone declared themselves “Chief Banana Distributor” – namely, that the people most desperate to be in charge are precisely the ones who should be kept as far away from power as humanly possible, preferably in a soundproof room lined with pictures of kittens and motivational posters about ‘synergy’ so they can at least feel at home.

    Such a reflection, whilst possibly exaggerated for effect, isn’t merely a cynical observation on my part – one need only look around at the liberal sprinkling of self-styled “hard men” in our contemporary political environment.

    It’s a structural problem that manifests with the reliability of a British train cancellation announcement – predictable, depressing, and somehow still surprising when it actually happens. (Depressing might not be the case for everyone, as my right-hand man at work actually likes cancellations – on the proviso that he gets a decent refund. Bless you, Marrows).

    Consider the psychological profile of your average power-seeker: the person who looks at a leadership position and thinks, “Yes, what the world desperately needs is ME telling everyone else what to do.”

    This individual – and I’m sure you’ve met a few like I have – typically possesses the exact cocktail of traits you’d want to avoid in someone making consequential decisions: unshakeable self-belief detached from actual competence, a conviction that complex problems have simple solutions they alone can see, and an ego so robust it could survive a direct nuclear strike.

    Meanwhile, the person who might actually make a decent leader – thoughtful, self-aware, cognisant of their limitations, capable of balancing competing perspectives – is often found desperately trying to avoid being nominated for the role whilst muttering something about “just wanting to get on with some actual work.”

    What we’ve got here is a classic selection problem that would make Darwin reach for a stiff drink. Don’t worry, me old mucker Charlie – we’ve got some ideas!

    The Douglas Adams Rule of Leadership

    The late, great Douglas Adams perfectly captured the paradox of leadership in “The Hitchhiker’s Guide to the Galaxy” when he wrote:

    “The major problem – one of the major problems, for there are several – one of the many major problems with governing people is that of whom you get to do it; or rather of who manages to get people to let them do it to them. To summarize: it is a well-known fact that those people who most want to rule people are, ipso facto, those least suited to do it. To summarize the summary: anyone who is capable of getting themselves made President should on no account be allowed to do the job.”

    This isn’t just witty science fiction (though it is, of course, that too) – it’s practically a mathematical theorem that plays out with depressing regularity across organisations, from corporate boardrooms to parish councils to national governments. No locale is safe – Vogon-inhabited or not.

    Sadly, the desire for power often correlates inversely with the wisdom to wield it responsibly. Those most attracted to leadership positions tend to be those most enamoured with the trappings and status rather than the actual responsibility of stewarding an organisation or community through difficulty and uncertainty.

    The Confidence/Competence Inversion

    I’ve spent enough time in corporate environments to witness what I’ll call the Confidence/Competence Inversion Principle: the relationship between someone’s certainty about their capabilities and their actual abilities often bears an unfortunate negative correlation.

    You know ThatGuy™. I talked about them briefly a few weeks ago in one of my recent articles.

    They’re the one who speaks first, loudest, and with unwavering certainty about topics they discovered approximately 37 minutes before the meeting. (I can play catch up on learning with AI, you know!)

    The one who has never encountered a moment of self-doubt that couldn’t be immediately crushed under the weight of their own magnificence (behold the glory that is constrained within this mid-range Next two-for-one suit!).

    The one whose confidence in their prescriptions is matched only by their complete ignorance of the subsequent clean-up operations required after their brilliant ideas implode. (I always find it remarkable how many people think they are great drivers yet constantly have near misses – funny, that).

    These individuals don’t merely suffer from the Dunning-Kruger effect; they’ve turned it into a leadership philosophy – one that could be packaged into a whole saleable framework on how to be as good as them, were it not for the ego delusion and their fucking inability to do any actual work of value.

    These people have mistaken certainty for competence, volume for insight, and stubbornness for principle.

    Meanwhile, somewhere in your organisation sits someone with actual expertise – thoughtful, nuanced, aware of complexity – who prefaces every contribution with “This might be wrong, but…” or “I’m not entirely sure about this…”

    Guess which one gets promoted?

    Precisely.

    The Reluctant Leader Hypothesis

    There’s a persistent myth in modern management that leadership requires unbridled enthusiasm for the role. That the person who wants it most deserves it most. This is roughly equivalent to suggesting that the person most eager to perform brain surgery on you – despite having no medical training but owning a really sharp kitchen knife and having watched several YouTube tutorials – should be allowed to crack on. (Several videos – not one. How much more evidence do you need!?)

    Perhaps we should consider what I’ll call the Reluctant Leader Hypothesis: those best suited to positions of responsibility are often those most aware of its burdens and limitations.

    History offers some support for this idea.

    Cincinnatus, the Roman dictator who relinquished power voluntarily to return to his farm.

    George Washington refusing a third term and establishing the peaceful transition of power.

    Even the mythological King Arthur, a man pulled from obscurity by a sword that apparently had better leadership selection mechanisms than most modern organisations. (There’s a real thought – maybe we should seek out mythical swords to determine who should be king, except I’ve just checked the stock levels at the Mystic Warehouse, and they’re all out).

    What unites these examples isn’t merely their reluctance, but their sense of service rather than entitlement. Leadership as duty rather than as a prize. Authority as responsibility rather than playground dynamics of who has the sharpest title. You know – God forbid – actual leadership.

    The Corporate Selection Problem

    In theory, modern organisations should have sophisticated methods for identifying and developing genuine leadership talent. In practice, most promotion systems operate with all the nuance and discernment of a hungry toddler at a birthday party buffet, grabbing the brightest, loudest things while ignoring the vegetables of quiet competence sitting forlornly on the side.

    The standard corporate selection process rewards several traits that have at best a tenuous relationship – and arguably an inverse one – with actual leadership capability:

    Unwavering self-promotion – Because nothing says “I’m focused on organisational success” like the obsessive documentation (and associated proclamation) of personal achievements

    Strategic visibility – Ensuring one is seen doing things rather than simply doing them well (because why do the work when you can just take the credit?)

    Confident proclamations – Making assertions with certainty regardless of their relationship to reality

    Relationship cultivation with existing power structures – Proving one’s fitness to lead by demonstrating a profound capacity for strategic flattery towards staff who, obviously coincidentally, sit further up the hierarchy – and a remarkable tolerance for the taste of their excrement

    None of these correlate strongly with the ability to navigate complexity, build consensus, acknowledge uncertainty, or make difficult decisions under pressure – you know, the actual job of leadership.

    The Quiet Competence Conundrum

    Meanwhile, genuine capability often manifests in ways that are systematically overlooked or undervalued:

    Thoughtful consideration – Interpreted as indecisiveness rather than prudence

    Nuanced perspectives – Dismissed as “complexity” in a world enamoured with false certainty

    Acknowledgment of limitations – Seen as weakness rather than self-awareness

    Focus on work rather than self-promotion – Resulting in the organisational invisibility of the actually competent

    The result is a persistent filtering mechanism that elevates the confidently inadequate whilst overlooking the quietly capable. It’s not merely an unfortunate coincidence but a structural feature of systems that mistake confidence for competence, certainty for clarity, and self-promotion for achievement.

    Beyond the Binary: The Confident-Competent Unicorn

    Despite my ongoing affinity for hyperbole, surrealism, and aligned topics, let’s acknowledge the legitimate counterargument: confidence and competence aren’t mutually exclusive.

    Occasionally – about as frequently as a total solar eclipse visible from the precise geo-coordinates at which you’re reading this article – these qualities align in a single individual.

    These rare creatures – the confident-competent – do exist.

    They combine genuine capability with the self-assurance to deploy it effectively.

    They’re the unicorns of the organisational world, and finding one feels about as likely as discovering your cat has been quietly paying half your mortgage.

    The problem isn’t that these individuals don’t exist; it’s that our selection mechanisms are catastrophically bad at distinguishing them from their more common doppelgängers: the confident-incompetent. From a distance, and particularly to existing leadership equally afflicted with the confidence/competence inversion, they appear identical – how are people going to tell the difference between bullshit and brilliance if at least part of their own rise to the top involved a suitable amount of bluff and bluster?

    The Selection Renovation Project

    If we accept that our current approaches to identifying leadership talent are fundamentally broken, how might we improve them? How do we find those capable but not necessarily clamouring for power?

    Here are some horribly unfashionable suggestions that would probably get me removed from any corporate HR function within approximately 17 minutes:

    1. Value proven problem-solving over persuasive self-presentation

    A track record of quietly solving complex problems without creating new ones might be a better indicator of leadership potential than the ability to create a compelling PowerPoint about one’s own magnificence. Projects that never go red are probably better places to find leaders than the “heroes” who always seem to be in the thick of the latest corporate bomb site.

    2. Seek evidence of epistemic humility

    The capacity to say “I don’t know” or “I was wrong” indicates an intellectual flexibility essential for navigating uncertainty. Someone who can’t recall the last time they were mistaken isn’t displaying confidence; they’re displaying delusion.

    3. Observe behaviour under genuine pressure

    Not the manufactured pressure of interviews or presentations, but the authentic stress of unexpected challenges. Character reveals itself not in rehearsed moments but in unscripted responses to difficulty. As the old saying goes, “adversity introduces a man to himself”.

    4. Listen to those being led

    The people working directly with potential leaders often have the clearest perspective on their actual capabilities. 360-degree feedback isn’t perfect, but it’s frequently more accurate than upward-only assessment – senior leaders often lack a grasp of the detail, because the detail has probably changed in the 20 years since they last did the actual work on the ground.

    5. Create selection mechanisms that don’t reward self-promotion

    Design processes that identify capability without requiring candidates to engage in competitive displays of ego and certainty. (The amount of people I see overlooked simply because they aren’t extroverted enough still baffles me to this day).

    6. Value the questioners, not just the answerers

    Those who ask thoughtful questions often have a deeper understanding of complexity than those offering immediate, confident solutions.

    The Fundamental Recalibration

    Perhaps most fundamentally, we need to recalibrate our collective understanding of what leadership actually is. It’s not about being the loudest, the most certain, or the most eager.

    It’s most certainly not about having immediate answers to every question or projecting an image of infallibility.

    Leadership in a complex world requires the capacity to:

    – Navigate uncertainty without resorting to false certainty

    – Integrate diverse perspectives without losing decisiveness

    – Acknowledge limitations without abdicating responsibility

    – Maintain direction without ignoring changing conditions

    – Build consensus without avoiding necessary conflict

    None of these capabilities correlate strongly with the traits we typically filter for in our leadership selection processes. None emerge reliably from processes designed to identify the most confident rather than the most capable.

    The Reluctant-But-Capable Draft

    Maybe we need leadership term limits with mandatory periods of actual work in between. “Congratulations on your three-year stint as Director of Strategic Initiatives! Please enjoy your new two-year residency in Customer Support, where you’ll experience the joyful consequences of all those ‘streamlining processes’ you implemented. Your corner office has been converted into a supply cupboard, but we’ve left you a lovely desk lamp.” (That sort of thing tends to sharpen the mind in a way no abstract thinking can, when there’s a chance that making one’s subordinates’ lives hell might come back to burn one’s own backside in future.)

    I’d like to propose the Turvey-Serve-y leadership selection process. (I’ll admit the naming needs work).

    Imagine an organisational world where leadership positions came with obligations rather than a corner office, a premium-brand electric vehicle, and stock options.

    Where selection focused on demonstrated capability rather than performed confidence.

    Where the question wasn’t “Do you want to lead?” but rather “Given your demonstrated capabilities, would you be willing to serve?”

    Servant leadership isn’t a particularly new idea, and this approach would likely encounter immediate resistance from those most invested in the current system – particularly those whose rise has been fuelled more by confidence than competence. It would require restructuring incentives, reconceptualising leadership development, and fundamentally challenging our collective assumptions about what leadership looks like – far from an easy or overnight job.

    It would mean real change to ensure the new breed of servant leaders are empowered with the tools to generate real success, rather than loaded up like the Little Donkey with seventeen tons of baggage until said donkey collapses and has to be put down.

    It would be difficult, messy, and uncertain – much like actual leadership itself.

    The Uncomfortable Conclusion

    The power paradox has no simple resolution. The very nature of power attracts those who desire it for its own sake rather than for what it enables them to accomplish for others. Our selection mechanisms systematically mistake confidence for competence, certainty for clarity, and self-promotion for achievement.

    Yet perhaps acknowledging this paradox is the first step towards mitigating its worst effects. Perhaps by recognising the inverse relationship between power-seeking and suitability for leadership, we can begin to design systems that select for the qualities we actually need rather than those that shout loudest for attention. These plans will take time, but that’s surely an area where we should invest our thinking if we want a better world over time.

    In the meantime, perhaps the most practical heuristic remains a profound scepticism towards those most eager to lead. The person telling you they were born for leadership is precisely the one you should escort gently but firmly to the nearest supplies cupboard, where they can organise the paper clips into a splendid hierarchy of their own design while composing a 15-page manifesto on ‘The Future of Office Supply Optimisation: A Leadership Journey’.

    By contrast, the truly qualified leader is probably hiding under their desk right now, hoping that this particular chalice of responsibility passes them by, ideally to land on the desk of someone with enough confidence to be utterly untroubled by their complete lack of qualifications.

    The original copy of this article was published via my personal LinkedIn on April 17th, 2025. You can find the original link here: https://www.linkedin.com/pulse/power-paradox-why-those-most-eager-lead-should-locked-turvey-frsa-ydrde/?trackingId=7afVev12RM2JcryLdCRZuA%3D%3D