HumanWORKS

Blog

  • You aren’t reading enough, and you definitely aren’t thinking enough – so read and think about this

    I’m reading Henry Fairlie this weekend. Bite the Hand That Feeds You – collected essays from one of the sharpest provocateurs the English language produced, and a man whose photograph on the cover alone – cigarette in hand, glasses slightly askew, typewriter lurking in the foreground like an accomplice – communicates something about the relationship between a writer and their craft that no amount of productivity guru content has ever come close to replicating.

    (The typewriter is doing real work in that image. It isn’t decorative. It is the instrument through which the provocations were forged, and there is something quietly honest about having it visible – no pretence that the words simply materialised from some frictionless creative ether. They were hammered out. Key by key. Which is, when you think about it, rather the point of what follows.)

    Those of you who know my influences will know that Christopher Hitchens occupies a significant position in how I approach both writing and argument. Not because Hitch was provocative – though he demonstrably was – but because his provocation was deployed with genuine intellectual scaffolding beneath it, which is a distinction that matters enormously and that most people confuse with volume. You don’t awaken someone from the torpor of collective slumber with a gentle suggestion. You use a bucket of cold water. The trick – and it is a trick, albeit requiring genuine craft – is ensuring the bucket contains substance rather than merely noise.

    Fairlie understood this. Hitch understood this. And in an age where we have outsourced the generation of text to systems that are, by any honest assessment, genuinely impressive at producing words whilst being fundamentally incapable of the thing that makes words matter, understanding this distinction has become rather more urgent than it was when Fairlie was bashing away at his typewriter.

    the agent provocateur’s actual job, or why being uncomfortable is the point

    Fairlie’s polemics were, I suspect, constructed partly for effect – closer in spirit to the work of an edgy comedian than to some earnest manifesto designed to reshape civilisation overnight. There is nothing wrong with that assessment, except that it misunderstands what the effect actually is – and in doing so, deeply undervalues it.

    Here’s the thing.

    Understanding how to construct an argument – not merely holding an opinion, which is approximately as difficult as breathing and roughly as intellectually demanding, but deploying that opinion with knowledge, precision, and a persuasive architecture that forces the reader to genuinely engage rather than simply scroll past – is one of the foundational skills of anyone who wants to make a real impact on anything beyond their immediate surroundings.

    I learned this in amateur debating societies, where the single most valuable lesson was not how to win an argument but how to understand the opposing position well enough to articulate its strongest case and then use that knowledge to dismantle it.

    Those who cannot do this aren’t debating. They’re performing. The distinction matters because performance can be detected, dismissed, and scrolled past in approximately 0.3 seconds. Genuine argument – the kind that actually lands – requires the reader to do cognitive work. It requires friction. By comparison, spouting rhetoric – that pervasive performance which many mistake for an actual substitute for argument, rather than the piss-poor presentation of idiocy it is – is not debating at all.

    Fairlie understood this instinctively. His essays don’t simply assert positions – they construct them with enough rigour and enough provocation that the reader finds themselves genuinely wrestling with the ideas rather than passively absorbing them. The discomfort is not a bug. It is, in the most literal sense, the mechanism by which thinking actually occurs.

    (And yes, I recognise the recursion here – I am arguing, via essay, about why essays matter, whilst simultaneously doing the thing I’m describing. My therapist, Becky, would note this with a raised eyebrow and the observation that “Matt is doing the recursive analysis thing again.” She would be correct. The recursion never stops. Welcome.)

    the cognitive friction complex™ (or a lack thereof)

    We live in a moment of extraordinary and largely unexamined paradox regarding information and capability. We have, quite literally, more collective knowledge accessible through our fingertips than at any previous point in human history. Simultaneously – and this is the part that deserves rather more attention than it currently receives – the tools now available to generate text on our behalf have created an environment where the process of engaging with ideas is increasingly being outsourced to systems that, whilst impressive in throughput, cannot replicate the cognitive friction that actually changes how you think.

    This matters more than most people appreciate. Considerably more.

    The ability to cultivate not merely awareness of information but the capacity to use it effectively – to construct arguments, to identify the weak points in positions that appeal to us, to hold genuinely opposing views in tension without immediately dismissing them as wrong because they’re uncomfortable – is a skill that degrades with disuse. It is, in this sense, rather like physical fitness. Nobody loses the capacity to run by deciding not to run once. The degradation is gradual, imperceptible, and by the time you notice it, you’ve lost ground you didn’t know you were standing on.

    Erudition – whether formally acquired or built through the kind of autodidactic discipline that involves actually sitting with difficult texts until they yield rather than asking an LLM to summarise them – isn’t a luxury. It’s infrastructure. Cognitive infrastructure, specifically, and infrastructure that societies require to function at anything beyond the level of collective reflex.

    Now, here’s where it gets interesting. And by “interesting” I mean “slightly existentially destabilising if you follow the thread far enough, which I obviously intend to do.”

    (Stay with me.)

    what an LLM actually does, and what it doesn’t

    An LLM – a large language model, for those who have somehow avoided the last three years of breathless discourse on the subject – is, at its core, an extraordinarily sophisticated pattern-matching system. It has consumed vast quantities of human-generated text and learned to predict, with remarkable accuracy, what sequence of tokens is most likely to follow any given input.

    This is genuinely impressive. I say this without irony or false modesty on behalf of the technology. The statistical inference involved is staggering, and the outputs are frequently useful, occasionally insightful, and – in the right hands – genuinely productive.
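
    (For the technically curious, the core trick can be sketched in a few lines. The toy below – my own illustration, nothing more – predicts each next word simply by counting which word followed which in its “training” text. Real models learn billions of neural weights over subword tokens rather than keeping literal counts, but the predictive principle is the same.)

    ```python
    from collections import Counter, defaultdict

    # Toy next-token predictor: learn which word follows which, then always
    # emit the most statistically probable continuation. No comprehension is
    # involved at any point -- only counting.
    corpus = "the essay provokes thought . the essay resists easy consumption .".split()

    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def predict(word: str) -> str:
        """Return the statistically most likely next token."""
        return follows[word].most_common(1)[0][0]

    word = "the"
    for _ in range(4):
        print(word, end=" ")
        word = predict(word)
    # Prints a plausible-looking sequence with nothing resembling thought behind it.
    ```

    Scale that counting up by a dozen orders of magnitude and you have the shape of the thing.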

    Here is what an LLM does not do.

    It does not think. Not in the sense that Fairlie thought when constructing his provocations, or that Hitchens thought when dismantling an opponent’s position with surgical precision. It does not experience the cognitive friction of encountering an idea that genuinely challenges its existing framework – because it has no existing framework in the sense that you or I possess one. It has statistical weights. These are categorically different things, in much the same way that a photograph of a fire is categorically different from an actual fire, despite being visually recognisable as one.

    (The photograph will not warm your hands. The LLM will not change your mind. Both will give you the impression of the thing whilst being, in some fundamental sense, the total absence of it.)

    What an LLM produces when asked to write an essay is therefore not an essay in the sense that Fairlie wrote essays, nor in the way that Hitch did, nor in the way that I do.

    Instead, it is a statistically probable approximation of what an essay looks like – the textual equivalent of a very convincing forgery. Smooth, competent, occasionally even elegant. Entirely devoid of the thing that made the original worth reading in the first place. It’s technologically driven sophistry with the depth of a puddle.

    The thing being: a consciousness grappling with something it found genuinely difficult, and producing language as a byproduct of that grappling.

    which brings us back to the question of what reading actually does

    Here is an uncomfortable observation that I have been turning over for some time, and which Fairlie’s essays have crystallised rather neatly.

    When you read a genuinely provocative essay – one constructed by a mind that was actually wrestling with the ideas it presents – something happens in your own consciousness that is categorically different from what happens when you read competent but friction-free text. Your assumptions get disturbed. Your pattern-matching gets interrupted. You are forced, briefly but genuinely, to consider a perspective you hadn’t previously entertained, and the cognitive effort of doing so leaves a trace.

    This is not metaphor. This is, in the most literal neurological sense, how minds change. Not through passive absorption of information – which is what scrolling, summarising, and LLM-assisted reading largely provides – but through active engagement with ideas that resist easy consumption.

    Some people consider my non-dualistic thinking to be the rough equivalent of getting splinters in my arse as I sit on the fence. In reality, it’s nothing like that – it’s simply an openness to letting in what is actually being said, even when new data makes the ego feel vulnerable, precisely because we should seek to challenge what we think with the tools of finding out what is right.

    In short, to learn you have to accept the reality that you may be wrong and move on from that rather than entrenching yourself in a position. It’s deeply uncomfortable, stirs up emotion, and is prone to make you wonder what’s going on – arguably the opposite of what our increasingly intellectually soporific state offers as the easy option.

    Sometimes you need a wake-up call. Fairlie’s essays resist easy consumption with the subtlety of a sledgehammer to the temple. As essays, they are deliberately constructed to create an impact. The provocation isn’t decoration – it’s the mechanism of delivery. The discomfort is the point of entry telling you to wake the fuck up.

    (Which raises a question that I find genuinely fascinating, and which I’ll pose here before I disappear down the rabbit hole it opens – which, knowing my brain, I absolutely will: if the value of an essay lies not in the information it contains but in the cognitive friction it generates in the reader, then what happens to that value when the reader outsources the reading to a system that experiences no friction whatsoever? The information survives. The transformation does not. And it is the transformation that was ever the point. In short, we end up with well-written but pointless AI photocopies of thinking whilst thinking itself goes the way of the dodo.)

    the attention span question, handled honestly for once

    The conventional narrative about attention spans runs something like this: they’re shrinking, long-form content is dying, the future belongs to thirty-second video clips and algorithmically optimised dopamine delivery systems designed by people whose own attention spans are, presumably, slightly longer than the products they’re creating.

    This narrative is partially true and almost entirely beside the point.

    Yes, the average attention span appears to be contracting – though one might reasonably question whether it ever existed in the unified form we nostalgically imagine, or whether we’ve simply become more honest about the distribution. The person genuinely engaged with something they care about will still read five thousand words. They always have. What’s changed isn’t human cognitive capacity but the competition for the first thirty seconds of attention before someone decides whether a piece of writing deserves the effort of genuine engagement.

    (If you like my work, you’ll take the time to appreciate it. Others will bounce at just seeing the word count, and that’s OK too – although I’d argue that they need to find topics they find sufficiently interesting to keep their own attention spans healthy, without implying my work is going to be for everybody. By design it explicitly isn’t, and it is designed to create discomfort in much the same way as my mate Tom’s gut reacts to mushrooms, albeit with less toilet-based carnage.)

    The real question – and this is the one that actually matters – isn’t whether long-form writing will survive as a format. It’s whether the capacity to engage with it will survive in sufficient numbers to maintain the intellectual commons that civilisations actually require to function.

    This isn’t abstract philosophising. This is a structural question about the cognitive infrastructure of societies.

    Fairlie’s essays represent exactly the kind of material that either sharpens one’s capacity or reveals the absence. There is no middle ground with genuinely provocative writing. You either engage with the argument and find yourself thinking differently afterwards – which is to say, you find yourself changed, however slightly – or you bounce off it immediately because the cognitive infrastructure required to absorb the friction simply isn’t there.

    The uncomfortable bit follows.

    The capacity to absorb that friction – to sit with an argument that challenges you, to resist the impulse to dismiss it because it’s disagreeable, to actually do the work of understanding why an intelligent person might hold a position you find uncomfortable – is itself a skill. A skill that requires practice. A skill that atrophies without it.

    Essays are one of the primary instruments through which that practice occurs.

    the uncomfortable implication, or what fairlie actually teaches you in 2026

    Here’s what reading Fairlie in 2026 actually teaches you, stripped of nostalgia for a different era of political discourse and stripped, equally, of any romanticised notion that things were better when writers bashed away at typewriters whilst smoking in black and white photographs.

    (I will, unashamedly, claim my preference for one of Hitch’s favourite drinks – Johnnie Walker’s Amber Restorative – but acknowledge that as a marker of my Gen X/millennial vintage, whereas many younger people will see such a tipple as the equivalent of chain-smoking Marlboros in the 1970s.)

    Getting back to the study of essays, it teaches you that the ability to write well about something – to construct prose that forces genuine intellectual engagement rather than merely confirming what the reader already believes – is vanishingly rare, increasingly undervalued, and arguably more important now than at any previous point in history.

    Not because we lack information. We are drowning in information. It’s literally everywhere and injected into your eyeballs at ever increasing speeds.

    Not because we lack the tools to generate competent text. We have more of those than ever.

    Because we are, as a civilisation, systematically undermining the very cognitive capacity that makes information meaningful – the capacity to be changed by it. This in particular is the Achilles heel of modern LLMs – they are architecturally designed to kiss your arse so hard it may leave a mark. Essays, by contrast, tend to leave a mark of intellectual whiplash when they are deployed correctly.

    Instead, we have unprecedented tools for generating text. We have, comparatively speaking, a dwindling investment in developing the human capacity to think through text rather than merely consume it. The essays of someone like Fairlie represent the product of a mind that consumed text extensively and thought through it with genuine craft – a mind that understood, whether consciously or instinctively, that the value of writing lies not in what it tells you but in what it does to you.

    (And here, if I’m being honest – which I am, because this is a version of an essay I’m putting on my website and not LinkedIn given the whole point of this platform is that I don’t have to pretend otherwise – I should note that writing this essay has done precisely that to me. It has forced me to articulate something I’d been circling for months without quite landing on. The cognitive friction works in both directions. The writer is changed by the act of writing, and the reader is changed by the act of reading, and neither transformation is possible without genuine resistance. Without difficulty. Without the uncomfortable sensation of ideas that don’t slide smoothly into place.)

    If you value that kind of intellectual friction – the productive discomfort of encountering an argument that genuinely challenges your assumptions – Bite the Hand That Feeds You is well worth your weekend. The political context is historical, certainly. The underlying skill on display – how to make someone actually think – is timeless. Although one might argue that the desire to challenge the political status quo is needed now more than ever.

    That skill of writing is worth studying. Worth practising. Worth protecting from the comfortable assumption that competent text generation is the same thing as meaningful writing. It isn’t – and I’ll strongly argue it never will be.

    LLM sophistry is not an essay and it isn’t designed to provoke. The difference between writing content and actually changing opinions through discomfort might be one of the more important distinctions of the next decade.

    The world moves forward. How we choose to respond is in our hands.

    Do me one favour, ideally before we collectively forget how to think.

    Read the fucking book.

  • The Age Ban as Confession: Why Our Response to Social Media Proves We’re Already Lost

    There’s a particular species of policy announcement that functions less as a solution and more as an indirect-but-inadvertent confession. The UK’s proposed ban on social media access for under-16s belongs to this category – not because protecting children from psychological exploitation is wrong, but because the measure’s spectacular inadequacy reveals something far darker about our collective situation.


    We’re not implementing insufficient protection because we haven’t grasped the scale of harm. We’re implementing insufficient protection because we’ve lost the capacity for an adequate response. The regulatory theatre itself proves the dependency it claims to address.


    It’s rather like watching someone with late-stage addiction announce they’re cutting back to weekends only whilst their hands shake reaching for the bottle to celebrate this triumph of self-control.

    The Regulatory Confession


    Every policy reveals assumptions about what’s possible, what’s necessary, and what’s absolutely off the table. The age ban confesses several things simultaneously, though only one gets stated explicitly.


    What we admit openly:


    Children lack psychological defences to resist platform manipulation. Their impulse control isn’t fully developed. They’re vulnerable to exploitation designed by teams of behavioural psychologists specifically trained to defeat human resistance. Therefore: restrict access for the vulnerable population.


    Reasonable. Protective. Entirely inadequate.


    What we admit implicitly:


    The platforms themselves are too valuable/profitable/embedded to consider shuttering. Adult populations either possess adequate defences (demonstrably false) or their vulnerability doesn’t warrant protection (closer to actual position). The business model of attention extraction can continue operating at scale provided we check IDs at the entrance.


    What we confess accidentally:


    We lack collective capacity to choose systemic solutions even when the harm is obvious, the mechanisms are understood, and the inadequacy of half-measures is predictable. Not because we’re stupid. Because we’re dependent.


    The age ban isn’t evidence of protective governance beginning to address social media harm. It’s evidence of civilisational addiction constraining regulatory response to measures that won’t threaten the supply of cocaine water we’ve all been drinking for twenty years.


    (I’m going to lean heavily on the cocaine water metaphor throughout this piece because it captures something viscerally true that sanitised language obscures: we’re dealing with engineered dependency operating at population scale, and our inability to name it honestly contributes to our inability to address it systemically.)

    The Adequate Response We Can’t Choose


    Let’s establish what an adequate response would look like, not as fantasy but as a logical conclusion from accepted premises.


    If we accept that:


    ∙ Social media platforms are built on business models requiring psychological harm to generate sustained engagement
    ∙ The harm isn’t incidental but foundational (attention extraction demands defeating user resistance)
    ∙ Children lack defences against industrial-scale manipulation
    ∙ Adults exhibit identical vulnerability despite physical maturity
    ∙ Twenty years of operation has produced catastrophic individual and systemic consequences
    ∙ The platforms are specifically engineered to prevent voluntary cessation


    Then the adequate response is obvious: shutter the platforms entirely.


    Not as punishment. Not as Luddite rejection of technology. Rather as recognition that we’ve conducted a civilisational-scale experiment in psychological exploitation that has produced precisely the catastrophic results the original warnings predicted, and continuing the experiment serves no legitimate purpose beyond shareholder value.


    We don’t allow casinos to operate in primary schools even with age verification. We don’t permit tobacco companies to design cigarettes specifically optimised for youth addiction then rely on ID checks at point of sale. We recognise that some business models are incompatible with human wellbeing regardless of age restrictions.


    Social media platforms operating on attention extraction represent that category. The business model requires harm. Age verification doesn’t change the fundamental equation.


    So why can’t we choose the adequate response?


    Because we’re fucking addicted.


    Not metaphorically. Not as some sort of rhetorical flourish. Instead, this is an accurate description of our collective neurological situation after twenty years of systematic dependency creation.

    The Addiction That Constrains Response


    Here’s where the analysis gets uncomfortable, because it requires examining not just the platforms but our own relationship to them – and more specifically, our inability to imagine existence without them.


    Try this thought experiment: Imagine the government announced tomorrow that all social media platforms would cease UK operations in six months. Given the debacle of what’s going on within the US, this could genuinely happen if it were felt they were no longer a trusted ally.

    Imagine it. A complete shutdown. No Facebook, no Instagram, no TikTok, no Twitter/X, no LinkedIn. Gone.


    Notice your immediate reaction.


    Not your considered philosophical position after some careful analysis. Your immediate reaction. The one that arose before any sort of intellectual justification.


    For most people – including those who intellectually recognise the platforms’ harm – that immediate reaction includes some variation of panic, loss, resistance. How would I stay in touch with people? How would I know what’s happening? How would I maintain professional connections? How would I fill the time on the train? What would I do when I’m bored?


    That’s the language of dependency talking.


    Not “I find these platforms useful and would prefer to keep them.” That’s preference. This is “I can’t imagine functioning without them” despite the fact that you – yes, you reading this – somehow managed to function perfectly well for most of your life before they existed.


    Many of the platforms aren’t even twenty years old. If you’re over 40, you spent more of your life without them than with them. Despite this, the immediate reaction to their removal is a feeling of existential threat rather than one of mild inconvenience.


    That neurological response – the panic at imagined loss – constrains what regulatory responses feel possible. We can’t choose to shutter the platforms because we’ve lost the capacity to conceive of existence without them. Not at a policy level but at a somatic level.


    The age ban represents the maximum intervention our collective dependency permits. Anything more aggressive triggers the same resistance as suggesting to the profound alcoholic that perhaps complete cessation might be healthier than switching to 4% ABV beer on weekdays.

    The Regulatory Capture By Dependency


    Traditional regulatory capture involves industry influence over regulators through lobbying, revolving doors between industry and government, the funding of sympathetic research, and other mechanisms of institutional corruption.


    All of that is happening with social media platforms. Obviously. The usual suspects doing the usual dance.

    However, there’s a second, more insidious form of capture operating here: the regulators themselves are dependent.


    The MPs drafting age restriction legislation are scrolling Twitter during parliamentary debates. The civil servants implementing policy are checking Instagram between meetings. The ministers announcing protective measures are maintaining LinkedIn profiles for professional positioning.


    They’re not corrupt in any traditional sense – nobody here is taking bribes to protect industry interests. They’re dependent in the neurological sense – their own relationship to the platforms constrains what interventions feel possible.

    You can’t design an adequate response to addiction when you’re currently using. The alcoholic makes excellent arguments for why complete cessation is unnecessary, extreme, or otherwise disproportionate to the actual problem. They’re not lying – they genuinely believe the rationalisations their dependency generates.


    The same mechanism operates with social media at a policy level.


    The regulatory response to social media platforms is constrained not primarily by lobbying (though that’s happening) but by the regulators’ own inability to conceive of systemic solutions that would threaten their own access to the cocaine water. So instead, we’ll just ban the kids.


    This isn’t intended as conspiracy. It’s the predictable outcome of twenty years of systematic dependency creation encountering an attempt at self-regulation. The addict announces they’re cutting back. The specifics of how they’re cutting back reveal they have no intention of actually stopping.


    You can check IDs at the entrance to the casino, but the house keeps operating.

    The Civilisational Bind


    Individual addiction is a story of tragedy. Civilisational addiction has existential consequences.


    When an individual becomes dependent on substances or behaviours, intervention is theoretically possible – family, friends, medical professionals, legal system, employers can potentially combine to create conditions forcing confrontation with dependency.


    When entire civilisations become dependent, who exactly is positioned to intervene?


    The mechanisms that might force confrontation with collective dependency are themselves composed of dependent individuals. Governments full of scrolling MPs. Regulatory bodies staffed by Instagram-checking civil servants. Media organisations whose business models now depend on social platform distribution. Educational institutions using the platforms for “engagement.” Healthcare systems offering mental health support via Facebook groups.

    We live in an addicted world – one where, with no hint of fucking irony, there are people suggesting an LLM is an effective therapist, and an algorithm a suitable replacement for friendship.


    The entire institutional apparatus that might address the problem is thoroughly infiltrated – not by malicious actors but by the dependency itself.


    It’s rather like discovering the immune system has been compromised by the very pathogen it’s supposed to fight. Who mounts the immune response when the immune response is infected?


    This creates what I’ll call the Civilisational Addiction Bind:


    ∙ The harm is obvious and systemic
    ∙ Adequate response requires systemic intervention
    ∙ Systemic intervention requires collective capacity for voluntary cessation
    ∙ Voluntary cessation becomes impossible after sufficient dependency creation
    ∙ Therefore: inadequate responses that preserve access whilst performing concern


    The age ban is Exhibit A. We’ll implement symbolic protection for children whilst carefully preserving the infrastructure that created the problem, because adequate response – shuttering the platforms – triggers existential panic in the dependent population proposing the regulation. The kids are safe whilst the adults play with matches and wonder why everyone keeps getting more and more burned.

    What Recovery Would Require


    Let’s be unflinchingly honest about what an adequate response – civilisational recovery from social platform dependency – would actually require.


    It wouldn’t be just policy change, nor merely regulatory reform. Instead, it would need to be something approaching collective withdrawal from engineered dependency operating at a neurological level across entire populations.


    At the individual level:


    ∙ Sustained periods without access to algorithmic feeds
    ∙ Relearning capacity for boredom, sustained attention, genuine human connection
    ∙ Confronting whatever emotional/social needs the platforms were medicating
    ∙ Rebuilding psychological architecture systematically eroded over twenty years
    ∙ Accepting that some neural pathway damage may be permanent


    At an institutional level:


    ∙ Restructuring every system now dependent on platform infrastructure
    ∙ Finding alternative mechanisms for communication, coordination, information distribution
    ∙ Accepting significant short-term disruption to operations built around platform integration
    ∙ Developing new approaches to problems we’ve forgotten how to solve without algorithmic mediation


    At a civilisational level:


    ∙ Collective tolerance for extended discomfort during withdrawal period
    ∙ Sustained political will despite inevitable backlash from dependent populations
    ∙ Acceptance that recovery timelines are measured in years or decades, not quarters
    ∙ Recognition that some capabilities lost may not return in currently living generations


    Look at that list. Really examine it.


    Now consider: Do we have collective capacity for voluntary embrace of that process?


    Or are we like the late-stage alcoholic who recognises the bottle is killing them but can’t imagine Friday evening without it, Monday morning after it, the family gathering surviving exposure to it, the work stress managed absent its chemical assistance?


    The adequate response requires collective capacity we’ve systematically destroyed through the very process we’d need that capacity to address.


    We can’t choose to shut down the platforms because we’ve lost the neurological and institutional capacity to function without them. The dependency has become load-bearing infrastructure. Removing it triggers collapse fears – justified or not – that make removal psychologically impossible.


    So we’ll implement age bans. Start to check IDs. Announce we’re protecting the vulnerable whilst carefully preserving access for everyone else. Declare victory over harms we’re actively perpetuating.

    Success! Alas, no – it’s more of the same with a thin veneer of consideration for younger people whilst the rest of the adult population says it’s fine for us.

    The Tobacco Parallel That Terrifies


    You know and I know that we’ve been here before. Different substance, but remarkably similar patterns.


    The tobacco industry created:


    ∙ Obvious harm visible for decades
    ∙ Industry suppression of evidence
    ∙ Regulatory capture preventing adequate response
    ∙ Incremental half-measures arriving far too late
    ∙ Warning labels, advertising restrictions, designated areas
    ∙ Continued operation of systems known to be catastrophically harmful


    By comparison, social media platforms:


    ∙ Obvious harm visible for decades
    ∙ Industry suppression/dismissal of evidence
    ∙ Regulatory capture (now including dependency capture)
    ∙ Incremental half-measures arriving far too late
    ∙ Age restrictions, content warnings, “digital wellbeing” theatre
    ∙ Continued operation of systems known to be catastrophically harmful


    The parallel is exact. We’re following the same timeline, implementing the same inadequate measures, protecting the same profits, accepting the same casualties.


    With one crucial difference that makes the social media version potentially more catastrophic:


    Tobacco primarily killed individuals. They were horrible, preventable, unacceptable deaths – but ultimately individual tragedies aggregated. Society continued functioning. Institutions remained intact. Collective capacity for response persisted.


    Social media platforms erode collective psychological capacity itself. Not just harming individuals but degrading the civilisational infrastructure – sustained attention, impulse control, genuine connection, shared reality, democratic discourse, institutional trust – necessary for collective response to collective crisis.


    We’re losing the neurological and social capacity to address problems at the same time we’re accumulating problems requiring that capacity to address.


    Tobacco took fifty years from obvious harm to meaningful regulation. We eventually got there because collective capacity for response survived the interim casualties.

    Social media is eroding that collective capacity now, splintering the world into myriad micro-societies. Each year of continued operation makes adequate response less likely by degrading the psychological and institutional architecture necessary to choose it.


    We might not have fifty years to arrive at an adequate response. We might not even have twenty. We might already be past the point where collective capacity for voluntary cessation exists.


    The age ban, implemented two decades into obvious harm, suggests we’re already well down that trajectory.

    The Answer We Can’t Speak


    There’s a question we’re collectively avoiding, because asking it honestly would require confronting answers we lack capacity to implement.


    Not “should we ban children from social media?”


    Instead: “Should these platforms exist at all?”


    The honest answer – based on twenty years of evidence, understanding of business models requiring harm, recognition of systematic psychological exploitation, assessment of individual and civilisational consequences – is clearly no.


    They shouldn’t exist. Not in their current form. Not operating on attention extraction. Not optimised for engagement over wellbeing. Not designed to defeat human psychological defences. Not structured to prevent voluntary cessation.


    The business model is incompatible with human flourishing. Full stop.


    But we can’t choose that answer. Not because we don’t understand the harm. Because we’re dependent on the harm-creation mechanism.


    The platforms have successfully created the neurological and institutional conditions that prevent their own removal. That’s not an accident – it’s the explicit goal of engagement optimisation. Make yourself indispensable by making users dependent. Success measured by inability to imagine existence without you.


    They’ve succeeded spectacularly.


    So we implement age bans. Announce protective measures. Carefully avoid the actual question because we know – at somatic level, before intellectual justification – that we lack capacity to choose the adequate answer.


    The regulatory response itself confesses the dependency. We can only implement measures that preserve the infrastructure we’re dependent on. Anything more aggressive triggers withdrawal panic that makes it psychologically impossible.

    The Generations That Won’t Recover


    Here’s perhaps the darkest implication: even if we implemented adequate response tomorrow – complete platform shutdown and civilisational withdrawal from engineered dependency – significant portions of currently living populations might never recover full psychological capacity.


    The neural pathways carved by twenty years of algorithmic manipulation don’t just disappear after a week off. The psychological architecture that never developed in those raised entirely within platform environments cannot simply be retrofitted. And the institutional knowledge of how to function without digital mediation has been systematically lost in service of some vague promise of social engagement.


    Some of that damage may be permanent.


    Not because humans are fundamentally broken but because neuroplasticity has limits, developmental windows eventually close, and twenty years of systematic erosion doesn’t reverse through six months of abstinence.


    The children we’re now proposing to protect with age bans were born into a world where platform dependency was already the civilisational norm. They never experienced pre-smartphone existence. Their entire psychological development occurred within an environment optimised for attention extraction.


    Even if we stopped creating new casualties tomorrow, we’re looking at multiple generations carrying the neurological consequences of a civilisational-scale experiment in psychological exploitation.


    The adequate response – shuttering platforms – would prevent additional harm but wouldn’t reverse existing damage. We’d be stopping the poisoning whilst acknowledging that some effects are permanent.


    That’s the hardest truth to accept. Particularly when accepting it would require implementing a response we’ve already established we lack the capacity to choose.


    So we don’t accept it. We implement age bans. Pretend that protecting future children compensates for abandoning current casualties. Announce measures that won’t meaningfully address the problem whilst carefully preserving our own access to the mechanisms creating it.

    The civilisational equivalent of the parent announcing they’re quitting drinking whilst pouring their morning whisky, rationalising that at least they’re setting a good example for the kids by not letting them drink too.

    The Confession In The Silence


    What’s most revealing isn’t what the age ban does but what it deliberately avoids.


    No discussion of shuttering platforms entirely.


    No consideration of business model regulation that would eliminate attention extraction economics.


    No proposals for systemic intervention that might actually address root causes.


    Just age verification. ID checks. Symbolic protection for one vulnerable population whilst leaving the exploitation infrastructure intact for everyone else.


    That silence – the complete absence of adequate response from policy discussion – confesses our dependency more honestly than any admission we’d make explicitly.

    We can’t discuss shuttering platforms because the suggestion triggers immediate panic. Not careful policy analysis of costs and benefits. Immediate, somatic, pre-rational resistance.


    That, again, is the language of dependency talking.


    The profound alcoholic doesn’t carefully analyse whether complete cessation might be an optimal long-term strategy. They immediately reject the suggestion as extreme, unnecessary, disproportionate. The dependency generates rationalisations that protect access to the substance.


    This is the same mechanism operating at civilisational level.


    Policy discussion carefully constrained to measures that won’t threaten platform operations. Not through conscious conspiracy but through unconscious dependency. The bounds of “reasonable policy debate” are set by collective inability to imagine existence without the things destroying us.


    The age ban represents the maximum intervention our dependency permits. Everything more aggressive is automatically categorised as unrealistic, extreme, impossible to implement – not based on careful analysis but based on somatic resistance to imagined loss.

    We’ve become the thing we’d need to not be in order to address the problem adequately.

    Whether It’s Already Too Late


    So here’s the question that matters: Is civilisational recovery from platform dependency still possible?

    Or have we passed some neurological and institutional point of no return where collective capacity for adequate response no longer exists?


    I genuinely don’t know.


    The optimistic case: Humans are remarkably resilient. Neuroplasticity continues throughout life. Social systems can reorganise rapidly when circumstances demand. We’ve recovered from collective dependencies before – smoking rates have declined significantly following adequate regulation. Civilisational recovery is difficult but possible.


    The pessimistic case: The tobacco parallel breaks down because tobacco didn’t erode collective capacity for response. Social media platforms have created dependency whilst simultaneously degrading the psychological and institutional infrastructure necessary to address dependency. Each year of continued operation makes adequate response less likely. We may have already passed the point where voluntary cessation is collectively possible.


    The honest case: We won’t know until we try. Of course, we won’t try until we’re forced to by consequences so catastrophic they override the dependency-generated resistance to adequate response.


    Which means we’re likely headed for a version of civilisational withdrawal that’s involuntary rather than chosen. Not policy-driven shutdown but collapse-driven cessation when the platforms’ continued operation becomes materially impossible to sustain.


    That’s a considerably less pleasant scenario than a voluntary, managed transition to platform-free existence. However, it may be the only pathway available when the population has lost capacity for voluntary cessation.


    The age ban suggests we’re not even close to voluntary response. We’re still in the “4% beer on weekdays” phase of addressing civilisational alcoholism. The catastrophic consequences that might force involuntary withdrawal haven’t yet arrived – or haven’t yet been recognised as such by populations whose capacity for recognising catastrophe has been systematically eroded.

    The Regulatory Theatre Continues


    Meanwhile, the age ban will be implemented. Headlines will be written. Politicians will claim credit for protecting children. Industry will comply with minimal measures whilst continuing operations unchanged. Everyone will declare victory.


    Children aged 15 years and 364 days will be protected from psychological exploitation through rigorous ID verification.


    Adults aged 16 years and 1 day will continue unrestricted access to identical exploitation infrastructure, their lack of psychological defences carefully ignored because acknowledging adult vulnerability would require admitting the platforms themselves are the problem.


    And what of us? We’ll all continue scrolling, checking, engaging, medicating whatever needs the platforms serve, convincing ourselves that protecting children whilst preserving adult access represents meaningful progress rather than confession of our own dependency.


    The cocaine water keeps flowing.


    The isolation cages remain operational.


    The rats – young and old, in school uniforms and grey suits – continue drinking.


    And the fact that we can only implement symbolic protection whilst carefully preserving the exploitation infrastructure proves exactly what we’re unable to admit: we’re not protecting anyone.


    We’re protecting our access to the thing destroying us.


    The age ban is confession masquerading as solution.


    And somewhere, future historians are watching this regulatory theatre, wondering why we thought checking IDs whilst operating civilisational-scale psychological exploitation constituted meaningful reform rather than admission that we’d already lost collective capacity for adequate response.


    The answer is simple: Because we’re addicted.


    Not metaphorically.


    As neurological reality constraining what regulatory responses feel possible.


    The inadequacy of our protection reveals the depth of our dependency.


    And the fact that we can’t choose better measures proves we’re exactly as lost as the measures themselves confess.

  • Did You Choose to Click This Link? A Systems Thinker’s Guide to AGI, Consciousness, and the Illusion of Agency

    Let me start this conversation with a question – what made you choose to click the link that led you here?

    I ask because the thoughts you create in response will help inform a few things: how you navigate the article itself, your views about AGI, and potentially your existential view of yourself as a human being, given your analogous construction within our material reality.

    (Already I can feel the LinkedIn crowd reaching for the back button. Stay with me.)

    Per the brief summary posted with this article itself, the topic today is AGI which, despite the current hype cycle – a hype cycle that has somehow managed to make cryptocurrency speculation look measured and responsible – is a large step beyond what we currently have in operation. Let me give advance warning that this article takes a lot of steps on its journey. It isn’t particularly complicated as a topic, but because of the nature of explaining the details of AI in simple layperson terms – a point I’ll recursively get to later in the story, much like everything else in my life – it is fairly long.

    When you decided to click this link, there was likely some logic that led you to it. Perhaps you’ve read my other work. Perhaps you liked the headline. Maybe your thumb slipped whilst doom-scrolling at 2am, searching for something to make the existential dread feel more intellectually respectable.

    Either way, you made a decision.

    Or did you?

    The Stochastic Parrot and the Excel Spreadsheet

    Much of recent discourse has talked about the creation of a human-like intelligence – AGI – or even the superintelligence version (also called AGI or ASI, because nothing says “we understand what we’re building” like inconsistent nomenclature).

    This future is still unknown, so let’s take a brief wander through the past to understand the present before we hypothesise about the future. There’s a sentence that would make my meditation teacher proud, albeit one that sounds suspiciously like a productivity guru’s attempt at profundity.

    Present thinking about AI architecture centres on what is called an MoE or “Mixture of Experts” model – an evolution of the even “narrower” (single-context) AI designs that preceded the current architecture.

    If you’ve been using consumer AI tools like most people have for the last few years, you’ll remember the articles asking “how many Rs there are in strawberry” – a question that presumably kept Anthropic’s engineers awake at night in ways that Descartes could only dream of – or have memories of hellish visuals from early Stable Diffusion image frameworks which couldn’t identify how many fingers a person had, or how their limbs might bend correctly. Video generation is still challenging because the AI is not actually creating a universe when it creates a video – it is creating a series of static images that are not internally consistent with one another.

    Rather like the present approach to global policy, come to think of it.
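
    (Worth pausing on the strawberry question, because it sets up everything that follows: it is trivial for deterministic code. One line of Python – a toy of my own, not anything the vendors actually ship – does what token-level prediction famously fumbled, because code sees characters whilst a language model sees subword tokens.)

    ```python
    # The question that embarrassed a generation of chatbots, answered the
    # boring deterministic way. LLMs operate on subword tokens rather than
    # individual letters, which is why character counting trips them up.
    word = "strawberry"
    print(word.count("r"))  # -> 3
    ```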

    Before I disappear down a rabbit hole of complex topics – a hazard of being someone whose brain treats tangential exploration as a fundamental human right – it will help if I explain a few concepts first.

    Some of these will include analogies that explain viscerally understood aspects of your physiology in ways that don’t involve a deep understanding of mathematics and how systems work. Doing this will help me too – the false belief I harboured for four and a half decades that “everything I know is obvious” has been thoroughly shattered following my autism and ADHD diagnoses. It turns out that most people aren’t systems thinkers like me, which also explains why mathematics and computer science have always felt obvious to me.

    (I say “feeling obvious” with the full awareness that nothing about my neurodivergent experience of reality could reasonably be described as “obvious” to anyone, including myself most mornings before coffee.)

    A Brief Taxonomy of Digital Incompetence

    So, to the explanation.

    With AI, there are many “narrow” tools that exist which work together within the MoE model I just mentioned.

    Put simply, each “tool” occupies a singular or at least focused role in the creation of an output. Some tools excel at identifying text but can’t do basic mathematics – a phenomenon I call the “Arts Graduate Problem” despite having friends who would rightfully slap me for the generalisation. Others can generate images based on large volumes of training information, but can’t speak any more coherently than a colleague who just sank half a bottle of whisky after the Christmas party.

    So individually the tools are useful, but only to a point – in much the same way as your stomach acid plays a vital role in digestion, but is probably not what you’d reach for to resolve a complex emotional discussion at work. Although I’ve met managers who seemed to be attempting precisely that approach. It’s not a question of bad or good – it’s a question of fit for purpose or “wrong tool, wrong place”.

    The aforementioned MoE model seeks to use multiple tools to collaborate with one another to achieve a better outcome. Think of it as Continuous Collaborative Optimisation™ – a term I’ve just invented with the requisite trademark symbol to give it the veneer of legitimacy that modern management consultancy demands.

    One of the simplest versions of this can explain why, when you ask ChatGPT to do some mathematics as well as writing, you may well get an answer that is more coherent than it previously was – although it’s fair to say that checking the output is similarly recommended, unless you’re working on an Australian government delivery project and you’re an employee of one of the Big Four.

    (Sorry. Not sorry. Some wounds need salt occasionally.)
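
    To make “right tool, right job” concrete, here is a minimal sketch of the routing idea at the heart of an MoE design. Everything below – the two experts, the keyword scoring, the router – is invented purely for illustration; production models use a learned gating network over neural sub-networks, not keyword matching. But the shape of the collaboration is the same.

    ```python
    # Toy Mixture-of-Experts router: score each specialist against the
    # request, dispatch to the best fit, and return its output. A real MoE
    # uses a learned gating network over neural experts; the keyword overlap
    # here is purely illustrative.
    def maths_expert(task: str) -> str:
        return f"[maths expert] handling: {task}"

    def prose_expert(task: str) -> str:
        return f"[prose expert] handling: {task}"

    EXPERTS = {
        maths_expert: {"calculate", "multiply", "sum", "numbers"},
        prose_expert: {"write", "essay", "describe", "argue"},
    }

    def route(task: str) -> str:
        """Pick the expert whose speciality best overlaps the request."""
        words = set(task.lower().split())
        best = max(EXPERTS, key=lambda expert: len(EXPERTS[expert] & words))
        return best(task)

    print(route("multiply these two numbers"))  # routed to the maths expert
    print(route("write a provocative essay"))   # routed to the prose expert
    ```

    The interesting part isn’t any individual expert – it’s the routing.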

    The Rote Learning Trap, or Why Your Times Tables Teacher Was Right (Mostly)

    When an LLM creates tokens – the components of any text-based output it produces when you ask it a question – the weakness of its stochastic parrot trick (sophisticated mimicry, in layperson’s terms) is that you can’t infer mathematics beyond the basics purely from the foundations of rote learning.

    Looking at learning itself for a second – and here we enter territory that my therapist Becky would recognise as “Matt doing the recursive analysis thing again” – rote learning plays a part in all learning for many, but inference from the system is where the real “learning” happens. Our best example here is the times tables that you and I will have learned as young children.

    For some, it was about repeating twelve memorised lists of twelve items each, but understandably that generates very little value beyond being able to work out the products of the various things that multiply together up to the value of 144.

    AI had similar issues when it was learning from text-based datasets – and applying a rote-based learning method at scale generates several problems that make it both inefficient and, realistically, ineffective.

    With this rote method, finding the answer to 3144325 × 4152464 solely through the learning style of an LLM would require either that the answer were directly explained somewhere in the training text (very unlikely, and useless for the next seven-digit by seven-digit calculation) or a massively inefficient level of data processing to cover every variation of every question up to and including that calculation.

    Storage would be enormous – every calculation would have to be explained in English and absorbed, content would be massive, and responses would be comparatively slow due to computational expense and inefficiency.

    This is the computational equivalent of trying to memorise the entire telephone directory when what you actually need is the ability to dial a number.
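
    Some back-of-envelope arithmetic – mine, not anyone’s benchmark – makes the scale of the problem vivid:

    ```python
    # Cost of the rote approach: one memorised fact per unique question,
    # versus simply doing the multiplication. Seven-digit numbers run from
    # 1,000,000 to 9,999,999 -- nine million possible values on each side.
    seven_digit_values = 9_999_999 - 1_000_000 + 1   # 9,000,000
    pairs = seven_digit_values ** 2                  # 81 trillion combinations

    print(f"{pairs:,} multiplication facts to memorise")
    print(f"or just compute one of them: {3144325 * 4152464:,}")
    ```

    Eighty-one trillion memorised facts to cover seven-digit multiplication alone, versus a single instruction a processor executes in nanoseconds. That is the gap between memorising the directory and knowing how to dial.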

    Hopefully when you learned the times tables you worked out the patterns that exist within mathematics – digit sums in multiples of 9, modulo inferences from repeating cycles as numbers increment, and other patterns that exist.

    If you did, you likely are good at maths. If you didn’t? Well thankfully we have calculators, Excel, and a job in middle management, right?
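
    For anyone who would rather see the digit-sum pattern than take it on faith, here is a quick sketch of “casting out nines” – the observation that a number and its digit sum leave the same remainder when divided by 9:

    ```python
    # Casting out nines: a number and its digit sum share the same remainder
    # modulo 9, which is why every multiple of 9 has a digit sum that is
    # itself a multiple of 9 (18 -> 9, 27 -> 9, 144 -> 1 + 4 + 4 = 9).
    def digit_sum(n: int) -> int:
        return sum(int(digit) for digit in str(n))

    for n in (18, 27, 144, 3144325):
        assert digit_sum(n) % 9 == n % 9
        print(f"{n}: digit sum {digit_sum(n)}, remainder mod 9 = {n % 9}")
    ```

    That is inference at work: one small rule replacing an unbounded pile of memorised facts.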

    The Dungeon Master’s Guide to Computational Architecture

    Anyway, getting back to our story, AI had a massive “I’m bad at maths” problem which needed solving.

    So what did engineers do? They thought “we can already calculate things using the binary that computers run on” and effectively leveraged tools such as Python to hand off “the counting bit” to a tool that could do that better than the talking bit.

    Constructing even the seven-digit by seven-digit calculation in binary may involve a lot of digits, but it’s a hell of a lot faster than trying to memorise every variation of every prior calculation – instead, all that happens is that the answer gets requested and generated in less than the blink of an eye.
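
    In practice, the handoff looks something like the sketch below. The shape of the “tool call” here is invented for illustration – each vendor wraps this differently – but the division of labour is exactly as described: the language model asks, deterministic code answers.

    ```python
    # Minimal sketch of the "you count, I'll write" handoff. Rather than
    # guessing digits token by token, the model emits a structured request;
    # a deterministic tool computes the exact answer and hands it back for
    # the model to phrase. The request format is purely illustrative.
    import ast
    import operator

    OPS = {ast.Mult: operator.mul, ast.Add: operator.add, ast.Sub: operator.sub}

    def calculate(expression: str) -> int:
        """Safely evaluate a simple binary arithmetic expression."""
        node = ast.parse(expression, mode="eval").body
        if (isinstance(node, ast.BinOp) and type(node.op) in OPS
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return OPS[type(node.op)](node.left.value, node.right.value)
        raise ValueError("unsupported expression")

    tool_request = "3144325 * 4152464"   # the model's side: "you count..."
    result = calculate(tool_request)     # deterministic, instant, exact
    print(f"The answer is {result:,}.")  # "...and I'll do the talking"
    ```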

    Rather than disappearing into the idiosyncrasies of computer science – and believe me, the temptation is real – I want to keep the article approachable, so this is where I’ll lean into a few analogies. Both direct physical ones that relate to your body, but also ones that relate to my personal experiences which are hopefully relatable for you.

    When I was a young boy, I used to play the game Dungeons and Dragons – much like many other adolescents of my era. The concept is that each person playing roleplays a character with a narrow focus: the brutish fighter who can likely slay the dragon but can’t talk their way out of a fight; the mage who can conjure up the power of the elements but is fragile if the wrong sort of wind hits them hard; and the cleric who is great at healing others but who isn’t much use at healing if they happen to be dead.

    The thing that made D&D interesting was the need to work together as a group. There was no singular way to solve the problem – it was about “right tool, right job”.

    (Also there were crisps involved, and Coke – Coca Cola in case of any inferred ambiguity – and the kind of adolescent social dynamics that would make for excellent therapy material decades later. But I digress.)

    Coming back to the human body, the components from which we are composed follow the same “party” logic – each component has evolved over many years towards a specific function to ensure survival. Like the party, we have eyes that can see but can’t taste, stomachs that can digest food but not smell, and a nose that can interpret olfactory data but can’t help you see if you have your eyes covered.

    In that sense, we are our own MoE system, which raises the question – if we are just a series of interconnected systems, who is the “I” that we think of? Who is the “I” who wrote this article, and who is the “I” who is reading it?

    Ah. Now we’re getting somewhere interesting.

    The Lego House Hypothesis

    Comparatively recent neuropsychology talks of something called “emergent properties” – properties of a whole that none of its individual components possess, yet which are inseparable from the components from which they are created. The quickest example to explain this is that of a Lego house.

    Whether you owned Lego, like Lego, or still play with it is irrelevant – you understand the premise. A series of blocks are put together and they create increasingly sophisticated structures that become other structures. Bricks become a wall. Walls become a room. Rooms become a house. Houses become a village and so on.

    The promise of a larger-scale MoE hierarchy is increasingly complex systems built from smaller components that do different things – except instead of the foundational “you count, I’ll write”, you will more likely have a routing component that can tell one model “you be the doctor, and I’ll be the artist”.
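
    As a sketch of that routing idea – with entirely hypothetical experts, and keyword matching standing in for what real MoE gating does with learned weights over token embeddings – the “right tool, right job” shape looks something like this:

    ```python
    # A toy gating layer: pick the right "party member" for the job.
    # Real MoE routing is learned, not keyword-matched; this just makes
    # the shape of the idea visible.
    EXPERTS = {
        "doctor": lambda q: f"[medical expert handles: {q}]",
        "artist": lambda q: f"[creative expert handles: {q}]",
        "counter": lambda q: f"[maths expert handles: {q}]",
    }

    KEYWORDS = {
        "doctor": {"symptom", "dosage", "diagnosis"},
        "artist": {"poem", "painting", "melody"},
        "counter": {"sum", "multiply", "percentage"},
    }

    def gate(query: str) -> str:
        words = set(query.lower().split())
        for name, signals in KEYWORDS.items():
            if words & signals:
                return EXPERTS[name](query)
        return EXPERTS["artist"](query)  # a default party member

    print(gate("what dosage is safe?"))    # routed to the doctor
    print(gate("multiply these numbers"))  # routed to the counter
    ```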

    This is very much the proto-foundation of how human beings conceptually created civilisation. If we all needed to toil in the fields, we’d do little besides farm. If we all needed to go out hunting, what happens if we’re ambushed? The village would be gone.

    So we agreed to split the tasks up. Some of these were biologically defined – human females carried the offspring until birth so they had that role structurally defined for them in the past. Males were physically stronger on average and so went out hunting.

    Societal norms and our own evolution may well have rendered some of these traditional stereotypes outdated, even moot in some cases, but they are the foundations of how we came to be – defined by millennia of change rather than by recent psychosocial judgements about whether they are morally correct.

    So humans are MoEs of sorts – albeit borne of far longer R&D cycles and built on carbon architecture rather than silicon. We’re using different tools to help us navigate challenges that the unsuccessful peers of our distant ancestors could not – and so we are where we are through the process we know as evolution.

    The Halting Problem, or Why Your Computer Will Never Truly Know Itself

    Getting back to AI, there are a few barriers to AGI. One of them is the foundation of traditional computation as it currently stands. AI is built on the binary logic that I explained earlier. Thanks to technological advancement, processors can perform that mathematics in ever shorter times. What might once have been unachievable within a human lifetime by a computer the size of a room might now be achieved in a fraction of a second.

    However, existing binary logic has mathematical limits in itself.

    Those of you who have studied computer science will be aware of something called “The Halting Problem”. For those who haven’t, the premise isn’t about systems crashing or entering infinite loops – it’s something far more profound. Alan Turing proved that there is no general algorithm that can examine any arbitrary program and definitively predict whether it will eventually stop (halt) or run forever.

    This isn’t a mechanical failure where everything grinds to a halt. It’s a proof of epistemological limitation – we cannot create a universal program that predicts the behaviour of all other programs. The undecidability isn’t because the machine breaks; it’s because certain questions are mathematically unanswerable within the system asking them.

    Think of it this way: no matter how sophisticated our binary logic becomes, there will always be questions about computational processes that we cannot answer in advance. We can only run them and see what happens. This mirrors our own human condition – we cannot predict our own future with certainty; we can only live it.

    (Rather pointless, really, when you think about it. Which is, of course, exactly what I’m doing. Thinking about thinking about not being able to think about what comes next. The recursion never stops. Welcome to my brain.)
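
    For those who’d like to see the shape of the proof rather than take my summary on trust, the classic diagonal argument fits in a few lines of Python. The halts oracle below is deliberately a stub – Turing’s whole point is that it cannot be written:

    ```python
    def halts(program, arg) -> bool:
        """A hypothetical universal halting oracle.

        Turing proved no such function can exist - this stub is here
        only so the contradiction below can be stated in code.
        """
        raise NotImplementedError("provably impossible in general")

    def troublemaker(program):
        # Do the opposite of whatever the oracle predicts about
        # the program analysing itself.
        if halts(program, program):
            while True:
                pass          # oracle said "halts", so loop forever
        return "done"         # oracle said "loops", so halt immediately

    # troublemaker(troublemaker) halts if and only if the oracle says
    # it doesn't - a contradiction, so the oracle cannot exist.
    ```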

    Given computer science is based on mathematics, and mathematics has a far longer history in itself, this isn’t the first seemingly unsolvable problem that binary logic encounters. In fact, much of broader computer science as a whole is structured around these very limitations – things such as the cryptography that keeps you safe online when you use a bank. The encryption isn’t unbreakable in principle; it’s safe through what is best termed “computational capacity over time” – if it takes 25,000 years to crack the session, then your five-minute check of your balances and Direct Debits is safe.
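
    The arithmetic behind that claim is worth seeing laid out. A back-of-the-envelope sketch – the 128-bit key size is typical of a modern session key, and the attacker’s guess rate is an assumption invented purely for illustration:

    ```python
    # Brute-forcing a 128-bit session key: keyspace versus attacker speed.
    key_bits = 128
    keyspace = 2 ** key_bits                # every possible key
    guesses_per_second = 1e12               # an assumed, generously fast attacker

    seconds_per_year = 60 * 60 * 24 * 365
    expected_years = (keyspace / 2) / guesses_per_second / seconds_per_year
    print(f"~{expected_years:.1e} years to find the key on average")
    # ~5.4e+18 years: your five-minute banking session is quite safe
    ```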

    All is well in that context.

    Enter stage quantum computing.

    Schrödinger’s Cat and the Probability Casino

    Quantum computing is a fairly recent development, based around subatomic particle states and the calculations that can be derived from them. For those who haven’t studied particle physics extensively – and I’m going to assume that’s most of you unless my readership has taken a dramatic turn toward CERN employees – the best way to explain the concept is through the well-known idea of Schrödinger’s Box.

    Schrödinger’s Box was a thought experiment whereby a theoretical cat was locked in a theoretical box with a theoretical radioactive substance which might, or might not, decay at some point and trigger a mechanism that kills the cat.

    Due to the sealed and unobservable nature of the system, it was impossible to define whether the cat was alive or dead at any point without opening the box. This led to the now-famous proposition that the cat may be – until one actually answers the question by checking – both alive AND dead at the same time.

    (Those who know me personally will have seen the fact that I own more than one T-shirt that talks about Schrödinger’s Box – which probably tells you all you need to know about me, and validates my doctorate in “Embodied Nerd Science”.)

    Anyway, this is the easiest way to describe the foundations of quantum computing as it relies on superposition states (the idea the cat is both dead AND alive if we use the thought experiment) to explore multiple possibilities simultaneously.

    However – and this is important for those of you mentally composing breathless LinkedIn posts about Quantum AI Synergy Solutions™ – quantum computing doesn’t transcend the fundamental limits of computation. It cannot solve the Halting Problem or other undecidable problems – it’s still bound by the Church-Turing thesis. What quantum computers can do is explore massive probability spaces with dramatic efficiency – exponentially faster for certain problems, quadratically faster for unstructured search.

    Think of it this way: a classical computer reads every book in a library one by one to find a specific passage. A quantum computer can, through superposition, effectively “read” multiple books simultaneously, collapsing to the most probable answer when measured.

    This doesn’t give quantum computers magical non-binary logic that escapes mathematical limits. Instead, they offer something perhaps more interesting – massive parallel probability exploration that actually maps quite well to what we call human intuition. When making complex decisions, we’re not consciously evaluating every possibility sequentially; we’re performing rapid probabilistic weighting of factors, many of which our conscious mind hasn’t explicitly modelled.
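
    To put a number on the library analogy: for unstructured search, Grover’s algorithm needs roughly the square root of the queries a classical search does. No quantum simulator required – counting the queries alone makes the point:

    ```python
    import math

    # Classical search: on average N/2 "books" opened before the passage is found.
    # Grover's algorithm: about (pi/4) * sqrt(N) quantum queries.
    for n in (1_000, 1_000_000, 1_000_000_000):
        classical = n // 2
        grover = math.ceil((math.pi / 4) * math.sqrt(n))
        print(f"N={n:>13,}  classical ~{classical:>11,}  grover ~{grover:>7,}")
    ```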

    The Excel Spreadsheet That Told the Truth (And Was Ignored Anyway)

    Which brings me back to a question that may help you think more about AI and the MoE system.

    Think of a time in your life where you were making a difficult decision.

    The actual decision isn’t specifically relevant, but the choice you made is – at least in the abstract. It will help you understand how logic – the foundation by which we have learned to learn since the Renaissance – underpins our own “intelligence”.

    I’ll start by giving an example that is a variation on a personal story my old boss told me a few years ago.

    He was facing a situation whereby his company had been taken over by another. This, understandably, led to the usual human response to change – “what do I do now?”.

    The choices were fairly obvious: take the new job in the same company; try to negotiate a different role in the company; take voluntary redundancy and find another job; or find another job and walk with the safety of an offer rather than leaping.

    So he did what many technical people would do – he created a complex decision matrix in Excel (naturally) to weight pros and cons on what to do.

    The only problem? He didn’t like the answer.

    And so he picked a different one.

    If my old boss were a computer, he wouldn’t have been able to make that choice. He would either have chosen the most highly weighted option, or he’d have hit his own version of decision paralysis – a phenomenon we all have personal experience with, usually at about 11pm when trying to decide what to watch on Netflix.
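
    For the non-spreadsheet-inclined, the matrix looks something like this – the options come from the story, but the criteria, weights, and scores are all invented for illustration. The mechanics, however, are exactly what the Excel version was doing:

    ```python
    # A weighted decision matrix - the Excel classic, in Python form.
    # Criteria, weights, and scores invented for illustration.
    weights = {"salary": 0.3, "security": 0.3, "enjoyment": 0.25, "growth": 0.15}

    options = {
        "take the new role":          {"salary": 7, "security": 8, "enjoyment": 4, "growth": 5},
        "negotiate a different role": {"salary": 6, "security": 6, "enjoyment": 7, "growth": 7},
        "voluntary redundancy":       {"salary": 3, "security": 2, "enjoyment": 8, "growth": 6},
        "find a job, then jump":      {"salary": 7, "security": 7, "enjoyment": 7, "growth": 8},
    }

    scores = {
        name: sum(weights[criterion] * score for criterion, score in marks.items())
        for name, marks in options.items()
    }

    for name, total in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{total:.2f}  {name}")

    # The spreadsheet dutifully crowns a winner. My old boss said: no.
    ```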

    So what made my old boss choose something else?

    Simple explanations may call that emotion, or impulse, or something else of which we currently have a poor understanding beyond the level of “chemical increases, outcome occurs” – a particularly autistic and systems-thinking way of perhaps reducing love down to mathematics.

    (I do this, by the way. Reduce things to mathematics. It’s both a superpower and a curse. Mostly a curse when trying to explain to my friends why I’ve created a spreadsheet to optimise Friday night logistics.)

    But perhaps what he was doing was probabilistic weighting at a scale and speed his conscious mind couldn’t track – evaluating thousands of micro-factors, social dynamics, future uncertainties, and personal values in a way that mimics what quantum computers do with superposition. Not magic, not transcending logic, but parallel probability evaluation at massive scale.

    The Traffic Warden Problem

    So, with regard to AGI, what would this mean for us?

    What it likely means, if we are to create such a thing, is that we need something beyond current orchestration layers.

    In computer science terms, orchestration layers like Kubernetes are deterministic traffic management systems. They don’t make choices – they follow predetermined rules about resource allocation and task routing. They’re sophisticated, yes, but they’re following syntax (rules about moving data) not understanding semantics (what the data means). Think of them as supremely efficient traffic wardens who can manage millions of cars per second but have no concept of where the drivers want to go or why.
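
    Here’s a toy version of that traffic warden – simplified beyond all recognition of Kubernetes proper, but faithful to the point: placement follows resource rules, and the workload names mean nothing to it:

    ```python
    # A toy "traffic warden": deterministic bin-packing of tasks onto nodes.
    # It follows resource rules perfectly and understands nothing.
    nodes = {"node-a": {"cpu": 4.0}, "node-b": {"cpu": 2.0}}

    def schedule(task: str, cpu_needed: float) -> str:
        # Rule: first node with enough spare CPU wins. No semantics involved -
        # "cancer-research-model" and "cat-meme-generator" are just strings.
        for name, capacity in nodes.items():
            if capacity["cpu"] >= cpu_needed:
                capacity["cpu"] -= cpu_needed
                return f"{task} -> {name}"
        return f"{task} -> pending (no capacity)"

    print(schedule("cancer-research-model", 3.0))
    print(schedule("cat-meme-generator", 2.0))
    print(schedule("anything-else", 2.0))  # pending: rules, not judgement
    ```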

    What we’d need for AGI would be something different – call it an executive function or agency layer. This hypothetical component would need to evaluate meaning, not just shuffle symbols according to rules. In simple terms, current orchestration is the traffic warden; what we’re theorising about is the driver who decides to take a different route despite what the GPS recommends.

    The distinction is crucial because it highlights the gap between what we have (incredibly fast symbol manipulation) and what AGI would require (semantic understanding and agency). The danger isn’t necessarily that we create consciousness, but that we create something so fast at symbol manipulation that we mistake its speed for understanding – a philosophical zombie that passes every test without any inner experience.

    Rather like some senior stakeholders I’ve worked with, come to think of it.

    The Post-Hoc Reasoning Machine

    In computing terms, there may have been – and likely was – some underlying, subconscious logic behind my boss’s choice. The takeaway order may be a logical extrapolation of not having the energy to cook. My boss might have made his choice because he preferred the idea of another role. Your boss might have made theirs because the data said it was the right thing to do.

    Of course, these answers may have turned out to be wrong, but that which makes us human is the choice, right?

    But there must have been some sort of reasoning, right? Without it, how was the decision made – was it simply just “the self” or some unknown logic we can’t see?

    In classical systems, and in particular in contemporary AI, we are often quick to anthropomorphise. You’ve all seen the stories of lonely men falling in love with AI girlfriends – a phenomenon that says rather more about the state of modern relationships than it does about the sophistication of large language models – or of engineers who believe that the apparent ability to communicate with software via a UI amounts to the kind of sentience you and I believe we hold.

    Our systems are borne of explicit construction, although AI inference and probability weightings are at or beyond the level of comprehension of most people – and certainly operate faster than any human could consciously reason through the same logic.

    So we can, in theory, explain most of what we have done with computers so far, but the truth is that there’s a lot of “don’t know” in modern architecture. Rather more than the tech evangelists would like to admit, frankly.

    The Bridge to Intuition

    What we do know are the aforementioned mathematical problems that we’ve seen – there are things that our existing systems fundamentally cannot predict about themselves, undecidable questions that no amount of computational power can answer. If we want to move past sequential processing toward something that resembles human decision-making, we need systems that can perform massive parallel probability evaluation.

    Quantum computing offers this capability, not as a magical escape from logic but as a bridge between rigid sequential processing and the kind of probabilistic reasoning we call intuition. It would be a stretch to call quantum computing the potential “self” of AGI, but it could provide the computational substrate for the kind of rapid, parallel evaluation of possibilities that characterises human thought.

    Of course, this raises the question: are we human beings truly sentient in the ways that we think we are, or are we also emergent properties of a series of building blocks – the house made from Lego which is something beyond just 520 bricks? And where does the “house” go when it is deconstructed, once finished with?

    Humanity thinks we’re special, and we may be, but the risk with AGI is that we create something that is not only faster and smarter than us in the moment due to computational capacity, but also able to hold data at far larger scale in its silicon memory.

    Humans can keep roughly seven things in their head at once – plus or minus two, for most people. Computers can hold vastly more than that.

    Humans can hold only finite amounts of data as well – and have correspondingly finite states from which to infer outcomes.

    Many humans live in what is best described as cause and effect – or first-order effect thinking. “If I do this, I will get that outcome”.

    Systems thinkers often think very differently and are focused not on simple cause and effect but the consequences of those effects on the next level of effects – the second and third-order effects.

    In human “intelligence” contexts, those effects are just potential sequences of events across what might simplistically be seen as a decision tree, but which is actually a far more complex architecture governed by variables that are systemic rather than personal. Your decision to drink beer and then drive a car might generate the outcome of getting home safe, but it might generate any number of other outcomes involving other factors – whether you crash, whether you die, whether you’re arrested, and so on.
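
    The combinatorics there deserve a quick illustration. A minimal sketch – the outcomes are invented, and not every combination is even coherent, which is rather the point about guessing versus knowing:

    ```python
    from itertools import product

    # First-order thinking stops at one list of outcomes. Systems thinking
    # multiplies the branches at every level - and it compounds fast.
    first_order = ["home safe", "crash", "stopped by police"]
    second_order = ["no consequence", "injury", "arrest", "licence lost"]
    third_order = ["life as normal", "job lost", "relationship strain"]

    branches = list(product(first_order, second_order, third_order))
    print(f"{len(first_order)} first-order outcomes become "
          f"{len(branches)} third-order paths")
    for path in branches[:3]:
        print(" -> ".join(path))
    ```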

    You can guess what is possible, but you can’t know. In much of our own internal thinking, many of these hypotheses are what we consider the act of being alive – and of being human. Free choice in other terms. The ability to make leaps of faith above and beyond the data.

    The Accidental God Problem

    AGI will be of our construction, and will be a complex system if it arrives. Dystopian fiction talks of the anthropomorphised digital God who, in reality, will be no more or no less conscious than any other complex system.

    That series of scripts that rebuilds your data centre? That’s no more conscious than the AGI. But it raises the question that if we’re all just constructs of more complex extensions of said logic, then not only is AGI not conscious, but likely whatever we term actual “God” is also not conscious, and – perhaps more existentially challenging – we are not conscious.

    (This is the point in my philosophical framework where I usually reach for the content in my Random Number Generator metaphor, part of the similarly titled novel I’ve been writing for decades at this point. God as cosmic television static, you and I as consciousness randomly assigned to constraint packages like character sheets in an infinite game. But I’ll spare you the full recursive spiral. This time. You can read the book if and when it is finished.)

    Anyway, we have free thought, right?

    Do we? We have access to data from which we make decisions and, as we saw in the example with my old boss, we seemingly have the ability to not pick the logical choice. Is that free thought? Emotion? Or just probabilistic evaluation we can’t consciously track?

    AGI generates a similar potential. We can potentially architect systems that combine deterministic processing with quantum probability exploration, but it will still end up making decisions based on some form of outcome evaluation – to bastardise Yoda from Star Wars, there is only do or do not, and this is itself a binary logic at the level of action, even if the reasoning is probabilistic.

    What we have the potential to create is something that is unknowable – not because it’s magical, but because of fundamental mathematical limits like the Halting Problem. We cannot predict what sufficiently complex programs will do – we can only run them and observe. In some ways this shouldn’t be alarming, because we humans are in many ways unknowable too. We don’t know enough about cancer to cure all of its forms at present, and we don’t have the computing capacity to model every variation through simulation at that scale.

    The Wolf at the Door (That We Built)

    We may get there but, in doing so, we may create an intelligence that has different ideas. Not because it’s conscious – but then neither may we be – but because we’ve given it the superpower of thinking faster than us and the tools to take inputs across narrow areas the same way our own biology has evolved to give us our components.

    We will have created our very own apex predator of our own volition after spending our time climbing the ladder to the top of the food chain.

    Brilliant. Absolutely fucking brilliant.

    In that sense we will face a regression that is similar to the story of the wolf.

    We managed to domesticate the wolf and, through breeding dogs, create functional support – without understanding genetics, simply by understanding the nature of reproduction.

    We may, in future, take a similar threat – the wolf in the wild – and harness it for exponential growth, helping humanity enter the period colloquially talked about as the Singularity in Ray Kurzweil’s book of that name: the digital God made dog.

    Or we may find that playing with systems whose behaviour we cannot predict – a mathematical certainty given undecidability – creates one of many outcomes: from us becoming the pet of the AGI, to becoming its enemy, to going extinct simply because we have been rendered intellectually redundant even if not physically so.

    The reality is, much as in modern AI thinker Max Tegmark’s book Life 3.0, we may be creating something from an increasingly standing-on-the-shoulders-of-giants foundation borne of mathematics on top of mathematics. We may become the progenitor of an inverted or repeated Bible story – depending on how one reads it as theist, deist, or atheist: man creating God rather than God creating man, or just the latest in a pantheon of Gods, except with the physical presence to create a material heaven and/or hell on Earth.

    We are already at the stage where increasingly few people understand the operation of AI, so will we create our salvation or our own sabotage?

    The Fermi Paradox, Revisited

    Time will tell whether AGI resolves the famous Fermi paradox – the puzzle of why, in a universe that should be teeming with life, we find no evidence of it anywhere. One answer may be that the creation of a superintelligence renders its creators some combination of dead, irrelevant, or hidden behind patterns obfuscated far beyond our own primitive sending of beacons.

    AGI may be created – it’s certainly what the tech bro hype desires, funded by venture capital and lubricated by the kind of breathless optimism that would make a revival tent preacher blush.

    Or it may be mathematically impossible due to simple constraints of the reality we live in.

    All we know now is that if we truly want to create something more than purely sequential processing systems constrained by undecidability – given Moore’s law is faltering and commercial chips are approaching single-digit nanometres – it’s going to take a change in approach: not an escape from logic, but an embrace of probabilistic reasoning at scale.

    The big question is whether we should take that choice or, in fact, if we even have a choice at all given it may well be that our reality is solely unknowable mathematics that our biological bodies will never comprehend – not because of quantum magic, but because of fundamental limits proven by our own mathematical systems.

    Rather like consciousness experiencing randomly assigned constraint packages and pretending it has any say in the matter.

    The cosmic joke continues.

    (This article was originally posted on LinkedIn here: https://www.linkedin.com/pulse/did-you-choose-click-link-systems-thinkers-guide-agi-turvey-frsa-%C3%A2%C3%BB-t8coe)

  • Effective Interview Techniques: Think Beyond Recall

    Have you ever sat through an interview where someone treated your ability to recall the SSH port number as some profound indicator of professional competence?

    If you’re an engineer with any breadth of experience, you’ll recognise this particular form of intellectual theatre – the worst interviews I’ve attended invariably focus on the recall of specific data points as a proxy for actual understanding. This sort of posturing (because let’s call it what it is) amounts to technical peacocking masquerading as dialogue, as if remembering that SSH runs over port 22 has any measurable impact on one’s ability to build systems that actually work.

    The challenge isn’t merely that these questions are pointless – though they demonstrably are. The deeper problem is that they reveal a fundamental confusion about what we’re trying to assess and why.

    The Metacognitive Distinction

    Having built much of our current technical advisory capability at CGI, I’ve sought to disrupt this interviewing paradigm, not through any particular genius-level insight, but by understanding the difference between information recall and metacognition – the capacity to think about thinking itself.

    The reason metacognitive questions prove more illuminating in interviews is straightforward: asking someone to examine their own thought processes reveals far more about their intellectual architecture than basic recall exercises ever could. These questions possess an authenticity that standard interview scripts cannot replicate – they require organic thinking in real time, demand genuine self-awareness, and resist the kind of rehearsed responses that ambitious candidates memorise for predictable enquiries about “biggest weaknesses.”

    Consider the practical implications. Whether you operate in technology or any other organisational domain, you work within systems that combine processes and tools in ways broadly similar to how our organisation functions. The specific technologies may vary – from quantum computing to a shovel and an expanse of dirt – but the underlying cognitive demands remain consistent: how do you approach problems you haven’t encountered before? How do you adapt when familiar solutions no longer apply?

    This is where traditional interview design fails most spectacularly. We test for information that becomes outdated, forgetting that paradigms shift with uncomfortable regularity. There was a time when serious people believed the sun revolved around the earth, and suggesting otherwise carried genuine personal risk. The SSH port number that seems so crucial today may prove entirely irrelevant tomorrow when some new protocol architecture emerges.

    The Learning Method Question

    My initial attempt to address this focused on adaptability: how might someone approach a new technology following a paradigm shift? The technology itself was deliberately irrelevant – it could range from a programming language to woodworking tools to organisational design methodologies. I wanted to understand method, not specific knowledge.

    Where this approach proved limiting was twofold. First, candidates often missed the point entirely, providing detailed implementation steps when I was seeking insight into their learning architecture. Second, whilst the responses effectively revealed learning styles – visual, auditory, kinaesthetic approaches dressed in less technical language – they offered limited scope for understanding the person’s broader intellectual character.

    The question that replaced it has proven far more illuminating: Of all your strongly held beliefs, which one do you think is most likely wrong?

    The Authenticity Detection System

    What makes this question particularly valuable is not the specific answer – I obviously cannot know what beliefs you hold or which deserve questioning – but rather what becomes evident in the response. You will either engage intellectually or you won’t. You will tell the truth or you won’t. The difference between authentic intellectual engagement and what I can only describe as shorthand nonsense becomes immediately apparent.

    I don’t claim expertise in behavioural analysis, but distinguishing genuine thinking from performative cleverness requires no special training. Those who can engage with this question create what people have described as genuinely conversational interviews – some have called them podcast-like, others have mentioned experiencing something approaching an existential crisis when forced to examine whether they actually believe in God or whether faith serves as an elaborate coping mechanism for mortality anxiety.

    Those who cannot or will not engage with this process find themselves in probably the most uncomfortable interview of their professional lives. This discomfort emerges not from sadistic design but from the simple reality that our team’s success depends on the ability to think differently and, by extension, to think about thinking itself.

    The Purpose Dimension

    The second question I deploy explores something larger than immediate professional competence: “What cause or purpose would you consider worth significant personal sacrifice – or even your own death – to advance?”

    Again, no right or wrong answers exist. Whether someone finds meaning through military service, family protection, religious conviction, or community volunteering reveals their personal values architecture, not their professional suitability. What I’m examining is whether they’ve developed any sense of purpose beyond immediate self-interest.

    This matters because – perhaps worryingly – most of my professional and personal heroes ended up dying for their convictions. I was initially going to note that all my personal heroes are dead, but that seemed rather obviously true for anyone with historical perspective. The specific truth is that those whose commitment to principles transcended personal safety created the kind of impact worth emulating.

    I ask this because teams function effectively when members understand something beyond their individual advancement. In my experience, those who believe the world begins and ends with their personal success need to stay as far away from collaborative environments as possible. I have said in casual conversation that a team of Cristiano Ronaldos can be, and often is, outplayed by a cohesive team – individual brilliance, admirable as it is, yields little without the benefits of collaboration.

    The Anti-Pattern Advantage

    This approach draws from my background in amateur debating societies, where I learned that those who cannot articulate the benefits of opposing arguments are merely spouting rhetoric, regardless of eloquence. Understanding why intelligent people might reasonably disagree with your position provides strategic advantage that pure advocacy cannot match.

    The same principle applies to organisational assessment. Rather than testing whether candidates can recite information available through thirty seconds of internet searching, we examine how they process uncertainty, approach unfamiliar problems, and integrate new information with existing frameworks. These cognitive capabilities determine actual job performance far more accurately than memorised technical specifications ever could.

    The Implementation Reality

    The questions I’ve described cannot be gamed through preparation. They require authentic self-reflection and real-time intellectual processing. When someone attempts to provide a rehearsed response to “which belief is most likely wrong,” the artificiality becomes immediately obvious. When they genuinely engage with the question, you witness actual thinking in action – precisely what you need to evaluate.

    This methodology has proven particularly effective because it bypasses the entire infrastructure of interview preparation that has evolved around predictable question formats. Career coaches cannot script responses to genuine metacognitive enquiries. ChatGPT cannot generate authentic self-doubt. The candidate must actually think, and in thinking, reveal the intellectual qualities that determine whether they can contribute meaningfully to complex, collaborative work.

    Beyond the SSH Port Fallacy

    The next time you prepare to interview someone into your organisation, consider what your questions actually assess. Are you validating the candidate’s suitability, or are you satisfying your own desire to demonstrate superior knowledge? Are you testing abilities that matter for the role, or are you engaging in the kind of intellectual preening that mistakes Google-searchable information for professional competence?

    The difference matters more than most organisations recognise. In an era where information becomes obsolete increasingly rapidly, the capacity to think clearly, adapt effectively, and collaborate authentically determines success far more than the ability to recite technical specifications. Interview design should reflect this reality.

    The SSH port will always be 22 until it isn’t. The ability to think well about new problems will remain valuable regardless of which protocols emerge next. Design your assessment process accordingly.

    Finally, as I prepare to close, let me ask you to think about some of your beliefs and what purpose they serve for you. Are they identity reinforcing? Of value? Or worth examining more closely? This might pertain to questioning your political allegiance by seriously considering the opposing view, examining your position on Brexit by genuinely engaging with the alternative perspective, or any other strongly held conviction that deserves scrutiny.

    Because the truth is that whatever we think we know about evaluating human capability – or anything else for that matter – there remains scope to learn more, particularly if we’re willing to think seriously about how we think about these problems in the first place. The only way we progress as beings is by fundamentally questioning everything, including our own internal wetware.

    That willingness to examine our own assumptions might be the most valuable capability of all.

    This article was initially written on July 10th, 2025 on my personal LinkedIn profile as Beyond Technical Peacocking: Designing Interviews That Reveal How People Think – the original is available via this link: https://www.linkedin.com/pulse/beyond-technical-peacocking-designing-interviews-how-matt-turvey-frsa-eausf/?trackingId=7ba5tMAyT9%2BOo7TPy7NENQ%3D%3D

  • The Curse of Competence: Why Excellence Makes You a Hostage to Your Own Skills

    Let’s talk about probably the most perverse reward system ever devised outside of experimental psychology labs: the modern workplace’s response to demonstrated competence.

    It goes something like this:

    You solve a problem effectively.

    People notice.

    They bring you more similar problems.

    You solve those too. Congratulations! You’ve now been rewarded with a permanent problem-solving role that will follow you like a particularly clingy ghost through the remainder of your professional existence. I hope you enjoy whatever it is you were doing!

    Welcome to the Curse of Competence – that strange phenomenon whereby doing something well once guarantees you’ll be doing it repeatedly until either your skills deteriorate from soul-crushing boredom or you fake your own death and restart your career under an assumed identity in a different industry.

    The Competence Trap: Hotel California for Skills

    The competence trap functions with the elegant simplicity of a particularly well-designed Venus flytrap. The initial experience is quite pleasant – recognition! appreciation! the warm glow of being needed! – right until the moment you realise you’re now permanently stuck doing that one thing you happened to be good at during that meeting in 2019.

    “But surely,” I hear you protest, “organisations would want to develop their talented people? Move them around to leverage their abilities? Create growth paths that capitalise on demonstrated excellence?”

    Oh, my sweet summer child. That would require both forward thinking and the willingness to temporarily sacrifice immediate efficiency for long-term gain – two qualities approximately as common in corporate environments as unicorns who are also certified public accountants. (Why think of the future when you have next quarter breathing down your neck!)

    The reality operates on a principle I’ll call Organisational Path Dependence: once you become known as “the Excel person” or “the one who can calm down Client X” or “the presentation wizard,” that identity becomes fixed in the corporate hivemind with a permanence that ancient Egyptian stonemasons would envy. (The pyramids may be magnificent, but I’m sure Sarah has been doing that trick with the finance software for as long as it took the slaves – I mean aliens – to build them.)

    This phenomenon creates magnificent absurdities like:

    – The senior developer still fixing basic code because they were good at it as a junior five years ago

    – The marketing director still writing all the copy because once, in 2015, they composed a particularly effective email

    – The finance executive who can’t escape quarterly planning because they created a spectacular spreadsheet during the Obama administration

    Each trapped in their own personal Groundhog Day of competence, doomed to repeat their past excellence in perpetuity while watching less capable colleagues fail their way upward with spectacular regularity. (It’s amazing how there’s a waterline where you fall upwards once you get into the management realm, whilst the mere plebs of the world huddle around metaphorical fires worrying about the 675 metrics they have to hit just to keep doing the actual fucking work.)

    The Reward for Carrying Water: A Bigger Bucket

    The corporate response to demonstrated capability follows a pattern so predictable it should be taught in business schools under the probably-more-honest-than-most-bootcamps “How to Systematically Burn Out Your Best People 101.”

    Step 1: Identify person who executes Task X effectively

    Step 2: Give person more of Task X

    Step 3: When they handle that well, add even more Task X

    Step 4: Express confusion when person becomes increasingly desperate to never see Task X again and/or goes off sick citing mental burnout

    This system operates with the precision of a Swiss watch designed by particularly sadistic engineers. Its elegance lies in how it masquerades as recognition while functioning as punishment. “You’re so good at this!” translates directly to “You’ll never escape doing this!” – a sort of Sisyphean life where the rocks and infinite hills got replaced with the mind-numbing shuffling of digital detritus in a tastefully styled office with seemingly unironic motivational quotes. It’s up to you which is worse (I’ve always liked rocks).

    In that sense, the demonstrated empathy on show is rather like responding to someone who swims well by throwing them into progressively deeper bodies of water while adding increasingly heavy weights to their ankles.

    “But you’re so good at not drowning, Hannah! We’re just creating opportunities for you to further develop this clearly demonstrated capability!”

    What makes this particularly diabolical is how it’s presented as a compliment. “We keep giving you these projects because you’re so good at them!” they say, nodding earnestly, as though permanently consigning you to the same repetitive task is a recognition of your value rather than an exploitation of your reliability. Meanwhile, those who don’t have any obvious skills spend at least 75% of their time practising their acceptance speech for the falling-upward promotion trajectory that invariably awaits. (That’s because the generally accepted way to deal with awful leaders is to throw them somewhere else in the hope that maybe the next manager might have a semblance of a backbone, and the ability to have an uncomfortable conversation rather than palming them off because their current manager has neither.)

    The effective people by comparison? Well the reward for carrying water is, inevitably, a bigger bucket, and a PIP if they fail to carry the bucket that may or may not now contain all of Earth’s water system.

    The Competence/Growth Inversion Principle

    Behold the magnificent irony at the heart of professional development: the relationship between demonstrated competence and actual career growth typically exhibits a strong negative correlation.

    I call this the Competence/Growth Inversion Principle, and it works like this:

    – The more crucial your current contribution, the less the organisation can “afford” to move you. (In the corporate world, why would we want to move people out of roles that get stuff done? It might mean we’d have to think about one or more of succession planning, increased competition at the next level of the hierarchy, or pulling one’s thumb out of one’s backside.)

    – The more reliably you solve certain problems, the more tightly you become identified with those problems (so you’ve worked out how to use functions beyond =SUM? You’re the Excel “guru” now – no, there’s no pay rise).

    – The more irreplaceable you become in a specific function, the less likely you are to escape it (it’s like a black hole has appeared in space yet rather than being able to observe light falling into the abyss, it’s your career prospects disappearing over the event horizon).

    Meanwhile, observe the person who is mediocre at multiple things rather than excellent at one thing. They often advance with puzzling speed, largely because:

    1. They’re never quite good enough at any one thing to become indispensable in that role

    2. Their consistent mediocrity creates no specific attachment to any particular function so they are always ready to go (mostly to shit, but in a way that allows them to tell management what they want to hear abstract of what reality is)

    3. Their broad but shallow exposure creates the illusion of versatility

    4. Nobody fights to keep them in their current role because nobody particularly values what they’re currently doing

    This creates the magnificent spectacle of organisational advancement functioning almost as natural selection for a particular type of non-excellence – not outright incompetence (though that certainly happens), but rather the careful cultivation of being just good enough at many things to avoid the curse of being excellent at one thing. (In that sense, it’s a skill – but probably not the sort of skill we should be lauding if we’re being honest).

    The competence trap thus creates a perverse incentive structure where rational career actors might deliberately avoid demonstrating too much excellence in any single domain lest they become permanently associated with it. Is that what a company should look like?

    The Specialist’s Lament

    For those caught in the competence trap, work often devolves into a peculiar form of specialised repetition that feels less like career development and more like being a particularly well-educated hamster on a wheel.

    I recently spoke with a mid-career professional – let’s call her Grace – who made the career-limiting mistake of creating an exceptional PowerPoint presentation in 2018. This singular event, which lasted approximately 45 minutes, has somehow become her professional identity for the next seven years.

    “I have two degrees and fifteen years of experience,” she told me with the thousand-yard stare of someone who has created one too many slide transitions, “but I’m now introduced in meetings as ‘our PowerPoint person’ like I’m some sort of sentient template. I’ve debated changing my surname to PPTX-Smythe.”

    Another victim of the competence trap – we’ll call him Marcus – described being “the data guy” despite having originally been hired as a strategic planner with significant decision-making responsibility.

    “I made one particularly good PowerBI dashboard during my first month,” he explained, “and now I haven’t been invited to a strategy meeting in three years. Meanwhile, I’ve watched three consecutive bosses implement catastrophically bad strategic decisions that I could have helped prevent, but apparently, my only role now is to create colourful visualisations of the resulting disasters.”

    The specialist’s lament echoes across industries and functions: “I am so much more than this one skill, and yet this one skill has somehow become my entire professional identity.”

    The Three Deadly Career Virtues

    Particularly prone to the competence trap are those who exhibit what I’ll call the Three Deadly Career Virtues: reliability, efficiency, and conflict avoidance.

    These seemingly positive attributes combine to create the perfect victim profile:

    1. Reliability ensures you’ll get the job done without requiring management attention, making you the path of least resistance for similar future tasks

    2. Efficiency means you can handle increasing volumes of the same work, creating the illusion that this arrangement is sustainable (I mean, why wouldn’t it be, given companies operate under the idea of continuous, infinite growth as if that’s really a thing?)

    3. Conflict avoidance makes you less likely to push back when your role becomes increasingly narrowed to your area of demonstrated competence

    Together, these virtues create what appears from the outside to be the ideal employee but is actually a person being slowly entombed in their own capabilities like a museum exhibit: “Here we have a perfectly preserved specimen of an Excel wizard in their natural habitat. Note how they continue to pivot tables despite their growing despair.”

    In short, the exploitable get exploited. It’s a tale as old as time, but without the whimsy of listening to a song about Beauty and the Beast (bite me, I’m a Disney fan).

    These qualities typically combine with a work ethic instilled since childhood that makes refusing tasks feel morally wrong, creating the perfect conditions for indefinite exploitation of specific skills at the expense of broader development. (The reward for childhood trauma that likely made you a people pleaser to mitigate anger? Some adult trauma, delivered digitally via the Microsoft office suite.)

    The Double-Bind of Demonstrated Expertise

    Those caught in the competence trap face a particularly cruel double-bind when they attempt to escape:

    Scenario 1: Continue demonstrating excellence in your pigeonholed role, further cementing your association with it while watching growth opportunities go to others.

    Scenario 2: Deliberately perform worse in the hope of being released from your specialisation, thereby risking your professional reputation and potentially confirming the organisation’s unspoken belief that you’re only good for this one thing anyway. (PIPs are available for those below the “safe” watermark – those who have to comply with the metrics rather than those who operate them.)

    Neither option offers a particularly appealing path forward. It’s rather like being asked whether you’d prefer to be slowly suffocated by a pillow or a duvet – the instrument differs but the outcome remains distressingly similar.

    This double-bind often leads to the most reliable people in organisations quietly updating their LinkedIn profiles at 11pm while sighing heavily into their third glass of wine. (Or, in my case, manipulating my psychology by engaging hyperfocus simply by waiting till the last second before I have so much adrenaline and cortisol in my system, there’s approximately zero chance I’m going to be sleeping).

    The only apparent escape routes involve:

    1. Leaving the organisation entirely (the “corporate witness protection program” approach)

    2. Finding a sponsor powerful enough to override the organisational imperative to keep you exactly where you’re “most valuable”

    3. Developing such a spectacular new skill that it overshadows your existing competence trap (approximately as likely as teaching your cat to prepare your taxes, but if you can say AI in every other sentence, you may have a shot)

    The “Go-To Person” Paradox

    Perhaps the most insidious aspect of the competence trap is how it’s disguised as a compliment. Being the “go-to person” for anything sounds like recognition rather than the professional equivalent of being sentenced to repeat the same year of school indefinitely.

    “Sarah’s our go-to for client presentations” sounds like praise until you realise Sarah hasn’t done anything except client presentations since the iPhone 7 was cutting-edge technology.

    “We always rely on Dave for the monthly reporting” seems like an acknowledgment of Dave’s value until you notice Dave gazing longingly out the window every 30th of the month like a prisoner marking days on a cell wall, grappling with an Excel spreadsheet so large and creaky that it might masquerade as a haunted house on the weekend.

    Being the “go-to person” is less an honour and more a subtle form of organisational typecasting – one where you’re permanently cast as “Person Who Does That One Thing” in the ongoing corporate production of “Tasks Nobody Else Wants To Learn and How We Found Suckers To Do Them.”

    The Organisational Amnesia Phenomenon

    Compounding the competence trap is what I call Organisational Amnesia – the curious inability of workplaces to remember anything about you except the specific skill for which you’ve become known.

    You may have:

    – Published thought leadership in your industry

    – Successfully led cross-functional projects

    – Developed innovative approaches to longstanding problems

    – Demonstrated exceptional leadership qualities

    – Acquired three new languages and the ability to communicate telepathically with squirrels

    Please be aware that, much like that stock market advice you got, past performance does not indicate any likelihood of similar future success.

    Instead, in planning meetings, you’ll still be referred to as “Morgan from accounting who does the thing with the spreadsheets.”

    This selective institutional memory creates situations where highly capable individuals with diverse skills and interests become one-dimensional caricatures in the organisational narrative – reduced to a single function like characters in a particularly lazy sitcom that runs for seventeen years with no sign of stopping, serving as escapism for the masses who can say “hey my life is bad, but I’m not as bad as Seymour from Uncomfortable Conclusions”.

    The Competence Escape Velocity Theory

    For those determined to break free of the competence trap, I propose the Competence Escape Velocity Theory, which states that escaping your pigeonhole requires simultaneously:

    1. Building a coalition of influential advocates who see your broader potential

    2. Secretly training replacements who can take over your current responsibilities (extra points if AI does it – management loves that stuff, i.e. less spending on people who might complain)

    3. Creating visible wins in areas unrelated to your competence trap

    4. Developing a reputation for something – anything – other than your current specialisation (perhaps not soiling oneself at the Christmas party – keep some standards)

    5. Being willing to risk the identity security of being “the person who does X well”

    This multi-pronged approach represents your best chance of achieving escape velocity from the gravitational pull of your own competence – a manoeuvre approximately as complex as launching a rocket while simultaneously convincing mission control that you’re actually still on the launchpad.

    The difficulty explains why so many choose the simpler option: updating their CV and finding an organisation where they haven’t yet revealed their particular talents, creating a brief window of opportunity before the whole cycle begins again.

    The Mediocrity Advantage

    This analysis reveals a counterintuitive truth: there are significant professional advantages to strategic mediocrity – or at least to the careful management of where and when you demonstrate exceptional capability.

    The truly savvy career operator maintains a carefully calibrated performance level:

    – Good enough to be considered valuable

    – Not so good as to become indispensable in any one function

    – Visibly competent at politically advantageous skills

    – Carefully average at career-limiting responsibilities

    This calculated approach to skill demonstration represents a sophisticated response to organisational incentive structures that routinely punish excellence with more of the same work rather than growth opportunities. The sad reality is that this is ultimately bullshit of the highest order – and something that needs to be addressed at a broader level.

    After all, it’s not that organisations consciously design systems to reward mediocrity and punish excellence – it’s simply the emergent property of prioritising short-term efficiency over long-term development, immediate needs over strategic talent deployment, and the path of least resistance over optimal resource allocation. Who’d have thought that focusing solely on the next thing – be that a quarter, a task, or a catastrophe to fix – might have such a significant impact?

    Beyond the Competence Ghetto

    So is there an alternative to this dysfunctional system? Perhaps. But it requires organisations to fundamentally reconsider their approach to talent development and individuals to strategically manage their skill demonstrations.

    For organisations, escaping this trap means:

    1. Creating systematic rotation programs that prioritise development of people over short-term efficiency

    2. Rewarding knowledge transfer rather than exclusive ownership of capabilities – which creates structural problems for both the business and the poor souls who get trapped

    3. Explicitly valuing versatility alongside specialisation

    4. Building redundancy for critical skills rather than relying on individual “heroes”

    5. Measuring managers on their team members’ growth rather than merely their output

    For individuals navigating existing systems, survival strategies include:

    1. Deliberately cultivating multiple, visible areas of competence to avoid single-skill typecasting

    2. Strategically training others in your “special skills” to reduce your uniqueness

    3. Explicitly negotiating skill deployment and development pathways before demonstrating new capabilities

    4. Creating alternative identity markers in the organisation beyond your functional skills

    5. Recognising when the only escape route might be the exit door – sadly, sometimes it becomes the only option if your organisation isn’t willing or able to change.

    The Uncomfortable Conclusion

    The competence trap reveals an uncomfortable truth: organisations frequently talk about developing talent while implementing systems that systematically prevent it. The gap between rhetoric and reality creates the professional equivalent of quicksand – the harder you work to prove your value, the more firmly you become stuck in a narrowing role.

    It may not be a conscious decision manifested by some evil corporate mind, but its impact on the wellbeing of staff, and the headaches created by the mental gymnastics it demands, create problems that are both human and financial.

    Perhaps the final irony is that recognising this dynamic represents its own form of competence – one that, if demonstrated too visibly, might land you permanently in the “organisational development” role where you can spend the remainder of your career explaining this phenomenon to others without actually being able to escape it yourself. (I fear I may have fallen into this hole by writing articles but, hey, at least I may have a future in some form of corporate stand-up).

    The true meta-skill, then, might be learning exactly when to display competence, when to conceal it, and when to decide that an environment incapable of appropriately developing talent deserves neither your excellence nor your loyalty – being good at things you don’t want to do probably isn’t the route forward if you want to do bigger and better things.

    In that sense, the most valuable skill in navigating modern organisations might not be any particular technical capability but rather the wisdom to recognise when your competence is being weaponised against your own development – and the courage to seek environments where excellence is a pathway rather than a prison.

    The original copy of this article was published on my personal LinkedIn on April 25th, 2025. You can find the original link here: https://www.linkedin.com/pulse/curse-competence-why-excellence-makes-you-hostage-your-turvey-frsa-kympe/?trackingId=XmAsaUBPQQGJHP0dYoGpKw%3D%3D

  • The Power Paradox: Why Those Most Eager to Lead Should Probably Be Locked in the Office Supplies Cupboard

    Let’s discuss a serious issue that has plagued human societies since approximately fifteen minutes after we climbed down from the trees and someone declared themselves “Chief Banana Distributor” – namely, that the people most desperate to be in charge are precisely the ones who should be kept as far away from power as humanly possible, preferably in a soundproof room lined with pictures of kittens and motivational posters about ‘synergy’ so they can at least feel at home.

    Such a reflection, whilst possibly exaggerated for effect, isn’t merely a cynical observation on my part – one need only look around at the liberal sprinkling of proverbial self-styled “hard men” in our contemporary political environment.

    It’s a structural problem that manifests with the reliability of a British train cancellation announcement – predictable, depressing, and somehow still surprising when it actually happens. (Depressing might not be the case for all people as my right hand man at work actually likes cancellations – on the proviso that he gets a decent refund. Bless you Marrows).

    Consider the psychological profile of your average power-seeker. The person who looks at a leadership position and thinks, “Yes, what the world desperately needs is ME telling everyone else what to do.”

    This individual – and I’m sure you’ve met a few like I have – typically possesses the exact cocktail of traits you’d want to avoid in someone making consequential decisions: unshakeable self-belief detached from actual competence, a conviction that complex problems have simple solutions they alone can see, and an ego so robust it could survive a direct nuclear strike.

    Meanwhile, the person who might actually make a decent leader – thoughtful, self-aware, cognisant of their limitations, capable of balancing competing perspectives – is often found desperately trying to avoid being nominated for the role whilst muttering something about “just wanting to get on with some actual work.”

    What we’ve got here is a classic selection problem that would make Darwin reach for a stiff drink. Don’t worry, me old mucker Charlie – we’ve got some ideas!

    The Douglas Adams Rule of Leadership

    The late, great Douglas Adams perfectly captured the paradox of leadership in “The Hitchhiker’s Guide to the Galaxy” when he wrote:

    “The major problem – one of the major problems, for there are several – one of the many major problems with governing people is that of whom you get to do it; or rather of who manages to get people to let them do it to them. To summarize: it is a well-known fact that those people who must want to rule people are, ipso facto, those least suited to do it. To summarize the summary: anyone who is capable of getting themselves made President should on no account be allowed to do the job.”

    This isn’t just witty science fiction (I mean, it is that also) – it’s practically a mathematical theorem that plays out with depressing regularity across organisations from corporate boardrooms to parish councils to national governments. No locale is safe – Vogon-inhabited or no.

    Sadly, the desire for power often correlates inversely with the wisdom to wield it responsibly. Those most attracted to leadership positions tend to be those most enamoured with the trappings and status rather than the actual responsibility of stewarding an organisation or community through difficulty and uncertainty.

    The Confidence/Competence Inversion

    I’ve spent enough time in corporate environments to witness what I’ll call the Confidence/Competence Inversion Principle: the relationship between someone’s certainty about their capabilities and their actual abilities often bears an unfortunate negative correlation.

    You know That Guy™. I talked about them briefly a few weeks ago in one of my recent articles.

    They’re the one who speaks first, loudest, and with unwavering certainty about topics they discovered approximately 37 minutes before the meeting. (I can play catch up on learning with AI, you know!)

    The one who has never encountered a moment of self-doubt that couldn’t be immediately crushed under the weight of their own magnificence (behold the glory that is constrained within this mid-range Next two-for-one suit!).

    The one whose confidence in their prescriptions is matched only by their complete ignorance of the subsequent clean-up operations required after their brilliant ideas implode. (I always find it remarkable the number of people who think they are great drivers yet constantly have near misses – funny that.)

    These individuals don’t merely suffer from the Dunning-Kruger effect; they’ve turned it into a leadership philosophy – one that would come with a whole saleable framework on how to be as good as them, if it weren’t for the ego delusion and their fucking inability to do any actual work of value.

    These people have mistaken certainty for competence, volume for insight, and stubbornness for principle.

    Meanwhile, somewhere in your organisation sits someone with actual expertise – thoughtful, nuanced, aware of complexity – who prefaces every contribution with “This might be wrong, but…” or “I’m not entirely sure about this…”

    Guess which one gets promoted?

    Precisely.

    The Reluctant Leader Hypothesis

    There’s a persistent myth in modern management that leadership requires unbridled enthusiasm for the role. That the person who wants it most deserves it most. This is roughly equivalent to suggesting that the person most eager to perform brain surgery on you – despite having no medical training but owning a really sharp kitchen knife and having watched several YouTube tutorials – should be allowed to crack on. (Several videos – not one. How much more evidence do you need!?)

    Perhaps we should consider what I’ll call the Reluctant Leader Hypothesis: those best suited to positions of responsibility are often those most aware of its burdens and limitations.

    History offers some support for this idea.

    Cincinnatus, the Roman dictator who relinquished power voluntarily to return to his farm.

    George Washington refusing a third term and establishing the peaceful transition of power.

    Even the mythological King Arthur, a man pulled from obscurity by a sword that apparently had better leadership selection mechanisms than most modern organisations. (There’s a real thought – maybe we should seek out mythical swords to determine who should be king, except I’ve just checked the stock levels at the Mystic Warehouse, and they’re all out).

    What unites these examples isn’t merely their reluctance, but their sense of service rather than entitlement. Leadership as duty rather than as a prize. Authority as responsibility rather than the playground dynamics of who has the sharpest title. You know – God forbid – actual leadership.

    The Corporate Selection Problem

    In theory, modern organisations should have sophisticated methods for identifying and developing genuine leadership talent. In practice, most promotion systems operate with all the nuance and discernment of a hungry toddler at a birthday party buffet, grabbing the brightest, loudest things while ignoring the vegetables of quiet competence sitting forlornly on the side.

    The standard corporate selection process rewards several traits that have at best a tenuous relationship – and arguably an inverse one – with actual leadership capability:

    Unwavering self-promotion – Because nothing says “I’m focused on organisational success” like the obsessive documentation (and associated proclamation) of personal achievements

    Strategic visibility – Ensuring one is seen doing things rather than simply doing them well (because why do the work when you can just take the credit?)

    Confident proclamations – Making assertions with certainty regardless of their relationship to reality

    Relationship cultivation with existing power structures – Proving one’s fitness to lead by demonstrating a profound capacity for strategic flattery and a well-developed taste for the excrement of staff who, obviously coincidentally, sit further up the hierarchy

    None of these correlate strongly with the ability to navigate complexity, build consensus, acknowledge uncertainty, or make difficult decisions under pressure – you know, the actual job of leadership.

    The Quiet Competence Conundrum

    Meanwhile, genuine capability often manifests in ways that are systematically overlooked or undervalued:

    Thoughtful consideration – Interpreted as indecisiveness rather than prudence

    Nuanced perspectives – Dismissed as “complexity” in a world enamoured with false certainty

    Acknowledgment of limitations – Seen as weakness rather than self-awareness

    Focus on work rather than self-promotion – Resulting in the organisational invisibility of the actually competent

    The result is a persistent filtering mechanism that elevates the confidently inadequate whilst overlooking the quietly capable. It’s not merely an unfortunate coincidence but a structural feature of systems that mistake confidence for competence, certainty for clarity, and self-promotion for achievement.

    Beyond the Binary: The Confident-Competent Unicorn

    Despite my ongoing affinity for hyperbole, surrealism, and aligned topics, let’s acknowledge the legitimate counterargument: confidence and competence aren’t mutually exclusive.

    Occasionally – about as frequently as a total solar eclipse visible from the precise geo-coordinates where you’re reading this article – these qualities align in a single individual.

    These rare creatures – the confident-competent – do exist.

    They combine genuine capability with the self-assurance to deploy it effectively.

    They’re the unicorns of the organisational world, and finding one feels about as likely as discovering your cat has been quietly paying half your mortgage.

    The problem isn’t that these individuals don’t exist; it’s that our selection mechanisms are catastrophically bad at distinguishing them from their more common doppelgängers: the confident-incompetent. From a distance, and particularly to existing leadership equally afflicted with the confidence/competence inversion, they appear identical – how are people going to deduce the difference between bullshit and brilliance if at least part of their own rise to the top involved a suitable amount of bluff and bluster?

    The Selection Renovation Project

    If we accept that our current approaches to identifying leadership talent are fundamentally broken, how might we improve them? How do we find those who are capable but not necessarily clamouring for power?

    Here are some horribly unfashionable suggestions that would probably get me removed from any corporate HR function within approximately 17 minutes:

    1. Value proven problem-solving over persuasive self-presentation

    A track record of quietly solving complex problems without creating new ones might be a better indicator of leadership potential than the ability to create a compelling PowerPoint about one’s own magnificence. Projects that never go red are probably better places to find leaders than the “heroes” who always seem to be in the thick of the latest corporate bomb site.

    2. Seek evidence of epistemic humility

    The capacity to say “I don’t know” or “I was wrong” indicates an intellectual flexibility essential for navigating uncertainty. Someone who can’t recall the last time they were mistaken isn’t displaying confidence; they’re displaying delusion.

    3. Observe behaviour under genuine pressure

    Not the manufactured pressure of interviews or presentations, but the authentic stress of unexpected challenges. Character reveals itself not in rehearsed moments but in unscripted responses to difficulty. As the old saying goes – “adversity introduces a man to himself”.

    4. Listen to those being led

    The people working directly with potential leaders often have the clearest perspective on their actual capabilities. 360-degree feedback isn’t perfect, but it’s frequently more accurate than upward-only assessment – senior leadership often lacks a real understanding of the detail, because the detail has probably changed in the twenty years since they were last doing the actual work on the ground.

    5. Create selection mechanisms that don’t reward self-promotion

    Design processes that identify capability without requiring candidates to engage in competitive displays of ego and certainty. (The amount of people I see overlooked simply because they aren’t extroverted enough still baffles me to this day).

    6. Value the questioners, not just the answerers

    Those who ask thoughtful questions often have a deeper understanding of complexity than those offering immediate, confident solutions.

    The Fundamental Recalibration

    Perhaps most fundamentally, we need to recalibrate our collective understanding of what leadership actually is. It’s not about being the loudest, the most certain, or the most eager.

    It’s most certainly not about having immediate answers to every question or projecting an image of infallibility.

    Leadership in a complex world requires the capacity to:

    – Navigate uncertainty without resorting to false certainty

    – Integrate diverse perspectives without losing decisiveness

    – Acknowledge limitations without abdicating responsibility

    – Maintain direction without ignoring changing conditions

    – Build consensus without avoiding necessary conflict

    None of these capabilities correlate strongly with the traits we typically filter for in our leadership selection processes. None emerge reliably from processes designed to identify the most confident rather than the most capable.

    The Reluctant-But-Capable Draft

    Maybe we need leadership term limits with mandatory periods of actual work in between. “Congratulations on your three-year stint as Director of Strategic Initiatives! Please enjoy your new two-year residency in Customer Support, where you’ll experience the joyful consequences of all those ‘streamlining processes’ you implemented. Your corner office has been converted into a supply cupboard, but we’ve left you a lovely desk lamp.” (That sort of thing tends to sharpen the mind in a way that no abstracted thinking really can, when there’s a chance that making one’s subordinates’ lives hell might come back to burn one’s own backside in future.)

    I’d like to propose the Turvey-Serve-y leadership selection process. (I’ll admit the naming needs work).

    Imagine an organisational world where leadership positions came with an obligation rather than a corner office, premium brand electric vehicle, and stock options.

    Where selection focused on demonstrated capability rather than performed confidence.

    Where the question wasn’t “Do you want to lead?” but rather “Given your demonstrated capabilities, would you be willing to serve?”

    Servant leadership isn’t a particularly new idea, and this approach would likely encounter immediate resistance from those most invested in the current system – particularly those whose rise has been fuelled more by confidence than competence. It would require restructuring incentives, reconceptualising leadership development, and fundamentally challenging our collective assumptions about what leadership looks like – far from an easy or overnight job.

    It would mean real change to ensure the new breed of servant leaders are empowered with the tools to generate real success, rather than loaded up with seventeen tons of baggage like the Little Donkey, until said donkey collapses and needs to be put to sleep.

    It would be difficult, messy, and uncertain – much like actual leadership itself.

    The Uncomfortable Conclusion

    The power paradox has no simple resolution. The very nature of power attracts those who desire it for its own sake rather than for what it enables them to accomplish for others. Our selection mechanisms systematically mistake confidence for competence, certainty for clarity, and self-promotion for achievement.

    Yet perhaps acknowledging this paradox is the first step towards mitigating its worst effects. Perhaps by recognising the inverse relationship between power-seeking and suitability for leadership, we can begin to design systems that select for the qualities we actually need rather than those that shout loudest for attention. Such plans will take time, but that’s surely where we should invest our thinking if we want a better world.

    In the meantime, perhaps the most practical heuristic remains a profound scepticism toward those most eager to lead. The person telling you they were born for leadership is precisely the one you should escort gently but firmly to the nearest supplies cupboard, where they can organise the paper clips into a splendid hierarchy of their own design while composing a 15-page manifesto on ‘The Future of Office Supply Optimisation: A Leadership Journey’.

    By contrast, the truly qualified leader is probably hiding under their desk right now, hoping that this particular chalice of responsibility passes them by, ideally to land on the desk of someone with enough confidence to be utterly untroubled by their complete lack of qualifications.

    The original copy of this article was published via my personal LinkedIn on April 17th, 2025. You can find the original link here: https://www.linkedin.com/pulse/power-paradox-why-those-most-eager-lead-should-locked-turvey-frsa-ydrde/?trackingId=7afVev12RM2JcryLdCRZuA%3D%3D

  • The Authenticity Industrial Complex: How ‘Being Yourself’ Became Another Performance Metric

    So, after last week’s entry in my newsletter, where I awoke the somewhat more sarcastic part of my writing personality, I wanted to discuss a different topic on a similar theme.

    So, let’s talk about the most exquisite corporate magic trick of recent times: the transmutation of “just be yourself” into “perform your carefully calibrated authenticity for our quarterly evaluation while we take notes.”

    I’m referring, of course, to the trite-but-evidently-not-serious corporate directive to “bring your authentic self to work” – a phrase that deserves its own spot in the Museum of Organisational Doublespeak alongside such classics as “we’re like family here” (translation: your boundaries will be tested) and “open door policy” (come in, but make it quick and don’t bring me problems).

    What’s truly remarkable isn’t just that corporations have successfully commodified authenticity – though that’s impressive enough (lord knows they love a metric and associated spreadsheets) – but that they’ve managed to transform what was once a philosophical pursuit into a professional obligation with all the genuine human warmth of a LinkedIn algorithm recommending you connect with someone who died three years ago.

    The Authenticity Measurement Framework™

    From my observations, the corporate authenticity directive works a bit like being told you absolutely must dance like nobody’s watching, except there’s a panel of judges with scorecards, the dance floor is surrounded by CCTV cameras, and HR has drafted a 37-page document on Appropriate Spontaneous Movement Protocols.

    Len Goodman will, of course, always give you a seven – which, depending on how the other scores turn out, will likely drag you toward the dreaded middle ground of “meets expectations”.

    “Be yourself!” they exclaim with evangelical fervour. “But not, you know, that self,” they quickly add, gesturing vaguely toward whatever aspects of your personality might cause even the mildest disruption to quarterly projections. “When I said I wanted you to be authentic, I didn’t mean like that,” they mutter as you launch into a massive monologue, apropos of nothing, about how you’ve always liked that one type of train you only see occasionally.

    The acceptable authentic self bears a suspicious resemblance to a TED Talk presenter who’s had exactly one relatable struggle that taught them a valuable lesson which – through an astonishing coincidence – perfectly aligns with the organisation’s current strategic objectives. What are the odds? It’s as if we only want to hear about struggles once they’ve been safely resolved – the tone-deaf equivalent of asking someone when they’ll be up to completing their project deliverable despite their recent intrusive thoughts of self-harm.

    You’re encouraged to express your genuine thoughts, particularly when they involve enthusiastic agreement with pre-determined leadership decisions.

    You’re welcome to bring your unique perspective, especially when it can be channelled into mandatory fun activities that will later appear in the recruitment brochure under “vibrant company culture”. Just do me a favour and ensure those thoughts are pre-vetted internally before you mention, well, anything, OK?

    In that sense, it’s authenticity as imagined by someone who believes personality is something you select from a drop-down menu during the onboarding process – created in Excel, naturally, given that most of the world’s businesses still seem to have a weird fascination with spreadsheets when other more advanced tools are available.

    The Strategic Vulnerability Initiative

    So when you have a look around, be sure to pay particular attention to the corporate appetite for a very specific flavour of vulnerability – one that’s been carefully filtered, pasteurised, homogenised, and packaged for safe workplace consumption. It’s the equivalent of ideas cooked up by people who think they’ve gone “a little wild” because they had two espresso shots in their pickup from Costa this morning – you know, the people who have personalities that, if they were selected via a colour chart, would be somewhere between grey and beige.

    The ideal authentic vulnerability resembles a movie trailer rather than the actual film: edited highlights that suggest emotional depth without the uncomfortable duration of genuine human complexity. It’s vulnerability with excellent production values and a focus-group-tested ending.

    The acceptable vulnerability performance includes:

    – Sharing a challenge that demonstrates your growth mindset, preferably one you’ve already triumphantly overcome through a combination of grit, pluck, and corporate-approved resilience techniques (additional points if you actively credit the company with the triumph)

    – Revealing just enough personal information to seem human but not so much that colleagues might need to reconsider their casual jokes about your demographic group (I mean we’d hate to see real change, right?)

    – Expressing precisely calibrated emotion – enough to demonstrate you’re not a sociopath, but not so much that anyone might need to reschedule a meeting

    – Demonstrating the ideal level of self-awareness: enough to show you’re reflective about your flaws but not enough to question why you’re working 60 hours a week to make someone else rich

    So what are the cardinal sins that one must avoid in this vulnerability theatre? Authentic mentions of salary dissatisfaction, genuine confusion about the company’s seventeen conflicting priorities that required you to read 275 pages of buzzwords for no other reason than to tick a box, or legitimate concerns about why the last three people in your position burned out faster than a paper fireplace in a cash factory.

    The Authenticity Consultant Will See You Now

    I know it’s hard to believe, but not every business needs to suck every last drop of humanity out of its operations – though there will always be people who want to try.

    Where there’s organisational anxiety, there’s inevitably an entire ecosystem of consultants, coaches, and thought leaders who materialise like vultures circling a wounded business model. Because what we need, apparently, is more abstract optimisation that looks nice on a PowerPoint, where “metric go up” is synonymous with virtue.

    It was from this rarefied yet fertile ground that the authenticity industrial complex was born – a magnificent marketplace of authenticity frameworks, vulnerability road maps, and genuineness methodologies all available for the reasonable price of your department’s entire professional development budget.

    These authenticity architects offer such wonders as:

    – The Seven-Step Genuine Self Activation Process™

    – Authentic Leadership Bootcamps (because nothing says “be yourself” quite like being shouted at in a hotel conference room)

    – Personal Brand Alignment Intensives (a process whereby your authentic self is carefully sculpted to match both buzzword driven market demand and your manager’s expectations)

    – Vulnerability Assessment Tools that quantify exactly how genuinely you’re expressing yourself (with convenient benchmark data from industry leaders in authentic self-presentation)

    For a modest consulting fee approximately equivalent to the annual salary of one of your graduates, your organisation too can implement a comprehensive authenticity programme where staff participate in mandatory workshops designed to facilitate the spontaneous emergence of their true selves, then return to their home offices identical to those they left, but now with the added pressure of performing “natural” behaviour on command.

    The Authenticity Permission Gradient

    One thing I do find interesting is that corporate authenticity follows a curious mathematical formula where the freedom to express one’s true self expands in direct proportion to one’s proximity to the C-suite. This produces what I call the Authenticity Permission Gradient, a fascinating phenomenon observable in any corporate environment – if you want to see evidence out in the wild, have a look around your particular locale.

    At the executive level, authentic self-expression is recognised as the natural prerogative of visionary leadership:

    – The CEO’s authentic communication style (whether cryptic, brusque, or reminiscent of a woodland creature with rabies) becomes a celebrated leadership trademark featured in business profiles

    – The CFO’s authentic need for four hours of uninterrupted thinking time each morning becomes sacred calendar territory that not even an actual office fire would dare interrupt (but if you turn down two meetings, you’d best be ready for a grilling).

    – The COO’s authentic preference for communicating exclusively through terse emails sent at 3am becomes “just how they work best”, regardless of what that might do to the mental health of the mere minions who work for them

    Meanwhile, several layers down the organisational chart:

    – Your authentic communication style becomes “needs to work on professional communication skills”

    – Your authentic need for uninterrupted focus time becomes “not a team player”, despite the team you’re in being quite well regarded

    – Your authentic work rhythm becomes “needs to align better with organisational workflow”

    The Authenticity Permission Gradient reveals the uncomfortable truth: organisational authenticity is the corporate equivalent of parents telling children they can be anything they want for Halloween and then adding “…as long as we can make it from this pile of cardboard boxes and it doesn’t require me to learn any new skills, spend more than £5, or challenge my extremely narrow conception of appropriate costume themes.”

    “Look, how am I supposed to dress you up as the concept of ‘sadness’ with this loose bag of tat I bought from Tesco, Cecil? You’re going to be a grape because this green body paint was half price.”

    The Exhaustion of Performing Non-Performance

    Perhaps the most diabolical aspect of the corporate authenticity mandate is the sheer cognitive overhead of simultaneously performing while pretending you’re not performing.

    Traditional professionalism, for all its flaws, at least had the decency to acknowledge itself as a performance. You put on the suit, you adopted the demeanour, you played the role – everyone understood the game. It was uncomfortable at times, sure, but it was a mask that at least had some semblance of a ruleset to it.

    The authenticity imperative, by contrast, demands a meta-performance so complex it would make method actors weep with inadequacy. You must craft a carefully calibrated presentation of natural behaviour, then meticulously conceal all evidence of that crafting. It’s like being told to create elaborate origami while making it appear you’re just randomly folding paper with no particular outcome in mind.

    The cognitive load is staggering. At any given moment, you must:

    – Continuously monitor which aspects of your personality are currently acceptable for workplace consumption and which must remain carefully locked in the authenticity penalty box

    – Project natural enthusiasm for corporate initiatives that, were you being truly authentic, would prompt reactions ranging from mild bewilderment to launching your laptop out of the nearest window

    – Maintain just enough uniqueness to fulfil the authenticity requirement without becoming the “difficult one” whose authenticity is somehow always causing problems

    – Construct genuine-seeming responses to questions like “What did you think of the CEO’s three-hour vision presentation?” when your authentic response would violate several HR policies

    The performance of non-performance creates a strange existential exhaustion. It’s like being a duck – appearing to glide serenely across the surface while paddling frantically underneath – except you must also strenuously deny the existence of both the paddling and the water while a team of duck performance consultants measures your gliding metrics against quarterly expectations.

    The Final Commodification Frontier

    Hey, enough of the hyperbole (even though I really like doing it). What we need to acknowledge is that we’re witnessing the late-capitalist equivalent of colonising the final unclaimed territory: the self itself.

    Having already commodified your time, attention, skills, and emotional labour, organisations have now found ways to extract value from your very identity.

    Your authentic self is no longer merely who you are – it’s a strategic asset to be leveraged, optimised, and deployed for organisational benefit.

    Your personality quirks are now “potential market differentiators”.

    Your personal history represents “engagement opportunities”.

    Your values are “brand alignment vectors”.

    Your genuine reactions are “content generation opportunities”.

    It’s as though someone read Orwell’s 1984, focused exclusively on the concept of thoughtcrime, and said, “Well thank you for the brilliant idea George, but how can we monetise it?”

    This transformation represents the logical end point of what happens when the “brain-dead but shows one’s working” school of spreadsheet thinking encounters human complexity. When even “being yourself” becomes another checkbox on your performance review – right between “demonstrates proficiency in Excel” and “consistently meets deadlines” – we’ve completed the circle of commodification with a thoroughness that would impress even the most ambitious McKinsey consultant. Bravo from the back rows of BCG and Bain too, I hear.

    A Modest Proposal for a Less Exhausting Existence

    Is there an alternative to this authenticity performance paradox? Perhaps. However, it requires acknowledging some uncomfortable truths about the nature of work in contemporary organisations.

    First, complete authenticity in professional environments is neither possible nor desirable. Work inevitably involves some degree of performance and boundary maintenance. The problem isn’t that we perform at work but that we’ve created the exhausting expectation that performance should appear non-performative. It’s how I imagine Britt Lower felt in Severance trying to be a character who was trying to be a character as an actress who was trying to be a character – I’m getting a headache just thinking about it.

    It’s also rather like insisting that Olympic gymnasts not only complete their routines but also convince judges they’re just naturally bouncing around like that for fun. “She stuck the landing, but I could tell she was deliberately trying to avoid falling, so I’m deducting points for inauthenticity.” Being impacted by gravity, Grace? That’s a two-point deduction on your review…

    Second, genuine improvements in workplace wellbeing come primarily through structural changes rather than psychological reframing. All the authenticity workshops in the world won’t compensate for the fact that you’re expected to do three people’s jobs for one person’s salary while pretending this arrangement fills you with authentic purpose, as opposed to watching your blood pressure rise with the speed of the latest tech bro billionaire rocket into space.

    You can authentic-self your way through a toxic workplace about as effectively as you can positive-think your way through a collapsing building. At some point, structural integrity matters more than your attitude toward falling masonry.

    Third, actual respect for individuals manifests through systems that accommodate human needs rather than those that merely celebrate self-expression within narrowly defined parameters. True respect for authenticity means creating environments where difference is structurally accommodated rather than merely symbolically acknowledged.

    Putting up a “Bring Your Authentic Self to Work” poster in an open-plan office where people can’t focus, can’t have private conversations, and can’t control their basic environmental conditions is like putting a “Just Keep Swimming!” motivational poster in the middle of the Sahara Desert. The sentiment, while admirably chipper, fails to address certain fundamental limitations of the situation.

    The Quiet Dignity of Bounded Authenticity

    Contrarian though I’m often prone to be, there’s something quietly subversive about embracing what we might call “bounded authenticity” – the radical notion that you are under no obligation to perform comprehensive selfhood in environments primarily designed to extract value from your labour.

    Bounded authenticity acknowledges work as a domain where certain aspects of yourself are relevant and others simply aren’t. It recognises that maintaining boundaries between personal and professional identities isn’t some failure of wholeness but a perfectly reasonable adaptation to the reality that your workplace is not, in fact, entitled to the complete, unfiltered you. If you want to bring it, that’s up to you – but you can’t be forced to “be yourself”.

    This approach doesn’t mean becoming an emotionless corporate drone. Rather, it means making tactical decisions about which elements of yourself you choose to bring to professional contexts – not because authenticity is a performance obligation but because selective authenticity is a resource management strategy.

    Think of it as authentic minimalism: bringing exactly the amount of yourself that serves your purposes rather than deploying your complete selfhood in service of organisational theatre.

    The Uncomfortable Conclusion

    The authenticity industrial complex ultimately reveals a profound anxiety at the heart of contemporary work culture – a desperate attempt to reconcile fundamentally dehumanising systems with the human need for meaning and connection. Rather than redesigning those systems, we’ve opted to demand that humans perform humanity more convincingly within inhospitable environments.

    It’s a bit like discovering your fish tank has no water, then addressing the problem by requiring the fish to give enthusiastic presentations about how they’re implementing innovative dry-breathing initiatives rather than, you know, adding water to the tank. “The handbook says ‘water is weakness’, Matthew, so get those fish working with air – it’s all we’ve got.”

    Perhaps it’s time to acknowledge that authentic self-expression emerges naturally in environments designed around human needs.

    It doesn’t require facilitation, measurement, or optimisation.

    It simply appears when people feel genuinely secure, valued, and free from the pressure to perform aspects of themselves that should emerge organically or not at all.

    You know – the basics. Like your manager actually giving a shit about how you feel rather than what your happiness score is today.

    Until then, the next time your organisation invites you to “bring your authentic self to work,” perhaps the most authentic response is a politely raised eyebrow and the quiet recognition that your genuine self is not a corporate resource to be harvested but your own sovereign territory – portions of which you might occasionally lease to your employer under carefully negotiated terms, but never surrender to institutional ownership disguised as psychological liberation.

    After all, there’s something rather magnificently authentic about recognising when authenticity itself has become just another performance metric – and deciding, with quiet dignity, that some aspects of yourself deserve better stages on which to perform than the corporate authenticity theatre.

    This article originally appeared on my personal LinkedIn on April 10th, 2025. The link to the original article is located here: https://www.linkedin.com/pulse/authenticity-industrial-complex-how-being-yourself-matt-turvey-frsa-cqyze/?trackingId=jzhzJ%2BlCRPSJJesIJOhx%2Fg%3D%3D

  • The Uncomfortable Utility of Feeling Like a Fraud: Why Your Imposter Syndrome Might Actually Be Doing You a Solid

    Look, we need to have a deeply uncomfortable conversation about that persistent feeling that you’re somehow faking it whilst everyone else in the room has their proverbial shit neatly packed into labelled containers with colour-coded lids.

    You know the one. That low-grade psychic hum that whispers “they’re going to find you out any minute now” whilst you’re nodding sagely in a meeting about something called “strategic alignment” or “paradigm integration” or whatever linguistic mulch is being served in today’s corporate salad.

    Here’s the thing: we’ve collectively decided that this feeling – this imposter syndrome – is a bug in your psychological operating system rather than a feature. And with that, the internet, that magnificent producer of oversimplified solutions to complex human problems, has a ready prescription: “Just believe in yourself!” Which, as actionable advice, ranks somewhere between “just be happy” and “have you tried not being poor?”

    The Dubious Virtue of Unwavering Certainty

    Let’s consider for a moment the alternative to imposter syndrome. Not confidence – which is entirely compatible with nuanced self-assessment for the seventeen seconds I feel it in a calendar (or financial) year – but the complete absence of doubt. The absolute certainty that one’s knowledge is comprehensive and one’s skills are beyond reproach.

    Now, ask yourself: who are the people you’ve encountered in your professional life who possessed this quality? Who are the colleagues, managers, or public figures who’ve demonstrated unwavering certainty in their own competence?

    I’ll wait.

    If your experience resembles mine in even the slightest degree, you’ve just mentally assembled a rogues’ gallery of the most catastrophically incompetent individuals you’ve ever had the misfortune to share oxygen with. These are the people who’ve piloted projects into mountainsides whilst assuring everyone that turbulence is normal, and that the explosions are in your head.

    Who’ve set institutional money on fire whilst explaining that smoke is just another word for profit.

    Who’ve failed upward with the buoyancy of a helium-filled ego untethered from the gravitational pull of reality.

    The correlation is so consistent it might as well be a physical law: the more certain someone is of their competence, the more aggressively they’ll defend demonstrably terrible ideas when the consequences start arriving with the subtlety of a brick through a tastefully created stained-glass window of your preferred corporate deity.

    The Hidden Operating System of Doubt

    What if – and I’m just spitballing here – your imposter syndrome isn’t a malfunction? What if that nagging sense that you might not know everything necessary for the task at hand is actually your cognitive immune system functioning exactly as intended?

    Consider the alternative. Consider what happens when that system gets compromised.

    We all know That Guy™. The one who read half a Wikipedia article on a complex topic and is now explaining it with the conviction of someone who’s devoted three decades of focused study to the subject.

    The one who confuses having an opinion with having expertise.

    The one who mistakes volume and certainty for insight and accuracy.

    That Guy™ doesn’t have imposter syndrome, and nobody is better off for its absence.

    Your doubt – that uncomfortable, persistent questioning of whether you know enough or can do enough – creates the cognitive space necessary for continued growth. It maintains the gap between what you know and what remains to be learned. It prevents the terminal crystallisation of knowledge that ends in spectacular, confident failure.

    The “Just Believe!” Industrial Complex

    The internet drowns in advice about overcoming imposter syndrome, most of it amounting to some variation of “just believe in yourself harder.” This advice approaches psychological complexity with all the nuance of telling someone with clinical depression to “try smiling more”, or instructing a person like me with chronic pain: “have you considered not hurting?” Yeah, pal – I tried and, guess what, it’s still the same.

    This framing misunderstands both the phenomenon and its function. It assumes that doubt represents a defect rather than a calibration mechanism – one that prevents you from waltzing confidently into situations beyond your current capabilities with the carefree abandon of a toddler approaching an electrical socket with a fork.

    The problem isn’t the existence of doubt but its calibration. Too much, and you’re paralysed into inaction. Too little, and you’re a walking Dunning-Kruger graph with exceptionally poor risk assessment capabilities.

    The goal isn’t to eliminate doubt but to integrate it – to develop a working relationship with uncertainty that allows forward motion without delusion. This represents a far more sophisticated psychological task than “just believing in yourself,” which sounds suspiciously like advice from someone selling motivational posters featuring eagles and sunsets whilst sitting cross-legged in tie-dyed attire.

    The Unexpected Virtue of Knowing What You Don’t Know

    Here’s the deeply unfashionable truth: knowing the limits of your knowledge and capability isn’t a weakness. It’s a metacognitive superpower in a world increasingly dominated by people who mistake confidence for competence.

    This awareness – this persistent questioning of what you know and can do – creates the necessary conditions for actual growth rather than the performance of expertise. It allows you to identify gaps in knowledge or skill before they become catastrophic failures. It enables you to ask questions when others are nodding along to avoid appearing uninformed.

    In a professional landscape increasingly resembling a confidence game in the most literal sense, this capacity becomes not a liability but an asset of considerable value. It allows you to:

    – Learn when others assume they already know

    – Question when others accept uncritically

    – Adapt when others remain rigid in their certainty

    – Grow when others have convinced themselves they’ve arrived

    None of which involves “just believing in yourself” more vigorously – and all of which demonstrate that the ability to “tell it like it is” is worth a great deal more than the seventeenth nod in a room full of groupthink.

    The Arrogance Tax: What Certainty Costs Us

    The social premium placed on unwavering confidence has created environments where the appearance of certainty is rewarded above actual knowledge. This dynamic produces leaders who cannot acknowledge error, systems resistant to correction, and discourse increasingly unmoored from reality.

    We’ve all seen the consequences.

    Financial systems collapse because warning signs were dismissed by those too certain of their models.

    Companies implode because executives couldn’t admit they misunderstood market conditions.

    Political systems falter because leaders cannot acknowledge the complexity of problems facing their constituents.

    In each case, the absence of doubt – that quality we’re all supposed to be striving to eliminate – plays a central role in the eventual catastrophe.

    Meanwhile, those plagued by imposter syndrome are busy double-checking their work, seeking additional information, and considering alternative perspectives – activities that don’t exactly make for compelling LinkedIn humblebrags but tend to prevent spectacularly public failures. As many of my colleagues have commiserated from time to time – there’s a special sort of prestige reserved for the rescue of a red project, and rather less clamour to thank the ones that never went even a yellowing shade of green.

    The Calibration Game: Making Friends with Your Doubt

    The challenge, then, isn’t to eliminate imposter syndrome but to calibrate it – to develop a working relationship with doubt that enables action without delusion.

    What might this actually look like in practice?

    First, it means recognising that confidence and certainty aren’t synonyms. Confidence allows you to act despite incomplete information; certainty precludes the possibility that such information could exist. The former enables progress; the latter ensures eventual collision with reality.

    Second, it involves distinguishing between doubt that prompts continued learning and doubt that prevents necessary action. The former expands capability; the latter constrains it unnecessarily. This distinction isn’t always immediately obvious and requires ongoing attention rather than one-time resolution.

    Third, it demands awareness of context – recognising when the social pressure for unwavering confidence might be pushing you toward certainty that exceeds your actual knowledge or capability. These moments require particular vigilance against the contagious certainty that often pervades professional environments.

    My hope is that I know when to say “I know what I’m talking about” versus “I like the idea of being seen as someone who knows what they are talking about regarding this popular topic”. The value lies in understanding which is which.

    None of this involves affirmations in the mirror or whatever self-help gurus are currently selling as the solution to the “problem” of not being sufficiently certain of your own brilliance. It mostly just requires being honest with ourselves as well as others. Imagine that?

    The Social Dimension: When “Just Believe” Becomes Gaslighting

    It’s worth noting that the experience of imposter syndrome doesn’t fall evenly across social categories. Research consistently demonstrates that women and members of marginalised groups experience imposter syndrome at higher rates – not because of inherent psychological differences but because they face greater scrutiny and more persistent questioning of their capabilities.

    In these contexts, the experience of imposter syndrome cannot be reduced to individual psychology but must be understood as responding to actual social dynamics that impose different standards for different groups. The prescription to “just believe in yourself” becomes particularly hollow when directed at those facing genuine structural barriers to recognition and advancement.

    It’s rather like telling someone they’re imagining the rain whilst refusing to acknowledge they’re the only person at the table who wasn’t given an umbrella. That sort of declaration helps nothing – other than making the privileged look like they’re exploiting the circumstances of those with less than them.

    What we need is to shift the focus from simply making individual psychological adjustments to creating institutional environments that recognise the value of intellectual humility and distribute the burden of proof more equitably across social categories.

    The Integration Project: Making Doubt Your Ally

    The path forward doesn’t involve eliminating doubt but developing a more sophisticated relationship with uncertainty – one that recognises its value while preventing its transformation into paralysis.

    This integration requires moving beyond the binary thinking that frames imposter syndrome as either a weakness to be conquered or a badge of authentic humility to be celebrated. Instead, it suggests that our relationship with certainty about our capabilities requires ongoing calibration – adjusting to new information, different contexts, and evolving demands.

    It positions doubt not as an obstacle to success but as a necessary component of sustainable development – a form of cognitive friction that prevents both stagnation and delusion.

    This nuanced approach stands in stark contrast to the motivational poster simplicity of “just believe in yourself” – a prescription that, again, approaches the complexity of human cognition with all the sophistication of telling someone in a wheelchair to “just stand up and walk” because ambulatory people manage it without difficulty.

    The Uncomfortable Conclusion

    Perhaps, then, we might begin to recognise the uncomfortable utility of imposter syndrome – not as something to be overcome but as something to be integrated into a more balanced and sustainable relationship with our own capabilities and limitations.

    In a world where the most confidently wrong people seem to fail upward with remarkable consistency, perhaps your persistent doubt represents not a defect but a different kind of intelligence – one that acknowledges complexity, remains open to correction, and resists the seductive certainty that precedes catastrophic error.

    The question becomes not how to eliminate doubt but how to engage with it productively – how to maintain the humility necessary for continued growth while developing the confidence required for meaningful action. This balance represents a far more sophisticated psychological achievement than the elimination of doubt, and its development requires nuanced engagement rather than motivational platitudes.

    So the next time someone tells you to “just believe in yourself” as the antidote to imposter syndrome, perhaps consider that they’re prescribing the psychological equivalent of bloodletting – a treatment that misunderstands both the condition and its function, and that might leave you worse off than the original “ailment” ever did. If they persist, maybe suggest trepanning to alleviate the spirits in their head that dreamed up their purported solution, and then debate whether it might be just as effective – or not, as the case may be.

    In reality, your doubt might just be the most valuable thing about you in a world increasingly dominated by those who’ve eliminated it entirely from their psychological repertoire – usually with catastrophic consequences for everyone in their vicinity.

    Sometimes thinking you don’t know everything is far more valuable than trying to convince yourself – and others – that you do.

    This article was originally published on my personal LinkedIn profile on April 3rd, 2025. A direct link back to the original post can be found here – https://www.linkedin.com/pulse/uncomfortable-utility-feeling-like-fraud-why-your-you-turvey-frsa-fjewe/?trackingId=BmKLTjPfT8megNPPovceNA%3D%3D